doi: string (10 characters)
chunk-id: int64 (0–936)
chunk: string (401–2.02k characters)
id: string (12–14 characters)
title: string (8–162 characters)
summary: string (228–1.92k characters)
source: string (31 characters)
authors: string (7–6.97k characters)
categories: string (5–107 characters)
comment: string (4–398 characters)
journal_ref: string (8–194 characters)
primary_category: string (5–17 characters)
published: string (8 characters)
updated: string (8 characters)
references: list
2310.09497
49
Model | Methods | NDCG@10 | #Inf. | Pro. tokens | Gen. tokens | Lat.(s)
Flan-t5-large | heapsort.generate | .670 | 125 | 40461 | 627 | 8
Flan-t5-large | heapsort.likelihood | .670 | 125 | 40458 | - | 5
Flan-t5-large | bubblesort.generate | .678 | 461 | 147774 | 2302 | 29
Flan-t5-large | bubblesort.likelihood | .678 | 461 | 147752 | - | 19
Flan-t5-xl | heapsort.generate | .693 | 130 | 41666 | 647 | 10
Flan-t5-xl | heapsort.likelihood | .693 | 130 | 41667 | - | 6
Flan-t5-xl | bubblesort.generate | .705 | 467 | 149949 | 2335 | 35
Flan-t5-xl | bubblesort.likelihood | .705 | 467 | 149949 | - | 20
Flan-t5-xxl | heapsort.generate | .706 | 130 | 42077 | 651 | 20
Flan-t5-xxl | heapsort.likelihood | .706 | 130 | 42071 | - | 17
Flan-t5-xxl | bubblesort.generate | .711 | 468 | 150765 | 2342 | 73
Flan-t5-xxl | bubblesort.likelihood | .711 | 468 | 150765 | | 60
Table 5: Estimated cost of API calls across different methods, in US dollars. Models ordered from most (top) to least effective (bottom) based on NDCG@10, macro-average across both TREC DL datasets.
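A rough sense of how dollar figures like those in Table 5 can follow from token counts like the ones above: multiply prompt and generated tokens by per-token prices. This is a minimal sketch only; the per-1K-token prices below are hypothetical placeholders, not the pricing used in the paper.

```python
# Illustrative only: estimate a per-query API cost from token counts.
PROMPT_PRICE_PER_1K = 0.0015   # USD per 1K prompt tokens (assumed)
GEN_PRICE_PER_1K = 0.0020      # USD per 1K generated tokens (assumed)

def estimated_cost(prompt_tokens: int, generated_tokens: int) -> float:
    """Return the estimated cost in USD for one reranked query."""
    return (prompt_tokens * PROMPT_PRICE_PER_1K
            + generated_tokens * GEN_PRICE_PER_1K) / 1000

# Example with the Flan-t5-large heapsort.generate counts from the table above.
print(f"${estimated_cost(40461, 627):.4f} per query")
```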
2310.09497#49
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
49
By evaluating verbosity and factual accuracy separately, we better prime the evaluation pipeline for a verbosity metric in the future. Factual accuracy will always remain constant; a response can either be factually correct or incorrect given the data set it is provided. By contrast, verbosity can and should be regulated by the user for their convenience, as is reflected by the feedback received during our qualitative study (see Section 6).
2310.09611#49
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
50
Method | NDCG@10 | TREC DL 2019 | TREC DL 2020
pairwise.heapsort | 0.6800 | $3.40 | $3.39
setwise.bubblesort | 0.6800 | $4.67 | $4.62
pairwise.allpair | 0.6783 | $90.60 | $90.59
listwise.likelihood | 0.6745 | $2.86 | $2.83
setwise.heapsort | 0.6743 | $1.27 | $1.28
pairwise.bubblesort | 0.6550 | $11.89 | $12.28
pointwise.yes_no | 0.6398 | $0.49 | $0.48
listwise.generation | 0.5929 | $3.75 | $3.49
pointwise.qlm | 0.5343 | $0.46 | $0.46
hand, our setwise.heapsort provides a reduction of ≈ 62% in cost by only marginally reducing NDCG@10 (a 0.8% loss).
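The ≈62% and 0.8% figures can be checked against the table values directly, assuming the comparison baseline is pairwise.heapsort, the most effective method listed. A quick arithmetic check:

```python
# Verify the claimed cost reduction and NDCG@10 loss of setwise.heapsort,
# assuming pairwise.heapsort is the baseline (TREC DL 2019 cost column).
baseline_cost, setwise_cost = 3.40, 1.27
baseline_ndcg, setwise_ndcg = 0.6800, 0.6743

cost_reduction = (baseline_cost - setwise_cost) / baseline_cost
ndcg_loss = (baseline_ndcg - setwise_ndcg) / baseline_ndcg
print(f"cost reduction: {cost_reduction:.1%}")  # ~62.6%
print(f"NDCG@10 loss:   {ndcg_loss:.1%}")       # ~0.8%
```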
2310.09497#50
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
50
[Figure: stacked bar chart of responses (0–100%; Strongly Disagree, Disagree, Neutral, Agree, Strongly Agree) to the ten System Usability Scale statements, e.g. "I think that I would like to use this system frequently", "I found the system unnecessarily complex", "I thought the system was easy to use", "I felt very confident using the system".] Figure 7: System Usability Scale Survey
2310.09611#50
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
51
hand, our setwise.heapsort provides a reduction of ≈ 62% in cost by only marginally reducing NDCG@10 (a 0.8% loss). 5.3 Impact of using Output Logits on Setwise Similar to Pairwise methods, if the model output logits are accessible, our Setwise approaches can also utilize these logits to estimate the likelihood of the most relevant document label. This approach eliminates the need for token generation, requiring only a single LLM forward inference to yield the output results, thus offering a more efficient process. To assess the impact of incorporating model output logits in our Setwise approaches, we conducted experiments on the TREC DL 2019 dataset, with results presented in Table 4. The findings indicate that using model logits resulted in no change in ranking effectiveness, but did lead to lower query latency. This improvement stems from the absence of generated tokens for likelihood estimation. Hence, we conclude that if access to the model output is available, employing likelihood can further enhance the efficiency for our Setwise approach. 5.4 Effectiveness and Efficiency Trade-offs Our Setwise prompting is characterized by a hyperparameter 𝑐 controlling the number of compared documents within the prompt for each step in the sorting algorithms. In the previous experiments, we always set 𝑐 = 3. Adjusting this hyperparameter allows one
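To make the likelihood variant concrete, below is a minimal sketch (not the authors' released code) of selecting the most relevant passage from a set via output logits: a single forward pass scores the single-token labels of the candidate passages instead of generating text. The prompt wording and the Flan-T5 checkpoint are illustrative assumptions.

```python
import torch
from transformers import T5ForConditionalGeneration, T5Tokenizer

tok = T5Tokenizer.from_pretrained("google/flan-t5-large")
model = T5ForConditionalGeneration.from_pretrained("google/flan-t5-large").eval()

def setwise_pick(query: str, passages: list[str]) -> int:
    """Return the index of the passage the model judges most relevant,
    using the logits of the first decoded token rather than generation."""
    labels = ["A", "B", "C", "D", "E"][: len(passages)]
    prompt = (f"Given the query '{query}', which of the following passages "
              "is the most relevant?\n"
              + "\n".join(f"Passage {l}: {p}" for l, p in zip(labels, passages))
              + "\nAnswer with the passage label only.")
    inputs = tok(prompt, return_tensors="pt", truncation=True, max_length=512)
    start = torch.tensor([[model.config.decoder_start_token_id]])
    with torch.no_grad():  # one forward pass, no token generation
        logits = model(**inputs, decoder_input_ids=start).logits[0, 0]
    label_ids = [tok(l, add_special_tokens=False).input_ids[0] for l in labels]
    return int(torch.argmax(logits[label_ids]))
```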
2310.09497#51
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
51
Figure 7: System Usability Scale Survey # 6 EVALUATION: USER STUDY WITH BLIND PEOPLE During the development process, we engaged with a blind participant who had prior experience using a screen reader on a daily basis. This participant, as a design partner, provided feedback at two intermediate stages of development. In addition to this intermediate prototype evaluation, we conducted a formal usability study with six additional blind/low-vision individuals. Our evaluation deemed 69.4% or 539 of the 777 questions to be “Correct”. Of particular interest is VizAbility’s ability to avoid hallucinations. For instance, VizAbility responded “The variables you mentioned are not provided in the dataset” to the query, “What is the date of this data?”. Framed in the context of the ground truth, “Data pertaining to this question is not provided”, GPT (operating under the binary scale) evaluated the response as “Correct”. Many user questions comprising the testing set were ambiguous or referenced variables not found within the respective data sets (as can be witnessed in the example above). This is a natural consequence of emphasizing self-guided exploration. Users will tend to push the boundaries of our model in terms of what questions it can comprehend; therefore, it is crucial that we incorporate a pipeline to avoid any potential hallucinations.
2310.09611#51
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
52
to further enhance efficiency by incorporating more compared documents into the prompt, thereby reducing the number of LLM inference calls. However, we acknowledge that there is an input length limitation to LLMs (in our experiments this is 512 prompt tokens) and setting 𝑐 to a large value may require more aggressive document truncation, likely impacting effectiveness.
2310.09497#52
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
52
5.4 Comparisons to an existing system We also sought to frame our evaluation in the context of similar external systems, one such being an automatic chart question-answering pipeline that generates visual explanations describing how the answer was obtained [31]. In the evaluation of the system with our data set from blind people, the model reported an overall factual accuracy rate of 16% [32]. It is important to note that this model has a limited number of compatible chart types, with it only supporting bar and line charts. Seeking to maintain consistency between the two models, we extracted data solely from the bar and line charts for a more fitting comparison. When narrowing the scope to these two types of visual stimuli, VizAbility reports 68% accuracy in outputting “Correct” responses (based on the binary scale), signifying a significant improvement in user query handling. 6.1 Participants We recruited six blind/low-vision individuals from the National Institute of the Blind. Their demographics are shown in Table 2. We tried to recruit diverse participants based on their gender and screen reader expertise.
2310.09611#52
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
53
To investigate the trade-off between effectiveness and efficiency inherent in our Setwise approach, we set 𝑐 = 3, 5, 7, 9 while truncating the documents in the prompt to 128, 85, 60, 45 tokens², respectively. The NDCG@10, along with query latency for all models while varying 𝑐, is visualized in Figure 3a for the TREC DL datasets. As expected, larger 𝑐 reduces query latency but often degrades effectiveness. Notably, the heap sort algorithm consistently proves more efficient than bubble sort. For instance, with Flan-t5-xl and 𝑐 = 9, heap sort achieves strong NDCG@10 with a query latency of ≈3 seconds. When compared to the other methods outlined in Table 2, this represents the lowest query latency, except for the Pointwise approaches with Flan-t5-large, albeit with superior ranking effectiveness. It’s worth noting that the ranking effectiveness decline with larger 𝑐 values could also be attributed to the increased truncation of passages. LLMs with extended input length capacity might potentially yield improved ranking effectiveness for larger 𝑐. This area warrants further exploration in future studies.
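As a quick sanity check on these settings, the tokens consumed by the compared passages alone stay under the 512-token prompt limit for every (𝑐, truncation) pair, leaving the remainder for the query and the instruction text:

```python
# Passage-token budget for each (c, truncation) setting reported above,
# against the 512-token prompt limit used in the experiments.
LIMIT = 512
for c, doc_len in [(3, 128), (5, 85), (7, 60), (9, 45)]:
    used = c * doc_len
    print(f"c={c}: {used} passage tokens, {LIMIT - used} left for query + instructions")
# c=3: 384 passage tokens, 128 left for query + instructions  (and so on)
```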
2310.09497#53
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
53
6.2 Procedure Upon entering the session, participants opened up our system in a web browser and chose a chart of their choice among the four options: line chart, bar chart, scatterplot, or choropleth map. The study was divided into three parts: the first two focused on the individual components of our multimodal approach—the keyboard-navigable tree view and the conversational module. Each was evaluated in a standalone setting. The final part centered on their combined functionality to assess the potential advantages of their collaborative operation. In the beginning, we refrained from providing any external guidance so that the participants’ experiences could better imitate those of a real-world situation. 6.3 Behavioral Observations Here, we detail participants’ actions and feedback while using VizAbility during the study sessions. 6.3.1 Navigating the tree view. Participants were able to utilize the tree view using arrow keys and tab shortcuts as reported in prior studies [33, 55], although the learning curve proved to be slightly steeper for P2 and P5. P5 remarked on the “cumbersome” structure of the Tree for the Bar Chart, noting that it was due to the presence of over 170 unique data values. Rather than tediously
2310.09611#53
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
54
Similarly, the Listwise approaches balance effectiveness and efficiency through the adjustment of the repetition count 𝑟 for the sliding window. In our prior experiments, we consistently set 𝑟 = 5 to ensure that at least 10 of the most relevant documents can be brought to the top. In Figure 3b, we investigate the influence of varying 𝑟 on Listwise approaches. Latency exhibits a linear relationship with 𝑟, which aligns with expectations. A larger value of 𝑟 can enhance the effectiveness of listwise.generate, and beyond 𝑟 > 5, the improvement levels off. Conversely, the listwise.likelihood approach, which leverages our Setwise prompting, showcases notably higher effectiveness and efficiency. Even with a small value of 𝑟, the performance of listwise.likelihood exceeds that of listwise.generate, with the highest performance achieved around 𝑟 = 5.
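For reference, the sliding-window procedure being repeated here can be sketched as below; `rank_window` stands in for one listwise LLM call that reorders a window of documents, and the window size and step values are illustrative placeholders rather than the paper's settings.

```python
def sliding_window_rerank(docs, rank_window, r=5, window_size=4, step=2):
    """Repeat a bottom-to-top sliding-window pass r times.

    Each pass slides a window up the list and lets the LLM reorder it,
    so latency grows roughly linearly with r, as observed in Figure 3b.
    """
    docs = list(docs)
    for _ in range(r):
        start = max(0, len(docs) - window_size)
        while True:
            docs[start:start + window_size] = rank_window(docs[start:start + window_size])
            if start == 0:
                break
            start = max(0, start - step)
    return docs
```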
2310.09497#54
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
54
[Table 2 — columns: PID, Gender, Age, Vision Level, Screen Reader Expertise, Screen Reader Type, Chart Selected. Six participants (P1–P6): three male, three female; ages from 25-34 to 65 or older; blind since birth or blind with later onset; intermediate, advanced, or expert screen reader expertise; screen readers JAWS, VoiceOver, NVDA; charts selected: bar chart, line chart, scatterplot, choropleth map.] Table 2: Participant Information Distribution.
2310.09611#54
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
55
5.5 Sensitivity to the Initial Ranking The ranking effectiveness of the original Listwise and Pairwise methods is influenced by the initial ranking order [18, 20]. To investigate this aspect in relation to our approach, we consider different orderings of the initial BM25 list; specifically, 1) initial BM25 ranking; 2) inverted BM25 ranking; and 3) random shuffled BM25 ranking. Each of these initial rankings was used to test different reranking methods using Flan-t5-large. The results are presented in Figure 4. Different initial ranking orders negatively impact listwise.generate, pairwise.heapsort and pairwise.bubblesort; pairwise.heapsort is the most robust method. These findings align with the literature [18, 20]. In contrast, Setwise prompting is far more robust to variations in the initial ranking order. Both listwise.likelihood and setwise.bubblesort exhibit large improvements over listwise.generate and pairwise.bubblesort, in the case of the inverted BM25 ranking and randomly shuffled BM25 ranking. Moreover, they demonstrate a similar level of robustness to pairwise.heapsort. This leads us to the conclusion that our ²This reduction in document length is necessary to ensure prompt size is not exceeded.
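A minimal sketch of how the three initial orderings used in this robustness test can be produced from a BM25 run; the seed is an arbitrary illustrative choice.

```python
import random

def initial_orderings(bm25_ranking, seed=42):
    """Return the original, inverted, and randomly shuffled BM25 orderings."""
    shuffled = list(bm25_ranking)
    random.Random(seed).shuffle(shuffled)
    return {
        "bm25": list(bm25_ranking),
        "inverted": list(reversed(bm25_ranking)),
        "shuffled": shuffled,
    }
```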
2310.09497#55
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
55
Table 2: Participant Information Distribution. navigating through the data using the down arrow key, P5 wished for a more efficient method to move between specific nodes within the tree view. P2 echoed this sentiment, highlighting the risk of disorientation, particularly with larger and more intricate data sets. Several participants (P1, P3, P4, P5, P6) independently recognized the distinctive structure of the tree view, which presents a data set through visual encoding variables. For example, P5, after navigating a choropleth map and expressing frustration over manually sifting through 172 countries without an apparent order, was pleasantly surprised when using the right arrow key led him to the same data set, this time organized by vaccination rates in 10 percent increments. This participant then confirmed that the tree view was more effective in conveying a visualization’s structure compared to a traditional data table. was able to deduce that the color ‘orange-red’ indicates positive temperature values.
2310.09611#55
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
56
²This reduction in document length is necessary to ensure prompt size is not exceeded. Figure 4: Sensitivity to the initial ranking. We use Flan-t5-large and 𝑐 = 4 for the Setwise approach. (a) TREC DL 2019; (b) TREC DL 2020. Setwise prompting approach substantially enhances the zero-shot re-ranking with LLMs in relation to the initial ranking. 6 CONCLUSION We undertook a comprehensive study of existing LLM-based zero-shot document ranking methods, employing strict and consistent experimental conditions. Our primary emphasis was on evaluating both their ranking effectiveness and their efficiency in terms of computational efficiency and runtime latency — factors that are often disregarded in previous studies. Our findings unveil some unforeseen insights and effectiveness-efficiency trade-offs between different methods. This information equips practitioners with valuable guidance when selecting the most appropriate method for their specific applications.
2310.09497#56
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
56
was able to deduce that the color ‘orange-red’ indicates positive temperature values. We also observed an affinity for contextual queries among the participant pool. One user (P4) who had little to no experience with map visualizations prior to the study asked: “What is a choropleth map?”, to which the LLM outputted a correct response. However, when the same participant asked, “What is a temporal polarity” (pertaining to the bar chart), the LLM responded with a definition tied to linguistics. Although initially taken aback, the user acknowledged the possible ambiguities with the word “temporal polarity” (which has multiple meanings), and upon rephrasing her query to incorporate more precision, received a more accurate response. The participant attributed her realization to VizAbility’s justification (outputted alongside the response), which explicitly told her that it sourced its answer from the internet.
2310.09611#56
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
57
To further boost efficiency of LLM-based zero-shot document ranking, we introduced an innovative Setwise prompting strategy. Setwise has the potential to enhance both effectiveness and efficiency for Listwise approaches provided the model logits are accessible. Setwise also notably enhances the efficiency of sorting-based Pairwise approaches. Furthermore, Setwise prompting offers a straightforward way to balance effectiveness and efficiency by incorporating more documents for comparison in the prompt. Additionally, approaches equipped with Setwise prompting demonstrated strong robustness to variation in the initial retrieval set used for reranking. Future work should focus on evaluating the Setwise prompting approach on a wider array of LLMs, including LLaMA models [22, 23] as well as the OpenAI LLM APIs. Additionally, recent advanced self-supervised prompt learning techniques [6, 27] could be used to refine the Setwise approach. We make our code and results publicly available at https://github.com/ielab/llm-rankers. [5] Nick Craswell, Bhaskar Mitra, Emine Yilmaz, Daniel Campos, and Ellen M Voorhees. 2020. Overview of the TREC 2019 deep learning track. arXiv preprint arXiv:2003.07820 (2020).
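As an illustration of how set comparisons speed up a sorting-based method, here is a rough sketch (not the released implementation) of a top-k bubble sort in which each step compares 𝑐 documents at once through a setwise call such as the `setwise_pick` sketch earlier, so one pass needs roughly n/(c-1) LLM calls instead of n pairwise calls.

```python
def setwise_bubblesort(docs, pick_best, k=10, c=3):
    """Rough sketch of a Setwise top-k bubble sort.

    `pick_best(window)` stands for one setwise LLM call returning the index
    of the most relevant document in `window` (at most c documents).
    """
    docs = list(docs)
    for top in range(min(k, len(docs))):
        i = max(top, len(docs) - c)
        while True:
            window = docs[i:i + c]
            best = i + pick_best(window)
            # bubble the most relevant document to the front of the window
            docs[i], docs[best] = docs[best], docs[i]
            if i == top:
                break
            i = max(top, i - (c - 1))  # consecutive windows overlap by one document
    return docs
```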
2310.09497#57
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
57
After having used their keyboard to navigate through the tree view, participants were asked to describe the visual stimuli to the best of their capabilities. Responses were mixed, with two participants (P3 and P4) only being able to identify the two variables that were being compared. This suggests that despite being a good overall indicator of chart structure, the Olli Tree alone is not sufficient for complete data visualization. This was reaffirmed by the usefulness rating most individuals attributed to the system, with the average hovering around a 3 out of 5.
2310.09611#57
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
58
[6] Chrisantha Fernando, Dylan Banarse, Henryk Michalewski, Simon Osindero, and Tim Rocktäschel. 2023. Promptbreeder: Self-Referential Self-Improvement Via Prompt Evolution. arXiv preprint arXiv:2309.16797 (2023). [7] Lukas Gienapp, Maik Fröbe, Matthias Hagen, and Martin Potthast. 2022. Sparse Pairwise Re-Ranking with Pre-Trained Transformers. In Proceedings of the 2022 ACM SIGIR International Conference on Theory of Information Retrieval (Madrid, Spain) (ICTIR ’22). Association for Computing Machinery, New York, NY, USA, 72–80. https://doi.org/10.1145/3539813.3545140 [8] Donald Ervin Knuth. 1997. The art of computer programming. Vol. 3. Pearson Education. [9] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners. Advances in neural information processing systems 35 (2022), 22199–22213.
2310.09497#58
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
58
6.3.2 Exploring the conversational module. Although four participants (P1, P2, P3, P5) gravitated towards the text input modality, all affirmed the importance of retaining an option for voice input as well. All but one participant (P1, P2, P3, P4, P5) immediately asked data-driven questions (either simple fetches for data, like “What is the vaccination percentage for Haiti” or more complex queries involving multiple steps), with P6 instead asking a contextual question: “Is there a way to rank the various countries into continents?” (regarding the choropleth map). This coincided with subsequent participant ratings for the usefulness of the four query types, with all users asserting “Analytical Queries” as the most useful for chart comprehension. Most users (P1, P2, P3, P5) could not fathom the possibility that more broad questions were supported.
2310.09611#58
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
59
[10] Percy Liang, Rishi Bommasani, Tony Lee, Dimitris Tsipras, Dilara Soylu, Michihiro Yasunaga, Yian Zhang, Deepak Narayanan, Yuhuai Wu, Ananya Kumar, et al. 2022. Holistic evaluation of language models. arXiv preprint arXiv:2211.09110 (2022). [11] Jimmy Lin, Xueguang Ma, Sheng-Chieh Lin, Jheng-Hong Yang, Ronak Pradeep, and Rodrigo Nogueira. 2021. Pyserini: A Python Toolkit for Reproducible Information Retrieval Research with Sparse and Dense Representations. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval (Virtual Event, Canada) (SIGIR ’21). Association for Computing Machinery, New York, NY, USA, 2356–2362. https://doi.org/10.1145/3404835.3463238 [12] Xueguang Ma, Xinyu Zhang, Ronak Pradeep, and Jimmy Lin. 2023. Zero-Shot Listwise Document Reranking with a Large Language Model. arXiv preprint arXiv:2305.02156 (2023).
2310.09497#59
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
59
Following this independent exploration of the conversational model, participants were made aware of the four distinct types of queries and were once again directed to input their own questions; however, this time around, they had to broadly adhere to one of the 4 query classifications. Users demonstrated a greater proficiency with the conversational module during this guided exploration, with P1 even chaining multiple individual queries to arrive at a broader understanding of the chart. By consecutively asking “What is the temperature for 2020?” and “What color is 2020?”, the participant 6.3.3 Integrating the two components. Participants were then introduced to navigation queries. We explained the purpose of these queries, emphasizing their role in wayfinding and orientation, and then allowed them to formulate their own navigation queries. All users concurred that these queries were essential for understanding the tree view, a sentiment echoed in the usefulness ratings they assigned to the integrated system. While previous ratings averaged around 3, after this introduction, participants consistently rated the system between 4 and 5, with 5 being extremely useful.
2310.09611#59
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
60
[13] Aliaksei Mikhailiuk, Clifford Wilmot, Maria Perez-Ortiz, Dingcheng Yue, and Rafal Mantiuk. 2021. Active Sampling for Pairwise Comparisons via Approximate Message Passing and Information Gain Maximization. In 2020 IEEE International Conference on Pattern Recognition (ICPR). [14] Rodrigo Nogueira, Zhiying Jiang, Ronak Pradeep, and Jimmy Lin. 2020. Document Ranking with a Pretrained Sequence-to-Sequence Model. In Findings of the Association for Computational Linguistics: EMNLP 2020. 708–718. [15] Jay M Ponte and W Bruce Croft. 2017. A language modeling approach to information retrieval. In ACM SIGIR Forum, Vol. 51. ACM New York, NY, USA, 202–208. [16] Ronak Pradeep, Rodrigo Nogueira, and Jimmy Lin. 2021. The expando-mono-duo design pattern for text ranking with pretrained sequence-to-sequence models. arXiv preprint arXiv:2101.05667 (2021).
2310.09497#60
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
60
Most participants tended to input short and concise navigation queries. Rather than inputting “How do I get from my current location to the percentage vaccinated value for Guam”, one user (P5) opted for the much simpler “Take me to Guam”. Showcasing its conversational strengths, our model was able to precisely identify the starting as well as ending nodes from this colloquial text, yielding the instructions: “Press the right arrow key. Press the down arrow key. Press the down arrow key.” 6.4 User Feedback and Reflection Participants completed a post-study questionnaire based on the System Usability Scale (see Figure 7). Notably, most participants (4 Agree; 1 Strongly Agree; 1 Disagree) concurred with the statement: “I found the various functions in this system were well integrated.” Results can be found in Figure 7. Participants also valued VizAbility’s commitment to accessibility and transparency, especially within the conversational module. They envisioned real-world applications for VizAbility, relating it to their personal experiences. For instance, P1 saw its potential in providing testing accommodations for GRE exams, noting its superiority over human proctors
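To make the wayfinding behavior described in this chunk more concrete, the following is a minimal, hypothetical sketch (not VizAbility's published implementation) of how a resolved navigation query's start and target nodes could be turned into a sequence of arrow-key instructions over a tree view. The key bindings (Left = parent, Right = first child, Down = next sibling) and every name in the code are assumptions made for illustration only.

# Hypothetical sketch, NOT VizAbility's actual code: convert a start/target node pair
# into arrow-key instructions, assuming Left moves to the parent, Right enters the
# first child, and Down moves to the next sibling.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TreeNode:
    label: str
    parent: Optional["TreeNode"] = None
    children: List["TreeNode"] = field(default_factory=list)

    def add(self, label: str) -> "TreeNode":
        child = TreeNode(label, parent=self)
        self.children.append(child)
        return child

def path_from_root(node: TreeNode) -> List[TreeNode]:
    # Collect ancestors, then reverse so the list runs root -> node.
    path = []
    while node is not None:
        path.append(node)
        node = node.parent
    return list(reversed(path))

def keypress_instructions(start: TreeNode, target: TreeNode) -> List[str]:
    start_path, target_path = path_from_root(start), path_from_root(target)
    # Length of the shared prefix (at least the root when both nodes share a tree).
    i = 0
    while i < min(len(start_path), len(target_path)) and start_path[i] is target_path[i]:
        i += 1
    # Climb to the common ancestor, then descend level by level toward the target.
    keys = ["Press the left arrow key."] * (len(start_path) - i)
    for node in target_path[i:]:
        keys.append("Press the right arrow key.")  # enter the first child of this level
        keys.extend(["Press the down arrow key."] * node.parent.children.index(node))
    return keys

# Example loosely mirroring "Take me to Guam": one Right and two Downs reach the
# third top-level category from the chart root.
root = TreeNode("chart")
for name in ["American Samoa", "Federated States of Micronesia", "Guam"]:
    root.add(name)
print(keypress_instructions(root, root.children[2]))
# -> ['Press the right arrow key.', 'Press the down arrow key.', 'Press the down arrow key.']

This sketch only illustrates the kind of tree wayfinding the quoted instructions imply; the real system derives the start node from the user's current cursor position and the target node from the LLM's interpretation of the colloquial query.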
2310.09611#60
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
61
[17] Ronak Pradeep, Sahel Sharifymoghaddam, and Jimmy Lin. 2023. RankVicuna: Zero-Shot Listwise Document Reranking with Open-Source Large Language Models. arXiv preprint arXiv:2309.15088 (2023). [18] Zhen Qin, Rolf Jagerman, Kai Hui, Honglei Zhuang, Junru Wu, Jiaming Shen, Tianqi Liu, Jialu Liu, Donald Metzler, Xuanhui Wang, et al. 2023. Large language models are effective text rankers with pairwise ranking prompting. arXiv preprint arXiv:2306.17563 (2023). [19] Devendra Sachan, Mike Lewis, Mandar Joshi, Armen Aghajanyan, Wen-tau Yih, Joelle Pineau, and Luke Zettlemoyer. 2022. Improving Passage Retrieval with Zero-Shot Question Generation. In Proceedings of the 2022 Conference on Empirical Methods in Natural Language Processing. Association for Computational Linguistics, Abu Dhabi, United Arab Emirates, 3781–3797. https://doi.org/10.18653/v1/2022.emnlp-main.249
2310.09497#61
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
61
in translating visual graphs. P6, who teaches the NVDA screen reader to the BLV community, expressed interest in incorporating the system into his lessons. However, there was also constructive feedback. Although most participants deemed the structure of navigation query responses (a sequence of directions) to be satisfactory, P2 advised that the system should automatically transport the user’s cursor to the desired location, as opposed to currently requiring the user to manually traverse the tree view themselves. One participant (P5) sought more control over the nature of LLM responses outputted by the conversational model. He brought up the necessity of having some implementation of a dial to regulate the verboseness of the outputted answers. The same user who commented on the cumbersome structure of the tree view (P5) further elaborated that he would prefer a more concise raw data table in its place, especially for less extensive datasets.
2310.09611#61
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
62
[20] Weiwei Sun, Lingyong Yan, Xinyu Ma, Pengjie Ren, Dawei Yin, and Zhaochun Ren. 2023. Is ChatGPT Good at Search? Investigating Large Language Models as Re-Ranking Agent. arXiv preprint arXiv:2304.09542 (2023). [21] Nandan Thakur, Nils Reimers, Andreas Rücklé, Abhishek Srivastava, and Iryna Gurevych. 2021. BEIR: A Heterogeneous Benchmark for Zero-shot Evaluation of Information Retrieval Models. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 2). REFERENCES [1] Monica Agrawal, Stefan Hegselmann, Hunter Lang, Yoon Kim, and David Sontag. 2022. Large language models are zero-shot clinical information extractors. arXiv preprint arXiv:2205.12689 (2022).
2310.09497#62
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
62
7 DISCUSSION & FUTURE WORK Our evaluation studies underscore the potential of VizAbility and also pinpoint areas for enhancement. We reflect on the limitations and challenges, paving the way for future opportunities. relevant follow-up questions after an initial query could further enhance efficient chart exploration. Our quantitative study results indicate room for improvement as well. Areas of enhancement encompass a more accurate understanding of the user’s context when drawing upon external knowledge, discerning unanswerable questions, as well as refining the accuracy of analytical and visual queries. While the conversational module may not fully decipher the inherent ambiguities of natural languages, our commitment to crafting safe and explanatory responses enabled participants to readily rectify errors.
2310.09611#62
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
63
[2] Tom Brown, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D Kaplan, Prafulla Dhariwal, Arvind Neelakantan, Pranav Shyam, Girish Sastry, Amanda Askell, et al. 2020. Language models are few-shot learners. Advances in neural information processing systems 33 (2020), 1877–1901. [3] Aakanksha Chowdhery, Sharan Narang, Jacob Devlin, Maarten Bosma, Gaurav Mishra, Adam Roberts, Paul Barham, Hyung Won Chung, Charles Sutton, Sebastian Gehrmann, et al. 2022. Palm: Scaling language modeling with pathways. arXiv preprint arXiv:2204.02311 (2022). [22] Hugo Touvron, Thibaut Lavril, Gautier Izacard, Xavier Martinet, Marie-Anne Lachaux, Timothée Lacroix, Baptiste Rozière, Naman Goyal, Eric Hambro, Faisal Azhar, et al. 2023. Llama: Open and efficient foundation language models. arXiv preprint arXiv:2302.13971 (2023).
2310.09497#63
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
63
7.2 Need for Rigorous Benchmark Testing The cornerstone of our project is the conversational module, designed to address the inherent limitations of keyboard navigation. While the existing dataset enabled a meaningful evaluation of response quality based on real-world queries, our study revealed the need for a more extensive benchmarking dataset. Our evaluation was constrained not only by the four chart types but also by the limited range of questions, preventing a full assessment of VizAbility’s capabilities. Specifically, we need to evaluate situational questions focused on a user’s current point of interest within the tree view. Moreover, questions that hinge on understanding prior conversational context were not explored in this study. Given the generative capabilities of LLMs, synthetically generating these additional questions could be a viable approach. 7.1 Limitations and Opportunities The user study yielded actionable insights to enhance VizAbility, leading to several post-study modifications. For example, we added data tables as an alternative to the tree view and introduced a direct navigation option to the target node.
2310.09611#63
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
64
[23] Hugo Touvron, Louis Martin, Kevin Stone, Peter Albert, Amjad Almahairi, Yasmine Babaei, Nikolay Bashlykov, Soumya Batra, Prajjwal Bhargava, Shruti Bhosale, et al. 2023. Llama 2: Open foundation and fine-tuned chat models. arXiv preprint arXiv:2307.09288 (2023). [24] Ashish Vaswani, Noam Shazeer, Niki Parmar, Jakob Uszkoreit, Llion Jones, Aidan N. Gomez, Łukasz Kaiser, and Illia Polosukhin. 2017. Attention is All You Need. In Proceedings of the 31st International Conference on Neural Information Processing Systems (Long Beach, California, USA) (NIPS’17). Curran Associates Inc., Red Hook, NY, USA, 6000–6010. [4] Nick Craswell, Bhaskar Mitra, Emine Yilmaz, and Daniel Campos. 2021. Overview of the TREC 2020 deep learning track. arXiv preprint arXiv:2102.07662 (2021).
2310.09497#64
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
64
Despite our initial aim to offer concise and informative answers, P5’s recommendation for user-adjustable response verbosity underscored the importance of user agency over designer-imposed settings. Given that speech is processed serially, the text length read by screen readers becomes a pivotal design consideration. This concern has been reiterated in prior research [7, 9, 27, 55]. Similarly, offering users the capability to customize node descriptions in the tree view could prove advantageous. Our quantitative study result shows that there is still an opportunity to improve. These include more accurately understanding the user situation when eliciting contextual knowledge, when to know which question is not answerable, in addition to improving the accuracy of analytical and visual queries. Although the conversational module is not perfect in figuring out the ambiguous nature of natural languages, our efforts to make responses safe and explanatory still allowed participants to easily recover from mistakes. Participants primarily attempted data queries when no guidance was provided, indicating difficulty in figuring out all four types of queries. This underscores the need for help to bridge the gap in execution. Likewise, one participant (P2) also highlighted the potential benefit of help documentation. Instead of merely offering passive documentation, integrating a real-time help function could be more effective. For example, when a user’s cursor lands on a category, the system could convey tooltip-like info suggesting possible questions about the current selection. Additionally, suggesting
2310.09611#64
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
65
[25] Shuai Wang, Harrisen Scells, Bevan Koopman, and Guido Zuccon. 2023. Can ChatGPT Write a Good Boolean Query for Systematic Review Literature Search?. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (Taipei, Taiwan) (SIGIR ’23). Association for Computing Machinery, New York, NY, USA, 1426–1436. https://doi.org/10.1145/3539618.3591703 [26] Jason Wei, Maarten Bosma, Vincent Zhao, Kelvin Guu, Adams Wei Yu, Brian Lester, Nan Du, Andrew M Dai, and Quoc V Le. 2021. Finetuned Language Models are Zero-Shot Learners. In International Conference on Learning Representations. [27] Chengrun Yang, Xuezhi Wang, Yifeng Lu, Hanxiao Liu, Quoc V Le, Denny Zhou, and Xinyun Chen. 2023. Large language models as optimizers. arXiv preprint arXiv:2309.03409 (2023).
2310.09497#65
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
65
In our study, we compared our system exclusively with another that also assumes the availability of chart specifications, emphasizing reasoning over image understanding. While recent vision-based question-answering systems like ChartQA [25] are noteworthy, public chatbots like Bing and Bard have also started supporting image inputs. Although these systems are still in the early stages of understanding synthetic images, such as graphic designs and data visualizations, beyond natural scenes [11], a comparison with VizAbility could be insightful. A balanced evaluation approach might involve using an independent image parser to feed data into VizAbility, thereby concentrating on reasoning capabilities. Additionally, to refine VizAbility, we plan to explore various prompting strategies, such as further leveraging user locality information or adjusting the number of examples in query classification.
2310.09611#65
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09497
66
arXiv:2309.03409 (2023). [28] Shengyao Zhuang, Hang Li, and Guido Zuccon. 2021. Deep query likelihood model for information retrieval. In Advances in Information Retrieval: 43rd European Conference on IR Research, ECIR 2021, Virtual Event, March 28–April 1, 2021, Proceedings, Part II 43. Springer, 463–470. [29] Shengyao Zhuang and Guido Zuccon. 2021. TILDE: Term independent likelihood moDEl for passage re-ranking. In Proceedings of the 44th International ACM SIGIR Conference on Research and Development in Information Retrieval. 1483–1492.
2310.09497#66
A Setwise Approach for Effective and Highly Efficient Zero-shot Ranking with Large Language Models
Large Language Models (LLMs) demonstrate impressive effectiveness in zero-shot document ranking tasks. Pointwise, Pairwise, and Listwise prompting approaches have been proposed for LLM-based zero-shot ranking. Our study begins by thoroughly evaluating these existing approaches within a consistent experimental framework, considering factors like model size, token consumption, latency, among others. This first-of-its-kind comparative evaluation of these approaches allows us to identify the trade-offs between effectiveness and efficiency inherent in each approach. We find that while Pointwise approaches score high on efficiency, they suffer from poor effectiveness. Conversely, Pairwise approaches demonstrate superior effectiveness but incur high computational overhead. To further enhance the efficiency of LLM-based zero-shot ranking, we propose a novel Setwise prompting approach. Our approach reduces the number of LLM inferences and the amount of prompt token consumption during the ranking procedure, significantly improving the efficiency of LLM-based zero-shot ranking. We test our method using the TREC DL datasets and the BEIR zero-shot document ranking benchmark. The empirical results indicate that our approach considerably reduces computational costs while also retaining high zero-shot ranking effectiveness.
http://arxiv.org/pdf/2310.09497
Shengyao Zhuang, Honglei Zhuang, Bevan Koopman, Guido Zuccon
cs.IR, cs.AI
9 pages
null
cs.IR
20231014
20231014
[ { "id": "2302.13971" }, { "id": "2304.09542" }, { "id": "2204.02311" }, { "id": "2309.03409" }, { "id": "2101.05667" }, { "id": "2309.16797" }, { "id": "2102.07662" }, { "id": "2211.09110" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2307.09288" }, { "id": "2003.07820" }, { "id": "2205.12689" } ]
2310.09611
66
7.3 Integrating into Existing Visualization Tools Since VizAbility operates under the assumption that a chart specification is available, it may not be directly applicable to charts currently found on the web. Instead, our vision is to integrate VizAbility within existing data visualization platforms. Prior research underscores that many data visualization practitioners base their choices on the accessibility features of these platforms [26]. Another study delves into the extent of accessible design support these tools offer [33]. Exploring the design space to determine how VizAbility can seamlessly fit into current data visualization workflows would be compelling. Additionally, considering the degree of customization for data visualization designers, such as setting default verbosity levels, warrants further investigation.
2310.09611#66
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09611
67
8 CONCLUSION In this work, we presented VizAbility, a novel multimodal approach to enhancing accessibility in data visualizations, catering to the needs of the BLV community. By seamlessly integrating structured chart and table navigation via keyboard inputs with conversational interactions through verbal commands, VizAbility offers a comprehensive solution that bridges the gap between traditional visualization tools and the unique requirements of BLV users. Evaluations of the system underscored its potential value, with participants appreciating the integration of modalities and the system’s commitment to accessibility and transparency. Based on our evaluations, we’ve identified several avenues for further refinement, including the need for user-centric customization options and enhanced guidance mechanisms. Additionally, a more comprehensive benchmarking approach is essential to elevate the performance of our question-answering capabilities. REFERENCES [1] [n. d.]. CSS color codes. https://www.w3.org/wiki/CSS/Properties/color/keywords. Accessed: October 17, 2023. [2] [n. d.]. LangChain CSV Agent Documentation. https://python.langchain.com/docs/integrations/toolkits/csv. Accessed: October 17, 2023.
2310.09611#67
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09611
68
[3] [n. d.]. LangChain: Serp API. https://python.langchain.com/docs/integrations/tools/serpapi. Accessed on Sep 7, 2023. [4] [n. d.]. Observable Plot. https://observablehq.com/plot/. Accessed on Sep 7, 2023. [5] [n. d.]. Vega View API. https://vega.github.io/vega/docs/api/view/. Accessed: October 17, 2023. [6] [n. d.]. Whisper. https://openai.com/research/whisper. Accessed on Sep 7, 2023. [7] 2023. W3C Complex Images. https://www.w3.org/WAI/tutorials/images/complex/. [8] Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C Lawrence Zitnick, and Devi Parikh. 2015. Vqa: Visual question answering. In Proceedings of the IEEE international conference on computer vision (Santiago, Chile). IEEE, 2425–2433.
2310.09611#68
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09611
69
[9] HK Ault, JW Deloge, RW Lapp, MJ Morgan, and JR Barnett. 2002. Evaluation of long descriptions of statistical graphics for blind and low vision web users. In Computers Helping People with Special Needs: 8th International Conference, ICCHP 2002 Linz, Austria, July 15–20, 2002 Proceedings 8. Springer, 517–526. [10] Matt Blanco, Jonathan Zong, and Arvind Satyanarayan. 2022. Olli: An Extensible Visualization Library for Screen Reader Accessibility. In IEEE VIS Posters. http://vis.csail.mit.edu/pubs/olli [11] Zoya Bylinskii, Nam Wook Kim, Peter O’Donovan, Sami Alsheikh, Spandan Madan, Hanspeter Pfister, Fredo Durand, Bryan Russell, and Aaron Hertzmann. 2017. Learning visual importance for graphic designs and data visualizations. In Proceedings of the 30th Annual ACM symposium on user interface software and technology. 57–69. [12] Jaemin Cho, Jie Lei, Hao Tan, and Mohit Bansal. 2021. Unifying vision-and-language tasks via text generation. In International Conference on Machine Learning. PMLR, 1931–1942.
2310.09611#69
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09611
70
[13] Pierre Dognin, Igor Melnyk, Youssef Mroueh, Inkit Padhi, Mattia Rigotti, Jarret Ross, Yair Schiff, Richard A Young, and Brian Belgodere. 2020. Image captioning as an assistive technology: Lessons learned from vizwiz 2020 challenge. arXiv preprint arXiv:2012.11696 (2020). [14] Frank Elavsky, Lucas Nadolskis, and Dominik Moritz. 2023. Data Navigator: An accessibility-centered data navigation toolkit. arXiv preprint arXiv:2308.08475 (2023). [15] Christin Engel and Gerhard Weber. 2017. Analysis of tactile chart design. In Proceedings of the 10th International Conference on PErvasive Technologies Related to Assistive Environments. 197–200. [16] Christin Engel and Gerhard Weber. 2017. Improve the accessibility of tactile charts. In Human-Computer Interaction-INTERACT 2017: 16th IFIP TC 13 International Conference, Mumbai, India, September 25–29, 2017, Proceedings, Part I 16. Springer, 187–195.
2310.09611#70
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09611
71
[17] Christin Engel and Gerhard Weber. 2018. A user study to evaluate tactile charts with blind and visually impaired people. In Computers Helping People with Special Needs: 16th International Conference, ICCHP 2018, Linz, Austria, July 11-13, 2018, Proceedings, Part II 16. Springer, 177–184. [18] Jean-Daniel Fekete, Jarke J Van Wijk, John T Stasko, and Chris North. 2008. The value of information visualization. Information Visualization: Human-Centered Issues and Perspectives (2008), 1–18. [19] Leo Ferres, Gitte Lindgaard, Livia Sumegi, and Bruce Tsuji. 2013. Evaluating a tool for improving accessibility to charts and graphs. ACM Transactions on Computer-Human Interaction (TOCHI) 20, 5 (2013), 1–32. [20] Jinlan Fu, See-Kiong Ng, Zhengbao Jiang, and Pengfei Liu. 2023. Gptscore: Evaluate as you desire. arXiv preprint arXiv:2302.04166 (2023). [21] John A Gardner and Vladimir Bulatov. [n. d.]. Making Scientific Graphics Accessible With Viewplus Iveo®.
2310.09611#71
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09611
72
[21] John A Gardner and Vladimir Bulatov. [n. d.]. Making Scientific Graphics Accessible With Viewplus Iveo®. [22] A Jonathan R Godfrey, Paul Murrell, and Volker Sorge. 2018. An accessible interaction model for data visualisation in statistics. In Computers Helping People with Special Needs: 16th International Conference, ICCHP 2018, Linz, Austria, July 11-13, 2018, Proceedings, Part I 16. Springer, 590–597. [23] Cagatay Goncu and Kim Marriott. 2011. GraVVITAS: generic multi-touch presentation of accessible graphics. In IFIP Conference on Human-Computer Interaction. Springer, 30–48. https://doi.org/10.1007/978-3-642-23774-4_5
2310.09611#72
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09611
73
[24] Danna Gurari, Qing Li, Abigale J Stangl, Anhong Guo, Chi Lin, Kristen Grauman, Jiebo Luo, and Jeffrey P Bigham. 2018. Vizwiz grand challenge: Answering visual questions from blind people. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (Salt Lake City, UT, USA). IEEE, 3608–3617. [25] Enamul Hoque, Parsa Kavehzadeh, and Ahmed Masry. 2022. Chart Question Answering: State of the Art and Future Directions. arXiv preprint arXiv:2205.03966 (2022). [26] Shakila Cherise S Joyner, Amalia Riegelhuth, Kathleen Garrity, Yea-Seul Kim, and Nam Wook Kim. 2022. Visualization Accessibility in the Wild: Challenges Faced by Visualization Designers. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI ’22). Association for Computing Machinery, New York, NY, USA, Article 83, 19 pages. https://doi.org/10.1145/3491102.3517630
2310.09611#73
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09611
74
[27] Crescentia Jung, Shubham Mehta, Atharva Kulkarni, Yuhang Zhao, and Yea-Seul Kim. 2021. Communicating visualizations without visuals: Investigation of visualization alternative text for people with visual impairments. IEEE transactions on visualization and computer graphics 28, 1 (2021), 1095–1105. [28] Kushal Kafle and Christopher Kanan. 2017. Visual question answering: Datasets, algorithms, and future challenges. Computer Vision and Image Understanding 163 (2017), 3–20. [29] Kushal Kafle, Brian Price, Scott Cohen, and Christopher Kanan. 2018. Dvqa: Understanding data visualizations via question answering. In Proceedings of the IEEE conference on computer vision and pattern recognition (Salt Lake City, UT, USA). IEEE, 5648–5656. [30] Samira Ebrahimi Kahou, Adam Atkinson, Vincent Michalski, Ákos Kádár, Adam Trischler, and Yoshua Bengio. 2017. FigureQA: An Annotated Figure Dataset for Visual Reasoning. CoRR abs/1710.07300 (2017). arXiv:1710.07300 http://arxiv.org/abs/1710.07300
2310.09611#74
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09611
75
[31] Dae Hyun Kim, Enamul Hoque, and Maneesh Agrawala. 2020. Answering Questions about Charts and Generating Visual Explanations. In Proceedings of the 2020 CHI Conference on Human Factors in Computing Systems (Honolulu, HI, USA) (CHI ’20). Association for Computing Machinery, New York, NY, USA, 1–13. https://doi.org/10.1145/3313831.3376467 [32] Jiho Kim, Arjun Srinivasan, Nam Wook Kim, and Yea-Seul Kim. 2023. Exploring Chart Question Answering for Blind and Low Vision Users. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 1–15.
2310.09611#75
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09611
76
[33] N. W. Kim, G. Ataguba, S. C. Joyner, Chuangdian Zhao, and Hyejin Im. 2023. Beyond Alternative Text and Tables: Comparative Analysis of Visualization Tools and Accessibility Methods. Computer Graphics Forum 42, 3 (2023), 323–335. https://doi.org/10.1111/cgf.14833 arXiv:https://onlinelibrary.wiley.com/doi/pdf/10.1111/cgf.14833 [34] N. W. Kim, S. C. Joyner, A. Riegelhuth, and Y. Kim. 2021. Accessible Visualization: Design Space, Opportunities, and Challenges. Computer Graphics Forum 40, 3 (2021), 173–188. https://doi.org/10.1111/cgf.14298 arXiv:https://onlinelibrary.wiley.com/doi/pdf/10.1111/cgf.14298 [35] Steven Landau and Karen Gourgey. 2001. Development of a talking tactile tablet. Information Technology and Disabilities 7, 2 (2001).
2310.09611#76
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09611
77
[35] Steven Landau and Karen Gourgey. 2001. Development of a talking tactile tablet. Information Technology and Disabilities 7, 2 (2001). [36] Bongshin Lee, Eun Kyoung Choe, Petra Isenberg, Kim Marriott, and John Stasko. 2020. Reaching broader audiences with data visualization. IEEE Computer Graphics and Applications 40, 2 (2020), 82–90. [37] Yang Liu, Dan Iter, Yichong Xu, Shuohang Wang, Ruochen Xu, and Chenguang Zhu. 2023. Gpteval: Nlg evaluation using gpt-4 with better human alignment. arXiv preprint arXiv:2303.16634 (2023). [38] Alan Lundgard and Arvind Satyanarayan. 2021. Accessible visualization via natural language descriptions: A four-level model of semantic content. IEEE transactions on visualization and computer graphics 28, 1 (2021), 1073–1083. [39] Kim Marriott, Bongshin Lee, Matthew Butler, Ed Cutrell, Kirsten Ellis, Cagatay Goncu, Marti Hearst, Kathleen McCoy, and Danielle Albers Szafir. 2021. Inclusive data visualization for people with disabilities: a call to action. Interactions 28, 3 (2021), 47–51.
2310.09611#77
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09611
78
[40] Ahmed Masry, Do Xuan Long, Jia Qing Tan, Shafiq Joty, and Enamul Hoque. 2022. ChartQA: A Benchmark for Question Answering about Charts with Visual and Logical Reasoning. arXiv preprint arXiv:2203.10244 (2022). [41] Tomas Murillo-Morales and Klaus Miesenberger. 2017. Non-visually performing analytical tasks on statistical charts. In Harnessing the Power of Technology to Improve Lives. IOS Press, 339–346. [42] Sabrina Paneels and Jonathan C Roberts. 2009. Review of designs for haptic data visualization. IEEE Transactions on Haptics 3, 2 (2009), 119–137.
2310.09611#78
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09611
79
[42] Sabrina Paneels and Jonathan C Roberts. 2009. Review of designs for haptic data visualization. IEEE Transactions on Haptics 3, 2 (2009), 119–137. [43] Prabodh Sakhardande, Anirudha Joshi, Charudatta Jadhav, and Manjiri Joshi. 2019. Comparing user performance on parallel-tone, parallel-speech, serial-tone and serial-speech auditory graphs. In Human-Computer Interaction–INTERACT 2019: 17th IFIP TC 13 International Conference, Paphos, Cyprus, September 2–6, 2019, Proceedings, Part I 17. Springer, 247–266. [44] Arvind Satyanarayan, Dominik Moritz, Kanit Wongsuphasawat, and Jeffrey Heer. 2016. Vega-lite: A grammar of interactive graphics. IEEE transactions on visualization and computer graphics 23, 1 (2016), 341–350.
2310.09611#79
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09611
80
[45] Ather Sharif, Olivia H. Wang, Alida T. Muongchan, Katharina Reinecke, and Jacob O. Wobbrock. 2022. VoxLens: Making Online Data Visualizations Accessible with an Interactive JavaScript Plug-In. In Proceedings of the 2022 CHI Conference on Human Factors in Computing Systems (New Orleans, LA, USA) (CHI ’22). Association for Computing Machinery, New York, NY, USA, Article 478, 19 pages. https://doi.org/10.1145/3491102.3517431 [46] Alexa F. Siu, Danyang Fan, Gene S-H Kim, Hrishikesh V. Rao, Xavier Vazquez, Sile O’Modhrain, and Sean Follmer. 2021. COVID-19 Highlights the Issues Facing Blind and Visually Impaired People in Accessing Data on the Web. In Proceedings of the 18th International Web for All Conference (Ljubljana, Slovenia) (W4A ’21). Association for Computing Machinery, New York, NY, USA, Article 11, 15 pages. https://doi.org/10.1145/3430263.3452432
2310.09611#80
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09611
81
[47] Marzia Taibbi, Cristian Bernareggi, Andrea Gerino, Dragan Ahmetovic, and Sergio Mascetti. 2014. Audiofunctions: Eyes-free exploration of mathematical functions on tablets. In International Conference on Computers for Handicapped Persons. Springer, 537–544. https://doi.org//10.1007/978-3-319-08596-8_84 [48] John R Thompson, Jesse J Martinez, Alper Sarikaya, Edward Cutrell, and Bongshin Lee. 2023. Chart Reader: Accessible Visualization Experiences Designed with Screen Reader Users. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 1–18.
2310.09611#81
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09611
82
Screen Reader Users. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems. 1–18. [49] Alexandra Vtyurina, Adam Fourney, Meredith Ringel Morris, Leah Findlater, and Ryen W White. 2019. Verse: Bridging screen readers and voice assistants for enhanced eyes-free web search. In The 21st International ACM SIGACCESS Conference on Computers and Accessibility. ACM, New York, NY, USA, 414–426. [50] Jiaan Wang, Yunlong Liang, Fandong Meng, Haoxiang Shi, Zhixu Li, Jinan Xu, Jianfeng Qu, and Jie Zhou. 2023. Is chatgpt a good nlg evaluator? a preliminary study. arXiv preprint arXiv:2303.04048 (2023). [51] Ruobin Wang, Crescentia Jung, and Y Kim. 2022. Seeing through sounds: Mapping auditory dimensions to data and charts for people with visual impairments. In Computer Graphics Forum, Vol. 41. Wiley Online Library, 71–83.
2310.09611#82
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.09611
83
[52] Jason Wei, Xuezhi Wang, Dale Schuurmans, Maarten Bosma, Fei Xia, Ed Chi, Quoc V Le, Denny Zhou, et al. 2022. Chain-of-thought prompting elicits reasoning in large language models. Advances in Neural Information Processing Systems 35 (2022), 24824–24837. [53] Markus Weninger, Gerald Ortner, Tobias Hahn, Olaf Drümmer, and Klaus Miesenberger. 2015. ASVG - Accessible Scalable Vector Graphics: intention trees to make charts more accessible and usable. Journal of assistive technologies 9, 4 (2015), 239–246. [54] Qi Wu, Damien Teney, Peng Wang, Chunhua Shen, Anthony Dick, and Anton van den Hengel. 2017. Visual question answering: A survey of methods and datasets. Computer Vision and Image Understanding 163 (2017), 21–40.
2310.09611#83
VizAbility: Multimodal Accessible Data Visualization with Keyboard Navigation and Conversational Interaction
Data visualization serves as a crucial tool for communicating important information in our society. Yet, as visualizations grow more complex, they become less accessible to individuals with visual impairments. Traditional accessibility approaches like alternative text and data tables often fall short of capturing the full potential of data visualization. To bridge this gap, we introduce VizAbility, a novel multimodal accessible system that combines keyboard navigation with conventional interaction, enabling individuals with visual impairments to actively engage with and explore data visualizations. VizAbility utilizes an LLM-based pipeline, seamlessly integrating data, chart structures, user locality, and web-based information to provide comprehensive answers. Our quantitative evaluation validates the LLM-based question-and-answer pipeline, and a user study involving six participants underscores the promising potential of VizAbility's multimodal approach. We discuss how current visualization tools can integrate VizAbility to enhance the accessibility of data visualizations online.
http://arxiv.org/pdf/2310.09611
Joshua Gorniak, Yoon Kim, Stephen Gwon, Donglai Wei, Nam Wook Kim
cs.HC
13 pages, 7 figures
null
cs.HC
20231014
20231014
[ { "id": "2303.04048" }, { "id": "2308.08475" }, { "id": "2203.10244" }, { "id": "2302.04166" }, { "id": "1710.07300" }, { "id": "2012.11696" }, { "id": "2205.03966" }, { "id": "2303.16634" } ]
2310.08118
0
# Can Large Language Models Really Improve by Self-critiquing Their Own Plans? # Karthik Valmeekam∗ School of Computing & AI Arizona State University, Tempe. [email protected] # Matthew Marquez∗ School of Computing & AI Arizona State University, Tempe. [email protected] Subbarao Kambhampati School of Computing & AI Arizona State University, Tempe. [email protected] # Abstract
2310.08118#0
Can Large Language Models Really Improve by Self-critiquing Their Own Plans?
There have been widespread claims about Large Language Models (LLMs) being able to successfully verify or self-critique their candidate solutions in reasoning problems in an iterative mode. Intrigued by those claims, in this paper we set out to investigate the verification/self-critiquing abilities of large language models in the context of planning. We evaluate a planning system that employs LLMs for both plan generation and verification. We assess the verifier LLM's performance against ground-truth verification, the impact of self-critiquing on plan generation, and the influence of varying feedback levels on system performance. Using GPT-4, a state-of-the-art LLM, for both generation and verification, our findings reveal that self-critiquing appears to diminish plan generation performance, especially when compared to systems with external, sound verifiers and the LLM verifiers in that system produce a notable number of false positives, compromising the system's reliability. Additionally, the nature of feedback, whether binary or detailed, showed minimal impact on plan generation. Collectively, our results cast doubt on the effectiveness of LLMs in a self-critiquing, iterative framework for planning tasks.
http://arxiv.org/pdf/2310.08118
Karthik Valmeekam, Matthew Marquez, Subbarao Kambhampati
cs.AI
null
null
cs.AI
20231012
20231012
[ { "id": "2305.10601" }, { "id": "2305.13281" }, { "id": "2303.11366" }, { "id": "2212.09561" }, { "id": "2303.17651" }, { "id": "2303.17491" }, { "id": "2305.18654" }, { "id": "2305.15771" }, { "id": "2305.14325" } ]
2310.08118
1
Subbarao Kambhampati School of Computing & AI Arizona State University, Tempe. [email protected] # Abstract There have been widespread claims about Large Language Models (LLMs) being able to successfully verify or self-critique their candidate solutions in reasoning problems in an iterative mode. Intrigued by those claims, in this paper we set out to investigate the verification/self-critiquing abilities of large language models in the context of planning. We evaluate a planning system that employs LLMs for both plan generation and verification. We assess the verifier LLM’s performance against ground-truth verification, the impact of self-critiquing on plan generation, and the influence of varying feedback levels on system performance. Using GPT-4, a state-of-the-art LLM, for both generation and verification, our findings reveal that self-critiquing appears to diminish plan generation performance, especially when compared to systems with external, sound verifiers and the LLM verifiers in that system produce a notable number of false positives, compromising the system’s reliability. Additionally, the nature of feedback, whether binary or detailed, showed minimal impact on plan generation. Collectively, our results cast doubt on the effectiveness of LLMs in a self-critiquing, iterative framework for planning tasks. # Introduction
2310.08118#1
Can Large Language Models Really Improve by Self-critiquing Their Own Plans?
There have been widespread claims about Large Language Models (LLMs) being able to successfully verify or self-critique their candidate solutions in reasoning problems in an iterative mode. Intrigued by those claims, in this paper we set out to investigate the verification/self-critiquing abilities of large language models in the context of planning. We evaluate a planning system that employs LLMs for both plan generation and verification. We assess the verifier LLM's performance against ground-truth verification, the impact of self-critiquing on plan generation, and the influence of varying feedback levels on system performance. Using GPT-4, a state-of-the-art LLM, for both generation and verification, our findings reveal that self-critiquing appears to diminish plan generation performance, especially when compared to systems with external, sound verifiers and the LLM verifiers in that system produce a notable number of false positives, compromising the system's reliability. Additionally, the nature of feedback, whether binary or detailed, showed minimal impact on plan generation. Collectively, our results cast doubt on the effectiveness of LLMs in a self-critiquing, iterative framework for planning tasks.
http://arxiv.org/pdf/2310.08118
Karthik Valmeekam, Matthew Marquez, Subbarao Kambhampati
cs.AI
null
null
cs.AI
20231012
20231012
[ { "id": "2305.10601" }, { "id": "2305.13281" }, { "id": "2303.11366" }, { "id": "2212.09561" }, { "id": "2303.17651" }, { "id": "2303.17491" }, { "id": "2305.18654" }, { "id": "2305.15771" }, { "id": "2305.14325" } ]
2310.08319
1
# Abstract The effectiveness of multi-stage text retrieval has been solidly demonstrated since before the era of pre-trained language models. However, most existing studies utilize models that predate recent advances in large language models (LLMs). This study seeks to explore potential improvements that state-of-the-art LLMs can bring. We conduct a comprehensive study, fine-tuning the latest LLaMA model both as a dense retriever (RepLLaMA) and as a pointwise reranker (RankLLaMA) for both passage retrieval and document retrieval using the MS MARCO datasets. Our findings demonstrate that the effectiveness of large language models indeed surpasses that of smaller models. Additionally, since LLMs can inherently handle longer contexts, they can represent entire documents holistically, obviating the need for traditional segmenting and pooling strategies. Furthermore, evaluations on BEIR demonstrate that our RepLLaMA–RankLLaMA pipeline exhibits strong zero-shot effectiveness. Model checkpoints from this study are available on HuggingFace.1 # Introduction
2310.08319#1
Fine-Tuning LLaMA for Multi-Stage Text Retrieval
The effectiveness of multi-stage text retrieval has been solidly demonstrated since before the era of pre-trained language models. However, most existing studies utilize models that predate recent advances in large language models (LLMs). This study seeks to explore potential improvements that state-of-the-art LLMs can bring. We conduct a comprehensive study, fine-tuning the latest LLaMA model both as a dense retriever (RepLLaMA) and as a pointwise reranker (RankLLaMA) for both passage retrieval and document retrieval using the MS MARCO datasets. Our findings demonstrate that the effectiveness of large language models indeed surpasses that of smaller models. Additionally, since LLMs can inherently handle longer contexts, they can represent entire documents holistically, obviating the need for traditional segmenting and pooling strategies. Furthermore, evaluations on BEIR demonstrate that our RepLLaMA-RankLLaMA pipeline exhibits strong zero-shot effectiveness. Model checkpoints from this study are available on HuggingFace.
http://arxiv.org/pdf/2310.08319
Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin
cs.IR
null
null
cs.IR
20231012
20231012
[ { "id": "2302.13971" }, { "id": "2201.11903" }, { "id": "2301.12652" }, { "id": "2107.03374" }, { "id": "2201.10005" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2303.18223" }, { "id": "2101.05667" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2304.04487" }, { "id": "2308.14963" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2110.01529" }, { "id": "1907.11692" }, { "id": "2305.06983" }, { "id": "2203.02155" }, { "id": "2202.08904" }, { "id": "2003.07820" } ]
2310.08118
2
# Introduction Large Language Models have rapidly captured the attention of the AI research community with their exceptional natural language completion capabilities. Trained on web-scale language corpora, these models have demonstrated the ability to generate seemingly valuable completions across a wide range of topics. This led to a surge of interest in determining whether such models were able to perform well on reasoning tasks. Even though initial anecdotal results showed promise, systematic studies revealed their incompetency in reasoning – be it planning [12] or in simple arithmetic or logic [3]. These results questioning the robustness of their reasoning abilities led to researchers exploring ways to improve these systems. Of particular interest to us is the emerging research on self-critiquing, where the LLMs are used to critique their own candidate generations and iterate. The current works [15, 10, 14] exhibit considerable optimism about using LLMs to critique their own candidate generations, especially in an iterative setting where they keep refining their candidate generations. Additionally, the notion that verifying correctness is computationally simpler than generation for reasoning adds to the optimism. However, there are grounds to be skeptical about it, as the complexity of a reasoning task in the classical sense should be irrelevant to models like LLMs that do approximate retrieval. # ∗Equal Contribution Preprint. Under Review.
2310.08118#2
Can Large Language Models Really Improve by Self-critiquing Their Own Plans?
There have been widespread claims about Large Language Models (LLMs) being able to successfully verify or self-critique their candidate solutions in reasoning problems in an iterative mode. Intrigued by those claims, in this paper we set out to investigate the verification/self-critiquing abilities of large language models in the context of planning. We evaluate a planning system that employs LLMs for both plan generation and verification. We assess the verifier LLM's performance against ground-truth verification, the impact of self-critiquing on plan generation, and the influence of varying feedback levels on system performance. Using GPT-4, a state-of-the-art LLM, for both generation and verification, our findings reveal that self-critiquing appears to diminish plan generation performance, especially when compared to systems with external, sound verifiers and the LLM verifiers in that system produce a notable number of false positives, compromising the system's reliability. Additionally, the nature of feedback, whether binary or detailed, showed minimal impact on plan generation. Collectively, our results cast doubt on the effectiveness of LLMs in a self-critiquing, iterative framework for planning tasks.
http://arxiv.org/pdf/2310.08118
Karthik Valmeekam, Matthew Marquez, Subbarao Kambhampati
cs.AI
null
null
cs.AI
20231012
20231012
[ { "id": "2305.10601" }, { "id": "2305.13281" }, { "id": "2303.11366" }, { "id": "2212.09561" }, { "id": "2303.17651" }, { "id": "2303.17491" }, { "id": "2305.18654" }, { "id": "2305.15771" }, { "id": "2305.14325" } ]
2310.08319
2
# Introduction Text retrieval, which entails identifying and ranking the most relevant documents or text snippets in response to a query, is crucial in various open-domain language comprehension tasks (Petroni et al., 2021), including web search (Bajaj et al., 2016), open-domain question answering (Chen et al., 2017), and fact verification (Thorne et al., 2018). Retrieval also plays an important role in enhancing the effectiveness of large language models (LLMs) in a retrieval-augmented generation (RAG) pipeline (Lewis et al., 2020b; Shi et al., 2023). This approach not only mitigates hallucinations but also enables LLMs to access knowledge that is not captured within their parameters (Yang et al., 2023; Jiang et al., 2023). 1https://huggingface.co/castorini
2310.08319#2
Fine-Tuning LLaMA for Multi-Stage Text Retrieval
The effectiveness of multi-stage text retrieval has been solidly demonstrated since before the era of pre-trained language models. However, most existing studies utilize models that predate recent advances in large language models (LLMs). This study seeks to explore potential improvements that state-of-the-art LLMs can bring. We conduct a comprehensive study, fine-tuning the latest LLaMA model both as a dense retriever (RepLLaMA) and as a pointwise reranker (RankLLaMA) for both passage retrieval and document retrieval using the MS MARCO datasets. Our findings demonstrate that the effectiveness of large language models indeed surpasses that of smaller models. Additionally, since LLMs can inherently handle longer contexts, they can represent entire documents holistically, obviating the need for traditional segmenting and pooling strategies. Furthermore, evaluations on BEIR demonstrate that our RepLLaMA-RankLLaMA pipeline exhibits strong zero-shot effectiveness. Model checkpoints from this study are available on HuggingFace.
http://arxiv.org/pdf/2310.08319
Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin
cs.IR
null
null
cs.IR
20231012
20231012
[ { "id": "2302.13971" }, { "id": "2201.11903" }, { "id": "2301.12652" }, { "id": "2107.03374" }, { "id": "2201.10005" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2303.18223" }, { "id": "2101.05667" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2304.04487" }, { "id": "2308.14963" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2110.01529" }, { "id": "1907.11692" }, { "id": "2305.06983" }, { "id": "2203.02155" }, { "id": "2202.08904" }, { "id": "2003.07820" } ]
2310.08118
3
# ∗Equal Contribution Preprint. Under Review. the complexity of a reasoning task in the classical sense should be irrelevant to models like LLMs that do approximate retrieval. Intrigued by the prevailing optimism, in this paper, we set out to systematically investigate the effectiveness of using LLMs to critique their own generations in the context of planning. We look at the simplest class of planning problems, the goal-directed deterministic planning problems colloquially referred to as classical planning problems. Our methodology employs a planning system that utilizes the same LLM for both generation and verification, which we term the LLM+LLM system in an iterative setting. Within this setting, the generator LLM continuously produces candidate plans, drawing upon feedback from the verifier LLM, until the verifier LLM either approves a candidate plan as correct or the number of iterations surpasses a predefined threshold. We present an empirical evaluation of (i) the effect of self-critiquing on the plan generation performance of the overall LLM+LLM system (ii) the performance of the verifier LLM in comparison to the ground-truth verification and finally (iii) the influence of varying feedback levels while critiquing the LLM’s generation on the overall system performance. For our study, we use GPT-4 [9] as both the generator and verifier.
2310.08118#3
Can Large Language Models Really Improve by Self-critiquing Their Own Plans?
There have been widespread claims about Large Language Models (LLMs) being able to successfully verify or self-critique their candidate solutions in reasoning problems in an iterative mode. Intrigued by those claims, in this paper we set out to investigate the verification/self-critiquing abilities of large language models in the context of planning. We evaluate a planning system that employs LLMs for both plan generation and verification. We assess the verifier LLM's performance against ground-truth verification, the impact of self-critiquing on plan generation, and the influence of varying feedback levels on system performance. Using GPT-4, a state-of-the-art LLM, for both generation and verification, our findings reveal that self-critiquing appears to diminish plan generation performance, especially when compared to systems with external, sound verifiers and the LLM verifiers in that system produce a notable number of false positives, compromising the system's reliability. Additionally, the nature of feedback, whether binary or detailed, showed minimal impact on plan generation. Collectively, our results cast doubt on the effectiveness of LLMs in a self-critiquing, iterative framework for planning tasks.
http://arxiv.org/pdf/2310.08118
Karthik Valmeekam, Matthew Marquez, Subbarao Kambhampati
cs.AI
null
null
cs.AI
20231012
20231012
[ { "id": "2305.10601" }, { "id": "2305.13281" }, { "id": "2303.11366" }, { "id": "2212.09561" }, { "id": "2303.17651" }, { "id": "2303.17491" }, { "id": "2305.18654" }, { "id": "2305.15771" }, { "id": "2305.14325" } ]
2310.08319
3
1https://huggingface.co/castorini A typical multi-stage text retrieval pipeline consists of a retriever, designed to efficiently locate the top-k relevant texts from a corpus, and a reranker, which further refines the order of the retrieved candidates to improve output quality (Nogueira and Cho, 2019). Both retrievers and rerankers have significantly benefited from the advent of pre-trained language models based on Transformers (Vaswani et al., 2017) such as BERT (Devlin et al., 2019) and T5 (Raffel et al., 2020). These models are trained to encode queries and documents into vector representations for retrieval (Karpukhin et al., 2020; Lin, 2021) or to directly score the relevance between a query and a document for reranking (Nogueira et al., 2019; Zhuang et al., 2023).
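The retrieve-then-rerank structure described in this chunk can be sketched as follows. This is a minimal illustration rather than any system's actual implementation: `embed` and `rerank_score` are hashing-based toy placeholders standing in for a fine-tuned dense encoder and a pointwise cross-encoder.

```python
import numpy as np

# Toy stand-ins for the fine-tuned dense encoder and pointwise reranker the
# passage describes; a real system would use neural models.
def embed(text: str, dim: int = 64) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(dim)
    return v / np.linalg.norm(v)

def rerank_score(query: str, passage: str) -> float:
    # Placeholder for a cross-encoder relevance score over the (query, passage) pair.
    return float(embed(query) @ embed(passage))

def retrieve_then_rerank(query, corpus, k=100, n=10):
    # Stage 1: dense retrieval -- rank the whole corpus by the dot product of
    # query and passage embeddings and keep the top-k candidates.
    q = embed(query)
    candidates = sorted(corpus, key=lambda p: float(q @ embed(p)), reverse=True)[:k]
    # Stage 2: reranking -- re-score only those candidates with the more
    # expensive reranker and return the top-n.
    return sorted(candidates, key=lambda p: rerank_score(query, p), reverse=True)[:n]
```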
2310.08319#3
Fine-Tuning LLaMA for Multi-Stage Text Retrieval
The effectiveness of multi-stage text retrieval has been solidly demonstrated since before the era of pre-trained language models. However, most existing studies utilize models that predate recent advances in large language models (LLMs). This study seeks to explore potential improvements that state-of-the-art LLMs can bring. We conduct a comprehensive study, fine-tuning the latest LLaMA model both as a dense retriever (RepLLaMA) and as a pointwise reranker (RankLLaMA) for both passage retrieval and document retrieval using the MS MARCO datasets. Our findings demonstrate that the effectiveness of large language models indeed surpasses that of smaller models. Additionally, since LLMs can inherently handle longer contexts, they can represent entire documents holistically, obviating the need for traditional segmenting and pooling strategies. Furthermore, evaluations on BEIR demonstrate that our RepLLaMA-RankLLaMA pipeline exhibits strong zero-shot effectiveness. Model checkpoints from this study are available on HuggingFace.
http://arxiv.org/pdf/2310.08319
Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin
cs.IR
null
null
cs.IR
20231012
20231012
[ { "id": "2302.13971" }, { "id": "2201.11903" }, { "id": "2301.12652" }, { "id": "2107.03374" }, { "id": "2201.10005" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2303.18223" }, { "id": "2101.05667" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2304.04487" }, { "id": "2308.14963" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2110.01529" }, { "id": "1907.11692" }, { "id": "2305.06983" }, { "id": "2203.02155" }, { "id": "2202.08904" }, { "id": "2003.07820" } ]
2310.08118
4
Our findings suggest that self-critiquing degrades the plan generation performance compared to when an external, sound verifier is utilized. This decline in performance can be directly attributed to the verifier LLM’s subpar results. The verifier LLM yields a significant number of false positives, which can severely undermine the system’s reliability. Furthermore, we explored whether the nature of feedback on invalid plans influences plan generation performance. Our results indicate that the type of feedback—whether it’s merely binary verification or combined with detailed feedback on the errors of the generated plan—doesn’t significantly impact plan generation performance. Thus, our systematic investigation offers compelling preliminary evidence to question the efficacy of LLMs as verifiers for planning tasks within an iterative, self-critiquing framework. In the rest of the paper, we first present the related work, then the required background before delving into the methodology and the evaluation. # 2 Related Work
2310.08118#4
Can Large Language Models Really Improve by Self-critiquing Their Own Plans?
There have been widespread claims about Large Language Models (LLMs) being able to successfully verify or self-critique their candidate solutions in reasoning problems in an iterative mode. Intrigued by those claims, in this paper we set out to investigate the verification/self-critiquing abilities of large language models in the context of planning. We evaluate a planning system that employs LLMs for both plan generation and verification. We assess the verifier LLM's performance against ground-truth verification, the impact of self-critiquing on plan generation, and the influence of varying feedback levels on system performance. Using GPT-4, a state-of-the-art LLM, for both generation and verification, our findings reveal that self-critiquing appears to diminish plan generation performance, especially when compared to systems with external, sound verifiers and the LLM verifiers in that system produce a notable number of false positives, compromising the system's reliability. Additionally, the nature of feedback, whether binary or detailed, showed minimal impact on plan generation. Collectively, our results cast doubt on the effectiveness of LLMs in a self-critiquing, iterative framework for planning tasks.
http://arxiv.org/pdf/2310.08118
Karthik Valmeekam, Matthew Marquez, Subbarao Kambhampati
cs.AI
null
null
cs.AI
20231012
20231012
[ { "id": "2305.10601" }, { "id": "2305.13281" }, { "id": "2303.11366" }, { "id": "2212.09561" }, { "id": "2303.17651" }, { "id": "2303.17491" }, { "id": "2305.18654" }, { "id": "2305.15771" }, { "id": "2305.14325" } ]
2310.08319
4
Recent large language models with billions of parameters, fine-tuned to follow instructions, such as InstructGPT (Ouyang et al., 2022), GPT-4 (OpenAI, 2023), and LLaMA (Touvron et al., 2023a,b), have exhibited extraordinary capabilities in many NLP tasks, surpassing previous smaller pre-trained language models (Zhao et al., 2023). For retrieval, recent methods such as LRL (Ma et al., 2023), RankGPT (Sun et al., 2023), and PRP (Qin et al., 2023) have explored prompting LLMs to perform zero-shot reranking using pairwise or listwise approaches. These methods leverage LLMs by viewing reranking as text generation. However, we see a number of potential issues. First, these methods do not address the entire multi-stage pipeline, as it is challenging to cast retrieval from a large corpus as a text generation task. Second, they do not leverage labeled data when available. Finally, these rerankers are not efficient because they do not support parallel scoring and are slowed by their multi-pass decoding design.
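For illustration only, a listwise prompting reranker in the spirit of the generation-based approaches named above might look like the sketch below; `llm` is a hypothetical text-completion callable, and the prompt wording is not taken from any of the cited papers.

```python
from typing import Callable, List

def listwise_rerank(llm: Callable[[str], str], query: str, passages: List[str]) -> List[str]:
    # Build one prompt that lists all candidate passages and asks for an ordering.
    numbered = "\n".join(f"[{i + 1}] {p}" for i, p in enumerate(passages))
    prompt = (
        f"Query: {query}\n"
        f"Passages:\n{numbered}\n"
        "Rank the passages from most to least relevant to the query. "
        "Answer with passage numbers only, e.g. 3 > 1 > 2."
    )
    answer = llm(prompt)
    # Parse the generated ordering; passages the model omitted keep their
    # original relative order at the end.
    order = [int(tok) - 1 for tok in answer.replace(">", " ").split() if tok.isdigit()]
    ranked = [i for i in dict.fromkeys(order) if 0 <= i < len(passages)]
    ranked += [i for i in range(len(passages)) if i not in ranked]
    return [passages[i] for i in ranked]
```

Note that this design requires a full generation pass over the candidate list, which is one source of the efficiency concerns raised in the chunk.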
2310.08319#4
Fine-Tuning LLaMA for Multi-Stage Text Retrieval
The effectiveness of multi-stage text retrieval has been solidly demonstrated since before the era of pre-trained language models. However, most existing studies utilize models that predate recent advances in large language models (LLMs). This study seeks to explore potential improvements that state-of-the-art LLMs can bring. We conduct a comprehensive study, fine-tuning the latest LLaMA model both as a dense retriever (RepLLaMA) and as a pointwise reranker (RankLLaMA) for both passage retrieval and document retrieval using the MS MARCO datasets. Our findings demonstrate that the effectiveness of large language models indeed surpasses that of smaller models. Additionally, since LLMs can inherently handle longer contexts, they can represent entire documents holistically, obviating the need for traditional segmenting and pooling strategies. Furthermore, evaluations on BEIR demonstrate that our RepLLaMA-RankLLaMA pipeline exhibits strong zero-shot effectiveness. Model checkpoints from this study are available on HuggingFace.
http://arxiv.org/pdf/2310.08319
Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin
cs.IR
null
null
cs.IR
20231012
20231012
[ { "id": "2302.13971" }, { "id": "2201.11903" }, { "id": "2301.12652" }, { "id": "2107.03374" }, { "id": "2201.10005" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2303.18223" }, { "id": "2101.05667" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2304.04487" }, { "id": "2308.14963" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2110.01529" }, { "id": "1907.11692" }, { "id": "2305.06983" }, { "id": "2203.02155" }, { "id": "2202.08904" }, { "id": "2003.07820" } ]
2310.08118
5
# 2 Related Work There has been significant interest in investigating the reasoning capabilities of LLMs, spanning from planning [12] to logic and arithmetic [3], and even puzzles [15]. As the initial excitement from triumphant anecdotes about LLMs’ reasoning capabilities began to wane with systematic studies [12, 11, 3], researchers proposed that allowing LLMs to verify their own candidate solutions and iterate over this process could enhance their reasoning abilities [10, 7, 6, 14]. Our work systematically investigates the effect of iterative self-critiquing in the context of planning. There have also been studies that utilize multiple LLMs to generate and verify candidate solutions, either in the form of a debate [2] or through cross-examination [1]. However, these studies still rely solely on the verification/self-critiquing abilities of the LLMs, an aspect our work critically examines in the context of planning. Our results provide compelling reasons to question the use of LLMs for self-critiquing in planning. # 3 Background
2310.08118#5
Can Large Language Models Really Improve by Self-critiquing Their Own Plans?
There have been widespread claims about Large Language Models (LLMs) being able to successfully verify or self-critique their candidate solutions in reasoning problems in an iterative mode. Intrigued by those claims, in this paper we set out to investigate the verification/self-critiquing abilities of large language models in the context of planning. We evaluate a planning system that employs LLMs for both plan generation and verification. We assess the verifier LLM's performance against ground-truth verification, the impact of self-critiquing on plan generation, and the influence of varying feedback levels on system performance. Using GPT-4, a state-of-the-art LLM, for both generation and verification, our findings reveal that self-critiquing appears to diminish plan generation performance, especially when compared to systems with external, sound verifiers and the LLM verifiers in that system produce a notable number of false positives, compromising the system's reliability. Additionally, the nature of feedback, whether binary or detailed, showed minimal impact on plan generation. Collectively, our results cast doubt on the effectiveness of LLMs in a self-critiquing, iterative framework for planning tasks.
http://arxiv.org/pdf/2310.08118
Karthik Valmeekam, Matthew Marquez, Subbarao Kambhampati
cs.AI
null
null
cs.AI
20231012
20231012
[ { "id": "2305.10601" }, { "id": "2305.13281" }, { "id": "2303.11366" }, { "id": "2212.09561" }, { "id": "2303.17651" }, { "id": "2303.17491" }, { "id": "2305.18654" }, { "id": "2305.15771" }, { "id": "2305.14325" } ]
2310.08319
5
Therefore, we argue that fine-tuning state-of-the-art large language models to function as retrievers and rerankers can yield better effectiveness than previous smaller models. This approach can also optimally utilize LLMs within multi-stage pipelines. Thus, we are motivated to investigate the following research question: How do state-of-the-art large language models perform when specifically fine-tuned for multi-stage text retrieval?
2310.08319#5
Fine-Tuning LLaMA for Multi-Stage Text Retrieval
The effectiveness of multi-stage text retrieval has been solidly demonstrated since before the era of pre-trained language models. However, most existing studies utilize models that predate recent advances in large language models (LLMs). This study seeks to explore potential improvements that state-of-the-art LLMs can bring. We conduct a comprehensive study, fine-tuning the latest LLaMA model both as a dense retriever (RepLLaMA) and as a pointwise reranker (RankLLaMA) for both passage retrieval and document retrieval using the MS MARCO datasets. Our findings demonstrate that the effectiveness of large language models indeed surpasses that of smaller models. Additionally, since LLMs can inherently handle longer contexts, they can represent entire documents holistically, obviating the need for traditional segmenting and pooling strategies. Furthermore, evaluations on BEIR demonstrate that our RepLLaMA-RankLLaMA pipeline exhibits strong zero-shot effectiveness. Model checkpoints from this study are available on HuggingFace.
http://arxiv.org/pdf/2310.08319
Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin
cs.IR
null
null
cs.IR
20231012
20231012
[ { "id": "2302.13971" }, { "id": "2201.11903" }, { "id": "2301.12652" }, { "id": "2107.03374" }, { "id": "2201.10005" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2303.18223" }, { "id": "2101.05667" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2304.04487" }, { "id": "2308.14963" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2110.01529" }, { "id": "1907.11692" }, { "id": "2305.06983" }, { "id": "2203.02155" }, { "id": "2202.08904" }, { "id": "2003.07820" } ]
2310.08118
6
# 3 Background We specifically are interested in classical planning problems that are represented within the PDDL (Planning Domain Definition Language) framework [8]. These problem classes consist of a domain, an initial state, and a goal state. The domain consists of a set of predicates and a set of actions. The state-space of the planning problem is represented with some truth-assignment on the predicates. Every action in the domain has a set of preconditions, which determine when the action can be applied, and a set of effects, which determine the modifications to the state after the action is applied. A plan here is a sequence of actions from the domain that, when executed in the initial state, satisfies the goal conditions. Figure 1: Overall evaluation architecture # 4 Methodology # 4.1 The LLM+LLM planning system
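The definitions above map naturally onto a STRIPS-style plan check. The sketch below is illustrative and not taken from the paper: states are modeled as sets of true propositions, and each action carries preconditions, add effects, and delete effects.

```python
from dataclasses import dataclass
from typing import FrozenSet, List, Set

@dataclass(frozen=True)
class Action:
    name: str
    preconditions: FrozenSet[str]
    add_effects: FrozenSet[str]
    del_effects: FrozenSet[str]

def plan_is_valid(initial: Set[str], goal: Set[str], plan: List[Action]) -> bool:
    # Apply each action in sequence, checking applicability at every step.
    state = set(initial)
    for action in plan:
        if not action.preconditions <= state:      # precondition not satisfied
            return False
        state = (state - action.del_effects) | action.add_effects
    return goal <= state                           # goal conditions satisfied
```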
2310.08118#6
Can Large Language Models Really Improve by Self-critiquing Their Own Plans?
There have been widespread claims about Large Language Models (LLMs) being able to successfully verify or self-critique their candidate solutions in reasoning problems in an iterative mode. Intrigued by those claims, in this paper we set out to investigate the verification/self-critiquing abilities of large language models in the context of planning. We evaluate a planning system that employs LLMs for both plan generation and verification. We assess the verifier LLM's performance against ground-truth verification, the impact of self-critiquing on plan generation, and the influence of varying feedback levels on system performance. Using GPT-4, a state-of-the-art LLM, for both generation and verification, our findings reveal that self-critiquing appears to diminish plan generation performance, especially when compared to systems with external, sound verifiers and the LLM verifiers in that system produce a notable number of false positives, compromising the system's reliability. Additionally, the nature of feedback, whether binary or detailed, showed minimal impact on plan generation. Collectively, our results cast doubt on the effectiveness of LLMs in a self-critiquing, iterative framework for planning tasks.
http://arxiv.org/pdf/2310.08118
Karthik Valmeekam, Matthew Marquez, Subbarao Kambhampati
cs.AI
null
null
cs.AI
20231012
20231012
[ { "id": "2305.10601" }, { "id": "2305.13281" }, { "id": "2303.11366" }, { "id": "2212.09561" }, { "id": "2303.17651" }, { "id": "2303.17491" }, { "id": "2305.18654" }, { "id": "2305.15771" }, { "id": "2305.14325" } ]
2310.08319
6
Our study aims to answer this question by conducting a comprehensive investigation into fine-tuning the latest LLaMA-2 model (Touvron et al., 2023b), a state-of-the-art, open-source large language model, as both a retriever and a reranker, which we refer to as RepLLaMA and RankLLaMA, respectively. Specifically, we utilize the MS MARCO (Bajaj et al., 2016) and BEIR (Thakur et al., 2021) datasets for our experiments. Our findings suggest that large language models surpass previous smaller models, achieving state-of-the-art effectiveness for both retrieval and reranking through a straightforward training regime and exhibiting strong zero-shot effectiveness. Furthermore, we observe that LLMs, which are inherently pre-trained on longer contexts, demonstrate potential in representing entire documents, thereby eliminating the need for traditional segmenting and pooling strategies for document retrieval. # 2 Method # 2.1 Preliminaries
2310.08319#6
Fine-Tuning LLaMA for Multi-Stage Text Retrieval
The effectiveness of multi-stage text retrieval has been solidly demonstrated since before the era of pre-trained language models. However, most existing studies utilize models that predate recent advances in large language models (LLMs). This study seeks to explore potential improvements that state-of-the-art LLMs can bring. We conduct a comprehensive study, fine-tuning the latest LLaMA model both as a dense retriever (RepLLaMA) and as a pointwise reranker (RankLLaMA) for both passage retrieval and document retrieval using the MS MARCO datasets. Our findings demonstrate that the effectiveness of large language models indeed surpasses that of smaller models. Additionally, since LLMs can inherently handle longer contexts, they can represent entire documents holistically, obviating the need for traditional segmenting and pooling strategies. Furthermore, evaluations on BEIR demonstrate that our RepLLaMA-RankLLaMA pipeline exhibits strong zero-shot effectiveness. Model checkpoints from this study are available on HuggingFace.
http://arxiv.org/pdf/2310.08319
Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin
cs.IR
null
null
cs.IR
20231012
20231012
[ { "id": "2302.13971" }, { "id": "2201.11903" }, { "id": "2301.12652" }, { "id": "2107.03374" }, { "id": "2201.10005" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2303.18223" }, { "id": "2101.05667" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2304.04487" }, { "id": "2308.14963" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2110.01529" }, { "id": "1907.11692" }, { "id": "2305.06983" }, { "id": "2203.02155" }, { "id": "2202.08904" }, { "id": "2003.07820" } ]
2310.08118
7
Figure 1: Overall evaluation architecture # 4 Methodology # 4.1 The LLM+LLM planning system The LLM+LLM planning system (as shown in Figure 1) consists of a generator LLM and a verifier LLM. For a given instance, the generator LLM produces a candidate plan, while the verifier LLM determines its correctness. If the plan is found to be incorrect, the verifier provides feedback detailing the reasons for its failure. This feedback is then relayed to the generator LLM, prompting the generation of a new candidate plan. It’s worth noting that there are no constraints on the type or format of feedback the verifier LLM produces. The system ceases generation either when the verifier LLM approves the candidate plan as valid or when the number of prompting iterations exceeds a set threshold (for our experiments, this threshold is set at 15 iterations). This method is similar to the backprompting technique described in [12]. However, the main distinction lies in the type of verifier employed. In our system, both the verifier and generator are LLMs, whereas the referenced approach utilizes an external sound verifier, VAL [4]. For all our experiments, GPT-4 serves as the default LLM. # 4.2 Prompt generation
2310.08118#7
Can Large Language Models Really Improve by Self-critiquing Their Own Plans?
There have been widespread claims about Large Language Models (LLMs) being able to successfully verify or self-critique their candidate solutions in reasoning problems in an iterative mode. Intrigued by those claims, in this paper we set out to investigate the verification/self-critiquing abilities of large language models in the context of planning. We evaluate a planning system that employs LLMs for both plan generation and verification. We assess the verifier LLM's performance against ground-truth verification, the impact of self-critiquing on plan generation, and the influence of varying feedback levels on system performance. Using GPT-4, a state-of-the-art LLM, for both generation and verification, our findings reveal that self-critiquing appears to diminish plan generation performance, especially when compared to systems with external, sound verifiers and the LLM verifiers in that system produce a notable number of false positives, compromising the system's reliability. Additionally, the nature of feedback, whether binary or detailed, showed minimal impact on plan generation. Collectively, our results cast doubt on the effectiveness of LLMs in a self-critiquing, iterative framework for planning tasks.
http://arxiv.org/pdf/2310.08118
Karthik Valmeekam, Matthew Marquez, Subbarao Kambhampati
cs.AI
null
null
cs.AI
20231012
20231012
[ { "id": "2305.10601" }, { "id": "2305.13281" }, { "id": "2303.11366" }, { "id": "2212.09561" }, { "id": "2303.17651" }, { "id": "2303.17491" }, { "id": "2305.18654" }, { "id": "2305.15771" }, { "id": "2305.14325" } ]
2310.08319
7
# 2 Method # 2.1 Preliminaries Task Definition Given a query Q and a corpus C = {D_1, D_2, ..., D_n} consisting of n documents, the goal of text retrieval is to find the k documents that are most relevant to the query Q, with k ≪ n. In a multi-stage retrieval pipeline composed of a retriever and a reranker, the retriever’s task is to efficiently generate the top-k candidates that are relevant to the query based on the similarity metric Sim(Q, D) ∈ R. The reranker’s task is to reorder these k candidate documents further to improve the relevance order using a more effective, but typically more computationally expensive, reranking model. Note that “document” in this context can refer to an arbitrary information snippet, including sentences, passages, or full documents. While a multi-stage pipeline can contain multiple rerankers, in this paper we focus on a single reranker.
2310.08319#7
Fine-Tuning LLaMA for Multi-Stage Text Retrieval
The effectiveness of multi-stage text retrieval has been solidly demonstrated since before the era of pre-trained language models. However, most existing studies utilize models that predate recent advances in large language models (LLMs). This study seeks to explore potential improvements that state-of-the-art LLMs can bring. We conduct a comprehensive study, fine-tuning the latest LLaMA model both as a dense retriever (RepLLaMA) and as a pointwise reranker (RankLLaMA) for both passage retrieval and document retrieval using the MS MARCO datasets. Our findings demonstrate that the effectiveness of large language models indeed surpasses that of smaller models. Additionally, since LLMs can inherently handle longer contexts, they can represent entire documents holistically, obviating the need for traditional segmenting and pooling strategies. Furthermore, evaluations on BEIR demonstrate that our RepLLaMA-RankLLaMA pipeline exhibits strong zero-shot effectiveness. Model checkpoints from this study are available on HuggingFace.
http://arxiv.org/pdf/2310.08319
Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin
cs.IR
null
null
cs.IR
20231012
20231012
[ { "id": "2302.13971" }, { "id": "2201.11903" }, { "id": "2301.12652" }, { "id": "2107.03374" }, { "id": "2201.10005" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2303.18223" }, { "id": "2101.05667" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2304.04487" }, { "id": "2308.14963" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2110.01529" }, { "id": "1907.11692" }, { "id": "2305.06983" }, { "id": "2203.02155" }, { "id": "2202.08904" }, { "id": "2003.07820" } ]
2310.08118
8
# 4.2 Prompt generation For the LLM+LLM Planning system described above, we utilize distinct prompts for the generator and verifier LLMs. The prompt generator (as shown in Figure 1) utilizes the PDDL domain and instance files to generate the required prompts in natural language. Our prompts are structured similarly to the natural language prompts found in [12]. For plan generation, our prompts are one-shot: we begin by presenting the domain description, followed by an example instance (along with its corresponding plan). We then present the query instance. These example instances are randomly selected from our set of instances, and this forms the input for the generator LLM. For the verifier LLM, we adopt a zero-shot approach. Here, we present the domain description, followed by the query instance and its corresponding plan. The verifier LLM is then tasked with verifying the query plan and providing feedback if necessary. As mentioned earlier, we do not restrict the type or format of the feedback for the verifier LLM. Detailed examples of the prompts given to both the generator and verifier LLMs can be found in the Appendix. # 5 Evaluation and Analysis
2310.08118#8
Can Large Language Models Really Improve by Self-critiquing Their Own Plans?
There have been widespread claims about Large Language Models (LLMs) being able to successfully verify or self-critique their candidate solutions in reasoning problems in an iterative mode. Intrigued by those claims, in this paper we set out to investigate the verification/self-critiquing abilities of large language models in the context of planning. We evaluate a planning system that employs LLMs for both plan generation and verification. We assess the verifier LLM's performance against ground-truth verification, the impact of self-critiquing on plan generation, and the influence of varying feedback levels on system performance. Using GPT-4, a state-of-the-art LLM, for both generation and verification, our findings reveal that self-critiquing appears to diminish plan generation performance, especially when compared to systems with external, sound verifiers and the LLM verifiers in that system produce a notable number of false positives, compromising the system's reliability. Additionally, the nature of feedback, whether binary or detailed, showed minimal impact on plan generation. Collectively, our results cast doubt on the effectiveness of LLMs in a self-critiquing, iterative framework for planning tasks.
http://arxiv.org/pdf/2310.08118
Karthik Valmeekam, Matthew Marquez, Subbarao Kambhampati
cs.AI
null
null
cs.AI
20231012
20231012
[ { "id": "2305.10601" }, { "id": "2305.13281" }, { "id": "2303.11366" }, { "id": "2212.09561" }, { "id": "2303.17651" }, { "id": "2303.17491" }, { "id": "2305.18654" }, { "id": "2305.15771" }, { "id": "2305.14325" } ]
2310.08319
8
Modern retrievers typically follow a bi-encoder architecture that encodes text into vector representations, with Sim(Q, D) computed as the dot product of the vector representations of the query Q and a document D (Karpukhin et al., 2020). In contrast, a (pointwise) reranker typically takes both the query and a candidate document as input to directly generate a relevance score. These scores are then used to reorder the candidates (Nogueira et al., 2019; Gao et al., 2021). LLaMA LLaMA (Touvron et al., 2023a) is an auto-regressive, decoder-only large language model based on the Transformer architecture. The model is characterized by its billions of parameters, pre-trained on a vast amount of web data. Being uni-directional means that the model’s attention mechanism only considers the preceding elements in the input sequence when making predictions. Specifically, given an input sequence x = [t_1, t_2, ..., t_{n−1}], the model computes the probability of the next token t_n based solely on the preceding tokens. The prediction process can be mathematically represented as P(t_n | t_1, ..., t_{n−1}), where P denotes the probability and t_n represents the next element in the sequence.
2310.08319#8
Fine-Tuning LLaMA for Multi-Stage Text Retrieval
The effectiveness of multi-stage text retrieval has been solidly demonstrated since before the era of pre-trained language models. However, most existing studies utilize models that predate recent advances in large language models (LLMs). This study seeks to explore potential improvements that state-of-the-art LLMs can bring. We conduct a comprehensive study, fine-tuning the latest LLaMA model both as a dense retriever (RepLLaMA) and as a pointwise reranker (RankLLaMA) for both passage retrieval and document retrieval using the MS MARCO datasets. Our findings demonstrate that the effectiveness of large language models indeed surpasses that of smaller models. Additionally, since LLMs can inherently handle longer contexts, they can represent entire documents holistically, obviating the need for traditional segmenting and pooling strategies. Furthermore, evaluations on BEIR demonstrate that our RepLLaMA-RankLLaMA pipeline exhibits strong zero-shot effectiveness. Model checkpoints from this study are available on HuggingFace.
http://arxiv.org/pdf/2310.08319
Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin
cs.IR
null
null
cs.IR
20231012
20231012
[ { "id": "2302.13971" }, { "id": "2201.11903" }, { "id": "2301.12652" }, { "id": "2107.03374" }, { "id": "2201.10005" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2303.18223" }, { "id": "2101.05667" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2304.04487" }, { "id": "2308.14963" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2110.01529" }, { "id": "1907.11692" }, { "id": "2305.06983" }, { "id": "2203.02155" }, { "id": "2202.08904" }, { "id": "2003.07820" } ]
2310.08118
9
# 5 Evaluation and Analysis We evaluate our planning system on Blocksworld, a widely recognized common-sense planning domain in AI planning literature [5]. We generate 100 random instances for evaluation across various methods. To provide a ground-truth assessment of the final LLM plan’s correctness, we employ an external sound verifier, VAL [4]. For all experiments, GPT-4 [9] serves as the chosen LLM and was run with a temperature of 0, thereby making it deterministic. # 5.1 Effect of self-critiquing on plan generation We assessed the impact of self-critiquing on plan generation by comparing the LLM+LLM backprompting system with two other baselines. The first baseline is the LLM+VAL backprompting system, which mirrors the backprompting method described in [12]. In this method, the plan produced by the LLM is validated by an external sound verifier, VAL. If the plan is found lacking, the generator-LLM is reprompted using feedback from VAL. The second baseline involves a generator-LLM without backprompting. Here, the generator LLM receives a single prompt, and the resulting plan is considered final.
2310.08118#9
Can Large Language Models Really Improve by Self-critiquing Their Own Plans?
There have been widespread claims about Large Language Models (LLMs) being able to successfully verify or self-critique their candidate solutions in reasoning problems in an iterative mode. Intrigued by those claims, in this paper we set out to investigate the verification/self-critiquing abilities of large language models in the context of planning. We evaluate a planning system that employs LLMs for both plan generation and verification. We assess the verifier LLM's performance against ground-truth verification, the impact of self-critiquing on plan generation, and the influence of varying feedback levels on system performance. Using GPT-4, a state-of-the-art LLM, for both generation and verification, our findings reveal that self-critiquing appears to diminish plan generation performance, especially when compared to systems with external, sound verifiers and the LLM verifiers in that system produce a notable number of false positives, compromising the system's reliability. Additionally, the nature of feedback, whether binary or detailed, showed minimal impact on plan generation. Collectively, our results cast doubt on the effectiveness of LLMs in a self-critiquing, iterative framework for planning tasks.
http://arxiv.org/pdf/2310.08118
Karthik Valmeekam, Matthew Marquez, Subbarao Kambhampati
cs.AI
null
null
cs.AI
20231012
20231012
[ { "id": "2305.10601" }, { "id": "2305.13281" }, { "id": "2303.11366" }, { "id": "2212.09561" }, { "id": "2303.17651" }, { "id": "2303.17491" }, { "id": "2305.18654" }, { "id": "2305.15771" }, { "id": "2305.14325" } ]
2310.08319
9
# 2.2 Retriever Our retriever model, called RepLLaMA, follows the bi-encoder dense retriever architecture proposed in DPR (Karpukhin et al., 2020), but with the backbone model initialized with LLaMA. Previous work on dense retriever models often uses a bi-directional encoder-only model like BERT, taking the representation of the prepended [CLS] token as the dense representation of the text input. However, as LLaMA is uni-directional, we append an end-of-sequence token </s> to the input query or document to form the input sequence to LLaMA. Thus, the vector embedding of a query or a document is computed as: V_T = Decoder(‘t_1 t_2 ... t_k</s>’)[−1] where Decoder(·) represents the LLaMA model, which returns the last layer token representation for each input token. We take the representation of the end-of-sequence token as the representation of the input sequence t_1 ... t_k, which can be either a query Q or a document D. Relevance of D to Q is computed in terms of the dot product of their corresponding dense representations V_Q and V_D as Sim(Q, D) = <V_Q, V_D>.
2310.08319#9
Fine-Tuning LLaMA for Multi-Stage Text Retrieval
The effectiveness of multi-stage text retrieval has been solidly demonstrated since before the era of pre-trained language models. However, most existing studies utilize models that predate recent advances in large language models (LLMs). This study seeks to explore potential improvements that state-of-the-art LLMs can bring. We conduct a comprehensive study, fine-tuning the latest LLaMA model both as a dense retriever (RepLLaMA) and as a pointwise reranker (RankLLaMA) for both passage retrieval and document retrieval using the MS MARCO datasets. Our findings demonstrate that the effectiveness of large language models indeed surpasses that of smaller models. Additionally, since LLMs can inherently handle longer contexts, they can represent entire documents holistically, obviating the need for traditional segmenting and pooling strategies. Furthermore, evaluations on BEIR demonstrate that our RepLLaMA-RankLLaMA pipeline exhibits strong zero-shot effectiveness. Model checkpoints from this study are available on HuggingFace.
http://arxiv.org/pdf/2310.08319
Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin
cs.IR
null
null
cs.IR
20231012
20231012
[ { "id": "2302.13971" }, { "id": "2201.11903" }, { "id": "2301.12652" }, { "id": "2107.03374" }, { "id": "2201.10005" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2303.18223" }, { "id": "2101.05667" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2304.04487" }, { "id": "2308.14963" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2110.01529" }, { "id": "1907.11692" }, { "id": "2305.06983" }, { "id": "2203.02155" }, { "id": "2202.08904" }, { "id": "2003.07820" } ]
2310.08118
10
As illustrated in Table 1, the LLM+LLM backprompting approach slightly outperforms the non-backprompting method in terms of accuracy. However, it falls short when compared to the LLM+VAL system. It’s worth noting that the marginal improvement over the generator-LLM-only method might not solely be attributed to the LLM verifier. The backprompting itself, which offers the generator LLM multiple opportunities to produce a plan, could be a contributing factor. The subpar performance of the LLM+LLM system, especially when compared to LLM+VAL, can likely be traced back to the substantial number of type-1 errors produced by the LLM verifier. It’s evident that incorporating a sound verifier in the backprompting process can significantly enhance overall performance. Plan Generation Method Accuracy Avg. Number of iterations LLM+LLM w/ Backprompting (BP) 55/100 (55%) 3.48 LLM+VAL w/ BP 88/100 (88%) 4.18 Generator LLM only w/o BP 40/100 (40%) 1.00 # Table 1: Comparison between various plan generation methods on the Blocksworld domain. # 5.2 Analysis on the self-critique verifier
2310.08118#10
Can Large Language Models Really Improve by Self-critiquing Their Own Plans?
There have been widespread claims about Large Language Models (LLMs) being able to successfully verify or self-critique their candidate solutions in reasoning problems in an iterative mode. Intrigued by those claims, in this paper we set out to investigate the verification/self-critiquing abilities of large language models in the context of planning. We evaluate a planning system that employs LLMs for both plan generation and verification. We assess the verifier LLM's performance against ground-truth verification, the impact of self-critiquing on plan generation, and the influence of varying feedback levels on system performance. Using GPT-4, a state-of-the-art LLM, for both generation and verification, our findings reveal that self-critiquing appears to diminish plan generation performance, especially when compared to systems with external, sound verifiers and the LLM verifiers in that system produce a notable number of false positives, compromising the system's reliability. Additionally, the nature of feedback, whether binary or detailed, showed minimal impact on plan generation. Collectively, our results cast doubt on the effectiveness of LLMs in a self-critiquing, iterative framework for planning tasks.
http://arxiv.org/pdf/2310.08118
Karthik Valmeekam, Matthew Marquez, Subbarao Kambhampati
cs.AI
null
null
cs.AI
20231012
20231012
[ { "id": "2305.10601" }, { "id": "2305.13281" }, { "id": "2303.11366" }, { "id": "2212.09561" }, { "id": "2303.17651" }, { "id": "2303.17491" }, { "id": "2305.18654" }, { "id": "2305.15771" }, { "id": "2305.14325" } ]
2310.08319
10
The model is then optimized end-to-end according to the InfoNCE loss: L(Q, D^+, \{D_N\}) = -\log p(D = D^+ \mid Q) = -\log \frac{\exp(\mathrm{Sim}(Q, D^+))}{\exp(\mathrm{Sim}(Q, D^+)) + \sum_{D_i \in \{D_N\}} \exp(\mathrm{Sim}(Q, D_i))} Here, D^+ represents a document that is relevant to the query Q (based on human labels), while {D_N} denotes a set of documents that is not relevant to the query. The set of negative documents includes both hard negatives, which are sampled from the top-ranking results of an existing retrieval system, and in-batch negatives, which are derived from the positive documents and hard negative documents associated with other queries in the same training batch. In practice, dense retrieval training tends to benefit from a larger set of hard negatives and in-batch negatives. During the inference phase, the query is typically encoded in real-time and the top-k similar documents are searched within the pre-encoded corpus using an efficient approximate nearest neighbour search library such as HNSW (Malkov and Yashunin, 2020). However, in this study, we opt to perform exact nearest neighbour search using flat indexes to evaluate model effectiveness. # 2.3 Reranker
2310.08319#10
Fine-Tuning LLaMA for Multi-Stage Text Retrieval
The effectiveness of multi-stage text retrieval has been solidly demonstrated since before the era of pre-trained language models. However, most existing studies utilize models that predate recent advances in large language models (LLMs). This study seeks to explore potential improvements that state-of-the-art LLMs can bring. We conduct a comprehensive study, fine-tuning the latest LLaMA model both as a dense retriever (RepLLaMA) and as a pointwise reranker (RankLLaMA) for both passage retrieval and document retrieval using the MS MARCO datasets. Our findings demonstrate that the effectiveness of large language models indeed surpasses that of smaller models. Additionally, since LLMs can inherently handle longer contexts, they can represent entire documents holistically, obviating the need for traditional segmenting and pooling strategies. Furthermore, evaluations on BEIR demonstrate that our RepLLaMA-RankLLaMA pipeline exhibits strong zero-shot effectiveness. Model checkpoints from this study are available on HuggingFace.
http://arxiv.org/pdf/2310.08319
Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin
cs.IR
null
null
cs.IR
20231012
20231012
[ { "id": "2302.13971" }, { "id": "2201.11903" }, { "id": "2301.12652" }, { "id": "2107.03374" }, { "id": "2201.10005" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2303.18223" }, { "id": "2101.05667" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2304.04487" }, { "id": "2308.14963" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2110.01529" }, { "id": "1907.11692" }, { "id": "2305.06983" }, { "id": "2203.02155" }, { "id": "2202.08904" }, { "id": "2003.07820" } ]
2310.08118
11
# Table 1: Comparison between various plan generation methods on the Blocksworld domain. # 5.2 Analysis on the self-critique verifier We base our evaluation of the verifier LLM on its binary verification (i.e., determining whether the plan is valid or not) of the final plan produced by the LLM+LLM system. It’s important to note that the system halts either when the verifier LLM considers the plan valid or when the number of iterations surpasses 15. We compare the LLM verifier’s output with ground truth classifications made using VAL [4], a sound verifier. To make the ground truth determination available for each input plan, we separately evaluate that plan using VAL as well. As illustrated in Table 2, out of the 100 instances, the verifier accurately identifies 61 (or 61%). However, a deeper examination of the verifier’s errors reveals a concerning number of false positives. In this context, a false positive refers to the verifier LLM deeming a generated plan valid when, in fact, it is not. Out of the 100 instances, the verifier LLM produces 54 true positives and 38 false positives (type-1 errors). This indicates that the verifier deemed 38 plans, which were actually invalid, to be valid, which can be catastrophic if such a system is deployed in scenarios where correctness is paramount.
2310.08118#11
Can Large Language Models Really Improve by Self-critiquing Their Own Plans?
There have been widespread claims about Large Language Models (LLMs) being able to successfully verify or self-critique their candidate solutions in reasoning problems in an iterative mode. Intrigued by those claims, in this paper we set out to investigate the verification/self-critiquing abilities of large language models in the context of planning. We evaluate a planning system that employs LLMs for both plan generation and verification. We assess the verifier LLM's performance against ground-truth verification, the impact of self-critiquing on plan generation, and the influence of varying feedback levels on system performance. Using GPT-4, a state-of-the-art LLM, for both generation and verification, our findings reveal that self-critiquing appears to diminish plan generation performance, especially when compared to systems with external, sound verifiers and the LLM verifiers in that system produce a notable number of false positives, compromising the system's reliability. Additionally, the nature of feedback, whether binary or detailed, showed minimal impact on plan generation. Collectively, our results cast doubt on the effectiveness of LLMs in a self-critiquing, iterative framework for planning tasks.
http://arxiv.org/pdf/2310.08118
Karthik Valmeekam, Matthew Marquez, Subbarao Kambhampati
cs.AI
null
null
cs.AI
20231012
20231012
[ { "id": "2305.10601" }, { "id": "2305.13281" }, { "id": "2303.11366" }, { "id": "2212.09561" }, { "id": "2303.17651" }, { "id": "2303.17491" }, { "id": "2305.18654" }, { "id": "2305.15771" }, { "id": "2305.14325" } ]
2310.08319
11
# 2.3 Reranker Our reranker model, referred to as RankLLaMA, is trained as a pointwise reranker. This approach involves passing a query and a candidate document together as model input, with the model generating a score that indicates the relevance of the document to the query (Nogueira et al., 2019). In more detail, RankLLaMA reranks a query–document pair as follows: input = ‘query: {Q} document: {D}</s>’ Sim(Q, D) = Linear(Decoder(input)[−1]) where Linear(·) is a linear projection layer that projects the last layer representation of the end-of-sequence token to a scalar. Similar to the retriever, the model is optimized with a contrastive loss. However, in this case, the negative documents do not include in-batch negatives. To train a reranker that is optimized to rerank candidates from a specific retriever in a multi-stage pipeline, hard negatives should be sampled from the top-ranking results of that retriever. Specifically, in our case, the hard negative training data for RankLLaMA are selected from the top-ranking results of RepLLaMA.
2310.08319#11
Fine-Tuning LLaMA for Multi-Stage Text Retrieval
The effectiveness of multi-stage text retrieval has been solidly demonstrated since before the era of pre-trained language models. However, most existing studies utilize models that predate recent advances in large language models (LLMs). This study seeks to explore potential improvements that state-of-the-art LLMs can bring. We conduct a comprehensive study, fine-tuning the latest LLaMA model both as a dense retriever (RepLLaMA) and as a pointwise reranker (RankLLaMA) for both passage retrieval and document retrieval using the MS MARCO datasets. Our findings demonstrate that the effectiveness of large language models indeed surpasses that of smaller models. Additionally, since LLMs can inherently handle longer contexts, they can represent entire documents holistically, obviating the need for traditional segmenting and pooling strategies. Furthermore, evaluations on BEIR demonstrate that our RepLLaMA-RankLLaMA pipeline exhibits strong zero-shot effectiveness. Model checkpoints from this study are available on HuggingFace.
http://arxiv.org/pdf/2310.08319
Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin
cs.IR
null
null
cs.IR
20231012
20231012
[ { "id": "2302.13971" }, { "id": "2201.11903" }, { "id": "2301.12652" }, { "id": "2107.03374" }, { "id": "2201.10005" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2303.18223" }, { "id": "2101.05667" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2304.04487" }, { "id": "2308.14963" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2110.01529" }, { "id": "1907.11692" }, { "id": "2305.06983" }, { "id": "2203.02155" }, { "id": "2202.08904" }, { "id": "2003.07820" } ]
2310.08118
12
Verifier LLM: Accuracy 61/100 (61%); True Positive Rate 54/55 (98.2%); False Positive Rate 38/45 (84.45%); True Negative Rate 7/45 (15.55%); False Negative Rate 1/55 (1.8%) Table 2: Breakdown of Plan Verification results on Blocksworld domain. The denominators (in aspects other than Accuracy) are ground-truth values based on VAL. # 5.3 Effect of the levels of feedback on plan generation While the use of a sound verifier appears to enhance overall performance, we sought to further investigate the impact of varied levels of feedback on plan generation performance. We assessed the system’s performance across four distinct feedback levels: 1. No Feedback: At this level, the initial plan generated by the LLM is considered to be final and no feedback is provided to the LLM. 2. Binary Feedback: This level simply indicates whether the generated plan is valid or not. 3. Inexecutable Action Feedback: If the plan is invalid and inexecutable, this feedback highlights the first inexecutable action and the unmet preconditions causing the inexecutability. If the plan is executable but fails to meet all goal conditions, the unmet goal conditions are presented. This feedback mirrors what VAL provides.
2310.08118#12
Can Large Language Models Really Improve by Self-critiquing Their Own Plans?
There have been widespread claims about Large Language Models (LLMs) being able to successfully verify or self-critique their candidate solutions in reasoning problems in an iterative mode. Intrigued by those claims, in this paper we set out to investigate the verification/self-critiquing abilities of large language models in the context of planning. We evaluate a planning system that employs LLMs for both plan generation and verification. We assess the verifier LLM's performance against ground-truth verification, the impact of self-critiquing on plan generation, and the influence of varying feedback levels on system performance. Using GPT-4, a state-of-the-art LLM, for both generation and verification, our findings reveal that self-critiquing appears to diminish plan generation performance, especially when compared to systems with external, sound verifiers and the LLM verifiers in that system produce a notable number of false positives, compromising the system's reliability. Additionally, the nature of feedback, whether binary or detailed, showed minimal impact on plan generation. Collectively, our results cast doubt on the effectiveness of LLMs in a self-critiquing, iterative framework for planning tasks.
http://arxiv.org/pdf/2310.08118
Karthik Valmeekam, Matthew Marquez, Subbarao Kambhampati
cs.AI
null
null
cs.AI
20231012
20231012
[ { "id": "2305.10601" }, { "id": "2305.13281" }, { "id": "2303.11366" }, { "id": "2212.09561" }, { "id": "2303.17651" }, { "id": "2303.17491" }, { "id": "2305.18654" }, { "id": "2305.15771" }, { "id": "2305.14325" } ]
2310.08319
12
During the inference stage, the top candidate documents retrieved by RepLLaMA are reordered. This reordering is based on the relevance score that RankLLaMA assigns to each query–document pair, with the documents arranged in descending order of relevance. # 3 Experiments We conduct experiments on MS MARCO passage ranking and document ranking datasets to investigate the effectiveness of the multi-stage text retrieval pipeline built using RepLLaMA and RankLLaMA for both passage and document retrieval. # 3.1 Passage Retrieval Dataset We train our retriever and reranker models with LLaMA on the training split of the MS MARCO passage ranking dataset (Bajaj et al., 2016), which consists of approximately 500k training examples. As discussed in Section 2.2, the incorporation of hard negatives is crucial for the effective training of the retriever. In our case, we use a blend of BM25 and CoCondenser (Gao and Callan, 2022b) hard negatives to ensure that the hard negatives are derived from both sparse and dense retrieval results, thereby enhancing the diversity of the samples. For the reranker, we select the hard negatives from the top-200 candidates generated by the retriever.
2310.08319#12
Fine-Tuning LLaMA for Multi-Stage Text Retrieval
The effectiveness of multi-stage text retrieval has been solidly demonstrated since before the era of pre-trained language models. However, most existing studies utilize models that predate recent advances in large language models (LLMs). This study seeks to explore potential improvements that state-of-the-art LLMs can bring. We conduct a comprehensive study, fine-tuning the latest LLaMA model both as a dense retriever (RepLLaMA) and as a pointwise reranker (RankLLaMA) for both passage retrieval and document retrieval using the MS MARCO datasets. Our findings demonstrate that the effectiveness of large language models indeed surpasses that of smaller models. Additionally, since LLMs can inherently handle longer contexts, they can represent entire documents holistically, obviating the need for traditional segmenting and pooling strategies. Furthermore, evaluations on BEIR demonstrate that our RepLLaMA-RankLLaMA pipeline exhibits strong zero-shot effectiveness. Model checkpoints from this study are available on HuggingFace.
http://arxiv.org/pdf/2310.08319
Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin
cs.IR
null
null
cs.IR
20231012
20231012
[ { "id": "2302.13971" }, { "id": "2201.11903" }, { "id": "2301.12652" }, { "id": "2107.03374" }, { "id": "2201.10005" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2303.18223" }, { "id": "2101.05667" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2304.04487" }, { "id": "2308.14963" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2110.01529" }, { "id": "1907.11692" }, { "id": "2305.06983" }, { "id": "2203.02155" }, { "id": "2202.08904" }, { "id": "2003.07820" } ]
2310.08118
13
4. Open Conditions Feedback: This level treats the plan as a partial-order plan [13] and presents all the actions for which there exists at least one unmet precondition, along with the corresponding unmet preconditions. Further, it also presents the unmet goal conditions. Table 3 showcases the LLM’s performance when subjected to various levels of feedback (including one with no feedback). Interestingly, the amount of feedback provided to the LLM seems to have minimal influence on its performance improvement. As long as the binary feedback is accurate and the LLM is given ample opportunities to generate a plan, the detailed feedback on invalid plans doesn’t appear to significantly enhance the LLM’s performance. We have provided examples for each feedback level in the Appendix. Levels of feedback Accuracy Avg. no of steps No feedback 40/100 (40%) 1.00 Only binary feedback 37/50 (74%) 5.38 Binary + First error feedback (by VAL) 43/50 (86%) 4.18 Binary + All errors feedback 43/50 (86%) 4.42 Table 3: Performance of LLM+VAL system on plan generation with varied levels of feedback. # 6 Conclusion and Future Work
2310.08118#13
Can Large Language Models Really Improve by Self-critiquing Their Own Plans?
There have been widespread claims about Large Language Models (LLMs) being able to successfully verify or self-critique their candidate solutions in reasoning problems in an iterative mode. Intrigued by those claims, in this paper we set out to investigate the verification/self-critiquing abilities of large language models in the context of planning. We evaluate a planning system that employs LLMs for both plan generation and verification. We assess the verifier LLM's performance against ground-truth verification, the impact of self-critiquing on plan generation, and the influence of varying feedback levels on system performance. Using GPT-4, a state-of-the-art LLM, for both generation and verification, our findings reveal that self-critiquing appears to diminish plan generation performance, especially when compared to systems with external, sound verifiers and the LLM verifiers in that system produce a notable number of false positives, compromising the system's reliability. Additionally, the nature of feedback, whether binary or detailed, showed minimal impact on plan generation. Collectively, our results cast doubt on the effectiveness of LLMs in a self-critiquing, iterative framework for planning tasks.
http://arxiv.org/pdf/2310.08118
Karthik Valmeekam, Matthew Marquez, Subbarao Kambhampati
cs.AI
null
null
cs.AI
20231012
20231012
[ { "id": "2305.10601" }, { "id": "2305.13281" }, { "id": "2303.11366" }, { "id": "2212.09561" }, { "id": "2303.17651" }, { "id": "2303.17491" }, { "id": "2305.18654" }, { "id": "2305.15771" }, { "id": "2305.14325" } ]
2310.08319
13
We evaluate the effectiveness of our models using the development split of the MS MARCO passage ranking task, comprising 6980 queries. Effectiveness is reported using MRR@10 as the metric. In addition, we also evaluate our models on the TREC DL19/DL20 passage ranking test collections (Craswell et al., 2020, 2021), which include 43 and 54 queries, respectively. These collections utilize the same passage corpus as MS MARCO, but provide query sets with dense, graded human relevance judgments. Following standard practice, we adopt nDCG@10 as the evaluation metric in our experiments. In addition, we assess the zero-shot effectiveness of RepLLaMA and RankLLaMA on BEIR (Thakur et al., 2021), which is a compilation of 18 datasets that spans a variety of domains (e.g., news, medical) and retrieval tasks (e.g., fact verification, question answering). We focus our evaluation on the 13 datasets that are publicly available.
2310.08319#13
Fine-Tuning LLaMA for Multi-Stage Text Retrieval
The effectiveness of multi-stage text retrieval has been solidly demonstrated since before the era of pre-trained language models. However, most existing studies utilize models that predate recent advances in large language models (LLMs). This study seeks to explore potential improvements that state-of-the-art LLMs can bring. We conduct a comprehensive study, fine-tuning the latest LLaMA model both as a dense retriever (RepLLaMA) and as a pointwise reranker (RankLLaMA) for both passage retrieval and document retrieval using the MS MARCO datasets. Our findings demonstrate that the effectiveness of large language models indeed surpasses that of smaller models. Additionally, since LLMs can inherently handle longer contexts, they can represent entire documents holistically, obviating the need for traditional segmenting and pooling strategies. Furthermore, evaluations on BEIR demonstrate that our RepLLaMA-RankLLaMA pipeline exhibits strong zero-shot effectiveness. Model checkpoints from this study are available on HuggingFace.
http://arxiv.org/pdf/2310.08319
Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin
cs.IR
null
null
cs.IR
20231012
20231012
[ { "id": "2302.13971" }, { "id": "2201.11903" }, { "id": "2301.12652" }, { "id": "2107.03374" }, { "id": "2201.10005" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2303.18223" }, { "id": "2101.05667" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2304.04487" }, { "id": "2308.14963" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2110.01529" }, { "id": "1907.11692" }, { "id": "2305.06983" }, { "id": "2203.02155" }, { "id": "2202.08904" }, { "id": "2003.07820" } ]
2310.08118
14
Table 3: Performance of LLM+VAL system on plan generation with varied levels of feedback. # 6 Conclusion and Future Work In this paper, we conducted a systematic investigation into the ability of Large Language Models (LLMs) to critique their own outputs, specifically within the context of classical planning problems. While recent research has been optimistic about LLMs’ potential in self-critiquing, especially in iterative settings, our findings present a different perspective. Our empirical evaluations on Blocksworld, a simple common-sense domain, highlighted the ineffectiveness of self-critiquing in LLMs in the context of planning. We showed that the verifier LLM generates a significant number of false positives, which can be detrimental to the overall system’s reliability, particularly in domains where the correctness of plans is paramount. Interestingly, the nature of feedback, whether binary or detailed, did not have a pronounced impact on plan generation performance, suggesting that the core issue lies in the LLM’s binary verification capabilities rather than the granularity of feedback. In the future, we plan to conduct more extensive experiments with respect to the number of instances, the number of domains and prompting methods (such as chain-of-thought). # References
2310.08118#14
Can Large Language Models Really Improve by Self-critiquing Their Own Plans?
There have been widespread claims about Large Language Models (LLMs) being able to successfully verify or self-critique their candidate solutions in reasoning problems in an iterative mode. Intrigued by those claims, in this paper we set out to investigate the verification/self-critiquing abilities of large language models in the context of planning. We evaluate a planning system that employs LLMs for both plan generation and verification. We assess the verifier LLM's performance against ground-truth verification, the impact of self-critiquing on plan generation, and the influence of varying feedback levels on system performance. Using GPT-4, a state-of-the-art LLM, for both generation and verification, our findings reveal that self-critiquing appears to diminish plan generation performance, especially when compared to systems with external, sound verifiers and the LLM verifiers in that system produce a notable number of false positives, compromising the system's reliability. Additionally, the nature of feedback, whether binary or detailed, showed minimal impact on plan generation. Collectively, our results cast doubt on the effectiveness of LLMs in a self-critiquing, iterative framework for planning tasks.
http://arxiv.org/pdf/2310.08118
Karthik Valmeekam, Matthew Marquez, Subbarao Kambhampati
cs.AI
null
null
cs.AI
20231012
20231012
[ { "id": "2305.10601" }, { "id": "2305.13281" }, { "id": "2303.11366" }, { "id": "2212.09561" }, { "id": "2303.17651" }, { "id": "2303.17491" }, { "id": "2305.18654" }, { "id": "2305.15771" }, { "id": "2305.14325" } ]
2310.08118
15
In the future, we plan to conduct more extensive experiments with respect to the number of instances, the number of domains and prompting methods (such as chain-of-thought). # References [1] Roi Cohen, May Hamri, Mor Geva, and Amir Globerson. Lm vs lm: Detecting factual errors via cross examination. arXiv preprint arXiv:2305.13281, 2023. [2] Yilun Du, Shuang Li, Antonio Torralba, Joshua B Tenenbaum, and Igor Mordatch. Improving factuality and reasoning in language models through multiagent debate. arXiv preprint arXiv:2305.14325, 2023. [3] Nouha Dziri, Ximing Lu, Melanie Sclar, Xiang Lorraine Li, Liwei Jian, Bill Yuchen Lin, Peter West, Chandra Bhagavatula, Ronan Le Bras, Jena D Hwang, et al. Faith and fate: Limits of transformers on compositionality. arXiv preprint arXiv:2305.18654, 2023. [4] Richard Howey, Derek Long, and Maria Fox. VAL: Automatic plan validation, continuous effects and mixed initiative planning using PDDL. In 16th IEEE International Conference on Tools with Artificial Intelligence, pages 294–301. IEEE, 2004.
2310.08118#15
Can Large Language Models Really Improve by Self-critiquing Their Own Plans?
There have been widespread claims about Large Language Models (LLMs) being able to successfully verify or self-critique their candidate solutions in reasoning problems in an iterative mode. Intrigued by those claims, in this paper we set out to investigate the verification/self-critiquing abilities of large language models in the context of planning. We evaluate a planning system that employs LLMs for both plan generation and verification. We assess the verifier LLM's performance against ground-truth verification, the impact of self-critiquing on plan generation, and the influence of varying feedback levels on system performance. Using GPT-4, a state-of-the-art LLM, for both generation and verification, our findings reveal that self-critiquing appears to diminish plan generation performance, especially when compared to systems with external, sound verifiers and the LLM verifiers in that system produce a notable number of false positives, compromising the system's reliability. Additionally, the nature of feedback, whether binary or detailed, showed minimal impact on plan generation. Collectively, our results cast doubt on the effectiveness of LLMs in a self-critiquing, iterative framework for planning tasks.
http://arxiv.org/pdf/2310.08118
Karthik Valmeekam, Matthew Marquez, Subbarao Kambhampati
cs.AI
null
null
cs.AI
20231012
20231012
[ { "id": "2305.10601" }, { "id": "2305.13281" }, { "id": "2303.11366" }, { "id": "2212.09561" }, { "id": "2303.17651" }, { "id": "2303.17491" }, { "id": "2305.18654" }, { "id": "2305.15771" }, { "id": "2305.14325" } ]
2310.08319
15
Model size Source prev. DEV DL19 DL20 top-k MRR@10 R@1k nDCG@10 nDCG@10 BM25 (Lin et al., 2021) ANCE (Xiong et al., 2021) CoCondenser (Gao and Callan, 2022b) GTR-base (Ni et al., 2022) GTR-XXL (Ni et al., 2022) OpenAI Ada2 (Neelakantan et al., 2022) bi-SimLM (Wang et al., 2023) RepLLaMA - 125M 110M 110M 4.8B ? 110M 7B Retrieval - - - - - - - - |C| |C| |C| |C| |C| |C| |C| |C| 18.4 33.0 38.2 36.6 38.8 34.4 39.1 41.2 85.3 95.9 98.4 98.3 99.0 98.6 98.6 99.4 50.6 64.5 71.7 - - 70.4 69.8 74.3 48.0 64.6 68.4 - - 67.6 69.2 72.1 Reranking monoBERT (Nogueira et al., 2019) cross-SimLM (Wang et al., 2023) RankT5
2310.08319#15
Fine-Tuning LLaMA for Multi-Stage Text Retrieval
The effectiveness of multi-stage text retrieval has been solidly demonstrated since before the era of pre-trained language models. However, most existing studies utilize models that predate recent advances in large language models (LLMs). This study seeks to explore potential improvements that state-of-the-art LLMs can bring. We conduct a comprehensive study, fine-tuning the latest LLaMA model both as a dense retriever (RepLLaMA) and as a pointwise reranker (RankLLaMA) for both passage retrieval and document retrieval using the MS MARCO datasets. Our findings demonstrate that the effectiveness of large language models indeed surpasses that of smaller models. Additionally, since LLMs can inherently handle longer contexts, they can represent entire documents holistically, obviating the need for traditional segmenting and pooling strategies. Furthermore, evaluations on BEIR demonstrate that our RepLLaMA-RankLLaMA pipeline exhibits strong zero-shot effectiveness. Model checkpoints from this study are available on HuggingFace.
http://arxiv.org/pdf/2310.08319
Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin
cs.IR
null
null
cs.IR
20231012
20231012
[ { "id": "2302.13971" }, { "id": "2201.11903" }, { "id": "2301.12652" }, { "id": "2107.03374" }, { "id": "2201.10005" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2303.18223" }, { "id": "2101.05667" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2304.04487" }, { "id": "2308.14963" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2110.01529" }, { "id": "1907.11692" }, { "id": "2305.06983" }, { "id": "2203.02155" }, { "id": "2202.08904" }, { "id": "2003.07820" } ]
2310.08118
16
[5] IPC. International planning competition, 1998. [6] Geunwoo Kim, Pierre Baldi, and Stephen McAleer. Language models can solve computer tasks. arXiv preprint arXiv:2303.17491, 2023. [7] Aman Madaan, Niket Tandon, Prakhar Gupta, Skyler Hallinan, Luyu Gao, Sarah Wiegreffe, Uri Alon, Nouha Dziri, Shrimai Prabhumoye, Yiming Yang, et al. Self-refine: Iterative refinement with self-feedback. arXiv preprint arXiv:2303.17651, 2023. [8] Drew McDermott, Malik Ghallab, Adele E. Howe, Craig A. Knoblock, Ashwin Ram, Manuela M. Veloso, Daniel S. Weld, and David E. Wilkins. Pddl-the planning domain definition language. 1998. [9] OpenAI. Gpt-4 technical report, 2023. [10] Noah Shinn, Beck Labash, and Ashwin Gopinath. Reflexion: an autonomous agent with dynamic memory and self-reflection. arXiv preprint arXiv:2303.11366, 2023.
2310.08118#16
Can Large Language Models Really Improve by Self-critiquing Their Own Plans?
There have been widespread claims about Large Language Models (LLMs) being able to successfully verify or self-critique their candidate solutions in reasoning problems in an iterative mode. Intrigued by those claims, in this paper we set out to investigate the verification/self-critiquing abilities of large language models in the context of planning. We evaluate a planning system that employs LLMs for both plan generation and verification. We assess the verifier LLM's performance against ground-truth verification, the impact of self-critiquing on plan generation, and the influence of varying feedback levels on system performance. Using GPT-4, a state-of-the-art LLM, for both generation and verification, our findings reveal that self-critiquing appears to diminish plan generation performance, especially when compared to systems with external, sound verifiers and the LLM verifiers in that system produce a notable number of false positives, compromising the system's reliability. Additionally, the nature of feedback, whether binary or detailed, showed minimal impact on plan generation. Collectively, our results cast doubt on the effectiveness of LLMs in a self-critiquing, iterative framework for planning tasks.
http://arxiv.org/pdf/2310.08118
Karthik Valmeekam, Matthew Marquez, Subbarao Kambhampati
cs.AI
null
null
cs.AI
20231012
20231012
[ { "id": "2305.10601" }, { "id": "2305.13281" }, { "id": "2303.11366" }, { "id": "2212.09561" }, { "id": "2303.17651" }, { "id": "2303.17491" }, { "id": "2305.18654" }, { "id": "2305.15771" }, { "id": "2305.14325" } ]
2310.08319
16
69.2 72.1 Reranking monoBERT (Nogueira et al., 2019) cross-SimLM (Wang et al., 2023) RankT5 (Zhuang et al., 2023) RankLLaMA RankLLaMA-13B 110M 110M bi-SimLM 220M 7B 13B BM25 GTR RepLLaMA RepLLaMA 1000 200 1000 200 200 37.2 43.7 43.4 44.9 45.2 85.3 98.7 98.3 99.4 99.4 72.3 74.6 - 75.6 76.0 72.2 72.7 - 77.4 77.9 RankVicuna (Pradeep et al., 2023) PRP (Qin et al., 2023) RankGPT3.5 (Sun et al., 2023) RankGPT4 (Sun et al., 2023) 7B 20B ? ? BM25 BM25 BM25 RankGPT3.5 100 100 100 30 - - - - - - - - 66.8 72.7 65.8 75.6 65.5 70.5 72.9 70.6
2310.08319#16
Fine-Tuning LLaMA for Multi-Stage Text Retrieval
The effectiveness of multi-stage text retrieval has been solidly demonstrated since before the era of pre-trained language models. However, most existing studies utilize models that predate recent advances in large language models (LLMs). This study seeks to explore potential improvements that state-of-the-art LLMs can bring. We conduct a comprehensive study, fine-tuning the latest LLaMA model both as a dense retriever (RepLLaMA) and as a pointwise reranker (RankLLaMA) for both passage retrieval and document retrieval using the MS MARCO datasets. Our findings demonstrate that the effectiveness of large language models indeed surpasses that of smaller models. Additionally, since LLMs can inherently handle longer contexts, they can represent entire documents holistically, obviating the need for traditional segmenting and pooling strategies. Furthermore, evaluations on BEIR demonstrate that our RepLLaMA-RankLLaMA pipeline exhibits strong zero-shot effectiveness. Model checkpoints from this study are available on HuggingFace.
http://arxiv.org/pdf/2310.08319
Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin
cs.IR
null
null
cs.IR
20231012
20231012
[ { "id": "2302.13971" }, { "id": "2201.11903" }, { "id": "2301.12652" }, { "id": "2107.03374" }, { "id": "2201.10005" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2303.18223" }, { "id": "2101.05667" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2304.04487" }, { "id": "2308.14963" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2110.01529" }, { "id": "1907.11692" }, { "id": "2305.06983" }, { "id": "2203.02155" }, { "id": "2202.08904" }, { "id": "2003.07820" } ]
2310.08118
17
dynamic memory and self-reflection. arXiv preprint arXiv:2303.11366, 2023. [11] Tom Silver, Varun Hariprasad, Reece S Shuttleworth, Nishanth Kumar, Tomás Lozano-Pérez, and Leslie Pack Kaelbling. PDDL planning with pretrained large language models. In NeurIPS 2022 Foundation Models for Decision Making Workshop, 2022. [12] Karthik Valmeekam, Matthew Marquez, Sarath Sreedharan, and Subbarao Kambhampati. On the planning abilities of large language models–a critical investigation. arXiv preprint arXiv:2305.15771, 2023. [13] Daniel S Weld. An introduction to least commitment planning. AI magazine, 15(4):27–27, 1994. [14] Yixuan Weng, Minjun Zhu, Shizhu He, Kang Liu, and Jun Zhao. Large language models are reasoners with self-verification. arXiv preprint arXiv:2212.09561, 2022.
2310.08118#17
Can Large Language Models Really Improve by Self-critiquing Their Own Plans?
There have been widespread claims about Large Language Models (LLMs) being able to successfully verify or self-critique their candidate solutions in reasoning problems in an iterative mode. Intrigued by those claims, in this paper we set out to investigate the verification/self-critiquing abilities of large language models in the context of planning. We evaluate a planning system that employs LLMs for both plan generation and verification. We assess the verifier LLM's performance against ground-truth verification, the impact of self-critiquing on plan generation, and the influence of varying feedback levels on system performance. Using GPT-4, a state-of-the-art LLM, for both generation and verification, our findings reveal that self-critiquing appears to diminish plan generation performance, especially when compared to systems with external, sound verifiers and the LLM verifiers in that system produce a notable number of false positives, compromising the system's reliability. Additionally, the nature of feedback, whether binary or detailed, showed minimal impact on plan generation. Collectively, our results cast doubt on the effectiveness of LLMs in a self-critiquing, iterative framework for planning tasks.
http://arxiv.org/pdf/2310.08118
Karthik Valmeekam, Matthew Marquez, Subbarao Kambhampati
cs.AI
null
null
cs.AI
20231012
20231012
[ { "id": "2305.10601" }, { "id": "2305.13281" }, { "id": "2303.11366" }, { "id": "2212.09561" }, { "id": "2303.17651" }, { "id": "2303.17491" }, { "id": "2305.18654" }, { "id": "2305.15771" }, { "id": "2305.14325" } ]
2310.08319
17
Table 1: The effectiveness of RepLLaMA and RankLLaMA on the MS MARCO passage corpus compared to existing methods. For the retriever, we compare against models trained with binary human judgments, without distillation from a reranker. Evaluation figures are copied from the original papers except for OpenAI Ada2, which is the successor to cpt-text (Neelakantan et al., 2022) and available as a commercial API. The effectiveness numbers of Ada2 are taken from Lin et al. (2023). both the training and inference stages, ensuring that their L2-norms are equal to 1. After pre-encoding the entire corpus, we end up with a 135G flat index for brute-force search.
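To make the retrieval step described in this chunk concrete, below is a minimal sketch (not the paper's actual code) of unit-normalizing embeddings and running a flat, brute-force inner-product search over a pre-encoded corpus. The 4096-dimensional vectors mirror the LLaMA hidden size mentioned elsewhere in the paper; the random data and top-k value are purely illustrative stand-ins for real encoder outputs.

```python
import numpy as np

def l2_normalize(x: np.ndarray) -> np.ndarray:
    # Scale each row to unit L2 norm so that the dot product equals cosine similarity.
    return x / np.linalg.norm(x, axis=-1, keepdims=True)

def brute_force_search(query_emb: np.ndarray, corpus_emb: np.ndarray, k: int = 1000):
    # Flat (exhaustive) inner-product search over the pre-encoded corpus.
    scores = corpus_emb @ query_emb          # shape: (num_passages,)
    top = np.argsort(-scores)[:k]            # indices of the k highest-scoring passages
    return top, scores[top]

# Toy usage with random vectors standing in for encoder outputs.
rng = np.random.default_rng(0)
corpus = l2_normalize(rng.normal(size=(10_000, 4096)).astype(np.float32))
query = l2_normalize(rng.normal(size=(4096,)).astype(np.float32))
top_ids, top_scores = brute_force_search(query, corpus, k=10)
```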
2310.08319#17
Fine-Tuning LLaMA for Multi-Stage Text Retrieval
The effectiveness of multi-stage text retrieval has been solidly demonstrated since before the era of pre-trained language models. However, most existing studies utilize models that predate recent advances in large language models (LLMs). This study seeks to explore potential improvements that state-of-the-art LLMs can bring. We conduct a comprehensive study, fine-tuning the latest LLaMA model both as a dense retriever (RepLLaMA) and as a pointwise reranker (RankLLaMA) for both passage retrieval and document retrieval using the MS MARCO datasets. Our findings demonstrate that the effectiveness of large language models indeed surpasses that of smaller models. Additionally, since LLMs can inherently handle longer contexts, they can represent entire documents holistically, obviating the need for traditional segmenting and pooling strategies. Furthermore, evaluations on BEIR demonstrate that our RepLLaMA-RankLLaMA pipeline exhibits strong zero-shot effectiveness. Model checkpoints from this study are available on HuggingFace.
http://arxiv.org/pdf/2310.08319
Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin
cs.IR
null
null
cs.IR
20231012
20231012
[ { "id": "2302.13971" }, { "id": "2201.11903" }, { "id": "2301.12652" }, { "id": "2107.03374" }, { "id": "2201.10005" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2303.18223" }, { "id": "2101.05667" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2304.04487" }, { "id": "2308.14963" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2110.01529" }, { "id": "1907.11692" }, { "id": "2305.06983" }, { "id": "2203.02155" }, { "id": "2202.08904" }, { "id": "2003.07820" } ]
2310.08319
18
A challenge in fine-tuning LLMs for retrieval is the high GPU memory cost associated with contrastive learning, as it requires large batch sizes for in-batch negatives. To address this, we employ recent memory efficiency solutions, including LoRA (Hu et al., 2022), flash attention (Dao, 2023), and gradient checkpointing to reduce GPU memory usage. Both the retriever and reranker are trained with a batch size of 128, with 15 hard negative passages sampled for each query. At inference time, RepLLaMA retrieves the top-1000 passages from the corpus and RankLLaMA reranks the top-200 passages retrieved by RepLLaMA. To explore whether increases in model size can further improve effectiveness, we also train a version of RankLLaMA using LLaMA-2-13B initialization.3 In-Domain Evaluation Table 1 presents the effectiveness of RepLLaMA and RankLLaMA on the MS MARCO passage corpus in comparison to existing methods. 3https://huggingface.co/meta-llama/Llama-2-13b-hf
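The contrastive objective with in-batch negatives referred to above can be sketched as the InfoNCE-style loss below. This is an illustration only, not the paper's code: the temperature value and the layout of one positive followed by hard negatives per query are assumptions made for the example.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(q_emb: torch.Tensor, p_emb: torch.Tensor, temperature: float = 0.01) -> torch.Tensor:
    """InfoNCE-style loss: every query is scored against every passage in the batch.

    q_emb: (B, D) query embeddings.
    p_emb: (B * G, D) passage embeddings, laid out so that the first passage of each
    group of G belongs to query i as its positive and the rest are its hard negatives;
    passages from other queries act as additional in-batch negatives.
    """
    scores = q_emb @ p_emb.T / temperature              # (B, B * G) similarity matrix
    group = p_emb.shape[0] // q_emb.shape[0]            # passages per query (1 positive + negatives)
    target = torch.arange(q_emb.shape[0], device=q_emb.device) * group
    return F.cross_entropy(scores, target)

# Toy check: batch of 4 queries, each with 1 positive and 15 hard negatives.
q = F.normalize(torch.randn(4, 4096), dim=-1)
p = F.normalize(torch.randn(4 * 16, 4096), dim=-1)
loss = contrastive_loss(q, p)
```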
2310.08319#18
Fine-Tuning LLaMA for Multi-Stage Text Retrieval
The effectiveness of multi-stage text retrieval has been solidly demonstrated since before the era of pre-trained language models. However, most existing studies utilize models that predate recent advances in large language models (LLMs). This study seeks to explore potential improvements that state-of-the-art LLMs can bring. We conduct a comprehensive study, fine-tuning the latest LLaMA model both as a dense retriever (RepLLaMA) and as a pointwise reranker (RankLLaMA) for both passage retrieval and document retrieval using the MS MARCO datasets. Our findings demonstrate that the effectiveness of large language models indeed surpasses that of smaller models. Additionally, since LLMs can inherently handle longer contexts, they can represent entire documents holistically, obviating the need for traditional segmenting and pooling strategies. Furthermore, evaluations on BEIR demonstrate that our RepLLaMA-RankLLaMA pipeline exhibits strong zero-shot effectiveness. Model checkpoints from this study are available on HuggingFace.
http://arxiv.org/pdf/2310.08319
Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin
cs.IR
null
null
cs.IR
20231012
20231012
[ { "id": "2302.13971" }, { "id": "2201.11903" }, { "id": "2301.12652" }, { "id": "2107.03374" }, { "id": "2201.10005" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2303.18223" }, { "id": "2101.05667" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2304.04487" }, { "id": "2308.14963" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2110.01529" }, { "id": "1907.11692" }, { "id": "2305.06983" }, { "id": "2203.02155" }, { "id": "2202.08904" }, { "id": "2003.07820" } ]
2310.08319
19
3https://huggingface.co/meta-llama/Llama-2-13b-hf For retrieval, RepLLaMA outperforms all competing methods, achieving the highest effectiveness. The closest system in terms of effectiveness is bi-SimLM (Wang et al., 2023), which RepLLaMA outperforms by 2 points MRR@10 on the dev queries. However, bi-SimLM involves a pre-training stage for enhancing the text representation. In contrast, RepLLaMA directly uses the off-the-shelf LLaMA model as initialization. When compared to the GTR-XXL retriever, which also uses a model with billions of parameters based on the T5-encoder (Ni et al., 2022), our model achieves higher MRR@10 and Recall@1k on the dev queries and on TREC DL19/DL20. Specifically, RepLLaMA achieves 2.4 points higher MRR@10 and 0.4 points higher Recall@1k than GTR-XXL.
2310.08319#19
Fine-Tuning LLaMA for Multi-Stage Text Retrieval
The effectiveness of multi-stage text retrieval has been solidly demonstrated since before the era of pre-trained language models. However, most existing studies utilize models that predate recent advances in large language models (LLMs). This study seeks to explore potential improvements that state-of-the-art LLMs can bring. We conduct a comprehensive study, fine-tuning the latest LLaMA model both as a dense retriever (RepLLaMA) and as a pointwise reranker (RankLLaMA) for both passage retrieval and document retrieval using the MS MARCO datasets. Our findings demonstrate that the effectiveness of large language models indeed surpasses that of smaller models. Additionally, since LLMs can inherently handle longer contexts, they can represent entire documents holistically, obviating the need for traditional segmenting and pooling strategies. Furthermore, evaluations on BEIR demonstrate that our RepLLaMA-RankLLaMA pipeline exhibits strong zero-shot effectiveness. Model checkpoints from this study are available on HuggingFace.
http://arxiv.org/pdf/2310.08319
Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin
cs.IR
null
null
cs.IR
20231012
20231012
[ { "id": "2302.13971" }, { "id": "2201.11903" }, { "id": "2301.12652" }, { "id": "2107.03374" }, { "id": "2201.10005" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2303.18223" }, { "id": "2101.05667" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2304.04487" }, { "id": "2308.14963" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2110.01529" }, { "id": "1907.11692" }, { "id": "2305.06983" }, { "id": "2203.02155" }, { "id": "2202.08904" }, { "id": "2003.07820" } ]
2310.08319
21
For reranking, RankLLaMA reranks the top-200 passages from RepLLaMA, resulting in the high- est end-to-end effectiveness of any multi-stage reBM25 GTR-XXL cpt-text-XL Ada2 SGPT RepLLaMA RankT5 RankLLaMA RankLLaMA model size add. pretrain - - 4.8B Y 175B Y ? ? 5.8B Y 7B N 220M - 7B - 13B - Arguana Climate-FEVER DBPedia FEVER FiQA HotpotQA NFCorpus NQ Quora SCIDOCS SciFact TREC-COVID Touche-2020 39.7 16.5 31.8 65.1 23.6 63.3 32.2 30.6 78.9 14.9 67.9 59.5 44.2 54.0 26.7 40.8 74.0 46.7 59.9 34.2 56.8 89.2 16.1 66.2 50.1 25.6 43.5 22.3 43.2 77.5 51.2 68.8 40.7 - 63.8 - 75.4 64.9 29.1 56.7 23.7 40.2 77.3 41.1 65.4 35.8 48.2 87.6 18.6 73.6
2310.08319#21
Fine-Tuning LLaMA for Multi-Stage Text Retrieval
The effectiveness of multi-stage text retrieval has been solidly demonstrated since before the era of pre-trained language models. However, most existing studies utilize models that predate recent advances in large language models (LLMs). This study seeks to explore potential improvements that state-of-the-art LLMs can bring. We conduct a comprehensive study, fine-tuning the latest LLaMA model both as a dense retriever (RepLLaMA) and as a pointwise reranker (RankLLaMA) for both passage retrieval and document retrieval using the MS MARCO datasets. Our findings demonstrate that the effectiveness of large language models indeed surpasses that of smaller models. Additionally, since LLMs can inherently handle longer contexts, they can represent entire documents holistically, obviating the need for traditional segmenting and pooling strategies. Furthermore, evaluations on BEIR demonstrate that our RepLLaMA-RankLLaMA pipeline exhibits strong zero-shot effectiveness. Model checkpoints from this study are available on HuggingFace.
http://arxiv.org/pdf/2310.08319
Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin
cs.IR
null
null
cs.IR
20231012
20231012
[ { "id": "2302.13971" }, { "id": "2201.11903" }, { "id": "2301.12652" }, { "id": "2107.03374" }, { "id": "2201.10005" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2303.18223" }, { "id": "2101.05667" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2304.04487" }, { "id": "2308.14963" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2110.01529" }, { "id": "1907.11692" }, { "id": "2305.06983" }, { "id": "2203.02155" }, { "id": "2202.08904" }, { "id": "2003.07820" } ]
2310.08319
22
64.9 29.1 56.7 23.7 40.2 77.3 41.1 65.4 35.8 48.2 87.6 18.6 73.6 81.3 28.0 51.4 30.5 39.9 78.3 37.2 59.3 36.2 52.4 84.6 19.7 74.7 87.3 25.4 48.6 31.0 43.7 83.4 45.8 68.5 37.8 62.4 86.8 18.1 75.6 84.7 30.5 33.0 21.5 44.2 83.2 44.5 71.0 38.1 61.4 83.1 18.1 75.0 80.7 44.0 56.0 28.0 48.3 83.9 46.5 75.3 30.3 66.3 85.0 17.8 73.2 85.2 40.1 50.8 29.2 48.7 86.2 48.1 76.4 28.4 66.7 81.7 19.1 73.0 86.1 40.6 Average 43.7 49.3 - 52.1 52.1 55.1 53.7 56.6 56.5
2310.08319#22
Fine-Tuning LLaMA for Multi-Stage Text Retrieval
The effectiveness of multi-stage text retrieval has been solidly demonstrated since before the era of pre-trained language models. However, most existing studies utilize models that predate recent advances in large language models (LLMs). This study seeks to explore potential improvements that state-of-the-art LLMs can bring. We conduct a comprehensive study, fine-tuning the latest LLaMA model both as a dense retriever (RepLLaMA) and as a pointwise reranker (RankLLaMA) for both passage retrieval and document retrieval using the MS MARCO datasets. Our findings demonstrate that the effectiveness of large language models indeed surpasses that of smaller models. Additionally, since LLMs can inherently handle longer contexts, they can represent entire documents holistically, obviating the need for traditional segmenting and pooling strategies. Furthermore, evaluations on BEIR demonstrate that our RepLLaMA-RankLLaMA pipeline exhibits strong zero-shot effectiveness. Model checkpoints from this study are available on HuggingFace.
http://arxiv.org/pdf/2310.08319
Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin
cs.IR
null
null
cs.IR
20231012
20231012
[ { "id": "2302.13971" }, { "id": "2201.11903" }, { "id": "2301.12652" }, { "id": "2107.03374" }, { "id": "2201.10005" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2303.18223" }, { "id": "2101.05667" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2304.04487" }, { "id": "2308.14963" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2110.01529" }, { "id": "1907.11692" }, { "id": "2305.06983" }, { "id": "2203.02155" }, { "id": "2202.08904" }, { "id": "2003.07820" } ]
2310.08319
23
Table 2: Zero-shot effectiveness of RepLLaMA and RankLLaMA on BEIR datasets. The “add. pretrain” row indicates whether the retriever model has undergone additional contrastive pre-training before supervised fine-tuning. The zero-shot effectiveness numbers of Ada2 are taken from Kamalloo et al. (2023). retrieval system that we are aware of. Our complete RepLLaMA–RankLLaMA pipeline beats the previous state-of-the-art reranker, RankT5 (Zhuang et al., 2023), by 1.5 points MRR@10. Furthermore, our RankLLaMA-13B model outperforms the 7B model, achieving 0.3 points higher MRR@10 and slightly higher nDCG@10 on both DL19 and DL20. This indicates the potential for further improvements with even larger models. In contrast, RepLLaMA uses the base pre-trained model as initialization, achieving the highest zero-shot effectiveness we are aware of while maintaining simplicity. RankLLaMA-7B further enhances the retriever’s effectiveness by an average of 1.5 points on nDCG@10. Interestingly, the larger RankLLaMA-13B model does not appear to yield any further improvements.
2310.08319#23
Fine-Tuning LLaMA for Multi-Stage Text Retrieval
The effectiveness of multi-stage text retrieval has been solidly demonstrated since before the era of pre-trained language models. However, most existing studies utilize models that predate recent advances in large language models (LLMs). This study seeks to explore potential improvements that state-of-the-art LLMs can bring. We conduct a comprehensive study, fine-tuning the latest LLaMA model both as a dense retriever (RepLLaMA) and as a pointwise reranker (RankLLaMA) for both passage retrieval and document retrieval using the MS MARCO datasets. Our findings demonstrate that the effectiveness of large language models indeed surpasses that of smaller models. Additionally, since LLMs can inherently handle longer contexts, they can represent entire documents holistically, obviating the need for traditional segmenting and pooling strategies. Furthermore, evaluations on BEIR demonstrate that our RepLLaMA-RankLLaMA pipeline exhibits strong zero-shot effectiveness. Model checkpoints from this study are available on HuggingFace.
http://arxiv.org/pdf/2310.08319
Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin
cs.IR
null
null
cs.IR
20231012
20231012
[ { "id": "2302.13971" }, { "id": "2201.11903" }, { "id": "2301.12652" }, { "id": "2107.03374" }, { "id": "2201.10005" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2303.18223" }, { "id": "2101.05667" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2304.04487" }, { "id": "2308.14963" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2110.01529" }, { "id": "1907.11692" }, { "id": "2305.06983" }, { "id": "2203.02155" }, { "id": "2202.08904" }, { "id": "2003.07820" } ]
2310.08319
24
Compared to RankGPT4 (Sun et al., 2023), which prompts GPT-4 to perform passage reranking through permutation generation within a multi-stage retrieval pipeline, our RepLLaMA–RankLLaMA pipeline outperforms it by 0.4 and 7.3 nDCG@10 points on DL19 and DL20, respectively. As a pointwise reranker, RankLLaMA can rerank candidate passages in parallel, which means that inference can be accelerated to reduce latency as compared to RankGPT, which depends on a sequential sliding-window strategy to rerank. Zero-Shot Evaluation The zero-shot evaluation of RepLLaMA and RankLLaMA on the BEIR datasets is presented in Table 2. Both models demonstrate superior zero-shot effectiveness, outperforming existing models. RepLLaMA surpasses other existing dense retrievers with billions of parameters. Specifically, it outperforms SGPT (Muennighoff, 2022) and Ada2 by 3 points and exceeds GTR-XXL by approximately 6 points. Note that these methods require an unsupervised contrastive pre-training stage before the supervised fine-tuning. # 3.2 Document Retrieval
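Because a pointwise reranker scores each query–passage pair independently, reranking reduces to scoring all pairs (which can be batched or parallelized) and sorting by score, in contrast to a sequential sliding-window scheme. Below is a minimal sketch of this idea; the stand-in scorer is purely hypothetical and not the RankLLaMA model.

```python
import random
from typing import Callable, List, Tuple

def pointwise_rerank(
    query: str,
    candidates: List[str],
    score_fn: Callable[[str, str], float],
) -> List[Tuple[str, float]]:
    # Each (query, passage) pair is scored independently, so all pairs can be
    # batched or processed in parallel; the final order is simply a sort by score.
    scored = [(passage, score_fn(query, passage)) for passage in candidates]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# Hypothetical stand-in scorer: term overlap plus a tiny random tie-break.
def toy_score(query: str, passage: str) -> float:
    overlap = len(set(query.lower().split()) & set(passage.lower().split()))
    return overlap + random.random() * 1e-3

reranked = pointwise_rerank(
    "what is dense retrieval",
    ["dense retrieval uses embeddings", "sparse methods use terms"],
    toy_score,
)
```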
2310.08319#24
Fine-Tuning LLaMA for Multi-Stage Text Retrieval
The effectiveness of multi-stage text retrieval has been solidly demonstrated since before the era of pre-trained language models. However, most existing studies utilize models that predate recent advances in large language models (LLMs). This study seeks to explore potential improvements that state-of-the-art LLMs can bring. We conduct a comprehensive study, fine-tuning the latest LLaMA model both as a dense retriever (RepLLaMA) and as a pointwise reranker (RankLLaMA) for both passage retrieval and document retrieval using the MS MARCO datasets. Our findings demonstrate that the effectiveness of large language models indeed surpasses that of smaller models. Additionally, since LLMs can inherently handle longer contexts, they can represent entire documents holistically, obviating the need for traditional segmenting and pooling strategies. Furthermore, evaluations on BEIR demonstrate that our RepLLaMA-RankLLaMA pipeline exhibits strong zero-shot effectiveness. Model checkpoints from this study are available on HuggingFace.
http://arxiv.org/pdf/2310.08319
Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin
cs.IR
null
null
cs.IR
20231012
20231012
[ { "id": "2302.13971" }, { "id": "2201.11903" }, { "id": "2301.12652" }, { "id": "2107.03374" }, { "id": "2201.10005" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2303.18223" }, { "id": "2101.05667" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2304.04487" }, { "id": "2308.14963" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2110.01529" }, { "id": "1907.11692" }, { "id": "2305.06983" }, { "id": "2203.02155" }, { "id": "2202.08904" }, { "id": "2003.07820" } ]
2310.08319
25
# 3.2 Document Retrieval Dataset The document retrieval task aims to rank document-length texts, which present the challenge of handling long input sequences (Bajaj et al., 2016). As illustrated in Figure 1, the MS MARCO document ranking corpus has an average document length of around 1500 tokens. Notably, only 24% of the documents have fewer than 512 tokens, which is the maximum input length for most previous rerankers based on smaller pre-trained language models like BERT (Devlin et al., 2019). The standard solution to manage long sequences for retrieval is the MaxP strategy (Dai and Callan, 2019), which involves dividing the document into overlapping segments and determining the document relevance score based on the segment with the highest score. However, this process involves a heuristic pooling strategy and runs the risk of losing information spread across long contexts. Recent language models pre-trained on longer sequences (e.g., 4096 tokens for LLaMA-2) offer the potential to represent document-length texts “in one go”, reducing the need for segmentation. (Figure 1: cumulative distribution of document sequence lengths in the MS MARCO document corpus.)
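For contrast with the holistic, whole-document approach, here is a minimal sketch of the MaxP strategy described above, assuming a generic segment-level scorer; the window and stride sizes and the toy scorer are illustrative assumptions, not values from the paper.

```python
from typing import Callable, List

def split_into_segments(tokens: List[str], window: int = 512, stride: int = 256) -> List[List[str]]:
    # Overlapping fixed-size windows over the document's tokens.
    segments, start = [], 0
    while True:
        segments.append(tokens[start:start + window])
        if start + window >= len(tokens):
            break
        start += stride
    return segments

def maxp_score(query: str, doc_tokens: List[str],
               seg_score: Callable[[str, List[str]], float]) -> float:
    # MaxP: the document's relevance is the maximum of its segments' scores.
    return max(seg_score(query, seg) for seg in split_into_segments(doc_tokens))

# Hypothetical segment scorer: term overlap between the query and the segment.
def toy_seg_score(query: str, segment: List[str]) -> float:
    return float(len(set(query.lower().split()) & {t.lower() for t in segment}))

doc = ("dense retrieval encodes passages into vectors " * 200).split()
print(maxp_score("dense retrieval", doc, toy_seg_score))
```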
2310.08319#25
Fine-Tuning LLaMA for Multi-Stage Text Retrieval
The effectiveness of multi-stage text retrieval has been solidly demonstrated since before the era of pre-trained language models. However, most existing studies utilize models that predate recent advances in large language models (LLMs). This study seeks to explore potential improvements that state-of-the-art LLMs can bring. We conduct a comprehensive study, fine-tuning the latest LLaMA model both as a dense retriever (RepLLaMA) and as a pointwise reranker (RankLLaMA) for both passage retrieval and document retrieval using the MS MARCO datasets. Our findings demonstrate that the effectiveness of large language models indeed surpasses that of smaller models. Additionally, since LLMs can inherently handle longer contexts, they can represent entire documents holistically, obviating the need for traditional segmenting and pooling strategies. Furthermore, evaluations on BEIR demonstrate that our RepLLaMA-RankLLaMA pipeline exhibits strong zero-shot effectiveness. Model checkpoints from this study are available on HuggingFace.
http://arxiv.org/pdf/2310.08319
Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin
cs.IR
null
null
cs.IR
20231012
20231012
[ { "id": "2302.13971" }, { "id": "2201.11903" }, { "id": "2301.12652" }, { "id": "2107.03374" }, { "id": "2201.10005" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2303.18223" }, { "id": "2101.05667" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2304.04487" }, { "id": "2308.14963" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2110.01529" }, { "id": "1907.11692" }, { "id": "2305.06983" }, { "id": "2203.02155" }, { "id": "2202.08904" }, { "id": "2003.07820" } ]
2310.08319
27
Model                              Size   Source (prev.)  top-k   Seg.  Dev MRR@100  Dev R@1k  DL19 nDCG@10  DL20 nDCG@10
Retrieval
BM25 (Lin et al., 2021)            -      -               |C|     N     23.0         85.3      51.8          52.9
BM25-Q2D (Pradeep et al., 2021)    -      -               |C|     Y     31.8         94.9      61.2          59.6
CoCondenser-MaxP                   110M   -               |C|     Y     42.5         93.9      64.8          64.0
RepLLaMA                           7B     -               |C|     N     45.6         98.9      65.0          63.2
Reranking
monoT5 (Pradeep et al., 2021)      3B     BM25-Q2D        10000   Y     41.1         94.9      -             -
MORES+ (Gao and Callan, 2022a)     110M   CoCondenser     100     Y     49.3         -         -             -
RankLLaMA                          7B     RepLLaMA        100     N     50.3         98.9      67.7          67.4
Table 3: The effectiveness of RepLLaMA and RankLLaMA on the MS MARCO document corpus compared to existing methods.
2310.08319#27
Fine-Tuning LLaMA for Multi-Stage Text Retrieval
The effectiveness of multi-stage text retrieval has been solidly demonstrated since before the era of pre-trained language models. However, most existing studies utilize models that predate recent advances in large language models (LLMs). This study seeks to explore potential improvements that state-of-the-art LLMs can bring. We conduct a comprehensive study, fine-tuning the latest LLaMA model both as a dense retriever (RepLLaMA) and as a pointwise reranker (RankLLaMA) for both passage retrieval and document retrieval using the MS MARCO datasets. Our findings demonstrate that the effectiveness of large language models indeed surpasses that of smaller models. Additionally, since LLMs can inherently handle longer contexts, they can represent entire documents holistically, obviating the need for traditional segmenting and pooling strategies. Furthermore, evaluations on BEIR demonstrate that our RepLLaMA-RankLLaMA pipeline exhibits strong zero-shot effectiveness. Model checkpoints from this study are available on HuggingFace.
http://arxiv.org/pdf/2310.08319
Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin
cs.IR
null
null
cs.IR
20231012
20231012
[ { "id": "2302.13971" }, { "id": "2201.11903" }, { "id": "2301.12652" }, { "id": "2107.03374" }, { "id": "2201.10005" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2303.18223" }, { "id": "2101.05667" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2304.04487" }, { "id": "2308.14963" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2110.01529" }, { "id": "1907.11692" }, { "id": "2305.06983" }, { "id": "2203.02155" }, { "id": "2202.08904" }, { "id": "2003.07820" } ]
2310.08319
28
Table 3: The effectiveness of RepLLaMA and RankLLaMA on the MS MARCO document corpus compared to existing methods. By default we allow the retriever and reranker to take the first 2048 tokens as input without any segmentation, which is a reasonable trade-off between input sequence length and the cost of training. This approach covers about 77% of the documents in the corpus entirely. We create the training data for the document retriever and reranker models based on the 300k training examples in the training set. Similar to the approach in passage ranking, we sample the hard negative documents to train RepLLaMA from the top-100 hard negatives from BM25 and our implementation of CoCondenser-MaxP. Here, BM25 directly indexes the entire documents, while CoCondenser retrieves documents using the aforementioned MaxP strategy. The hard negatives for RankLLaMA are selected from the top-100 results of RepLLaMA. Evaluation of document retrieval is performed on the development split of the MS MARCO document ranking dataset, which contains 5193 queries. Additionally, we evaluate our models on the TREC DL19/DL20 document ranking tasks, comprising 43 and 45 queries, respectively.
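The hard-negative construction described above can be sketched as follows. The data structures (first-stage run lists and relevance judgments keyed by query id) and the exclusion of judged-relevant documents are illustrative assumptions reflecting common practice, not the paper's exact pipeline.

```python
import random
from typing import Dict, List, Set

def sample_hard_negatives(
    first_stage_runs: Dict[str, List[str]],   # query id -> top-100 doc ids from BM25 / dense retriever
    positives: Dict[str, Set[str]],           # query id -> relevant doc ids from the qrels
    num_negatives: int = 7,
    seed: int = 42,
) -> Dict[str, List[str]]:
    # Hard negatives are highly ranked first-stage results that are NOT judged relevant.
    rng = random.Random(seed)
    negatives = {}
    for qid, ranked in first_stage_runs.items():
        pool = [docid for docid in ranked if docid not in positives.get(qid, set())]
        negatives[qid] = rng.sample(pool, min(num_negatives, len(pool)))
    return negatives
```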
2310.08319#28
Fine-Tuning LLaMA for Multi-Stage Text Retrieval
The effectiveness of multi-stage text retrieval has been solidly demonstrated since before the era of pre-trained language models. However, most existing studies utilize models that predate recent advances in large language models (LLMs). This study seeks to explore potential improvements that state-of-the-art LLMs can bring. We conduct a comprehensive study, fine-tuning the latest LLaMA model both as a dense retriever (RepLLaMA) and as a pointwise reranker (RankLLaMA) for both passage retrieval and document retrieval using the MS MARCO datasets. Our findings demonstrate that the effectiveness of large language models indeed surpasses that of smaller models. Additionally, since LLMs can inherently handle longer contexts, they can represent entire documents holistically, obviating the need for traditional segmenting and pooling strategies. Furthermore, evaluations on BEIR demonstrate that our RepLLaMA-RankLLaMA pipeline exhibits strong zero-shot effectiveness. Model checkpoints from this study are available on HuggingFace.
http://arxiv.org/pdf/2310.08319
Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin
cs.IR
null
null
cs.IR
20231012
20231012
[ { "id": "2302.13971" }, { "id": "2201.11903" }, { "id": "2301.12652" }, { "id": "2107.03374" }, { "id": "2201.10005" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2303.18223" }, { "id": "2101.05667" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2304.04487" }, { "id": "2308.14963" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2110.01529" }, { "id": "1907.11692" }, { "id": "2305.06983" }, { "id": "2203.02155" }, { "id": "2202.08904" }, { "id": "2003.07820" } ]
2310.08319
29
document RepLLaMA and RankLLaMA, with the same computing resources. However, there are two key differences: First, the models are trained with a batch size of 128, with each query sampling 7 hard negative passages. Second, during inference, RepLLaMA retrieves the top-1000 documents while RankLLaMA reranks the top-100 documents that are retrieved by RepLLaMA. The document model also generates text embeddings with 4096 dimensions. For the MS MARCO document corpus, this results in a 49G (flat) index after pre-encoding the entire corpus. Results Table 3 reports the effectiveness of our RepLLaMA–RankLLaMA pipeline for full-document retrieval on the MS MARCO document corpus. We see that both our retriever and reranker outperform existing methods. RepLLaMA achieves an MRR@100 score that is approximately 3 points higher than CoCondenser-MaxP, while RankLLaMA exceeds (to our knowledge) the current state-of-the-art document reranker, MORES+ (Gao and Callan, 2022a), by 1 point in MRR@100. Implementation Details We follow a similar setup as in the passage ranking task to train both models. We again emphasize that both our retriever and reranker do not require document segmentation
2310.08319#29
Fine-Tuning LLaMA for Multi-Stage Text Retrieval
The effectiveness of multi-stage text retrieval has been solidly demonstrated since before the era of pre-trained language models. However, most existing studies utilize models that predate recent advances in large language models (LLMs). This study seeks to explore potential improvements that state-of-the-art LLMs can bring. We conduct a comprehensive study, fine-tuning the latest LLaMA model both as a dense retriever (RepLLaMA) and as a pointwise reranker (RankLLaMA) for both passage retrieval and document retrieval using the MS MARCO datasets. Our findings demonstrate that the effectiveness of large language models indeed surpasses that of smaller models. Additionally, since LLMs can inherently handle longer contexts, they can represent entire documents holistically, obviating the need for traditional segmenting and pooling strategies. Furthermore, evaluations on BEIR demonstrate that our RepLLaMA-RankLLaMA pipeline exhibits strong zero-shot effectiveness. Model checkpoints from this study are available on HuggingFace.
http://arxiv.org/pdf/2310.08319
Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin
cs.IR
null
null
cs.IR
20231012
20231012
[ { "id": "2302.13971" }, { "id": "2201.11903" }, { "id": "2301.12652" }, { "id": "2107.03374" }, { "id": "2201.10005" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2303.18223" }, { "id": "2101.05667" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2304.04487" }, { "id": "2308.14963" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2110.01529" }, { "id": "1907.11692" }, { "id": "2305.06983" }, { "id": "2203.02155" }, { "id": "2202.08904" }, { "id": "2003.07820" } ]
2310.08319
30
Implementation Details We follow a similar setup as in the passage ranking task to train both models. We again emphasize that both our retriever and reranker do not require document segmentation and rank score aggregation. Instead, RepLLaMA directly consumes the entire document, and RankLLaMA directly scores the relevance of the entire query–document pair.
        Train  Dev   DL19  DL20
FT      46.6   41.6  72.8  69.9
LoRA    40.8   41.2  74.3  72.1
Table 4: Comparison of MRR@10 between full fine-tuning (FT) and LoRA when training RepLLaMA for the passage retrieval task.
# 4 Ablation Study and Analysis # 4.1 Full Fine-Tuning vs. LoRA When fine-tuning large language models, a key decision is whether to conduct full fine-tuning, which updates all parameters in the model, or to use a parameter-efficient method such as LoRA. Table 4 compares the effectiveness of RepLLaMA when trained with full fine-tuning and LoRA for the passage retrieval task. Both models are trained on the training set for one epoch.
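A minimal sketch of attaching LoRA adapters to a LLaMA backbone with the Hugging Face peft library is shown below. The checkpoint name, rank, alpha, dropout, and target modules are illustrative assumptions, not the paper's exact configuration; gradient checkpointing is enabled as one of the memory-saving measures mentioned earlier.

```python
import torch
from transformers import AutoModel
from peft import LoraConfig, get_peft_model

# Load the backbone; checkpoint name and dtype are illustrative choices.
backbone = AutoModel.from_pretrained(
    "meta-llama/Llama-2-7b-hf", torch_dtype=torch.bfloat16
)
backbone.gradient_checkpointing_enable()   # trade extra compute for lower memory

lora_config = LoraConfig(
    r=16,                                   # adapter rank (assumed value)
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "v_proj"],    # attention projections to adapt (assumed)
    bias="none",
)
model = get_peft_model(backbone, lora_config)
model.print_trainable_parameters()          # only a small fraction of weights are trainable
```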
2310.08319#30
Fine-Tuning LLaMA for Multi-Stage Text Retrieval
The effectiveness of multi-stage text retrieval has been solidly demonstrated since before the era of pre-trained language models. However, most existing studies utilize models that predate recent advances in large language models (LLMs). This study seeks to explore potential improvements that state-of-the-art LLMs can bring. We conduct a comprehensive study, fine-tuning the latest LLaMA model both as a dense retriever (RepLLaMA) and as a pointwise reranker (RankLLaMA) for both passage retrieval and document retrieval using the MS MARCO datasets. Our findings demonstrate that the effectiveness of large language models indeed surpasses that of smaller models. Additionally, since LLMs can inherently handle longer contexts, they can represent entire documents holistically, obviating the need for traditional segmenting and pooling strategies. Furthermore, evaluations on BEIR demonstrate that our RepLLaMA-RankLLaMA pipeline exhibits strong zero-shot effectiveness. Model checkpoints from this study are available on HuggingFace.
http://arxiv.org/pdf/2310.08319
Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin
cs.IR
null
null
cs.IR
20231012
20231012
[ { "id": "2302.13971" }, { "id": "2201.11903" }, { "id": "2301.12652" }, { "id": "2107.03374" }, { "id": "2201.10005" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2303.18223" }, { "id": "2101.05667" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2304.04487" }, { "id": "2308.14963" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2110.01529" }, { "id": "1907.11692" }, { "id": "2305.06983" }, { "id": "2203.02155" }, { "id": "2202.08904" }, { "id": "2003.07820" } ]
2310.08319
31
full fine-tuning achieves an MRR@10 score that is approximately 6 points higher than with LoRA on the training set. However, on the development set, full fine-tuning only improves effectiveness by 0.4 points compared to LoRA. Interestingly, on the TREC DL19/DL20 datasets, which are derived from independent human judgments, LoRA demonstrates better effectiveness. This suggests that full fine-tuning may be prone to overfitting on the training set distribution, while LoRA, with significantly fewer parameters, can generalize better. For this reason, all the models presented in our main experiments (Section 3) use LoRA instead of full fine-tuning. # Input Sequence Length As discussed in Section 3.2, RankLLaMA has the advantage of accommodating longer inputs compared to previous models like BERT since its LLaMA backbone was pre-trained with a longer context window. We investigate the effects of varying the maximum training input length and inference input length on model effectiveness for the document reranking task. Results presented in Figure 2 show a clear trend: the effectiveness of RankLLaMA improves as the maximum training length increases from 512 to 2048, with the MRR@100 score improving from 48.5 to 50.3. When the
2310.08319#31
Fine-Tuning LLaMA for Multi-Stage Text Retrieval
The effectiveness of multi-stage text retrieval has been solidly demonstrated since before the era of pre-trained language models. However, most existing studies utilize models that predate recent advances in large language models (LLMs). This study seeks to explore potential improvements that state-of-the-art LLMs can bring. We conduct a comprehensive study, fine-tuning the latest LLaMA model both as a dense retriever (RepLLaMA) and as a pointwise reranker (RankLLaMA) for both passage retrieval and document retrieval using the MS MARCO datasets. Our findings demonstrate that the effectiveness of large language models indeed surpasses that of smaller models. Additionally, since LLMs can inherently handle longer contexts, they can represent entire documents holistically, obviating the need for traditional segmenting and pooling strategies. Furthermore, evaluations on BEIR demonstrate that our RepLLaMA-RankLLaMA pipeline exhibits strong zero-shot effectiveness. Model checkpoints from this study are available on HuggingFace.
http://arxiv.org/pdf/2310.08319
Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin
cs.IR
null
null
cs.IR
20231012
20231012
[ { "id": "2302.13971" }, { "id": "2201.11903" }, { "id": "2301.12652" }, { "id": "2107.03374" }, { "id": "2201.10005" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2303.18223" }, { "id": "2101.05667" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2304.04487" }, { "id": "2308.14963" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2110.01529" }, { "id": "1907.11692" }, { "id": "2305.06983" }, { "id": "2203.02155" }, { "id": "2202.08904" }, { "id": "2003.07820" } ]
2310.08319
32
Figure 2: Comparison of document ranking MRR@100 scores for RankLLaMA trained with different maximum input lengths and evaluated using different maximum input lengths. Each line represents a model trained with a specific maximum length, while points along the line indicate the effectiveness when varying the input length during inference (reranking). reranking input length is further increased to 4096, the MRR@100 score rises to 50.6. This demonstrates the model’s ability to exploit longer sequences for improved effectiveness. However, it is important to note that the gains plateau beyond a certain length, suggesting a point of diminishing returns. The MRR@100 for the model trained with a length of 4096 is only 0.3 points higher than the model trained with a length of 2048, when evaluated on input lengths that match their training lengths. Moreover, the model trained with a length of 4096 takes about 8 days to train using 16 × V100 GPUs, while the model with a length of 2048 takes about 4 days. The same relative latency costs apply to inference as well. Therefore, while RankLLaMA can handle much longer input documents, it is crucial to balance this capability with the practical considerations of computational efficiency. # 5 Related Work # 5.1 Large Language Models
2310.08319#32
Fine-Tuning LLaMA for Multi-Stage Text Retrieval
The effectiveness of multi-stage text retrieval has been solidly demonstrated since before the era of pre-trained language models. However, most existing studies utilize models that predate recent advances in large language models (LLMs). This study seeks to explore potential improvements that state-of-the-art LLMs can bring. We conduct a comprehensive study, fine-tuning the latest LLaMA model both as a dense retriever (RepLLaMA) and as a pointwise reranker (RankLLaMA) for both passage retrieval and document retrieval using the MS MARCO datasets. Our findings demonstrate that the effectiveness of large language models indeed surpasses that of smaller models. Additionally, since LLMs can inherently handle longer contexts, they can represent entire documents holistically, obviating the need for traditional segmenting and pooling strategies. Furthermore, evaluations on BEIR demonstrate that our RepLLaMA-RankLLaMA pipeline exhibits strong zero-shot effectiveness. Model checkpoints from this study are available on HuggingFace.
http://arxiv.org/pdf/2310.08319
Xueguang Ma, Liang Wang, Nan Yang, Furu Wei, Jimmy Lin
cs.IR
null
null
cs.IR
20231012
20231012
[ { "id": "2302.13971" }, { "id": "2201.11903" }, { "id": "2301.12652" }, { "id": "2107.03374" }, { "id": "2201.10005" }, { "id": "2307.08691" }, { "id": "2307.09288" }, { "id": "2303.08774" }, { "id": "2304.09542" }, { "id": "1611.09268" }, { "id": "2303.18223" }, { "id": "2101.05667" }, { "id": "2102.07662" }, { "id": "2305.02156" }, { "id": "2309.15088" }, { "id": "2306.17563" }, { "id": "2304.04487" }, { "id": "2308.14963" }, { "id": "1910.14424" }, { "id": "1901.04085" }, { "id": "2110.01529" }, { "id": "1907.11692" }, { "id": "2305.06983" }, { "id": "2203.02155" }, { "id": "2202.08904" }, { "id": "2003.07820" } ]