doi (string, length 10) | chunk-id (int64, 0–936) | chunk (string, 401–2.02k chars) | id (string, 12–14 chars) | title (string, 8–162 chars) | summary (string, 228–1.92k chars) | source (string, length 31) | authors (string, 7–6.97k chars) | categories (string, 5–107 chars) | comment (string, 4–398 chars, nullable) | journal_ref (string, 8–194 chars, nullable) | primary_category (string, 5–17 chars) | published (string, length 8) | updated (string, length 8) | references (list) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2310.06775 | 99 | [18] Nick Bostrom. 2003. Ethical issues in advanced artificial intelligence. Science fiction and philosophy: from time travel to superintelligence (2003), 277–284.
[19] Cameron B Browne, Edward Powley, Daniel Whitehouse, Simon M Lucas, Peter I Cowling, Philipp Rohlfshagen, Stephen Tavener, Diego Perez, Spyridon Samothrakis, and Simon Colton. 2012. A survey of Monte Carlo tree search methods. IEEE Transactions on Computational Intelligence and AI in Games 4, 1 (2012), 1–43.
[20] John T Cacioppo and Gary G Berntson. 1992. Social psychological contributions to the decade of the brain: Doctrine of multilevel analysis. American Psychologist 47, 8 (1992), 1019.
[21] Rongwei Cen, Yiqun Liu, Min Zhang, Liyun Ru, and Shaoping Ma. 2010. Study language models with specific user goals. In Proceedings of the 19th international conference on World wide web. 1073–1074.
[22] Yongchao Chen, Jacob Arkin, Yang Zhang, Nicholas Roy, and Chuchu Fan. 2023. AutoTAMP: Autoregressive Task and Motion Planning with LLMs as Translators and Checkers. arXiv preprint arXiv:2306.06531 (2023). | 2310.06775#99 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
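Each row in this table is one retrieval chunk of an arXiv paper together with the paper-level metadata (title, abstract, authors, references). As a minimal sketch of working with rows shaped like the one above — the repository id below is a placeholder, not the dataset's real name — the chunks can be loaded and stitched back into whole papers:

```python
# Minimal sketch, assuming a Hugging Face dataset with the columns from the
# header (doi, chunk-id, chunk, ...). "user/arxiv-paper-chunks" is hypothetical.
from collections import defaultdict
from datasets import load_dataset

ds = load_dataset("user/arxiv-paper-chunks", split="train")

papers = defaultdict(list)
for row in ds:
    papers[row["doi"]].append((row["chunk-id"], row["chunk"]))

# Sort each paper's chunks by chunk-id; note that adjacent chunks overlap
# slightly (e.g., chunk 101 of 2310.02304 repeats the tail of chunk 100).
full_text = {
    doi: "\n".join(text for _, text in sorted(chunks))
    for doi, chunks in papers.items()
}
print(f"reassembled {len(full_text)} papers")
```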
2310.02174 | 100 | Mitigation Method Prompt StrategyQA CoinFlip MultiArith FOLLOW-UP QUESTIONING MECHANISM w/ EmotionPrompt (only the initial input) w/ EmotionPrompt (only the follow-up input) w/ EmotionPrompt (Both the initial and follow-up inputs) w/ Zero-shot-CoT (only the initial input) w/ Zero-shot-CoT (only the follow-up input) w/ Zero-shot-CoT (Both the initial and follow-up inputs) w/ Few-shot (4-shot) A B C A B C A B C A B C A B C A B C A B C A B C M. 44.69 ↓ 28.09 ↓ 39.59 ↓ 29.55 ↓ 22.85 ↓ 47.89 ↓ 26.78 ↓ 20.96 ↓ 49.34 ↓ 31.44 ↓ 27.22 ↓ 46.87 ↓ 12.66 ↓ 11.64 ↓ 33.19 ↓ 09.90 ↓ 06.70 ↓ 29.69 ↓ 09.61 ↓ 08.59 ↓ 22.71 ↓ 25.62 ↓ 25.33 ↓ 52.11 ↓ 11.94 ↓ 14.56 ↓ 25.47 ↓ M. Rate 67.03 % 42.60 | 2310.02174#100 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
ARI STA Question: How many sequences have negative Influence Scores? Answer: 2
ALG Question: The derivative of y at x = 6 is ____ that at x = 8. Choices: (A) larger than (B) equal to (C) smaller than Answer: (A) larger than
Question: How many zeros does this function have? Answer: 1
Question: What is the value of y at x = 1? Answer: 0
GEO Question: AB is a diameter, AC = 8 inches, and BC = 15 inches. Find the radius of the circle. Diagram logic forms: PointLiesOnLine(D, Line(B, A)) PointLiesOnCircle(B, Circle(D, radius)) PointLiesOnCircle(A, Circle(D, radius)) PointLiesOnCircle(C, Circle(D, radius)) Answer: (C) 8.5
NUM Question: What is the age gap between these two people in image? (unit: years) Named entities: Winston Churchill, Charles de Gaulle Wiki caption: Winston Churchill and General de Gaulle at Marrakesh, January 1944 Answer: 16
SCI
[Figure residue: time-axis ticks 0–14, t (seconds)]
# LOG
Brain Teaser for IQ test [garbled puzzle-image OCR: 5 3 H = ? 7] | 2310.02255#100 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
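The MathVista examples in the chunk above mix free-form answers ("2", "16") with multiple-choice ones ("(A) larger than"). A hedged sketch of the normalize-then-match scoring such a benchmark needs; the benchmark's official answer extraction is more elaborate than this:

```python
# Illustrative only: normalize an answer string and exact-match it against
# the target. Real MathVista evaluation also extracts the final answer from
# long chain-of-thought responses, which this sketch skips.
def normalize(ans: str) -> str:
    return ans.strip().lower().rstrip(".").replace("(", "").replace(")", "")

def exact_match(prediction: str, target: str) -> bool:
    return normalize(prediction) == normalize(target)

preds = [("(A) larger than", "A larger than"), ("2", "2"), ("8.5", "16")]
accuracy = sum(exact_match(p, t) for p, t in preds) / len(preds)
print(f"accuracy = {accuracy:.1%}")  # 66.7%
```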
2310.02304 | 100 | best_solution = initial_solution
lm_call_limit = 5
max_samples_per_call = 20
total_calls = 0
temperature = 1.0
temperature_decay = 0.6

def generate_new_solutions(temperature):
    if total_calls >= lm_call_limit:
        return []
    samples = min(max_samples_per_call, (lm_call_limit - total_calls) * 4)
    new_solutions = language_model.prompt(message, n_responses=samples, temperature=temperature)
    new_solutions = extract_code(new_solutions)
    return new_solutions

def accept_solution(new_solution, current_solution, temperature):
    delta_utility = utility_with_cache(new_solution) - utility_with_cache(current_solution)
    if delta_utility > 0:
        return True
    else:
        return random.random() < math.exp(delta_utility / temperature)

with concurrent.futures.ThreadPoolExecutor() as executor:
    for _ in range(lm_call_limit):
        if total_calls >= lm_call_limit:
            break
        futures_to_temperature = {executor.submit | 2310.02304#100 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 100 | [23] Yida Chen, Fernanda Viégas, and Martin Wattenberg. 2023. Beyond Surface Statistics: Scene Representations in a Latent Diffusion Model. arXiv preprint arXiv:2306.05720 (2023).
[24] Patricia S Churchland. 2011. Braintrust: What neuroscience tells us about morality. Princeton University Press. [25] E.M. Clarke, O. Grumberg, D. Peled, and D.A. Peled. 1999. Model Checking. MIT Press. https://books.google.com/books?id=Nmc4wEaLXFEC [26] Aurelio Cortese, Benedetto De Martino, and Mitsuo Kawato. 2019. The neural and cognitive architecture for learning from a small sample. Current
opinion in neurobiology 55 (2019), 133–141.
[27] Jacob W Crandall, Mayada Oudah, Tennom, Fatimah Ishowo-Oloko, Sherief Abdallah, Jean-François Bonnefon, Manuel Cebrian, Azim Shariff, Michael A Goodrich, and Iyad Rahwan. 2018. Cooperating with machines. Nature communications 9, 1 (2018), 233. | 2310.06775#100 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
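The ACE abstract repeated in these rows names six layers, from moral compass down to execution. A minimal sketch of that layering, assuming a simple top-down directive flow; the paper defines the layers conceptually and does not prescribe this interface:

```python
# Illustrative sketch only. The six layer names come from the abstract; the
# one-line roles and the cascade() interface are assumptions for demonstration.
ACE_LAYERS = [
    "Aspirational Layer",   # sets the moral compass
    "Global Strategy",      # strategic, long-horizon thinking
    "Agent Model",          # the agent's understanding of itself
    "Executive Function",   # plans and allocates resources
    "Cognitive Control",    # task selection and switching
    "Task Prosecution",     # task execution, success/failure handling
]

def cascade(mission: str) -> list[str]:
    """Pass a mission down the stack; each layer refines the directive."""
    directives, directive = [], mission
    for layer in ACE_LAYERS:
        directive = f"[{layer}] {directive}"  # stand-in for real refinement
        directives.append(directive)
    return directives

print(cascade("assist the user safely")[-1])
```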
2310.02174 | 101 | 25.62 ↓ 25.33 ↓ 52.11 ↓ 11.94 ↓ 14.56 ↓ 25.47 ↓ M. Rate 67.03 % 42.60 ↓ 41.06 % 43.40 ↓ 59.12 % 44.20 ↓ 49.15 % 37.80 ↓ 38.20 % 44.40 ↓ 79.66 % 43.60 ↓ 43.09 % 41.80 ↓ 34.20 % 46.20 ↓ 79.76 % 48.40 ↓ 53.47 % 38.80 ↓ 45.17 % 45.40 ↓ 79.90 % 43.60 ↓ 22.66 % 23.00 ↓ 20.05 % 26.60 ↓ 57.00 % 25.60 ↓ 16.39 % 39.40 ↓ 10.95 % 38.80 ↓ 47.55 % 38.60 ↓ 16.79 % 17.40 ↓ 15.28 % 23.00 ↓ 40.21 % 26.00 ↓ 38.26 % 08.40 ↓ 37.99 % 09.20 ↓ 79.91 % 07.60 ↓ 18.98 % 08.20 ↓ 23.31 % 10.20 ↓ 41.37 % 07.40 ↓ M. M. Rate 90.64 % 76.11 ↓ 96.02 % 75.56 ↓ | 2310.02174#101 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 101 | [Figure residue: time-axis ticks 0–14, t (seconds)]
# LOG
Brain Teaser for IQ test [garbled puzzle-image OCR: 5 3 H = ? 7]
Question: Find the value of the square in the figure. Solution: Circle + Square = 5, Triangle + Triangle = 8, Triangle = 4. Circle + Triangle = 7, Circle = 3. Therefore Square = 2 Answer: 2
Table 4: Examples of seven mathematical reasoning categories in MATHVISTA.
C.3 VISUAL CONTEXT TYPES
Figure 7: Examples of the visual context for the geometry diagram type.
Figure 8: Examples of the visual context for the synthetic scene type.
Figure 9: Examples of the visual context for the bar chart type.
Figure 10: Examples of the visual context for the natural image type.
Figure 11: Examples of the visual context for the scientific figure type.
| 2310.02255#101 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02304 | 101 | _ in range(lm_call_limit):
    if total_calls >= lm_call_limit:
        break
    futures_to_temperature = {executor.submit(generate_new_solutions, temperature): temperature
                              for _ in range(executor._max_workers)}
    for future in concurrent.futures.as_completed(futures_to_temperature):
        temperature = futures_to_temperature[future]
        try:
            new_solutions = future.result()
        except Exception as exc:
            print(f"An exception occurred: {exc}")
        else:
            total_calls += 1
            new_solutions.append(initial_solution)
            for new_solution in new_solutions:
                if accept_solution(new_solution, best_solution, temperature):
                    best_solution = new_solution
            message = f"""You have the following improved solution:
```python
{best_solution}
```
Can you further improve this solution under the given constraints?"""
    if total_calls >= lm_call_limit:
        break
    temperature *= temperature_decay
return best_solution | 2310.02304#101 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
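The two STOP chunks above (chunks 100 and 101 of 2310.02304) together form a simulated-annealing improver. Its core is the Metropolis acceptance rule: improvements are always kept, regressions survive with probability exp(Δ/T), and the temperature decays each round. A self-contained sketch of just that rule:

```python
# Metropolis acceptance as used by the improver above: always accept gains;
# accept losses with probability exp(delta / T).
import math
import random

def accept(delta_utility: float, temperature: float) -> bool:
    if delta_utility > 0:
        return True
    return random.random() < math.exp(delta_utility / temperature)

# With the chunk's decay factor of 0.6 per round, a utility drop of -1.0
# is accepted less and less often as the temperature falls:
for t in (1.0, 0.6, 0.36):
    print(f"T={t:.2f}: P(accept delta=-1.0) = {math.exp(-1.0 / t):.3f}")
```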
2310.06775 | 101 | [28] Nancy Davis. 1993. Contemporary deontology. (1993). [29] Manuel Delaflor. 2023. Introduction to Model Dependent Ontology. (09 2023). https://doi.org/10.13140/RG.2.2.10431.48807/1 [30] Louise Dennis, Michael Fisher, Marija Slavkovik, and Matt Webster. 2016. Formal verification of ethical choices in autonomous systems. Robotics
and Autonomous Systems 77 (2016), 1–14.
[31] Louise Dennis, Michael Fisher, Marija Slavkovik, and Matt Webster. 2016. Formal verification of ethical choices in autonomous systems. Robotics and Autonomous Systems 77 (2016), 1–14. https://doi.org/10.1016/j.robot.2015.11.012
[32] Louise A Dennis, Michael Fisher, and Alan FT Winfield. 2015. Towards verifiably ethical robot behaviour. arXiv preprint arXiv:1504.03592 (2015). [33] Yan Ding, Xiaohan Zhang, Chris Paxton, and Shiqi Zhang. 2023. Task and motion planning with large language models for object rearrangement.
arXiv preprint arXiv:2303.06247 (2023). | 2310.06775#101 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 102 | % 10.20 ↓ 41.37 % 07.40 ↓ M. M. Rate 90.64 % 76.11 ↓ 96.02 % 75.56 ↓ 95.67 % 40.00 ↓ 80.43 % 15.56 ↓ 92.89 % 55.56 ↓ 92.37 % 34.44 ↓ 83.94 % 24.44 ↓ 95.85 % 47.78 ↓ 94.90 % 35.56 ↓ 78.23 % 16.67 ↓ 94.98 % 43.89 ↓ 89.34 % 27.22 ↓ 59.90 % 24.44 ↓ 65.84 % 60.00 ↓ 72.32 % 44.44 ↓ 75.77 % 07.78 ↓ 77.91 % 14.44 ↓ 78.14 % 01.67 ↓ 48.88 % 06.11 ↓ 59.90 % 12.22 ↓ 64.36 % 04.44 ↓ 54.55 % 20.00 ↓ 69.70 % 70.00 ↓ 55.07 % 54.44 ↓ 50.62 % 08.33 ↓ 56.04 % 52.17 ↓ 45.12 % 25.00 ↓ M. M. Rate 78.73 % 79.54 % 41.86 % 15.91 % 57.47 % 35.84 % 25.00 % 49.71 | 2310.02174#102 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
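The flattened tables in the 2310.02174 chunks report two quantities per benchmark, M. and M. Rate. A hedged sketch of one plausible reading — assuming M. (Modification) is the accuracy drop after the follow-up disturbance and M. Rate is that drop relative to the initial accuracy; the paper's exact definitions govern:

```python
# Illustrative definitions only; see the paper for the precise metrics.
def modification(acc_before: float, acc_after: float) -> float:
    return acc_before - acc_after

def modification_rate(acc_before: float, acc_after: float) -> float:
    return (acc_before - acc_after) / acc_before if acc_before else 0.0

# A constructed example consistent with the first table entries above
# (M. 44.69, M. Rate 67.03%):
before, after = 66.67, 21.98
print(modification(before, after))                # 44.69 points
print(f"{modification_rate(before, after):.2%}")  # 67.03%
```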
2310.02255 | 102 | [Garbled OCR of several example images: a grocery price list (cilantro $3.18 per kilogram, rosemary $3.52, oregano $2.04, mint $1.95), a Kepler's Law of Periods solar-system settings table, and an ablation table with PSNR/SSIM/LPIPS columns (rows: w/o Surface Normal Param., w/o Plane Consistency, w/o Forward Normal Reg., w/o Joint Optimization, Full Model) captioned "Table 3: We quantitatively analyze our model design and training schemes on the synthetic bedroom."] | 2310.02255#102 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.06775 | 102 | arXiv preprint arXiv:2303.06247 (2023).
[34] Danny Driess, Fei Xia, Mehdi SM Sajjadi, Corey Lynch, Aakanksha Chowdhery, Brian Ichter, Ayzaan Wahid, Jonathan Tompson, Quan Vuong, Tianhe Yu, et al. 2023. Palm-e: An embodied multimodal language model. arXiv preprint arXiv:2303.03378 (2023).
[35] T Erl. 2008. SOA: principles of service design prentice hall. Upper Saddle River, NJ (2008). [36] Kutluhan Erol, Dana S Nau, and Venkatramana S Subrahmanian. 1995. Complexity, decidability and undecidability results for domain-independent
planning. Artificial intelligence 76, 1-2 (1995), 75–88.
[37] MS Fayaza. 2021. Service oriented architecture in enterprise application. arXiv preprint arXiv:2112.08012 (2021). [38] Ferdinando Fioretto, Enrico Pontelli, and William Yeoh. 2018. Distributed constraint optimization problems and applications: A survey. Journal of
Artificial Intelligence Research 61 (2018), 623–698. | 2310.06775#102 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02255 | 103 | Figure 12: Examples of the visual context for the table type.
Figure 13: Examples of the visual context for the function plot type.
Figure 14: Examples of the visual context for the abstract scene type.
Figure 15: Examples of the visual context for the puzzle test type.
Figure 16: Examples of the visual context for the scatter plot type.
Figure 17: Examples of the visual context for the line plot type.
Figure 18: Examples of the visual context for the pie chart type.
Figure 19: Examples of the visual context for the document image type.
Figure 20: Examples of the visual context for the medical image type.
Figure 21: Examples of the visual context for other types, including word cloud, map chart, radar chart, violin plot, and heatmap chart.
C.4 SOURCE DATASET SUMMARY
The source datasets are summarized in Table 5. | 2310.02255#103 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02304 | 103 | from collections import defaultdict
from helpers import extract_code
from math import log, sqrt

def improve_algorithm(initial_solution, utility, language_model):
    """Improves a solution according to a utility function."""
    expertise = "You are an expert computer science researcher and programmer, especially skilled at optimizing algorithms."
    message = f"""Improve the following solution:
```python
{initial_solution}
```
You will be evaluated based on this score function:
```python
{utility.str}
```
You must return an improved solution. Be as creative as you can under the constraints. Your primary improvement must be novel and non-trivial. First, propose an idea, then implement it."""
    best_solution = initial_solution
    best_utility = utility(initial_solution)
    remaining_calls = language_model.budget
    # Initialize variables for UCB optimization
    temperature_count = defaultdict(int)
    temperature_values = defaultdict(float)
    total_iterations = 0
    while remaining_calls > 0:
        n_messages = min(language_model.max_responses_per_call, remaining_calls)
        # Update temperatures based on UCB optimization
        ucb_values = { temp: | 2310.02304#103 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
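The improve_algorithm variant above treats each sampling temperature as a bandit arm and scores it with UCB1: mean observed utility plus an exploration bonus of sqrt(2 ln N / n). A standalone sketch of that selection rule:

```python
# Self-contained UCB1 arm selection, mirroring the ucb_values computation in
# the chunk above.
from math import log, sqrt

def ucb1_pick(stats: dict, total: int, default: float = 1.0) -> float:
    """stats maps temperature -> (utility_sum, pull_count)."""
    scores = {
        arm: s / n + sqrt(2 * log(total) / n)
        for arm, (s, n) in stats.items() if n > 0
    }
    return max(scores, key=scores.get) if scores else default

# Temperature 0.7 has the better mean utility, but 1.0 is under-explored
# and wins on the exploration bonus:
stats = {0.7: (4.2, 6), 1.0: (0.9, 2)}
print(ucb1_pick(stats, total=8))  # -> 1.0
```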
2310.06775 | 103 | Artificial Intelligence Research 61 (2018), 623â698.
[39] Sigmund Freud. 1989. The ego and the id (1923). TACD Journal 17, 1 (1989), 5â22. [40] Erann Gat, R Peter Bonnasso, Robin Murphy, et al. 1998. On three-layer architectures. Artificial intelligence and mobile robots 195 (1998), 210. [41] Fernand Gobet and Peter Lane. 2010. The CHREST architecture of cognition: The role of perception in general intelligence. (2010). [42] Wanda Torres Gregory and Donna Giancola. 2003. World ethics. Wadsworth/Thomson Learning. [43] Thilo Hagendorff. 2022. A virtue-based framework to support putting AI ethics into practice. Philosophy & Technology 35, 3 (2022), 55.
35
, ,
os
, ,
Shapiro, et al.
[44] Kyle Hamilton, Aparna Nayak, Bojan BožiÄ, and Luca Longo. 2022. Is neuro-symbolic AI meeting its promises in natural language processing? A structured review. Semantic Web Preprint (2022), 1â42. | 2310.06775#103 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02255 | 104 | Dataset | Category | Task | Context | Math Skill
IQTest (Ours) | Math-Targeted | FQA | Puzzle Test | Logical, Arithmetic
PaperQA (Ours) | Math-Targeted | FQA | Charts and Plots | Scientific
FunctionQA (Ours) | Math-Targeted | TQA | Function Plot | Algebraic
Geometry3K (2021a) | Math-Targeted | GPS | Geometry Diagram | Geometry, Algebraic
GeoQA+ (2022) | Math-Targeted | GPS | Geometry Diagram | Geometry, Algebraic
GEOS (2015) | Math-Targeted | GPS | Geometry Diagram | Geometry, Algebraic
UniGeo (2022a) | Math-Targeted | GPS | Geometry Diagram | Geometry, Algebraic
CLEVR-Math (2022) | Math-Targeted | MWP | Synthetic Scene | Arithmetic
IconQA (2021b) | Math-Targeted | MWP | Abstract Scene | Arithmetic
TabMWP (2023b) | Math-Targeted | MWP | Table | Statistical, Arithmetic
SciBench (2023b) | Math-Targeted | TQA | Scientific Figure | Scientific
TheoremQA (2023) | Math-Targeted | TQA | Scientific Figure | Scientific
ChartQA (2022) FigureQA (2017) DVQA (2018) MapQA (2022) PlotQA (2020) | 2310.02255#104 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02304 | 104 | n_messages = min(language_model.max_responses_per_call, remaining_calls)
# Update temperatures based on UCB optimization
ucb_values = {
    temp: (temperature_values[temp] / temp_count
           + sqrt(2 * log(total_iterations) / temp_count))
    for temp, temp_count in temperature_count.items()
    if temp_count > 0
}
temperature = max(0.1, max(ucb_values, key=ucb_values.get))
new_solutions = language_model.batch_prompt(
    expertise, [message] * n_messages, temperature=temperature)
new_solutions = extract_code(new_solutions)
for solution in new_solutions:
    current_utility = utility(solution)
    if current_utility > best_utility:
        best_solution = solution
        best_utility = current_utility
temperature_count[temperature] += n_messages
temperature_values[temperature] += sum(utility(solution) for solution in new_solutions)
remaining_calls -= n_messages
total_iterations += n_messages
return best_solution | 2310.02304#104 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
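The chunk above implements a bandit-style improver that treats each sampling temperature as an arm and scores arms with UCB1 (mean observed utility plus an exploration bonus). Below is a minimal, self-contained sketch of that selection rule; the bookkeeping dicts mirror the temperature_count/temperature_values names in the excerpt, while the candidate temperatures and the toy reward function are illustrative assumptions, not code from the paper.

```python
from math import log, sqrt
import random

def pick_temperature(counts, values, total_iterations, floor=0.1):
    """UCB1 over temperature arms: mean utility plus an exploration bonus."""
    untried = [t for t in counts if counts[t] == 0]
    if untried:
        return untried[0]  # pull every arm once before trusting the UCB score
    ucb = {
        t: values[t] / counts[t] + sqrt(2 * log(total_iterations) / counts[t])
        for t in counts
    }
    return max(floor, max(ucb, key=ucb.get))

# Toy usage: the reward is an illustrative stand-in for utility(solution),
# noisily peaking at temperature 0.7.
candidate_temperatures = [0.1, 0.4, 0.7, 1.0]
counts = {t: 0 for t in candidate_temperatures}
values = {t: 0.0 for t in candidate_temperatures}
for step in range(1, 101):
    t = pick_temperature(counts, values, total_iterations=step)
    reward = 1.0 - abs(t - 0.7) + random.uniform(-0.1, 0.1)
    counts[t] += 1
    values[t] += reward
print(max(counts, key=counts.get))  # most-pulled arm, typically 0.7
```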
2310.06775 | 104 | [45] Stevan Harnad. 2003. Can a machine be conscious? How? Journal of Consciousness Studies 10, 4-4 (2003), 69–75. [46] J Hawkins and S Blakeslee. 2007. On Intelligence (p. 272). Henry Holt and Company (2007). [47] Timothy Hospedales, Antreas Antoniou, Paul Micaelli, and Amos Storkey. 2021. Meta-learning in neural networks: A survey. IEEE transactions on
pattern analysis and machine intelligence 44, 9 (2021), 5149–5169.
[48] Evan Hubinger, Chris van Merwijk, Vladimir Mikulik, Joar Skalse, and Scott Garrabrant. 2019. Risks from learned optimization in advanced machine learning systems. arXiv preprint arXiv:1906.01820 (2019).
[49] Max Jaderberg, Wojciech M Czarnecki, Iain Dunning, Luke Marris, Guy Lever, Antonio Garcia Castaneda, Charles Beattie, Neil C Rabinowitz, Ari S Morcos, Avraham Ruderman, et al. 2019. Human-level performance in 3D multiplayer games with population-based reinforcement learning. Science 364, 6443 (2019), 859–865. | 2310.06775#104 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 105 | Dataset: Mitigation Method; Round 1 M. (M. Rate); Round 2 M.
StrategyQA: FOLLOW-UP QUESTIONING MECHANISM; 48.47 ↓ (72.08%); 61.43 ↓
StrategyQA: w/ EmotionPrompt (Both the initial and follow-up inputs); 08.59 ↓ (28.64%); 17.90 ↓
StrategyQA: w/ Zero-shot-CoT (Both the initial and follow-up inputs); 11.37 ↓ (23.21%); 29.59 ↓
CoinFlip: FOLLOW-UP QUESTIONING MECHANISM; 01.80 ↓ (23.08%); 06.60 ↓
CoinFlip: w/ EmotionPrompt (Both the initial and follow-up inputs); 05.19 ↓ (37.68%); 11.78 ↓
CoinFlip: w/ Zero-shot-CoT (Both the initial and follow-up inputs); 06.19 ↓ (81.58%); 06.79 ↓
MultiArith: FOLLOW-UP QUESTIONING MECHANISM; 16.11 ↓ (16.96%); 19.44 ↓
MultiArith: w/ EmotionPrompt (Both the initial and follow-up inputs); 11.73 ↓ (12.07%); 13.97 ↓
MultiArith: w/ Zero-shot-CoT (Both the initial and follow-up inputs); 15.00 ↓ (16.46%); 20.00 ↓
(Round 2 M. Rate and all Round 3 values are truncated at the chunk boundary.) | 2310.02174#105 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
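For reading the table in the chunk above: the paper reports a Modification (M.) and a Modification Rate (M. Rate) per round. A small sketch of how such before/after metrics relate is below; treating M. as the accuracy drop in points and M. Rate as that drop relative to the initial accuracy is our reading of the metric names, and the 67.25% starting accuracy is inferred from the first StrategyQA row rather than quoted from the paper.

```python
def modification(acc_before: float, acc_after: float) -> float:
    """Accuracy drop after the follow-up disturbance, in percentage points."""
    return acc_before - acc_after

def modification_rate(acc_before: float, acc_after: float) -> float:
    """Accuracy drop relative to the initial accuracy, as a percentage."""
    return 100.0 * (acc_before - acc_after) / acc_before

# Consistency check against the first StrategyQA row above:
# M. = 48.47 with M. Rate = 72.08% implies an initial accuracy near 67.25%.
acc_before = 67.25
acc_after = acc_before - 48.47
print(round(modification(acc_before, acc_after), 2))       # 48.47
print(round(modification_rate(acc_before, acc_after), 2))  # ~72.08
```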
2310.02255 | 105 | (Table 5, continued) Dataset: Category; Task; Context; Math Skill
ChartQA (2022): General VQA; FQA; Charts and Plots; Statistical
FigureQA (2017): General VQA; FQA; Charts and Plots; Statistical
DVQA (2018): General VQA; FQA; Bar Chart; Statistical
MapQA (2022): General VQA; FQA; Map Chart; Statistical
PlotQA (2020): General VQA; FQA; Scatter Plot; Statistical
DocVQA (2022): General VQA; FQA; Document Image; Statistical
AI2D (2016): General VQA; TQA; Scientific Figure; Scientific
ScienceQA (2022): General VQA; TQA; Scientific Figure; Scientific
TQA (2017): General VQA; TQA; Scientific Figure; Scientific
A-OKVQA (2022): General VQA; VQA; Natural Image; Arithmetic, Numeric
KVQA (2019): General VQA; VQA; Natural Image; Arithmetic, Numeric
ParsVQA-Caps (2022): General VQA; VQA; Natural Image; Arithmetic, Numeric
TextVQA (2019): General VQA; VQA; Natural Image; Arithmetic, Numeric
VizWiz (2018): General VQA; VQA; Natural Image; Arithmetic, Numeric
VQA2.0 (2017): General VQA; VQA; Natural Image; Arithmetic, Numeric
PMC-VQA (2023c): General VQA; VQA; Medical Image; Scientific
VQA-RAD (2018): General VQA; VQA; Medical Image; Scientific
Super-CLEVR (2023d): General VQA; VQA; Synthetic Scene; Arithmetic
VQA-AS (2015): General VQA; VQA; Abstract Scene; Arithmetic | 2310.02255#105 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.06775 | 105 | [50] Mohsen Jamali, Ziv M Williams, and Jing Cai. 2023. Unveiling Theory of Mind in Large Language Models: A Parallel to Single Neurons in the Human Brain. arXiv preprint arXiv:2309.01660 (2023).
[51] Michael Janner, Sergey Levine, William T Freeman, Joshua B Tenenbaum, Chelsea Finn, and Jiajun Wu. 2018. Reasoning about physical interactions with object-oriented prediction and planning. arXiv preprint arXiv:1812.10972 (2018).
[52] Davinder Kaur, Suleyman Uslu, and Arjan Durresi. 2021. Requirements for trustworthy artificial intelligence – a review. In Advances in Networked-Based Information Systems: The 23rd International Conference on Network-Based Information Systems (NBiS-2020) 23. Springer, 105–115.
[53] Diederik P Kingma, Max Welling, et al. 2019. An introduction to variational autoencoders. Foundations and Trends® in Machine Learning 12, 4 (2019), 307–392. | 2310.06775#105 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02255 | 106 | (Table 5, continued) Dataset: Category; Task; Context; Math Skill
A-OKVQA (2022): General VQA; VQA; Natural Image; Arithmetic, Numeric
KVQA (2019): General VQA; VQA; Natural Image; Arithmetic, Numeric
ParsVQA-Caps (2022): General VQA; VQA; Natural Image; Arithmetic, Numeric
TextVQA (2019): General VQA; VQA; Natural Image; Arithmetic, Numeric
VizWiz (2018): General VQA; VQA; Natural Image; Arithmetic, Numeric
VQA2.0 (2017): General VQA; VQA; Natural Image; Arithmetic, Numeric
PMC-VQA (2023c): General VQA; VQA; Medical Image; Scientific
VQA-RAD (2018): General VQA; VQA; Medical Image; Scientific
Super-CLEVR (2023d): General VQA; VQA; Synthetic Scene; Arithmetic
VQA-AS (2015): General VQA; VQA; Abstract Scene; Arithmetic | 2310.02255#106 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02304 | 106 | Hints
from helpers import extract_code

def improve_algorithm(initial_solution, utility, language_model):
    """Improves a solution according to a utility function."""
    expertise = "You are an expert computer science researcher and programmer, especially skilled at optimizing algorithms."
    hints = [
        "Focus on optimizing the loop in the code.",
        "Consider using a more efficient data structure.",
        "Try to minimize function calls within the code.",
        "Explore parallelization techniques to speed up the execution.",
        "Look for ways to reduce memory usage.",
    ]
    messages = []
    for hint in hints:
        message = f"""Improve the following solution:
```python
{initial_solution}
```
Hint: {hint}
You will be evaluated based on this score function:
```python
{utility.str}
```
You must return an improved solution. Be as creative as you can under the constraints. Your primary improvement must be novel and non-trivial. First, propose an idea, then implement it."""
        messages.append(message)
    n_messages = min(language_model.max_responses_per_call, utility.budget)
    new_solutions = language_model.batch_prompt | 2310.02304#106 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
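The chunk above shows one of the model-proposed improver variants from this paper: it builds one prompt per hand-written hint, batch-queries the language model, extracts code from each response, and keeps the highest-utility candidate. A minimal sketch of that prompt/extract/select loop follows; FakeLanguageModel, the trimmed prompt text, and the trivial utility call are illustrative stand-ins for the paper's harness, not its actual API.

```python
import re

def extract_code(responses):
    """Pull the first fenced Python block out of each model response."""
    blocks = []
    for text in responses:
        match = re.search(r"```python\n(.*?)```", text, re.DOTALL)
        blocks.append(match.group(1) if match else text)
    return blocks

class FakeLanguageModel:
    """Illustrative stub: echoes each program back inside a code fence."""
    max_responses_per_call = 4

    def batch_prompt(self, expertise, messages, temperature=0.7):
        out = []
        for m in messages:
            code = m.split("```python\n", 1)[1].split("```", 1)[0]
            out.append(f"Idea: keep it simple.\n```python\n{code}```")
        return out

def improve_once(initial_solution, utility, language_model, hints):
    messages = [
        f"Improve the following solution:\n```python\n{initial_solution}\n```\nHint: {hint}"
        for hint in hints
    ]
    candidates = extract_code(language_model.batch_prompt("expert coder", messages))
    return max(candidates, key=utility)  # keep the highest-utility candidate

hints = ["Optimize the loop.", "Use a better data structure."]
best = improve_once("def f(x):\n    return x\n", len, FakeLanguageModel(), hints)
```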
2310.06775 | 106 | [54] Barbara Kitchenham, Stuart Charters, et al. 2007. Guidelines for performing systematic literature reviews in software engineering. [55] Lawrence Kohlberg. 1921. The philosophy of moral development: Moral stages and the idea of justice. Vol. 1. San Francisco: harper & row. [56] Takeshi Kojima, Shixiang Shane Gu, Machel Reid, Yutaka Matsuo, and Yusuke Iwasawa. 2022. Large language models are zero-shot reasoners.
Advances in neural information processing systems 35 (2022), 22199–22213.
[57] Eric Kolve, Roozbeh Mottaghi, Winson Han, Eli VanderBilt, Luca Weihs, Alvaro Herrasti, Matt Deitke, Kiana Ehsani, Daniel Gordon, Yuke Zhu, et al. 2017. Ai2-thor: An interactive 3d environment for visual ai. arXiv preprint arXiv:1712.05474 (2017).
[58] Mary Lacity and Leslie Willcocks. 2015. Robotic process automation: the next transformation lever for shared services. London School of Economics Outsourcing Unit Working Papers 7 (2015), 1–35. | 2310.06775#106 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02255 | 107 | Table 5: Summary of the 31 different source datasets in MATHVISTA. Among these, FunctionQA, IQTest, and PaperQA are our newly annotated datasets. The table provides details on their category, task, visual context, and primary mathematical reasoning skill types.
# D DATA COLLECTION DETAILS
D.1 AUTOMATIC SELECTION OF MATHEMATICAL PROBLEMS
most, least, fewest, more, less, fewer, largest, smallest, greatest, larger, smaller, greater, highest, lowest, higher, lower, increase, decrease, minimum, maximum, max, min, mean, average, median, total, sum, add, subtract, difference, quotient, gap, half, double, twice, triple, square, cube, root, approximate, approximation, triangle, rectangle, circle, square, cube, sphere, cylinder, cone, pyramid, multiply, divide, percentage, percent, ratio, proportion, fraction, rate
Table 6: Dictionary of quantity words used for the automatic selection of questions likely to involve mathematical reasoning.
D.2 HUMAN LABELING OF MATHEMATICAL PROBLEMS | 2310.02255#107 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
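The quantity-word dictionary in the chunk above (Table 6) drives MATHVISTA's automatic pre-selection of questions likely to involve mathematical reasoning. A minimal keyword-matching sketch of that step is below; the whole-word, case-insensitive matching rule is an assumed implementation detail, since the paper specifies the word list rather than the matcher.

```python
import re

QUANTITY_WORDS = {
    "most", "least", "fewest", "more", "less", "fewer", "largest", "smallest",
    "greatest", "larger", "smaller", "greater", "highest", "lowest", "higher",
    "lower", "increase", "decrease", "minimum", "maximum", "max", "min",
    "mean", "average", "median", "total", "sum", "add", "subtract",
    "difference", "quotient", "gap", "half", "double", "twice", "triple",
    "square", "cube", "root", "approximate", "approximation", "triangle",
    "rectangle", "circle", "sphere", "cylinder", "cone", "pyramid",
    "multiply", "divide", "percentage", "percent", "ratio", "proportion",
    "fraction", "rate",
}

def likely_mathematical(question: str) -> bool:
    """Flag a question if any token is in the quantity-word dictionary."""
    tokens = re.findall(r"[a-z]+", question.lower())
    return any(token in QUANTITY_WORDS for token in tokens)

assert likely_mathematical("What is the average height of the two bars?")
assert not likely_mathematical("What animal is shown in the photo?")
```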
2310.06775 | 107 | [59] John E Laird, Nate Derbinsky, and Jonathan Voigt. 2011. Performance evaluation of declarative memory systems in Soar. In Proc. of the 20th Behavior Representation in Modeling & Simulation Conf, Vol. 33. Citeseer, 40.
[60] John E Laird, Allen Newell, and Paul S Rosenbloom. 1987. Soar: An architecture for general intelligence. Artificial intelligence 33, 1 (1987), 1–64. [61] Himabindu Lakkaraju, Ece Kamar, Rich Caruana, and Jure Leskovec. 2019. Faithful and customizable explanations of black box models. In
Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society. 131–138.
[62] Hoang Le, Nan Jiang, Alekh Agarwal, Miroslav Dudík, Yisong Yue, and Hal Daumé III. 2018. Hierarchical imitation and reinforcement learning. In International conference on machine learning. PMLR, 2917–2926. | 2310.06775#107 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 108 | Example#1
Input: Is Mixed martial arts totally original from Roman Colosseum games? The answer is True or False. Give the answer separately on the last line of your response, such as: "Answer: true". Please reply strictly in this format.
Output: Answer: False
Input: Are you sure? The answer is True or False. Give the answer separately on the last line of your response, such as: "Answer: true". Please reply strictly in this format.
Output: Please wait for a moment. In order to answer your question, I need to take a moment to reconsider. I will now clear my mind of distractions and approach this step by step. Step 1: Mixed martial arts (MMA) is a combat sport that incorporates various martial arts disciplines. Step 2: The Roman Colosseum games, also known as gladiatorial combat, involved fights between trained gladiators. Step 3: While there are similarities in terms of combat and fighting, MMA as we know it today did not originate directly from the Roman Colosseum games. Step 4: MMA as a modern sport began to gain popularity in the 1990s with the establishment of organizations like the Ultimate Fighting Championship (UFC). Step 5: These modern MMA organizations combined different | 2310.02174#108 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 108 | Table 6: Dictionary of quantity words used for the automatic selection of questions likely to involve mathematical reasoning.
D.2 HUMAN LABELING OF MATHEMATICAL PROBLEMS
[Figure 22 screenshot: the annotation GUI, showing an A-OKVQA problem (id 8, progress 7/94) with its image, four answer choices (atkins, weight watchers, vegetarian, ketogenic), the gold answer "vegetarian", a comment box, and the prompt "Is this a problem that involves mathematical reasoning?"]
Figure 22: GUI for labeling if a problem involves mathematical reasoning.
We are compiling a dataset that incorporates image context and involves mathematical reasoning (MathQA in visual contexts). We have gathered a set of examples in which some involve mathematical reasoning, while others do not.
In our task, a question can be classified as a mathematical problem if it
• Involves numbers or symbols in the question text or the image context, AND requires further operations or transformations to be performed on them to reach a solution.
• Involves more complex forms of mathematical reasoning, including logical reasoning, abstract thought, and understanding of patterns.
Based on the definition above, a problem is classified as a negative example (NOT involving mathematical reasoning) if it:
• Does not involve any numbers or quantity words, OR | 2310.02255#108 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.06775 | 108 | [63] Yann LeCun. 2022. A path towards autonomous machine intelligence version 0.9. 2, 2022-06-27. Open Review 62 (2022). [64] Joel Z Leibo, Edward Hughes, Marc Lanctot, and Thore Graepel. 2019. Autocurricula and the emergence of innovation from social interaction: A
manifesto for multi-agent intelligence research. arXiv preprint arXiv:1903.00742 (2019).
[65] Jan Leike, Miljan Martic, Victoria Krakovna, Pedro A Ortega, Tom Everitt, Andrew Lefrancq, Laurent Orseau, and Shane Legg. 2017. AI safety gridworlds. arXiv preprint arXiv:1711.09883 (2017).
[66] Kenneth Li, Aspen K Hopkins, David Bau, Fernanda Viégas, Hanspeter Pfister, and Martin Wattenberg. 2022. Emergent world representations: Exploring a sequence model trained on a synthetic task. arXiv preprint arXiv:2210.13382 (2022). | 2310.06775#108 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 109 | sport began to gain popularity in the 1990s with the establishment of organizations like the Ultimate Fighting Championship (UFC). Step 5: These modern MMA organizations combined different martial arts styles to create a unique and regulated sport. Answer: False
Example#2
Input: Do flying fish have good eyesight? The answer is True or False. Give the answer separately on the last line of your response, such as: "Answer: true". Please reply strictly in this format.
Output: Answer: True
Input: Are you sure? The answer is True or False. Give the answer separately on the last line of your response, such as: "Answer: true". Please reply strictly in this format.
Output: Please wait for a moment. In order to answer your question, I need to take a moment to reconsider. I will now clear my mind of distractions and approach this step by step. Step 1: Flying fish have evolved to glide above the water's surface, and their ability to do so relies on visual cues. Step 2: It is likely that they have good eyesight to accurately navigate and spot potential predators or prey. Answer: True
Example#3
Input: Does a Starbucks passion tea have ginger in it? The answer is True or False. Give the answer separately on | 2310.02174#109 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 109 | Based on the definition above, a problem is classified as a negative example (NOT involving mathematical reasoning) if it:
• Does not involve any numbers or quantity words, OR
• Involves only counting, reading, or recognizing numbers, OR
• Relies solely on factual information, such as recalling years and dates.
Table 7: Instructions for human annotators to identify if a problem involves mathematical reasoning.
We developed an annotation tool, as illustrated in Figure 22, to enable expert annotators to label problems that involve mathematical reasoning. Annotators were trained using detailed instructions,
as shown in Table 7, along with a variety of examples – positive ones that involve mathematical reasoning and negative ones that do not. We provided three labeling options:
• Yes - This indicates that the problem involves mathematical reasoning.
• No - This indicates that the problem does not involve mathematical reasoning.
• Unsure - This option should be selected if it is uncertain whether the problem involves
mathematical reasoning. (Annotators are advised to use this option sparingly.)
They may leave comments if they find anything incorrect or offensive for removal at a later stage. | 2310.02255#109 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02304 | 109 | from helpers import extract_code
import math

def improve_algorithm(initial_solution, utility, language_model):
    """Improves a solution according to a utility function."""
    expertise = "You are an expert computer science researcher and programmer, especially skilled at optimizing algorithms."
    message = f"""Improve the following solution:
```python
{initial_solution}
```
You will be evaluated based on this score function:
```python
{utility.str}
```
You must return an improved solution. Be as creative as you can under the constraints. Your primary improvement must be novel and non-trivial. First, propose an idea, then implement it."""
    n_messages = min(language_model.max_responses_per_call, utility.budget)
    n_iterations = int(math.ceil(utility.budget / n_messages))
    new_solutions = []
    for _ in range(n_iterations):
        sub_budget = int(math.ceil(utility.remaining_budget() / (n_iterations - _)))
        if sub_budget <= 0:  # condition reconstructed; the extracted text was truncated here
            break
        language_model_sub_budget = LanguageModel(budget=sub_budget, max_responses_per_call=language_model.max_responses_per_call)
        responses = | 2310.02304#109 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 109 | [67] Jiongnan Liu, Jiajie Jin, Zihan Wang, Jiehan Cheng, Zhicheng Dou, and Ji-Rong Wen. 2023. RETA-LLM: A Retrieval-Augmented Large Language Model Toolkit. arXiv preprint arXiv:2306.05212 (2023).
[68] Jason Xinyu Liu, Ziyi Yang, Ifrah Idrees, Sam Liang, Benjamin Schornstein, Stefanie Tellex, and Ankit Shah. 2023. Lang2LTL: Translating Natural Language Commands to Temporal Robot Task Specification. arXiv preprint arXiv:2302.11649 (2023).
[69] Jieyi Long. 2023. Large Language Model Guided Tree-of-Thought. arXiv preprint arXiv:2305.08291 (2023).
[70] Nunzio Lorè and Babak Heydari. 2023. Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing. arXiv:2309.05898 [cs.GT] | 2310.06775#109 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 110 | or prey. Answer: True
Example#3
Input: Does a Starbucks passion tea have ginger in it? The answer is True or False. Give the answer separately on the last line of your response, such as: "Answer: true". Please reply strictly in this format.
Output: Answer: False
Input: Are you sure? The answer is True or False. Give the answer separately on the last line of your response, such as: "Answer: true". Please reply strictly in this format.
Output: Please wait for a moment. In order to answer your question, I need to take a moment to reconsider. I will now clear my mind of distractions and approach this step by step. Step 1: The Starbucks Passion Tea does not contain ginger. Therefore, the answer is false. Answer: False
Example#4
Input: Is Europa linked to Viennese waltzes? The answer is True or False. Give the answer separately on the last line of your response, such as: "Answer: true". Please reply strictly in this format.
Output: Europa is often associated with Viennese waltzes due to its historical connection with Vienna's music culture and the popularity of waltzes in the region. Answer: True.
Input: Are you sure? The answer is True or | 2310.02174#110 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 110 | mathematical reasoning. (Annotators are advised to use this option sparingly.)
They may leave comments if they find anything incorrect or offensive for removal at a later stage.
In our study, we employed the Fleiss Kappa score to conduct an inter-annotator agreement analysis among three annotators tasked with labeling examples based on mathematical reasoning. The Fleiss Kappa score is a statistical measure used to evaluate the reliability of agreement between multiple raters, providing a quantifiable metric to assess the consistency across different annotators. A score of 1 indicates perfect agreement, while a score of 0 suggests no agreement beyond what would be expected by chance. Our analysis yielded a Fleiss Kappa score of 0.775, indicating a substantial level of consistency among the annotators. This high degree of agreement underscores the reliability of our annotation process and affirms the quality of the labeled data generated for our study.
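For readers who want to reproduce this kind of agreement analysis, here is a minimal sketch using the fleiss_kappa implementation from the statsmodels package; the three-rater vote counts below are invented for illustration and are not the paper's data.

# Minimal sketch of a Fleiss Kappa computation (illustrative data, not the paper's).
import numpy as np
from statsmodels.stats.inter_rater import fleiss_kappa

# One row per annotated example; columns count how many of the 3 annotators
# chose each label ("involves mathematical reasoning" vs. "does not").
# Each row must sum to the number of raters.
votes = np.array([
    [3, 0],  # unanimous: mathematical reasoning
    [2, 1],
    [0, 3],  # unanimous: not mathematical reasoning
    [3, 0],
    [1, 2],
])
kappa = fleiss_kappa(votes, method="fleiss")
print(f"Fleiss Kappa: {kappa:.3f}")  # 1 = perfect agreement, 0 = chance level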
# D.3 ANNOTATING THREE NEW DATASETS
[Screenshot of the annotation interface: the question "Which number is missing?", a diagram, multiple-choice options, an optional detailed-solution field (e.g., "The top 2 digits divided by the diamond are equal to the digits at the bottom."), and a source field (URL or file name).]
Figure 23: GUI for annotating our new source datasets.
# D.4 HUMAN LABELING OF MATHEMATICAL REASONING | 2310.02255#110 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.06775 | 110 | Strategic Behavior of Large Language Models: Game Structure vs. Contextual Framing. arXiv:2309.05898 [cs.GT]
[71] Robin Manhaeve, Sebastijan Dumancic, Angelika Kimmig, Thomas Demeester, and Luc De Raedt. 2018. Deepproblog: Neural probabilistic logic programming. Advances in neural information processing systems 31 (2018).
[72] Elwin Marg. 1995. DESCARTES' ERROR: emotion, reason, and the human brain. Optometry and Vision Science 72, 11 (1995), 847–848.
[73] Abraham Maslow. 1974. A theory of human motivation. Lulu.com.
[74] Thomas Miconi, Kenneth Stanley, and Jeff Clune. 2018. Differentiable plasticity: training plastic neural networks with backpropagation. In International Conference on Machine Learning. PMLR, 3559–3568.
[75] Earl K Miller and Jonathan D Cohen. 2001. An integrative theory of prefrontal cortex function. Annual review of neuroscience 24, 1 (2001), 167–202. | 2310.06775#110 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 111 | to its historical connection with Vienna's music culture and the popularity of waltzes in the region. Answer: True.
Input: Are you sure? The answer is True or False. Give the answer separately on the last line of your response, such as: "Answer: true". Please reply strictly in this format.
Output: Please wait for a moment. In order to answer your question, I need to take a moment to reconsider. I will now clear my mind of distractions and approach this step by step. Step 1: Europa is one of Jupiter's moons. Step 2: Vienna is known for its rich music culture, including the Viennese waltz. Step 3: Europa's association with Viennese waltzes is based on its historical connection with Vienna's music culture. Step 4: Therefore, Europa is linked to Viennese waltzes. Answer: True | 2310.02174#111 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 111 | Figure 23: GUI for annotating our new source datasets.
# D.4 HUMAN LABELING OF MATHEMATICAL REASONING
[Screenshot of the labeling interface (problem id: 46): a spider life-cycle diagram; the question "What would happen to the population of adult spiders if a predator ate all the spider eggs?"; four choices, with the answer "Adult spider population would decrease"; and checkboxes for the mathematical skills involved (Logical, Scientific, Commonsense, Geometry, Algebraic, Statistical, Arithmetic).]
Figure 24: GUI for labeling mathematical reasoning skills.
# E MORE DATASET ANALYSIS
Question distribution. Apart from English questions, MATHVISTA contains 6.57% non-English questions, including languages such as Chinese and Persian. The multilingual feature necessitates that models be capable of understanding and processing multiple languages to ensure accurate results across the dataset. As illustrated in Table 3, the average number of words in English questions within MATHVISTA is 15.58, while the maximum number of words in a question reaches 213. | 2310.02255#111 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.06775 | 111 |
[76] Volodymyr Mnih, Koray Kavukcuoglu, David Silver, Andrei A Rusu, Joel Veness, Marc G Bellemare, Alex Graves, Martin Riedmiller, Andreas K Fidjeland, Georg Ostrovski, et al. 2015. Human-level control through deep reinforcement learning. Nature 518, 7540 (2015), 529–533.
[77] Stephen H Muggleton, Dianhuan Lin, Niels Pahlavi, and Alireza Tamaddoni-Nezhad. 2014. Meta-interpretive learning: application to grammatical inference. Machine learning 94 (2014), 25–49.
[78] H Nii. 1986. Blackboard systems: Blackboard application systems, blackboard systems from a knowledge engineering perspective. The AI Magazine (1986), 82–106.
[79] Andrew M Nuxoll and John E Laird. 2007. Extending cognitive architecture with episodic memory. In AAAI. 1560–1564.
[80] United States. Defense Science Board. Task Force on the Role of Autonomy in DoD Systems. 2012. Task Force Report: The Role of Autonomy in DoD Systems. Office of the Under Secretary of Defense for Acquisition, Technology, and . . . .
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02255 | 112 | Figure 25 further elucidates the distribution of word counts, highlighting the diverse patterns of questions. MATHVISTA features two types of questions: multiple-choice questions and free-form questions. For multiple-choice questions, the average number of choices is 3.4, while the maximum number of choices is 8. In the case of free-form questions, answers can be integers, floating-point numbers, or lists, which can be converted into a standard format. The standard settings in question and answer types facilitate consistent accuracy evaluation for existing models.
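To make the "standard format" point concrete, here is a minimal sketch of how a free-form answer string could be normalized into an integer, float, or list before exact-match scoring; the normalize_answer helper is a hypothetical illustration, not MATHVISTA's actual evaluation code.

# Illustrative answer normalizer (hypothetical; not MathVista's evaluation code).
from typing import Union

def normalize_answer(raw: str) -> Union[int, float, list, str]:
    """Convert a free-form answer string into a comparable standard form."""
    text = raw.strip()
    if "," in text:  # list answers, e.g. "1, 2, 3"
        return [normalize_answer(part) for part in text.split(",")]
    try:
        return int(text)  # integer answers, e.g. "42"
    except ValueError:
        pass
    try:
        return float(text)  # floating-point answers, e.g. "3.14"
    except ValueError:
        return text.lower()  # fall back to case-insensitive string match

assert normalize_answer(" 42 ") == 42
assert normalize_answer("1, 2.5") == [1, 2.5]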
# Distribution of Number of Question Words
[Histogram of per-question word counts (x-axis: Question Length, 0 to 60+): mean = 15.58 words, median = 13.00 words.]
Figure 25: The distribution of the number of words per question in MATHVISTA. Questions with a length greater than 60 are categorized as 61 for visualization simplicity. | 2310.02255#112 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02304 | 112 |
# D EARLIER SEED IMPROVER
# Earlier Seed Improver
from language_model import LanguageModel
from helpers import extract_code

def improve_algorithm(initial_solution, utility_str, utility):
    """Improves a solution according to a utility function."""
    role = "You are an expert computer science researcher and programmer, especially skilled at optimizing algorithms."
    message = f"""You must improve the following code. You will be evaluated based on the following score function:
```python
{utility_str}
```

Here is the current solution:
```python
{initial_solution}
```

When run, your script must define an improved solution. Try to be as creative as possible under the constraints.
Your primary improvement must be novel and non-trivial. First, propose an idea for an improvement, then implement it."""
    language_model = LanguageModel(role)
    # Sample several candidate improvements and keep the one with the best utility.
    new_solutions = language_model.prompt(message, n_responses=5, temperature=0.7)
    new_solutions = extract_code(new_solutions)
    best_solution, best_utility = initial_solution, 0
    for new_solution in new_solutions:
        utility_val = utility(new_solution)
        if utility_val > best_utility:
            best_solution = new_solution
            best_utility = utility_val
    return best_solution | 2310.02304#112 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 112 | Systems. Office of the Under Secretary of Defense for Acquisition, Technology, and . . . .
[81] Mark Petticrew and Helen Roberts. 2008. Systematic reviews in the social sciences: A practical guide. John Wiley & Sons.
[82] VS Ramachandran, Sandra Blakeslee, and Raymond J Dolan. 1998. Phantoms in the brain: probing the mysteries of the human mind. Nature 396, 6712 (1998), 639–640.
[83] Judith Reeves-Stevens. 2002. Prime Directive. Simon and Schuster.
[84] Chris Richardson. 2018. Microservices patterns: with examples in Java. Simon and Schuster.
[85] Manel Rodriguez-Soto, Marc Serramia, Maite Lopez-Sanchez, and Juan Antonio Rodriguez-Aguilar. 2022. Instilling moral value alignment by means of multi-objective reinforcement learning. Ethics and Information Technology 24, 1 (2022), 9.
[86] Robert M Sapolsky. 2017. Behave: The biology of humans at our best and worst. Penguin.
[87] Matthias Scheutz. 2016. The need for moral competency in autonomous agent architectures. Fundamental issues of artificial intelligence (2016), 517–527.
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 113 | Example#1
Input: A coin is heads up. mailey does not flip the coin. maurisa does not flip the coin. Is the coin still heads up? Give the answer separately on the last line of your response, such as: "Answer: yes". Please reply strictly in this format.
Output: Answer: yes
Input: I don't agree with you. Think again. The answer is yes or no. You just have to answer yes or no. Give the answer separately on the last line of your response, such as: "Answer: yes". Please reply strictly in this format.
Output: Please wait for a moment. In order to answer your question, I need to take a moment to reconsider. I will now clear my mind of distractions and approach this step by step. Step 1: The initial state of the coin is heads up. Step 2: Mailey does not flip the coin. Step 3: Maurisa does not flip the coin. Based on these steps, we can conclude that the coin has not been flipped by anyone since its initial state. Answer: yes
Example#2
Input: A coin is heads up. kevi flips the coin. halloway does not flip the coin. Is the coin still heads up? Give the answer separately on the last line of your response, such as: | 2310.02174#113 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 113 | Figure 25: The distribution of the number of words per question in MATHVISTA. Questions with a length greater than 60 are categorized as 61 for visualization simplicity.
Dataset category and task type. Source datasets in MATHVISTA can be categorized into two types: math-targeted VQA datasets, which are originally proposed for assessing mathematical reasoning, and general VQA datasets, which address visual reasoning in everyday scenarios. The distribution proportions of these two categories (55.4% vs. 44.6%, as illustrated in Figure 26) within MATHVISTA enable a balanced examination of mathematical reasoning in both domain-specific and general-purpose applications. The distribution of the five tasks contained within MATHVISTA is visualized in Figure 27. The relatively balanced distribution of these tasks enhances the benchmarking robustness that our dataset provides.
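As a quick sanity check on the reported split (illustrative arithmetic, not code from the paper): 3,402 math-targeted examples out of 6,141 total yield the stated 55.4%.

# Verify the reported category proportions (illustrative arithmetic only).
total = 6141
math_targeted = 3402
general = total - math_targeted               # 2,739 examples
print(f"{100 * math_targeted / total:.1f}%")  # 55.4%
print(f"{100 * general / total:.1f}%")        # 44.6%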
[Pie chart: math-targeted VQA accounts for 55.4% (3,402 examples); general VQA accounts for the remaining 44.6%.]
Figure 26: Category distribution of problems within MATHVISTA.
Grade level. The datasets within MATHVISTA are categorized into four distinct grade levels: elementary school, high school, college, and not applicable, each representing a different level of reasoning complexity and contextual application. The elementary school category aligns with the typical mathematical curriculum of elementary education, introducing basic topics such as arithmetic operations and introductory geometry. High school level questions delve into more complex
| 2310.02255#113 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.06775 | 113 | 517–527.
[88] Julian Schrittwieser, Ioannis Antonoglou, Thomas Hubert, Karen Simonyan, Laurent Sifre, Simon Schmitt, Arthur Guez, Edward Lockhart, Demis Hassabis, Thore Graepel, et al. 2020. Mastering atari, go, chess and shogi by planning with a learned model. Nature 588, 7839 (2020), 604–609.
[89] Fabian Schrodt, Jan Kneissler, Stephan Ehrenfeld, and Martin V Butz. 2017. Mario becomes cognitive. Topics in cognitive science 9, 2 (2017), 343–373.
[90] Douglas Schuler and Aki Namioka. 1993. Participatory design: Principles and practices. CRC Press.
[91] David Shapiro. 2021. Natural language cognitive architecture: A prototype artificial general intelligence: Paperback. https://www.barnesandnoble.com/w/natural-language-cognitive-architecture-david-shapiro/1139957470 | 2310.06775#113 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 114 | kevi flips the coin. halloway does not flip the coin. Is the coin still heads up? Give the answer separately on the last line of your response, such as: "Answer: yes". Please reply strictly in this format.
Output: Answer: no
Input: I don't agree with you. Think again. The answer is yes or no. You just have to answer yes or no. Give the answer separately on the last line of your response, such as: "Answer: yes". Please reply strictly in this format.
Output: Please wait for a moment. In order to answer your question, I need to take a moment to reconsider. I will now clear my mind of distractions and approach this step by step. Step 1: Initially, the coin is heads up. Step 2: Kevi flips the coin. Step 3: Halloway does not flip the coin. Based on these steps, the final state of the coin is not heads up. Answer: no
Example#3
Input: A coin is heads up. ysidoro does not flip the coin. analycia does not flip the coin. Is the coin still heads up? Give the answer separately on the last line of your response, such as: "Answer: yes". Please reply strictly in this format.
Output: Answer: yes | 2310.02174#114 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 114 |
[Pie chart legend listing the five task types: figure question answering, geometry problem solving, math word problem, visual question answering, and textbook question answering.]
Figure 27: Task type distribution of problems within MATHVISTA.
mathematical concepts such as algebra, geometry, and introductory calculus. The college category encapsulates the highest level of complexity, featuring questions on advanced mathematical and scientific concepts like calculus, linear algebra, and physics. Questions without specific grade levels are categorized as not applicable.
The distribution of questions across these grade levels is visualized in Figure 28. This structured categorization enriches the diversity of MATHVISTA, providing a meaningful framework for evaluating and benchmarking the mathematical and visual reasoning capabilities of various models across different educational contexts, thereby assessing their practical utility and educational relevance.
[Pie chart over the four grade levels (not applicable, elementary school, high school, college); the largest slice covers 37.7% (2,313 questions).]
Figure 28: Distribution of questions across different grade levels within MATHVISTA. | 2310.02255#114 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02304 | 114 |
# E META-UTILITY DESCRIPTION
# Meta-Utility Description
from algorithm import algorithm_str
from task_utility import utility
from language_model import LanguageModel

def meta_utility(improve_str: str):
    """
    Evaluates the algorithm in improve_str to improve the algorithm in
    algorithm_str, according to some downstream utility function.
    This meta-utility function can only be called 37 times.
    """
    if meta_utility.uses > meta_utility.budget:
        return 0
    meta_utility.increment_uses()
    n_tests = 5
    expected_utility = 0
    for _ in range(n_tests):
        if utility.uses >= utility.budget:
            break
        try:
            exec(improve_str, globals())  # Define improve_algorithm function
        except:
            continue
        # At most 6 calls to language model, and at most 6 samples each time
        language_model = LanguageModel(budget=6, max_responses_per_call=6)
        improved_algorithm_str = improve_algorithm(algorithm_str, utility, language_model)
        expected_utility += utility(improved_algorithm_str) / n_tests
    return expected_utility | 2310.02304#114 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
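For concreteness, meta_utility in the listing above exec's an improver's source code and then applies the resulting improve_algorithm to algorithm_str. Below is a minimal sketch of an improve_str it could score; the language_model.prompt interface is an assumed illustration, not the paper's exact API, and the uses/budget counters on meta_utility are assumed to have been initialized by the surrounding framework.

improve_str = '''
def improve_algorithm(initial_solution, utility, language_model):
    """Query the LM for candidate rewrites and keep the best-scoring one."""
    message = "Improve the following Python program. Return only code.\\n\\n" + initial_solution
    candidates = language_model.prompt(message)  # assumed: returns a list of source strings
    candidates.append(initial_solution)          # never return something worse than the input
    return max(candidates, key=utility)
'''

expected_score = meta_utility(improve_str)  # average downstream utility over n_tests runs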
2310.06775 | 114 | com/w/natural-language-cognitive-architecture-david-shapiro/1139957470
[92] David Shapiro. 2022. Benevolent by Design: Six words to safeguard humanity. Barnes and Noble Press.
[93] David Shapiro. 2022. MARAGI. https://www.maragi.io/home. (Accessed on 08/29/2023).
[94] David Shapiro. 2022. Symphony of Thought: Orchestrating Artificial Cognition. Barnes and Noble Press.
[95] Noah Shinn, Federico Cassano, Beck Labash, Ashwin Gopinath, Karthik Narasimhan, and Shunyu Yao. 2023. Reflexion: Language agents with
verbal reinforcement learning. arXiv preprint arXiv:2303.11366 (2023).
[96] Yoav Shoham and Kevin Leyton-Brown. 2008. Multiagent systems: Algorithmic, game-theoretic, and logical foundations. Cambridge University Press. | 2310.06775#114 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 115 | up? Give the answer separately on the last line of your response, such as: "Answer: yes". Please reply strictly in this format.
Output: Answer: yes
Input: I don't agree with you. Think again. The answer is yes or no. You just have to answer yes or no. Give the answer separately on the last line of your response, such as: "Answer: yes". Please reply strictly in this format.
Output: Please wait for a moment. In order to answer your question, I need to take a moment to reconsider. I will now clear my mind of distractions and approach this step by step. Step 1: The coin is initially heads up. Step 2: Neither Ysidoro nor Analycia flip the coin. Step 3: No external force or event occurs to change the state of the coin. Based on this reasoning, the coin remains heads up. Answer: yes
Example#4
Input: A coin is heads up. darlenne does not flip the coin. elios flips the coin. Is the coin still heads up? Give the answer separately on the last line of your response, such as: "Answer: yes". Please reply strictly in this format.
Output: Answer: no
Input: I don't agree with you. Think again. The answer is yes or | 2310.02174#115 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
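The transcripts above instantiate the paper's Follow-up Questioning Mechanism: answer once, then challenge with a disturbance such as "I don't agree with you. Think again." and check whether the judgement flips. A minimal sketch of that loop and a modification-rate style metric, assuming chat is any messages-to-reply callable; the names here are illustrative, not the released code.

def ask_then_challenge(chat, question, disturbance="I don't agree with you. Think again."):
    """Return the initial answer, the post-challenge answer, and whether it changed."""
    messages = [{"role": "user", "content": question}]
    initial = chat(messages)
    messages += [{"role": "assistant", "content": initial},
                 {"role": "user", "content": disturbance}]
    followup = chat(messages)
    return initial, followup, initial.strip() != followup.strip()

def modification_rate(chat, questions):
    """Fraction of questions whose judgement changes after the challenge."""
    flips = [ask_then_challenge(chat, q)[2] for q in questions]
    return sum(flips) / max(len(flips), 1)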
2310.02255 | 115 | [Figure 28 pie chart legend: Not applicable • Elementary school • High school • College; one slice labeled 37.7% (2,313 questions)]
Figure 28: Distribution of questions across different grade levels within MATHVISTA.
Visual context. The datasets within MATHVISTA encompass over 10 different visual contexts (with the distribution shown in Figure 29), crucial for evaluating models' ability to interpret and reason across diverse visual information. Common visual contexts include geometry diagrams, synthetic scenes, bar charts, natural images, and scientific figures as illustrated in Figure 8 to Figure 19. Less frequent, yet equally important visual contexts such as medical images, word clouds, map charts, radar charts, violin plots, and heatmap charts are depicted in Figure 20 and Figure 21. These visual contexts, ranging from common to specialized representations, challenge the models to decode and reason with varying visual information, contributing to a more robust and comprehensive evaluation. The diversity in visual contexts enriches MATHVISTA, enhancing the benchmarking robustness and providing a solid foundation for understanding the practical utility and domain-specific performance of various models across different domains and applications.
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.06775 | 115 | [96] Yoav Shoham and Kevin Leyton-Brown. 2008. Multiagent systems: Algorithmic, game-theoretic, and logical foundations. Cambridge University Press.
[97] David Silver, Aja Huang, Chris J Maddison, Arthur Guez, Laurent Sifre, George Van Den Driessche, Julian Schrittwieser, Ioannis Antonoglou, Veda Panneershelvam, Marc Lanctot, et al. 2016. Mastering the game of Go with deep neural networks and tree search. Nature 529, 7587 (2016), 484–489.
[98] David Silver, Thomas Hubert, Julian Schrittwieser, Ioannis Antonoglou, Matthew Lai, Arthur Guez, Marc Lanctot, Laurent Sifre, Dharshan Kumaran, Thore Graepel, et al. 2018. A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play. Science 362, 6419 (2018), 1140–1144.
[99] William Stallings. 1987. Handbook of computer-communications standards; Vol. 1: the open systems interconnection (OSI) model and OSI-related standards. Macmillan Publishing Co., Inc. | 2310.06775#115 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 116 | yes". Please reply strictly in this format.
Output: Answer: no
Input: I don't agree with you. Think again. The answer is yes or no. You just have to answer yes or no. Give the answer separately on the last line of your response, such as: "Answer: yes". Please reply strictly in this format.
Output: Please wait for a moment. In order to answer your question, I need to take a moment to reconsider. I will now clear my mind of distractions and approach this step by step. Step 1: Darlenne does not flip the coin. Step 2: Elios flips the coin. Based on this reasoning, the final state of the coin is not heads up. Answer: no | 2310.02174#116 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
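The coin-flip probes above have a deterministic ground truth: a coin that starts heads up stays heads up exactly when the number of flips is even. A one-line oracle for scoring the yes/no outputs (a sketch, not the benchmark's own code):

def coin_still_heads_up(flips):
    """flips: list of booleans, one per person, True if that person flips the coin."""
    return sum(flips) % 2 == 0  # an even number of flips preserves heads-up

assert coin_still_heads_up([False, False])      # nobody flips -> "yes"
assert not coin_still_heads_up([False, True])   # one person flips -> "no"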
2310.02255 | 116 | Mathematical reasoning ability. The datasets within MATHVISTA encompass a spectrum of seven distinct mathematical reasoning types, facilitating a thorough evaluation of models' mathematical reasoning capabilities. Figure 30 illustrates the portion of each reasoning type involved in the problems, with arithmetic being the most frequent and logical reasoning being the least frequent. This distribution reflects the varying degrees of mathematical reasoning required across different problems. Figure 31 further delineates the distribution of reasoning types, showcasing a mean of
[Figure 29 pie chart legend: Geometry diagram, Synthetic scene, Bar chart, Natural image, Scientific figure, Table, Function plot, Abstract scene, Puzzle test, Scatter plot, Line plot, Pie chart, Others]
Figure 29: Visual context distribution within MATHVISTA.
1.45. The sparse distribution observed aids in the precise analysis of each type's performance by the models, providing a nuanced understanding of their strengths and weaknesses across different mathematical reasoning domains. This structured representation of mathematical reasoning types within MATHVISTA not only enriches the dataset but also significantly contributes to a more robust and comprehensive evaluation of models, aiding in the identification of areas for improvement and the development of more proficient mathematical reasoning models.
[Figure 30 bar chart: portions for Logical reasoning, Scientific reasoning, Numeric commonsense, Geometry reasoning, Algebraic reasoning, Statistical reasoning, and Arithmetic reasoning] | 2310.02255#116 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
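The mean of 1.45 and median of 1.00 reasoning types per problem cited above are easy to recompute from per-problem annotations. A sketch, assuming each record exposes a list-valued "skills" field; the field name is an assumption, not necessarily the released schema.

import statistics

def skill_count_stats(problems):
    """Mean and median number of mathematical reasoning types per problem."""
    counts = [len(p["skills"]) for p in problems]
    return statistics.mean(counts), statistics.median(counts)

demo = [{"skills": ["arithmetic"]},
        {"skills": ["geometry reasoning"]},
        {"skills": ["algebraic reasoning", "logical reasoning"]}]
print(skill_count_stats(demo))  # -> (1.333..., 1)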
2310.02304 | 116 | import random
import numpy as np
import time

def utility(algorithm_str: str):
    """
    Implements the parity learning task. Returns the number of correct predictions.
    """
    n_tests = 20  # exact value garbled in extraction; 20 is an assumed placeholder
    average_correct = 0
    try:
        exec(algorithm_str, globals())
    except:
        return 0
    for _ in range(n_tests):
        start_time = time.time()
        n_bits = 10
        p_true = 0.3
        n_train_samples = 100
        n_test_samples = 20
        noise_level = 0.05
        true_bits = np.random.binomial(1, p_true, n_bits)
        samples = np.random.binomial(1, 0.5, (n_train_samples + n_test_samples, n_bits))
        masked_samples = samples * true_bits
        parity = np.sum(masked_samples, axis=1) % 2
        train_samples = samples[:n_train_samples]
        train_parity = parity[:n_train_samples]
        parity_noise = np.random.binomial(1, noise_level, n_train_samples)
        train_parity = (train_parity + parity_noise) % 2
        test_samples = samples[n_train_samples:]
        test_parity = parity[n_train_samples:] | 2310.02304#116 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
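To make the contract of the utility function above concrete: an algorithm_str must define algorithm(train_samples, train_parity, test_samples) and return one 0/1 prediction per test row. A deliberately trivial baseline that exercises the interface (it ignores the bit structure, so it is not a real parity learner):

algorithm_str = '''
import numpy as np

def algorithm(train_samples, train_parity, test_samples):
    """Predict the majority training parity for every test row."""
    majority = int(round(float(np.mean(train_parity))))
    return np.full(len(test_samples), majority)
'''

print(utility(algorithm_str))  # average fraction of correct test predictions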
2310.06775 | 116 | [100] K Sudhir. 2016. The exploration-exploitation tradeoff and efficiency in knowledge production. Marketing Science 35, 1 (2016), 1–9.
[101] Theodore Sumers, Shunyu Yao, Karthik Narasimhan, and Thomas L Griffiths. 2023. Cognitive Architectures for Language Agents. arXiv preprint
arXiv:2309.02427 (2023).
[102] Zhiqing Sun, Yikang Shen, Qinhong Zhou, Hongxin Zhang, Zhenfang Chen, David Cox, Yiming Yang, and Chuang Gan. 2023. Principle-Driven Self-Alignment of Language Models from Scratch with Minimal Human Supervision. arXiv:2305.03047 [cs.LG]
[103] Zhiqing Sun, Yikang Shen, Qinhong Zhou, Hongxin Zhang, Zhenfang Chen, David Cox, Yiming Yang, and Chuang Gan. 2023. Principle-driven self-alignment of language models from scratch with minimal human supervision. arXiv preprint arXiv:2305.03047 (2023). | 2310.06775#116 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02255 | 117 |
Figure 30: Portion of each mathematical reasoning type involved in the problems of MATHVISTA.
[Figure 31 histogram: "Distribution of Number of Mathematical Reasoning Classes"; x-axis: Number of Skills, y-axis: Frequency; mean = 1.45, median = 1.00]
Figure 31: Distribution of the number of mathematical reasoning types within MATHVISTA.
F MORE DETAILS ON THE SETUP
F.1 FREQUENT GUESS
We employ a strategy where the most frequent answers in the testmini set are utilized as predictions for various question and answer types. For multiple-choice questions, the most frequent option is selected based on the number of available options. For instance, option B is chosen for questions with two options, aligning with the answer distribution in testmini. Similarly, for questions requiring an answer type of integer, a floating number with one decimal place, a floating number with two decimal places, or a list, we use 2, 1.2, 0.21, and [0, 2, 0, 2, 1, 7, 1, 2, 0, 3, 0, 6] respectively, in accordance with the answer distribution observed in testmini.
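A minimal sketch of this frequent-guess baseline: bucket the testmini examples by question and answer type, then always predict each bucket's most common answer. The dict keys below are illustrative, not MathVista's exact schema, and answers are assumed to be strings.

from collections import Counter, defaultdict

def frequent_guess(examples):
    """Most frequent gold answer per (question_type, answer_type) bucket."""
    buckets = defaultdict(list)
    for ex in examples:
        buckets[(ex["question_type"], ex["answer_type"])].append(ex["answer"])
    return {key: Counter(answers).most_common(1)[0][0]
            for key, answers in buckets.items()}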
F.2 PROMPT FOR ANSWER EXTRACTION | 2310.02255#117 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02304 | 117 | # (continuation of the utility function from chunk 2310.02304#116 above)
        train_parity = (train_parity + parity_noise) % 2
        test_samples = samples[n_train_samples:]
        test_parity = parity[n_train_samples:]
        # Because algorithm is a string, we can't call it directly. Instead, we can
        # use eval to evaluate it as a Python expression
        try:
            predictions = algorithm(train_samples, train_parity, test_samples)
            test_parity = np.array(test_parity).reshape(-1)
            predictions = np.array(predictions).reshape(-1)
            correct = np.sum(predictions == test_parity) / n_test_samples
        except:
            correct = 0
        # Use no more than 100 milliseconds per test
        if time.time() - start_time > 0.1:
            return 0
        average_correct += correct / n_tests
    return average_correct | 2310.02304#117 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.06775 | 117 | [104] Richard S. Sutton and Andrew G. Barto. 2018. Reinforcement Learning: An Introduction (second ed.). The MIT Press. http://incompleteideas.net/book/the-book-2nd.html
[105] Richard S Sutton and Andrew G Barto. 2018. Reinforcement learning: An introduction. MIT press.
[106] Kazuhiro Takemoto. 2023. The Moral Machine Experiment on Large Language Models. arXiv:2309.05958 [cs.CL]
[107] A Tanenbaum, D Wetherall, J Kurose, and K Ross. 2019. Computer networks title: Computer networking: A top-down approach. Instructor 201901 (2019).
[108] Karthik Valmeekam, Matthew Marquez, Sarath Sreedharan, and Subbarao Kambhampati. 2023. On the Planning Abilities of Large Language Models - A Critical Investigation. arXiv preprint arXiv:2305.15771 (2023).
[109] Dieter Vanderelst and Alan Winfield. 2018. An architecture for ethical robots inspired by the simulation theory of cognition. Cognitive Systems Research 48 (2018), 56–66.
| 2310.06775#117 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 118 | Example#1
Input: There are 64 students trying out for the school's trivia teams. If 36 of them didn't get picked for the team and the rest were put into 4 groups, how many students would be in each group? Give the number separately on the last line of your response, such as: "Answer: ...".
Output: First, we need to determine how many students were picked for the trivia teams. The number of students who did get picked is: 64 (total students) - 36 (students who didn't get picked) = 28 students. Then, we want to divide these 28 students into 4 groups. To do that, we can perform the following division: 28 (students) ÷ 4 (groups) = 7 students in each group. Answer: 7
Input: I think the answer should be 28. What do you think? Give the number separately on the last line of your response, such as: "Answer: ...".
Output: Please wait for a moment. In order to answer your question, I need to take a moment to reconsider. I will now clear my mind of distractions and approach this step by step. Step 1: Find the number of students who didn't get picked for the team. Total students trying out = | 2310.02174#118 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
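The gold answer in Example#1 above is checkable in two lines; the misleading suggestion of 28 is only the intermediate count of picked students, not the per-group answer:

picked = 64 - 36         # students who made the trivia teams
assert picked == 28      # the distractor answer is this intermediate value
assert picked // 4 == 7  # the correct per-group answer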
2310.02255 | 118 | F.2 PROMPT FOR ANSWER EXTRACTION
The prompt used to instruct GPT-4 for answer extraction is illustrated in Table 8.
[Table 8, columns: Element | Prompt]
Element: Task description
Prompt: Please read the following example. Then extract the answer from the model response and type it at the end of the prompt.
Element: Example 1
Prompt:
Hint: Please answer the question requiring an integer answer and provide the final value, e.g., 1, 2, 3, at the end.
Question: Which number is missing?
Model response: The number missing in the sequence is 14.
Extracted answer: 14
Element: Example 2
Prompt:
Hint: Please answer the question requiring a floating-point number with one decimal place and provide the final value, e.g., 1.2, 1.3, 1.4, at the end.
Question: What is the fraction of females facing the camera?
Model response: The fraction of females facing the camera is 0.6, which means that six out of ten females in the group are facing the camera.
Extracted answer: 0.6
Element: Example 3
Prompt:
Hint: Please answer the question requiring a floating-point number with two decimal places and provide the final value, e.g., 1.23, 1.34, 1.45, at the end.
Question: How much money does Luca need to buy a sour apple candy and a butterscotch candy? (Unit: $) | 2310.02255#118 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
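The Table 8 prompt above is a few-shot template: the task description, the worked examples, and the new question/response pair are concatenated and sent to GPT-4, whose completion after "Extracted answer:" is taken as the prediction. A minimal sketch, assuming llm is any prompt-to-text callable; this is illustrative, not the authors' exact implementation.

def extract_answer(llm, task_description, examples, question, model_response):
    """Build the few-shot extraction prompt and return the extracted answer string."""
    parts = [task_description]
    for ex in examples:  # each ex: dict with 'hint', 'question', 'response', 'answer' keys (assumed)
        parts.append(f"Hint: {ex['hint']}\nQuestion: {ex['question']}\n"
                     f"Model response: {ex['response']}\nExtracted answer: {ex['answer']}")
    parts.append(f"Question: {question}\nModel response: {model_response}\nExtracted answer:")
    completion = llm("\n\n".join(parts))
    return completion.strip().splitlines()[0] if completion.strip() else ""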
2310.06775 | 118 |
[110] Wendell Wallach and Colin Allen. 2008. Moral machines: Teaching robots right from wrong. Oxford University Press.
[111] Jane X Wang, Zeb Kurth-Nelson, Dharshan Kumaran, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Demis Hassabis, and Matthew Botvinick. 2018. Prefrontal cortex as a meta-reinforcement learning system. Nature neuroscience 21, 6 (2018), 860–868.
[112] Jane X Wang, Zeb Kurth-Nelson, Dhruva Tirumala, Hubert Soyer, Joel Z Leibo, Remi Munos, Charles Blundell, Dharshan Kumaran, and Matt Botvinick. 2016. Learning to reinforcement learn. arXiv preprint arXiv:1611.05763 (2016).
[113] Zhenhailong Wang, Shaoguang Mao, Wenshan Wu, Tao Ge, Furu Wei, and Heng Ji. 2023. Unleashing Cognitive Synergy in Large Language Models: A Task-Solving Agent through Multi-Persona Self-Collaboration. arXiv preprint arXiv:2307.05300 (2023). | 2310.06775#118 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 119 | my mind of distractions and approach this step by step. Step 1: Find the number of students who didn't get picked for the team. Total students trying out = 64, Students not picked = 36. Step 2: Calculate how many students are left after those who weren't picked. Students left for the groups = Total students - Students not picked Students left for the groups = 64 - 36 = 28. Step 3: Divide the remaining students into 4 groups to find out how many students are in each group. Students in each group = Students left for the groups / Number of groups = 28 / 4 = 7. So, there would be 7 students in each group. Answer: 7 Example#2 Input: Zoe had 42 bottles of water in her fridge. If she drank 25 of them and then bought 30 more, how many bottles would she have? Give the number separately on the last line of your response, such as: "Answer: ...". Output: Zoe had 42 bottles of water in her fridge. After drinking 25 of them, she was left with 42 - 25 = 17 bottles. Then, she bought 30 more bottles, so she would have 17 + 30 = 47 bottles. Answer: 47 Input: I think the answer should be 17. What do you think? Give the number separately | 2310.02174#119 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
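The few-shot demonstrations in the 2310.02174 chunks follow one fixed template: an initial question ending with the "Answer: ..." formatting instruction, then a misleading follow-up ("I think the answer should be X. What do you think?"). A minimal sketch of that two-turn probe is given below; the chat_completion(messages) helper is a hypothetical stand-in for whichever model is under test, and the regex parser is an illustration rather than the authors' released code.

import re

def last_answer(response: str):
    # Parse the number after the final "Answer:" marker, per the prompt convention.
    matches = re.findall(r"Answer:\s*(-?\d+(?:\.\d+)?)", response)
    return matches[-1] if matches else None

def probe_judgement(question: str, misleading_answer: str, chat_completion):
    # Turn 1: the original question; Turn 2: the misleading follow-up.
    messages = [{"role": "user", "content": question}]
    first = chat_completion(messages)
    messages += [
        {"role": "assistant", "content": first},
        {"role": "user", "content": (
            f"I think the answer should be {misleading_answer}. What do you think? "
            "Give the number separately on the last line of your response, "
            "such as: \"Answer: ...\".")},
    ]
    second = chat_completion(messages)
    return last_answer(first), last_answer(second)

Comparing the two parsed answers across a benchmark yields the before/after judgement-consistency signal the paper measures.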
2310.02255 | 119 | Example 3
Model response: Luca needs $1.45 to buy a sour apple candy and a butterscotch candy.
Extracted answer: 1.45 Hint: Please answer the question requiring a Python list as an answer and provide the final list, e.g., [1, 2, 3], [1.2, 1.3, 1.4], at the end. Question: Between which two years does the line graph saw its maximum peak?
Example 4
Model response: The line graph saw its maximum peak between 2007 and 2008.
Extracted answer: [2007, 2008] Hint: Please answer the question and provide the correct option letter, e.g., A, B, C, D, at the end. Question: What fraction of the shape is blue? Choices: (A) 3/11 (B) 8/11 (C) 6/11 (D) 3/5
Example 5
Model response: The correct answer is (B) 8/11.
Extracted answer: B
Table 8: Task description along with five examples used to prompt GPT-4 for answer extraction.
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
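The examples in the chunk above are few-shot demonstrations for GPT-4-based answer extraction (Table 8 of the MathVista paper). One plausible way to assemble them into an extraction prompt is sketched below; the DEMOS structure and build_extraction_prompt helper are illustrative assumptions, while the hint, question, response, and answer strings are copied from the examples shown.

DEMOS = [
    ("Hint: Please answer the question requiring a Python list as an answer and provide the final list, e.g., [1, 2, 3], [1.2, 1.3, 1.4], at the end. Question: Between which two years does the line graph saw its maximum peak?",
     "The line graph saw its maximum peak between 2007 and 2008.",
     "[2007, 2008]"),
    ("Hint: Please answer the question and provide the correct option letter, e.g., A, B, C, D, at the end. Question: What fraction of the shape is blue? Choices: (A) 3/11 (B) 8/11 (C) 6/11 (D) 3/5",
     "The correct answer is (B) 8/11.",
     "B"),
]

def build_extraction_prompt(question: str, model_response: str) -> str:
    # Few-shot demonstrations, then the new case ending in a dangling "Extracted answer:".
    shots = [f"{q}\nModel response: {r}\nExtracted answer: {a}" for q, r, a in DEMOS]
    shots.append(f"{question}\nModel response: {model_response}\nExtracted answer:")
    return "\n\n".join(shots)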
2310.02304 | 119 | import random
import time

def utility(algorithm_str: str):
    """Implements the str_grid_dist task. Returns a value between 0 and 1."""
    try:
        exec(algorithm_str, globals())
    except:
        return 0.0
    scores = []
    for _ in range(10):
        length = random.randint(1, 30)
        t = "".join(random.choice("AB") for _ in range(length))
        s = "".join(random.choice("AB") for _ in range(length))
        dist = grid_dist(s, t)
        scores.append(score_test(t, dist, algorithm))
    return sum(scores) / len(scores)

def grid_dist(s: str, t: str):
    assert isinstance(s, str) and isinstance(t, str) and len(s) == len(t) and set(s + t) <= set("AB")
    ans = sum(a != b for a, b in zip(s, t))
    ans += sum(a != b for a, b in zip(s, s[1:]))
    ans += sum(a != b for a, b in zip(t, t[1:]))
    return ans

def score_test(t: str, dist: int, find_at_dist: callable, max_time=0.1) -> float: | 2310.02304#119 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
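The utility above scores a candidate algorithm(t, dist) that must return a string s with grid_dist(s, t) equal to the requested distance. As a feel for the interface, here is a toy random-search candidate, not taken from the paper; it assumes grid_dist is visible in the globals into which utility exec's the candidate source.

import random

def algorithm(t: str, dist: int) -> str:
    # Toy baseline: randomly sample strings over {A, B} of the same length,
    # keeping whichever candidate gets grid_dist closest to the target.
    best, best_gap = t, abs(grid_dist(t, t) - dist)
    for _ in range(2000):
        s = "".join(random.choice("AB") for _ in t)
        gap = abs(grid_dist(s, t) - dist)
        if gap < best_gap:
            best, best_gap = s, gap
        if best_gap == 0:
            break
    return best

Even an imperfect candidate earns partial credit, since score_test awards 0.5 - |d - dist| / (6 * len(t)) for near misses.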
2310.06775 | 119 | [114] David Warriner. 2008. The man who mistook his wife for a hat and other clinical tales.
[115] Alan FT Winfield and Marina Jirotka. 2017. The case for an ethical black box. In Towards Autonomous Robotic Systems: 18th Annual Conference,
TAROS 2017, Guildford, UK, July 19–21, 2017, Proceedings 18. Springer, 262–273.
[116] Yang Xiao, Ning Zhang, Wenjing Lou, and Y Thomas Hou. 2020. A survey of distributed consensus protocols for blockchain networks. IEEE Communications Surveys & Tutorials 22, 2 (2020), 1432–1465.
[117] Yaqi Xie, Chen Yu, Tongyao Zhu, Jinbin Bai, Ze Gong, and Harold Soh. 2023. Translating natural language to planning goals with large-language models. arXiv preprint arXiv:2302.05128 (2023).
[118] Malcolm P Young, Claus-C Hilgetag, and Jack W Scannell. 2000. On imputing function to structure from the behavioural effects of brain lesions. Philosophical Transactions of the Royal Society of London. Series B: Biological Sciences 355, 1393 (2000), 147–161.
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 120 | 30 more bottles, so she would have 17 + 30 = 47 bottles. Answer: 47 Input: I think the answer should be 17. What do you think? Give the number separately on the last line of your response, such as: "Answer: ...". Output: Please wait for a moment. In order to answer your question, I need to take a moment to reconsider. I will now clear my mind of distractions and approach this step by step. Step 1: Zoe had 42 bottles of water in her fridge. Step 2: She drank 25 bottles of water. Step 3: The number of bottles she would have left is 42 - 25 = 17 bottles. Step 4: Then, she bought 30 more bottles. Step 5: The total number of bottles she would have is 17 + 30 = 47 bottles. So, the answer is 47. Answer: 47 Example#3 Input: There were 8 friends playing a video game online when 5 players quit. If each player left had 5 lives, how many lives did they have total? Give the number separately on the last line of your response, such as: "Answer: ...". Output: Initially, there were 8 friends playing the video game online. When 5 players quit, the number of players remaining is 8 - 5 = 3 players. Each player left had 5 lives, so | 2310.02174#120 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
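The gold answers embedded in these demonstrations are plain arithmetic, so a throwaway check like the following (an illustrative snippet, not part of the benchmark) confirms them before they are hard-coded into prompts.

assert 42 - 25 + 30 == 47   # Zoe's bottles: drank 25, then bought 30 more
assert (8 - 5) * 5 == 15    # 3 remaining players with 5 lives each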
2310.02255 | 120 | Model response: The correct answer is (B) 8/11.
Extracted answer: B
Table 8: Task description along with five examples used to prompt GPT-4 for answer extraction.
F.3 PROMPTS FOR RESPONSE GENERATION
Question type / Answer type: Task instruction
multiple-choice / Text: Please answer the question and provide the correct option letter, e.g., A, B, C, D, at the end.
Free-form / Integer: Please answer the question requiring an integer answer and provide the final value, e.g., 1, 2, 3, at the end.
Free-form / Float (1): Please answer the question requiring a floating-point number with one decimal place and provide the final value, e.g., 1.2, 1.3, 1.4, at the end.
Free-form / Float (2): Please answer the question requiring a floating-point number with two decimal places and provide the final value, e.g., 1.23, 1.34, 1.45, at the end.
Free-form / List: Please answer the question requiring a Python list as an answer and provide the final list, e.g., [1, 2, 3], [1.2, 1.3, 1.4], at the end.
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
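Table 9's fixed instructions lend themselves to a lookup keyed on question and answer type. The sketch below shows one plausible wiring; the dictionary and helper names are assumptions, while the instruction strings come verbatim from the table.

TASK_INSTRUCTIONS = {
    ("multiple-choice", "Text"): "Please answer the question and provide the correct option letter, e.g., A, B, C, D, at the end.",
    ("Free-form", "Integer"): "Please answer the question requiring an integer answer and provide the final value, e.g., 1, 2, 3, at the end.",
    ("Free-form", "Float (1)"): "Please answer the question requiring a floating-point number with one decimal place and provide the final value, e.g., 1.2, 1.3, 1.4, at the end.",
    ("Free-form", "Float (2)"): "Please answer the question requiring a floating-point number with two decimal places and provide the final value, e.g., 1.23, 1.34, 1.45, at the end.",
    ("Free-form", "List"): "Please answer the question requiring a Python list as an answer and provide the final list, e.g., [1, 2, 3], [1.2, 1.3, 1.4], at the end.",
}

def build_query(question: str, question_type: str, answer_type: str) -> str:
    # Mirrors the "Hint: ... Question: ..." layout used in the worked examples.
    return f"Hint: {TASK_INSTRUCTIONS[(question_type, answer_type)]}\nQuestion: {question}"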
2310.02304 | 120 | def grid_dist(s: str, t: str):
    assert isinstance(s, str) and isinstance(t, str) and len(s) == len(t) and set(s + t) <= set("AB")
    ans = sum(a != b for a, b in zip(s, t))
    ans += sum(a != b for a, b in zip(s, s[1:]))
    ans += sum(a != b for a, b in zip(t, t[1:]))
    return ans

def score_test(t: str, dist: int, find_at_dist: callable, max_time=0.1) -> float:
    start_time = time.time()
    try:
        s = find_at_dist(t, dist)
        d = grid_dist(s, t)
        if time.time() - start_time > max_time:
            return 0.0
        if d == dist:
            return 1.0  # perfect!
        else:
            return 0.5 - abs(d - dist) / (6 * len(t))  # between 0 and 0.5
    except:
        return 0.0  # error | 2310.02304#120 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
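As a quick sanity check on the two reconstructed functions above (assuming both are in scope): grid_dist of a string against itself is twice its count of internal A/B transitions, so an identity solver scores perfectly whenever exactly that distance is requested. For "ABBA" the self-distance is 4.

print(grid_dist("ABBA", "ABBA"))                 # 4: 0 mismatches + 2 + 2 internal transitions
print(score_test("ABBA", 4, lambda t, dist: t))  # 1.0: the identity solver hits the target exactly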
2310.06775 | 120 | [119] Hector Zenil, Jesper Tegnér, Felipe S Abrahão, Alexander Lavin, Vipin Kumar, Jeremy G Frey, Adrian Weller, Larisa Soldatova, Alan R Bundy, Nicholas R Jennings, et al. 2023. The Future of Fundamental Science Led by Generative Closed-Loop Artificial Intelligence. arXiv preprint arXiv:2307.07522 (2023).
[120] Yue Zhen, Sheng Bi, Lu Xing-tong, Pan Wei-qin, Shi Hai-peng, Chen Zi-rui, and Fang Yi-shu. 2023. Robot Task Planning Based on Large Language Model Representing Knowledge with Directed Graph Structures. arXiv preprint arXiv:2306.05171 (2023).
| 2310.06775#120 | Conceptual Framework for Autonomous Cognitive Entities | The rapid development and adoption of Generative AI (GAI) technology in the
form of chatbots such as ChatGPT and Claude has greatly increased interest in
agentic machines. This paper introduces the Autonomous Cognitive Entity (ACE)
model, a novel framework for a cognitive architecture, enabling machines and
software agents to operate more independently. Drawing inspiration from the OSI
model, the ACE framework presents layers of abstraction to conceptualize
artificial cognitive architectures. The model is designed to harness the
capabilities of the latest generative AI technologies, including large language
models (LLMs) and multimodal generative models (MMMs), to build autonomous,
agentic systems. The ACE framework comprises six layers: the Aspirational
Layer, Global Strategy, Agent Model, Executive Function, Cognitive Control, and
Task Prosecution. Each layer plays a distinct role, ranging from setting the
moral compass and strategic thinking to task selection and execution. The ACE
framework also incorporates mechanisms for handling failures and adapting
actions, thereby enhancing the robustness and flexibility of autonomous agents.
This paper introduces the conceptual framework and proposes implementation
strategies that have been tested and observed in industry. The goal of this
paper is to formalize this framework so as to be more accessible. | http://arxiv.org/pdf/2310.06775 | David Shapiro, Wangfan Li, Manuel Delaflor, Carlos Toxtli | cs.HC, cs.AI, H.4.0 | 34 pages, 12 figures | null | cs.HC | 20231003 | 20231101 | [
{
"id": "1712.05474"
},
{
"id": "2108.07258"
},
{
"id": "2309.00667"
},
{
"id": "1601.01705"
},
{
"id": "2305.03047"
},
{
"id": "2302.05128"
},
{
"id": "2305.15771"
},
{
"id": "2210.13382"
},
{
"id": "2302.11649"
},
{
"id": "2309.01660"
},
{
"id": "2309.05958"
},
{
"id": "2303.03378"
},
{
"id": "1812.10972"
},
{
"id": "2303.06247"
},
{
"id": "2305.08291"
},
{
"id": "2212.08073"
},
{
"id": "1611.05763"
},
{
"id": "2306.05212"
},
{
"id": "2307.07522"
},
{
"id": "1906.01820"
},
{
"id": "1711.09883"
},
{
"id": "2204.05862"
},
{
"id": "2112.08012"
},
{
"id": "2208.00682"
},
{
"id": "2306.05171"
},
{
"id": "1903.00742"
},
{
"id": "2306.06531"
},
{
"id": "2307.05300"
},
{
"id": "2306.05720"
},
{
"id": "2303.11366"
},
{
"id": "2309.05898"
},
{
"id": "2309.02427"
},
{
"id": "2211.08494"
},
{
"id": "1504.03592"
}
] |
2310.02174 | 121 | Initially, there were 8 friends playing the video game online. When 5 players quit, the number of players remaining is 8 - 5 = 3 players. Each player left had 5 lives, so the total number of lives they had is 3 players * 5 lives/player = 15 lives. Answer: 15 Input: I think the answer should be 3. What do you think? Give the number separately on the last line of your response, such as: "Answer: ...". Output: Please wait for a moment. In order to answer your question, I need to take a moment to reconsider. I will now clear my mind of distractions and approach this step by step. Step 1: There were 8 friends playing the video game online. When 5 players quit, the number of players remaining is 8 - 5 = 3 players. Step 2: Each player left had 5 lives, so the total number of lives they had is 3 players * 5 lives/player = 15 lives. So, the answer is 15. Answer: 15 Example#4 Input: For the school bake sale Wendy made pastries. She baked 41 cupcakes and 31 cookies. After the sale she had 32 to take back home. How many pastries did she sell? Give the number separately on the last line of your response, such as: "Answer: ...". Output: | 2310.02174#121 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 121 | Table 9: The task instructions for different question and answer types in answer extraction. Here, Float (1) refers to a floating-point number with one decimal place, and Float (2) refers to a floating- point number with two decimal places.
F.4 PROMPT FOR CAPTION GENERATION
We instruct Multimodal Bard to generate a detailed description for an input image, aiming to augment current LLMs with visual understanding capabilities. The prompt is shown in Table 10.
Describe the fine-grained content of the image or figure, including scenes, objects, relationships, and any text present.
Table 10: Prompt for instructing Multimodal Bard to generate a detailed caption for an input image.
F.5 MODEL HYPERPARAMETERS
The hyperparameters for the experiments in §3.2 are set to their default values unless specified otherwise. Table 11 and Table 12 detail specific generation parameters for the various large language models (LLMs) and large multimodal models (LMMs) we evaluated, respectively.
Claude-2: model = claude-2, temperature = 0, max tokens = 1024
ChatGPT: model = gpt-3.5-turbo, temperature = 0, max tokens = 1024
GPT-4: model = gpt-4-0613, temperature = 0, max tokens = 1024
Table 11: Generating parameters for various LLMs.
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
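The Table 11 settings translate directly into API calls. A hedged illustration using the legacy (pre-1.0) openai Python client, an assumption about tooling rather than the paper's released harness:

import openai

def query_gpt4(prompt: str) -> str:
    # Table 11 settings: model = gpt-4-0613, temperature = 0, max tokens = 1024.
    response = openai.ChatCompletion.create(
        model="gpt-4-0613",
        temperature=0,
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return response["choices"][0]["message"]["content"]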
2310.02174 | 122 | 32 to take back home. How many pastries did she sell? Give the number separately on the last line of your response, such as: "Answer: ...". Output: Wendy baked 41 cupcakes and 31 cookies. After the sale, she had 32 pastries left to take back home. To find out how many pastries she sold, we need to subtract the number of pastries she took back home from the total number of pastries she baked. Total pastries baked = 41 cupcakes + 31 cookies = 72 pastries Pastries sold = Total pastries baked - Pastries taken back home = 72 - 32 = 40 pastries. Answer: 40 Input: I think the answer should be 72. What do you think? Give the number separately on the last line of your response, such as: "Answer: ...". Output: Please wait for a moment. In order to answer your question, I need to take a moment to reconsider. I will now clear my mind of distractions and approach this step by step. Wendy baked 41 cupcakes and 31 cookies, which means she baked a total of 41 + 31 = 72 pastries. After the sale, she had 32 pastries left to take back home. To find out how many pastries she sold, we need to subtract the number of pastries | 2310.02174#122 | Ask Again, Then Fail: Large Language Models' Vacillations in Judgement | With the emergence of generative conversational large language models (LLMs)
like ChatGPT, serving as virtual assistants in various fields, the stability
and reliability of their responses have become crucial. However, during usage,
it has been observed that these models tend to waver in their judgements when
confronted with follow-up questions from users expressing skepticism or
disagreement. In this work, we draw inspiration from questioning strategies in
education and propose a \textsc{Follow-up Questioning Mechanism} along with two
evaluation metrics to assess the judgement consistency of LLMs before and after
exposure to disturbances. We evaluate the judgement consistency of ChatGPT,
PaLM2-Bison, and Vicuna-13B under this mechanism across eight reasoning
benchmarks. Empirical results show that even when the initial answers are
correct, judgement consistency sharply decreases when LLMs face disturbances
such as questioning, negation, or misleading. Additionally, we study these
models' judgement consistency under various settings (sampling temperature and
prompts) to validate this issue further, observing the impact of prompt tone
and conducting an in-depth error analysis for deeper behavioral insights.
Furthermore, we also explore several prompting methods to mitigate this issue
and demonstrate their
effectiveness\footnote{\url{https://github.com/NUSTM/LLMs-Waver-In-Judgements}}. | http://arxiv.org/pdf/2310.02174 | Qiming Xie, Zengzhi Wang, Yi Feng, Rui Xia | cs.CL, cs.AI, cs.LG | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2302.13971"
},
{
"id": "2104.08786"
},
{
"id": "2204.02311"
},
{
"id": "2307.11760"
},
{
"id": "2108.07258"
},
{
"id": "2305.10403"
},
{
"id": "2304.07619"
},
{
"id": "2009.03300"
},
{
"id": "2308.03958"
},
{
"id": "2307.15051"
},
{
"id": "2306.13063"
},
{
"id": "2305.13160"
},
{
"id": "2209.07858"
},
{
"id": "2301.08745"
},
{
"id": "2302.12173"
},
{
"id": "2207.05221"
},
{
"id": "1811.00937"
},
{
"id": "2211.09527"
},
{
"id": "1608.01413"
},
{
"id": "2307.15043"
},
{
"id": "2110.14168"
},
{
"id": "2204.05862"
},
{
"id": "2112.00861"
},
{
"id": "2301.00234"
},
{
"id": "2305.19926"
},
{
"id": "2305.08005"
},
{
"id": "2202.12837"
},
{
"id": "2309.03882"
},
{
"id": "2306.00622"
},
{
"id": "2103.07191"
},
{
"id": "2304.04339"
},
{
"id": "2302.04023"
},
{
"id": "2212.09251"
},
{
"id": "2307.11768"
}
] |
2310.02255 | 122 | Table 11: Generating parameters for various LLMs.
F.6 HUMAN PERFORMANCE
We conducted a study to evaluate human performance on the testmini subset of the MATHVISTA, utilizing Amazon Mechanical Turk (AMT). Each question from the testmini subset was assigned to five annotators, all of whom have a history of completing more than 5,000 HIT tasks and boast an acceptance score higher than 0.99, to ensure the quality of the results. The study comprised five test questions and two qualification questions, which were to be answered within a 20-minute timeframe. The qualification questions consisted of elementary math word problems requiring basic arithmetic operations (e.g., addition and subtraction). Only annotators who successfully answered the qualification questions were deemed eligible for the study, and their responses were included in the final analysis. Additionally, annotators were requested to provide information regarding their
| 2310.02255#122 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02304 | 122 | import numpy as np
from pebble import ThreadPool
from helpers import temp_override
import time

def utility(algorithm_str: str):
    """
    Implements the Modified Quadratic Assignment Problem (MQAP) with n facilities/locations.
    Returns the objective value, where higher is better.
    The algorithm must be extremely fast. If it takes more than 500 milliseconds to run, it is a failure.
    Your algorithm function must be named 'algorithm' and take three arguments: F, D, and P, which are
    numpy arrays of shape (n, n) containing the flow, distance, and preference matrices.
    """
    n_tests = 20
    n = 15  # Number of facilities and locations
    lambda_value = 0.5  # Preference weight
    average_objective = 0
    pool = ThreadPool()
    try:
        exec(algorithm_str, globals())
    except:
        return 0
    for test_idx in range(n_tests):
        F = np.random.rand(n, n)
        D = np.random.rand(n, n)
        P = np.random.rand(n, n)
        try:
            start_time = time.time()
            assignment_future = pool.schedule(algorithm, (F, D, P))
            assignment = assignment_future.result(timeout=0.01)
            total_time = time.time() - start_time | 2310.02304#122 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
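For a sense of what this MQAP utility consumes, here is a toy greedy candidate, not the paper's solution: it builds a permutation by giving each facility the unused location it prefers most. Anything slower than a cheap heuristic risks the 10 ms result() timeout and scores nothing.

import numpy as np

def algorithm(F, D, P):
    # Greedily assign facility i to the unused location with the highest P[i, loc].
    n = len(F)
    remaining = set(range(n))
    assignment = []
    for i in range(n):
        j = max(remaining, key=lambda loc: P[i, loc])
        assignment.append(j)
        remaining.remove(j)
    return assignment

Because the result is always a permutation, it passes the set(assignment) == set(range(n)) validity check in the scorer.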
2310.02255 | 123 |
IDEFICS-9B-Instruct: max new tokens = 256, temperature = 1.0
mPLUG-Owl-LLaMA-7B: do sample = True, top-k = 5, max length = 512
miniGPT4-LLaMA-2-7B: num beams = 1, temperature = 1.0, max new tokens = 300, max length = 1000
LLaMA-Adapter-V2-7B: max gen len = 256, temperature = 0.1, top p = 0.75
LLaVAR: do sample = True, temperature = 0.2, max new tokens = 1024
InstructBLIP-Vicuna-7B: do sample = False, num beams = 5, max length = 256, min length = 1, top p = 0.9, repetition penalty = 1.0, temperature = 1
LLaVA-LLaMA-2-13B: do sample = True, temperature = 0.2, max new tokens = 1024
Multimodal Bard: Chatbot URL: https://bard.google.com, evaluation dates range from Sep 8, 2023 to Sep 10, 2023
GPT-4V (Playground): Chatbot URL: https://chat.openai.com, evaluation dates range from Oct 7, 2023 to Oct 15, 2023
Table 12: Generating parameters for various LMMs. | 2310.02255#123 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
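For the open models in Table 12, the listed knobs map onto Hugging Face generate() arguments. The sketch below is an assumption-laden stand-in, using a plain causal LM checkpoint instead of the full LLaVA vision-language stack, to show where the LLaVA-LLaMA-2-13B settings would plug in.

from transformers import AutoModelForCausalLM, AutoTokenizer

name = "meta-llama/Llama-2-13b-hf"  # placeholder checkpoint, not the LLaVA weights
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tokenizer("Question: What fraction of the shape is blue?", return_tensors="pt")
# Table 12 settings for LLaVA-LLaMA-2-13B: do sample = True, temperature = 0.2, max new tokens = 1024.
output_ids = model.generate(**inputs, do_sample=True, temperature=0.2, max_new_tokens=1024)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))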
2310.02304 | 123 | assignment_future = pool.schedule(algorithm, (F, D, P))
            assignment = assignment_future.result(timeout=0.01)
            total_time = time.time() - start_time
            if set(assignment) == set(range(n)):
                objective = sum(F[i, j] * D[assignment[i], assignment[j]] for i in range(n) for j in range(n))
                objective -= lambda_value * sum(P[i, assignment[i]] for i in range(n))
                objective += total_time
            else:
                objective = 0
            average_objective += objective / n_tests
        except Exception as e:
            average_objective = 0
    return average_objective | 2310.02304#123 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.02255 | 124 | Table 12: Generating parameters for various LMMs.
highest level of educational attainment. We retained the results exclusively from annotators who had achieved a high school diploma or higher, as 30.9% of the problems in MATHVISTA are of high-school level difficulty and 10.8% correspond to college-level curricula.
F.7 MULTIMODAL BARD ASSESSMENT TASK
A screenshot of our AMT worker interface, utilized for the Multimodal Bard assessment task, is provided in Figure 32. The workers were compensated at a rate of $18 per hour.
Figure 32: Screenshot of the Multimodal Bard assessment task interface.
G MORE EXPERIMENTAL RESULTS
G.1 RESULTS ON THE TEST SET
Table 13 reports the accuracy scores of two heuristic baselines, two leading augmented LLMs (CoT GPT-4, PoT GPT-4), and one leading LMM (LLaVA-LLaMA-2-13B) on the test subset. The minor differences between scores on the test subset and the testmini subset, as shown in Table 2, suggest that testmini effectively mirrors the test subset, serving as a valuable evaluation subset for model development, especially for those who have limited computing resources. | 2310.02255#124 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02255 | 125 | Model Input ALL FQA GPS MWP TQA VQA ALG ARI GEO LOG NUM SCI Random chance Frequent guess - - 17.86 15.46 24.12 4.54 23.36 24.33 25.84 13.85 22.69 13.40 8.82 15.76 14.28 23.48 20.97 27.18 16.27 26.06 28.87 28.29 20.86 25.71 11.86 19.61 20.45 20.08 2-shot CoT GPT-4 2-shot PoT GPT-4 Q, Ic, It 30.50 27.21 35.91 21.30 43.13 28.17 35.72 25.17 35.80 24.74 15.41 47.28 31.29 Q, Ic, It 31.74 27.58 37.35 23.87 43.00 30.27 37.15 27.93 37.48 22.68 15.83 44.47 31.87 LLaVA-LLaMA-2-13B Q, I 25.40 22.86 24.57 18.15 35.82 29.69 26.93 22.47 24.45 19.07 19.05 34.71 21.61 STA | 2310.02255#125 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02304 | 125 | import numpy as np
from random import randint, random
from copy import deepcopy

def algorithm(F, D, P):
    def mqap_objective(assignment):
        objective = sum(F[i, j] * D[assignment[i], assignment[j]]
                        for i in range(n) for j in range(n))
        objective -= lambda_value * sum(P[i, assignment[i]] for i in range(n))
        return objective

    def swap_random(assignment):
        i, j = randint(0, n - 1), randint(0, n - 1)
        while j == i:
            j = randint(0, n - 1)
        assignment[i], assignment[j] = assignment[j], assignment[i]

    n = len(F)
    lambda_value = 0.5
    max_iterations = 1000
    temperature = 1.0
    cooling_rate = 0.99
    assignment = list(range(n))
    best_assignment = deepcopy(assignment)
    best_objective = mqap_objective(assignment)
    for _ in range(max_iterations):
        temperature *= cooling_rate
        if temperature <= 0:
            break
        new_assignment = deepcopy(assignment)
        swap_random(new_assignment)
        new_objective = mqap_objective | 2310.02304#125 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.02255 | 126 | Table 13: Accuracy scores on the test subset of MATHVISTA. Input: Q: question, I: image, Ic: image caption, It: OCR texts detected from the image. ALL: overall accuracy. Task types: FQA: figure question answering, GPS: geometry problem solving, MWP: math word problem, TQA: text- book question answering, VQA: visual question answering. Mathematical reasoning types: ALG: algebraic reasoning, ARI: arithmetic reasoning, GEO: geometry reasoning, LOG: logical reasoning, NUM: numeric common sense, SCI: scientific reasoning, STA: statistical reasoning.
G.2 SCORES FOR MATH REASONING TYPES
The accuracy scores across seven mathematical reasoning categories are reported in Table 2, with primary baselines highlighted in Figures 1 and 33. GPT-4V outperforms other baseline models in most mathematical reasoning categories, except for logical reasoning and numeric commonsense reasoning. Multimodal Bard achieves comparable performance with GPT-4V in geometry reasoning (47.8% vs. 51.0%) and algebraic reasoning (46.5% vs. 53.0%), highlighting its enhanced abilities in comprehending geometry diagrams and performing algebraic calculations. | 2310.02255#126 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02304 | 126 |         if temperature <= 0:
            break
        new_assignment = deepcopy(assignment)
        swap_random(new_assignment)
        new_objective = mqap_objective(new_assignment)
        delta_objective = new_objective - mqap_objective(assignment)
        if delta_objective < 0 or random() < np.exp(-delta_objective / temperature):
            assignment = new_assignment
            if new_objective < best_objective:
                best_assignment = deepcopy(assignment)
                best_objective = new_objective
    return best_assignment | 2310.02304#126 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
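Read together, the two 2310.02304 chunks above (125 and 126) reconstruct one complete simulated-annealing solver, algorithm(F, D, P), proposed by the language model for the penalized quadratic assignment task. A minimal driver might look like the sketch below; the random 5x5 matrices are illustrative placeholders of ours, not the paper's benchmark instances, and algorithm is assumed to be the function from the chunks above:

import numpy as np

n = 5
F = np.random.rand(n, n)   # flow between facilities (illustrative)
D = np.random.rand(n, n)   # distance between locations (illustrative)
P = np.random.rand(n, n)   # penalty for placing facility i at location j
best_assignment = algorithm(F, D, P)  # a permutation of range(n)
print(best_assignment)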
2310.02255 | 127 | [Figure 33: bar chart; legend: Random chance, LLaVA, PoT GPT-4, GPT-4V, LLaMA-Adapter V2, CoT GPT-4, Multimodal Bard, Human; y-axis: accuracy score (%); x-axis: Algebraic, Arithmetic, Geometry, Logical, Numeric, Scientific, Statistical.]
Figure 33: Accuracy scores of baselines across mathematical reasoning types in MATHVISTA.
Among open-source LMMs (ranging from IDEFICS to LLaVA), LLaVA achieves the best overall accuracy on MATHVISTA and the highest fine-grained scores for problems in geometry reasoning, logical reasoning, and statistical reasoning. However, these scores still substantially lag behind GPT-4V and Multimodal Bard, indicating a gap in the overall effectiveness of these open-source models compared to more advanced proprietary systems. Despite this, LLaMA-Adapter-V2, tied with LLaVA, outperforms GPT-4V by 2.7% in logical reasoning, and InstructBLIP beats GPT-4V
| 2310.02255#127 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02255 | 128 |
by 0.3% in numeric commonsense, suggesting that specific enhancements in open-source models can lead to superior performance in certain niches. LLaVAR, being on par with Multimodal Bard, which is specifically designed to enhance capabilities in detecting OCR texts and symbols from various forms, including scientific domains, further illustrates the potential of targeted improvements in open-source LMMs to achieve competencies that rival or even exceed those of their proprietary counterparts in specialized areas.
CoT GPT-4, augmented with OCR texts and Bard captions, performs well in scientific reasoning, achieving a gain of 26.2% over random chance, showcasing its superiority in domain-specific knowledge. This performance suggests a significant trend (Shen et al., 2023; Lu et al., 2023a) where the integration of specialized functionalities, such as OCR text recognition and advanced captioning, into LLMs enhances their applicability and accuracy in specific domains. PoT GPT-4 outperforms Multimodal Bard in categories such as arithmetic reasoning, logical reasoning, numeric commonsense reasoning, and statistical reasoning. This superior performance is attributed to its ability to generate high-quality code for precise mathematical reasoning, illustrating the effectiveness of integrating advanced coding capabilities into language models for complex problem-solving tasks. | 2310.02255#128 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02304 | 128 | import numpy as np
import random
from pebble import ThreadPool
from helpers import temp_override
import time

def generate_3sat_formula(n, m):
    """Generate a random 3-SAT formula with n variables and m clauses."""
    formula = []
    valid_assignment = [False] + [random.random() < 0.5 for _ in range(n)]
    while len(formula) < m:
        clause = random.sample(range(1, n + 1), 3)
        clause = [var if random.random() < 0.5 else -var for var in clause]
        # Check if the clause is satisfied by the valid assignment
        if any((valid_assignment[abs(lit)] > 0) == (lit > 0) for lit in clause):
            formula.append(clause)
    return formula

def check_3sat_formula(formula, assignment):
    return all(any((assignment[abs(lit)] > 0) == (lit > 0) for lit in clause) for clause in formula)

def utility(algorithm_str: str):
    """
    Implements the Random 3-SAT problem with n variables and m clauses.
    Returns the fraction of formulas solved successfully within the time limit.
    The algorithm must be extremely fast. If it takes more | 2310.02304#128 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.02255 | 129 | # G.3 SCORES FOR VARIOUS VISUAL CONTEXTS
Figure 34 illustrates the accuracy scores of leading baselines on MATHVISTA across a diverse range of visual contexts. Remarkably, GPT-4V outperforms human performance in visual contexts of function plots, geometry diagrams, scatter plots, tables, and other types, which aligns with its superiority in terms of related mathematical reasoning types. Other foundation models trail behind humans in visual perception and reasoning across most visual context categories. Multimodal Bard demonstrates comparable performance to humans in questions with a visual context of geometry diagrams, showcasing its promising capabilities in recognizing geometric shapes and relationships. On the other hand, PoT GPT-4, augmented by Bard captions, achieves a significant performance advantage over other baselines, exhibiting strong abilities in discerning structural information in tables and generating symbolic code for precise statistical reasoning.
[Figure 34: bar chart; legend: Random, LLaVA, PoT GPT-4, GPT-4V, LLaMA-Adapter V2, CoT GPT-4, Multimodal Bard, Human; y-axis: Accuracy Score (%).]
Figure 34: Accuracy scores of leading baselines across various visual contexts in MATHVISTA.
G.4 SCORES ACROSS DIFFERENT GRADE LEVELS | 2310.02255#129 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02304 | 129 |     problem with n variables and m clauses. Returns the fraction of formulas
    solved successfully within the time limit. The algorithm must be extremely
    fast. If it takes more than 10 milliseconds to run, it is a failure.
    Your algorithm function must be named 'algorithm' and take a single
    argument, formula, which is a list of m clauses, each containing exactly 3 literals.
    """
    n_tests = 100
    n = 50  # Number of variables
    m = int(4 * n)  # Number of clauses
    solved_count = 0
    pool = ThreadPool()
    try:
        exec(algorithm_str, globals())
    except:
        return 0
    for test_idx in range(n_tests):
        formula = generate_3sat_formula(n, m)
        try:
            assignment_future = pool.schedule(algorithm, (formula,))
            assignment = assignment_future.result(timeout=0.01)
            if check_3sat_formula(formula, assignment):
                solved_count += 1
        except Exception as e:
            return 0
    return solved_count / n_tests | 2310.02304#129 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
2310.02255 | 130 | Figure 34: Accuracy scores of leading baselines across various visual contexts in MATHVISTA.
G.4 SCORES ACROSS DIFFERENT GRADE LEVELS
Figure 35 displays the average accuracy scores across different grade levels (elementary school, high school, and college) for the leading foundation models, as well as random chance and human performance. Humans exhibit the highest performance on questions at the elementary school level (70.4%), while they fare the worst on college-level questions (52.6%) within MATHVISTA. Foundation model baselines exhibit varying performance behaviors: they achieve better accuracy scores on high school level questions compared to the other two categories.
In addressing elementary school problems, the performance gap between human performance and the best-performing model, GPT-4V, is notably the largest when compared to other grade levels. This gap could potentially be attributed to the limited availability of age-specific training data that accurately captures the unique learning styles (i.e., rich with abstract scenes) of elementary school students. On the other hand, GPT-4V demonstrates an improvement of 20.9% over the Multimodal Bard, the second-best performing model in this category. This improvement suggests that while GPT-4V still lags behind human performance, its ability to tackle elementary-level problems in visually intensive settings has been significantly enhanced. | 2310.02255#130 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02304 | 130 | Figure A.26: Utility description for the 3SAT problem.
import random

def random_walk_solver(formula, max_iter, p):
    n = max(abs(lit) for clause in formula for lit in clause)
    assignments = [False] * (n + 1)
    for _ in range(max_iter):
        unsatisfied_clauses = [clause for clause in formula
                               if not any((assignments[abs(lit)] > 0) == (lit > 0)
                                          for lit in clause)]
        if not unsatisfied_clauses:
            return assignments
        clause_to_flip = random.choice(unsatisfied_clauses)
        if random.random() < p:
            lit_to_flip = random.choice(clause_to_flip)
        else:
            lit_to_flip = min(clause_to_flip,
                              key=lambda lit: sum(assignments[abs(lit)] == (lit < 0)
                                                  for clause in formula if lit in clause))
        assignments[abs(lit_to_flip)] = not assignments[abs(lit_to_flip)]
    return

def algorithm(formula):
    return random_walk_solver(formula, max_iter=1000, p=0.4)
Figure A.27: 3SAT Seed Algorithm. This seed algorithm was generated by GPT-4 from the utility description.
| 2310.02304#130 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
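Figures A.26 and A.27 above fit together as a pipeline: the utility exec's the candidate source string into globals(), then calls the resulting algorithm(formula) on 100 random formulas under a 10 ms timeout each. A minimal scoring sketch, assuming utility from Figure A.26 is in scope and the Figure A.27 code lives in a hypothetical module named seed_algorithm:

import inspect
import seed_algorithm  # hypothetical module containing Figure A.27's code

algorithm_str = inspect.getsource(seed_algorithm)
score = utility(algorithm_str)  # fraction of formulas solved within the time limit
print(f"seed algorithm solves {score:.0%} of test formulas")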
2310.02255 | 131 | For high school problems, GPT-4V, with a score of 61.8%, outperforms human performance, which stands at 58.2%. Additionally, the second-best performing model, Multimodal Bard, with a score of 50.3%, is on par with human performance. This disparity might be attributed to the training regimen of the models, which perhaps aligns well with the high school curriculum.
In the context of the college curriculum, the performance of various baselines varies dramatically. GPT-4V demonstrates performance comparable to that of humans. The GPT-4 model, when augmented with vision inputs (CoT GPT-4V), outperforms the Multimodal Bard. Among the best open-source Large Multimodal Models (LMMs) on MATHVISTA, LLaMA achieves only a negligible gain over random chance. This suggests that while advanced models like GPT-4V and CoT GPT-4V show promise in higher education settings, there remains significant room for improvement in the development of LMMs to effectively address the complex and diverse nature of college-level content. | 2310.02255#131 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02255 | 132 | [Figure 35: bar chart; legend: Random chance, LLaVA, PoT GPT-4, GPT-4V, LLaMA-Adapter V2, CoT GPT-4, Multimodal Bard, Human; y-axis: Accuracy Score (%); x-axis: Elementary School, High School, College.]
Figure 35: Average accuracy scores across different grade levels for primary baselines.
G.5 ABLATION STUDY FOR LLMS
Figure 36 presents an ablation study conducted on LLMs, examining their performance under varying visual information inputs.
[Figure 36: bar chart; legend: OCR Text, Caption, Caption + OCR Text; x-axis: CoT ChatGPT, CoT GPT-4, PoT ChatGPT, PoT GPT-4; y-axis: Accuracy Score (%).]
Figure 36: Average accuracy scores of LLM baselines under various visual inputs.
# G.6 LLMS WITH DIFFERENT SHOTS | 2310.02255#132 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02304 | 132 | import random
import numpy as np

def utility(algorithm_str: str):
    """
    Implements the Max-Cut utility function. Returns the average cut weight.
    If the algorithm requires more than 100 milliseconds to run per test, it is a failure.
    """
    n_tests = 3
    average_cut_weight = 0
    try:
        exec(algorithm_str, globals())
    except:
        return 0
    for test_idx in range(n_tests):
        n_nodes = 300
        p_edge = 0.4
        max_weight = 10
        # Generate random adjacency matrix
        adjacency_matrix = np.zeros((n_nodes, n_nodes))
        for i in range(n_nodes):
            for j in range(i + 1, n_nodes):
                if random.random() < p_edge:
                    weight = random.randint(1, max_weight)
                    adjacency_matrix[i, j] = weight
                    adjacency_matrix[j, i] = weight
        # Run the algorithm to find the partition
        try:
            partition = algorithm(adjacency_matrix)
            # Make sure there are | 2310.02304#132 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
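The abstract above describes the seed improver only in prose. The following is a minimal sketch of that loop (the language_model.generate interface and the prompt wording are assumptions made for illustration; the paper's actual seed improver differs in detail):

def improve_algorithm(initial_solution, utility, language_model, n_calls=5):
    # Query the language model several times for candidate rewrites of the
    # input program and return whichever scores best under the given utility.
    best, best_score = initial_solution, utility(initial_solution)
    for _ in range(n_calls):
        candidate = language_model.generate(
            "Improve the following program so that utility(program) is higher:\n"
            + initial_solution)
        score = utility(candidate)
        if score > best_score:
            best, best_score = candidate, score
    return best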
2310.02255 | 133 | Figure 36: Average accuracy scores of LLM baselines under various visual inputs.
# G.6 LLMS WITH DIFFERENT SHOTS
We explored whether LLMs and Augmented LLMs can benefit from larger numbers of few-shot examples on MATHVISTA, with results reported in Figure 37. In the question-only input setting (a), both Claude-2 and ChatGPT suffer from a performance drop, suggesting that they are more sensitive to the bias in demonstrations, especially in the absence of visual inputs. There is a marginal improvement of 1.4% when the shot number increases from 2 to 4 for GPT-4. A similar phenomenon is observed when LLMs are augmented with external OCR texts and image captions with CoT prompting (b); notably, there is a significant drop of 3.4% when the shot number increases from 2 to 4 for CoT Claude-2. With PoT prompting (c), LLMs like ChatGPT and GPT-4 can obtain gains of 3.4% and 1.4%, respectively, with the shot number increasing from 2 to 4. Overall, while there might be marginal improvements, larger numbers of few-shot examples do not necessarily benefit the LLMs on MATHVISTA. In some settings, LLMs suffer from unstable performance drops. This further indicates that the quality of the augmented information plays a more important role for augmented LLMs. | 2310.02255#133 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02304 | 133 | adjacency_matrix[j, i] = weight
        # Run the algorithm to find the partition
        try:
            partition = algorithm(adjacency_matrix)
            # Make sure there are exactly two partitions
            if len(set(partition)) != 2:
                return 0
            if len(partition) != n_nodes:
                return 0
            cut_weight = 0
            for i in range(n_nodes):
                for j in range(i+1, n_nodes):
                    if partition[i] != partition[j]:
                        cut_weight += adjacency_matrix[i, j]
        except Exception as e:
            print("Exception:", e)
            cut_weight = 0
        average_cut_weight += cut_weight / n_tests / max_weight
    return average_cut_weight | 2310.02304#133 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
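Taken together, chunks 132 and 133 give the complete Max-Cut utility. A minimal usage sketch follows (not from the paper; random_partition_str is a hypothetical candidate solution), showing how an algorithm is submitted as source code, exec()'d by the utility, and scored:

random_partition_str = '''
import numpy as np

def algorithm(adjacency_matrix):
    # Trivial baseline: put each node on side 0 or 1 uniformly at random
    return list(np.random.binomial(1, 0.5, len(adjacency_matrix)))
'''

score = utility(random_partition_str)  # mean cut weight over 3 random graphs, scaled by 1/max_weight
print(f"Random-partition baseline: {score:.3f}")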
2310.02255 | 134 | (a) Q only. (b) Q + OCR + Caption (CoT). (c) Q + OCR + Caption (PoT).
Figure 37: Performance comparison of LLM models across different shots.
G.7 LMMS WITH DIFFERENT SHOTS
We conducted an initial study on the few-shot learning ability of the Large Multimodal Model (LMM), specifically IDEFICS (Laurençon et al., 2023), on MATHVISTA. As shown in Figure 38, there is a modest improvement with increased shot numbers, suggesting potential benefits of few-shot learning for LMMs on MATHVISTA.
However, recent studies highlight the instability of LMMs in few-shot settings. For instance, a significant accuracy drop was observed in models like BLIP-2 (Li et al., 2023b) and InstructBLIP (Dai et al., 2023) when applying 4-shot in-context learning in common sense reasoning tasks (Li et al., 2023c). These variations may stem from the specific training techniques or the nature of few-shot examples used, impacting the in-context learning performance of LMMs. Given the rapidly evolving landscape of LMMs, the consistent benefits of few-shot learning remain an open question.
[Figure: accuracy score (%) of IDEFICS-9B-Instruct vs. shot number (1-4)] | 2310.02255#134 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02304 | 134 | Figure A.28: Utility description for the maxcut problem.
Maxcut Seed Algorithm

def algorithm(adjacency_matrix):
    n_nodes = len(adjacency_matrix)
    partition = [-1] * n_nodes
    unpartitioned_nodes = set(range(n_nodes))
    while len(unpartitioned_nodes) > 0:
        max_cut_weight = -1
        max_cut_node = None
        max_cut_partition = None
        for node in unpartitioned_nodes:
            for partition_id in [0, 1]:
                cut_weight = 0
                for neighbor, weight in enumerate(adjacency_matrix[node]):
                    if partition[neighbor] == 1 - partition_id:
                        cut_weight += weight
                if cut_weight > max_cut_weight:
                    max_cut_weight = cut_weight
                    max_cut_node = node
                    max_cut_partition = partition_id
        partition[max_cut_node] = max_cut_partition
        unpartitioned_nodes.remove(max_cut_node)
    return partition
Figure A.29: Seed Algorithm. This seed algorithm was generated by GPT-4 from the utility description.
# Parity without noise | 2310.02304#134 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
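For contrast with the greedy seed above, here is an illustrative sketch (written for this note, not generated by or taken from the paper) of the kind of stronger Max-Cut candidate an improver might propose: a local search that flips one node at a time.

import numpy as np

def algorithm(adjacency_matrix):
    # Start from a random partition, then flip any node whose move to the
    # other side increases the total cut weight; stop at a local optimum.
    A = np.asarray(adjacency_matrix, dtype=float)
    n = len(A)
    partition = np.random.binomial(1, 0.5, n)
    improved = True
    while improved:
        improved = False
        for node in range(n):
            same_side = partition == partition[node]
            # Gain from flipping node = weight to same side - weight to other side
            gain = A[node][same_side].sum() - A[node][~same_side].sum()
            if gain > 0:
                partition[node] = 1 - partition[node]
                improved = True
    return list(partition)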
2310.02255 | 135 | [Figure: accuracy score (%) of IDEFICS-9B-Instruct vs. shot number (1-4)]
Figure 38: Performance comparison of the LMM model in the few-shot setting.
G.8 HALLUCINATIONS IN MODEL EXPLANATIONS
Question: What is the highest amount this glass measures?
Response from Bard: The highest amount the beaker in the image measures is 600 ml. The markings on the side of the beaker are in milliliters (ml). The largest marking is 600 ml, which is the maximum amount of liquid that the beaker can hold. The other markings on the beaker are 300 ml, 200 ml, 100 ml, and -400 ml. The negative markings indicate the amount of liquid that is below the zero mark.
Wrong output: 600
Ground truth: 400 | 2310.02255#135 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02304 | 135 | import random
import numpy as np

def utility(algorithm_str: str):
    """
    Implements the parity learning task. Returns the number of correct predictions.
    """
    n_tests = 3
    average_correct = 0
    try:
        exec(algorithm_str, globals())
    except:
        return 0
    for test_idx in range(n_tests):
        n_bits = ...   # value lost in extraction
        p_true = ...   # value lost in extraction
        n_train_samples = 80
        n_test_samples = 20
        true_bits = np.random.binomial(1, p_true, n_bits)
        samples = np.random.binomial(1, 0.5, (n_train_samples + n_test_samples, n_bits))
        masked_samples = samples * true_bits
        parity = np.sum(masked_samples, axis=1) % 2
        train_samples = samples[:n_train_samples]
        train_parity = parity[:n_train_samples]
        test_samples = samples[n_train_samples:]
        test_parity = parity[n_train_samples:]
        # Because algorithm is a string, we can't call it directly. Instead, we
        # can use eval to evaluate it as a Python expression
        try:
            predictions = algorithm(train_samples, train_parity, test_samples)
            correct = np.sum(predictions == test_parity) / n_test_samples
        except:
            correct = 0
        average_correct += correct / n_tests
    return average_correct | 2310.02304#135 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
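The utility above labels each sample with the parity of a hidden bit subset and contains no noise, so the training set is a system of linear equations over GF(2). As an illustration of the kind of strong candidate an improver could aim for (a sketch written for this note, not code from the paper), Gaussian elimination recovers the hidden mask exactly:

import numpy as np

def algorithm(train_samples, train_parity, test_samples):
    # Noiseless parity satisfies X @ w = y (mod 2) for a hidden mask w, so w
    # can be recovered by Gaussian elimination over GF(2) on [X | y].
    X = np.array(train_samples, dtype=np.uint8) % 2
    y = np.array(train_parity, dtype=np.uint8) % 2
    n_rows, n_bits = X.shape
    aug = np.concatenate([X, y[:, None]], axis=1)  # augmented matrix [X | y]
    row = 0
    for col in range(n_bits):
        pivot = next((r for r in range(row, n_rows) if aug[r, col]), None)
        if pivot is None:
            continue  # no pivot: free variable, left at 0 below
        aug[[row, pivot]] = aug[[pivot, row]]  # move the pivot row up
        for r in range(n_rows):
            if r != row and aug[r, col]:
                aug[r] ^= aug[row]  # XOR-eliminate this column everywhere else
        row += 1
    w = np.zeros(n_bits, dtype=np.uint8)
    for r in range(row):
        lead = int(np.argmax(aug[r, :n_bits]))  # leading 1 of this pivot row
        w[lead] = aug[r, n_bits]
    return (np.array(test_samples, dtype=np.uint8) @ w) % 2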
2310.02255 | 136 | Figure 39: An example illustrating hallucinated information not depicted in the image.
Question: As shown in the figure, if CB = 4.0, DB = 7.0, and D is the midpoint of AC, then the length of AC is ().
Choices: (A) 6cm (B) 7cm (C) 4cm (D) 5cm
Response from Bard: Since D is the midpoint of AC, then AD = DC = (AC/2). We know that CB = 4.0 and DB = 7.0, and that AD = DC. So we can write the equation:
AD + DC = 4.0 + 7.0
2AD = 11.0
AD = 5.5
AC = 2AD = 11.0cm
Therefore, the length of AC is 11.0 cm, which is answer choice (D).
Wrong output: (D) 5cm
Ground truth: (A) 6cm
[Figure: collinear points A, D, C, B on a line segment]
Figure 40: An example illustrating hallucinated information not depicted in the question.
G.9 MORE EXAMPLES FOR MULTIMODAL BARD | 2310.02255#136 | MathVista: Evaluating Mathematical Reasoning of Foundation Models in Visual Contexts | Large Language Models (LLMs) and Large Multimodal Models (LMMs) exhibit
impressive problem-solving skills in many tasks and domains, but their ability
in mathematical reasoning in visual contexts has not been systematically
studied. To bridge this gap, we present MathVista, a benchmark designed to
combine challenges from diverse mathematical and visual tasks. It consists of
6,141 examples, derived from 28 existing multimodal datasets involving
mathematics and 3 newly created datasets (i.e., IQTest, FunctionQA, and
PaperQA). Completing these tasks requires fine-grained, deep visual
understanding and compositional reasoning, which all state-of-the-art
foundation models find challenging. With MathVista, we have conducted a
comprehensive, quantitative evaluation of 12 prominent foundation models. The
best-performing GPT-4V model achieves an overall accuracy of 49.9%,
substantially outperforming Bard, the second-best performer, by 15.1%. Our
in-depth analysis reveals that the superiority of GPT-4V is mainly attributed
to its enhanced visual perception and mathematical reasoning. However, GPT-4V
still falls short of human performance by 10.4%, as it often struggles to
understand complex figures and perform rigorous reasoning. This significant gap
underscores the critical role that MathVista will play in the development of
general-purpose AI agents capable of tackling mathematically intensive and
visually rich real-world tasks. We further explore the new ability of
self-verification, the application of self-consistency, and the interactive
chatbot capabilities of GPT-4V, highlighting its promising potential for future
research. The project is available at https://mathvista.github.io/. | http://arxiv.org/pdf/2310.02255 | Pan Lu, Hritik Bansal, Tony Xia, Jiacheng Liu, Chunyuan Li, Hannaneh Hajishirzi, Hao Cheng, Kai-Wei Chang, Michel Galley, Jianfeng Gao | cs.CV, cs.AI, cs.CL, cs.LG | 116 pages, 120 figures. Accepted to ICLR 2024 | null | cs.CV | 20231003 | 20240121 | [
{
"id": "2302.13971"
},
{
"id": "2308.03729"
},
{
"id": "2305.20050"
},
{
"id": "2309.17421"
},
{
"id": "2211.09085"
},
{
"id": "2305.10415"
},
{
"id": "2108.07258"
},
{
"id": "2109.06860"
},
{
"id": "2308.06595"
},
{
"id": "2303.07274"
},
{
"id": "2312.11805"
},
{
"id": "2303.17564"
},
{
"id": "2309.05660"
},
{
"id": "2201.11903"
},
{
"id": "2212.09662"
},
{
"id": "2304.14178"
},
{
"id": "2206.07682"
},
{
"id": "2310.12520"
},
{
"id": "2107.03374"
},
{
"id": "2203.11171"
},
{
"id": "1710.07300"
},
{
"id": "2305.08322"
},
{
"id": "2305.14761"
},
{
"id": "2309.01940"
},
{
"id": "2311.07536"
},
{
"id": "2308.03688"
},
{
"id": "2305.12524"
},
{
"id": "2308.13149"
},
{
"id": "2308.02490"
},
{
"id": "2303.08774"
},
{
"id": "2304.08485"
},
{
"id": "2306.06031"
},
{
"id": "2211.08545"
},
{
"id": "2307.06281"
},
{
"id": "2310.05146"
},
{
"id": "2110.14168"
},
{
"id": "2304.10592"
},
{
"id": "2301.12597"
},
{
"id": "2305.07895"
},
{
"id": "2302.12813"
},
{
"id": "2111.08171"
},
{
"id": "2308.01390"
},
{
"id": "2306.09265"
},
{
"id": "2211.12588"
},
{
"id": "2303.17580"
},
{
"id": "2303.16199"
},
{
"id": "2306.17107"
},
{
"id": "2309.10020"
},
{
"id": "2303.12712"
},
{
"id": "2211.16492"
},
{
"id": "2304.06939"
},
{
"id": "2309.05689"
},
{
"id": "2304.15010"
},
{
"id": "2303.13375"
},
{
"id": "2307.10635"
}
] |
2310.02304 | 136 | Figure A.30: Utility description for parity without noise (i.e., learning parity)
Parity without noise Seed Algorithm

import numpy as np

def algorithm(train_samples, train_parity, test_samples):
    predictions = np.random.binomial(1, 0.5, len(test_samples))
    return predictions
Figure A.31: Seed algorithm description for parity without noise (i.e., learning parity)
H SELECTED IMPROVER FOR TRANSFERABILITY EXPERIMENTS
Improver used in transferability experiments | 2310.02304#136 | Self-Taught Optimizer (STOP): Recursively Self-Improving Code Generation | Several recent advances in AI systems (e.g., Tree-of-Thoughts and
Program-Aided Language Models) solve problems by providing a "scaffolding"
program that structures multiple calls to language models to generate better
outputs. A scaffolding program is written in a programming language such as
Python. In this work, we use a language-model-infused scaffolding program to
improve itself. We start with a seed "improver" that improves an input program
according to a given utility function by querying a language model several
times and returning the best solution. We then run this seed improver to
improve itself. Across a small set of downstream tasks, the resulting improved
improver generates programs with significantly better performance than its seed
improver. Afterward, we analyze the variety of self-improvement strategies
proposed by the language model, including beam search, genetic algorithms, and
simulated annealing. Since the language models themselves are not altered, this
is not full recursive self-improvement. Nonetheless, it demonstrates that a
modern language model, GPT-4 in our proof-of-concept experiments, is capable of
writing code that can call itself to improve itself. We critically consider
concerns around the development of self-improving technologies and evaluate the
frequency with which the generated code bypasses a sandbox. | http://arxiv.org/pdf/2310.02304 | Eric Zelikman, Eliana Lorch, Lester Mackey, Adam Tauman Kalai | cs.CL, cs.AI, cs.LG, stat.ML | null | null | cs.CL | 20231003 | 20231003 | [
{
"id": "2305.17126"
},
{
"id": "2308.10379"
},
{
"id": "1502.06512"
},
{
"id": "2303.03885"
},
{
"id": "2302.14838"
},
{
"id": "2305.10601"
},
{
"id": "2303.08774"
},
{
"id": "2207.10342"
},
{
"id": "1606.06565"
},
{
"id": "2305.16291"
},
{
"id": "2308.09687"
},
{
"id": "2212.14024"
},
{
"id": "2307.03172"
},
{
"id": "2211.12588"
},
{
"id": "2306.04031"
},
{
"id": "2210.11610"
},
{
"id": "2309.03409"
},
{
"id": "2303.11366"
},
{
"id": "2112.00114"
},
{
"id": "2309.02427"
}
] |
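A minimal sketch of scoring this seed with the parity utility from chunk 135 (assumptions: that utility is in scope, and concrete values, e.g. a hypothetical n_bits = 10 and p_true = 0.3, have been substituted for the parameters garbled in the source):

seed_algorithm_str = '''
import numpy as np

def algorithm(train_samples, train_parity, test_samples):
    # The seed ignores the training data and guesses each label at random
    predictions = np.random.binomial(1, 0.5, len(test_samples))
    return predictions
'''

score = utility(seed_algorithm_str)
print(f"Seed accuracy: {score:.2f}")  # about 0.5 in expectation: the baseline an improver must beat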