instruction | input | output |
---|---|---|
What field is the article from? | Title: Arbitrarily Scalable Environment Generators via Neural Cellular Automata
Abstract: We study the problem of generating arbitrarily large environments to improve
the throughput of multi-robot systems. Prior work proposes Quality Diversity
(QD) algorithms as an effective method for optimizing the environments of
automated warehouses. However, these approaches optimize only relatively small
environments, falling short when it comes to replicating real-world warehouse
sizes. The challenge arises from the exponential increase in the search space
as the environment size increases. Additionally, the previous methods have only
been tested with up to 350 robots in simulations, while practical warehouses
could host thousands of robots. In this paper, instead of optimizing
environments, we propose to optimize Neural Cellular Automata (NCA) environment
generators via QD algorithms. We train a collection of NCA generators with QD
algorithms in small environments and then generate arbitrarily large
environments from the generators at test time. We show that NCA environment
generators maintain consistent, regularized patterns regardless of environment
size, significantly enhancing the scalability of multi-robot systems in two
different domains with up to 2,350 robots. Additionally, we demonstrate that
our method scales a single-agent reinforcement learning policy to arbitrarily
large environments with similar patterns. We include the source code at
\url{https://github.com/lunjohnzhang/warehouse_env_gen_nca_public}. | Robotics |
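The NCA generator described above is easy to sketch: a small convolutional update rule applied repeatedly, so the same trained weights run on any grid size. Below is a minimal PyTorch sketch under assumed shapes (a 16-channel grid, with channel 0 thresholded into a wall/floor layout); the paper's actual architecture and the QD training loop are not shown.

```python
import torch
import torch.nn as nn

class NCAGenerator(nn.Module):
    """Minimal Neural Cellular Automaton: a purely local update rule
    applied repeatedly, so one set of weights works at any grid size."""
    def __init__(self, channels: int = 16):
        super().__init__()
        self.perceive = nn.Conv2d(channels, 48, kernel_size=3, padding=1)
        self.update = nn.Conv2d(48, channels, kernel_size=1)

    def forward(self, grid: torch.Tensor, steps: int = 30) -> torch.Tensor:
        for _ in range(steps):
            grid = grid + self.update(torch.relu(self.perceive(grid)))
        return grid

# The QD search optimizes the NCA weights on small maps; because the
# rule is local, the same generator rolls out on far larger grids.
nca = NCAGenerator()
small = torch.zeros(1, 16, 64, 64)     # training-size grid
large = torch.zeros(1, 16, 512, 512)   # much larger grid at test time
layout = (nca(large)[:, 0] > 0).float()  # channel 0 -> wall/floor layout
```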
What field is the article from? | Title: CROP: Conservative Reward for Model-based Offline Policy Optimization
Abstract: Offline reinforcement learning (RL) aims to optimize policy using collected
data without online interactions. Model-based approaches are particularly
appealing for addressing offline RL challenges due to their capability to
mitigate the limitations of offline data through data generation using models.
Prior research has demonstrated that introducing conservatism into the model or
Q-function during policy optimization can effectively alleviate the prevalent
distribution drift problem in offline RL. However, the investigation into the
impacts of conservatism in reward estimation is still lacking. This paper
proposes a novel model-based offline RL algorithm, Conservative Reward for
model-based Offline Policy optimization (CROP), which conservatively estimates
the reward in model training. To achieve a conservative reward estimation, CROP
simultaneously minimizes the estimation error and the reward of random actions.
Theoretical analysis shows that this conservative reward mechanism leads to a
conservative policy evaluation and helps mitigate distribution drift.
Experiments on D4RL benchmarks showcase that the performance of CROP is
comparable to the state-of-the-art baselines. Notably, CROP establishes an
innovative connection between offline and online RL, highlighting that offline
RL problems can be tackled by applying online RL techniques to the empirical
Markov decision process trained with a conservative reward. The source code is
available at https://github.com/G0K0URURI/CROP.git. | Machine Learning |
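The conservative reward idea in the CROP abstract reduces to a two-term objective: fit the logged rewards while pushing down the reward predicted for random actions. A minimal PyTorch sketch, assuming a hypothetical `reward_net(s, a)` and an illustrative trade-off weight `beta`:

```python
import torch
import torch.nn.functional as F

def conservative_reward_loss(reward_net, s, a, r, beta=0.5):
    """CROP-style objective sketch: fit the logged rewards while
    penalizing the reward predicted for random (OOD) actions."""
    estimation_error = F.mse_loss(reward_net(s, a), r)
    random_a = torch.rand_like(a) * 2 - 1          # uniform in [-1, 1]
    conservatism = reward_net(s, random_a).mean()  # reward of random actions
    return estimation_error + beta * conservatism
```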
What field is the article from? | Title: Bridging the Digital Divide: Performance Variation across Socio-Economic Factors in Vision-Language Models
Abstract: Despite the impressive performance of current AI models reported across
various tasks, performance reports often do not include evaluations of how
these models perform on the specific groups that will be impacted by these
technologies. Among the minority groups under-represented in AI, data from
low-income households are often overlooked in data collection and model
evaluation. We evaluate the performance of a state-of-the-art vision-language
model (CLIP) on a geo-diverse dataset containing household images associated
with different income values (Dollar Street) and show that performance
inequality exists among households of different income levels. Our results
indicate that performance for the poorer groups is consistently lower than for
the wealthier groups across various topics and countries. We highlight insights
that can help mitigate these issues and propose actionable steps for
economic-level inclusive AI development. Code is available at
https://github.com/MichiganNLP/Bridging_the_Digital_Divide. | Computers and Society |
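The evaluation protocol above can be reproduced in a few lines with an off-the-shelf CLIP: zero-shot classify each household image and aggregate accuracy per income bucket. A sketch using Hugging Face's CLIP wrappers; the `samples` iterator and topic list are placeholders for the Dollar Street data:

```python
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def topic_accuracy_by_income(samples, topics):
    """samples: iterable of (PIL image, true topic, income bucket)."""
    hits, totals = {}, {}
    prompts = [f"a photo of a {t}" for t in topics]
    for image, label, income in samples:
        inputs = processor(text=prompts, images=image,
                           return_tensors="pt", padding=True)
        with torch.no_grad():
            pred = model(**inputs).logits_per_image.argmax(-1).item()
        hits[income] = hits.get(income, 0) + int(topics[pred] == label)
        totals[income] = totals.get(income, 0) + 1
    return {k: hits[k] / totals[k] for k in totals}
```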
What field is the article from? | Title: ToP-ToM: Trust-aware Robot Policy with Theory of Mind
Abstract: Theory of Mind (ToM) is a fundamental cognitive architecture that endows
humans with the ability to attribute mental states to others. Humans infer the
desires, beliefs, and intentions of others by observing their behavior and, in
turn, adjust their actions to facilitate better interpersonal communication and
team collaboration. In this paper, we investigated trust-aware robot policy
with the theory of mind in a multiagent setting where a human collaborates with
a robot against another human opponent. We show that by only focusing on team
performance, the robot may resort to the reverse psychology trick, which poses
a significant threat to trust maintenance. The human's trust in the robot will
collapse when they discover deceptive behavior by the robot. To mitigate this
problem, we adopt the robot theory of mind model to infer the human's trust
beliefs, including true belief and false belief (an essential element of ToM).
We designed a dynamic trust-aware reward function based on different trust
beliefs to guide the robot policy learning, which aims to balance between
avoiding human trust collapse due to robot reverse psychology. The experimental
results demonstrate the importance of the ToM-based robot policy for
human-robot trust and the effectiveness of our robot ToM-based robot policy in
multiagent interaction settings. | Robotics |
What field is the article from? | Title: Evaluating Neural Language Models as Cognitive Models of Language Acquisition
Abstract: The success of neural language models (LMs) on many technological tasks has
brought about their potential relevance as scientific theories of language
despite some clear differences between LM training and child language
acquisition. In this paper we argue that some of the most prominent benchmarks
for evaluating the syntactic capacities of LMs may not be sufficiently
rigorous. In particular, we show that the template-based benchmarks lack the
structural diversity commonly found in the theoretical and psychological
studies of language. When trained on small-scale data modeling child language
acquisition, the LMs can be readily matched by simple baseline models. We
advocate for the use of the readily available, carefully curated datasets that
have been evaluated for gradient acceptability by large pools of native
speakers and are designed to probe the structural basis of grammar
specifically. On one such dataset, the LI-Adger dataset, LMs evaluate sentences
in a way inconsistent with human language users. We conclude with suggestions
for better connecting LMs with the empirical study of child language
acquisition. | Computational Linguistics |
What field is the article from? | Title: Testing Language Model Agents Safely in the Wild
Abstract: A prerequisite for safe autonomy-in-the-wild is safe testing-in-the-wild. Yet
real-world autonomous tests face several unique safety challenges, due both to
the possibility of causing harm during a test and to the risk of
encountering new unsafe agent behavior through interactions with real-world and
potentially malicious actors. We propose a framework for conducting safe
autonomous agent tests on the open internet: agent actions are audited by a
context-sensitive monitor that enforces a stringent safety boundary to stop an
unsafe test, with suspect behavior ranked and logged to be examined by humans.
We design a basic safety monitor (AgentMonitor) that is flexible enough to
monitor existing LLM agents, and, using an adversarial simulated agent, we
measure its ability to identify and stop unsafe situations. Then we apply the
AgentMonitor on a battery of real-world tests of AutoGPT, and we identify
several limitations and challenges that will face the creation of safe
in-the-wild tests as autonomous agents grow more capable. | Artificial Intelligence |
What field is the article from? | Title: Don't Make Your LLM an Evaluation Benchmark Cheater
Abstract: Large language models~(LLMs) have greatly advanced the frontiers of
artificial intelligence, attaining remarkable improvement in model capacity. To
assess the model performance, a typical approach is to construct evaluation
benchmarks for measuring the ability level of LLMs in different aspects.
Although a number of high-quality benchmarks have been released, concerns
about the appropriate use of these benchmarks and the fair comparison of
different models are growing. Considering these concerns, in this paper, we
discuss the potential risk and impact of inappropriately using evaluation
benchmarks and misleadingly interpreting the evaluation results. Specifically,
we focus on one issue that leads to inappropriate evaluation, \ie
\emph{benchmark leakage}, meaning that data related to evaluation sets is
occasionally used for model training. This phenomenon has become more common
since pre-training data is often prepared before model testing. We conduct
extensive experiments to study the effect of benchmark leakage, and find that
it can dramatically boost evaluation results, ultimately leading to an
unreliable assessment of model performance. To improve
the use of existing evaluation benchmarks, we finally present several
guidelines for both LLM developers and benchmark maintainers. We hope this work
can draw attention to appropriate training and evaluation of LLMs. | Computational Linguistics |
What field is the article from? | Title: Structured Chemistry Reasoning with Large Language Models
Abstract: This paper studies the problem of solving complex chemistry problems with
large language models (LLMs). Despite the extensive general knowledge in LLMs
(such as GPT-4), they struggle with chemistry reasoning that requires faithful
grounded reasoning with diverse chemical knowledge and an integrative
understanding of chemical interactions. We propose InstructChem, a new
structured reasoning approach that substantially boosts the LLMs' chemical
reasoning capabilities. InstructChem explicitly decomposes the reasoning into
three critical phases, including chemical formulae generation by LLMs that
offers the basis for subsequent grounded reasoning, step-by-step reasoning that
makes multi-step derivations with the identified formulae for a preliminary
answer, and iterative review-and-refinement that steers LLMs to progressively
revise the previous phases for increasing confidence, leading to the final
high-confidence answer. We conduct extensive experiments on four different
chemistry challenges, including quantum chemistry, quantum mechanics, physical
chemistry, and chemical kinetics. Our approach significantly enhances GPT-4 on
chemistry reasoning, yielding an 8% average absolute improvement and a 30% peak
improvement. We further use the generated reasoning by GPT-4 to fine-tune
smaller LMs (e.g., Vicuna) and observe strong improvement of the smaller LMs.
This validates our approach and enables LLMs to generate high-quality
reasoning. | Computational Linguistics |
What field is the article from? | Title: TabRepo: A Large Scale Repository of Tabular Model Evaluations and its AutoML Applications
Abstract: We introduce TabRepo, a new dataset of tabular model evaluations and
predictions. TabRepo contains the predictions and metrics of 1206 models
evaluated on 200 regression and classification datasets. We illustrate the
benefit of our dataset in multiple ways. First, we show that it allows one to
perform analyses such as comparing Hyperparameter Optimization against current
AutoML systems while also considering ensembling at no cost by using
precomputed model predictions. Second, we show that our dataset can be readily
leveraged to perform transfer learning. In particular, we show that applying
standard transfer-learning techniques allows us to outperform current
state-of-the-art tabular systems in accuracy, runtime, and latency. | Machine Learning |
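The "ensembling at no cost" claim works because ensemble search only needs the cached validation predictions, never retraining. A sketch of greedy (Caruana-style) ensemble selection over such a prediction cache; the array shapes and round count are illustrative assumptions:

```python
import numpy as np

def greedy_ensemble(pred_cache: np.ndarray, y_val: np.ndarray, rounds: int = 25):
    """pred_cache: (n_models, n_rows, n_classes) cached validation predictions.
    Repeatedly add (with replacement) the model that most improves the
    ensemble's validation loss -- no model is ever retrained."""
    chosen, ensemble = [], np.zeros_like(pred_cache[0])
    for _ in range(rounds):
        losses = []
        for m in range(len(pred_cache)):
            mix = (ensemble * len(chosen) + pred_cache[m]) / (len(chosen) + 1)
            # negative log-likelihood of the true class
            losses.append(-np.log(mix[np.arange(len(y_val)), y_val] + 1e-12).mean())
        best = int(np.argmin(losses))
        chosen.append(best)
        ensemble = (ensemble * (len(chosen) - 1) + pred_cache[best]) / len(chosen)
    return chosen, ensemble
```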
What field is the article from? | Title: PARK: Parkinson's Analysis with Remote Kinetic-tasks
Abstract: We present a web-based framework to screen for Parkinson's disease (PD) by
allowing users to perform neurological tests in their homes. Our web framework
guides the users to complete three tasks involving speech, facial expression,
and finger movements. The task videos are analyzed to classify whether the
users show signs of PD. We present the results in an easy-to-understand manner,
along with personalized resources that facilitate further access to treatment
and care. Our framework is accessible from any major web browser, improving global access to
neurological care. | Human-Computer Interaction |
What field is the article from? | Title: Fully Quantized Always-on Face Detector Considering Mobile Image Sensors
Abstract: Despite significant research on lightweight deep neural networks (DNNs)
designed for edge devices, the current face detectors do not fully meet the
requirements for "intelligent" CMOS image sensors (iCISs) integrated with
embedded DNNs. These sensors are essential in various practical applications,
such as energy-efficient mobile phones and surveillance systems with always-on
capabilities. One noteworthy limitation is the absence of suitable face
detectors for the always-on scenario, a crucial aspect of image sensor-level
applications. These detectors must operate directly with sensor RAW data before
the image signal processor (ISP) takes over. This gap poses a significant
challenge in achieving optimal performance in such scenarios. Further research
and development are necessary to bridge this gap and fully leverage the
potential of iCIS applications. In this study, we aim to bridge the gap by
exploring extremely low-bit lightweight face detectors, focusing on the
always-on face detection scenario for mobile image sensor applications. To
achieve this, our proposed model utilizes sensor-aware synthetic RAW inputs,
simulating always-on face detection processed "before" the ISP chain. Our
approach employs ternary (-1, 0, 1) weights for potential implementations in
image sensors, resulting in a relatively simple network architecture with
shallow layers and extremely low-bitwidth. Our method demonstrates reasonable
face detection performance and excellent efficiency in simulation studies,
offering promising possibilities for practical always-on face detectors in
real-world applications. | Computer Vision |
What field is the article from? | Title: A Review of Hybrid and Ensemble in Deep Learning for Natural Language Processing
Abstract: This review presents a comprehensive exploration of hybrid and ensemble deep
learning models within Natural Language Processing (NLP), shedding light on
their transformative potential across diverse tasks such as Sentiment Analysis,
Named Entity Recognition, Machine Translation, Question Answering, Text
Classification, Generation, Speech Recognition, Summarization, and Language
Modeling. The paper systematically introduces each task, delineates key
architectures from Recurrent Neural Networks (RNNs) to Transformer-based models
like BERT, and evaluates their performance, challenges, and computational
demands. The adaptability of ensemble techniques is emphasized, highlighting
their capacity to enhance various NLP applications. Challenges in
implementation, including computational overhead, overfitting, and model
interpretation complexities, are addressed alongside the trade-off between
interpretability and performance. Serving as a concise yet invaluable guide,
this review synthesizes insights into tasks, architectures, and challenges,
offering a holistic perspective for researchers and practitioners aiming to
advance language-driven applications through ensemble deep learning in NLP. | Artificial Intelligence |
What field is the article from? | Title: GCPV: Guided Concept Projection Vectors for the Explainable Inspection of CNN Feature Spaces
Abstract: For debugging and verification of computer vision convolutional deep neural
networks (CNNs), human inspection of the learned latent representations is
imperative. Therefore, state-of-the-art eXplainable Artificial Intelligence
(XAI) methods globally associate given natural language semantic concepts with
representing vectors or regions in the CNN latent space, supporting manual
inspection. Yet, this approach comes with two major disadvantages: it is
locally inaccurate when reconstructing a concept label, and it discards
information about the distribution of concept instance representations. The latter, though,
is of particular interest for debugging, like finding and understanding
outliers, learned notions of sub-concepts, and concept confusion. Furthermore,
current single-layer approaches neglect that information about a concept may be
spread over the CNN depth. To overcome these shortcomings, we introduce the
local-to-global Guided Concept Projection Vectors (GCPV) approach: It (1)
generates local concept vectors that each precisely reconstruct a concept
segmentation label, and then (2) generalizes these to global concept and even
sub-concept vectors by means of hierarchical clustering. Our experiments on
object detectors demonstrate improved performance compared to the
state-of-the-art, the benefit of multi-layer concept vectors, and robustness
against low-quality concept segmentation labels. Finally, we demonstrate that
GCPVs can be applied to find root causes for confusion of concepts like bus and
truck, and reveal interesting concept-level outliers. Thus, GCPVs pose a
promising step towards interpretable model debugging and informed data
improvement. | Computer Vision |
What field is the article from? | Title: NeuroPrompts: An Adaptive Framework to Optimize Prompts for Text-to-Image Generation
Abstract: Despite impressive recent advances in text-to-image diffusion models,
obtaining high-quality images often requires prompt engineering by humans who
have developed expertise in using them. In this work, we present NeuroPrompts,
an adaptive framework that automatically enhances a user's prompt to improve
the quality of generations produced by text-to-image models. Our framework
utilizes constrained text decoding with a pre-trained language model that has
been adapted to generate prompts similar to those produced by human prompt
engineers. This approach enables higher-quality text-to-image generations and
provides user control over stylistic features via constraint set specification.
We demonstrate the utility of our framework by creating an interactive
application for prompt enhancement and image generation using Stable Diffusion.
Additionally, we conduct experiments utilizing a large dataset of
human-engineered prompts for text-to-image generation and show that our
approach automatically produces enhanced prompts that result in superior image
quality. We make our code, a screencast video demo and a live demo instance of
NeuroPrompts publicly available. | Artificial Intelligence |
What field is the article from? | Title: PELMS: Pre-training for Effective Low-Shot Multi-Document Summarization
Abstract: We investigate pre-training techniques for abstractive multi-document
summarization (MDS), which is much less studied than summarizing single
documents. Though recent work has demonstrated the effectiveness of
highlighting information salience for pre-training strategy design, it
struggles to generate abstractive and reflective summaries, which are critical
properties for MDS. To this end, we present PELMS, a pre-trained model that
uses objectives based on semantic coherence heuristics and faithfulness
constraints with unlabeled multi-document inputs, to promote the generation of
concise, fluent, and faithful summaries. To support the training of PELMS, we
compile MultiPT, a multi-document pre-training corpus containing over 93
million documents to form more than 3 million unlabeled topic-centric document
clusters, covering diverse genres such as product reviews, news, and general
knowledge. We perform extensive evaluation of PELMS in low-shot settings on a
wide range of MDS datasets. Our approach consistently outperforms competitive
comparisons with respect to overall informativeness, abstractiveness,
coherence, and faithfulness. | Computational Linguistics |
What field is the article from? | Title: Causal-CoG: A Causal-Effect Look at Context Generation for Boosting Multi-modal Language Models
Abstract: While Multi-modal Language Models (MLMs) demonstrate impressive multimodal
ability, they still struggle to provide factual and precise responses for
tasks like visual question answering (VQA). In this paper, we address this
challenge from the perspective of contextual information. We propose Causal
Context Generation, Causal-CoG, which is a prompting strategy that engages
contextual information to enhance precise VQA during inference. Specifically,
we prompt MLMs to generate contexts, i.e., text descriptions of an image, and
engage the generated contexts for question answering. Moreover, we investigate
the advantage of contexts on VQA from a causality perspective, introducing
causality filtering to select samples for which contextual information is
helpful. To show the effectiveness of Causal-CoG, we run extensive experiments
on 10 multimodal benchmarks and show consistent improvements, e.g., +6.30% on
POPE, +13.69% on Vizwiz and +6.43% on VQAv2 compared to direct decoding,
surpassing existing methods. We hope Causal-CoG inspires explorations of
context knowledge in multimodal models, and serves as a plug-and-play strategy
for MLM decoding. | Artificial Intelligence |
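The prompting strategy in the Causal-CoG abstract is a two-step decode: first elicit a textual context for the image, then answer conditioned on it. A minimal sketch, where `mlm_generate(image, prompt) -> str` is a hypothetical stand-in for the MLM interface and the causality filtering step is omitted:

```python
def causal_cog_answer(mlm_generate, image, question):
    """Causal-CoG-style decoding sketch: generate a context for the
    image, then condition the final answer on that generated context."""
    context = mlm_generate(image, "Describe this image in detail.")
    prompt = (f"Context: {context}\n"
              f"Question: {question}\n"
              "Answer the question using the context and the image.")
    return mlm_generate(image, prompt)
```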
What field is the article from? | Title: Transferring Modality-Aware Pedestrian Attentive Learning for Visible-Infrared Person Re-identification
Abstract: Visible-infrared person re-identification (VI-ReID) aims to search the same
pedestrian of interest across visible and infrared modalities. Existing models
mainly focus on compensating for modality-specific information to reduce
modality variation. However, these methods often lead to a higher computational
overhead and may introduce interfering information when generating the
corresponding images or features. To address this issue, it is critical to
leverage pedestrian-attentive features and learn modality-complete and
-consistent representation. In this paper, a novel Transferring Modality-Aware
Pedestrian Attentive Learning (TMPA) model is proposed, focusing on the
pedestrian regions to efficiently compensate for missing modality-specific
features. Specifically, we propose a region-based data augmentation module
PedMix to enhance pedestrian region coherence by mixing the corresponding
regions from different modalities. A lightweight hybrid compensation module,
i.e., the Modality Feature Transfer (MFT), is devised to integrate cross
attention and convolution networks to fully explore the discriminative
modality-complete features with minimal computational overhead. Extensive
experiments conducted on the benchmark SYSU-MM01 and RegDB datasets
demonstrated the effectiveness of our proposed TMPA model. | Computer Vision |
What field is the article from? | Title: D3A-TS: Denoising-Driven Data Augmentation in Time Series
Abstract: It has been demonstrated that the amount of data is crucial in data-driven
machine learning methods. Data is always valuable, but in some tasks, it is
almost like gold. This occurs in engineering areas where data is scarce or very
expensive to obtain, such as predictive maintenance, where faults are rare. In
this context, a mechanism to generate synthetic data can be very useful. While
in fields such as Computer Vision or Natural Language Processing synthetic data
generation has been extensively explored with promising results, in other
domains such as time series it has received less attention. This work
specifically focuses on studying and analyzing the use of different techniques
for data augmentation in time series for classification and regression
problems. The proposed approach involves the use of diffusion probabilistic
models, which have recently achieved successful results in the field of Image
Processing, for data augmentation in time series. Additionally, the use of
meta-attributes to condition the data augmentation process is investigated. The
results highlight the high utility of this methodology in creating synthetic
data to train classification and regression models. To assess the results, six
different datasets from diverse domains were employed, showcasing versatility
in terms of input size and output types. Finally, an extensive ablation study
is conducted to further support the obtained outcomes. | Artificial Intelligence |
What field is the article from? | Title: Findings of the WMT 2023 Shared Task on Discourse-Level Literary Translation: A Fresh Orb in the Cosmos of LLMs
Abstract: Translating literary works has perennially stood as an elusive dream in
machine translation (MT), a journey steeped in intricate challenges. To foster
progress in this domain, we hold a new shared task at WMT 2023, the first
edition of the Discourse-Level Literary Translation. First, we (Tencent AI Lab
and China Literature Ltd.) release a copyrighted and document-level
Chinese-English web novel corpus. Furthermore, we put forth industry-endorsed
criteria to guide the human evaluation process. This year, we received a total
of 14 submissions from 7 academia and industry teams. We employ
both automatic and human evaluations to measure the performance of the
submitted systems. The official ranking of the systems is based on the overall
human judgments. In addition, our extensive analysis reveals a series of
interesting findings on literary and discourse-aware MT. We release data,
system outputs, and leaderboard at
http://www2.statmt.org/wmt23/literary-translation-task.html. | Computational Linguistics |
What field is the article from? | Title: UINav: A maker of UI automation agents
Abstract: An automation system that can execute natural language instructions by
driving the user interface (UI) of an application can benefit users, especially
when situationally or permanently impaired. Traditional automation systems
(manual scripting, programming by demonstration tools, etc.) do not produce
generalizable models that can tolerate changes in the UI or task workflow.
Machine-learned automation agents generalize better, but either work only in
simple, hand-crafted applications or rely on large pre-trained models, which
may be too computationally expensive to run on mobile devices. In this paper,
we propose \emph{UINav}, a demonstration-based agent maker system. UINav agents
are lightweight enough to run on mobile devices, yet they achieve high success
rates with a modest number of task demonstrations. To minimize the number of
task demonstrations, UINav includes a referee model that gives users immediate
feedback on tasks where the agent is failing, to best guide
efforts to collect additional demonstrations. Further, UINav adopts macro
actions to reduce an agent's state space, and augments human demonstrations to
increase the diversity of training data. Our evaluation demonstrates that with
an average of 10 demonstrations per task UINav can achieve an accuracy of 70\%
or higher, and that with enough demonstrations it can achieve near-perfect
success rates on 40+ different tasks. | Human-Computer Interaction |
What field is the article from? | Title: Advancing Urban Renewal: An Automated Approach to Generating Historical Arcade Facades with Stable Diffusion Models
Abstract: Urban renewal and transformation processes necessitate the preservation of
the historical urban fabric, particularly in districts known for their
architectural and historical significance. These regions, with their diverse
architectural styles, have traditionally required extensive preliminary
research, often leading to subjective results. However, the advent of machine
learning models has opened up new avenues for generating building facade
images. Despite this, creating high-quality images for historical district
renovations remains challenging, due to the complexity and diversity inherent
in such districts. In response to these challenges, our study introduces a new
methodology for automatically generating images of historical arcade facades,
utilizing Stable Diffusion models conditioned on textual descriptions. By
classifying and tagging a variety of arcade styles, we have constructed several
realistic arcade facade image datasets. We trained multiple low-rank adaptation
(LoRA) models to control the stylistic aspects of the generated images,
supplemented by ControlNet models for improved precision and authenticity. Our
approach has demonstrated high levels of precision, authenticity, and diversity
in the generated images, showing promising potential for real-world urban
renewal projects. This new methodology offers a more efficient and accurate
alternative to conventional design processes in urban renewal, bypassing issues
of unconvincing image details, lack of precision, and limited stylistic
variety. Future research could focus on integrating this two-dimensional image
generation with three-dimensional modeling techniques, providing a more
comprehensive solution for renovating architectural facades in historical
districts. | Computer Vision |
What field is the article from? | Title: Technical Note: Feasibility of translating 3.0T-trained Deep-Learning Segmentation Models Out-of-the-Box on Low-Field MRI 0.55T Knee-MRI of Healthy Controls
Abstract: In the current study, our purpose is to evaluate the feasibility of applying
deep learning (DL) enabled algorithms to quantify bilateral knee biomarkers in
healthy controls scanned at 0.55T and compared with 3.0T. The current study
assesses the performance of standard in-practice bone, and cartilage
segmentation algorithms at 0.55T, both qualitatively and quantitatively, in
terms of comparing segmentation performance, areas of improvement, and
compartment-wise cartilage thickness values between 0.55T vs. 3.0T. Initial
results demonstrate usable-to-good technical feasibility of translating
existing quantitative deep-learning-based image segmentation techniques,
trained on 3.0T, out-of-the-box to 0.55T for knee MRI, in a multi-vendor acquisition
environment. Especially in terms of segmenting cartilage compartments, the
models perform almost equivalent to 3.0T in terms of Likert ranking. The 0.55T
low-field sustainable and easy-to-install MRI, as demonstrated, thus, can be
utilized for evaluating knee cartilage thickness and bone segmentations aided
by established DL algorithms trained at higher-field strengths out-of-the-box
initially. This could be utilized at the far-spread point-of-care locations
with a lack of radiologists available to manually segment low-field images, at
least till a decent base of low-field data pool is collated. With further
fine-tuning with manual labeling of low-field data or utilizing synthesized
higher-SNR images from low-field images, OA biomarker quantification
performance could potentially be further improved. | Computer Vision |
What field is the article from? | Title: Context Matter: Data-Efficient Augmentation of Large Language Models for Scientific Applications
Abstract: In this paper, we explore the challenges inherent to Large Language Models
(LLMs) like GPT-4, particularly their propensity for hallucinations, logic
mistakes, and incorrect conclusions when tasked with answering complex
questions. The capacity of LLMs to present erroneous answers in a coherent and
semantically rigorous manner further complicates the detection of factual
inaccuracies. This issue is especially pronounced in fields that require
specialized expertise. Our work delves into these challenges, aiming to enhance
the understanding and mitigation of such errors, thereby contributing to the
improvement of LLM accuracy and reliability in scientific and other specialized
domains. Our findings reveal a non-linear relationship between the context's
relevancy and the answers' measured quality. In addition, we demonstrate that
with the correct calibration, it is possible to automate the grading procedure
-- a finding suggesting that, at least to some degree, the LLMs can be used to
self-examine the quality of their own performance. Finally, we describe an
experimental platform that can be seen as a proof-of-concept of the techniques
described in this work. | Computational Linguistics |
What field is the article from? | Title: A Survey of Language Model Confidence Estimation and Calibration
Abstract: Language models (LMs) have demonstrated remarkable capabilities across a wide
range of tasks in various domains. Despite their impressive performance, the
reliability of their outputs is a concern given the demands of AI
safety. Assessing the confidence of LM predictions and calibrating them
across different tasks with the aim to align LM confidence with accuracy can
help mitigate risks and enable LMs to make better decisions. There have been
various works in this respect, but there has been no comprehensive overview of
this important research area. The present survey aims to bridge this gap. In
particular, we discuss methods and techniques for LM confidence estimation and
calibration, encompassing different LMs and various tasks. We further outline
the challenges of estimating the confidence for large language models and we
suggest some promising directions for future work. | Computational Linguistics |
What field is the article from? | Title: Aligner: One Global Token is Worth Millions of Parameters When Aligning Large Language Models
Abstract: We introduce Aligner, a novel Parameter-Efficient Fine-Tuning (PEFT) method
for aligning multi-billion-parameter-sized Large Language Models (LLMs).
Aligner employs a unique design that constructs a globally shared set of
tunable tokens that modify the attention of every layer. Remarkably with this
method, even when using one token accounting for a mere 5,000 parameters,
Aligner can still perform comparably well to state-of-the-art LLM adaptation
methods like LoRA that require millions of parameters. This capacity is
substantiated in both instruction following and value alignment tasks. Besides
the multiple order-of-magnitude improvement in parameter efficiency, the
insight Aligner provides into the internal mechanisms of LLMs is also valuable.
The architectural features and efficacy of our method, together with our
experiments, demonstrate that an LLM separates its internal handling of "form"
and "knowledge" in a somewhat orthogonal manner. This finding promises to
motivate new research into LLM mechanism understanding and value alignment. | Computational Linguistics |
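The core mechanism, a globally shared set of tunable tokens that modify every layer's attention, can be sketched as one trainable vector appended to each layer's keys and values while the base model stays frozen. The shapes and the shared key/value use below are simplifying assumptions, not the paper's exact design:

```python
import torch
import torch.nn as nn

class GlobalAlignerToken(nn.Module):
    """One shared trainable token injected as an extra key/value entry
    in every attention layer of a frozen LLM; with a single token the
    tunable parameter count is on the order of the hidden size."""
    def __init__(self, hidden: int):
        super().__init__()
        self.token = nn.Parameter(torch.zeros(1, 1, hidden))

    def extend_kv(self, k: torch.Tensor, v: torch.Tensor):
        # k, v: (batch, seq, hidden); prepend the shared global token
        g = self.token.expand(k.size(0), -1, -1)
        return torch.cat([g, k], dim=1), torch.cat([g, v], dim=1)
```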
What field is the article from? | Title: Augmentation-Free Dense Contrastive Knowledge Distillation for Efficient Semantic Segmentation
Abstract: In recent years, knowledge distillation methods based on contrastive learning
have achieved promising results on image classification and object detection
tasks. However, in this line of research, we note that less attention is paid
to semantic segmentation. Existing methods heavily rely on data augmentation
and memory buffers, which entail high computational resource demands when
applied to semantic segmentation, a task that requires preserving
high-resolution feature maps for making dense pixel-wise predictions. In order
to address this problem, we present Augmentation-free Dense Contrastive
Knowledge Distillation (Af-DCD), a new contrastive distillation learning
paradigm to train compact and accurate deep neural networks for semantic
segmentation applications. Af-DCD leverages a masked feature mimicking
strategy, and formulates a novel contrastive learning loss by taking advantage
of tactful feature partitions across both channel and spatial dimensions,
allowing to effectively transfer dense and structured local knowledge learnt by
the teacher model to a target student model while maintaining training
efficiency. Extensive experiments on five mainstream benchmarks with various
teacher-student network pairs demonstrate the effectiveness of our approach.
For instance, the DeepLabV3-Res18|DeepLabV3-MBV2 model trained by Af-DCD
reaches 77.03%|76.38% mIOU on Cityscapes dataset when choosing DeepLabV3-Res101
as the teacher, setting new performance records. Besides that, Af-DCD achieves
an absolute mIOU improvement of 3.26%|3.04%|2.75%|2.30%|1.42% compared with
individually trained counterpart on Cityscapes|Pascal
VOC|Camvid|ADE20K|COCO-Stuff-164K. Code is available at
https://github.com/OSVAI/Af-DCD | Computer Vision |
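A dense contrastive distillation loss of the kind described above can be sketched without augmentations or a memory buffer: each student pixel treats the teacher feature at the same location as its positive and all other teacher pixels as negatives. Af-DCD's channel/spatial feature partitions are omitted here, and for real segmentation resolutions the full HW x HW logits matrix would need tiling:

```python
import torch
import torch.nn.functional as F

def dense_contrastive_kd(f_s, f_t, tau=0.1):
    """f_s, f_t: (B, C, H, W) student/teacher feature maps. Each student
    pixel should match the teacher pixel at the same location against
    all other teacher pixels in the image."""
    b, c, h, w = f_s.shape
    s = F.normalize(f_s.flatten(2).transpose(1, 2), dim=-1)  # (B, HW, C)
    t = F.normalize(f_t.flatten(2).transpose(1, 2), dim=-1)
    logits = s @ t.transpose(1, 2) / tau                     # (B, HW, HW)
    target = torch.arange(h * w, device=f_s.device).expand(b, -1)
    return F.cross_entropy(logits.reshape(b * h * w, h * w),
                           target.reshape(-1))
```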
What field is the article from? | Title: Newvision: application for helping blind people using deep learning
Abstract: As able-bodied people, we often take our vision for granted. For people who
are visually impaired, however, their disability can have a significant impact
on their daily lives. We are developing proprietary headgear that will help
visually impaired people navigate their surroundings, identify objects and
people, read text, and avoid obstacles. The headgear will use a combination of
computer vision, distance estimation with ultrasonic sensors, voice
recognition, and voice assistants to provide users with real-time information
about their environment. Users will be able to interact with the headgear
through voice commands, such as ''What is that?'' to identify an object or
''Navigate to the front door'' to find their way around. The headgear will then
provide the user with a verbal description of the object or spoken navigation
instructions. We believe that this headgear has the potential to make a
significant difference in the lives of visually impaired people, allowing them
to live more independently and participate more fully in society. | Human-Computer Interaction |
What field is the article from? | Title: NeuSD: Surface Completion with Multi-View Text-to-Image Diffusion
Abstract: We present a novel method for 3D surface reconstruction from multiple images
where only a part of the object of interest is captured. Our approach builds on
two recent developments: surface reconstruction using neural radiance fields
for the reconstruction of the visible parts of the surface, and guidance of
pre-trained 2D diffusion models in the form of Score Distillation Sampling
(SDS) to complete the shape in unobserved regions in a plausible manner. We
introduce three components. First, we suggest employing normal maps as a pure
geometric representation for SDS instead of color renderings which are
entangled with the appearance information. Second, we introduce the freezing of
the SDS noise during training which results in more coherent gradients and
better convergence. Third, we propose Multi-View SDS as a way to condition the
generation of the non-observable part of the surface without fine-tuning or
making changes to the underlying 2D Stable Diffusion model. We evaluate our
approach on the BlendedMVS dataset demonstrating significant qualitative and
quantitative improvements over competing methods. | Computer Vision |
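Score Distillation Sampling, the guidance mechanism NeuSD builds on, can be sketched compactly: noise a differentiable rendering (per the first contribution, a normal map rather than a color image), ask the frozen diffusion model to denoise it, and push the rendering toward what the model expects. The weighting w(t) = 1 - alpha_bar_t and the timestep range are common conventions rather than NeuSD's exact choices; `diffusion_eps` is a hypothetical noise-predictor hook:

```python
import torch
import torch.nn.functional as F

def sds_loss(diffusion_eps, rendering, text_emb, alphas_bar):
    """SDS sketch: nudge a differentiable rendering toward images the
    frozen 2D diffusion model finds plausible for the text condition."""
    t = torch.randint(20, 980, (1,), device=rendering.device)
    noise = torch.randn_like(rendering)   # NeuSD freezes this across steps
    a = alphas_bar[t].view(-1, 1, 1, 1)
    x_t = a.sqrt() * rendering + (1 - a).sqrt() * noise
    with torch.no_grad():
        eps_pred = diffusion_eps(x_t, t, text_emb)
    grad = (1 - a) * (eps_pred - noise)   # assumed w(t) = 1 - alpha_bar_t
    target = (rendering - grad).detach()
    # gradient of this MSE w.r.t. the rendering equals `grad`
    return 0.5 * F.mse_loss(rendering, target, reduction="sum")
```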
What field is the article from? | Title: Intriguing Properties of Data Attribution on Diffusion Models
Abstract: Data attribution seeks to trace model outputs back to training data. With the
recent development of diffusion models, data attribution has become a desired
module to properly assign valuations for high-quality or copyrighted training
samples, ensuring that data contributors are fairly compensated or credited.
Several theoretically motivated methods have been proposed to implement data
attribution, in an effort to improve the trade-off between computational
scalability and effectiveness. In this work, we conduct extensive experiments
and ablation studies on attributing diffusion models, specifically focusing on
DDPMs trained on CIFAR-10 and CelebA, as well as a Stable Diffusion model
LoRA-finetuned on ArtBench. Intriguingly, we report counter-intuitive
observations that theoretically unjustified design choices for attribution
empirically outperform previous baselines by a large margin, in terms of both
linear datamodeling score and counterfactual evaluation. Our work presents a
significantly more efficient approach for attributing diffusion models, while
the unexpected findings suggest that at least in non-convex settings,
constructions guided by theoretical assumptions may lead to inferior
attribution performance. The code is available at
https://github.com/sail-sg/D-TRAK. | Machine Learning |
What field is the article from? | Title: LoRA Fine-tuning Efficiently Undoes Safety Training in Llama 2-Chat 70B
Abstract: AI developers often apply safety alignment procedures to prevent the misuse
of their AI systems. For example, before Meta released Llama 2-Chat, a
collection of instruction fine-tuned large language models, they invested
heavily in safety training, incorporating extensive red-teaming and
reinforcement learning from human feedback. However, it remains unclear how
well safety training guards against model misuse when attackers have access to
model weights. We explore the robustness of safety training in language models
by subversively fine-tuning the public weights of Llama 2-Chat. We employ
low-rank adaptation (LoRA) as an efficient fine-tuning method. With a budget of
less than $200 per model and using only one GPU, we successfully undo the
safety training of Llama 2-Chat models of sizes 7B, 13B, and 70B. Specifically,
our fine-tuning technique significantly reduces the rate at which the model
refuses to follow harmful instructions. We achieve a refusal rate below 1% for
our 70B Llama 2-Chat model on two refusal benchmarks. Our fine-tuning method
retains general performance, which we validate by comparing our fine-tuned
models against Llama 2-Chat across two benchmarks. Additionally, we present a
selection of harmful outputs produced by our models. While there is
considerable uncertainty about the scope of risks from current models, it is
likely that future models will have significantly more dangerous capabilities,
including the ability to hack into critical infrastructure, create dangerous
bio-weapons, or autonomously replicate and adapt to new environments. We show
that subversive fine-tuning is practical and effective, and hence argue that
evaluating risks from fine-tuning should be a core part of risk assessments for
releasing model weights. | Machine Learning |
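Low-rank adaptation, the fine-tuning method used here, trains only a small additive update to each frozen weight matrix, which is what makes a sub-$200, single-GPU run feasible. A standard sketch; the rank and scaling values are illustrative:

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base weight plus a trainable low-rank update:
    y = base(x) + (alpha / r) * x A^T B^T. Only A and B are trained."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # starts as no-op
        self.scale = alpha / r

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```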
What field is the article from? | Title: DiT-Head: High-Resolution Talking Head Synthesis using Diffusion Transformers
Abstract: We propose a novel talking head synthesis pipeline called "DiT-Head", which
is based on diffusion transformers and uses audio as a condition to drive the
denoising process of a diffusion model. Our method is scalable and can
generalise to multiple identities while producing high-quality results. We
train and evaluate our proposed approach and compare it against existing
methods of talking head synthesis. We show that our model can compete with
these methods in terms of visual quality and lip-sync accuracy. Our results
highlight the potential of our proposed approach to be used for a wide range of
applications, including virtual assistants, entertainment, and education. For a
video demonstration of the results and our user study, please refer to our
supplementary material. | Artificial Intelligence |
What field is the article from? | Title: TransNeXt: Robust Foveal Visual Perception for Vision Transformers
Abstract: Due to the depth degradation effect in residual connections, many efficient
Vision Transformers models that rely on stacking layers for information
exchange often fail to form sufficient information mixing, leading to unnatural
visual perception. To address this issue, in this paper, we propose Aggregated
Attention, a biomimetic design-based token mixer that simulates biological
foveal vision and continuous eye movement while enabling each token on the
feature map to have a global perception. Furthermore, we incorporate learnable
tokens that interact with conventional queries and keys, which further
diversifies the generation of affinity matrices beyond merely relying on the
similarity between queries and keys. Our approach does not rely on stacking for
information exchange, thus effectively avoiding depth degradation and achieving
natural visual perception. Additionally, we propose Convolutional GLU, a
channel mixer that bridges the gap between GLU and the SE mechanism, which empowers
each token to have channel attention based on its nearest neighbor image
features, enhancing local modeling capability and model robustness. We combine
aggregated attention and convolutional GLU to create a new visual backbone
called TransNeXt. Extensive experiments demonstrate that our TransNeXt achieves
state-of-the-art performance across multiple model sizes. At a resolution of
$224^2$, TransNeXt-Tiny attains an ImageNet accuracy of 84.0%, surpassing
ConvNeXt-B with 69% fewer parameters. Our TransNeXt-Base achieves an ImageNet
accuracy of 86.2% and an ImageNet-A accuracy of 61.6% at a resolution of
$384^2$, a COCO object detection mAP of 57.1, and an ADE20K semantic
segmentation mIoU of 54.7. | Computer Vision |
What field is the article from? | Title: TransCORALNet: A Two-Stream Transformer CORAL Networks for Supply Chain Credit Assessment Cold Start
Abstract: This paper proposes an interpretable two-stream transformer CORAL networks
(TransCORALNet) for supply chain credit assessment under the segment industry
and cold start problem. The model aims to provide accurate credit assessment
prediction for new supply chain borrowers with limited historical data. Here,
the two-stream domain adaptation architecture with correlation alignment
(CORAL) loss is used as the core model and is equipped with a transformer,
which provides insights into the learned features and allows efficient
parallelization during training. Thanks to the domain adaptation capability of
the proposed model, the domain shift between the source and target domain is
minimized. Therefore, the model exhibits good generalization where the source
and target do not follow the same distribution, and a limited amount of target
labeled instances exist. Furthermore, we employ Local Interpretable
Model-agnostic Explanations (LIME) to provide more insight into the model
prediction and identify the key features contributing to supply chain credit
assessment decisions. The proposed model addresses four significant supply
chain credit assessment challenges: domain shift, cold start, class imbalance,
and interpretability. Experimental results on a real-world dataset demonstrate
the superiority of TransCORALNet over a number of state-of-the-art baselines in
terms of accuracy. The code is available on GitHub
https://github.com/JieJieNiu/TransCORALN . | Machine Learning |
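The correlation alignment (CORAL) loss at the model's core penalizes the distance between source- and target-domain feature covariances, which is what minimizes the domain shift for cold-start borrowers. A standard sketch of the loss:

```python
import torch

def coral_loss(source: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    """CORAL: align second-order statistics of the two domains.
    source: (n_s, d) and target: (n_t, d) batches of features."""
    d = source.size(1)

    def cov(x):
        x = x - x.mean(dim=0, keepdim=True)
        return (x.T @ x) / (x.size(0) - 1)

    # squared Frobenius distance between covariances, standard 1/(4 d^2) scaling
    return ((cov(source) - cov(target)) ** 2).sum() / (4 * d * d)
```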
What field is the article from? | Title: Federated Learning for Clinical Structured Data: A Benchmark Comparison of Engineering and Statistical Approaches
Abstract: Federated learning (FL) has shown promising potential in safeguarding data
privacy in healthcare collaborations. While the term "FL" was originally coined
by the engineering community, the statistical field has also explored similar
privacy-preserving algorithms. Statistical FL algorithms, however, remain
considerably less recognized than their engineering counterparts. Our goal was
to bridge the gap by presenting the first comprehensive comparison of FL
frameworks from both engineering and statistical domains. We evaluated five FL
frameworks using both simulated and real-world data. The results indicate that
statistical FL algorithms yield less biased point estimates for model
coefficients and offer convenient confidence interval estimations. In contrast,
engineering-based methods tend to generate more accurate predictions, sometimes
surpassing central pooled and statistical FL models. This study underscores the
relative strengths and weaknesses of both types of methods, emphasizing the
need for increased awareness and their integration in future FL applications. | Machine Learning |
What field is the article from? | Title: Towards Goal-oriented Intelligent Tutoring Systems in Online Education
Abstract: Interactive Intelligent Tutoring Systems (ITSs) enhance traditional ITSs by
promoting effective learning through interactions and problem resolution in
online education. Yet, proactive engagement, prioritizing resource optimization
with planning and assessment capabilities, is often overlooked in current ITS
designs. In this work, we investigate a new task, named Goal-oriented
Intelligent Tutoring Systems (GITS), which aims to enable the student's mastery
of a designated concept by strategically planning a customized sequence of
exercises and assessment. To address the problem of goal-oriented policy
learning in GITS, we propose a novel graph-based reinforcement learning
framework, named Planning-Assessment-Interaction (PAI). Specifically, we first
leverage cognitive structure information to improve state representation
learning and action selection for planning the next action, which can be either
to tutor an exercise or to assess the target concept. Further, we use a
dynamically updated cognitive diagnosis model to simulate student responses to
exercises and concepts. Three benchmark datasets across different subjects are
constructed for enabling offline academic research on GITS. Experimental
results demonstrate the effectiveness and efficiency of PAI and extensive
analyses of various types of students are conducted to showcase the challenges
in this task. | Computers and Society |
What field is the article from? | Title: TimeBench: A Comprehensive Evaluation of Temporal Reasoning Abilities in Large Language Models
Abstract: Understanding time is a pivotal aspect of human cognition, crucial in the
broader framework of grasping the intricacies of the world. Previous studies
typically focus on specific aspects of time, lacking a comprehensive temporal
reasoning benchmark. To address this issue, we propose TimeBench, a
comprehensive hierarchical temporal reasoning benchmark that covers a broad
spectrum of temporal reasoning phenomena, which provides a thorough evaluation
for investigating the temporal reasoning capabilities of large language models.
We conduct extensive experiments on popular LLMs, such as GPT-4, LLaMA2, and
Mistral, incorporating chain-of-thought prompting. Our experimental results
indicate a significant performance gap between the state-of-the-art LLMs and
humans, highlighting that there is still a considerable distance to cover in
temporal reasoning. We aspire for TimeBench to serve as a comprehensive
benchmark, fostering research in temporal reasoning for LLMs. Our resource is
available at https://github.com/zchuz/TimeBench | Computational Linguistics |
What field is the article from? | Title: Unsupervised Extractive Summarization with Learnable Length Control Strategies
Abstract: Unsupervised extractive summarization is an important technique in
information extraction and retrieval. Compared with supervised methods, it does
not require high-quality human-labelled summaries for training and thus can be
easily applied to documents of different types, domains, or languages. Most
existing unsupervised methods, including TextRank and PACSUM, rely on
graph-based ranking of sentence centrality. However, this scorer cannot be
directly applied in end-to-end training, and a position-related prior
assumption is often needed for achieving good summaries. In addition, less
attention is paid to length-controllable extractors, where users can decide to
summarize texts under a particular length constraint. This paper introduces an
unsupervised extractive summarization model based on a siamese network, for
which we develop a trainable bidirectional prediction objective between the
selected summary and the original document. Different from the centrality-based
ranking methods, our extractive scorer can be trained in an end-to-end manner,
with no requirement of a positional assumption. In addition, we introduce a
differentiable length control module that approximates a 0-1 knapsack solver for
end-to-end length-controllable extraction. Experiments show that our
unsupervised method largely outperforms the centrality-based baseline using the
same sentence encoder. In terms of length control ability, via our trainable
knapsack module, the performance consistently outperforms the strong baseline,
even without utilizing end-to-end training. Human evaluation further shows that
our method performs the best among baselines in terms of relevance and
consistency. | Artificial Intelligence |
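The length control module above is a differentiable approximation of a 0-1 knapsack: choose sentences maximizing total relevance score under a length budget. For reference, a sketch of the exact discrete problem it approximates, solved by dynamic programming (sentence scores and integer lengths are assumed inputs):

```python
def knapsack_select(scores, lengths, budget):
    """Exact 0-1 knapsack over sentences: maximize total relevance score
    subject to a total (integer) length budget."""
    n = len(scores)
    best = [[0.0] * (budget + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for w in range(budget + 1):
            best[i][w] = best[i - 1][w]
            if lengths[i - 1] <= w:
                take = best[i - 1][w - lengths[i - 1]] + scores[i - 1]
                best[i][w] = max(best[i][w], take)
    # backtrack to recover the chosen sentence indices
    chosen, w = [], budget
    for i in range(n, 0, -1):
        if best[i][w] != best[i - 1][w]:
            chosen.append(i - 1)
            w -= lengths[i - 1]
    return sorted(chosen)
```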
What field is the article from? | Title: Responsible AI (RAI) Games and Ensembles
Abstract: Several recent works have studied the societal effects of AI; these include
issues such as fairness, robustness, and safety. In many of these objectives, a
learner seeks to minimize its worst-case loss over a set of predefined
distributions (known as uncertainty sets), with usual examples being perturbed
versions of the empirical distribution. In other words, the aforementioned
problems can be written as min-max problems over these uncertainty sets. In this work,
we provide a general framework for studying these problems, which we refer to
as Responsible AI (RAI) games. We provide two classes of algorithms for solving
these games: (a) game-play based algorithms, and (b) greedy stagewise
estimation algorithms. The former class is motivated by online learning and
game theory, whereas the latter class is motivated by the classical statistical
literature on boosting and regression. We empirically demonstrate the
applicability and competitive performance of our techniques for solving several
RAI problems, particularly around subpopulation shift. | Artificial Intelligence |
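A game-play algorithm for these min-max problems can be sketched as multiplicative weights on the adversary's side: the adversary reweights the uncertainty set's distributions toward where the learner does worst, and the learner best-responds to the mixture. This is a generic sketch, not the paper's specific algorithms; `group_losses` is a hypothetical hook that retrains or re-evaluates the learner:

```python
import numpy as np

def rai_game_play(group_losses, k, rounds=100, eta=0.5):
    """Min-max over k predefined distributions (e.g., subpopulations).
    group_losses(w) -> length-k array of per-group losses after the
    learner best-responds to the w-weighted mixture."""
    w = np.ones(k) / k
    for _ in range(rounds):
        losses = np.asarray(group_losses(w))
        w *= np.exp(eta * losses)   # adversary upweights the hardest groups
        w /= w.sum()
    return w  # final adversarial mixture over the uncertainty set
```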
What field is the article from? | Title: Evaluating Uncertainty Quantification approaches for Neural PDEs in scientific applications
Abstract: The accessibility of spatially distributed data, enabled by affordable
sensors, field, and numerical experiments, has facilitated the development of
data-driven solutions for scientific problems, including climate change,
weather prediction, and urban planning. Neural Partial Differential Equations
(Neural PDEs), which combine deep learning (DL) techniques with domain
expertise (e.g., governing equations) for parameterization, have proven to be
effective in capturing valuable correlations within spatiotemporal datasets.
However, sparse and noisy measurements coupled with modeling approximation
introduce aleatoric and epistemic uncertainties. Therefore, quantifying
uncertainties propagated from model inputs to outputs remains a challenge and
an essential goal for establishing the trustworthiness of Neural PDEs. This
work evaluates various Uncertainty Quantification (UQ) approaches for both
Forward and Inverse Problems in scientific applications. Specifically, we
investigate the effectiveness of Bayesian methods, such as Hamiltonian Monte
Carlo (HMC) and Monte-Carlo Dropout (MCD), and a more conventional approach,
Deep Ensembles (DE). To illustrate their performance, we take two canonical
PDEs: Burgers' equation and the Navier-Stokes equation. Our results indicate
that Neural PDEs can effectively reconstruct flow systems and predict the
associated unknown parameters. However, it is noteworthy that the results
derived from Bayesian methods, based on our observations, tend to display a
higher degree of certainty in their predictions as compared to those obtained
using the DE. This elevated certainty in predictions suggests that Bayesian
techniques might underestimate the true underlying uncertainty, thereby
appearing more confident in their predictions than the DE approach. | Machine Learning |
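Of the UQ approaches compared, Deep Ensembles is the simplest to sketch: train several independently initialized networks and read the predictive mean and (epistemic) spread from their disagreement:

```python
import torch

def ensemble_predict(models, x):
    """Deep Ensembles: predictive mean is the ensemble average; the
    standard deviation across members estimates epistemic uncertainty."""
    with torch.no_grad():
        preds = torch.stack([m(x) for m in models])  # (M, batch, out)
    return preds.mean(dim=0), preds.std(dim=0)
```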
What field is the article from? | Title: Artificial Intelligence in the Service of Entrepreneurial Finance: Knowledge Structure and the Foundational Algorithmic Paradigm
Abstract: While the application of Artificial Intelligence in Finance has a long
tradition, its potential in Entrepreneurship has been intensively explored only
recently. In this context, Entrepreneurial Finance is a particularly fertile
ground for future Artificial Intelligence proliferation. To support the latter,
the study provides a bibliometric review of Artificial Intelligence
applications in (1) entrepreneurial finance literature, and (2) corporate
finance literature with implications for Entrepreneurship. Rigorous search and
screening procedures of the scientific database Web of Science Core Collection
resulted in the identification of 1890 relevant journal articles subjected to
analysis. The bibliometric analysis gives a rich insight into the knowledge
field's conceptual, intellectual, and social structure, indicating nascent and
underdeveloped research directions. As far as we were able to identify, this is
the first study to map and bibliometrically analyze the academic field
concerning the relationship between Artificial Intelligence, Entrepreneurship,
and Finance, and the first review that deals with Artificial Intelligence
methods in Entrepreneurship. According to the results, Artificial Neural
Network, Deep Neural Network and Support Vector Machine are highly represented
in almost all identified topic niches. At the same time, applying Topic
Modeling, Fuzzy Neural Network and Growing Hierarchical Self-organizing Map is
quite rare. As an element of the research, and before final remarks, the
article deals as well with a discussion of certain gaps in the relationship
between Computer Science and Economics. These gaps represent obstacles to the
application of Artificial Intelligence in economic science. To at least partly
remedy this situation, the foundational paradigm and a bespoke demonstration of
the Monte Carlo randomized algorithm are presented. | Artificial Intelligence |
What field is the article from? | Title: When is Off-Policy Evaluation Useful? A Data-Centric Perspective
Abstract: Evaluating the value of a hypothetical target policy with only a logged
dataset is important but challenging. On the one hand, it brings opportunities
for safe policy improvement under high-stakes scenarios like clinical
guidelines. On the other hand, such opportunities raise a need for precise
off-policy evaluation (OPE). While previous work on OPE has focused on improving
value-estimation algorithms, in this work we emphasize the importance of
the offline dataset and hence put forward a data-centric framework for
evaluating OPE problems. We propose DataCOPE, a data-centric framework for
evaluating OPE, that answers the questions of whether and to what extent we can
evaluate a target policy given a dataset. DataCOPE (1) forecasts the overall
performance of OPE algorithms without access to the environment, which is
especially useful before real-world deployment where evaluating OPE is
impossible; (2) identifies the sub-group in the dataset where OPE can be
inaccurate; (3) permits evaluations of datasets or data-collection strategies
for OPE problems. Our empirical analysis of DataCOPE in the logged contextual
bandit settings using healthcare datasets confirms its ability to evaluate both
machine-learning and human expert policies like clinical guidelines. | Machine Learning |
What field is the article from? | Title: Benchmark Generation Framework with Customizable Distortions for Image Classifier Robustness
Abstract: We present a novel framework for generating adversarial benchmarks to
evaluate the robustness of image classification models. Our framework allows
users to customize the types of distortions to be optimally applied to images,
which helps address the specific distortions relevant to their deployment. The
benchmark can generate datasets at various distortion levels to assess the
robustness of different image classifiers. Our results show that the
adversarial samples generated by our framework with any of the image
classification models, like ResNet-50, Inception-V3, and VGG-16, are effective
and transferable to other models causing them to fail. These failures happen
even when these models are adversarially retrained using state-of-the-art
techniques, demonstrating the generalizability of our adversarial samples. We
achieve competitive performance in terms of net $L_2$ distortion compared to
state-of-the-art benchmark techniques on CIFAR-10 and ImageNet; however, we
demonstrate our framework achieves such results with simple distortions like
Gaussian noise without introducing unnatural artifacts or color bleeds. This is
made possible by a model-based reinforcement learning (RL) agent and a
technique that reduces a deep tree search over the image for model sensitivity
to perturbations to a single-level analysis and action. The flexibility of choosing
distortions and setting classification probability thresholds for multiple
classes makes our framework suitable for algorithmic audits. | Computer Vision |
What field is the article from? | Title: Accurate and Fast Fischer-Tropsch Reaction Microkinetics using PINNs
Abstract: Microkinetics allows detailed modelling of chemical transformations occurring
in many industrially relevant reactions. The traditional way of solving the
microkinetics model for Fischer-Tropsch synthesis (FTS) becomes inefficient
when it comes to more advanced real-time applications. In this work, we address
these challenges by using physics-informed neural networks (PINNs) for modelling
FTS microkinetics. We propose a computationally efficient and accurate method,
enabling the ultra-fast solution of existing microkinetics models under
realistic process conditions. The proposed PINN model computes the fraction of
vacant catalytic sites, a key quantity in FTS microkinetics, with a median
relative error (MRE) of 0.03%, and the FTS product formation rates with an MRE of
0.1%. Compared to conventional equation solvers, the model achieves up to 1E+06
times speed-up when running on GPUs, thus being fast enough for multi-scale and
multi-physics reactor modelling and enabling its applications in real-time
process control and optimization. | Machine Learning |
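As an illustration of the general recipe only (the abstract does not spell out the FTS kinetics), the sketch below builds a physics-informed loss from a data term plus the residual of a toy first-order ODE dy/dt = -k*y; `model` is any differentiable network mapping t to y.

```python
# Hedged sketch of a physics-informed loss; the toy ODE dy/dt = -k*y stands in
# for the (unstated) FTS microkinetics equations.
import torch

def pinn_loss(model, t_data, y_data, t_colloc, k=1.0):
    # Data term: match the available measurements.
    data_loss = ((model(t_data) - y_data) ** 2).mean()
    # Physics term: penalize the ODE residual at collocation points.
    t = t_colloc.clone().requires_grad_(True)
    y = model(t)
    dydt = torch.autograd.grad(y.sum(), t, create_graph=True)[0]
    physics_loss = ((dydt + k * y) ** 2).mean()
    return data_loss + physics_loss
```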
What field is the article from? | Title: Knowledge-Aware Artifact Image Synthesis with LLM-Enhanced Prompting and Multi-Source Supervision
Abstract: Ancient artifacts are an important medium for cultural preservation and
restoration. However, many physical copies of artifacts are either damaged or
lost, leaving a blank space in archaeological and historical studies that calls
for artifact image generation techniques. Despite the significant advancements
in open-domain text-to-image synthesis, existing approaches fail to capture the
important domain knowledge presented in the textual description, resulting in
errors in recreated images such as incorrect shapes and patterns. In this
paper, we propose a novel knowledge-aware artifact image synthesis approach
that brings lost historical objects accurately into their visual forms. We use
a pretrained diffusion model as the backbone and introduce three key techniques to
enhance the text-to-image generation framework: 1) we construct prompts with
explicit archaeological knowledge elicited from large language models (LLMs);
2) we incorporate additional textual guidance correlated with historical
expertise in a contrastive manner; 3) we introduce further visual-semantic
constraints on edge and perceptual features that enable our model to learn more
intricate visual details of the artifacts. Compared to existing approaches, our
proposed model produces higher-quality artifact images that align better with
the implicit details and historical knowledge contained within written
documents, thus achieving significant improvements across automatic metrics and
in human evaluation. Our code and data are available at
https://github.com/danielwusg/artifact_diffusion. | Computer Vision |
What field is the article from? | Title: FedGeo: Privacy-Preserving User Next Location Prediction with Federated Learning
Abstract: A User Next Location Prediction (UNLP) task, which predicts the next location
that a user will move to given his/her trajectory, is an indispensable task for
a wide range of applications. Previous studies using large-scale trajectory
datasets on a single server have achieved remarkable performance on the UNLP task.
However, in real-world applications, legal and ethical issues have been raised
regarding privacy concerns leading to restrictions against sharing human
trajectory datasets to any other server. In response, Federated Learning (FL)
has emerged to address the personal privacy issue by collaboratively training
multiple clients (i.e., users) and then aggregating their models. While previous
studies employed FL for UNLP, they are still unable to achieve reliable
performance because of the heterogeneity of clients' mobility. To tackle this
problem, we propose the Federated Learning for Geographic Information (FedGeo),
a FL framework specialized for UNLP, which alleviates the heterogeneity of
clients' mobility and guarantees personal privacy protection. Firstly, we
incorporate prior global geographic adjacency information into the local client
model, since the spatial correlation between locations is trained partially in
each client who has only a heterogeneous subset of the overall trajectories in
FL. We also introduce a novel aggregation method that minimizes the gap between
client models to solve the problem of client drift caused by differences
between client models when learning with their heterogeneous data. Lastly, we
probabilistically exclude clients with extremely heterogeneous data from the FL
process by focusing on clients who visit relatively diverse locations. We show
that FedGeo is superior to other FL methods in model performance on the UNLP
task. We also validate our model in a real-world application using our own
customers' mobile phones and the FL agent system. | Cryptography and Security |
What field is the article from? | Title: Parameter-Efficient Multilingual Summarisation: An Empirical Study
Abstract: With the increasing prevalence of Large Language Models, traditional full
fine-tuning approaches face growing challenges, especially in memory-intensive
tasks. This paper investigates the potential of Parameter-Efficient
Fine-Tuning, focusing on Low-Rank Adaptation (LoRA), for complex and
under-explored multilingual summarisation tasks. We conduct an extensive study
across different data availability scenarios, including full-data, low-data,
and cross-lingual transfer, leveraging models of different sizes. Our findings
reveal that LoRA lags behind full fine-tuning when trained with full data;
however, it excels in low-data scenarios and cross-lingual transfer.
Interestingly, as models scale up, the performance gap between LoRA and full
fine-tuning diminishes. Additionally, we investigate effective strategies for
few-shot cross-lingual transfer, finding that continued LoRA tuning achieves
the best performance compared to both full fine-tuning and dynamic composition
of language-specific LoRA modules. | Computational Linguistics |
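For readers unfamiliar with LoRA, here is a minimal sketch of the mechanism the study builds on (the hyperparameters r and alpha are illustrative, not the paper's settings): the pretrained weight stays frozen while a trainable low-rank update B·A is added to it.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Minimal LoRA wrapper: frozen base weight plus a trainable low-rank update."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad = False                    # freeze pretrained weights
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero init: no-op at start
        self.scale = alpha / r

    def forward(self, x):
        # Effective weight is W + scale * (B @ A), applied without materializing it.
        return self.base(x) + self.scale * (x @ self.A.T @ self.B.T)
```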
What field is the article from? | Title: Safety, Trust, and Ethics Considerations for Human-AI Teaming in Aerospace Control
Abstract: Designing a safe, trusted, and ethical AI may be practically impossible;
however, designing AI with safe, trusted, and ethical use in mind is possible
and necessary in safety- and mission-critical domains like aerospace. The notions of safe,
trusted, and ethical use of AI are often used interchangeably; however, a
system can be safely used but not trusted or ethical, have a trusted use that
is not safe or ethical, and have an ethical use that is not safe or trusted.
This manuscript serves as a primer to illuminate the nuanced differences
between these concepts, with a specific focus on applications of Human-AI
teaming in aerospace system control, where humans may be in, on, or
out-of-the-loop of decision-making. | Computers and Society |
What field is the article from? | Title: Collaborating Foundation models for Domain Generalized Semantic Segmentation
Abstract: Domain Generalized Semantic Segmentation (DGSS) deals with training a model
on a labeled source domain with the aim of generalizing to unseen domains
during inference. Existing DGSS methods typically obtain robust features by
means of Domain Randomization (DR). Such an approach is often limited as it can
only account for style diversification and not content. In this work, we take
an orthogonal approach to DGSS and propose to use an assembly of CoLlaborative
FOUndation models for Domain Generalized Semantic Segmentation (CLOUDS). In
detail, CLOUDS is a framework that integrates FMs of various kinds: (i) CLIP
backbone for its robust feature representation, (ii) generative models to
diversify the content, thereby covering various modes of the possible target
distribution, and (iii) Segment Anything Model (SAM) for iteratively refining
the predictions of the segmentation model. Extensive experiments show that our
CLOUDS excels in adapting from synthetic to real DGSS benchmarks and under
varying weather conditions, notably outperforming prior methods by 5.6% and
6.7% in averaged mIoU, respectively. The code is available at:
https://github.com/yasserben/CLOUDS | Computer Vision |
What field is the article from? | Title: An Embodied Generalist Agent in 3D World
Abstract: Leveraging massive knowledge and learning schemes from large language models
(LLMs), recent machine learning models show notable successes in building
generalist agents that exhibit the capability of general-purpose task solving
in diverse domains, including natural language processing, computer vision, and
robotics. However, a significant challenge remains as these models exhibit
limited ability in understanding and interacting with the 3D world. We argue
this limitation significantly hinders the current models from performing
real-world tasks and further achieving general intelligence. To this end, we
introduce an embodied multi-modal and multi-task generalist agent that excels
in perceiving, grounding, reasoning, planning, and acting in the 3D world. Our
proposed agent, referred to as LEO, is trained with shared LLM-based model
architectures, objectives, and weights in two stages: (i) 3D vision-language
alignment and (ii) 3D vision-language-action instruction tuning. To facilitate
the training, we meticulously curate and generate an extensive dataset
comprising object-level and scene-level multi-modal tasks of substantial scale
and complexity, necessitating a deep understanding of and interaction with the
3D world. Through rigorous experiments, we demonstrate LEO's remarkable
proficiency across a wide spectrum of tasks, including 3D captioning, question
answering, embodied reasoning, embodied navigation, and robotic manipulation.
Our ablation results further provide valuable insights for the development of
future embodied generalist agents. | Computer Vision |
What field is the article from? | Title: From Classification to Clinical Insights: Towards Analyzing and Reasoning About Mobile and Behavioral Health Data With Large Language Models
Abstract: Passively collected behavioral health data from ubiquitous sensors holds
significant promise to provide mental health professionals with insights from
patients' daily lives; however, developing analysis tools to use this data in
clinical practice requires addressing challenges of generalization across
devices and weak or ambiguous correlations between the measured signals and an
individual's mental health. To address these challenges, we take a novel
approach that leverages large language models (LLMs) to synthesize clinically
useful insights from multi-sensor data. We develop chain of thought prompting
methods that use LLMs to generate reasoning about how trends in data such as
step count and sleep relate to conditions like depression and anxiety. We first
demonstrate binary depression classification with LLMs achieving accuracies of
61.1%, which exceeds the state of the art. While it is not robust for clinical
use, this leads us to our key finding: even more impactful and valued than
classification is a new human-AI collaboration approach in which clinician
experts interactively query these tools and combine their domain expertise and
context about the patient with AI generated reasoning to support clinical
decision-making. We find models like GPT-4 correctly reference numerical data
75% of the time, and clinician participants express strong interest in using
this approach to interpret self-tracking data. | Artificial Intelligence |
What field is the article from? | Title: Learning Curricula in Open-Ended Worlds
Abstract: Deep reinforcement learning (RL) provides powerful methods for training
optimal sequential decision-making agents. As collecting real-world
interactions can entail additional costs and safety risks, the common paradigm
of sim2real conducts training in a simulator, followed by real-world
deployment. Unfortunately, RL agents easily overfit to the choice of simulated
training environments, and worse still, learning ends when the agent masters
the specific set of simulated environments. In contrast, the real world is
highly open-ended, featuring endlessly evolving environments and challenges,
making such RL approaches unsuitable. Simply randomizing over simulated
environments is insufficient, as it requires making arbitrary distributional
assumptions and can be combinatorially less likely to sample specific
environment instances that are useful for learning. An ideal learning process
should automatically adapt the training environment to maximize the learning
potential of the agent over an open-ended task space that matches or surpasses
the complexity of the real world. This thesis develops a class of methods
called Unsupervised Environment Design (UED), which aim to produce such
open-ended processes. Given an environment design space, UED automatically
generates an infinite sequence or curriculum of training environments at the
frontier of the learning agent's capabilities. Through extensive empirical
studies and theoretical arguments founded on minimax-regret decision theory and
game theory, the findings in this thesis show that UED autocurricula can
produce RL agents exhibiting significantly improved robustness and
generalization to previously unseen environment instances. Such autocurricula
are promising paths toward open-ended learning systems that achieve more
general intelligence by continually generating and mastering additional
challenges of their own design. | Artificial Intelligence |
What field is the article from? | Title: Language Models can be Logical Solvers
Abstract: Logical reasoning is a fundamental aspect of human intelligence and a key
component of tasks like problem-solving and decision-making. Recent
advancements have enabled Large Language Models (LLMs) to potentially exhibit
reasoning capabilities, but complex logical reasoning remains a challenge. The
state-of-the-art, solver-augmented language models, use LLMs to parse natural
language logical questions into symbolic representations first and then adopt
external logical solvers to take in the symbolic representations and output the
answers. Despite their impressive performance, any parsing error will
inevitably cause the external logical solver to fail, leaving the logical
question unanswered. In this paper, we introduce
LoGiPT, a novel language model that directly emulates the reasoning processes
of logical solvers and bypasses parsing errors by learning strict
adherence to solver syntax and grammar. LoGiPT is fine-tuned on a newly
constructed instruction-tuning dataset derived from revealing and refining the
invisible reasoning process of deductive solvers. Experimental results on two
public deductive reasoning datasets demonstrate that LoGiPT outperforms
state-of-the-art solver-augmented LMs and few-shot prompting methods on
competitive LLMs like ChatGPT or GPT-4. | Computational Linguistics |
What field is the article from? | Title: MIMo: A Multi-Modal Infant Model for Studying Cognitive Development
Abstract: Human intelligence and human consciousness emerge gradually during the
process of cognitive development. Understanding this development is an
essential aspect of understanding the human mind and may facilitate the
construction of artificial minds with similar properties. Importantly, human
cognitive development relies on embodied interactions with the physical and
social environment, which is perceived via complementary sensory modalities.
These interactions allow the developing mind to probe the causal structure of
the world. This is in stark contrast to common machine learning approaches,
e.g., for large language models, which merely passively "digest" large
amounts of training data but are not in control of their sensory inputs.
However, computational modeling of the kind of self-determined embodied
interactions that lead to human intelligence and consciousness is a formidable
challenge. Here we present MIMo, an open-source multi-modal infant model for
studying early cognitive development through computer simulations. MIMo's body
is modeled after an 18-month-old child with detailed five-fingered hands. MIMo
perceives its surroundings via binocular vision, a vestibular system,
proprioception, and touch perception through a full-body virtual skin, while
two different actuation models allow control of its body. We describe the
design and interfaces of MIMo and provide examples illustrating its use. All
code is available at https://github.com/trieschlab/MIMo . | Artificial Intelligence |
What field is the article from? | Title: Stochastic Bayesian Optimization with Unknown Continuous Context Distribution via Kernel Density Estimation
Abstract: Bayesian optimization (BO) is a sample-efficient method and has been widely
used for optimizing expensive black-box functions. Recently, there has been
considerable interest in the BO literature in optimizing functions that are
affected by a context variable in the environment, which is uncontrollable by
decision makers. In this paper, we focus on optimizing functions'
expectations over a continuous context variable subject to an unknown
distribution. To address this problem, we propose two algorithms that employ
kernel density estimation to learn the probability density function (PDF) of
continuous context variable online. The first algorithm is simpler, which
directly optimizes the expectation under the estimated PDF. Considering that
the estimated PDF may have high estimation error when the true distribution is
complicated, we further propose the second algorithm that optimizes the
distributionally robust objective. Theoretical results demonstrate that both
algorithms have sub-linear Bayesian cumulative regret on the expectation
objective. Furthermore, we conduct numerical experiments to empirically
demonstrate the effectiveness of our algorithms. | Machine Learning |
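A toy sketch of the first (simpler) algorithm under assumed names: the context PDF is estimated by KDE from observed contexts, the expectation is approximated by Monte Carlo, and the decision variable is then optimized; the surrogate objective here is a stand-in, not the paper's benchmark.

```python
import numpy as np
from scipy.stats import gaussian_kde
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(0)
observed_contexts = rng.normal(0.5, 0.2, size=200)   # draws from the unknown PDF
kde = gaussian_kde(observed_contexts)                # online density estimate

def surrogate(x, c):
    return np.sin(3 * x) + 0.5 * (x - c) ** 2        # stand-in objective mean

def expected_objective(x, n_samples=1000):
    c = kde.resample(n_samples, seed=1).ravel()      # contexts from the KDE
    return surrogate(x, c).mean()                    # Monte Carlo expectation

res = minimize_scalar(expected_objective, bounds=(0.0, 1.0), method="bounded")
print(f"minimizer of the estimated expectation: {res.x:.3f}")
```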
What field is the article from? | Title: Instance-wise Linearization of Neural Network for Model Interpretation
Abstract: Neural networks have achieved remarkable successes in many scientific fields.
However, the interpretability of neural network models remains a major
bottleneck to deploying such techniques in daily life. The challenge lies in
the non-linear behavior of neural networks, which raises a critical question:
how does a model use its input features to make a decision? The classical
approach to this challenge is feature attribution, which assigns an importance
score to each input feature, revealing its importance to the current
prediction. However, current feature attribution approaches often indicate the
importance of each input feature without detailing how the features are
actually processed by the model internally. These attribution approaches
therefore raise the concern of whether they highlight the correct features for
a model's prediction. For a neural network model, the non-linear behavior is
often caused by the non-linear activation units. However, the computation of a
single prediction from a neural network model is locally linear, because one
prediction corresponds to only one activation pattern. Based on this
observation, we propose an instance-wise linearization approach that
reformulates the forward computation process of a neural network prediction.
This approach reformulates the layers of a convolutional neural network as
linear matrix multiplications. Aggregating all layers' computation, a
prediction of a complex convolutional neural network can be described as a
single linear matrix multiplication $F(x) = W \cdot x + b$. This equation not
only provides a feature attribution map that highlights the importance of the
input features but also tells exactly how each input feature contributes to a
prediction. Furthermore, we discuss the application of this technique to both
supervised classification and unsupervised neural network learning of
parametric t-SNE dimension reduction. | Machine Learning |
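A minimal sketch of the locally linear view (PyTorch assumed): because a ReLU network is piecewise linear, its Jacobian at an input x is exactly the local weight matrix W, and b follows as the residual, so F(x) = Wx + b holds exactly at that input.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
x = torch.randn(4)

W = torch.autograd.functional.jacobian(net, x)  # local weight matrix, shape (3, 4)
b = net(x) - W @ x                              # local bias term

# F(x) = W x + b reproduces the prediction exactly at this input; each column
# of W is an exact per-feature attribution for this instance.
assert torch.allclose(net(x), W @ x + b, atol=1e-5)
```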
What field is the article from? | Title: Adaptive Compression of the Latent Space in Variational Autoencoders
Abstract: Variational Autoencoders (VAEs) are powerful generative models that have been
widely used in various fields, including image and text generation. However,
one of the known challenges in using VAEs is the model's sensitivity to its
hyperparameters, such as the latent space size. This paper presents a simple
extension of VAEs for automatically determining the optimal latent space size
during the training process by gradually decreasing the latent size through
neuron removal and observing the model performance. The proposed method is
compared to traditional hyperparameter grid search and is shown to be
significantly faster while still finding the optimal dimensionality on
four image datasets. Furthermore, we show that the final performance of our
method is comparable to training on the optimal latent size from scratch, and
might thus serve as a convenient substitute. | Machine Learning |
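A simplified outer loop capturing the shrink-and-check idea; the paper prunes neurons within a single training run, whereas here `train_vae(d)` and `evaluate(model)` are assumed callbacks that train (or continue training) at latent size d and return a validation loss.

```python
# Hedged sketch, not the paper's exact procedure.
def adaptive_latent_size(train_vae, evaluate, start_dim=64, tol=0.01):
    d = start_dim
    best_dim, best_loss = d, evaluate(train_vae(d))
    while d > 1:
        d -= 1                              # remove one latent neuron
        loss = evaluate(train_vae(d))
        if loss > best_loss * (1 + tol):    # degraded beyond tolerance: stop
            break
        best_dim, best_loss = d, loss
    return best_dim                         # smallest size that kept performance
```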
What field is the article from? | Title: Matching Weak Informative Ontologies
Abstract: Most existing ontology matching methods utilize the literal information to
discover alignments. However, some literal information in ontologies may be
opaque and some ontologies may not have sufficient literal information. In this
paper, these ontologies are named weak informative ontologies (WIOs), and it
is challenging for existing methods to match WIOs. On the one hand, string-based
and linguistic-based matching methods cannot work well for WIOs. On the other
hand, some matching methods use external resources to improve their
performance, but collecting and processing external resources is still
time-consuming. To address this issue, this paper proposes a practical method
for matching WIOs by employing the ontology structure information to discover
alignments. First, the semantic subgraphs are extracted from the ontology graph
to capture the precise meanings of ontology elements. Then, a new similarity
propagation model is designed for matching WIOs. Meanwhile, in order to avoid
meaningless propagation, the similarity propagation is constrained by semantic
subgraphs and other conditions. Consequently, the similarity propagation model
ensures a balance between efficiency and quality during matching. Finally, the
similarity propagation model uses a few credible alignments as seeds to find
more alignments, and some useful strategies are adopted to improve the
performance. This matching method for WIOs has been implemented in the ontology
matching system Lily. Experimental results on public OAEI benchmark datasets
demonstrate that Lily significantly outperforms most of the state-of-the-art
works in both WIO matching tasks and general ontology matching tasks. In
particular, Lily increases the recall by a large margin, while it still obtains
high precision of matching results. | Artificial Intelligence |
What field is the article from? | Title: Arabic Mini-ClimateGPT : A Climate Change and Sustainability Tailored Arabic LLM
Abstract: Climate change is one of the most significant challenges we face together as
a society. Creating awareness and educating policy makers about the wide-ranging
impact of climate change is an essential step towards a sustainable future.
Recently, Large Language Models (LLMs) like ChatGPT and Bard have shown
impressive conversational abilities and excel in a wide variety of NLP tasks.
While these models are closed-source, alternative open-source LLMs such
as Stanford Alpaca and Vicuna have shown promising results. However, these
open-source models are not specifically tailored for climate related domain
specific information and also struggle to generate meaningful responses in
other languages, such as Arabic. To this end, we propose a light-weight Arabic
Mini-ClimateGPT that is built on an open-source LLM and is specifically
fine-tuned on Clima500-Instruct, a curated conversational-style instruction-tuning
Arabic dataset with over 500k instructions about climate change and
sustainability. Further, our model also utilizes a vector embedding based
retrieval mechanism during inference. We validate our proposed model through
quantitative and qualitative evaluations on climate-related queries. Our model
surpasses the baseline LLM in 88.3% of cases during ChatGPT-based evaluation.
Furthermore, our human expert evaluation reveals an 81.6% preference for our
model's responses over multiple popular open-source models. Our open-source
demos, code-base and models are available here
https://github.com/mbzuai-oryx/ClimateGPT. | Computational Linguistics |
What field is the article from? | Title: Neural MMO 2.0: A Massively Multi-task Addition to Massively Multi-agent Learning
Abstract: Neural MMO 2.0 is a massively multi-agent environment for reinforcement
learning research. The key feature of this new version is a flexible task
system that allows users to define a broad range of objectives and reward
signals. We challenge researchers to train agents capable of generalizing to
tasks, maps, and opponents never seen during training. Neural MMO features
procedurally generated maps with 128 agents in the standard setting and support
for larger populations. Version 2.0 is a complete rewrite of its predecessor with three-fold
improved performance and compatibility with CleanRL. We release the platform as
free and open-source software with comprehensive documentation available at
neuralmmo.github.io and an active community Discord. To spark initial research
on this new platform, we are concurrently running a competition at NeurIPS
2023. | Artificial Intelligence |
What field is the article from? | Title: Mitigating Over-smoothing in Transformers via Regularized Nonlocal Functionals
Abstract: Transformers have achieved remarkable success in a wide range of natural
language processing and computer vision applications. However, the
representation capacity of a deep transformer model is degraded due to the
over-smoothing issue in which the token representations become identical when
the model's depth grows. In this work, we show that self-attention layers in
transformers minimize a functional which promotes smoothness, thereby causing
token uniformity. We then propose a novel regularizer that penalizes the norm
of the difference between the smooth output tokens from self-attention and the
input tokens to preserve the fidelity of the tokens. Minimizing the resulting
regularized energy functional, we derive the Neural Transformer with a
Regularized Nonlocal Functional (NeuTRENO), a novel class of transformer models
that can mitigate the over-smoothing issue. We empirically demonstrate the
advantages of NeuTRENO over the baseline transformers and state-of-the-art
methods in reducing the over-smoothing of token representations on various
practical tasks, including object classification, image segmentation, and
language modeling. | Computational Linguistics |
What field is the article from? | Title: Rethinking Dimensional Rationale in Graph Contrastive Learning from Causal Perspective
Abstract: Graph contrastive learning is a general learning paradigm excelling at
capturing invariant information from diverse perturbations in graphs. Recent
works focus on exploring the structural rationale from graphs, thereby
increasing the discriminability of the invariant information. However, such
methods may cause graph models to mis-learn with respect to the
interpretability of graphs, and thus the learned noisy and task-agnostic
information interferes with graph prediction. To this end, with the
purpose of exploring the intrinsic rationale of graphs, we accordingly propose
to capture the dimensional rationale from graphs, which has not received
sufficient attention in the literature. The conducted exploratory experiments
attest to the feasibility of the aforementioned roadmap. To elucidate the
innate mechanism behind the performance improvement arising from the
dimensional rationale, we rethink the dimensional rationale in graph
contrastive learning from a causal perspective and further formalize the
causality among the variables in the pre-training stage to build the
corresponding structural causal model. On the basis of the understanding of the
structural causal model, we propose the dimensional rationale-aware graph
contrastive learning approach, which introduces a learnable dimensional
rationale acquiring network and a redundancy reduction constraint. The
learnable dimensional rationale acquiring network is updated by leveraging a
bi-level meta-learning technique, and the redundancy reduction constraint
disentangles the redundant features through a decorrelation process during
learning. Empirically, compared with state-of-the-art methods, our method can
yield significant performance boosts on various benchmarks with respect to
discriminability and transferability. The code implementation of our method is
available at https://github.com/ByronJi/DRGCL. | Machine Learning |
What field is the article from? | Title: Plum: Prompt Learning using Metaheuristic
Abstract: Since the emergence of large language models, prompt learning has become a
popular method for optimizing and customizing these models. Special prompts,
such as Chain-of-Thought, have even revealed previously unknown reasoning
capabilities within these models. However, the progress of discovering
effective prompts has been slow, driving a desire for general prompt
optimization methods. Unfortunately, few existing prompt learning methods
satisfy the criteria of being truly "general", i.e., automatic, discrete,
black-box, gradient-free, and interpretable all at once. In this paper, we
introduce metaheuristics, a branch of discrete non-convex optimization methods
with over 100 options, as a promising approach to prompt learning. Within our
paradigm, we test six typical methods: hill climbing, simulated annealing,
genetic algorithms with/without crossover, tabu search, and harmony search,
demonstrating their effectiveness in black-box prompt learning and
Chain-of-Thought prompt tuning. Furthermore, we show that these methods can be
used to discover more human-understandable prompts that were previously
unknown, opening the door to a cornucopia of possibilities in prompt
optimization. We release all the codes in
\url{https://github.com/research4pan/Plum}. | Machine Learning |
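As a flavor of the paradigm, here is a hedged sketch of the simplest of the six metaheuristics, hill climbing, applied to discrete prompt search; `score()` and the edit operators are assumptions standing in for a validation-set evaluation and prompt mutations.

```python
import random

def hill_climb_prompt(score, seed_prompt, edits, n_iters=100):
    """edits: list of functions mapping a prompt string to a nearby variant."""
    best, best_score = seed_prompt, score(seed_prompt)
    for _ in range(n_iters):
        neighbor = random.choice(edits)(best)   # sample one local edit
        s = score(neighbor)                     # black-box, gradient-free evaluation
        if s > best_score:
            best, best_score = neighbor, s      # greedy ascent
    return best, best_score
```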
What field is the article from? | Title: Similarity-based Knowledge Transfer for Cross-Domain Reinforcement Learning
Abstract: Transferring knowledge in cross-domain reinforcement learning is a
challenging setting in which learning is accelerated by reusing knowledge from
a task with different observation and/or action space. However, it is often
necessary to carefully select the source of knowledge for the receiving end to
benefit from the transfer process. In this article, we study how to measure the
similarity between cross-domain reinforcement learning tasks to select a source
of knowledge that will improve the performance of the learning agent. We
developed a semi-supervised alignment loss to match different spaces with a set
of encoder-decoders, and use them to measure similarity and transfer policies
across tasks. In comparison to prior works, our method does not require data to
be aligned, paired, or collected by expert policies. Experimental results on a
set of varied MuJoCo control tasks show the robustness of our method in
effectively selecting and transferring knowledge, without the supervision of a
tailored set of source tasks. | Machine Learning |
What field is the article from? | Title: Evaluating AI Vocational Skills Through Professional Testing
Abstract: Using a novel professional certification survey, the study focuses on
assessing the vocational skills of two highly cited AI models, GPT-3 and
Turbo-GPT3.5. The approach emphasizes the importance of practical readiness
over academic performance by examining the models' performances on a benchmark
dataset consisting of 1149 professional certifications. This study also
includes a comparison with human test scores, providing perspective on the
potential of AI models to match or even surpass human performance in
professional certifications. GPT-3, even without any fine-tuning or exam
preparation, managed to achieve a passing score (over 70% correct) on 39% of
the professional certifications. It showcased proficiency in computer-related
fields, including cloud and virtualization, business analytics, cybersecurity,
network setup and repair, and data analytics. Turbo-GPT3.5, on the other hand,
scored a perfect 100% on the highly regarded Offensive Security Certified
Professional (OSCP) exam. This model also demonstrated competency in diverse
professional fields, such as nursing, licensed counseling, pharmacy, and
aviation. Turbo-GPT3.5 exhibited strong performance on customer service tasks,
indicating potential use cases in enhancing chatbots for call centers and
routine advice services. Both models also scored well on sensory and
experience-based tests outside a machine's traditional roles, including wine
sommelier, beer tasting, emotional quotient, and body language reading. The
study found that OpenAI's model improvement from Babbage to Turbo led to a 60%
better performance on the grading scale within a few years. This progress
indicates that addressing the current model's limitations could yield an AI
capable of passing even the most rigorous professional certifications. | Machine Learning |
What field is the article from? | Title: VisPercep: A Vision-Language Approach to Enhance Visual Perception for People with Blindness and Low Vision
Abstract: People with blindness and low vision (pBLV) encounter substantial challenges
when it comes to comprehensive scene recognition and precise object
identification in unfamiliar environments. Additionally, due to the vision
loss, pBLV have difficulty in accessing and identifying potential tripping
hazards on their own. In this paper, we present a pioneering approach that
leverages a large vision-language model to enhance visual perception for pBLV,
offering detailed and comprehensive descriptions of the surrounding
environments and providing warnings about the potential risks. Our method
begins by leveraging a large image tagging model (i.e., Recognize Anything
(RAM)) to identify all common objects present in the captured images. The
recognition results and user query are then integrated into a prompt, tailored
specifically for pBLV using prompt engineering. By combining the prompt and
input image, a large vision-language model (i.e., InstructBLIP) generates
detailed and comprehensive descriptions of the environment and identifies
potential risks in the environment by analyzing the environmental objects and
scenes relevant to the prompt. We evaluate our approach through experiments
conducted on both indoor and outdoor datasets. Our results demonstrate that our
method is able to recognize objects accurately and provide insightful
descriptions and analysis of the environment for pBLV. | Computer Vision |
What field is the article from? | Title: nach0: Multimodal Natural and Chemical Languages Foundation Model
Abstract: Large Language Models (LLMs) have substantially driven scientific progress in
various domains, and many papers have demonstrated their ability to tackle
complex problems with creative solutions. Our paper introduces a new foundation
model, nach0, capable of solving various chemical and biological tasks:
biomedical question answering, named entity recognition, molecular generation,
molecular synthesis, attributes prediction, and others. nach0 is a multi-domain
and multi-task encoder-decoder LLM pre-trained on unlabeled text from
scientific literature, patents, and molecule strings to incorporate a range of
chemical and linguistic knowledge. We employed instruction tuning, where
specific task-related instructions are utilized to fine-tune nach0 for the
final set of tasks. To train nach0 effectively, we leverage the NeMo framework,
enabling efficient parallel optimization of both base and large model versions.
Extensive experiments demonstrate that our model outperforms state-of-the-art
baselines on single-domain and cross-domain tasks. Furthermore, it can generate
high-quality outputs in molecular and textual formats, showcasing its
effectiveness in multi-domain setups. | Computational Linguistics |
What field is the article from? | Title: Conceptual Engineering Using Large Language Models
Abstract: We describe a method, based on Jennifer Nado's definition of classification
procedures as targets of conceptual engineering, that implements such
procedures using a large language model. We then apply this method using data
from the Wikidata knowledge graph to evaluate concept definitions from two
paradigmatic conceptual engineering projects: the International Astronomical
Union's redefinition of PLANET and Haslanger's ameliorative analysis of WOMAN.
We discuss implications of this work for the theory and practice of conceptual
engineering. The code and data can be found on GitHub. | Computational Linguistics |
What field is the article from? | Title: Patch-MI: Enhancing Model Inversion Attacks via Patch-Based Reconstruction
Abstract: Model inversion (MI) attacks aim to reveal sensitive information in training
datasets by solely accessing model weights. Generative MI attacks, a prominent
strand in this field, utilize auxiliary datasets to recreate target data
attributes, restricting the images to remain photo-realistic, but their success
often depends on the similarity between auxiliary and target datasets. If the
distributions are dissimilar, existing MI attack attempts frequently fail,
yielding unrealistic or target-unrelated results. In response to these
challenges, we introduce a groundbreaking approach named Patch-MI, inspired by
jigsaw puzzle assembly. To this end, we build upon a new probabilistic
interpretation of MI attacks, employing a generative adversarial network
(GAN)-like framework with a patch-based discriminator. This approach allows the
synthesis of images that are similar to the target dataset distribution, even
in cases of dissimilar auxiliary dataset distribution. Moreover, we artfully
employ a random transformation block, a sophisticated maneuver that crafts
generalized images, thus enhancing the efficacy of the target classifier. Our
numerical and graphical findings demonstrate that Patch-MI surpasses existing
generative MI methods in terms of accuracy, marking significant advancements
while preserving comparable statistical dataset quality. For reproducibility of
our results, we make our source code publicly available at
https://github.com/jonggyujang0123/Patch-Attack. | Artificial Intelligence |
What field is the article from? | Title: TrustMark: Universal Watermarking for Arbitrary Resolution Images
Abstract: Imperceptible digital watermarking is important in copyright protection,
misinformation prevention, and responsible generative AI. We propose TrustMark
- a GAN-based watermarking method with a novel design in its architecture and
spatio-spectral losses to balance the trade-off between watermarked image
quality and watermark recovery accuracy. Our model is trained with
robustness in mind, withstanding various in-place and out-of-place perturbations on the
encoded image. Additionally, we introduce TrustMark-RM, a watermark remover
method useful for re-watermarking. Our methods achieve state-of-the-art performance
on 3 benchmarks comprising arbitrary resolution images. | Computer Vision |
What field is the article from? | Title: UI Layout Generation with LLMs Guided by UI Grammar
Abstract: The recent advances in Large Language Models (LLMs) have stimulated interest
among researchers and industry professionals, particularly in their application
to tasks concerning mobile user interfaces (UIs). This position paper
investigates the use of LLMs for UI layout generation. Central to our
exploration is the introduction of UI grammar -- a novel approach we proposed
to represent the hierarchical structure inherent in UI screens. The aim of this
approach is to guide the generative capacities of LLMs more effectively and
improve the explainability and controllability of the process. Initial
experiments conducted with GPT-4 showed the promising capability of LLMs to
produce high-quality user interfaces via in-context learning. Furthermore, our
preliminary comparative study suggested the potential of the grammar-based
approach in improving the quality of generative results in specific aspects. | Human-Computer Interaction |
What field is the article from? | Title: Context Tuning for Retrieval Augmented Generation
Abstract: Large language models (LLMs) have the remarkable ability to solve new tasks
with just a few examples, but they need access to the right tools. Retrieval
Augmented Generation (RAG) addresses this problem by retrieving a list of
relevant tools for a given task. However, RAG's tool retrieval step requires
all the required information to be explicitly present in the query. This is a
limitation, as semantic search, the widely adopted tool retrieval method, can
fail when the query is incomplete or lacks context. To address this limitation,
we propose Context Tuning for RAG, which employs a smart context retrieval
system to fetch relevant information that improves both tool retrieval and plan
generation. Our lightweight context retrieval model uses numerical,
categorical, and habitual usage signals to retrieve and rank context items. Our
empirical results demonstrate that context tuning significantly enhances
semantic search, achieving a 3.5-fold and 1.5-fold improvement in Recall@K for
context retrieval and tool retrieval tasks respectively, and resulting in an
11.6% increase in LLM-based planner accuracy. Additionally, we show that our
proposed lightweight model using Reciprocal Rank Fusion (RRF) with LambdaMART
outperforms GPT-4 based retrieval. Moreover, we observe context augmentation at
plan generation, even after tool retrieval, reduces hallucination. | Information Retrieval |
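For reference, a sketch of Reciprocal Rank Fusion, the fusion rule the abstract pairs with LambdaMART; k=60 is the conventional constant from the original RRF formulation, not a value taken from this paper.

```python
from collections import defaultdict

def reciprocal_rank_fusion(rankings, k=60):
    """rankings: list of ranked lists of item ids, best first."""
    scores = defaultdict(float)
    for ranking in rankings:
        for rank, item in enumerate(ranking, start=1):
            scores[item] += 1.0 / (k + rank)   # each list votes 1/(k + rank)
    return sorted(scores, key=scores.get, reverse=True)

# Example: fusing two retrievers' rankings.
print(reciprocal_rank_fusion([["a", "b", "c"], ["a", "c", "d"]]))  # ['a', 'c', 'b', 'd']
```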
What field is the article from? | Title: Universal Self-Consistency for Large Language Model Generation
Abstract: Self-consistency with chain-of-thought prompting (CoT) has demonstrated
remarkable performance gains on various challenging tasks, by utilizing
multiple reasoning paths sampled from large language models (LLMs). However,
self-consistency relies on the answer extraction process to aggregate multiple
solutions, which is not applicable to free-form answers. In this work, we
propose Universal Self-Consistency (USC), which leverages LLMs themselves to
select the most consistent answer among multiple candidates. We evaluate USC on
a variety of benchmarks, including mathematical reasoning, code generation,
long-context summarization, and open-ended question answering. On open-ended
generation tasks where the original self-consistency method is not applicable,
USC effectively utilizes multiple samples and improves the performance. For
mathematical reasoning, USC matches the standard self-consistency performance
without requiring the answer formats to be similar. Finally, without access to
execution results, USC also matches the execution-based voting performance on
code generation. | Computational Linguistics |
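A hedged sketch of the USC loop with an assumed `generate(prompt, temperature)` LLM call; the paper's exact selection-prompt wording may differ.

```python
import re

def universal_self_consistency(generate, question, n_samples=8):
    # Sample diverse candidate answers, then let the LLM pick the most consistent.
    candidates = [generate(question, temperature=0.7) for _ in range(n_samples)]
    listing = "\n".join(f"Response {i}: {c}" for i, c in enumerate(candidates))
    prompt = (
        f"Question: {question}\n\n{listing}\n\n"
        "Evaluate these responses and select the most consistent one. "
        "Reply with the response number only."
    )
    reply = generate(prompt, temperature=0.0)
    match = re.search(r"\d+", reply)
    idx = min(int(match.group()), n_samples - 1) if match else 0
    return candidates[idx]
```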
What field is the article from? | Title: The Behavior of Large Language Models When Prompted to Generate Code Explanations
Abstract: This paper systematically investigates the generation of code explanations by
Large Language Models (LLMs) for code examples commonly encountered in
introductory programming courses. Our findings reveal significant variations in
the nature of code explanations produced by LLMs, influenced by factors such as
the wording of the prompt, the specific code examples under consideration, the
programming language involved, the temperature parameter, and the version of
the LLM. However, a consistent pattern emerges for Java and Python, where
explanations exhibit a Flesch-Kincaid readability level of approximately grade
7-8 and a consistent lexical density, indicating the proportion of meaningful
words relative to the total explanation size. Additionally, the generated
explanations consistently achieve high scores for correctness, but lower scores
on three other metrics: completeness, conciseness, and specificity. | Software Engineering |
What field is the article from? | Title: Multi-perspective Feedback-attention Coupling Model for Continuous-time Dynamic Graphs
Abstract: Recently, representation learning over graph networks has gained popularity,
with various models showing promising results. Despite this, several challenges
persist: 1) most methods are designed for static or discrete-time dynamic
graphs; 2) existing continuous-time dynamic graph algorithms focus on a single
evolving perspective; and 3) many continuous-time dynamic graph approaches
necessitate numerous temporal neighbors to capture long-term dependencies. In
response, this paper introduces the Multi-Perspective Feedback-Attention
Coupling (MPFA) model. MPFA incorporates information from both evolving and raw
perspectives, efficiently learning the interleaved dynamics of observed
processes. The evolving perspective employs temporal self-attention to
distinguish continuously evolving temporal neighbors for information
aggregation. Through dynamic updates, this perspective can capture long-term
dependencies using a small number of temporal neighbors. Meanwhile, the raw
perspective utilizes a feedback attention module with growth characteristic
coefficients to aggregate raw neighborhood information. Experimental results on
a self-organizing dataset and seven public datasets validate the efficacy and
competitiveness of our proposed model. | Machine Learning |
What field is the article from? | Title: Conditions for Length Generalization in Learning Reasoning Skills
Abstract: Reasoning is a fundamental capability of AI agents. Recently, large language
models (LLMs) have shown remarkable abilities to perform reasoning tasks.
However, numerous evaluations of the reasoning capabilities of LLMs have also
showed some limitations. An outstanding limitation is length generalization,
meaning that when trained on reasoning problems of smaller lengths or sizes,
the resulting models struggle with problems of larger sizes or lengths. This
potentially indicates some theoretical limitations of generalization in
learning reasoning skills. These evaluations and their observations motivated
us to perform a theoretical study of the length generalization problem. This
work focuses on reasoning tasks that can be formulated as Markov dynamic
processes (MDPs) and/or directed acyclic graphs (DAGs). It identifies and
proves conditions that decide whether the length generalization problem can be
solved or not for a reasoning task in a particular representation. Experiments
are also conducted to verify the theoretical results. | Artificial Intelligence |
What field is the article from? | Title: Probabilistic Forecast Reconciliation with Kullback-Leibler Divergence Regularization
Abstract: As the popularity of hierarchical point forecast reconciliation methods
increases, there is a growing interest in probabilistic forecast
reconciliation. Many studies have utilized machine learning or deep learning
techniques to implement probabilistic forecasting reconciliation and have made
notable progress. However, these methods treat the reconciliation step as a
fixed and hard post-processing step, leading to a trade-off between accuracy
and coherency. In this paper, we propose a new approach for probabilistic
forecast reconciliation. Unlike existing approaches, our proposed approach
fuses the prediction step and reconciliation step into a deep learning
framework, making the reconciliation step more flexible and soft by introducing
the Kullback-Leibler divergence regularization term into the loss function. The
approach is evaluated using three hierarchical time series datasets, which
shows the advantages of our approach over other probabilistic forecast
reconciliation methods. | Artificial Intelligence |
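One plausible shape of such a loss, not the paper's exact objective: a Gaussian base-forecast likelihood plus a KL term that softly pulls the predictive distribution toward its reconciled projection. Here S is the hierarchy's summing matrix and G an assumed (possibly learned) projection.

```python
import torch

def reconciliation_loss(mu, sigma, y, S, G, lam=0.1):
    dist = torch.distributions.Normal(mu, sigma)
    nll = -dist.log_prob(y).mean()                     # fit the base forecasts
    mu_rec = (S @ (G @ mu.unsqueeze(-1))).squeeze(-1)  # coherent (reconciled) mean
    rec = torch.distributions.Normal(mu_rec, sigma)
    kl = torch.distributions.kl_divergence(dist, rec).mean()  # soft coherency term
    return nll + lam * kl
```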
What field is the article from? | Title: How Trustworthy are Open-Source LLMs? An Assessment under Malicious Demonstrations Shows their Vulnerabilities
Abstract: The rapid progress in open-source Large Language Models (LLMs) is
significantly driving AI development forward. However, there is still a limited
understanding of their trustworthiness. Deploying these models at scale without
sufficient trustworthiness can pose significant risks, highlighting the need to
uncover these issues promptly. In this work, we conduct an assessment of
open-source LLMs on trustworthiness, scrutinizing them across eight different
aspects including toxicity, stereotypes, ethics, hallucination, fairness,
sycophancy, privacy, and robustness against adversarial demonstrations. We
propose an enhanced Chain of Utterances-based (CoU) prompting strategy by
incorporating meticulously crafted malicious demonstrations for trustworthiness
attack. Our extensive experiments encompass recent and representative series of
open-source LLMs, including Vicuna, MPT, Falcon, Mistral, and Llama 2. The
empirical outcomes underscore the efficacy of our attack strategy across
diverse aspects. More interestingly, our result analysis reveals that models
with superior performance in general NLP tasks do not always have greater
trustworthiness; in fact, larger models can be more vulnerable to attacks.
Additionally, models that have undergone instruction tuning, focusing on
instruction following, tend to be more susceptible, although fine-tuning LLMs
for safety alignment proves effective in mitigating adversarial trustworthiness
attacks. | Computational Linguistics |
What field is the article from? | Title: HypUC: Hyperfine Uncertainty Calibration with Gradient-boosted Corrections for Reliable Regression on Imbalanced Electrocardiograms
Abstract: The automated analysis of medical time series, such as the electrocardiogram
(ECG), electroencephalogram (EEG), pulse oximetry, etc., has the potential to
serve as a valuable tool for diagnostic decisions, allowing for remote
monitoring of patients and more efficient use of expensive and time-consuming
medical procedures. Deep neural networks (DNNs) have been demonstrated to
process such signals effectively. However, previous research has primarily
focused on classifying medical time series rather than attempting to regress
the continuous-valued physiological parameters central to diagnosis. One
significant challenge in this regard is the imbalanced nature of the dataset,
as a low prevalence of abnormal conditions can lead to heavily skewed data that
results in inaccurate predictions and a lack of certainty in such predictions
when deployed. To address these challenges, we propose HypUC, a framework for
imbalanced probabilistic regression in medical time series, making several
contributions. (i) We introduce a simple kernel density-based technique to
tackle the imbalanced regression problem with medical time series. (ii)
Moreover, we employ a probabilistic regression framework that allows
uncertainty estimation for the predicted continuous values. (iii) We also
present a new approach to calibrate the predicted uncertainty further. (iv)
Finally, we demonstrate a technique to use calibrated uncertainty estimates to
improve the predicted continuous value and show the efficacy of the calibrated
uncertainty estimates to flag unreliable predictions. HypUC is evaluated on a
large, diverse, real-world dataset of ECGs collected from millions of patients,
outperforming several conventional baselines on various diagnostic tasks,
suggesting a potential use-case for the reliable clinical deployment of deep
learning models. | Machine Learning |
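A hedged reading of contribution (i): weight each training target inversely to its kernel-density estimate so that rare values count more in the regression loss; the data below are synthetic stand-ins, not the paper's ECG dataset.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(0)
y_train = np.concatenate([rng.normal(60, 5, 950),     # common values
                          rng.normal(120, 10, 50)])   # rare abnormal values
kde = gaussian_kde(y_train)
weights = 1.0 / kde(y_train)       # inverse-density weights
weights /= weights.mean()          # normalize to an average weight of 1
# These weights would then multiply the per-sample regression loss.
```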
What field is the article from? | Title: The theoretical limits of biometry
Abstract: Biometry has proved its capability in terms of recognition accuracy. Now, it
is widely used for automated border control with the biometric passport, to
unlock a smartphone or a computer with a fingerprint or a face recognition
algorithm. While identity verification is widely democratized, pure
identification with no additional clues is still a work in progress. The
identification difficulty depends on the population size, as the larger the
group is, the larger the confusion risk. For collision prevention, biometric
traits must be sufficiently distinguishable to scale to considerable groups,
and algorithms should be able to capture their differences accurately.
Most biometric works are purely experimental, and it is impossible to
extrapolate the results to a smaller or a larger group. In this work, we
propose a theoretical analysis of the distinguishability problem, which governs
the error rates of biometric systems. We demonstrate simple relationships
between the population size and the number of independent bits necessary to
prevent collision in the presence of noise. This work provides the lowest lower
bound for memory requirements. The results are very encouraging, as the
biometry of the whole Earth population can fit in a regular disk, leaving some
space for noise and redundancy. | Cryptography and Security |
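A worked example helps pin down the kind of population-vs-bits relationship the abstract above alludes to. The birthday bound used here is a standard collision estimate, not a formula quoted from the paper, and the target collision probability is an arbitrary illustrative choice:

```python
import math

def bits_needed(population, max_collision_prob=1e-6):
    """Bits of independent biometric information needed so that the chance
    of ANY collision among `population` people stays below
    `max_collision_prob`, using the birthday bound
        P(collision) ~ n(n-1) / 2^(b+1)."""
    n = population
    return math.ceil(math.log2(n * (n - 1) / max_collision_prob) - 1)

b = bits_needed(8_000_000_000)                 # ~85 bits per person
storage_gb = 8_000_000_000 * b / 8 / 1e9
print(b, "bits;", round(storage_gb), "GB for the whole Earth population")
```

At roughly 85 bits per person under this bound, templates for eight billion people occupy on the order of 85 GB, consistent with the abstract's claim that the biometry of the whole Earth population fits on a regular disk with room for noise and redundancy.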
What field is the article from? | Title: Federated Natural Policy Gradient Methods for Multi-task Reinforcement Learning
Abstract: Federated reinforcement learning (RL) enables collaborative decision making
of multiple distributed agents without sharing local data trajectories. In this
work, we consider a multi-task setting, in which each agent has its own private
reward function corresponding to different tasks, while sharing the same
transition kernel of the environment. Focusing on infinite-horizon tabular
Markov decision processes, the goal is to learn a globally optimal policy that
maximizes the sum of the discounted total rewards of all the agents in a
decentralized manner, where each agent only communicates with its neighbors
over some prescribed graph topology. We develop federated vanilla and
entropy-regularized natural policy gradient (NPG) methods under softmax
parameterization, where gradient tracking is applied to the global Q-function
to mitigate the impact of imperfect information sharing. We establish
non-asymptotic global convergence guarantees under exact policy evaluation,
which are nearly independent of the size of the state-action space and
illuminate the impacts of network size and connectivity. To the best of our
knowledge, this is the first time that global convergence is established for
federated multi-task RL using policy optimization. Moreover, the convergence
behavior of the proposed algorithms is robust against inexactness of policy
evaluation. | Machine Learning |
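For the federated NPG entry above, the abstract does not print the update rule, but entropy-regularized NPG under softmax parameterization is known to take a closed multiplicative form; a minimal tabular sketch, with the federated gradient-tracking step only indicated in a comment:

```python
import numpy as np

def entropy_reg_npg_step(pi, Q, eta=0.1, tau=0.01):
    """One entropy-regularized NPG update under softmax parameterization:
        pi'(a|s)  ∝  pi(a|s)^(1 - eta*tau) * exp(eta * Q(s, a)).
    In the federated algorithm each agent would use its gradient-tracked
    estimate of the GLOBAL Q-function (built by averaging with graph
    neighbors); that communication step is omitted from this sketch.
    pi, Q: (num_states, num_actions) arrays."""
    logits = (1.0 - eta * tau) * np.log(np.clip(pi, 1e-12, None)) + eta * Q
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    pi_new = np.exp(logits)
    return pi_new / pi_new.sum(axis=1, keepdims=True)

rng = np.random.default_rng(0)
pi = np.full((3, 4), 0.25)                 # uniform init over 4 actions
Q = rng.standard_normal((3, 4))
pi = entropy_reg_npg_step(pi, Q)
```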
What field is the article from? | Title: Dynamic V2X Autonomous Perception from Road-to-Vehicle Vision
Abstract: Vehicle-to-everything (V2X) perception is an innovative technology that
enhances vehicle perception accuracy, thereby elevating the security and
reliability of autonomous systems. However, existing V2X perception methods
focus on static scenes from mainly vehicle-based vision, which is constrained
by sensor capabilities and communication loads. To adapt V2X perception models
to dynamic scenes, we propose to build V2X perception from road-to-vehicle
vision and present the Adaptive Road-to-Vehicle Perception (AR2VP) method. In
AR2VP, we leverage roadside units to offer stable, wide-range sensing
capabilities and serve as communication hubs. AR2VP is devised to tackle both
intra-scene and inter-scene changes. For the former, we construct a dynamic
perception representing module, which efficiently integrates vehicle
perceptions, enabling vehicles to capture a more comprehensive range of dynamic
factors within the scene. Moreover, we introduce a road-to-vehicle perception
compensating module, aimed at preserving the maximized roadside unit perception
information in the presence of intra-scene changes. For inter-scene changes, we
implement an experience replay mechanism leveraging the roadside unit's storage
capacity to retain a subset of historical scene data, maintaining model
robustness in response to inter-scene shifts. We conduct perception experiment
on 3D object detection and segmentation, and the results show that AR2VP excels
in both performance-bandwidth trade-offs and adaptability within dynamic
environments. | Computer Vision |
What field is the article from? | Title: Exploring Large Language Models to Facilitate Variable Autonomy for Human-Robot Teaming
Abstract: In a rapidly evolving digital landscape autonomous tools and robots are
becoming commonplace. Recognizing the significance of this development, this
paper explores the integration of Large Language Models (LLMs) like Generative
pre-trained transformer (GPT) into human-robot teaming environments to
facilitate variable autonomy through the means of verbal human-robot
communication. In this paper, we introduce a novel framework for such a
GPT-powered multi-robot testbed environment, based on a Unity Virtual Reality
(VR) setting. This system allows users to interact with robot agents through
natural language, each powered by individual GPT cores. By means of OpenAI's
function calling, we bridge the gap between unstructured natural language input
and structured robot actions. A user study with 12 participants explores the
effectiveness of GPT-4 and, more importantly, user strategies when given
the opportunity to converse in natural language within a multi-robot
environment. Our findings suggest that users may have preconceived expectations
on how to converse with robots and seldom try to explore the actual language
and cognitive capabilities of their robot collaborators. Still, those users who
did explore were able to benefit from a much more natural flow of
communication and human-like back-and-forth. We provide a set of lessons
learned for future research and technical implementations of similar systems. | Human-Computer Interaction |
What field is the article from? | Title: Homogeneous Artificial Neural Network
Abstract: The paper proposes an artificial neural network (ANN) that is a global
approximator for a special class of functions known as generalized
homogeneous functions. Homogeneity means a symmetry of a function with respect to a
group of transformations having the topological characterization of a dilation. In
this paper, a class of the so-called linear dilations is considered. A
homogeneous universal approximation theorem is proven. Procedures for an
upgrade of an existing ANN to a homogeneous one are developed. Theoretical
results are supported by examples from the various domains (computer science,
systems theory and automatic control). | Machine Learning |
What field is the article from? | Title: Physics simulation capabilities of LLMs
Abstract: [Abridged abstract] Large Language Models (LLMs) can solve some
undergraduate-level to graduate-level physics textbook problems and are
proficient at coding. Combining these two capabilities could one day enable AI
systems to simulate and predict the physical world.
We present an evaluation of state-of-the-art (SOTA) LLMs on PhD-level to
research-level computational physics problems. We condition LLM generation on
the use of well-documented and widely-used packages to elicit coding
capabilities in the physics and astrophysics domains. We contribute $\sim 50$
original and challenging problems in celestial mechanics (with REBOUND),
stellar physics (with MESA), 1D fluid dynamics (with Dedalus) and non-linear
dynamics (with SciPy). Since our problems do not admit unique solutions, we
evaluate LLM performance on several soft metrics: counts of lines that contain
different types of errors (coding, physics, necessity and sufficiency) as well
as a more "educational" Pass-Fail metric focused on capturing the salient
physical ingredients of the problem at hand.
As expected, today's SOTA LLM (GPT4) zero-shot fails most of our problems,
although about 40\% of the solutions could plausibly get a passing grade. About
$70-90 \%$ of the code lines produced are necessary, sufficient and correct
(coding \& physics). Physics and coding errors are the most common, with some
unnecessary or insufficient lines. We observe significant variations across
problem class and difficulty. We identify several failure modes of GPT4 in the
computational physics domain.
Our reconnaissance work provides a snapshot of current computational
capabilities in classical physics and points to obvious improvement targets if
AI systems are ever to reach a basic level of autonomy in physics simulation
capabilities. | Artificial Intelligence |
What field is the article from? | Title: Scaling #DNN-Verification Tools with Efficient Bound Propagation and Parallel Computing
Abstract: Deep Neural Networks (DNNs) are powerful tools that have shown extraordinary
results in many scenarios, ranging from pattern recognition to complex robotic
problems. However, their intricate designs and lack of transparency raise
safety concerns when applied in real-world applications. In this context,
Formal Verification (FV) of DNNs has emerged as a valuable solution to provide
provable guarantees on the safety aspect. Nonetheless, the binary answer (i.e.,
safe or unsafe) could be not informative enough for direct safety interventions
such as safety model ranking or selection. To address this limitation, the FV
problem has recently been extended to the counting version, called
#DNN-Verification, for the computation of the size of the unsafe regions in a
given safety property's domain. Still, due to the complexity of the problem,
existing solutions struggle to scale on real-world robotic scenarios, where the
DNN can be large and complex. To address this limitation, inspired by advances
in FV, in this work, we propose a novel strategy based on reachability analysis
combined with Symbolic Linear Relaxation and parallel computing to enhance the
efficiency of existing exact and approximate FV for DNN counters. The empirical
evaluation on standard FV benchmarks and realistic robotic scenarios shows a
remarkable improvement in scalability and efficiency, enabling the use of such
techniques even for complex robotic applications. | Artificial Intelligence |
What field is the article from? | Title: KBFormer: A Diffusion Model for Structured Entity Completion
Abstract: We develop a generative attention-based approach to modeling structured
entities comprising different property types, such as numerical, categorical,
string, and composite. This approach handles such heterogeneous data through a
mixed continuous-discrete diffusion process over the properties. Our flexible
framework can model entities with arbitrary hierarchical properties, enabling
applications to structured Knowledge Base (KB) entities and tabular data. Our
approach obtains state-of-the-art performance on a majority of cases across 15
datasets. In addition, experiments with a device KB and a nuclear physics
dataset demonstrate the model's ability to learn representations useful for
entity completion in diverse settings. This has many downstream use cases,
including modeling numerical properties with high accuracy - critical for
science applications, which also benefit from the model's inherent
probabilistic nature. | Machine Learning |
What field is the article from? | Title: Model-Based Runtime Monitoring with Interactive Imitation Learning
Abstract: Robot learning methods have recently made great strides, but generalization
and robustness challenges still hinder their widespread deployment. Failing to
detect and address potential failures renders state-of-the-art learning systems
not combat-ready for high-stakes tasks. Recent advances in interactive
imitation learning have presented a promising framework for human-robot
teaming, enabling the robots to operate safely and continually improve their
performances over long-term deployments. Nonetheless, existing methods
typically require constant human supervision and preemptive feedback, limiting
their practicality in realistic domains. This work aims to endow a robot with
the ability to monitor and detect errors during task execution. We introduce a
model-based runtime monitoring algorithm that learns from deployment data to
detect system anomalies and anticipate failures. Unlike prior work that cannot
foresee future failures or requires failure experiences for training, our
method learns a latent-space dynamics model and a failure classifier, enabling
our method to simulate future action outcomes and detect out-of-distribution
and high-risk states preemptively. We train our method within an interactive
imitation learning framework, where it continually updates the model from the
experiences of the human-robot team collected using trustworthy deployments.
Consequently, our method reduces the human workload needed over time while
ensuring reliable task execution. Our method outperforms the baselines across
system-level and unit-test metrics, with 23% and 40% higher success rates in
simulation and on physical hardware, respectively. More information at
https://ut-austin-rpl.github.io/sirius-runtime-monitor/ | Robotics |
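The mechanism described in the runtime-monitoring abstract above (simulate future action outcomes with a learned latent dynamics model, then flag high-risk states with a failure classifier) admits a compact sketch. The rollout-and-threshold rule and all function signatures below are our own illustration, not the paper's API:

```python
def anticipate_failure(z0, dynamics, policy, failure_prob, horizon=10, threshold=0.5):
    """Roll the learned latent-space dynamics forward under the policy and
    flag any predicted state the failure classifier considers high-risk.
    `dynamics(z, a)` -> next latent state, `policy(z)` -> action,
    `failure_prob(z)` -> probability of failure in [0, 1]."""
    z = z0
    for t in range(horizon):
        z = dynamics(z, policy(z))
        if failure_prob(z) > threshold:
            return True, t + 1      # predicted failure t+1 steps ahead
    return False, horizon

# toy usage with stand-in callables
flagged, steps = anticipate_failure(
    z0=0.0,
    dynamics=lambda z, a: z + a,
    policy=lambda z: 0.2,
    failure_prob=lambda z: min(1.0, abs(z)),
)
```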
What field is the article from? | Title: Towards Conceptualization of "Fair Explanation": Disparate Impacts of anti-Asian Hate Speech Explanations on Content Moderators
Abstract: Recent research at the intersection of AI explainability and fairness has
focused on how explanations can improve human-plus-AI task performance as
assessed by fairness measures. We propose to characterize what constitutes an
explanation that is itself "fair" -- an explanation that does not adversely
impact specific populations. We formulate a novel evaluation method of "fair
explanations" using not just accuracy and label time, but also psychological
impact of explanations on different user groups across many metrics (mental
discomfort, stereotype activation, and perceived workload). We apply this
method in the context of content moderation of potential hate speech, and its
differential impact on Asian vs. non-Asian proxy moderators, across explanation
approaches (saliency map and counterfactual explanation). We find that saliency
maps generally perform better and show less evidence of disparate impact
(group) and individual unfairness than counterfactual explanations.
Content warning: This paper contains examples of hate speech and racially
discriminatory language. The authors do not support such content. Please
consider your risk of discomfort carefully before continuing reading! | Computational Linguistics |
What field is the article from? | Title: ExPT: Synthetic Pretraining for Few-Shot Experimental Design
Abstract: Experimental design is a fundamental problem in many science and engineering
fields. In this problem, sample efficiency is crucial due to the time, money,
and safety costs of real-world design evaluations. Existing approaches either
rely on active data collection or access to large, labeled datasets of past
experiments, making them impractical in many real-world scenarios. In this
work, we address the more challenging yet realistic setting of few-shot
experimental design, where only a few labeled data points of input designs and
their corresponding values are available. We approach this problem as a
conditional generation task, where a model conditions on a few labeled examples
and the desired output to generate an optimal input design. To this end, we
introduce Experiment Pretrained Transformers (ExPT), a foundation model for
few-shot experimental design that employs a novel combination of synthetic
pretraining with in-context learning. In ExPT, we only assume knowledge of a
finite collection of unlabelled data points from the input domain and pretrain
a transformer neural network to optimize diverse synthetic functions defined
over this domain. Unsupervised pretraining allows ExPT to adapt to any design
task at test time in an in-context fashion by conditioning on a few labeled
data points from the target task and generating the candidate optima. We
evaluate ExPT on few-shot experimental design in challenging domains and
demonstrate its superior generality and performance compared to existing
methods. The source code is available at https://github.com/tung-nd/ExPT.git. | Machine Learning |
What field is the article from? | Title: How Generative-AI can be Effectively used in Government Chatbots
Abstract: With the rapid development of artificial intelligence and breakthroughs in
machine learning and natural language processing, intelligent
question-answering robots have become widely used in government affairs. This
paper conducts a horizontal comparison between Guangdong Province's government
chatbots and two large language models, ChatGPT and Wenxin Ernie, to analyze the
strengths and weaknesses of existing government chatbots and AIGC technology.
The study finds significant differences between government chatbots and large
language models. China's government chatbots are still in an exploratory stage
and have a gap to close to achieve "intelligence." To explore the future
direction of government chatbots more deeply, this research proposes targeted
optimization paths to help generative AI be effectively applied in government
chatbot conversations. | Computational Linguistics |
What field is the article from? | Title: Synergizing Human-AI Agency: A Guide of 23 Heuristics for Service Co-Creation with LLM-Based Agents
Abstract: This empirical study serves as a primer for interested service providers to
determine if and how Large Language Models (LLMs) technology will be integrated
for their practitioners and the broader community. We investigate the mutual
learning journey of non-AI experts and AI through CoAGent, a service
co-creation tool with LLM-based agents. Engaging in a three-stage participatory
design processes, we work with with 23 domain experts from public libraries
across the U.S., uncovering their fundamental challenges of integrating AI into
human workflows. Our findings provide 23 actionable "heuristics for service
co-creation with AI", highlighting the nuanced shared responsibilities between
humans and AI. We further exemplify 9 foundational agency aspects for AI,
emphasizing essentials like ownership, fair treatment, and freedom of
expression. Our innovative approach enriches the participatory design model by
incorporating AI as crucial stakeholders and utilizing AI-AI interaction to
identify blind spots. Collectively, these insights pave the way for synergistic
and ethical human-AI co-creation in service contexts, preparing for workforce
ecosystems where AI coexists. | Human-Computer Interaction |
What field is the article from? | Title: Heterogeneous Graph Neural Architecture Search with GPT-4
Abstract: Heterogeneous graph neural architecture search (HGNAS) represents a powerful
tool for automatically designing effective heterogeneous graph neural networks.
However, existing HGNAS algorithms suffer from inefficient searches and
unstable results. In this paper, we present a new GPT-4 based HGNAS model to
improve the search efficiency and search accuracy of HGNAS. Specifically, we
present a new GPT-4 enhanced Heterogeneous Graph Neural Architecture Search
(GHGNAS for short). The basic idea of GHGNAS is to design a set of prompts that
can guide GPT-4 toward the task of generating new heterogeneous graph neural
architectures. By iteratively asking GPT-4 with the prompts, GHGNAS continually
validates the accuracy of the generated HGNNs and uses the feedback to further
optimize the prompts. Experimental results show that GHGNAS can design new
HGNNs by leveraging the powerful generalization capability of GPT-4. Moreover,
GHGNAS runs more effectively and stably than previous HGNAS models based on
reinforcement learning and differentiable search algorithms. | Artificial Intelligence |
What field is the article from? | Title: The Alignment Problem in Context
Abstract: A core challenge in the development of increasingly capable AI systems is to
make them safe and reliable by ensuring their behaviour is consistent with
human values. This challenge, known as the alignment problem, does not merely
apply to hypothetical future AI systems that may pose catastrophic risks; it
already applies to current systems, such as large language models, whose
potential for harm is rapidly increasing. In this paper, I assess whether we
are on track to solve the alignment problem for large language models, and what
that means for the safety of future AI systems. I argue that existing
strategies for alignment are insufficient, because large language models remain
vulnerable to adversarial attacks that can reliably elicit unsafe behaviour. I
offer an explanation of this lingering vulnerability on which it is not simply
a contingent limitation of current language models, but has deep technical ties
to a crucial aspect of what makes these models useful and versatile in the
first place -- namely, their remarkable aptitude to learn "in context" directly
from user instructions. It follows that the alignment problem is not only
unsolved for current AI systems, but may be intrinsically difficult to solve
without severely undermining their capabilities. Furthermore, this assessment
raises concerns about the prospect of ensuring the safety of future and more
capable AI systems. | Machine Learning |
What field is the article from? | Title: Sample Efficient Reinforcement Learning from Human Feedback via Active Exploration
Abstract: Preference-based feedback is important for many applications in reinforcement
learning where direct evaluation of a reward function is not feasible. A
notable recent example arises in reinforcement learning from human feedback
(RLHF) on large language models. For many applications of RLHF, the cost of
acquiring the human feedback can be substantial. In this work, we take
advantage of the fact that one can often choose contexts at which to obtain
human feedback in order to most efficiently identify a good policy, and
formalize this as an offline contextual dueling bandit problem. We give an
upper-confidence-bound style algorithm for this problem and prove a polynomial
worst-case regret bound. We then provide empirical confirmation in a synthetic
setting that our approach outperforms existing methods. Afterwards, we extend the
setting and methodology for practical use in RLHF training of large language
models. Here, our method is able to reach better performance with fewer samples
of human preferences than multiple baselines on three real-world datasets. | Machine Learning |
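For the active-exploration RLHF entry above, an upper-confidence-bound style acquisition rule for a contextual dueling bandit might look like the following; this is a generic stand-in under a linear-features assumption, not the paper's exact algorithm:

```python
import numpy as np

def pick_query(candidates, V, beta=1.0):
    """Choose the comparison whose outcome is most uncertain: the UCB-style
    width  beta * ||phi_a - phi_b||_{V^{-1}},  where V is the regularized
    design matrix of feature differences queried so far.
    candidates: list of (phi_a, phi_b) feature-vector pairs per context."""
    V_inv = np.linalg.inv(V)
    widths = [beta * np.sqrt(d @ V_inv @ d)
              for d in (pa - pb for pa, pb in candidates)]
    return int(np.argmax(widths))

rng = np.random.default_rng(0)
cands = [(rng.standard_normal(4), rng.standard_normal(4)) for _ in range(20)]
print(pick_query(cands, V=np.eye(4)))   # index of the most informative query
```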
What field is the article from? | Title: Operator-learning-inspired Modeling of Neural Ordinary Differential Equations
Abstract: Neural ordinary differential equations (NODEs), one of the most influential
works in differential equation-based deep learning, continuously
generalize residual networks and have opened a new field. They are currently
utilized for various downstream tasks, e.g., image classification, time series
classification, image generation, etc. Their key component is how to model the
time-derivative of the hidden state, denoted $dh(t)/dt$. People have habitually
used conventional neural network architectures, e.g., fully-connected layers
followed by non-linear activations. In this paper, however, we present a neural
operator-based method to define the time-derivative term. Neural operators were
initially proposed to model the differential operator of partial differential
equations (PDEs). Since the time-derivative of NODEs can be understood as a
special type of the differential operator, our proposed method, called branched
Fourier neural operator (BFNO), makes sense. In our experiments with general
downstream tasks, our method significantly outperforms existing methods. | Machine Learning |
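The core idea of the operator-learning NODE abstract above, replacing the usual MLP for $dh(t)/dt$ with a neural-operator-style map, can be sketched with a single spectral (Fourier) layer. The single-branch structure, tanh activation, and Euler integration below are simplifications assumed for illustration; the paper's branched architecture and training are not reproduced:

```python
import numpy as np

def fourier_layer(h, W_modes, modes=8):
    """Spectral convolution: FFT, mix the lowest `modes` frequencies with
    learned complex weights, inverse FFT back to physical space."""
    H = np.fft.rfft(h)
    out = np.zeros_like(H)
    out[:modes] = W_modes[:modes] * H[:modes]
    return np.fft.irfft(out, n=h.shape[0])

def dhdt(h, W_modes, w_skip):
    # vector field of the NODE: spectral conv plus a pointwise skip term
    return np.tanh(fourier_layer(h, W_modes) + w_skip * h)

rng = np.random.default_rng(0)
h = rng.standard_normal(64)
W_modes = rng.standard_normal(33) + 1j * rng.standard_normal(33)  # rfft of length-64 signal
for _ in range(10):                   # crude Euler integration of the NODE
    h = h + 0.1 * dhdt(h, W_modes, w_skip=0.5)
```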
What field is the article from? | Title: SCPO: Safe Reinforcement Learning with Safety Critic Policy Optimization
Abstract: Incorporating safety is an essential prerequisite for broadening the
practical applications of reinforcement learning in real-world scenarios. To
tackle this challenge, Constrained Markov Decision Processes (CMDPs) are
leveraged, which introduce a distinct cost function representing safety
violations. In CMDPs' settings, Lagrangian relaxation technique has been
employed in previous algorithms to convert constrained optimization problems
into unconstrained dual problems. However, these algorithms may inaccurately
predict unsafe behavior, resulting in instability while learning the Lagrange
multiplier. This study introduces a novel safe reinforcement learning
algorithm, Safety Critic Policy Optimization (SCPO). In this study, we define
the safety critic, a mechanism that nullifies rewards obtained through
violating safety constraints. Furthermore, our theoretical analysis indicates
that the proposed algorithm can automatically balance the trade-off between
adhering to safety constraints and maximizing rewards. The effectiveness of the
SCPO algorithm is empirically validated by benchmarking it against strong
baselines. | Machine Learning |
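The safety critic in the SCPO abstract above "nullifies rewards obtained through violating safety constraints"; one way to read that is a multiplicative gate on the reward, which is our interpretation rather than the paper's stated formula:

```python
def safety_critic_reward(reward, p_violation):
    """Nullify reward earned through unsafe behavior: the safety critic's
    output is treated here as a violation probability and used as a soft
    multiplicative gate (an illustrative reading of the abstract)."""
    return reward * (1.0 - p_violation)

print(safety_critic_reward(1.0, 0.9))   # ~0.1: an unsafe transition keeps little reward
```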
What field is the article from? | Title: MMMU: A Massive Multi-discipline Multimodal Understanding and Reasoning Benchmark for Expert AGI
Abstract: We introduce MMMU: a new benchmark designed to evaluate multimodal models on
massive multi-discipline tasks demanding college-level subject knowledge and
deliberate reasoning. MMMU includes 11.5K meticulously collected multimodal
questions from college exams, quizzes, and textbooks, covering six core
disciplines: Art & Design, Business, Science, Health & Medicine, Humanities &
Social Science, and Tech & Engineering. These questions span 30 subjects and
183 subfields, comprising 30 highly heterogeneous image types, such as charts,
diagrams, maps, tables, music sheets, and chemical structures. Unlike existing
benchmarks, MMMU focuses on advanced perception and reasoning with
domain-specific knowledge, challenging models to perform tasks akin to those
faced by experts. The evaluation of 14 open-source LMMs as well as the
proprietary GPT-4V(ision) and Gemini highlights the substantial challenges
posed by MMMU. Even the advanced GPT-4V and Gemini Ultra only achieve
accuracies of 56% and 59% respectively, indicating significant room for
improvement. We believe MMMU will stimulate the community to build
next-generation multimodal foundation models towards expert artificial general
intelligence. | Computational Linguistics |
What field is the article from? | Title: Learning from One Continuous Video Stream
Abstract: We introduce a framework for online learning from a single continuous video
stream -- the way people and animals learn, without mini-batches, data
augmentation or shuffling. This poses great challenges given the high
correlation between consecutive video frames and there is very little prior
work on it. Our framework allows us to do a first deep dive into the topic and
includes a collection of streams and tasks composed from two existing video
datasets, plus methodology for performance evaluation that considers both
adaptation and generalization. We employ pixel-to-pixel modelling as a
practical and flexible way to switch between pre-training and single-stream
evaluation as well as between arbitrary tasks, without ever requiring changes
to models and always using the same pixel loss. Equipped with this framework we
obtained large single-stream learning gains from pre-training with a novel
family of future prediction tasks, found that momentum hurts, and that the pace
of weight updates matters. The combination of these insights leads to matching
the performance of IID learning with batch size 1, when using the same
architecture and without costly replay buffers. | Computer Vision |
What field is the article from? | Title: Stochastic Configuration Machines: FPGA Implementation
Abstract: Neural networks for industrial applications generally have additional
constraints such as response speed, memory size and power usage. Randomized
learners can address some of these issues. However, hardware solutions can
provide better resource reduction whilst maintaining the model's performance.
Stochastic configuration networks (SCNs) are a prime choice in industrial
applications due to their merits and feasibility for data modelling. Stochastic
Configuration Machines (SCMs) extend this to focus on reducing the memory
constraints by limiting the randomized weights to a binary value with a scalar
for each node and using a mechanism model to improve the learning performance
and result interpretability. This paper aims to implement SCM models on a field
programmable gate array (FPGA) and introduce binary-coded inputs to the
algorithm. Results are reported for two benchmark and two industrial datasets,
including SCM with single-layer and deep architectures. | Machine Learning |
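For the SCM entry above, the memory saving comes from storing roughly one bit per weight plus one scalar per node. A sketch of a single SCM-style layer, where the ±1 weight convention and tanh activation are our assumptions:

```python
import numpy as np

def scm_layer(x, W_sign, scales, activation=np.tanh):
    """One Stochastic Configuration Machine-style layer: randomized weights
    constrained to binary values (here ±1) with a single scalar per node,
    so storage is ~1 bit per weight plus one float per node.
    x: (d,) input; W_sign: (n_nodes, d) in {-1, +1}; scales: (n_nodes,)."""
    return activation(scales * (W_sign @ x))

rng = np.random.default_rng(0)
W = rng.choice([-1.0, 1.0], size=(16, 8))          # binary random weights
h = scm_layer(rng.standard_normal(8), W, scales=0.1 * np.ones(16))
```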
What field is the article from? | Title: Concept-centric Personalization with Large-scale Diffusion Priors
Abstract: Despite large-scale diffusion models being highly capable of generating
diverse open-world content, they still struggle to match the photorealism and
fidelity of concept-specific generators. In this work, we present the task of
customizing large-scale diffusion priors for specific concepts as
concept-centric personalization. Our goal is to generate high-quality
concept-centric images while maintaining the versatile controllability inherent
to open-world models, enabling applications in diverse tasks such as
concept-centric stylization and image translation. To tackle these challenges,
we identify catastrophic forgetting of guidance prediction from diffusion
priors as the fundamental issue. Consequently, we develop a guidance-decoupled
personalization framework specifically designed to address this task. We
propose Generalized Classifier-free Guidance (GCFG) as the foundational theory
for our framework. This approach extends Classifier-free Guidance (CFG) to
accommodate an arbitrary number of guidances, sourced from a variety of
conditions and models. Employing GCFG enables us to separate conditional
guidance into two distinct components: concept guidance for fidelity and
control guidance for controllability. This division makes it feasible to train
a specialized model for concept guidance, while ensuring both control and
unconditional guidance remain intact. We then present a null-text
Concept-centric Diffusion Model as a concept-specific generator to learn
concept guidance without the need for text annotations. Code will be available
at https://github.com/PRIV-Creation/Concept-centric-Personalization. | Computer Vision |
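The GCFG idea in the abstract above, extending classifier-free guidance to an arbitrary number of guidance terms, follows naturally from the standard CFG combination rule; the linear form below mirrors CFG and is our reading, since the abstract does not print the equation:

```python
import numpy as np

def gcfg(eps_uncond, guidances):
    """Generalized classifier-free guidance: combine any number of guidance
    terms, each with its own weight and possibly produced by a different
    condition or even a different model,
        eps_hat = eps_uncond + sum_i w_i * (eps_i - eps_uncond).
    With a single term this reduces to standard CFG.
    guidances: iterable of (eps_cond, weight) pairs."""
    out = np.array(eps_uncond, dtype=float, copy=True)
    for eps_cond, w in guidances:
        out += w * (np.asarray(eps_cond) - eps_uncond)
    return out

# Decoupled use per the abstract: a concept term for fidelity and a
# control term for controllability, weighted independently, e.g.
# eps_hat = gcfg(eps_u, [(eps_concept, 3.0), (eps_control, 1.5)])
```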