instruction | input | output |
---|---|---|
What field is the article from? | Title: Unleashing the potential of GNNs via Bi-directional Knowledge Transfer
Abstract: Based on the message-passing paradigm, a considerable amount of research has proposed diverse and impressive feature propagation mechanisms to improve the
performance of GNNs. However, less focus has been put on feature
transformation, another major operation of the message-passing framework. In
this paper, we first empirically investigate the performance of the feature
transformation operation in several typical GNNs. Unexpectedly, we notice that
GNNs do not completely free up the power of the inherent feature transformation
operation. Motivated by this observation, we propose the Bi-directional Knowledge
Transfer (BiKT), a plug-and-play approach to unleash the potential of the
feature transformation operations without modifying the original architecture.
Taking the feature transformation operation as a derived representation
learning model that shares parameters with the original GNN, the direct
prediction by this model provides topology-agnostic knowledge feedback
that can further instruct the learning of GNN and the feature transformations
therein. On this basis, BiKT not only allows us to acquire knowledge from both
the GNN and its derived model but also lets the two promote each other by injecting knowledge into one another. In addition, a theoretical analysis is further
provided to demonstrate that BiKT improves the generalization bound of the GNNs
from the perspective of domain adaptation. An extensive group of experiments on
up to 7 datasets with 5 typical GNNs demonstrates that BiKT brings a 0.5% - 4% performance gain over the original GNN, yielding a boosted GNN. Meanwhile, the derived model is also powerful enough to compete with or even surpass the original GNN, enabling us to flexibly apply it
independently to some other specific downstream tasks. | Machine Learning |
What field is the article from? | Title: @ve: A Chatbot for Latin
Abstract: Dead, extinct, and endangered languages have been preserved primarily through
audio conservation and the collection and digitization of scripts and have been
promoted through targeted language acquisition efforts. Another possibility
would be to build conversational agents that can master these languages. This
would provide an artificial, active conversational partner which has knowledge
of the vocabulary and grammar, allowing one to learn in a different way. The
chatbot @ve, with which one can communicate in Latin, was developed in
2022/2023 based on GPT-3.0. It was additionally equipped with a manually
created knowledge base. After conceptual groundwork, this paper presents the
preparation and implementation of the project. In addition, it summarizes the
test that a Latin expert conducted with the chatbot. A critical discussion
elaborates on its advantages and disadvantages. @ve could be a new tool for teaching
Latin in a memorable and entertaining way through dialogue. However, the
present implementation is still too prone to glitches for stand-alone use -
i.e., without the accompaniment of a teacher. The use of GPT-4 could be a solution, as could extending the knowledge base. In conclusion, it can
be argued that conversational agents are an innovative approach to promoting
and preserving languages. | Computational Linguistics |
What field is the article from? | Title: Fake Alignment: Are LLMs Really Aligned Well?
Abstract: The growing awareness of safety concerns in large language models (LLMs) has
sparked considerable interest in the evaluation of safety within current
research endeavors. This study investigates an interesting issue pertaining to
the evaluation of LLMs, namely the substantial discrepancy in performance
between multiple-choice questions and open-ended questions. Inspired by
research on jailbreak attack patterns, we argue this is caused by mismatched
generalization. That is, the LLM does not have a comprehensive understanding of
the complex concept of safety. Instead, it only remembers what to answer for
open-ended safety questions, which makes it unable to solve other forms of
safety tests. We refer to this phenomenon as fake alignment and construct a
comparative benchmark to empirically verify its existence in LLMs. Such fake
alignment renders previous evaluation protocols unreliable. To address this, we
introduce the Fake alIgNment Evaluation (FINE) framework and two novel
metrics--Consistency Score (CS) and Consistent Safety Score (CSS), which
jointly assess two complementary forms of evaluation to quantify fake alignment
and obtain corrected performance estimates. Applying FINE to 14 widely-used
LLMs reveals that several models with purported safety are poorly aligned in
practice. Our work highlights potential limitations in prevailing alignment
methodologies. | Computational Linguistics |
What field is the article from? | Title: AutoML for Large Capacity Modeling of Meta's Ranking Systems
Abstract: Web-scale ranking systems at Meta, serving billions of users, are complex.
Improving ranking models is essential but engineering heavy. Automated Machine
Learning (AutoML) can release engineers from the labor-intensive work of tuning ranking models; however, it is unknown whether AutoML is efficient enough to meet tight real-world production timelines while, at the same time, bringing additional
improvements to the strong baselines. Moreover, to achieve higher ranking
performance, there is an ever-increasing demand to scale up ranking models to
even larger capacity, which imposes more challenges on the efficiency. The
large scale of models and tight production schedule requires AutoML to
outperform human baselines by only using a small number of model evaluation
trials (around 100). We present a sampling-based AutoML method, focusing on
neural architecture search and hyperparameter optimization, addressing these
challenges in Meta-scale production when building large capacity models. Our
approach efficiently handles large-scale data demands. It leverages a
lightweight predictor-based searcher and reinforcement learning to explore vast
search spaces, significantly reducing the number of model evaluations. Through
experiments in large capacity modeling for CTR and CVR applications, we show
that our method achieves outstanding Return on Investment (ROI) versus human
tuned baselines, with up to 0.09% Normalized Entropy (NE) loss reduction or
$25\%$ Queries per Second (QPS) increase by sampling only one hundred models on
average from a curated search space. The proposed AutoML method has already
made real-world impact where a discovered Instagram CTR model with up to -0.36%
NE gain (over the existing production baseline) was selected for a large-scale online A/B test and showed a statistically significant gain. These production results proved the efficacy of AutoML and accelerated its adoption in ranking systems at Meta. | Information Retrieval |
What field is the article from? | Title: tagE: Enabling an Embodied Agent to Understand Human Instructions
Abstract: Natural language serves as the primary mode of communication when an
intelligent agent with a physical presence engages with human beings. While a
plethora of research focuses on natural language understanding (NLU),
encompassing endeavors such as sentiment analysis, intent prediction, question
answering, and summarization, the scope of NLU directed at situations
necessitating tangible actions by an embodied agent remains limited. The
ambiguity and incompleteness inherent in natural language present
challenges for intelligent agents striving to decipher human intention. To
tackle this predicament head-on, we introduce a novel system known as task and
argument grounding for Embodied agents (tagE). At its core, our system employs
an inventive neural network model designed to extract a series of tasks from
complex task instructions expressed in natural language. Our proposed model
adopts an encoder-decoder framework enriched with nested decoding to
effectively extract tasks and their corresponding arguments from these
intricate instructions. These extracted tasks are then mapped (or grounded) to
the robot's established collection of skills, while the arguments find
grounding in objects present within the environment. To facilitate the training
and evaluation of our system, we have curated a dataset featuring complex
instructions. The results of our experiments underscore the prowess of our
approach, as it outperforms robust baseline models. | Robotics |
What field is the article from? | Title: RSG: Fast Learning Adaptive Skills for Quadruped Robots by Skill Graph
Abstract: Developing robotic intelligent systems that can adapt quickly to unseen wild
situations is one of the critical challenges in pursuing autonomous robotics.
Although some impressive progress has been made in walking stability and skill
learning in the field of legged robots, their ability to adapt quickly is
still inferior to that of animals in nature. Animals are born with massive
skills needed to survive, and can quickly acquire new ones, by composing
fundamental skills with limited experience. Inspired by this, we propose a
novel framework, named Robot Skill Graph (RSG) for organizing massive
fundamental skills of robots and dexterously reusing them for fast adaptation.
Bearing a structure similar to the Knowledge Graph (KG), RSG is composed of
massive dynamic behavioral skills instead of static knowledge as in KG, and enables discovering implicit relations that exist between the learning context and the acquired skills of robots, serving as a starting point for understanding subtle
patterns existing in robots' skill learning. Extensive experimental results
demonstrate that RSG can provide rational skill inference upon new tasks and
environments and enable quadruped robots to adapt to new scenarios and learn
new skills rapidly. | Robotics |
What field is the article from? | Title: Dream to Adapt: Meta Reinforcement Learning by Latent Context Imagination and MDP Imagination
Abstract: Meta reinforcement learning (Meta RL) has been amply explored to quickly
learn an unseen task by transferring previously learned knowledge from similar
tasks. However, most state-of-the-art algorithms require the meta-training
tasks to have a dense coverage on the task distribution and a great amount of
data for each of them. In this paper, we propose MetaDreamer, a context-based
Meta RL algorithm that requires fewer real training tasks and less data by doing
meta-imagination and MDP-imagination. We perform meta-imagination by
interpolating on the learned latent context space with disentangled properties,
as well as MDP-imagination through the generative world model where physical
knowledge is added to plain VAE networks. Our experiments with various
benchmarks show that MetaDreamer outperforms existing approaches in data
efficiency and interpolated generalization. | Machine Learning |
What field is the article from? | Title: Pre-training with Random Orthogonal Projection Image Modeling
Abstract: Masked Image Modeling (MIM) is a powerful self-supervised strategy for visual
pre-training without the use of labels. MIM applies random crops to input
images, processes them with an encoder, and then recovers the masked inputs
with a decoder, which encourages the network to capture and learn structural
information about objects and scenes. The intermediate feature representations
obtained from MIM are suitable for fine-tuning on downstream tasks. In this
paper, we propose an Image Modeling framework based on random orthogonal
projection instead of binary masking as in MIM. Our proposed Random Orthogonal
Projection Image Modeling (ROPIM) reduces token information spatially under a guaranteed bound on the noise variance and can be considered as masking the entire spatial image area under locally varying masking degrees. Since ROPIM
uses a random subspace for the projection that realizes the masking step, the
readily available complement of the subspace can be used during unmasking to
promote recovery of removed information. In this paper, we show that using
random orthogonal projection leads to superior performance compared to
crop-based masking. We demonstrate state-of-the-art results on several popular
benchmarks. | Computer Vision |
What field is the article from? | Title: Understanding and Improving In-Context Learning on Vision-language Models
Abstract: Recently, in-context learning (ICL) on large language models (LLMs) has
received great attention, and this technique can also be applied to
vision-language models (VLMs) built upon LLMs. These VLMs can respond to
queries by conditioning responses on a series of multimodal demonstrations,
which comprise images, queries, and answers. Though ICL has been extensively
studied on LLMs, its research on VLMs remains limited. The inclusion of
additional visual information in the demonstrations motivates the following
research questions: which of the two modalities in the demonstration is more
significant? How can we select effective multimodal demonstrations to enhance
ICL performance? This study investigates the significance of both visual and
language information. Our findings indicate that ICL in VLMs is predominantly
driven by the textual information in the demonstrations, whereas the visual
information in the demonstrations barely affects the ICL performance.
Subsequently, we provide an understanding of the findings by analyzing the
model information flow and comparing model inner states given different ICL
settings. Motivated by our analysis, we propose a simple yet effective
approach, termed Mixed Modality In-Context Example Selection (MMICES), which
considers both visual and language modalities when selecting demonstrations and
shows better ICL performance. Extensive experiments are conducted to support
our findings, understanding, and improvement of the ICL performance of VLMs. | Computer Vision |
What field is the article from? | Title: Towards Calibrated Robust Fine-Tuning of Vision-Language Models
Abstract: While fine-tuning unlocks the potential of a pre-trained model for a specific
task, it compromises the model's ability to generalize to out-of-distribution
(OOD) datasets. To mitigate this, robust fine-tuning aims to ensure performance
on OOD datasets as well as on an in-distribution (ID) dataset for which the
model is being tuned. However, another criterion for reliable machine learning
(ML), confidence calibration, has been overlooked despite increasing demand for it in real-world high-stakes ML applications (e.g., autonomous driving and
medical diagnosis). For the first time, we raise concerns about the calibration
of fine-tuned vision-language models (VLMs) under distribution shift by showing
that naive fine-tuning and even state-of-the-art robust fine-tuning methods
hurt the calibration of pre-trained VLMs, especially on OOD datasets. To
address this issue, we provide a simple approach, called calibrated robust
fine-tuning (CaRot), that incentivizes calibration and robustness on both ID
and OOD datasets. Empirical results on ImageNet-1K distribution shift
evaluation verify the effectiveness of our method. | Computer Vision |
What field is the article from? | Title: Anticipating User Needs: Insights from Design Fiction on Conversational Agents for Computational Thinking
Abstract: Computational thinking, and by extension, computer programming, is
notoriously challenging to learn. Conversational agents and generative
artificial intelligence (genAI) have the potential to facilitate this learning
process by offering personalized guidance, interactive learning experiences,
and code generation. However, current genAI-based chatbots focus on
professional developers and may not adequately consider educational needs.
Involving educators in conceiving educational tools is critical for ensuring
usefulness and usability. We enlisted \numParticipants{} instructors to engage
in design fiction sessions in which we elicited the abilities that a conversational agent supported by genAI should display. Participants envisioned a
conversational agent that guides students stepwise through exercises, tuning
its method of guidance with an awareness of the educational background, skills
and deficits, and learning preferences. The insights obtained in this paper can
guide future implementations of tutoring conversational agents oriented toward
teaching computational thinking and computer programming. | Human-Computer Interaction |
What field is the article from? | Title: Post-Training Quantization for Re-parameterization via Coarse & Fine Weight Splitting
Abstract: Although neural networks have made remarkable advancements in various
applications, they require substantial computational and memory resources.
Network quantization is a powerful technique to compress neural networks,
allowing for more efficient and scalable AI deployments. Recently,
Re-parameterization has emerged as a promising technique to enhance model
performance while simultaneously alleviating the computational burden in
various computer vision tasks. However, the accuracy drops significantly when
applying quantization on the re-parameterized networks. We identify that the
primary challenge arises from the large variation in weight distribution across
the original branches. To address this issue, we propose a coarse & fine weight
splitting (CFWS) method to reduce the weight quantization error, and develop an
improved KL metric to determine optimal quantization scales for activation. To
the best of our knowledge, our approach is the first work that enables
post-training quantization applicable on re-parameterized networks. For
example, the quantized RepVGG-A1 model exhibits a mere 0.3% accuracy loss. The
code is available at https://github.com/NeonHo/Coarse-Fine-Weight-Split.git | Computer Vision |
What field is the article from? | Title: Global Transformer Architecture for Indoor Room Temperature Forecasting
Abstract: A thorough regulation of building energy systems translates into significant energy savings and better comfort for the occupants. Algorithms to predict
the thermal state of a building on a certain time horizon with a good
confidence are essential for the implementation of effective control systems.
This work presents a global Transformer architecture for indoor temperature
forecasting in multi-room buildings, aiming at optimizing energy consumption
and reducing greenhouse gas emissions associated with HVAC systems. Recent
advancements in deep learning have enabled the development of more
sophisticated forecasting models compared to traditional feedback control
systems. The proposed global Transformer architecture can be trained on the
entire dataset encompassing all rooms, eliminating the need for multiple
room-specific models, significantly improving predictive performance, and
simplifying deployment and maintenance. Notably, this study is the first to
apply a Transformer architecture for indoor temperature forecasting in
multi-room buildings. The proposed approach provides a novel solution to
enhance the accuracy and efficiency of temperature forecasting, serving as a
valuable tool to optimize energy consumption and decrease greenhouse gas
emissions in the building sector. | Machine Learning |
What field is the article from? | Title: Vital Sign Forecasting for Sepsis Patients in ICUs
Abstract: Sepsis and septic shock are critical medical conditions affecting millions
globally, with a substantial mortality rate. This paper uses state-of-the-art
deep learning (DL) architectures to introduce a multi-step forecasting system
to predict vital signs indicative of septic shock progression in Intensive Care
Units (ICUs). Our approach utilizes a short window of historical vital sign
data to forecast future physiological conditions. We introduce a DL-based vital
sign forecasting system that predicts up to 3 hours of future vital signs from
6 hours of past data. We further adopt the DILATE loss function to capture
better the shape and temporal dynamics of vital signs, which are critical for
clinical decision-making. We compare three DL models, N-BEATS, N-HiTS, and
Temporal Fusion Transformer (TFT), using the publicly available eICU
Collaborative Research Database (eICU-CRD), highlighting their forecasting
capabilities in a critical care setting. We evaluate the performance of our
models using mean squared error (MSE) and dynamic time warping (DTW) metrics.
Our findings show that while TFT excels in capturing overall trends, N-HiTS is
superior in retaining short-term fluctuations within a predefined range. This
paper demonstrates the potential of deep learning in transforming the
monitoring systems in ICUs, potentially leading to significant improvements in
patient care and outcomes by accurately forecasting vital signs to assist
healthcare providers in detecting early signs of physiological instability and
anticipating septic shock. | Machine Learning |
What field is the article from? | Title: Dataset Distillation in Large Data Era
Abstract: Dataset distillation aims to generate a smaller but representative subset
from a large dataset, which allows a model to be trained efficiently, meanwhile
evaluating on the original testing data distribution to achieve decent
performance. Many prior works have aimed to align with diverse aspects of the
original datasets, such as matching the training weight trajectories, gradient,
feature/BatchNorm distributions, etc. In this work, we show how to distill
various large-scale datasets such as full ImageNet-1K/21K under a conventional
input resolution of 224$\times$224 to achieve the best accuracy over all
previous approaches, including SRe$^2$L, TESLA and MTT. To achieve this, we
introduce a simple yet effective ${\bf C}$urriculum ${\bf D}$ata ${\bf
A}$ugmentation ($\texttt{CDA}$) during data synthesis that achieves accuracies of 63.2% on large-scale ImageNet-1K under IPC (Images Per Class) 50 and 36.1% on ImageNet-21K under IPC 20, respectively. Finally, we show that, by integrating all
our enhancements together, the proposed model beats the current
state-of-the-art by more than 4% Top-1 accuracy on ImageNet-1K/21K and for the
first time, reduces the gap to its full-data training counterpart to less than
absolute 15%. Moreover, this work represents the inaugural success in dataset
distillation on larger-scale ImageNet-21K under the standard 224$\times$224
resolution. Our code and distilled ImageNet-21K dataset of 20 IPC, 2K recovery
budget are available at https://github.com/VILA-Lab/SRe2L/tree/main/CDA. | Computer Vision |
What field is the article from? | Title: Low-Rank MDPs with Continuous Action Spaces
Abstract: Low-Rank Markov Decision Processes (MDPs) have recently emerged as a
promising framework within the domain of reinforcement learning (RL), as they
allow for probably approximately correct (PAC) learning guarantees while also
incorporating ML algorithms for representation learning. However, current
methods for low-rank MDPs are limited in that they only consider finite action
spaces, and give vacuous bounds as $|\mathcal{A}| \to \infty$, which greatly
limits their applicability. In this work, we study the problem of extending
such methods to settings with continuous actions, and explore multiple concrete
approaches for performing this extension. As a case study, we consider the
seminal FLAMBE algorithm (Agarwal et al., 2020), which is a reward-agnostic
method for PAC RL with low-rank MDPs. We show that, without any modifications
to the algorithm, we obtain a similar PAC bound when actions are allowed to be
continuous. Specifically, when the model for transition functions satisfies a
Hölder smoothness condition w.r.t. actions, and either the policy class has a uniformly bounded minimum density or the reward function is also Hölder smooth,
we obtain a polynomial PAC bound that depends on the order of smoothness. | Machine Learning |
What field is the article from? | Title: CreoleVal: Multilingual Multitask Benchmarks for Creoles
Abstract: Creoles represent an under-explored and marginalized group of languages, with
few available resources for NLP research. While the genealogical ties between
Creoles and other highly-resourced languages imply a significant potential for
transfer learning, this potential is hampered by the lack of annotated
data. In this work we present CreoleVal, a collection of benchmark datasets
spanning 8 different NLP tasks, covering up to 28 Creole languages; it is an
aggregate of brand new development datasets for machine comprehension, relation
classification, and machine translation for Creoles, in addition to a practical
gateway to a handful of preexisting benchmarks. For each benchmark, we conduct
baseline experiments in a zero-shot setting in order to further ascertain the
capabilities and limitations of transfer learning for Creoles. Ultimately, the
goal of CreoleVal is to empower research on Creoles in NLP and computational
linguistics. We hope this resource will contribute to technological inclusion
for Creole language users around the globe. | Computational Linguistics |
What field is the article from? | Title: Beyond Expected Return: Accounting for Policy Reproducibility when Evaluating Reinforcement Learning Algorithms
Abstract: Many applications in Reinforcement Learning (RL) usually have noise or
stochasticity present in the environment. Beyond their impact on learning,
these uncertainties lead the exact same policy to perform differently, i.e.
yield different return, from one roll-out to another. Common evaluation
procedures in RL summarise the consequent return distributions using solely the
expected return, which does not account for the spread of the distribution. Our
work defines this spread as the policy reproducibility: the ability of a policy
to obtain similar performance when rolled out many times, a crucial property in
some real-world applications. We highlight that existing procedures that only
use the expected return are limited on two fronts: first, an infinite number of
return distributions with a wide range of performance-reproducibility
trade-offs can have the same expected return, limiting its effectiveness when
used for comparing policies; second, the expected return metric does not leave
any room for practitioners to choose the best trade-off value for considered
applications. In this work, we address these limitations by recommending the
use of Lower Confidence Bound, a metric taken from Bayesian optimisation that
provides the user with a preference parameter to choose a desired
performance-reproducibility trade-off. We also formalise and quantify policy
reproducibility, and demonstrate the benefit of our metrics using extensive
experiments of popular RL algorithms on common uncertain RL tasks. | Machine Learning |
What field is the article from? | Title: Studying Artist Sentiments around AI-generated Artwork
Abstract: Art created using generative Artificial Intelligence has taken the world by
storm and generated excitement for many digital creators and technologists.
However, the reception and reaction from artists have been mixed. Concerns
about plagiarizing their artworks and styles for datasets and uncertainty
around the future of digital art sparked movements in artist communities
shunning the use of AI for generating art and protecting artists' rights.
Collaborating with these tools for novel creative use cases also sparked hope
from some creators. Artists are an integral stakeholder in the rapidly evolving
digital creativity industry and understanding their concerns and hopes inform
responsible development and use of creativity support tools. In this work, we
study artists' sentiments about AI-generated art. We interviewed 7 artists and
analyzed public posts from artists on social media platforms Reddit, Twitter
and Artstation. We report artists' main concerns and hopes around AI-generated
artwork, informing a way forward for inclusive development of these tools. | Human-Computer Interaction |
What field is the article from? | Title: CLIP-Motion: Learning Reward Functions for Robotic Actions Using Consecutive Observations
Abstract: This paper presents a novel method for learning reward functions for robotic
motions by harnessing the power of a CLIP-based model. Traditional reward
function design often hinges on manual feature engineering, which can struggle
to generalize across an array of tasks. Our approach circumvents this challenge
by capitalizing on CLIP's capability to process both state features and image
inputs effectively. Given a pair of consecutive observations, our model excels
in identifying the motion executed between them. We showcase results spanning
various robotic activities, such as directing a gripper to a designated target
and adjusting the position of a cube. Through experimental evaluations, we
underline the proficiency of our method in precisely deducing motion and its
promise to enhance reinforcement learning training in the realm of robotics. | Robotics |
What field is the article from? | Title: Spatial Knowledge-Infused Hierarchical Learning: An Application in Flood Mapping on Earth Imagery
Abstract: Deep learning for Earth imagery plays an increasingly important role in
geoscience applications such as agriculture, ecology, and natural disaster
management. Still, progress is often hindered by the limited training labels.
Given Earth imagery with limited training labels, a base deep neural network
model, and a spatial knowledge base with label constraints, our problem is to
infer the full labels while training the neural network. The problem is
challenging due to the sparse and noisy input labels, spatial uncertainty
within the label inference process, and high computational costs associated
with a large number of sample locations. Existing works on neuro-symbolic
models focus on integrating symbolic logic into neural networks (e.g., loss
function, model architecture, and training label augmentation), but these
methods do not fully address the challenges of spatial data (e.g., spatial
uncertainty, the trade-off between spatial granularity and computational
costs). To bridge this gap, we propose a novel Spatial Knowledge-Infused
Hierarchical Learning (SKI-HL) framework that iteratively infers sample labels
within a multi-resolution hierarchy. Our framework consists of a module to
selectively infer labels in different resolutions based on spatial uncertainty
and a module to train neural network parameters with uncertainty-aware
multi-instance learning. Extensive experiments on real-world flood mapping
datasets show that the proposed model outperforms several baseline methods. The
code is available at \url{https://github.com/ZelinXu2000/SKI-HL}. | Artificial Intelligence |
What field is the article from? | Title: MemoryCompanion: A Smart Healthcare Solution to Empower Efficient Alzheimer's Care Via Unleashing Generative AI
Abstract: With the rise of Large Language Models (LLMs), notably characterized by GPT
frameworks, there emerges a catalyst for novel healthcare applications. Earlier
iterations of chatbot caregivers, though existent, have yet to achieve a
dimension of human-like authenticity. This paper unveils `MemoryCompanion', a
pioneering digital health solution explicitly tailored for Alzheimer's disease
(AD) patients and their caregivers. Drawing upon the nuances of GPT technology
and prompt engineering, MemoryCompanion manifests a personalized caregiving
paradigm, fostering interactions via voice-cloning and talking-face mechanisms
that resonate with the familiarity of known companions. Using advanced
prompt-engineering, the system intricately adapts to each patient's distinct
profile, curating its content and communication style accordingly. This
approach strives to counteract prevalent issues of social isolation and
loneliness frequently observed in AD demographics. Our methodology, grounded in
its innovative design, addresses both the caregiving and technological
challenges intrinsic to this domain. | Computational Linguistics |
What field is the article from? | Title: Efficient Data Learning for Open Information Extraction with Pre-trained Language Models
Abstract: Open Information Extraction (OpenIE) is a fundamental yet challenging task in
Natural Language Processing, which involves extracting all triples (subject,
predicate, object) from a given sentence. While labeling-based methods have
their merits, generation-based techniques offer unique advantages, such as the
ability to generate tokens not present in the original sentence. However, these
generation-based methods often require a significant amount of training data to
learn the task form of OpenIE and substantial training time to overcome slow
model convergence due to the order penalty. In this paper, we introduce a novel
framework, OK-IE, that ingeniously transforms the task form of OpenIE into the
pre-training task form of the T5 model, thereby reducing the need for extensive
training data. Furthermore, we introduce an innovative concept of Anchor to
control the sequence of model outputs, effectively eliminating the impact of
order penalty on model convergence and significantly reducing training time.
Experimental results indicate that, compared to previous SOTA methods, OK-IE
requires only 1/100 of the training data (900 instances) and 1/120 of the
training time (3 minutes) to achieve comparable results. | Computational Linguistics |
What field is the article from? | Title: Efficient Causal Discovery for Robotics Applications
Abstract: Using robots for automating tasks in environments shared with humans, such as
warehouses, shopping centres, or hospitals, requires these robots to comprehend
the fundamental physical interactions among nearby agents and objects.
Specifically, creating models to represent cause-and-effect relationships among
these elements can aid in predicting unforeseen human behaviours and anticipate
the outcome of particular robot actions. To be suitable for robots, causal
analysis must be both fast and accurate, meeting real-time demands under the limited computational resources typical of most robotics applications. In this
paper, we present a practical demonstration of our approach for fast and
accurate causal analysis, known as Filtered PCMCI (F-PCMCI), along with a
real-world robotics application. The provided application illustrates how our
F-PCMCI can accurately and promptly reconstruct the causal model of a
human-robot interaction scenario, which can then be leveraged to enhance the
quality of the interaction. | Robotics |
What field is the article from? | Title: A Comprehensive Study of Vision Transformers in Image Classification Tasks
Abstract: Image Classification is a fundamental task in computer vision that frequently serves as a benchmark for gauging advancements in the field. Over the past few years, significant progress has been made in image
classification due to the emergence of deep learning. However, challenges still
exist, such as modeling fine-grained visual information, high computation
costs, the parallelism of the model, and inconsistent evaluation protocols
across datasets. In this paper, we conduct a comprehensive survey of existing
papers on Vision Transformers for image classification. We first introduce the
popular image classification datasets that influenced the design of models.
Then, we present Vision Transformers models in chronological order, starting
with early attempts at adapting the attention mechanism to vision tasks, followed by
the adoption of vision transformers, as they have demonstrated success in
capturing intricate patterns and long-range dependencies within images.
Finally, we discuss open problems and shed light on opportunities for image
classification to facilitate new research ideas. | Computer Vision |
What field is the article from? | Title: Interaction is all You Need? A Study of Robots' Ability to Understand and Execute
Abstract: This paper aims to address a critical challenge in robotics, which is
enabling them to operate seamlessly in human environments through natural
language interactions. Our primary focus is to equip robots with the ability to
understand and execute complex instructions in coherent dialogs to facilitate
intricate task-solving scenarios. To explore this, we build upon the Execution
from Dialog History (EDH) task from the Teach benchmark. We employ a
multi-transformer model with BART LM. We observe that our best configuration
outperforms the baseline with a success rate score of 8.85 and a
goal-conditioned success rate score of 14.02. In addition, we suggest an
alternative methodology for completing this task. Moreover, we introduce a new
task by expanding the EDH task and making predictions about game plans instead
of individual actions. We have evaluated multiple BART models and an LLaMA2
LLM, which achieved a ROUGE-L score of 46.77 for this task. | Robotics |
What field is the article from? | Title: Characterizing Large Language Models as Rationalizers of Knowledge-intensive Tasks
Abstract: Large language models (LLMs) are proficient at generating fluent text with
minimal task-specific supervision. Yet, their ability to provide well-grounded
rationalizations for knowledge-intensive tasks remains under-explored. Such
tasks, like commonsense multiple-choice questions, require rationales based on
world knowledge to support predictions and refute alternate options. We
consider the task of generating knowledge-guided rationalization in natural
language by using expert-written examples in a few-shot manner. Surprisingly,
crowd-workers preferred knowledge-grounded rationales over crowdsourced
rationalizations, citing their factuality, sufficiency, and comprehensive
refutations. Although LLM-generated rationales were preferable, further
improvements in conciseness and novelty are required. In another study, we show
how rationalization of incorrect model predictions erodes humans' trust in
LLM-generated rationales. Motivated by these observations, we create a
two-stage pipeline to review task predictions and eliminate potential incorrect
decisions before rationalization, enabling trustworthy rationale generation. | Computational Linguistics |
What field is the article from? | Title: Noise in Relation Classification Dataset TACRED: Characterization and Reduction
Abstract: The overarching objective of this paper is two-fold. First, to explore
model-based approaches to characterize the primary cause of the noise in the RE dataset TACRED. Second, to identify the potentially noisy instances. Towards
the first objective, we analyze predictions and performance of state-of-the-art
(SOTA) models to identify the root cause of noise in the dataset. Our analysis
of TACRED shows that the majority of the noise in the dataset originates from
the instances labeled as no-relation which are negative examples. For the
second objective, we explore two nearest-neighbor-based strategies to
automatically identify potentially noisy examples for elimination and
reannotation. Our first strategy, referred to as Intrinsic Strategy (IS), is
based on the assumption that positive examples are clean. Thus, we have used
false-negative predictions to identify noisy negative examples. Our second approach, referred to as Extrinsic Strategy (ES), is based on using a clean
subset of the dataset to identify potentially noisy negative examples. Finally,
we retrained the SOTA models on the eliminated and reannotated dataset. Our
empirical results based on two SOTA models trained on TACRED-E following the IS
show an average 4% F1-score improvement, whereas reannotation (TACRED-R) does
not improve the original results. However, following ES, SOTA models show average F1-score improvements of 3.8% and 4.4% when trained on the eliminated (TACRED-EN) and reannotated (TACRED-RN) datasets, respectively. We
further extended the ES for cleaning positive examples as well, which resulted
in average performance improvements of 5.8% and 5.6% for the eliminated (TACRED-ENP) and reannotated (TACRED-RNP) datasets, respectively. | Computational Linguistics |
What field is the article from? | Title: Complex Organ Mask Guided Radiology Report Generation
Abstract: The goal of automatic report generation is to generate a clinically accurate
and coherent phrase from a single given X-ray image, which could alleviate the
workload of traditional radiology reporting. However, in a real-world scenario,
radiologists frequently face the challenge of producing extensive reports
derived from numerous medical images; therefore, medical report generation from a multi-image perspective is needed. In this paper, we propose the Complex Organ
Mask Guided (termed as COMG) report generation model, which incorporates masks
from multiple organs (e.g., bones, lungs, heart, and mediastinum), to provide
more detailed information and guide the model's attention to these crucial body
regions. Specifically, we leverage prior knowledge of the disease corresponding
to each organ in the fusion process to enhance the disease identification phase
during the report generation process. Additionally, a cosine similarity loss is introduced as the target function to ensure the convergence of cross-modal consistency and facilitate model optimization. Experimental results on two public datasets show that COMG achieves 11.4% and 9.7% improvements in terms
of BLEU@4 scores over the SOTA model KiUT on IU-Xray and MIMIC, respectively.
The code is publicly available at https://github.com/GaryGuTC/COMG_model. | Computer Vision |
What field is the article from? | Title: Fuse to Forget: Bias Reduction and Selective Memorization through Model Fusion
Abstract: Model fusion research aims to aggregate the knowledge of multiple models to
enhance performance by combining their weights. In this work, we study the
inverse, investigating whether and how model fusion can interfere with and reduce
unwanted knowledge. We delve into the effects of model fusion on the evolution
of learned shortcuts, social biases, and memorization capabilities in
fine-tuned language models. Through several experiments covering text
classification and generation tasks, our analysis highlights that shared
knowledge among models is usually enhanced during model fusion, while unshared
knowledge is usually lost or forgotten. Based on this observation, we
demonstrate the potential of model fusion as a debiasing tool and showcase its
efficacy in addressing privacy concerns associated with language models. | Computational Linguistics |
What field is the article from? | Title: Enhancing Vehicle Entrance and Parking Management: Deep Learning Solutions for Efficiency and Security
Abstract: The auto-management of vehicle entrance and parking in any organization is a
complex challenge encompassing record-keeping, efficiency, and security
concerns. Manual methods for tracking vehicles and finding parking spaces are
slow and time-consuming. To solve the problem of auto-management of vehicle
entrance and parking, we have utilized state-of-the-art deep learning models
and automated the process of vehicle entrance and parking into any
organization. To ensure security, our system integrated vehicle detection,
license number plate verification, and face detection and recognition models to
ensure that the person and vehicle are registered with the organization. We
have trained multiple deep-learning models for vehicle detection, license
number plate detection, face detection, and recognition; however, the YOLOv8n model outperformed all the other models. Furthermore, license plate recognition
is facilitated by Google's Tesseract-OCR Engine. By integrating these
technologies, the system offers efficient vehicle detection, precise
identification, streamlined record keeping, and optimized parking slot
allocation in buildings, thereby enhancing convenience, accuracy, and security.
Future research opportunities lie in fine-tuning system performance for a wide
range of real-world applications. | Computer Vision |
What field is the article from? | Title: Modifying RL Policies with Imagined Actions: How Predictable Policies Can Enable Users to Perform Novel Tasks
Abstract: It is crucial that users are empowered to use the functionalities of a robot
to creatively solve problems on the fly. A user who has access to a
Reinforcement Learning (RL) based robot may want to use the robot's autonomy
and their knowledge of its behavior to complete new tasks. One way is for the
user to take control of some of the robot's action space through teleoperation
while the RL policy simultaneously controls the rest. However, an
out-of-the-box RL policy may not readily facilitate this. For example, a user's
control may bring the robot into a failure state from the policy's perspective,
causing it to act in a way the user is not familiar with, hindering the success
of the user's desired task. In this work, we formalize this problem and present
Imaginary Out-of-Distribution Actions, IODA, an initial algorithm for
addressing that problem and empowering users to leverage their expectations of
a robot's behavior to accomplish new tasks. | Robotics |
What field is the article from? | Title: Explainable Artificial Intelligence (XAI) 2.0: A Manifesto of Open Challenges and Interdisciplinary Research Directions
Abstract: As systems based on opaque Artificial Intelligence (AI) continue to flourish
in diverse real-world applications, understanding these black box models has
become paramount. In response, Explainable AI (XAI) has emerged as a field of
research with practical and ethical benefits across various domains. This paper
not only highlights the advancements in XAI and its application in real-world
scenarios but also addresses the ongoing challenges within XAI, emphasizing the
need for broader perspectives and collaborative efforts. We bring together
experts from diverse fields to identify open problems, striving to synchronize
research agendas and accelerate XAI in practical applications. By fostering
collaborative discussion and interdisciplinary cooperation, we aim to propel
XAI forward, contributing to its continued success. Our goal is to put forward
a comprehensive proposal for advancing XAI. To achieve this goal, we present a
manifesto of 27 open problems categorized into nine categories. These
challenges encapsulate the complexities and nuances of XAI and offer a road map
for future research. For each problem, we provide promising research directions
in the hope of harnessing the collective intelligence of interested
stakeholders. | Artificial Intelligence |
What field is the article from? | Title: A Multifidelity Sim-to-Real Pipeline for Verifiable and Compositional Reinforcement Learning
Abstract: We propose and demonstrate a compositional framework for training and
verifying reinforcement learning (RL) systems within a multifidelity
sim-to-real pipeline, in order to deploy reliable and adaptable RL policies on
physical hardware. By decomposing complex robotic tasks into component subtasks
and defining mathematical interfaces between them, the framework allows for the
independent training and testing of the corresponding subtask policies, while
simultaneously providing guarantees on the overall behavior that results from
their composition. By verifying the performance of these subtask policies using
a multifidelity simulation pipeline, the framework not only allows for
efficient RL training, but also for a refinement of the subtasks and their
interfaces in response to challenges arising from discrepancies between
simulation and reality. In an experimental case study we apply the framework to
train and deploy a compositional RL system that successfully pilots a Warthog
unmanned ground robot. | Robotics |
What field is the article from? | Title: RelVAE: Generative Pretraining for few-shot Visual Relationship Detection
Abstract: Visual relations are complex, multimodal concepts that play an important role
in the way humans perceive the world. As a result of their complexity,
high-quality, diverse, and large-scale datasets for visual relations are still
absent. In an attempt to overcome this data barrier, we choose to focus on the
problem of few-shot Visual Relationship Detection (VRD), a setting that has
so far been neglected by the community. In this work, we present the first
pretraining method for few-shot predicate classification that does not require
any annotated relations. We achieve this by introducing a generative model that
is able to capture the variation of semantic, visual and spatial information of
relations inside a latent space and later exploiting its representations in
order to achieve efficient few-shot classification. We construct few-shot
training splits and show quantitative experiments on VG200 and VRD datasets
where our model outperforms the baselines. Lastly we attempt to interpret the
decisions of the model by conducting various qualitative experiments. | Computer Vision |
What field is the article from? | Title: Pearl: A Production-ready Reinforcement Learning Agent
Abstract: Reinforcement Learning (RL) offers a versatile framework for achieving
long-term goals. Its generality allows us to formalize a wide range of problems
that real-world intelligent systems encounter, such as dealing with delayed
rewards, handling partial observability, addressing the exploration and
exploitation dilemma, utilizing offline data to improve online performance, and
ensuring safety constraints are met. Despite considerable progress made by the
RL research community in addressing these issues, existing open-source RL
libraries tend to focus on a narrow portion of the RL solution pipeline,
leaving other aspects largely unattended. This paper introduces Pearl, a
Production-ready RL agent software package explicitly designed to embrace these
challenges in a modular fashion. In addition to presenting preliminary
benchmark results, this paper highlights Pearl's industry adoptions to
demonstrate its readiness for production usage. Pearl is open sourced on Github
at github.com/facebookresearch/pearl and its official website is located at
pearlagent.github.io. | Machine Learning |
What field is the article from? | Title: Weakly-Supervised Audio-Visual Segmentation
Abstract: Audio-visual segmentation is a challenging task that aims to predict
pixel-level masks for sound sources in a video. Previous work applied a
comprehensive manually designed architecture with countless pixel-wise accurate
masks as supervision. However, these pixel-level masks are expensive and not
available in all cases. In this work, we aim to simplify the supervision to instance-level annotation, i.e., weakly-supervised audio-visual segmentation.
We present a novel Weakly-Supervised Audio-Visual Segmentation framework,
namely WS-AVS, that can learn multi-scale audio-visual alignment with
multi-scale multiple-instance contrastive learning for audio-visual
segmentation. Extensive experiments on AVSBench demonstrate the effectiveness
of our WS-AVS in the weakly-supervised audio-visual segmentation of
single-source and multi-source scenarios. | Computer Vision |
What field is the article from? | Title: Alignment and Outer Shell Isotropy for Hyperbolic Graph Contrastive Learning
Abstract: Learning good self-supervised graph representations that are beneficial to
downstream tasks is challenging. Among a variety of methods, contrastive
learning enjoys competitive performance. The embeddings of contrastive learning
are arranged on a hypersphere that enables the Cosine distance measurement in
the Euclidean space. However, the underlying structure of many domains such as
graphs exhibits highly non-Euclidean latent geometry. To this end, we propose a
novel contrastive learning framework to learn high-quality graph embeddings. Specifically, we design an alignment metric that effectively captures the hierarchical data-invariant information, and we propose a substitute for the uniformity metric to prevent the so-called dimensional collapse. We show that
in the hyperbolic space one has to address the leaf- and height-level
uniformity which are related to properties of trees, whereas in the ambient
space of the hyperbolic manifold, these notions translate into imposing an
isotropic ring density towards boundaries of Poincar\'e ball. This ring density
can be easily imposed by promoting the isotropic feature distribution on the
tangent space of manifold. In the experiments, we demonstrate the efficacy of
our proposed method across different hyperbolic graph embedding techniques in
both supervised and self-supervised learning settings. | Machine Learning |
What field is the article from? | Title: A Stability Principle for Learning under Non-Stationarity
Abstract: We develop a versatile framework for statistical learning in non-stationary
environments. In each time period, our approach applies a stability principle
to select a look-back window that maximizes the utilization of historical data
while keeping the cumulative bias within an acceptable range relative to the
stochastic error. Our theory showcases the adaptability of this approach to
unknown non-stationarity. The regret bound is minimax optimal up to logarithmic
factors when the population losses are strongly convex, or Lipschitz only. At
the heart of our analysis lie two novel components: a measure of similarity
between functions and a segmentation technique for dividing the non-stationary
data sequence into quasi-stationary pieces. | Machine Learning |
What field is the article from? | Title: Analysis of the User Perception of Chatbots in Education Using A Partial Least Squares Structural Equation Modeling Approach
Abstract: The integration of Artificial Intelligence (AI) into education is a recent
development, with chatbots emerging as a noteworthy addition to this
transformative landscape. As online learning platforms rapidly advance,
students need to adapt swiftly to excel in this dynamic environment.
Consequently, understanding the acceptance of chatbots, particularly those
employing Large Language Models (LLMs) such as Chat Generative Pretrained
Transformer (ChatGPT), Google Bard, and other interactive AI technologies, is
of paramount importance. However, existing research on chatbots in education
has overlooked key behavior-related aspects, such as Optimism, Innovativeness,
Discomfort, Insecurity, Transparency, Ethics, Interaction, Engagement, and
Accuracy, creating a significant literature gap. To address this gap, this
study employs Partial Least Squares Structural Equation Modeling (PLS-SEM) to
investigate the determinants of chatbot adoption in education among students,
considering the Technology Readiness Index (TRI) and Technology Acceptance
Model (TAM). Utilizing a five-point Likert scale for data collection, we
gathered a total of 185 responses, which were analyzed using R-Studio software.
We established 12 hypotheses to achieve the study's objectives. The results showed that
Optimism and Innovativeness are positively associated with Perceived Ease of
Use (PEOU) and Perceived Usefulness (PU). Conversely, Discomfort and Insecurity
negatively impact PEOU, with only Insecurity negatively affecting PU. These
findings provide insights for future technology designers, elucidating critical
user behavior factors influencing chatbot adoption and utilization in
educational contexts. | Human-Computer Interaction |
What field is the article from? | Title: Grounding Everything: Emerging Localization Properties in Vision-Language Transformers
Abstract: Vision-language foundation models have shown remarkable performance in
various zero-shot settings such as image retrieval, classification, or
captioning. But so far, those models seem to fall behind when it comes to
zero-shot localization of referential expressions and objects in images. As a
result, they need to be fine-tuned for this task. In this paper, we show that
pretrained vision-language (VL) models allow for zero-shot open-vocabulary
object localization without any fine-tuning. To leverage those capabilities, we
propose a Grounding Everything Module (GEM) that generalizes the idea of
value-value attention introduced by CLIPSurgery to a self-self attention path.
We show that the concept of self-self attention corresponds to clustering, thus
enforcing groups of tokens arising from the same object to be similar while
preserving the alignment with the language space. To further guide the group
formation, we propose a set of regularizations that allows the model to finally
generalize across datasets and backbones. We evaluate the proposed GEM
framework on various benchmark tasks and datasets for semantic segmentation. The
results show that GEM not only outperforms other training-free open-vocabulary
localization methods, but also achieves state-of-the-art results on the
recently proposed OpenImagesV7 large-scale segmentation benchmark. | Computer Vision |
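As a minimal sketch of the self-self attention idea described above (assuming a single head and one shared projection; the full GEM module adds normalization, iteration, and the regularizations mentioned in the abstract, all omitted here):

```python
import torch
import torch.nn.functional as F

def self_self_attention(x: torch.Tensor, proj: torch.nn.Linear) -> torch.Tensor:
    """Self-self attention sketch: queries and keys come from the SAME
    projection, so each token attends to tokens similar to itself, which
    behaves like a soft clustering of tokens belonging to one object."""
    s = proj(x)                                              # (batch, tokens, dim)
    attn = F.softmax(s @ s.transpose(-2, -1) / s.shape[-1] ** 0.5, dim=-1)
    return attn @ s                                          # aggregate within clusters
```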
What field is the article from? | Title: Leveraging generative artificial intelligence to simulate student learning behavior
Abstract: Student simulation presents a transformative approach to enhance learning
outcomes, advance educational research, and ultimately shape the future of
effective pedagogy. We explore the feasibility of using large language models
(LLMs), a remarkable achievement in AI, to simulate student learning behaviors.
Unlike conventional machine learning based prediction, we leverage LLMs to
instantiate virtual students with specific demographics and uncover intricate
correlations among learning experiences, course materials, understanding
levels, and engagement. Our objective is not merely to predict learning
outcomes but to replicate learning behaviors and patterns of real students. We
validate this hypothesis through three experiments. The first experiment, based
on a dataset of N = 145, simulates student learning outcomes from demographic
data, revealing parallels with actual students concerning various demographic
factors. The second experiment (N = 4524) results in increasingly realistic
simulated behaviors with more assessment history for virtual students
modelling. The third experiment (N = 27), incorporating prior knowledge and
course interactions, indicates a strong link between virtual students' learning
behaviors and fine-grained mappings from test questions, course materials,
engagement and understanding levels. Collectively, these findings deepen our
understanding of LLMs and demonstrate their viability for student simulation,
empowering more adaptable curriculum design to enhance inclusivity and
educational effectiveness. | Artificial Intelligence |
What field is the article from? | Title: Foundational Moral Values for AI Alignment
Abstract: Solving the AI alignment problem requires having clear, defensible values
towards which AI systems can align. Currently, targets for alignment remain
underspecified and do not seem to be built from a philosophically robust
structure. We begin the discussion of this problem by presenting five core,
foundational values, drawn from moral philosophy and built on the requisites
for human existence: survival, sustainable intergenerational existence,
society, education, and truth. We show that these values not only provide a
clearer direction for technical alignment work, but also serve as a framework
to highlight threats and opportunities from AI systems to both obtain and
sustain these values. | Computers and Society |
What field is the article from? | Title: General Policies, Subgoal Structure, and Planning Width
Abstract: It has been observed that many classical planning domains with atomic goals
can be solved by means of a simple polynomial exploration procedure, called IW,
that runs in time exponential in the problem width, which in these cases is
bounded and small. Yet, while the notion of width has become part of
state-of-the-art planning algorithms such as BFWS, there is no good explanation
for why so many benchmark domains have bounded width when atomic goals are
considered. In this work, we address this question by relating bounded width
with the existence of general optimal policies that in each planning instance
are represented by tuples of atoms of bounded size. We also define the notions
of (explicit) serializations and serialized width that have a broader scope as
many domains have a bounded serialized width but no bounded width. Such
problems are solved non-optimally in polynomial time by a suitable variant of
the Serialized IW algorithm. Finally, the language of general policies and the
semantics of serializations are combined to yield a simple, meaningful, and
expressive language for specifying serializations in compact form as sketches,
which can be used for encoding domain control knowledge by hand or
for learning it from small examples. Sketches express general problem
decompositions in terms of subgoals, and sketches of bounded width express
problem decompositions that can be solved in polynomial time. | Artificial Intelligence |
What field is the article from? | Title: EQ-Bench: An Emotional Intelligence Benchmark for Large Language Models
Abstract: We introduce EQ-Bench, a novel benchmark designed to evaluate aspects of
emotional intelligence in Large Language Models (LLMs). We assess the ability
of LLMs to understand complex emotions and social interactions by asking them
to predict the intensity of emotional states of characters in a dialogue. The
benchmark is able to discriminate effectively between a wide range of models.
We find that EQ-Bench correlates strongly with comprehensive multi-domain
benchmarks like MMLU (Hendrycks et al., 2020) (r=0.97), indicating that we may
be capturing similar aspects of broad intelligence. Our benchmark produces
highly repeatable results using a set of 60 English-language questions. We also
provide open-source code for an automated benchmarking pipeline at
https://github.com/EQ-bench/EQ-Bench and a leaderboard at
https://www.eqbench.com | Computational Linguistics |
What field is the article from? | Title: LLaVA-Plus: Learning to Use Tools for Creating Multimodal Agents
Abstract: LLaVA-Plus is a general-purpose multimodal assistant that expands the
capabilities of large multimodal models. It maintains a skill repository of
pre-trained vision and vision-language models and can activate relevant tools
based on users' inputs to fulfill real-world tasks. LLaVA-Plus is trained on
multimodal instruction-following data to acquire the ability to use tools,
covering visual understanding, generation, external knowledge retrieval, and
compositions. Empirical results show that LLaVA-Plus outperforms LLaVA in
existing capabilities and exhibits new ones. It is distinct in that the image
query is directly grounded and actively engaged throughout the entire human-AI
interaction sessions, significantly improving tool use performance and enabling
new scenarios. | Computer Vision |
What field is the article from? | Title: An advantage based policy transfer algorithm for reinforcement learning with metrics of transferability
Abstract: Reinforcement learning (RL) can enable sequential decision-making in complex
and high-dimensional environments if the acquisition of a new state-action pair
is efficient, i.e., when interaction with the environment is inexpensive.
However, there are myriad real-world applications in which such a high number
of interactions is infeasible. In these environments, transfer RL algorithms,
which can be used for the transfer of knowledge from one or multiple source
environments to a target environment, have been shown to increase learning
speed and improve initial and asymptotic performance. However, most existing
transfer RL algorithms are on-policy and sample inefficient, and often require
heuristic choices in algorithm design. This paper proposes an off-policy
Advantage-based Policy Transfer algorithm, APT-RL, for fixed domain
environments. Its novelty is in using the popular notion of ``advantage'' as a
regularizer, to weigh the knowledge that should be transferred from the source,
relative to new knowledge learned in the target, removing the need for
heuristic choices. Further, we propose a new transfer performance metric to
evaluate the performance of our algorithm and unify existing transfer RL
frameworks. Finally, we present a scalable, theoretically-backed task
similarity measurement algorithm to illustrate the alignment between our
proposed transferability metric and the similarity between source and target
environments. Numerical experiments on three continuous control benchmark tasks
demonstrate that APT-RL outperforms existing transfer RL algorithms on most
tasks, and is $10\%$ to $75\%$ more sample efficient than learning from
scratch. | Machine Learning |
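The abstract does not spell out the loss, but one plausible shape for an advantage-based transfer regularizer is sketched below; the names, the clamping, and the weighting scheme are hypothetical illustrations, not APT-RL's actual formulation:

```python
import torch

def advantage_weighted_transfer_loss(logp_target, logp_source, adv_source,
                                     base_rl_loss, lam=0.5):
    # Hypothetical sketch: pull the target policy toward the source policy
    # only on state-action pairs where the source advantage is positive,
    # i.e., where the source knowledge is actually worth transferring.
    weight = torch.clamp(adv_source, min=0.0)
    transfer_reg = (weight * (logp_target - logp_source)).mean()
    return base_rl_loss + lam * transfer_reg
```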
What field is the article from? | Title: Self Model for Embodied Intelligence: Modeling Full-Body Human Musculoskeletal System and Locomotion Control with Hierarchical Low-Dimensional Representation
Abstract: Modeling and control of the human musculoskeletal system is important for
understanding human motion, developing embodied intelligence, and optimizing
human-robot interaction systems. However, current open-source models are
restricted to a limited range of body parts and often with a reduced number of
muscles. There is also a lack of algorithms capable of controlling over 600
muscles to generate reasonable human movements. To fill this gap, we build a
comprehensive musculoskeletal model with 90 body segments, 206 joints, and 700
muscle-tendon units, allowing simulation of full-body dynamics and interaction
with various devices. We develop a new algorithm using low-dimensional
representation and hierarchical deep reinforcement learning to achieve
state-of-the-art full-body control. We validate the effectiveness of our model
and algorithm in simulations and on real human locomotion data. The
musculoskeletal model, along with its control algorithm, will be made available
to the research community to promote a deeper understanding of human motion
control and better design of interactive robots. | Artificial Intelligence |
What field is the article from? | Title: An Extensive Study on Adversarial Attack against Pre-trained Models of Code
Abstract: Transformer-based pre-trained models of code (PTMC) have been widely utilized
and have achieved state-of-the-art performance in many mission-critical
applications. However, they can be vulnerable to adversarial attacks through
identifier substitution or coding style transformation, which can significantly
degrade accuracy and may further incur security concerns. Although several
approaches have been proposed to generate adversarial examples for PTMC, the
effectiveness and efficiency of such approaches, especially on different code
intelligence tasks, have not been well understood. To bridge this gap, this
study systematically analyzes five state-of-the-art adversarial attack
approaches from three perspectives: effectiveness, efficiency, and the quality
of generated examples. The results show that none of the five approaches
balances all these perspectives. Particularly, approaches with a high attack
success rate tend to be time-consuming; the adversarial code they generate
often lacks naturalness, and vice versa. To address this limitation, we explore
the impact of perturbing identifiers under different contexts and find that
identifier substitution within for and if statements is the most effective.
Based on these findings, we propose a new approach that prioritizes different
types of statements for various tasks and further utilizes beam search to
generate adversarial examples. Evaluation results show that it outperforms the
state-of-the-art ALERT in terms of both effectiveness and efficiency while
preserving the naturalness of the generated adversarial examples. | Cryptography and Security |
What field is the article from? | Title: Personalized Path Recourse
Abstract: This paper introduces Personalized Path Recourse, a novel method that
generates recourse paths for an agent. The objective is to achieve desired
goals (e.g., better outcomes compared to the agent's original paths of action),
while ensuring a high similarity to the agent's original paths and being
personalized to the agent. Personalization refers to the extent to which the
new path is tailored to the agent's observed behavior patterns from their
policy function. We train a personalized recourse agent to generate such
personalized paths, which are obtained using reward functions that consider the
goal, similarity, and personalization. The proposed method is applicable to
both reinforcement learning and supervised learning settings for correcting or
improving sequences of actions or sequences of data to achieve a pre-determined
goal. The method is evaluated in various settings and demonstrates promising
results. | Machine Learning |
What field is the article from? | Title: Exploring Lip Segmentation Techniques in Computer Vision: A Comparative Analysis
Abstract: Lip segmentation is crucial in computer vision, especially for lip reading.
Despite extensive face segmentation research, lip segmentation has received
limited attention. The aim of this study is to compare state-of-the-art lip
segmentation models using a standardized setting and a publicly available
dataset. Five techniques, namely EHANet, Mask2Former, BiSeNet V2, PIDNet, and
STDC1, are qualitatively selected based on their reported performance,
inference time, code availability, recency, and popularity. The CelebAMask-HQ
dataset, comprising manually annotated face images, is used to fairly assess
the lip segmentation performance of the selected models. Inference experiments
are conducted on a Raspberry Pi4 to emulate limited computational resources.
The results show that Mask2Former and EHANet have the best performances in
terms of mIoU score. BiSeNet V2 demonstrates competitive performance, while
PIDNet excels in recall but has lower precision. Most models show inference
times ranging from 1000 to around 3000 milliseconds on a Raspberry Pi4, with
PIDNet having the lowest mean inference time. This study provides a
comprehensive evaluation of lip segmentation models, highlighting their
performance and inference times. The findings contribute to the development of
lightweight techniques and establish benchmarks for future advances in lip
segmentation, especially in IoT and edge computing scenarios. | Computer Vision |
What field is the article from? | Title: TST$^\mathrm{R}$: Target Similarity Tuning Meets the Real World
Abstract: Target similarity tuning (TST) is a method for selecting relevant examples for
natural language (NL) to code generation with large language models (LLMs)
to improve performance. Its goal is to adapt a sentence embedding model to have
the similarity between two NL inputs match the similarity between their
associated code outputs. In this paper, we propose different methods to apply
and improve TST in the real world. First, we replace the sentence transformer
with embeddings from a larger model, which reduces sensitivity to the language
distribution and thus provides more flexibility in synthetic generation of
examples, and we train a tiny model that transforms these embeddings to a space
where embedding similarity matches code similarity, which allows the model to
remain a black box and only requires a few matrix multiplications at inference
time. Second, we show how to efficiently select a smaller number of training
examples to train the TST model. Third, we introduce a ranking-based evaluation
for TST that does not require end-to-end code generation experiments, which can
be expensive to perform. | Artificial Intelligence |
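A minimal sketch of the "tiny model over frozen black-box embeddings" idea (the pairwise training interface, layer size, and MSE loss below are illustrative assumptions):

```python
import torch
import torch.nn as nn

class TinyTSTHead(nn.Module):
    """Maps frozen black-box embeddings into a space where cosine
    similarity matches code similarity; inference is one matmul."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim, bias=False)

    def forward(self, e: torch.Tensor) -> torch.Tensor:
        z = self.proj(e)
        return z / z.norm(dim=-1, keepdim=True)

def tst_loss(head, emb_a, emb_b, code_sim):
    # Embedding similarity of the two NL inputs should match the
    # (precomputed) similarity between their associated code outputs.
    pred_sim = (head(emb_a) * head(emb_b)).sum(dim=-1)
    return ((pred_sim - code_sim) ** 2).mean()
```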
What field is the article from? | Title: Touring sampling with pushforward maps
Abstract: The number of sampling methods could be daunting for a practitioner looking
to cast powerful machine learning methods to their specific problem. This paper
takes a theoretical stance to review and organize many sampling approaches in
the ``generative modeling'' setting, where one wants to generate new data that
are similar to some training examples. By revealing links between existing
methods, it might prove useful to overcome some of the current challenges in
sampling with diffusion models, such as long inference time due to diffusion
simulation, or the lack of diversity in generated samples. | Machine Learning |
What field is the article from? | Title: The Quest for Content: A Survey of Search-Based Procedural Content Generation for Video Games
Abstract: Demand for video games is constantly increasing, which requires the costly
production of large amounts of content. Towards this challenge, researchers
have developed Search-Based Procedural Content Generation (SBPCG), that is, the
(semi-)automated creation of content through search algorithms. We survey the
current state of SBPCG, reporting work that appeared in the field from 2011 to 2022,
and identifying open research challenges. The results lead to recommendations
for practitioners and to the identification of several potential future
research avenues for SBPCG. | Software Engineering |
What field is the article from? | Title: Sim-to-Real Causal Transfer: A Metric Learning Approach to Causally-Aware Interaction Representations
Abstract: Modeling spatial-temporal interactions among neighboring agents is at the
heart of multi-agent problems such as motion forecasting and crowd navigation.
Despite notable progress, it remains unclear to which extent modern
representations can capture the causal relationships behind agent interactions.
In this work, we take an in-depth look at the causal awareness of these
representations, from computational formalism to real-world practice. First, we
cast doubt on the notion of non-causal robustness studied in the recent
CausalAgents benchmark. We show that recent representations are already
partially resilient to perturbations of non-causal agents, and yet modeling
indirect causal effects involving mediator agents remains challenging. To
address this challenge, we introduce a metric learning approach that
regularizes latent representations with causal annotations. Our controlled
experiments show that this approach not only leads to higher degrees of causal
awareness but also yields stronger out-of-distribution robustness. To further
operationalize it in practice, we propose a sim-to-real causal transfer method
via cross-domain multi-task learning. Experiments on pedestrian datasets show
that our method can substantially boost generalization, even in the absence of
real-world causal annotations. We hope our work provides a new perspective on
the challenges and potential pathways towards causally-aware representations of
multi-agent interactions. Our code is available at
https://github.com/socialcausality. | Machine Learning |
What field is the article from? | Title: Controlled Decoding from Language Models
Abstract: We propose controlled decoding (CD), a novel off-policy reinforcement
learning method to control the autoregressive generation from language models
towards high reward outcomes. CD solves an off-policy reinforcement learning
problem through a value function for the reward, which we call a prefix scorer.
The prefix scorer is used at inference time to steer the generation towards
higher reward outcomes. We show that the prefix scorer may be trained on
(possibly) off-policy data to predict the expected reward when decoding is
continued from a partially decoded response. We empirically demonstrate that CD
is effective as a control mechanism on a Reddit conversations corpus. We also
show that the modularity of the design of CD makes it possible to control for
multiple rewards, effectively solving a multi-objective reinforcement learning
problem with no additional complexity. Finally, we show that CD can be applied
in a novel blockwise fashion at inference-time, again without the need for any
training-time changes, essentially bridging the gap between the popular
best-of-$K$ strategy and token-level reinforcement learning. This makes CD a
promising approach for alignment of language models. | Machine Learning |
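A hedged sketch of the blockwise variant described above: sample K candidate blocks and keep the one the prefix scorer values highest. `lm_sample` and `prefix_scorer` are assumed interfaces, not a concrete API:

```python
def blockwise_controlled_decoding(lm_sample, prefix_scorer, prompt,
                                  num_blocks=8, k=4, block_len=16):
    """Bridge between best-of-K and token-level control: at each step, pick
    the best of K sampled blocks, scored by the expected-reward prefix scorer."""
    text = prompt
    for _ in range(num_blocks):
        blocks = [lm_sample(text, max_new_tokens=block_len) for _ in range(k)]
        scores = [prefix_scorer(text + b) for b in blocks]
        text += max(zip(scores, blocks))[1]   # keep the highest-scoring block
    return text
```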
What field is the article from? | Title: RoboSense At Edge: Detecting Slip, Crumple and Shape of the Object in Robotic Hand for Teleoperations
Abstract: Slip and crumple detection is essential for performing robust manipulation
tasks with a robotic hand (RH) like remote surgery. It has been one of the
challenging problems in the robotics manipulation community. In this work, we
propose a machine learning (ML) based technique to detect
slip and crumple, as well as the shape of an object that is currently held
in the robotic hand. The proposed ML model detects slip, crumple, and
shape using the force/torque exerted and the angular positions of the actuators
present in the RH. The proposed model would be integrated into the loop of a
robotic hand (RH) and haptic glove (HG). This would help us to reduce the
latency in the case of teleoperation. | Robotics |
What field is the article from? | Title: GreenLightningAI: An Efficient AI System with Decoupled Structural and Quantitative Knowledge
Abstract: The number and complexity of artificial intelligence (AI) applications are
growing relentlessly. As a result, even with the many algorithmic and
mathematical advances experienced over past decades as well as the impressive
energy efficiency and computational capacity of current hardware accelerators,
training the most powerful and popular deep neural networks comes at very high
economic and environmental costs. Recognising that additional optimisation of
conventional neural network training is very difficult, this work takes a
radically different approach by proposing GreenLightningAI, a new AI system
design consisting of a linear model that is capable of emulating the behaviour
of deep neural networks by subsetting the model for each particular sample. The
new AI system stores the information required to select the system subset for a
given sample (referred to as structural information) separately from the linear
model parameters (referred to as quantitative knowledge). In this paper we
present a proof of concept, showing that the structural information stabilises
far earlier than the quantitative knowledge. Additionally, we show
experimentally that the structural information can be kept unmodified when
re-training the AI system with new samples while still achieving a validation
accuracy similar to that obtained when re-training a neural network of
similar size. Since the proposed AI system is based on a linear model, multiple
copies of the model, trained with different datasets, can be easily combined.
This enables faster and greener (re)-training algorithms, including incremental
re-training and federated incremental re-training. | Machine Learning |
What field is the article from? | Title: Rapid Motor Adaptation for Robotic Manipulator Arms
Abstract: Developing generalizable manipulation skills is a core challenge in embodied
AI. This includes generalization across diverse task configurations,
encompassing variations in object shape, density, friction coefficient, and
external disturbances such as forces applied to the robot. Rapid Motor
Adaptation (RMA) offers a promising solution to this challenge. It posits that
essential hidden variables influencing an agent's task performance, such as
object mass and shape, can be effectively inferred from the agent's action and
proprioceptive history. Drawing inspiration from RMA in locomotion and in-hand
rotation, we use depth perception to develop agents tailored for rapid motor
adaptation in a variety of manipulation tasks. We evaluated our agents on four
challenging tasks from the Maniskill2 benchmark, namely pick-and-place
operations with hundreds of objects from the YCB and EGAD datasets, peg
insertion with precise position and orientation, and operating a variety of
faucets and handles, with customized environment variations. Empirical results
demonstrate that our agents surpass state-of-the-art methods like automatic
domain randomization and vision-based policies, obtaining better generalization
performance and sample efficiency. | Robotics |
What field is the article from? | Title: OMNIINPUT: A Model-centric Evaluation Framework through Output Distribution
Abstract: We propose a novel model-centric evaluation framework, OmniInput, to evaluate
the quality of an AI/ML model's predictions on all possible inputs (including
human-unrecognizable ones), which is crucial for AI safety and reliability.
Unlike traditional data-centric evaluation based on pre-defined test sets, the
test set in OmniInput is self-constructed by the model itself and the model
quality is evaluated by investigating its output distribution. We employ an
efficient sampler to obtain representative inputs and the output distribution
of the trained model, which, after selective annotation, can be used to
estimate the model's precision and recall at different output values and a
comprehensive precision-recall curve. Our experiments demonstrate that
OmniInput enables a more fine-grained comparison between models, especially
when their performance is almost the same on pre-defined datasets, leading to
new findings and insights for how to train more robust, generalizable models. | Machine Learning |
What field is the article from? | Title: SynFundus: A synthetic fundus images dataset with millions of samples and multi-disease annotations
Abstract: In the field of medical imaging, there are seldom large-scale public datasets
with high-quality annotations due to data privacy and annotation cost. To
address this issue, we release SynFundus-1M, a high-quality synthetic dataset
containing over 1 million fundus images covering 11 disease types.
Moreover, we intentionally diversify the readability of the images and
accordingly provide 4 types of the quality score for each image. To the best of
our knowledge, SynFundus-1M is currently the largest fundus dataset with the
most sophisticated annotations. All the images are generated by a Denoising
Diffusion Probabilistic Model, named SynFundus-Generator. Trained with over 1.3
million private fundus images, our SynFundus-Generator achieves significantly
superior performance in generating fundus images compared to some recent
related works. Furthermore, we blend some synthetic images from SynFundus-1M
with real fundus images, and ophthalmologists can hardly distinguish the
synthetic images from real ones. Through extensive experiments, we demonstrate
that both convolutional neural networks (CNNs) and Vision Transformers (ViTs) can
benefit from SynFundus-1M by pretraining or training directly. Compared to
datasets like ImageNet or EyePACS, models trained on SynFundus-1M not only
achieve better performance but also faster convergence on various downstream
tasks. | Computer Vision |
What field is the article from? | Title: UTBoost: A Tree-boosting based System for Uplift Modeling
Abstract: Uplift modeling refers to the set of machine learning techniques that a
manager may use to estimate customer uplift, that is, the net effect of an
action on some customer outcome. By identifying the subset of customers for
whom a treatment will have the greatest effect, uplift models assist
decision-makers in optimizing resource allocations and maximizing overall
returns. Accurately estimating customer uplift poses practical challenges, as
it requires assessing the difference between two mutually exclusive outcomes
for each individual. In this paper, we propose two innovative adaptations of
the well-established Gradient Boosting Decision Trees (GBDT) algorithm, which
learn the causal effect in a sequential way and overcome the counter-factual
nature. Both approaches innovate existing techniques in terms of ensemble
learning method and learning objectives, respectively. Experiments on
large-scale datasets demonstrate the usefulness of the proposed methods, which
often yield remarkable improvements over base models. To facilitate their
application, we develop UTBoost, an end-to-end tree boosting system
specifically designed for uplift modeling. The package is open source and has
been optimized for training speed to meet the needs of real industrial
applications. | Machine Learning |
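For orientation, the classic two-model GBDT baseline for customer uplift is sketched below; this is the standard approach that UTBoost's sequential adaptations improve on, not the paper's own algorithm:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def two_model_uplift(X, y, treated, X_new):
    """T-learner baseline: fit separate outcome models on treated and
    control customers, then estimate uplift as the prediction gap."""
    treated = np.asarray(treated, dtype=bool)
    model_t = GradientBoostingRegressor().fit(X[treated], y[treated])
    model_c = GradientBoostingRegressor().fit(X[~treated], y[~treated])
    return model_t.predict(X_new) - model_c.predict(X_new)  # net effect per customer
```

The two mutually exclusive outcomes mentioned in the abstract show up directly here: each customer contributes to only one of the two fits, which is exactly the counterfactual gap the paper's joint, sequential learners are designed to close.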
What field is the article from? | Title: Mitigating Object Hallucinations in Large Vision-Language Models through Visual Contrastive Decoding
Abstract: Large Vision-Language Models (LVLMs) have advanced considerably, intertwining
visual recognition and language understanding to generate content that is not
only coherent but also contextually attuned. Despite their success, LVLMs still
suffer from the issue of object hallucinations, where models generate plausible
yet incorrect outputs that include objects that do not exist in the images. To
mitigate this issue, we introduce Visual Contrastive Decoding (VCD), a simple
and training-free method that contrasts output distributions derived from
original and distorted visual inputs. The proposed VCD effectively reduces the
over-reliance on statistical bias and unimodal priors, two essential causes of
object hallucinations. This adjustment ensures the generated content is closely
grounded to visual inputs, resulting in contextually accurate outputs. Our
experiments show that VCD, without either additional training or the usage of
external tools, significantly mitigates the object hallucination issue across
different LVLM families. Beyond mitigating object hallucinations, VCD also
excels in general LVLM benchmarks, highlighting its wide-ranging applicability. | Computer Vision |
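The core adjustment can be sketched in a few lines (the contrast weight `alpha` is an assumed hyperparameter, and the paper's adaptive plausibility constraint is omitted):

```python
import torch

def visual_contrastive_decoding(logits_original, logits_distorted, alpha=1.0):
    """Contrast next-token logits from the original vs. a distorted image:
    tokens that stay likely even without clean visual evidence (language
    priors, statistical bias) are down-weighted, grounding the output."""
    contrasted = (1 + alpha) * logits_original - alpha * logits_distorted
    return torch.softmax(contrasted, dim=-1)
```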
What field is the article from? | Title: Generative AI for Hate Speech Detection: Evaluation and Findings
Abstract: Automatic hate speech detection using deep neural models is hampered by the
scarcity of labeled datasets, leading to poor generalization. To mitigate this
problem, generative AI has been utilized to generate large amounts of synthetic
hate speech sequences from available labeled examples, leveraging the generated
data in finetuning large pre-trained language models (LLMs). In this chapter,
we provide a review of relevant methods, experimental setups and evaluation of
this approach. In addition to general LLMs, such as BERT, RoBERTa and ALBERT,
we apply and evaluate the impact of train set augmentation with generated data
using LLMs that have been already adapted for hate detection, including
RoBERTa-Toxicity, HateBERT, HateXplain, ToxDect, and ToxiGen. An empirical
study corroborates our previous findings, showing that this approach improves
hate speech generalization, boosting recall performance across data
distributions. In addition, we explore and compare the performance of the
finetuned LLMs with zero-shot hate detection using a GPT-3.5 model. Our results
demonstrate that while better generalization is achieved using the GPT-3.5
model, it achieves mediocre recall and low precision on most datasets. It is an
open question whether the sensitivity of models such as GPT-3.5, and onward,
can be improved using similar techniques of text generation. | Computational Linguistics |
What field is the article from? | Title: Evaluating Large Language Models in Ophthalmology
Abstract: Purpose: The performance of three different large language models (LLMs)
(GPT-3.5, GPT-4, and PaLM2) in answering ophthalmology professional questions
was evaluated and compared with that of three different professional
populations (medical undergraduates, medical masters, and attending
physicians). Methods: A 100-item ophthalmology single-choice test was
administered to three different LLMs (GPT-3.5, GPT-4, and PaLM2) and three
different professional levels (medical undergraduates, medical masters, and
attending physicians), respectively. The performance of each LLM was comprehensively
evaluated and compared with the human group in terms of average score,
stability, and confidence. Results: Each LLM outperformed undergraduates in
general, with GPT-3.5 and PaLM2 being slightly below the master's level, while
GPT-4 showed a level comparable to that of attending physicians. In addition,
GPT-4 showed significantly higher answer stability and confidence than GPT-3.5
and PaLM2. Conclusion: Our study shows that LLMs, as represented by GPT-4, perform
well in the field of ophthalmology. With further improvements, LLMs will bring
unexpected benefits in medical education and clinical decision making in the
near future. | Computational Linguistics |
What field is the article from? | Title: Education distillation: getting student models to learn in schools
Abstract: Knowledge distillation is one of the methods for model compression, and
existing knowledge distillation techniques focus on how to improve the
distillation algorithm so as to enhance the distillation efficiency. This paper
introduces dynamic incremental learning into knowledge distillation and
proposes a distillation strategy for education distillation. Specifically, it
is proposed to take fragmented student models divided from the complete student
model as lower-grade models. As the grade level rises, fragmented student
models deepen in conjunction with designed teaching reference layers, while
learning and distilling from more teacher models. By moving from lower to
higher grades, the fragmented student models were gradually integrated into a
complete target student model, and their performance gradually improved from
the lower to the higher grades. Education
distillation strategies combined with distillation algorithms outperform
single distillation algorithms on the public CIFAR100, Caltech256, and
Food-101 datasets. | Artificial Intelligence |
What field is the article from? | Title: "Close...but not as good as an educator." -- Using ChatGPT to provide formative feedback in large-class collaborative learning
Abstract: Delivering personalised, formative feedback to multiple problem-based
learning groups in a short time period can be almost impossible. We employed
ChatGPT to provide personalised formative feedback in a one-hour Zoom break-out
room activity that taught practicing health professionals how to formulate
evaluation plans for digital health initiatives. Learners completed an
evaluation survey that included Likert scales and open-ended questions that
were analysed. Half of the 44 survey respondents had never used ChatGPT before.
Overall, respondents found the feedback favourable, described a wide range of
group dynamics, and had adaptive responses to the feedback, yet only three
groups used the feedback loop to improve their evaluation plans. Future
educators can learn from our experience including engineering prompts,
providing instructions on how to use ChatGPT, and scaffolding optimal group
interactions with ChatGPT. Future researchers should explore the influence of
ChatGPT on group dynamics and derive design principles for the use of ChatGPT
in collaborative learning. | Human-Computer Interaction |
What field is the article from? | Title: On Task-personalized Multimodal Few-shot Learning for Visually-rich Document Entity Retrieval
Abstract: Visually-rich document entity retrieval (VDER), which extracts key
information (e.g. date, address) from document images like invoices and
receipts, has become an important topic in industrial NLP applications. The
emergence of new document types at a constant pace, each with its unique entity
types, presents a unique challenge: many documents contain unseen entity types
that occur only a couple of times. Addressing this challenge requires models to
have the ability of learning entities in a few-shot manner. However, prior
works for Few-shot VDER mainly address the problem at the document level with a
predefined global entity space, which doesn't account for the entity-level
few-shot scenario: target entity types are locally personalized by each task
and entity occurrences vary significantly among documents. To address this
unexplored scenario, this paper studies a novel entity-level few-shot VDER
task. The challenges lie in the uniqueness of the label space for each task and
the increased complexity of out-of-distribution (OOD) contents. To tackle this
novel task, we present a task-aware meta-learning based framework, with a
central focus on achieving effective task personalization that distinguishes
between in-task and out-of-task distribution. Specifically, we adopt a
hierarchical decoder (HC) and employ contrastive learning (ContrastProtoNet) to
achieve this goal. Furthermore, we introduce a new dataset, FewVEX, to boost
future research in the field of entity-level few-shot VDER. Experimental
results demonstrate our approaches significantly improve the robustness of
popular meta-learning baselines. | Artificial Intelligence |
What field is the article from? | Title: CGS-Mask: Making Time Series Predictions Intuitive for AI
Abstract: Artificial intelligence (AI) has immense potential in time series prediction,
but most explainable tools have limited capabilities in providing a systematic
understanding of important features over time. These tools typically rely on
evaluating a single time point, overlook the time ordering of inputs, and
neglect the time-sensitive nature of time series applications. These factors
make it difficult for users, particularly those without domain knowledge, to
comprehend AI model decisions and obtain meaningful explanations. We propose
CGS-Mask, a post-hoc and model-agnostic cellular genetic strip mask-based
saliency approach to address these challenges. CGS-Mask uses consecutive time
steps as a cohesive entity to evaluate the impact of features on the final
prediction, providing binary and sustained feature importance scores over time.
Our algorithm optimizes the mask population iteratively to obtain the optimal
mask in a reasonable time. We evaluated CGS-Mask on synthetic and real-world
datasets, and it outperformed state-of-the-art methods in elucidating the
importance of features over time. According to our pilot user study via a
questionnaire survey, CGS-Mask is the most effective approach in presenting
easily understandable time series prediction results, enabling users to
comprehend the decision-making process of AI models with ease. | Artificial Intelligence |
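A toy sketch of the genetic search over binary time-step masks; the population size, operators, and fitness interface below are illustrative assumptions, and the real method evolves strips of consecutive steps with cellular rules rather than plain bit strings:

```python
import numpy as np

def evolve_saliency_mask(fitness, n_steps, pop=32, gens=100, p_mut=0.05, seed=0):
    """Evolve binary masks over time steps; fitness(mask) should score how
    strongly masking those steps changes the model's prediction."""
    rng = np.random.default_rng(seed)
    population = rng.integers(0, 2, size=(pop, n_steps))
    for _ in range(gens):
        scores = np.array([fitness(m) for m in population])
        parents = population[np.argsort(scores)[-(pop // 2):]]   # elitist selection
        cuts = rng.integers(1, n_steps, size=pop // 2)
        children = np.array([
            np.concatenate([parents[i % len(parents)][:c],
                            parents[(i + 1) % len(parents)][c:]])
            for i, c in enumerate(cuts)])                        # one-point crossover
        children ^= rng.random(children.shape) < p_mut           # bit-flip mutation
        population = np.vstack([parents, children])
    return max(population, key=fitness)
```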
What field is the article from? | Title: Clinical Notes Reveal Physician Fatigue
Abstract: Physicians write notes about patients. In doing so, they reveal much about
themselves. Using data from 129,228 emergency room visits, we train a model to
identify notes written by fatigued physicians -- those who worked 5 or more of
the prior 7 days. In a hold-out set, the model accurately identifies notes
written by these high-workload physicians, and also flags notes written in
other high-fatigue settings: on overnight shifts, and after high patient
volumes. Model predictions also correlate with worse decision-making on at
least one important metric: yield of testing for heart attack is 18% lower with
each standard deviation increase in model-predicted fatigue. Finally, the model
indicates that notes written about Black and Hispanic patients have 12% and 21%
higher predicted fatigue than Whites -- larger than overnight vs. daytime
differences. These results have an important implication for large language
models (LLMs). Our model indicates that fatigued doctors write more predictable
notes. Perhaps unsurprisingly, because word prediction is the core of how LLMs
work, we find that LLM-written notes have 17% higher predicted fatigue than
real physicians' notes. This indicates that LLMs may introduce distortions in
generated text that are not yet fully understood. | Computational Linguistics |
What field is the article from? | Title: Multi-Set Inoculation: Assessing Model Robustness Across Multiple Challenge Sets
Abstract: Language models, given their black-box nature, often exhibit sensitivity to
input perturbations, leading to trust issues due to hallucinations. To bolster
trust, it's essential to understand these models' failure modes and devise
strategies to enhance their performance. In this study, we propose a framework
to study the effect of input perturbations on language models of different
scales, from pre-trained models to large language models (LLMs). We use
fine-tuning to train a robust model to perturbations, and we investigate
whether exposure to one perturbation improves or degrades the model's
performance on other perturbations. To address multi-perturbation robustness,
we suggest three distinct training strategies. We also extend the framework to
LLMs via chain-of-thought (CoT) prompting with exemplars. We instantiate our
framework for the Tabular-NLI task and show that the proposed strategies train
the model robust to different perturbations without losing accuracy on a given
dataset. | Computational Linguistics |
What field is the article from? | Title: High-fidelity Person-centric Subject-to-Image Synthesis
Abstract: Current subject-driven image generation methods encounter significant
challenges in person-centric image generation. The reason is that they learn
the semantic scene and person generation by fine-tuning a common pre-trained
diffusion, which involves an irreconcilable training imbalance. Precisely, to
generate realistic persons, they need to sufficiently tune the pre-trained
model, which inevitably causes the model to forget the rich semantic scene
prior and makes scene generation over-fit to the training data. Moreover, even
with sufficient fine-tuning, these methods can still not generate high-fidelity
persons, since joint learning of the scene and person generation also leads to
quality compromise. In this paper, we propose Face-diffuser, an effective
collaborative generation pipeline to eliminate the above training imbalance and
quality compromise. Specifically, we first develop two specialized pre-trained
diffusion models, i.e., Text-driven Diffusion Model (TDM) and Subject-augmented
Diffusion Model (SDM), for scene and person generation, respectively. The
sampling process is divided into three sequential stages, i.e., semantic scene
construction, subject-scene fusion, and subject enhancement. The first and last
stages are performed by TDM and SDM respectively. The subject-scene fusion
stage is a collaboration achieved through a novel and highly effective
mechanism, Saliency-adaptive Noise Fusion (SNF). Specifically, it is based on
our key observation that there exists a robust link between classifier-free
guidance responses and the saliency of generated images. In each time step, SNF
leverages the unique strengths of each model and allows for the spatial
blending of predicted noises from both models automatically in a saliency-aware
manner. Extensive experiments confirm the impressive effectiveness and
robustness of the Face-diffuser. | Computer Vision |
What field is the article from? | Title: Relation Extraction from News Articles (RENA): A Tool for Epidemic Surveillance
Abstract: Relation Extraction from News Articles (RENA) is a browser-based tool
designed to extract key entities and their semantic relationships in English
language news articles related to infectious diseases. Constructed using the
React framework, this system presents users with an elegant and user-friendly
interface. It enables users to input a news article and select from a choice of
two models to generate a comprehensive list of relations within the provided
text. As a result, RENA allows real-time parsing of news articles to extract
key information for epidemic surveillance, contributing to EPIWATCH, an
open-source intelligence-based epidemic warning system. | Computational Linguistics |
What field is the article from? | Title: Language Guided Visual Question Answering: Elevate Your Multimodal Language Model Using Knowledge-Enriched Prompts
Abstract: Visual question answering (VQA) is the task of answering questions about an
image. The task assumes an understanding of both the image and the question to
provide a natural language answer. VQA has gained popularity in recent years
due to its potential applications in a wide range of fields, including
robotics, education, and healthcare. In this paper, we focus on
knowledge-augmented VQA, where answering the question requires commonsense
knowledge, world knowledge, and reasoning about ideas and concepts not present
in the image. We propose a multimodal framework that uses language guidance
(LG) in the form of rationales, image captions, scene graphs, etc to answer
questions more accurately. We benchmark our method on the multi-choice
question-answering task of the A-OKVQA, Science-QA, VSR, and IconQA datasets
using CLIP and BLIP models. We show that the use of language guidance is a
simple but powerful and effective strategy for visual question answering. Our
language guidance improves the performance of CLIP by 7.6% and BLIP-2 by 4.8%
in the challenging A-OKVQA dataset. We also observe consistent improvement in
performance on the Science-QA, VSR, and IconQA datasets when using the proposed
language guidance. The implementation of LG-VQA is publicly available at
https://github.com/declare-lab/LG-VQA. | Computer Vision |
What field is the article from? | Title: Holodeck: Language Guided Generation of 3D Embodied AI Environments
Abstract: 3D simulated environments play a critical role in Embodied AI, but their
creation requires expertise and extensive manual effort, restricting their
diversity and scope. To mitigate this limitation, we present Holodeck, a system
that generates 3D environments to match a user-supplied prompt fully
automatically. Holodeck can generate diverse scenes, e.g., arcades, spas, and
museums, adjust the designs for styles, and can capture the semantics of
complex queries such as "apartment for a researcher with a cat" and "office of
a professor who is a fan of Star Wars". Holodeck leverages a large language
model (GPT-4) for common sense knowledge about what the scene might look like
and uses a large collection of 3D assets from Objaverse to populate the scene
with diverse objects. To address the challenge of positioning objects
correctly, we prompt GPT-4 to generate spatial relational constraints between
objects and then optimize the layout to satisfy those constraints. Our
large-scale human evaluation shows that annotators prefer Holodeck over
manually designed procedural baselines in residential scenes and that Holodeck
can produce high-quality outputs for diverse scene types. We also demonstrate
an exciting application of Holodeck in Embodied AI, training agents to navigate
in novel scenes like music rooms and daycares without human-constructed data,
which is a significant step forward in developing general-purpose embodied
agents. | Computer Vision |
What field is the article from? | Title: Large Language Models with Retrieval-Augmented Generation for Zero-Shot Disease Phenotyping
Abstract: Identifying disease phenotypes from electronic health records (EHRs) is
critical for numerous secondary uses. Manually encoding physician knowledge
into rules is particularly challenging for rare diseases due to inadequate EHR
coding, necessitating review of clinical notes. Large language models (LLMs)
offer promise in text understanding but may not efficiently handle real-world
clinical documentation. We propose a zero-shot LLM-based method enriched by
retrieval-augmented generation and MapReduce, which pre-identifies
disease-related text snippets to be used in parallel as queries for the LLM to
establish diagnosis. We show that this method as applied to pulmonary
hypertension (PH), a rare disease characterized by elevated arterial pressures
in the lungs, significantly outperforms physician logic rules ($F_1$ score of
0.62 vs. 0.75). This method has the potential to enhance rare disease cohort
identification, expanding the scope of robust clinical research and care gap
identification. | Artificial Intelligence |
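The pipeline reads as a small map-reduce; a hedged sketch follows, where `retrieve_snippets` and `llm` are assumed interfaces and the majority-vote reduce step is an illustrative choice rather than the paper's exact aggregation:

```python
def zero_shot_phenotype(notes: str, retrieve_snippets, llm,
                        disease: str = "pulmonary hypertension") -> bool:
    """Map: ask the LLM about each retrieved disease-related snippet.
    Reduce: aggregate the per-snippet verdicts into one diagnosis call."""
    snippets = retrieve_snippets(notes, query=disease)   # retrieval-augmented step
    verdicts = [
        llm(f"Does the following clinical text support a diagnosis of "
            f"{disease}? Answer yes or no.\n\n{s}")
        for s in snippets                                # map (parallelizable)
    ]
    yes = sum(v.strip().lower().startswith("yes") for v in verdicts)
    return yes > len(verdicts) / 2                       # reduce: majority vote
```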
What field is the article from? | Title: VisionTraj: A Noise-Robust Trajectory Recovery Framework based on Large-scale Camera Network
Abstract: Trajectory recovery based on the snapshots from the city-wide multi-camera
network facilitates urban mobility sensing and driveway optimization. The
state-of-the-art solutions devoted to such a vision-based scheme typically
incorporate predefined rules or unsupervised iterative feedback, struggling
with multi-fold challenges such as lack of open-source datasets for training
the whole pipeline, and the vulnerability to the noises from visual inputs. In
response to the dilemma, this paper proposes VisionTraj, the first
learning-based model that reconstructs vehicle trajectories from snapshots
recorded by road network cameras. Coupled with it, we elaborate on two rational
vision-trajectory datasets, which produce extensive trajectory data along with
corresponding visual snapshots, enabling supervised vision-trajectory interplay
extraction. Following the data creation, based on the results from the
off-the-shelf multi-modal vehicle clustering, we first re-formulate the
trajectory recovery problem as a generative task and introduce the canonical
Transformer as the autoregressive backbone. Then, to identify clustering noises
(e.g., false positives) with the bound on the snapshots' spatiotemporal
dependencies, a GCN-based soft-denoising module is applied based on the fine-
and coarse-grained Re-ID clusters. Additionally, we harness strong semantic
information extracted from the tracklet to provide detailed insights into the
vehicle's entry and exit actions during trajectory recovery. The denoising and
tracklet components can also act as plug-and-play modules to boost baselines.
Experimental results on the two hand-crafted datasets show that the proposed
VisionTraj achieves a maximum +11.5% improvement against the sub-best model. | Computer Vision |
What field is the article from? | Title: Applying Large Language Models for Causal Structure Learning in Non Small Cell Lung Cancer
Abstract: Causal discovery is becoming a key part in medical AI research. These methods
can enhance healthcare by identifying causal links between biomarkers,
demographics, treatments and outcomes. They can aid medical professionals in
choosing more impactful treatments and strategies. In parallel, Large Language
Models (LLMs) have shown great potential in identifying patterns and generating
insights from text data. In this paper we investigate applying LLMs to the
problem of determining the directionality of edges in causal discovery.
Specifically, we test our approach on a deidentified set of Non-Small Cell Lung
Cancer (NSCLC) patients that have both electronic health record and genomic
panel data. Graphs are validated using Bayesian Dirichlet estimators using
tabular data. Our result shows that LLMs can accurately predict the
directionality of edges in causal graphs, outperforming existing
state-of-the-art methods. These findings suggest that LLMs can play a
significant role in advancing causal discovery and help us better understand
complex systems. | Artificial Intelligence |
What field is the article from? | Title: A Graph Neural Network-Based QUBO-Formulated Hamiltonian-Inspired Loss Function for Combinatorial Optimization using Reinforcement Learning
Abstract: Quadratic Unconstrained Binary Optimization (QUBO) is a generic technique to
model various NP-hard Combinatorial Optimization problems (CO) in the form of
binary variables. Ising Hamiltonian is used to model the energy function of a
system. QUBO to Ising Hamiltonian is regarded as a technique to solve various
canonical optimization problems through quantum optimization algorithms.
Recently, PI-GNN, a generic framework, has been proposed to address CO problems
over graphs based on Graph Neural Network (GNN) architecture. They introduced a
generic QUBO-formulated Hamiltonian-inspired loss function that was directly
optimized using GNN. PI-GNN is highly scalable but there lies a noticeable
decrease in the number of satisfied constraints when compared to
problem-specific algorithms and becomes more pronounced with increased graph
densities. Here, we identify a behavioral pattern related to it and devise
strategies to improve its performance. Another group of literature uses
Reinforcement learning (RL) to solve the aforementioned NP-hard problems using
problem-specific reward functions. In this work, we also focus on creating a
bridge between the RL-based solutions and the QUBO-formulated Hamiltonian. We
formulate and empirically evaluate the compatibility of the QUBO-formulated
Hamiltonian as the generic reward function in the RL-based paradigm in the form
of rewards. Furthermore, we also introduce a novel Monte Carlo Tree
Search-based strategy with GNN, where we apply a guided search through manual
perturbation of node labels during training. We empirically evaluated our
methods and observed up to 44% improvement in the number of constraint
violations compared to the PI-GNN. | Machine Learning |
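For reference, the QUBO-formulated Hamiltonian-inspired loss at the center of this discussion is compact: the GNN outputs node probabilities p in [0, 1] that relax the binary variables, and the loss is the relaxed energy. A sketch in PyTorch (in the RL setting described above, the negative of this energy would serve as the reward):

```python
import torch

def qubo_hamiltonian_loss(p: torch.Tensor, Q: torch.Tensor) -> torch.Tensor:
    """Relaxed QUBO energy H(x) = x^T Q x, evaluated on GNN node
    probabilities p (shape (N,)) in place of binary variables x."""
    return p @ Q @ p

# After training, round the probabilities to binary labels: x = (p > 0.5).long()
```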
What field is the article from? | Title: A Spatial-Temporal Transformer based Framework For Human Pose Assessment And Correction in Education Scenarios
Abstract: Human pose assessment and correction play a crucial role in applications
across various fields, including computer vision, robotics, sports analysis,
healthcare, and entertainment. In this paper, we propose a Spatial-Temporal
Transformer based Framework (STTF) for human pose assessment and correction in
education scenarios such as physical exercises and science experiment. The
framework comprising skeletal tracking, pose estimation, posture assessment,
and posture correction modules to educate students with professional,
quick-to-fix feedback. We also create a pose correction method to provide
corrective feedback in the form of visual aids. We test the framework with our
own dataset. It comprises (a) new recordings of five exercises, (b) existing
recordings found on the internet of the same exercises, and (c) corrective
feedback on the recordings by professional athletes and teachers. Results show
that our model can effectively measure and comment on the quality of students'
actions. The STTF leverages the power of transformer models to capture spatial
and temporal dependencies in human poses, enabling accurate assessment and
effective correction of students' movements. | Computer Vision |
What field is the article from? | Title: Prompt-Engineering and Transformer-based Question Generation and Evaluation
Abstract: Question generation has numerous applications in the educational context.
Question generation can prove helpful for students when reviewing content and
testing themselves. Furthermore, a question generation model can aid teachers
by lessening the burden of creating assessments and other practice material.
This paper aims to find the best method to generate questions from textual data
through a transformer model and prompt engineering. In this research, we
finetuned a pretrained distilBERT model on the SQuAD question answering dataset
to generate questions. In addition to training a transformer model, prompt
engineering was applied to generate questions effectively using the LLaMA
model. The generated questions were compared against the baseline questions in
the SQuAD dataset to evaluate the effectiveness of four different prompts. All
four prompts demonstrated over 60% similarity on average. Of the
prompt-generated questions, 30% achieved a high similarity score greater than
70%. | Computational Linguistics |
What field is the article from? | Title: Mixed Pseudo Labels for Semi-Supervised Object Detection
Abstract: While the pseudo-label method has demonstrated considerable success in
semi-supervised object detection tasks, this paper uncovers notable limitations
within this approach. Specifically, the pseudo-label method tends to amplify
the inherent strengths of the detector while accentuating its weaknesses, which
is manifested in the missed detection of pseudo-labels, particularly for small
and tail category objects. To overcome these challenges, this paper proposes
Mixed Pseudo Labels (MixPL), consisting of Mixup and Mosaic for pseudo-labeled
data, to mitigate the negative impact of missed detections and balance the
model's learning across different object scales. Additionally, the model's
detection performance on tail categories is improved by resampling labeled data
with relevant instances. Notably, MixPL consistently improves the performance
of various detectors and obtains new state-of-the-art results with Faster
R-CNN, FCOS, and DINO on COCO-Standard and COCO-Full benchmarks. Furthermore,
MixPL also exhibits good scalability on large models, improving DINO Swin-L by
2.5% mAP and achieving nontrivial new records (60.2% mAP) on the COCO val2017
benchmark without extra annotations. | Computer Vision |
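The Mixup half of MixPL can be sketched simply for detection-style pseudo labels; the Beta-sampled mixing coefficient and the union-of-boxes choice below are standard Mixup conventions, assumed here rather than taken from the paper:

```python
import torch

def mixup_pseudo_labeled(img_a, img_b, boxes_a, boxes_b, alpha=0.5):
    """Blend two pseudo-labeled images and keep the union of their boxes;
    an object the teacher missed in one image may survive via the other,
    softening the cost of missed pseudo-label detections."""
    lam = torch.distributions.Beta(alpha, alpha).sample()
    mixed = lam * img_a + (1 - lam) * img_b
    return mixed, boxes_a + boxes_b          # lists of boxes, concatenated
```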
What field is the article from? | Title: Contactless Fingerprint Biometric Anti-Spoofing: An Unsupervised Deep Learning Approach
Abstract: Contactless fingerprint recognition offers a higher level of user comfort and
addresses hygiene concerns more effectively. However, it is also more
vulnerable to presentation attacks such as photo paper, paper-printout, and
various display attacks, which makes it more challenging to implement in
biometric systems compared to contact-based modalities. Limited research has
been conducted on presentation attacks in contactless fingerprint systems, and
these studies have encountered challenges in terms of generalization and
scalability, since both bonafide samples and presentation attacks are utilized
during model training. Although this approach appears promising, it lacks the
ability to handle unseen attacks, which is a crucial factor for developing
presentation attack detection (PAD) methods that can generalize effectively. We
introduced an innovative
anti-spoofing approach that combines an unsupervised autoencoder with a
convolutional block attention module to address the limitations of existing
methods. Our model is exclusively trained on bonafide images without exposure
to any spoofed samples during the training phase. It is then evaluated against
various types of presentation attack images in the testing phase. The scheme we
proposed has achieved an average BPCER of 0.96% with an APCER of 1.6% for
presentation attacks involving various types of spoofed samples. | Computer Vision |
What field is the article from? | Title: Optimal Wildfire Escape Route Planning for Drones under Dynamic Fire and Smoke
Abstract: In recent years, the increasing prevalence and intensity of wildfires have
posed significant challenges to emergency response teams. The utilization of
unmanned aerial vehicles (UAVs), commonly known as drones, has shown promise in
aiding wildfire management efforts. This work focuses on the development of an
optimal wildfire escape route planning system specifically designed for drones,
considering dynamic fire and smoke models. First, the source of the wildfire
can be accurately located through information fusion between UAV and
satellite, and the road conditions in the vicinity of the fire can be assessed
and analyzed using multi-channel remote sensing data. Second, the road network
can be extracted and segmented in real time using UAV vision technology, and
each road in the road network map can be given priority based on the results of
road condition classification. Third, the spread model of dynamic fires
calculates the new location of the fire source based on the fire intensity,
wind speed and direction, and the radius increases as the wildfire spreads.
Smoke is generated around the fire source to create a visual representation of
a burning fire. Finally, using an improved A* algorithm that considers all the
above factors, the UAV can quickly plan an escape route between the starting
and destination locations that avoids the fire source and the area into which
it is spreading. By considering dynamic fire and smoke
models, the proposed system enhances the safety and efficiency of drone
operations in wildfire environments. | Robotics |
What field is the article from? | Title: Image Restoration Through Generalized Ornstein-Uhlenbeck Bridge
Abstract: Diffusion models possess powerful generative capabilities enabling the
mapping of noise to data using reverse stochastic differential equations.
However, in image restoration tasks, the focus is on the mapping relationship
from low-quality images to high-quality images. To address this, we introduced
the Generalized Ornstein-Uhlenbeck Bridge (GOUB) model. By leveraging the
natural mean-reverting property of the generalized OU process and further
adjusting the variance of its steady-state distribution through Doob's
h-transform, we achieve diffusion mappings from point to point with minimal
cost. This allows for end-to-end training, enabling the recovery of
high-quality images from low-quality ones. Additionally, we uncovered the
mathematical essence of some bridge models, all of which are special cases of
the GOUB, and empirically demonstrated the optimality of our proposed models.
Furthermore, benefiting from our distinctive parameterization mechanism, we
proposed the Mean-ODE model that is better at capturing pixel-level information
and structural perceptions. Experimental results show that both models achieved
state-of-the-art results in various tasks, including inpainting, deraining, and
super-resolution. Code is available at https://github.com/Hammour-steak/GOUB. | Computer Vision |
What field is the article from? | Title: Push it to the Demonstrated Limit: Multimodal Visuotactile Imitation Learning with Force Matching
Abstract: Optical tactile sensors have emerged as an effective means to acquire dense
contact information during robotic manipulation. A recently-introduced
'see-through-your-skin' (STS) variant of this type of sensor has both visual
and tactile modes, enabled by leveraging a semi-transparent surface and
controllable lighting. In this work, we investigate the benefits of pairing
visuotactile sensing with imitation learning for contact-rich manipulation
tasks. First, we use tactile force measurements and a novel algorithm during
kinesthetic teaching to yield a force profile that better matches that of the
human demonstrator. Second, we add visual/tactile STS mode switching as a
control policy output, simplifying the application of the sensor. Finally, we
study multiple observation configurations to compare and contrast the value of
visual/tactile data (both with and without mode switching) with visual data
from a wrist-mounted eye-in-hand camera. We perform an extensive series of
experiments on a real robotic manipulator with door-opening and closing tasks,
including over 3,000 real test episodes. Our results highlight the importance
of tactile sensing for imitation learning, both for data collection to allow
force matching, and for policy execution to allow accurate task feedback. | Robotics |
What field is the article from? | Title: Relax: Composable Abstractions for End-to-End Dynamic Machine Learning
Abstract: Dynamic shape computations have become critical in modern machine learning
workloads, especially in emerging large language models. The success of these
models has driven demand for deploying them to a diverse set of backend
environments. In this paper, we present Relax, a compiler abstraction for
optimizing end-to-end dynamic machine learning workloads. Relax introduces
first-class symbolic shape annotations to track dynamic shape computations
globally across the program. It also introduces a cross-level abstraction that
encapsulates computational graphs, loop-level tensor programs, and library
calls in a single representation to enable cross-level optimizations. We build
an end-to-end compilation framework using the proposed approach to optimize
dynamic shape models. Experimental results on large language models show that
Relax delivers performance competitive with state-of-the-art hand-optimized
systems across platforms and enables deployment of emerging dynamic models to a
broader set of environments, including mobile phones, embedded devices, and web
browsers. | Machine Learning |
What field is the article from? | Title: Finnish 5th and 6th graders' misconceptions about Artificial Intelligence
Abstract: Research on children's initial conceptions of AI is in an emerging state,
which, from a constructivist viewpoint, challenges the development of
pedagogically sound AI-literacy curricula, methods, and materials. To
contribute to resolving this need, in the present paper qualitative survey data
from 195 children were analyzed abductively to answer the following three
research questions: 1) What kind of misconceptions do Finnish 5th and 6th
graders have about the essence of AI?; 2) How do these misconceptions relate to
common misconception types?; and 3) How profound are these misconceptions? As a
result, three misconception categories were identified: 1) Non-technological
AI, in which AI was conceptualized as people's cognitive processes (factual
misconception); 2) Anthropomorphic AI, in which AI was conceptualized as a
human-like entity (vernacular, non-scientific, and conceptual misconception);
and 3) AI as a machine with a pre-installed intelligence or knowledge (factual
misconception). The majority of the children evaluated their AI knowledge as
low,
which implies that the misconceptions are more superficial than profound. The
findings suggest that context-specific linguistic features can contribute to
students' AI misconceptions. Implications for future research and AI literacy
education are discussed. | Computers and Society |
What field is the article from? | Title: Chain-of-Note: Enhancing Robustness in Retrieval-Augmented Language Models
Abstract: Retrieval-augmented language models (RALMs) represent a substantial
advancement in the capabilities of large language models, notably in reducing
factual hallucination by leveraging external knowledge sources. However, the
reliability of the retrieved information is not always guaranteed. The
retrieval of irrelevant data can lead to misguided responses, potentially
causing the model to overlook its inherent knowledge, even when it possesses
adequate information to address the query. Moreover, standard RALMs often
struggle to assess whether they possess adequate knowledge, both intrinsic and
retrieved, to provide an accurate answer. In situations where knowledge is
lacking, these systems should ideally respond with "unknown" when the answer is
unattainable. In response to these challenges, we introduce Chain-of-Noting
(CoN), a novel approach aimed at improving the robustness of RALMs in facing
noisy, irrelevant documents and in handling unknown scenarios. The core idea of
CoN is to generate sequential reading notes for retrieved documents, enabling a
thorough evaluation of their relevance to the given question and integrating
this information to formulate the final answer. We employed ChatGPT to create
training data for CoN, which was subsequently trained on an LLaMa-2 7B model.
Our experiments across four open-domain QA benchmarks show that RALMs equipped
with CoN significantly outperform standard RALMs. Notably, CoN achieves an
average improvement of +7.9 in EM score given entirely noisy retrieved
documents and +10.5 in rejection rates for real-time questions that fall
outside the pre-training knowledge scope. | Computational Linguistics |
What field is the article from? | Title: Shortcut Bias Mitigation via Ensemble Diversity Using Diffusion Probabilistic Models
Abstract: Spurious correlations in the data, where multiple cues are predictive of the
target labels, often lead to a phenomenon known as simplicity bias, where a
model relies on erroneous, easy-to-learn cues while ignoring reliable ones. In
this work, we propose an ensemble diversification framework exploiting
Diffusion Probabilistic Models (DPMs) for shortcut bias mitigation. We show
that at particular training intervals, DPMs can generate images with novel
feature combinations, even when trained on images displaying correlated input
features. We leverage this crucial property to generate synthetic
counterfactuals to increase model diversity via ensemble disagreement. We show
that DPM-guided diversification is sufficient to remove dependence on primary
shortcut cues, without a need for additional supervised signals. We further
empirically quantify its efficacy on several diversification objectives, and
finally show improved generalization and diversification performance on par
with prior work that relies on auxiliary data collection. | Machine Learning |
What field is the article from? | Title: Mission-driven Exploration for Accelerated Deep Reinforcement Learning with Temporal Logic Task Specifications
Abstract: This paper addresses the problem of designing optimal control policies for
mobile robots with mission and safety requirements specified using Linear
Temporal Logic (LTL). We consider robots with unknown stochastic dynamics
operating in environments with unknown geometric structure. The robots are
equipped with sensors allowing them to detect obstacles. Our goal is to
synthesize a control policy that maximizes the probability of satisfying an
LTL-encoded task in the presence of motion and environmental uncertainty.
Several deep reinforcement learning (DRL) algorithms have been proposed
recently to address similar problems. A common limitation in related works is
that of slow learning performance. In order to address this issue, we propose a
novel DRL algorithm, which has the capability to learn control policies at a
notably faster rate compared to similar methods. Its sample efficiency is due
to a mission-driven exploration strategy that prioritizes exploration towards
directions that may contribute to mission accomplishment. Identifying these
directions relies on an automaton representation of the LTL task as well as a
learned neural network that (partially) models the unknown system dynamics. We
provide comparative experiments demonstrating the efficiency of our algorithm
on robot navigation tasks in unknown environments. | Robotics |
What field is the article from? | Title: Uncertainty Quantification of Deep Learning for Spatiotemporal Data: Challenges and Opportunities
Abstract: With the advancement of GPS, remote sensing, and computational simulations,
large amounts of geospatial and spatiotemporal data are being collected at an
increasing speed. Such emerging spatiotemporal big data assets, together with
the recent progress of deep learning technologies, provide unique opportunities
to transform society. However, it is widely recognized that deep learning
sometimes makes unexpected and incorrect predictions with unwarranted
confidence, causing severe consequences in high-stake decision-making
applications (e.g., disaster management, medical diagnosis, autonomous
driving). Uncertainty quantification (UQ) aims to estimate a deep learning
model's confidence. This paper provides a brief overview of UQ of deep learning
for spatiotemporal data, including its unique challenges and existing methods.
We particularly focus on the importance of uncertainty sources. We identify
several future research directions for spatiotemporal data. | Machine Learning |
What field is the article from? | Title: Exploring Semi-supervised Hierarchical Stacked Encoder for Legal Judgement Prediction
Abstract: Predicting the judgment of a legal case from its unannotated case facts is a
challenging task. The lengthy and non-uniform document structure poses an even
greater challenge in extracting information for decision prediction. In this
work, we explore and propose a two-level classification mechanism, both
supervised and unsupervised, by using a domain-specific pre-trained BERT to
extract information from long documents as sentence embeddings, further
processing them with a transformer encoder layer, and using unsupervised
clustering to extract hidden labels from these embeddings to better predict the
judgment of a
legal case. We conduct several experiments with this mechanism and see higher
performance gains than the previously proposed methods on the ILDC dataset. Our
experimental results also show the importance of domain-specific pre-training
of Transformer Encoders in legal information processing. | Computational Linguistics |
What field is the article from? | Title: CompeteAI: Understanding the Competition Behaviors in Large Language Model-based Agents
Abstract: Large language models (LLMs) have been widely used as agents to complete
different tasks, such as personal assistance or event planning. While most work
has focused on cooperation and collaboration between agents, little work
explores competition, another important mechanism that fosters the development
of society and economy. In this paper, we seek to examine the competition
behaviors in LLM-based agents. We first propose a general framework to study
the competition between agents. Then, we implement a practical competitive
environment using GPT-4 to simulate a virtual town with two types of agents,
including restaurant agents and customer agents. Specifically, restaurant
agents compete with each other to attract more customers, where the competition
fosters them to transform, such as cultivating new operating strategies. The
results of our experiments reveal several interesting findings ranging from
social learning to the Matthew Effect, which align well with existing
sociological
and economic theories. We believe that competition between agents deserves
further investigation to help us understand society better. The code will be
released soon. | Artificial Intelligence |
What field is the article from? | Title: HEALNet -- Hybrid Multi-Modal Fusion for Heterogeneous Biomedical Data
Abstract: Technological advances in medical data collection such as high-resolution
histopathology and high-throughput genomic sequencing have contributed to the
rising requirement for multi-modal biomedical modelling, specifically for
image, tabular, and graph data. Most multi-modal deep learning approaches use
modality-specific architectures that are trained separately and cannot capture
the crucial cross-modal information that motivates the integration of different
data sources. This paper presents the Hybrid Early-fusion Attention Learning
Network (HEALNet): a flexible multi-modal fusion architecture, which a)
preserves modality-specific structural information, b) captures the cross-modal
interactions and structural information in a shared latent space, c) can
effectively handle missing modalities during training and inference, and d)
enables intuitive model inspection by learning on the raw data input instead of
opaque embeddings. We conduct multi-modal survival analysis on Whole Slide
Images and Multi-omic data on four cancer cohorts of The Cancer Genome Atlas
(TCGA). HEALNet achieves state-of-the-art performance, substantially improving
over both uni-modal and recent multi-modal baselines, whilst being robust in
scenarios with missing modalities. | Machine Learning |
What field is the article from? | Title: Train 'n Trade: Foundations of Parameter Markets
Abstract: Organizations typically train large models individually. This is costly and
time-consuming, particularly for large-scale foundation models. Such vertical
production is known to be suboptimal. Inspired by this economic insight, we ask
whether it is possible to leverage others' expertise by trading the constituent
parts in models, i.e., sets of weights, as if they were market commodities.
While recent advances in aligning and interpolating models suggest that doing
so may be possible, a number of fundamental questions must be answered to
create viable parameter markets. In this work, we address these basic
questions, propose a framework containing the infrastructure necessary for
market operations to take place, study strategies for exchanging parameters,
and offer means for agents to monetize parameters. Excitingly, compared to
agents who train siloed models from scratch, we show that it is possible to
mutually gain by using the market, even in competitive settings. This suggests
that the notion of parameter markets may be a useful paradigm for improving
large-scale model training in the future. | Machine Learning |
What field is the article from? | Title: On Surgical Fine-tuning for Language Encoders
Abstract: Fine-tuning all the layers of a pre-trained neural language encoder (either
using all the parameters or using parameter-efficient methods) is often the
de-facto way of adapting it to a new task. We show evidence that for different
downstream language tasks, fine-tuning only a subset of layers is sufficient to
obtain performance that is close to and often better than fine-tuning all the
layers in the language encoder. We propose an efficient metric based on the
diagonal of the Fisher information matrix (FIM score), to select the candidate
layers for selective fine-tuning. We show, empirically on GLUE and SuperGLUE
tasks and across distinct language encoders, that this metric can effectively
select layers leading to a strong downstream performance. Our work highlights
that task-specific information corresponding to a given downstream task is
often localized within a few layers, and tuning only those is sufficient for
strong performance. Additionally, we demonstrate the robustness of the FIM
score to rank layers in a manner that remains constant during the optimization
process. | Computational Linguistics |
What field is the article from? | Title: The New Frontier of Cybersecurity: Emerging Threats and Innovations
Abstract: In today's digitally interconnected world, cybersecurity threats have reached
unprecedented levels, presenting a pressing concern for individuals,
organizations, and governments. This study employs a qualitative research
approach to comprehensively examine the diverse threats of cybersecurity and
their impacts across various sectors. Four primary categories of threats are
identified and analyzed, encompassing malware attacks, social engineering
attacks, network vulnerabilities, and data breaches. The research delves into
the consequences of these threats on individuals, organizations, and society at
large. The findings reveal a range of key emerging threats in cybersecurity,
including advanced persistent threats, ransomware attacks, Internet of Things
(IoT) vulnerabilities, and social engineering exploits. Consequently, it is
evident that emerging cybersecurity threats pose substantial risks to both
organizations and individuals. The sophistication and diversity of these
emerging threats necessitate a multi-layered approach to cybersecurity. This
approach should include robust security measures, comprehensive employee
training, and regular security audits. The implications of these emerging
threats are extensive, with potential consequences such as financial loss,
reputational damage, and compromised personal information. This study
emphasizes the importance of implementing effective measures to mitigate these
threats. It highlights the significance of using strong passwords and
encryption methods, and of regularly updating software to bolster cyber
defenses. | Cryptography and Security |
What field is the article from? | Title: Distributed Global Structure-from-Motion with a Deep Front-End
Abstract: While initial approaches to Structure-from-Motion (SfM) revolved around both
global and incremental methods, most recent applications rely on incremental
systems to estimate camera poses due to their superior robustness. Though there
has been tremendous progress in SfM 'front-ends' powered by deep models learned
from data, the state-of-the-art (incremental) SfM pipelines still rely on
classical SIFT features, developed in 2004. In this work, we investigate
whether leveraging the developments in feature extraction and matching helps
global SfM perform on par with the SOTA incremental SfM approach (COLMAP). To
do so, we design a modular SfM framework that allows us to easily combine
developments in different stages of the SfM pipeline. Our experiments show that
while developments in deep-learning based two-view correspondence estimation do
translate to improvements in point density for scenes reconstructed with global
SfM, none of them outperform SIFT when comparing with incremental SfM results
on a range of datasets. Our SfM system is designed from the ground up to
leverage distributed computation, enabling us to parallelize computation on
multiple machines and scale to large scenes. | Computer Vision |
What field is the article from? | Title: ChatGPT and Beyond: The Generative AI Revolution in Education
Abstract: The wide adoption and usage of generative artificial intelligence (AI)
models, particularly ChatGPT, has sparked a surge in research exploring their
potential applications in the educational landscape. This survey examines
academic literature published between November 2022 and July 2023,
specifically targeting high-impact research from Scopus-indexed Q1 and Q2
journals. This survey delves into the practical applications and implications
of generative AI models across a diverse range of educational contexts. Through
a comprehensive and rigorous evaluation of recent academic literature, this
survey seeks to illuminate the evolving role of generative AI models,
particularly ChatGPT, in education. By shedding light on the potential
benefits, challenges, and emerging trends in this dynamic field, the survey
endeavors to contribute to the understanding of the nexus between artificial
intelligence and education. The findings of this review will empower educators,
researchers, and policymakers to make informed decisions about the integration
of AI technologies into learning environments. | Computers and Society |
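
For readers who want to experiment with rows like those above, the sketch below shows one way to load an instruction/input/output dataset of this shape and turn a row into a classification prompt. It is a minimal example assuming the Hugging Face `datasets` library; the repository id used here is a hypothetical placeholder, not the dataset's actual name.

```python
# Minimal sketch: load an instruction/input/output dataset shaped like the
# preview above and build a field-classification prompt from one row.
# NOTE: the repository id below is a hypothetical placeholder.
from collections import Counter

from datasets import load_dataset

ds = load_dataset("your-username/arxiv-field-classification")  # placeholder id

row = ds["train"][0]
# Each row carries a fixed instruction, a title+abstract input, and one of
# ten field labels as the output.
prompt = f"{row['instruction']}\n\n{row['input']}\n\nAnswer:"
print(prompt)
print("Gold label:", row["output"])

# Inspect the label distribution across the ten output classes.
print(Counter(ds["train"]["output"]))
```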