instruction (stringclasses, 1 value) | input (stringlengths, 260 to 2.07k) | output (stringclasses, 10 values) |
---|---|---|
What field is the article from? | Title: Multi-level Reasoning for Robotic Assembly: From Sequence Inference to Contact Selection
Abstract: Automating the assembly of objects from their parts is a complex problem with
innumerable applications in manufacturing, maintenance, and recycling. Unlike
existing research, which is limited to target segmentation, pose regression, or
using fixed target blueprints, our work presents a holistic multi-level
framework for part assembly planning consisting of part assembly sequence
inference, part motion planning, and robot contact optimization. We present the
Part Assembly Sequence Transformer (PAST) -- a sequence-to-sequence neural
network -- to infer assembly sequences recursively from a target blueprint. We
then use a motion planner and optimization to generate part movements and
contacts. To train PAST, we introduce the Dataset for Part Assembly Sequences (D4PAS), a large-scale dataset consisting of physically valid sequences for
industrial objects. Experimental results show that our approach generalizes
better than prior methods while needing significantly less computational time
for inference. | Robotics |
What field is the article from? | Title: Dialogue-based generation of self-driving simulation scenarios using Large Language Models
Abstract: Simulation is an invaluable tool for developing and evaluating controllers
for self-driving cars. Current simulation frameworks are driven by
highly-specialist domain specific languages, and so a natural language
interface would greatly enhance usability. But there is often a gap, consisting
of tacit assumptions the user is making, between a concise English utterance
and the executable code that captures the user's intent. In this paper we
describe a system that addresses this issue by supporting an extended
multimodal interaction: the user can follow up prior instructions with
refinements or revisions, in reaction to the simulations that have been
generated from their utterances so far. We use Large Language Models (LLMs) to
map the user's English utterances in this interaction into domain-specific
code, and so we explore the extent to which LLMs capture the context
sensitivity that's necessary for computing the speaker's intended message in
discourse. | Artificial Intelligence |
What field is the article from? | Title: Online Boosting Adaptive Learning under Concept Drift for Multistream Classification
Abstract: Multistream classification poses significant challenges due to the necessity
for rapid adaptation in dynamic streaming processes with concept drift. Despite
the growing research outcomes in this area, there has been a notable oversight
regarding the temporal dynamic relationships between these streams, leading to
the issue of negative transfer arising from irrelevant data. In this paper, we
propose a novel Online Boosting Adaptive Learning (OBAL) method that
effectively addresses this limitation by adaptively learning the dynamic
correlation among different streams. Specifically, OBAL operates in a
dual-phase mechanism, in the first of which we design an Adaptive COvariate
Shift Adaptation (AdaCOSA) algorithm to construct an initialized ensemble model
using archived data from various source streams, thus mitigating the covariate
shift while learning the dynamic correlations via an adaptive re-weighting
strategy. During the online process, we employ a Gaussian Mixture Model-based
weighting mechanism, which is seamlessly integrated with the acquired
correlations via AdaCOSA to effectively handle asynchronous drift. This
approach significantly improves the predictive performance and stability of the
target stream. We conduct comprehensive experiments on several synthetic and
real-world data streams, encompassing various drifting scenarios and types. The
results clearly demonstrate that OBAL achieves remarkable advancements in
addressing multistream classification problems by effectively leveraging
positive knowledge derived from multiple sources. | Machine Learning |
What field is the article from? | Title: From Dialogue to Diagram: Task and Relationship Extraction from Natural Language for Accelerated Business Process Prototyping
Abstract: The automatic transformation of verbose, natural language descriptions into
structured process models remains a challenge of significant complexity. This paper introduces a contemporary solution: central to our approach is the
use of dependency parsing and Named Entity Recognition (NER) for extracting key
elements from textual descriptions. Additionally, we utilize
Subject-Verb-Object (SVO) constructs for identifying action relationships and
integrate semantic analysis tools, including WordNet, for enriched contextual
understanding. A novel aspect of our system is the application of neural
coreference resolution, integrated with the SpaCy framework, enhancing the
precision of entity linkage and anaphoric references. Furthermore, the system
adeptly handles data transformation and visualization, converting extracted
information into BPMN (Business Process Model and Notation) diagrams. This
methodology not only streamlines the process of capturing and representing
business workflows but also significantly reduces the manual effort and
potential for error inherent in traditional modeling approaches. | Computational Linguistics |
What field is the article from? | Title: Unifying the Perspectives of NLP and Software Engineering: A Survey on Language Models for Code
Abstract: In this work we systematically review the recent advancements in code
processing with language models, covering 50+ models, 30+ evaluation tasks,
170+ datasets, and 700 related works. We break down code processing models into
general language models represented by the GPT family and specialized models
that are specifically pretrained on code, often with tailored objectives. We
discuss the relations and differences between these models, and highlight the
historical transition of code modeling from statistical models and RNNs to
pretrained Transformers and LLMs, which is exactly the same course that had
been taken by NLP. We also discuss code-specific features such as AST, CFG, and
unit tests, along with their application in training code language models, and
identify key challenges and potential future directions in this domain. We keep
the survey open and updated on GitHub at
https://github.com/codefuse-ai/Awesome-Code-LLM. | Computational Linguistics |
What field is the article from? | Title: Mixing-Denoising Generalizable Occupancy Networks
Abstract: While current state-of-the-art generalizable implicit neural shape models
rely on the inductive bias of convolutions, it is still not entirely clear how
properties emerging from such biases are compatible with the task of 3D
reconstruction from point cloud. We explore an alternative approach to
generalizability in this context. We relax the intrinsic model bias (i.e. using
MLPs to encode local features as opposed to convolutions) and constrain the
hypothesis space instead with an auxiliary regularization related to the
reconstruction task, i.e. denoising. The resulting model is the first MLP-only, locally conditioned network for implicit shape reconstruction from point clouds with fast feed-forward inference. Point-cloud-borne features and denoising offsets
are predicted from an exclusively MLP-made network in a single forward pass. A
decoder predicts occupancy probabilities for queries anywhere in space by
pooling nearby features from the point cloud order-invariantly, guided by
denoised relative positional encoding. We outperform the state-of-the-art
convolutional method while using half the number of model parameters. | Computer Vision |
What field is the article from? | Title: Prompted Zero-Shot Multi-label Classification of Factual Incorrectness in Machine-Generated Summaries
Abstract: This study addresses the critical issue of factual inaccuracies in
machine-generated text summaries, an increasingly prevalent issue in
information dissemination. Recognizing the potential of such errors to
compromise information reliability, we investigate the nature of factual
inconsistencies across machine-summarized content. We introduce a prompt-based
classification system that categorizes errors into four distinct types:
misrepresentation, inaccurate quantities or measurements, false attribution,
and fabrication. The participants are tasked with evaluating a corpus of
machine-generated summaries against their original articles. Our methodology
employs qualitative judgements to identify the occurrence of factual
distortions. The results show that our prompt-based approaches are able to
detect the type of errors in the summaries to some extent, although there is
scope for improvement in our classification systems. | Computational Linguistics |
What field is the article from? | Title: Unsupervised textile defect detection using convolutional neural networks
Abstract: In this study, we propose a novel motif-based approach for unsupervised
textile anomaly detection that combines the benefits of traditional
convolutional neural networks with those of an unsupervised learning paradigm.
It consists of five main steps: preprocessing, automatic pattern period
extraction, patch extraction, features selection and anomaly detection. This
proposed approach uses a new dynamic and heuristic method for feature selection
which avoids the drawbacks of initialization of the number of filters (neurons)
and their weights, and those of the backpropagation mechanism such as the
vanishing gradients, which are common practice in the state-of-the-art methods.
The design and training of the network are performed in a dynamic and input
domain-based manner and, thus, no ad-hoc configurations are required. Before
building the model, only the number of layers and the stride are defined. We do
not initialize the weights randomly nor do we define the filter size or number
of filters as conventionally done in CNN-based approaches. This reduces effort
and time spent on hyperparameter initialization and fine-tuning. Only one
defect-free sample is required for training and no further labeled data is
needed. The trained network is then used to detect anomalies on defective
fabric samples. We demonstrate the effectiveness of our approach on the
Patterned Fabrics benchmark dataset. Our algorithm yields reliable and
competitive results (on recall, precision, accuracy, and F1-measure) compared
to state-of-the-art unsupervised approaches, in less time, with efficient
training in a single epoch and a lower computational cost. | Computer Vision |
What field is the article from? | Title: Measuring Five Accountable Talk Moves to Improve Instruction at Scale
Abstract: Providing consistent, individualized feedback to teachers on their
instruction can improve student learning outcomes. Such feedback can especially
benefit novice instructors who teach on online platforms and have limited
access to instructional training. To build scalable measures of instruction, we
fine-tune RoBERTa and GPT models to identify five instructional talk moves
inspired by accountable talk theory: adding on, connecting, eliciting, probing
and revoicing students' ideas. We fine-tune these models on a newly annotated
dataset of 2500 instructor utterances derived from transcripts of small group
instruction in an online computer science course, Code in Place. Although we
find that GPT-3 consistently outperforms RoBERTa in terms of precision, its
recall varies significantly. We correlate the instructors' use of each talk
move with indicators of student engagement and satisfaction, including
students' section attendance, section ratings, and assignment completion rates.
We find that using talk moves generally correlates positively with student
outcomes, and connecting student ideas has the largest positive impact. These
results corroborate previous research on the effectiveness of accountable talk
moves and provide exciting avenues for using these models to provide
instructors with useful, scalable feedback. | Computers and Society |
What field is the article from? | Title: Towards Human-like Perception: Learning Structural Causal Model in Heterogeneous Graph
Abstract: Heterogeneous graph neural networks have become popular in various domains.
However, their generalizability and interpretability are limited due to the
discrepancy between their inherent inference flows and human reasoning logic or
underlying causal relationships for the learning problem. This study introduces
a novel solution, HG-SCM (Heterogeneous Graph as Structural Causal Model). It
can mimic the human perception and decision process through two key steps:
constructing intelligible variables based on semantics derived from the graph
schema and automatically learning task-level causal relationships among these
variables by incorporating advanced causal discovery techniques. We compared
HG-SCM to seven state-of-the-art baseline models on three real-world datasets,
under three distinct and ubiquitous out-of-distribution settings. HG-SCM
achieved the highest average performance rank with minimal standard deviation,
substantiating its effectiveness and superiority in terms of both predictive
power and generalizability. Additionally, the visualization and analysis of the
auto-learned causal diagrams for the three tasks aligned well with domain
knowledge and human cognition, demonstrating prominent interpretability.
HG-SCM's human-like nature and its enhanced generalizability and
interpretability make it a promising solution for special scenarios where
transparency and trustworthiness are paramount. | Machine Learning |
What field is the article from? | Title: Pitfall of Optimism: Distributional Reinforcement Learning by Randomizing Risk Criterion
Abstract: Distributional reinforcement learning algorithms have attempted to utilize
estimated uncertainty for exploration, such as optimism in the face of
uncertainty. However, using the estimated variance for optimistic exploration
may cause biased data collection and hinder convergence or performance. In this
paper, we present a novel distributional reinforcement learning algorithm that
selects actions by randomizing the risk criterion to avoid a one-sided tendency on
risk. We provide a perturbed distributional Bellman optimality operator by
distorting the risk measure and prove the convergence and optimality of the
proposed method with the weaker contraction property. Our theoretical results
support that the proposed method does not fall into biased exploration and is
guaranteed to converge to an optimal return. Finally, we empirically show that
our method outperforms other existing distribution-based algorithms in various
environments, including 55 Atari games. | Machine Learning |
What field is the article from? | Title: ADAPTER-RL: Adaptation of Any Agent using Reinforcement Learning
Abstract: Deep Reinforcement Learning (DRL) agents frequently face challenges in
adapting to tasks outside their training distribution, including issues with
over-fitting, catastrophic forgetting and sample inefficiency. Although the
application of adapters has proven effective in supervised learning contexts
such as natural language processing and computer vision, their potential within
the DRL domain remains largely unexplored. This paper delves into the
integration of adapters in reinforcement learning, presenting an innovative
adaptation strategy that demonstrates enhanced training efficiency and improved performance over the base agent, evaluated experimentally in the nanoRTS environment, a
real-time strategy (RTS) game simulation. Our proposed universal approach is
not only compatible with pre-trained neural networks but also with rule-based
agents, offering a means to integrate human expertise. | Artificial Intelligence |
What field is the article from? | Title: Batch Bayesian Optimization for Replicable Experimental Design
Abstract: Many real-world experimental design problems (a) evaluate multiple
experimental conditions in parallel and (b) replicate each condition multiple
times due to large and heteroscedastic observation noise. Given a fixed total
budget, this naturally induces a trade-off between evaluating more unique
conditions while replicating each of them fewer times vs. evaluating fewer
unique conditions and replicating each more times. Moreover, in these problems,
practitioners may be risk-averse and hence prefer an input with both good
average performance and small variability. To tackle both challenges, we
propose the Batch Thompson Sampling for Replicable Experimental Design
(BTS-RED) framework, which encompasses three algorithms. Our BTS-RED-Known and
BTS-RED-Unknown algorithms, for, respectively, known and unknown noise
variance, choose the number of replications adaptively rather than
deterministically such that an input with a larger noise variance is replicated
more times. As a result, despite the noise heteroscedasticity, both algorithms
enjoy a theoretical guarantee and are asymptotically no-regret. Our
Mean-Var-BTS-RED algorithm aims at risk-averse optimization and is also
asymptotically no-regret. We also show the effectiveness of our algorithms in
two practical real-world applications: precision agriculture and AutoML. | Machine Learning |
What field is the article from? | Title: FlowZero: Zero-Shot Text-to-Video Synthesis with LLM-Driven Dynamic Scene Syntax
Abstract: Text-to-video (T2V) generation is a rapidly growing research area that aims
to translate the scenes, objects, and actions within complex video text into a
sequence of coherent visual frames. We present FlowZero, a novel framework that
combines Large Language Models (LLMs) with image diffusion models to generate
temporally-coherent videos. FlowZero uses LLMs to understand complex
spatio-temporal dynamics from text, where LLMs can generate a comprehensive
dynamic scene syntax (DSS) containing scene descriptions, object layouts, and
background motion patterns. These elements in DSS are then used to guide the
image diffusion model for video generation with smooth object motions and
frame-to-frame coherence. Moreover, FlowZero incorporates an iterative
self-refinement process, enhancing the alignment between the spatio-temporal
layouts and the textual prompts for the videos. To enhance global coherence, we
propose enriching the initial noise of each frame with motion dynamics to
control the background movement and camera motion adaptively. By using
spatio-temporal syntaxes to guide the diffusion process, FlowZero achieves
improvement in zero-shot video synthesis, generating coherent videos with vivid
motion. | Computer Vision |
What field is the article from? | Title: Unveiling the Unseen Potential of Graph Learning through MLPs: Effective Graph Learners Using Propagation-Embracing MLPs
Abstract: Recent studies attempted to utilize multilayer perceptrons (MLPs) to solve
semi-supervised node classification on graphs, by training a student MLP by
knowledge distillation (KD) from a teacher graph neural network (GNN). While
previous studies have focused mostly on training the student MLP by matching
the output probability distributions between the teacher and student models
during KD, it has not been systematically studied how to inject the structural
information in an explicit and interpretable manner. Inspired by GNNs that
separate feature transformation $T$ and propagation $\Pi$, we re-frame the KD
process as enabling the student MLP to explicitly learn both $T$ and $\Pi$.
Although this can be achieved by applying the inverse propagation $\Pi^{-1}$
before distillation from the teacher GNN, it still comes with a high
computational cost from large matrix multiplications during training. To solve
this problem, we propose Propagate & Distill (P&D), which propagates the output
of the teacher GNN before KD and can be interpreted as an approximate process
of the inverse propagation $\Pi^{-1}$. Through comprehensive evaluations using
real-world benchmark datasets, we demonstrate the effectiveness of P&D by
showing further performance boost of the student MLP. | Machine Learning |
What field is the article from? | Title: Prompting LLMs with content plans to enhance the summarization of scientific articles
Abstract: This paper presents novel prompting techniques to improve the performance of
automatic summarization systems for scientific articles. Scientific article
summarization is highly challenging due to the length and complexity of these
documents. We conceive, implement, and evaluate prompting techniques that
provide additional contextual information to guide summarization systems.
Specifically, we feed summarizers with lists of key terms extracted from
articles, such as author keywords or automatically generated keywords. Our
techniques are tested with various summarization models and input texts.
Results show performance gains, especially for smaller models summarizing
sections separately. This evidences that prompting is a promising approach to
overcoming the limitations of less powerful systems. Our findings introduce a
new research direction of using prompts to aid smaller models. | Computational Linguistics |
What field is the article from? | Title: Perturbation-based Active Learning for Question Answering
Abstract: Building a question answering (QA) model with less annotation costs can be
achieved by utilizing an active learning (AL) training strategy. It selects the
most informative unlabeled training data to update the model effectively.
Acquisition functions for AL are used to determine how informative each
training example is, such as uncertainty or diversity based sampling. In this
work, we propose a perturbation-based active learning acquisition strategy and
demonstrate it is more effective than existing commonly used strategies. | Computational Linguistics |
What field is the article from? | Title: Unsupervised Behavior Extraction via Random Intent Priors
Abstract: Reward-free data is abundant and contains rich prior knowledge of human
behaviors, but it is not well exploited by offline reinforcement learning (RL)
algorithms. In this paper, we propose UBER, an unsupervised approach to extract
useful behaviors from offline reward-free datasets via diversified rewards.
UBER assigns different pseudo-rewards sampled from a given prior distribution
to different agents to extract a diverse set of behaviors, and reuse them as
candidate policies to facilitate the learning of new tasks. Perhaps
surprisingly, we show that rewards generated from random neural networks are
sufficient to extract diverse and useful behaviors, some even close to expert
ones. We provide both empirical and theoretical evidence to justify the use of
random priors for the reward function. Experiments on multiple benchmarks
showcase UBER's ability to learn effective and diverse behavior sets that
enhance sample efficiency for online RL, outperforming existing baselines. By
reducing reliance on human supervision, UBER broadens the applicability of RL
to real-world scenarios with abundant reward-free data. | Machine Learning |
What field is the article from? | Title: Learning Causal Representations from General Environments: Identifiability and Intrinsic Ambiguity
Abstract: This paper studies causal representation learning, the task of recovering
high-level latent variables and their causal relationships from low-level data
that we observe, assuming access to observations generated from multiple
environments. While existing works are able to prove full identifiability of
the underlying data generating process, they typically assume access to
single-node, hard interventions which is rather unrealistic in practice. The
main contribution of this paper is to characterize a notion of identifiability
which is provably the best one can achieve when hard interventions are not
available. First, for linear causal models, we provide an identifiability guarantee for data observed from general environments without assuming any
similarities between them. While the causal graph is shown to be fully
recovered, the latent variables are only identified up to an effect-domination
ambiguity (EDA). We then propose an algorithm, LiNGCReL, which is guaranteed to
recover the ground-truth model up to EDA, and we demonstrate its effectiveness
via numerical experiments. Moving on to general non-parametric causal models,
we prove the same identifiability guarantee assuming access to groups of soft
interventions. Finally, we provide counterparts of our identifiability results,
indicating that EDA is basically inevitable in our setting. | Machine Learning |
What field is the article from? | Title: Estimation of Concept Explanations Should be Uncertainty Aware
Abstract: Model explanations are very valuable for interpreting and debugging
prediction models. We study a specific kind of global explanations called
Concept Explanations, where the goal is to interpret a model using
human-understandable concepts. Recent advances in multi-modal learning
rekindled interest in concept explanations and led to several label-efficient
proposals for estimation. However, existing estimation methods are unstable to
the choice of concepts or dataset that is used for computing explanations. We
observe that instability in explanations is due to high variance in point
estimation of importance scores. We propose an uncertainty-aware Bayesian estimation method, which readily improves the reliability of the concept
explanations. We demonstrate with theoretical analysis and empirical evaluation
that explanations computed by our method are more reliable while also being
label-efficient and faithful. | Machine Learning |
What field is the article from? | Title: Robust and Scalable Hyperdimensional Computing With Brain-Like Neural Adaptations
Abstract: The Internet of Things (IoT) has facilitated many applications utilizing
edge-based machine learning (ML) methods to analyze locally collected data.
Unfortunately, popular ML algorithms often require intensive computations
beyond the capabilities of today's IoT devices. Brain-inspired hyperdimensional
computing (HDC) has been introduced to address this issue. However, existing
HDCs use static encoders, requiring extremely high dimensionality and hundreds
of training iterations to achieve reasonable accuracy. This results in a huge
efficiency loss, severely impeding the application of HDCs in IoT systems. We
observed that a main cause is that the encoding module of existing HDCs lacks
the capability to utilize and adapt to information learned during training. In
contrast, neurons in human brains dynamically regenerate all the time and
provide more useful functionalities when learning new information. While the
goal of HDC is to exploit the high-dimensionality of randomly generated base
hypervectors to represent the information as a pattern of neural activity, it
remains challenging for existing HDCs to support a similar behavior as brain
neural regeneration. In this work, we present dynamic HDC learning frameworks
that identify and regenerate undesired dimensions to provide adequate accuracy
with significantly lowered dimensionalities, thereby accelerating both the
training and inference. | Machine Learning |
What field is the article from? | Title: CSGNN: Conquering Noisy Node labels via Dynamic Class-wise Selection
Abstract: Graph Neural Networks (GNNs) have emerged as a powerful tool for
representation learning on graphs, but they often suffer from overfitting and
label noise issues, especially when the data is scarce or imbalanced. Different
from the paradigm of previous methods that rely on single-node confidence, in
this paper, we introduce a novel Class-wise Selection for Graph Neural
Networks, dubbed CSGNN, which employs a neighbor-aggregated latent space to
adaptively select reliable nodes across different classes. Specifically, 1) to
tackle the class imbalance issue, we introduce a dynamic class-wise selection
mechanism, leveraging the clustering technique to identify clean nodes based on
the neighbor-aggregated confidences. In this way, our approach can avoid the
pitfalls of biased sampling which is common with global threshold techniques.
2) To alleviate the problem of noisy labels, built on the concept of the
memorization effect, CSGNN prioritizes learning from clean nodes before noisy
ones, thereby iteratively enhancing model performance while mitigating label
noise. Through extensive experiments, we demonstrate that CSGNN outperforms
state-of-the-art methods in terms of both effectiveness and robustness. | Machine Learning |
What field is the article from? | Title: Empowering Autonomous Driving with Large Language Models: A Safety Perspective
Abstract: Autonomous Driving (AD) faces crucial hurdles for commercial launch, notably
in the form of diminished public trust and safety concerns from long-tail
unforeseen driving scenarios. This predicament is due to the limitation of deep
neural networks in AD software, which struggle with interpretability and
exhibit poor generalization capabilities in out-of-distribution and uncertain
scenarios. To this end, this paper advocates for the integration of Large
Language Models (LLMs) into the AD system, leveraging their robust common-sense
knowledge, reasoning abilities, and human-interaction capabilities. The
proposed approach deploys the LLM as an intelligent decision-maker in planning,
incorporating safety verifiers for contextual safety learning to enhance
overall AD performance and safety. We present results from two case studies
that affirm the efficacy of our approach. We further discuss the potential
integration of LLM for other AD software components including perception,
prediction, and simulation. Despite the observed challenges in the case
studies, the integration of LLMs is promising and beneficial for reinforcing
both safety and performance in AD. | Artificial Intelligence |
What field is the article from? | Title: PaSCo: Urban 3D Panoptic Scene Completion with Uncertainty Awareness
Abstract: We propose the task of Panoptic Scene Completion (PSC) which extends the
recently popular Semantic Scene Completion (SSC) task with instance-level
information to produce a richer understanding of the 3D scene. Our PSC proposal
utilizes a hybrid mask-based technique on the non-empty voxels from sparse
multi-scale completions. Whereas the SSC literature overlooks uncertainty which
is critical for robotics applications, we instead propose an efficient
ensembling to estimate both voxel-wise and instance-wise uncertainties along
PSC. This is achieved by building on a multi-input multi-output (MIMO)
strategy, while improving performance and yielding better uncertainty for
little additional compute. Additionally, we introduce a technique to aggregate
permutation-invariant mask predictions. Our experiments demonstrate that our
method surpasses all baselines in both Panoptic Scene Completion and
uncertainty estimation on three large-scale autonomous driving datasets. Our
code and data are available at https://astra-vision.github.io/PaSCo . | Computer Vision |
What field is the article from? | Title: Meta Learning for Multi-View Visuomotor Systems
Abstract: This paper introduces a new approach for quickly adapting a multi-view
visuomotor system for robots to varying camera configurations from the baseline
setup. It utilises meta-learning to fine-tune the perceptual network while
keeping the policy network fixed. Experimental results demonstrate a
significant reduction in the number of new training episodes needed to attain
baseline performance. | Robotics |
What field is the article from? | Title: rTisane: Externalizing conceptual models for data analysis increases engagement with domain knowledge and improves statistical model quality
Abstract: Statistical models should accurately reflect analysts' domain knowledge about
variables and their relationships. While recent tools let analysts express
these assumptions and use them to produce a resulting statistical model, it
remains unclear what analysts want to express and how externalization impacts
statistical model quality. This paper addresses these gaps. We first conduct an
exploratory study of analysts using a domain-specific language (DSL) to express
conceptual models. We observe a preference for detailing how variables relate
and a desire to allow, and then later resolve, ambiguity in their conceptual
models. We leverage these findings to develop rTisane, a DSL for expressing
conceptual models augmented with an interactive disambiguation process. In a
controlled evaluation, we find that rTisane's DSL helps analysts engage more
deeply with and accurately externalize their assumptions. rTisane also leads to
statistical models that match analysts' assumptions, maintain analysis intent,
and better fit the data. | Human-Computer Interaction |
What field is the article from? | Title: Action Inference by Maximising Evidence: Zero-Shot Imitation from Observation with World Models
Abstract: Unlike most reinforcement learning agents which require an unrealistic amount
of environment interactions to learn a new behaviour, humans excel at learning
quickly by merely observing and imitating others. This ability highly depends
on the fact that humans have a model of their own embodiment that allows them
to infer the most likely actions that led to the observed behaviour. In this
paper, we propose Action Inference by Maximising Evidence (AIME) to replicate
this behaviour using world models. AIME consists of two distinct phases. In the
first phase, the agent learns a world model from its past experience to
understand its own body by maximising the ELBO. In the second phase, the
agent is given some observation-only demonstrations of an expert performing a
novel task and tries to imitate the expert's behaviour. AIME achieves this by
defining a policy as an inference model and maximising the evidence of the
demonstration under the policy and world model. Our method is "zero-shot" in
the sense that it does not require further training for the world model or
online interactions with the environment after being given the demonstration. We
empirically validate the zero-shot imitation performance of our method on the
Walker and Cheetah embodiment of the DeepMind Control Suite and find it
outperforms the state-of-the-art baselines. Code is available at:
https://github.com/argmax-ai/aime. | Machine Learning |
What field is the article from? | Title: Unbalancedness in Neural Monge Maps Improves Unpaired Domain Translation
Abstract: In optimal transport (OT), a Monge map is known as a mapping that transports
a source distribution to a target distribution in the most cost-efficient way.
Recently, multiple neural estimators for Monge maps have been developed and
applied in diverse unpaired domain translation tasks, e.g. in single-cell
biology and computer vision. However, the classic OT framework enforces mass
conservation, which makes it prone to outliers and limits its applicability in
real-world scenarios. The latter can be particularly harmful in OT domain
translation tasks, where the relative position of a sample within a
distribution is explicitly taken into account. While unbalanced OT tackles this
challenge in the discrete setting, its integration into neural Monge map
estimators has received limited attention. We propose a theoretically grounded
method to incorporate unbalancedness into any Monge map estimator. We improve
existing estimators to model cell trajectories over time and to predict
cellular responses to perturbations. Moreover, our approach seamlessly
integrates with the OT flow matching (OT-FM) framework. While we show that
OT-FM performs competitively in image translation, we further improve
performance by incorporating unbalancedness (UOT-FM), which better preserves
relevant features. We hence establish UOT-FM as a principled method for
unpaired image translation. | Computer Vision |
What field is the article from? | Title: Synthetic Data Generation for Bridging Sim2Real Gap in a Production Environment
Abstract: Synthetic data is being used lately for training deep neural networks in
computer vision applications such as object detection, object segmentation and
6D object pose estimation. Domain randomization hereby plays an important role
in reducing the simulation to reality gap. However, this generalization might
not be effective in specialized domains like a production environment involving
complex assemblies. Either the individual parts, trained with synthetic images,
are integrated into much larger assemblies, making them indistinguishable from their counterparts and resulting in false positives, or they are partially occluded just enough to give rise to false negatives. Domain knowledge is vital in these
cases and if conceived effectively while generating synthetic data, can show a
considerable improvement in bridging the simulation to reality gap. This paper
focuses on synthetic data generation procedures for parts and assemblies used
in a production environment. The basic procedures for synthetic data generation
and their various combinations are evaluated and compared on images captured in
a production environment, where results show up to 15% improvement using
combinations of basic procedures. Reducing the simulation to reality gap in
this way can aid to utilize the true potential of robot assisted production
using artificial intelligence. | Computer Vision |
What field is the article from? | Title: Using State-of-the-Art Speech Models to Evaluate Oral Reading Fluency in Ghana
Abstract: This paper reports on a set of three recent experiments utilizing large-scale
speech models to evaluate the oral reading fluency (ORF) of students in Ghana.
While ORF is a well-established measure of foundational literacy, assessing it
typically requires one-on-one sessions between a student and a trained
evaluator, a process that is time-consuming and costly. Automating the
evaluation of ORF could support better literacy instruction, particularly in
education contexts where formative assessment is uncommon due to large class
sizes and limited resources. To our knowledge, this research is among the first
to examine the use of the most recent versions of large-scale speech models
(Whisper V2 and wav2vec 2.0) for ORF assessment in the Global South.
We find that Whisper V2 produces transcriptions of Ghanaian students reading
aloud with a Word Error Rate of 13.5. This is close to the model's average WER
on adult speech (12.8) and would have been considered state-of-the-art for
children's speech transcription only a few years ago. We also find that when
these transcriptions are used to produce fully automated ORF scores, they
closely align with scores generated by expert human graders, with a correlation
coefficient of 0.96. Importantly, these results were achieved on a
representative dataset (i.e., students with regional accents, recordings taken
in actual classrooms), using a free and publicly available speech model out of
the box (i.e., no fine-tuning). This suggests that using large-scale speech
models to assess ORF may be feasible to implement and scale in lower-resource,
linguistically diverse educational contexts. | Computational Linguistics |
What field is the article from? | Title: From Principle to Practice: Vertical Data Minimization for Machine Learning
Abstract: Aiming to train and deploy predictive models, organizations collect large
amounts of detailed client data, risking the exposure of private information in
the event of a breach. To mitigate this, policymakers increasingly demand
compliance with the data minimization (DM) principle, restricting data
collection to only that data which is relevant and necessary for the task.
Despite regulatory pressure, the problem of deploying machine learning models
that obey DM has so far received little attention. In this work, we address
this challenge in a comprehensive manner. We propose a novel vertical DM (vDM)
workflow based on data generalization, which by design ensures that no
full-resolution client data is collected during training and deployment of
models, benefiting client privacy by reducing the attack surface in case of a
breach. We formalize and study the corresponding problem of finding
generalizations that both maximize data utility and minimize empirical privacy
risk, which we quantify by introducing a diverse set of policy-aligned
adversarial scenarios. Finally, we propose a range of baseline vDM algorithms,
as well as Privacy-aware Tree (PAT), an especially effective vDM algorithm that
outperforms all baselines across several settings. We plan to release our code
as a publicly available library, helping advance the standardization of DM for
machine learning. Overall, we believe our work can help lay the foundation for
further exploration and adoption of DM principles in real-world applications. | Machine Learning |
What field is the article from? | Title: X-Adapter: Adding Universal Compatibility of Plugins for Upgraded Diffusion Model
Abstract: We introduce X-Adapter, a universal upgrader to enable the pretrained
plug-and-play modules (e.g., ControlNet, LoRA) to work directly with the
upgraded text-to-image diffusion model (e.g., SDXL) without further retraining.
We achieve this goal by training an additional network to control the frozen
upgraded model with the new text-image data pairs. In detail, X-Adapter keeps a
frozen copy of the old model to preserve the connectors of different plugins.
Additionally, X-Adapter adds trainable mapping layers that bridge the decoders
from models of different versions for feature remapping. The remapped features
will be used as guidance for the upgraded model. To enhance the guidance
ability of X-Adapter, we employ a null-text training strategy for the upgraded
model. After training, we also introduce a two-stage denoising strategy to
align the initial latents of X-Adapter and the upgraded model. Thanks to our
strategies, X-Adapter demonstrates universal compatibility with various plugins
and also enables plugins of different versions to work together, thereby
expanding the functionalities of the diffusion community. To verify the
effectiveness of the proposed method, we conduct extensive experiments and the
results show that X-Adapter may facilitate wider application in the upgraded
foundational diffusion model. | Computer Vision |
What field is the article from? | Title: Handshape recognition for Argentinian Sign Language using ProbSom
Abstract: Automatic sign language recognition is an important topic within the areas of
human-computer interaction and machine learning. On the one hand, it poses a
complex challenge that requires the intervention of various knowledge areas,
such as video processing, image processing, intelligent systems and
linguistics. On the other hand, robust recognition of sign language could
assist in the translation process and the integration of hearing-impaired
people.
This paper offers two main contributions: first, the creation of a database
of handshapes for the Argentinian Sign Language (LSA), which is a topic that
has barely been discussed so far. Secondly, a technique for image processing,
descriptor extraction and subsequent handshape classification using a
supervised adaptation of self-organizing maps that is called ProbSom. This
technique is compared to others in the state of the art, such as Support Vector
Machines (SVM), Random Forests, and Neural Networks.
The database that was built contains 800 images with 16 LSA handshapes, and
is a first step towards building a comprehensive database of Argentinian signs.
The ProbSom-based neural classifier, using the proposed descriptor, achieved an
accuracy rate above 90%. | Computer Vision |
What field is the article from? | Title: Multimodality of AI for Education: Towards Artificial General Intelligence
Abstract: This paper presents a comprehensive examination of how multimodal artificial
intelligence (AI) approaches are paving the way towards the realization of
Artificial General Intelligence (AGI) in educational contexts. It scrutinizes
the evolution and integration of AI in educational systems, emphasizing the
crucial role of multimodality, which encompasses auditory, visual, kinesthetic,
and linguistic modes of learning. This research delves deeply into the key
facets of AGI, including cognitive frameworks, advanced knowledge
representation, adaptive learning mechanisms, strategic planning, sophisticated
language processing, and the integration of diverse multimodal data sources. It
critically assesses AGI's transformative potential in reshaping educational
paradigms, focusing on enhancing teaching and learning effectiveness, filling
gaps in existing methodologies, and addressing ethical considerations and
responsible usage of AGI in educational settings. The paper also discusses the
implications of multimodal AI's role in education, offering insights into
future directions and challenges in AGI development. This exploration aims to
provide a nuanced understanding of the intersection between AI, multimodality,
and education, setting a foundation for future research and development in AGI. | Artificial Intelligence |
What field is the article from? | Title: TEAL: Tokenize and Embed ALL for Multi-modal Large Language Models
Abstract: Although Multi-modal Large Language Models (MM-LLMs) have made exciting strides recently, they still struggle to efficiently model the
interactions among multi-modal inputs and the generation in non-textual
modalities. In this work, we propose TEAL (Tokenize and Embed ALL), an
approach to treat the input from any modality as a token sequence and learn a
joint embedding space for all modalities. Specifically, for the input from any
modality, TEAL first discretizes it into a token sequence with the
off-the-shelf tokenizer and embeds the token sequence into a joint embedding
space with a learnable embedding matrix. MM-LLMs just need to predict the
multi-modal tokens autoregressively as the textual LLMs do. Finally, the
corresponding de-tokenizer is applied to generate the output in each modality
based on the predicted token sequence. With the joint embedding space, TEAL
enables the frozen LLMs to perform both understanding and generation tasks
involving non-textual modalities, such as image and audio. Thus, the textual
LLM can just work as an interface and maintain its high performance in textual
understanding and generation. Experiments show that TEAL achieves substantial
improvements in multi-modal understanding, and implements a simple scheme for
multi-modal generations. | Computational Linguistics |
What field is the article from? | Title: Kattis vs. ChatGPT: Assessment and Evaluation of Programming Tasks in the Age of Artificial Intelligence
Abstract: AI-powered education technologies can support students and teachers in
computer science education. However, with the recent developments in generative
AI, and especially the increasingly emerging popularity of ChatGPT, the
effectiveness of using large language models for solving programming tasks has
been underexplored. The present study examines ChatGPT's ability to generate
code solutions at different difficulty levels for introductory programming
courses. We conducted an experiment where ChatGPT was tested on 127 randomly
selected programming problems provided by Kattis, an automatic software grading
tool for computer science programs, often used in higher education. The results
showed that ChatGPT independently could solve 19 out of 127 programming tasks
generated and assessed by Kattis. Further, ChatGPT was found to be able to
generate accurate code solutions for simple problems but encountered
difficulties with more complex programming tasks. The results contribute to the
ongoing debate on the utility of AI-powered tools in programming education. | Artificial Intelligence |
What field is the article from? | Title: Pre-training LLMs using human-like development data corpus
Abstract: Pre-trained Large Language Models (LLMs) have shown success in a diverse set
of language inference and understanding tasks. The pre-training stage of LLMs
looks at a large corpus of raw textual data. The BabyLM shared task compares
LLM pre-training to human language acquisition, where the number of tokens seen
by 13-year-old kids is orders of magnitude smaller than the number of tokens seen by
LLMs. In this work, we pre-train and evaluate LLMs on their ability to learn
contextual word representations using roughly the same number of tokens as seen
by children. We provide a strong set of baselines; with different
architectures, evaluation of changes in performance across epochs, and reported
pre-training metrics for the strict-small and strict tracks of the task. We
also try to loosely replicate the RoBERTa baseline given by the task organizers
to observe the training robustness to hyperparameter selection and
replicability. We provide the submission details to the strict and strict-small
tracks in this report. | Computational Linguistics |
What field is the article from? | Title: Verb Conjugation in Transformers Is Determined by Linear Encodings of Subject Number
Abstract: Deep architectures such as Transformers are sometimes criticized for having
uninterpretable "black-box" representations. We use causal intervention
analysis to show that, in fact, some linguistic features are represented in a
linear, interpretable format. Specifically, we show that BERT's ability to
conjugate verbs relies on a linear encoding of subject number that can be
manipulated with predictable effects on conjugation accuracy. This encoding is
found in the subject position at the first layer and the verb position at the
last layer, but distributed across positions at middle layers, particularly
when there are multiple cues to subject number. | Computational Linguistics |
What field is the article from? | Title: Transferring CLIP's Knowledge into Zero-Shot Point Cloud Semantic Segmentation
Abstract: Traditional 3D segmentation methods can only recognize a fixed range of
classes that appear in the training set, which limits their application in
real-world scenarios due to the lack of generalization ability. Large-scale
visual-language pre-trained models, such as CLIP, have shown their
generalization ability in the zero-shot 2D vision tasks, but are still unable
to be applied to 3D semantic segmentation directly. In this work, we focus on
zero-shot point cloud semantic segmentation and propose a simple yet effective
baseline to transfer the visual-linguistic knowledge implied in CLIP to point
cloud encoder at both feature and output levels. Both feature-level and
output-level alignments are conducted between 2D and 3D encoders for effective
knowledge transfer. Concretely, a Multi-granularity Cross-modal Feature
Alignment (MCFA) module is proposed to align 2D and 3D features from global
semantic and local position perspectives for feature-level alignment. For the
output level, per-pixel pseudo labels of unseen classes are extracted using the
pre-trained CLIP model as supervision for the 3D segmentation model to mimic
the behavior of the CLIP image encoder. Extensive experiments are conducted on
two popular benchmarks of point cloud segmentation. Our method significantly outperforms previous state-of-the-art methods under the zero-shot setting (+29.2%
mIoU on SemanticKITTI and 31.8% mIoU on nuScenes), and further achieves
promising results in the annotation-free point cloud semantic segmentation
setting, showing its great potential for label-efficient learning. | Computer Vision |
What field is the article from? | Title: Reinforcement Neighborhood Selection for Unsupervised Graph Anomaly Detection
Abstract: Unsupervised graph anomaly detection is crucial for various practical
applications as it aims to identify anomalies in a graph that exhibit rare
patterns deviating significantly from the majority of nodes. Recent
advancements have utilized Graph Neural Networks (GNNs) to learn high-quality
node representations for anomaly detection by aggregating information from
neighborhoods. However, the presence of anomalies may render the observed
neighborhood unreliable and result in misleading information aggregation for
node representation learning. Selecting the proper neighborhood is critical for
graph anomaly detection but also challenging due to the absence of
anomaly-oriented guidance and the interdependence with representation learning.
To address these issues, we utilize the advantages of reinforcement learning in
adaptively learning in complex environments and propose a novel method that
incorporates Reinforcement neighborhood selection for unsupervised graph
ANomaly Detection (RAND). RAND begins by enriching the candidate neighbor pool
of the given central node with multiple types of indirect neighbors. Next, RAND
designs a tailored reinforcement anomaly evaluation module to assess the
reliability and reward of considering the given neighbor. Finally, RAND selects
the most reliable subset of neighbors based on these rewards and introduces an
anomaly-aware aggregator to amplify messages from reliable neighbors while
diminishing messages from unreliable ones. Extensive experiments on three
synthetic and two real-world datasets demonstrate that RAND outperforms the
state-of-the-art methods. | Machine Learning |
What field is the article from? | Title: Improving Denoising Diffusion Probabilistic Models via Exploiting Shared Representations
Abstract: In this work, we address the challenge of multi-task image generation with
limited data for denoising diffusion probabilistic models (DDPM), a class of
generative models that produce high-quality images by reversing a noisy
diffusion process. We propose a novel method, SR-DDPM, that leverages
representation-based techniques from few-shot learning to effectively learn
from fewer samples across different tasks. Our method consists of a core meta
architecture with shared parameters, and task-specific layers with exclusive
parameters. By exploiting the similarity between diverse data distributions,
our method can scale to multiple tasks without compromising the image quality.
We evaluate our method on standard image datasets and show that it outperforms
both unconditional and conditional DDPM in terms of FID and SSIM metrics. | Machine Learning |
What field is the article from? | Title: SCCA: Shifted Cross Chunk Attention for long contextual semantic expansion
Abstract: Sparse attention is an efficient method that can significantly decrease the computation cost, but current sparse attention methods tend to rely on window self-attention, which blocks the global information flow. To address this problem, we present Shifted Cross Chunk Attention (SCCA), which uses different KV shifting strategies to extend the receptive field in each attention layer. In addition, we combine Dilated Attention (DA) and Dilated Neighborhood Attention (DNA) to present Shifted Dilated Attention (SDA). Both SCCA and SDA can accumulate attention results across multiple heads to approximate the receptive field of full attention. In this paper, we conduct language modeling experiments using different patterns of SCCA and combinations of SCCA and SDA. Combined with Positional Interpolation (PI) and LoRA, the proposed SCCA can extend large language models (LLMs) to longer contexts more effectively than current sparse attention. Notably, SCCA extends LLaMA2 7B from a 4k context to 8k on a single V100. This attention pattern provides a plug-and-play fine-tuning method to extend model context while retaining the original architecture, and is compatible
with most existing techniques. | Computational Linguistics |
What field is the article from? | Title: Mathematical Introduction to Deep Learning: Methods, Implementations, and Theory
Abstract: This book aims to provide an introduction to the topic of deep learning
algorithms. We review essential components of deep learning algorithms in full
mathematical detail including different artificial neural network (ANN)
architectures (such as fully-connected feedforward ANNs, convolutional ANNs,
recurrent ANNs, residual ANNs, and ANNs with batch normalization) and different
optimization algorithms (such as the basic stochastic gradient descent (SGD)
method, accelerated methods, and adaptive methods). We also cover several
theoretical aspects of deep learning algorithms such as approximation
capacities of ANNs (including a calculus for ANNs), optimization theory
(including Kurdyka-{\L}ojasiewicz inequalities), and generalization errors. In
the last part of the book some deep learning approximation methods for PDEs are
reviewed including physics-informed neural networks (PINNs) and deep Galerkin
methods. We hope that this book will be useful for students and scientists who
do not yet have any background in deep learning at all and would like to gain a
solid foundation as well as for practitioners who would like to obtain a firmer
mathematical understanding of the objects and methods considered in deep
learning. | Machine Learning |
What field is the article from? | Title: A Survey of the Various Methodologies Towards making Artificial Intelligence More Explainable
Abstract: Machines are being increasingly used in decision-making processes, resulting
in the realization that decisions need explanations. Unfortunately, an
increasing number of these deployed models are of a 'black-box' nature where
the reasoning behind the decisions is unknown. Hence, there is a need for
clarity behind the reasoning of these decisions. As humans, we would want these
decisions to be presented to us in an explainable manner. However, explanations
alone are insufficient. They do not necessarily tell us how to achieve an
outcome but merely tell us what achieves the given outcome. For this reason, my
research focuses on explainability/interpretability and how it extends to
counterfactual thinking. | Artificial Intelligence |
What field is the article from? | Title: Do Similar Entities have Similar Embeddings?
Abstract: Knowledge graph embedding models (KGEMs) developed for link prediction learn
vector representations for graph entities, known as embeddings. A common tacit
assumption is the KGE entity similarity assumption, which states that these
KGEMs retain the graph's structure within their embedding space, i.e., position
similar entities close to one another. This desirable property makes KGEMs
widely used in downstream tasks such as recommender systems or drug
repurposing. Yet, the alignment of graph similarity with embedding space
similarity has rarely been formally evaluated. Typically, KGEMs are assessed
based on their sole link prediction capabilities, using ranked-based metrics
such as Hits@K or Mean Rank. This paper challenges the prevailing assumption
that entity similarity in the graph is inherently mirrored in the embedding
space. Therefore, we conduct extensive experiments to measure the capability of
KGEMs to cluster similar entities together, and investigate the nature of the
underlying factors. Moreover, we study if different KGEMs expose a different
notion of similarity. Datasets, pre-trained embeddings and code are available
at: https://github.com/nicolas-hbt/similar-embeddings. | Artificial Intelligence |
What field is the article from? | Title: Sparse Low-rank Adaptation of Pre-trained Language Models
Abstract: Fine-tuning pre-trained large language models in a parameter-efficient manner
is widely studied for its effectiveness and efficiency. The popular method of
low-rank adaptation (LoRA) offers a notable approach, hypothesizing that the
adaptation process is intrinsically low-dimensional. Although LoRA has
demonstrated commendable performance, it is implemented with a fixed and
unalterable intrinsic rank that might not always be the ideal choice.
Recognizing the need for more flexible adaptation, we extend the methodology of
LoRA to an innovative approach we call sparse low-rank adaptation (SoRA) that
enables dynamic adjustments to the intrinsic rank during the adaptation
process. We achieve this through the incorporation of a gate unit optimized
with the proximal gradient method in the training stage, controlling the
cardinality of rank under the sparsity of the gate. In the subsequent inference
stage, we eliminate the parameter blocks corresponding to the zeroed-out ranks,
to reduce each SoRA module back to a concise yet rank-optimal LoRA. Our
approach strengthens the representation power of LoRA by initializing it with a
higher rank, while efficiently taming a temporarily increased number of
parameters via updating in a sparse way. We further introduce a sparsifying
scheduler for SoRA, aiming to examine the impact of the number of non-zero
parameters on the model's memorization and generalization. Our experimental
results demonstrate that SoRA can outperform other baselines even with 70%
retained parameters and 70% training time. | Computational Linguistics |
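As a rough illustration of the gated low-rank mechanism sketched in the abstract above, the following minimal numpy toy applies a proximal (soft-thresholding) step that drives some rank gates to exactly zero, after which the module can be pruned back to a smaller LoRA-style update. The shapes, the mock gradient, and the thresholding schedule are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

def soft_threshold(g, lam):
    """Proximal operator of the L1 penalty: shrinks gate entries toward zero."""
    return np.sign(g) * np.maximum(np.abs(g) - lam, 0.0)

rng = np.random.default_rng(0)
d_out, d_in, r = 32, 64, 8            # start from a comparatively high rank r
B = rng.normal(scale=0.01, size=(d_out, r))
A = rng.normal(scale=0.01, size=(r, d_in))
gate = np.ones(r)                     # one scalar gate per rank component

def delta_W(B, gate, A):
    """Low-rank weight update; the gate controls the effective rank."""
    return B @ np.diag(gate) @ A

lr, lam = 0.1, 0.05
for _ in range(30):
    # Mock task gradient: pretend half of the rank components are not useful,
    # so their gradient pushes the corresponding gates toward zero.
    mock_grad = np.where(np.arange(r) % 2 == 0, 2.0 * gate, 0.0)
    gate = soft_threshold(gate - lr * mock_grad, lr * lam)

# At inference time, zeroed rank components are pruned, leaving a smaller,
# rank-optimal LoRA-style module that reproduces the same weight update.
kept = gate != 0
B_pruned, A_scaled = B[:, kept], np.diag(gate[kept]) @ A[kept, :]
err = np.abs(delta_W(B, gate, A) - B_pruned @ A_scaled).max()
print("effective rank:", int(kept.sum()), "reconstruction error:", float(err))
```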
What field is the article from? | Title: AI Recommendation System for Enhanced Customer Experience: A Novel Image-to-Text Method
Abstract: Existing fashion recommendation systems encounter difficulties in using
visual data for accurate and personalized recommendations. This research
describes an innovative end-to-end pipeline that uses artificial intelligence
to provide fine-grained visual interpretation for fashion recommendations. When
customers upload images of desired products or outfits, the system
automatically generates meaningful descriptions emphasizing stylistic elements.
These captions guide retrieval from a global fashion product catalogue to offer
similar alternatives that fit the visual characteristics of the original image.
On a dataset of over 100,000 categorized fashion photos, the pipeline was
trained and evaluated. The F1-score for the object detection model was 0.97,
exhibiting exact fashion object recognition capabilities optimized for
recommendation. This visually aware system represents a key advancement in
customer engagement through personalized fashion recommendations. | Information Retrieval |
What field is the article from? | Title: Learning Independently from Causality in Multi-Agent Environments
Abstract: Multi-Agent Reinforcement Learning (MARL) comprises an area of growing
interest in the field of machine learning. Despite notable advances, there are
still problems that require investigation. The lazy agent pathology is a famous
problem in MARL that denotes the event when some of the agents in a MARL team
do not contribute to the common goal, letting the teammates do all the work. In
this work, we aim to investigate this problem from a causality-based
perspective. We intend to create the bridge between the fields of MARL and
causality and argue about the usefulness of this link. We study a fully
decentralised MARL setup where agents need to learn cooperation strategies and
show that there is a causal relation between individual observations and the
team reward. The experiments carried out show how this relation can be used to
improve independent agents in MARL, resulting not only in better performance
as a team but also in more intelligent behaviours in individual
agents. | Machine Learning |
What field is the article from? | Title: Taking control: Policies to address extinction risks from advanced AI
Abstract: This paper provides policy recommendations to reduce extinction risks from
advanced artificial intelligence (AI). First, we briefly provide background
information about extinction risks from AI. Second, we argue that voluntary
commitments from AI companies would be an inappropriate and insufficient
response. Third, we describe three policy proposals that would meaningfully
address the threats from advanced AI: (1) establishing a Multinational AGI
Consortium to enable democratic oversight of advanced AI (MAGIC), (2)
implementing a global cap on the amount of computing power used to train an AI
system (global compute cap), and (3) requiring affirmative safety evaluations
to ensure that risks are kept below acceptable levels (gating critical
experiments). MAGIC would be a secure, safety-focused, internationally-governed
institution responsible for reducing risks from advanced AI and performing
research to safely harness the benefits of AI. MAGIC would also maintain
emergency response infrastructure (kill switch) to swiftly halt AI development
or withdraw model deployment in the event of an AI-related emergency. The
global compute cap would end the corporate race toward dangerous AI systems
while enabling the vast majority of AI innovation to continue unimpeded. Gating
critical experiments would ensure that companies developing powerful AI systems
are required to present affirmative evidence that these models keep extinction
risks below an acceptable threshold. After describing these recommendations, we
propose intermediate steps that the international community could take to
implement these proposals and lay the groundwork for international coordination
around advanced AI. | Artificial Intelligence |
What field is the article from? | Title: FoMo Rewards: Can we cast foundation models as reward functions?
Abstract: We explore the viability of casting foundation models as generic reward
functions for reinforcement learning. To this end, we propose a simple pipeline
that interfaces an off-the-shelf vision model with a large language model.
Specifically, given a trajectory of observations, we infer the likelihood of an
instruction describing the task that the user wants an agent to perform. We
show that this generic likelihood function exhibits the characteristics ideally
expected from a reward function: it associates high values with the desired
behaviour and lower values with several similar but incorrect policies.
Overall, our work opens the possibility of designing open-ended agents for
interactive tasks via foundation models. | Machine Learning |
What field is the article from? | Title: Investigating Multi-Pivot Ensembling with Massively Multilingual Machine Translation Models
Abstract: Massively multilingual machine translation models allow for the translation
of a large number of languages with a single model, but have limited
performance on low- and very-low-resource translation directions. Pivoting via
high-resource languages remains a strong strategy for low-resource directions,
and in this paper we revisit ways of pivoting through multiple languages.
Previous work has used a simple averaging of probability distributions from
multiple paths, but we find that this performs worse than using a single pivot,
and exacerbates the hallucination problem because the same hallucinations can
be probable across different paths. As an alternative, we propose MaxEns, a
combination strategy that is biased towards the most confident predictions,
hypothesising that confident predictions are less prone to be hallucinations.
We evaluate different strategies on the FLORES benchmark for 20 low-resource
language directions, demonstrating that MaxEns improves translation quality for
low-resource languages while reducing hallucination in translations, compared
to both direct translation and an averaging approach. On average, multi-pivot
strategies still lag behind using English as a single pivot language, raising
the question of how to identify the best pivoting strategy for a given
translation direction. | Computational Linguistics |
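The abstract describes MaxEns only at a high level, so the sketch below is a guess at what a confidence-biased combination could look like (an element-wise maximum over pivot-path distributions, renormalized), shown next to the probability-averaging baseline it is contrasted with. The vocabulary size and the example distributions are made up.

```python
import numpy as np

def avg_ensemble(dists):
    """Baseline: average the next-token distributions from all pivot paths."""
    return dists.mean(axis=0)

def max_ens(dists):
    """Confidence-biased combination: element-wise max over paths, renormalized."""
    combined = dists.max(axis=0)
    return combined / combined.sum()

# Two pivot paths over a 4-token vocabulary. Token 0 is moderately probable in
# both paths, while path 0 is very confident about token 1, so the two rules
# can disagree on the selected token.
dists = np.array([[0.45, 0.50, 0.03, 0.02],
                  [0.45, 0.05, 0.30, 0.20]])
print("average:", avg_ensemble(dists), "-> argmax", int(avg_ensemble(dists).argmax()))
print("max-ens:", max_ens(dists), "-> argmax", int(max_ens(dists).argmax()))
```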
What field is the article from? | Title: Towards Transferring Tactile-based Continuous Force Control Policies from Simulation to Robot
Abstract: The advent of tactile sensors in robotics has sparked many ideas on how
robots can leverage direct contact measurements of their environment
interactions to improve manipulation tasks. An important line of research in
this regard is that of grasp force control, which aims to manipulate objects
safely by limiting the amount of force exerted on the object. While prior works
have either hand-modeled their force controllers, employed model-based
approaches, or have not shown sim-to-real transfer, we propose a model-free
deep reinforcement learning approach trained in simulation and then transferred
to the robot without further fine-tuning. We therefore present a simulation
environment that produces realistic normal forces, which we use to train
continuous force control policies. An evaluation in which we compare against a
baseline and perform an ablation study shows that our approach outperforms the
hand-modeled baseline and that our proposed inductive bias and domain
randomization facilitate sim-to-real transfer. Code, models, and supplementary
videos are available on https://sites.google.com/view/rl-force-ctrl | Robotics |
What field is the article from? | Title: Speculative Exploration on the Concept of Artificial Agents Conducting Autonomous Research
Abstract: This paper engages in a speculative exploration of the concept of an
artificial agent capable of conducting research. Initially, it examines how the
act of research can be conceptually characterized, aiming to provide a starting
point for discussions about what it means to create such agents. The focus then
shifts to the core components of research: question formulation, hypothesis
generation, and hypothesis verification. This discussion includes a
consideration of the potential and challenges associated with enabling machines
to autonomously perform these tasks. Subsequently, this paper briefly considers
the overlapping themes and interconnections that underlie them. Finally, the
paper presents preliminary thoughts on prototyping as an initial step towards
uncovering the challenges involved in developing these research-capable agents. | Artificial Intelligence |
What field is the article from? | Title: Towards Context-Aware Domain Generalization: Representing Environments with Permutation-Invariant Networks
Abstract: In this work, we show that information about the context of an input $X$ can
improve the predictions of deep learning models when applied in new domains or
production environments. We formalize the notion of context as a
permutation-invariant representation of a set of data points that originate
from the same environment/domain as the input itself. These representations are
jointly learned with a standard supervised learning objective, providing
incremental information about the unknown outcome. Furthermore, we offer a
theoretical analysis of the conditions under which our approach can, in
principle, yield benefits, and formulate two necessary criteria that can be
easily verified in practice. Additionally, we contribute insights into the kind
of distribution shifts for which our approach promises robustness. Our
empirical evaluation demonstrates the effectiveness of our approach for both
low-dimensional and high-dimensional data sets. Finally, we demonstrate that we
can reliably detect scenarios where a model is tasked with unwarranted
extrapolation in out-of-distribution (OOD) domains, identifying potential
failure cases. Consequently, we showcase a method to select between the most
predictive and the most robust model, circumventing the well-known trade-off
between predictive performance and robustness. | Machine Learning |
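A minimal sketch of the central idea, assuming a deep-sets style encoder: points from the same environment are embedded, mean-pooled into a permutation-invariant context vector, and concatenated to the input of a standard predictor. The layer sizes and the tanh embedding are illustrative choices, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)
W_embed = rng.normal(scale=0.1, size=(5, 16))     # per-point embedding (d_in=5 -> 16)
W_head = rng.normal(scale=0.1, size=(5 + 16, 1))  # predictor sees input + context

def context(env_points):
    """Mean pooling over the set makes the representation permutation-invariant."""
    return np.tanh(env_points @ W_embed).mean(axis=0)

def predict(x, env_points):
    c = context(env_points)
    return np.concatenate([x, c]) @ W_head

x = rng.normal(size=5)
env = rng.normal(size=(20, 5))                    # other points from the same domain
print(predict(x, env), predict(x, env[::-1]))     # identical: set order does not matter
```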
What field is the article from? | Title: Predictive Minds: LLMs As Atypical Active Inference Agents
Abstract: Large language models (LLMs) like GPT are often conceptualized as passive
predictors, simulators, or even stochastic parrots. We instead conceptualize
LLMs by drawing on the theory of active inference originating in cognitive
science and neuroscience. We examine similarities and differences between
traditional active inference systems and LLMs, leading to the conclusion that,
currently, LLMs lack a tight feedback loop between acting in the world and
perceiving the impacts of their actions, but otherwise fit in the active
inference paradigm. We list reasons why this loop may soon be closed, and
possible consequences of this including enhanced model self-awareness and the
drive to minimize prediction error by changing the world. | Computational Linguistics |
What field is the article from? | Title: Detailed Human-Centric Text Description-Driven Large Scene Synthesis
Abstract: Text-driven large scene image synthesis has made significant progress with
diffusion models, but controlling it is challenging. While using additional
spatial controls with corresponding texts has improved the controllability of
large scene synthesis, it is still challenging to faithfully reflect detailed
text descriptions without user-provided controls. Here, we propose
DetText2Scene, a novel text-driven large-scale image synthesis with high
faithfulness, controllability, and naturalness in a global context for the
detailed human-centric text description. Our DetText2Scene consists of 1)
hierarchical keypoint-box layout generation from the detailed description by
leveraging large language model (LLM), 2) view-wise conditioned joint diffusion
process to synthesize a large scene from the given detailed text with
LLM-generated grounded keypoint-box layout and 3) pixel perturbation-based
pyramidal interpolation to progressively refine the large scene for global
coherence. Our DetText2Scene significantly outperforms prior arts in
text-to-large scene synthesis qualitatively and quantitatively, demonstrating
strong faithfulness with detailed descriptions, superior controllability, and
excellent naturalness in a global context. | Computer Vision |
What field is the article from? | Title: Refine, Discriminate and Align: Stealing Encoders via Sample-Wise Prototypes and Multi-Relational Extraction
Abstract: This paper introduces RDA, a pioneering approach designed to address two
primary deficiencies prevalent in previous endeavors aiming at stealing
pre-trained encoders: (1) suboptimal performances attributed to biased
optimization objectives, and (2) elevated query costs stemming from the
end-to-end paradigm that necessitates querying the target encoder every epoch.
Specifically, we initially Refine the representations of the target encoder for
each training sample, thereby establishing a less biased optimization objective
before the steal-training phase. This is accomplished via a sample-wise
prototype, which consolidates the target encoder's representations for a given
sample's various perspectives. Demanding exponentially fewer queries compared
to the end-to-end approach, prototypes can be instantiated to guide subsequent
query-free training. For more potent efficacy, we develop a multi-relational
extraction loss that trains the surrogate encoder to Discriminate mismatched
embedding-prototype pairs while Aligning those matched ones in terms of both
amplitude and angle. In this way, the trained surrogate encoder achieves
state-of-the-art results across the board in various downstream datasets with
limited queries. Moreover, RDA is shown to be robust to multiple widely-used
defenses. | Machine Learning |
What field is the article from? | Title: Probing and Mitigating Intersectional Social Biases in Vision-Language Models with Counterfactual Examples
Abstract: While vision-language models (VLMs) have achieved remarkable performance
improvements recently, there is growing evidence that these models also possess
harmful biases with respect to social attributes such as gender and race. Prior
studies have primarily focused on probing such bias attributes individually
while ignoring biases associated with intersections between social attributes.
This could be due to the difficulty of collecting an exhaustive set of
image-text pairs for various combinations of social attributes. To address this
challenge, we employ text-to-image diffusion models to produce counterfactual
examples for probing intersectional social biases at scale. Our approach
utilizes Stable Diffusion with cross attention control to produce sets of
counterfactual image-text pairs that are highly similar in their depiction of a
subject (e.g., a given occupation) while differing only in their depiction of
intersectional social attributes (e.g., race & gender). Through our
over-generate-then-filter methodology, we produce SocialCounterfactuals, a
high-quality dataset containing over 171k image-text pairs for probing
intersectional biases related to gender, race, and physical characteristics. We
conduct extensive experiments to demonstrate the usefulness of our generated
dataset for probing and mitigating intersectional social biases in
state-of-the-art VLMs. | Computer Vision |
What field is the article from? | Title: Large Language Model-Driven Classroom Flipping: Empowering Student-Centric Peer Questioning with Flipped Interaction
Abstract: Reciprocal questioning is essential for effective teaching and learning,
fostering active engagement and deeper understanding through collaborative
interactions, especially in large classrooms. Can large language model (LLM),
such as OpenAI's GPT (Generative Pre-trained Transformer) series, assist in
this? This paper investigates a pedagogical approach of classroom flipping
based on flipped interaction in LLMs. Flipped interaction involves using
language models to prioritize generating questions instead of answers to
prompts. We demonstrate how traditional classroom flipping techniques,
including Peer Instruction and Just-in-Time Teaching (JiTT), can be enhanced
through flipped interaction techniques, creating student-centric questions for
hybrid teaching. In particular, we propose a workflow to integrate prompt
engineering with clicker and JiTT quizzes by a poll-prompt-quiz routine and a
quiz-prompt-discuss routine to empower students to self-regulate their learning
capacity and enable teachers to swiftly personalize training pathways. We
develop an LLM-driven chatbot software that digitizes various elements of
classroom flipping and facilitates the assessment of students using these
routines to deliver peer-generated questions. We have applied our LLM-driven
chatbot software for teaching both undergraduate and graduate students from
2020 to 2022, and found it effective for bridging the gap between teachers and
students in remote teaching during the COVID-19 pandemic years. In particular,
LLM-driven classroom flipping can be particularly beneficial in large class
settings to optimize teaching pace and enable engaging classroom experiences. | Computers and Society |
What field is the article from? | Title: CPSOR-GCN: A Vehicle Trajectory Prediction Method Powered by Emotion and Cognitive Theory
Abstract: Active safety systems on vehicles often face problems with false alarms. Most
active safety systems predict the driver's trajectory with the assumption that
the driver is always in a normal emotion, and then infer risks. However, the
driver's trajectory uncertainty increases under abnormal emotions. This paper
proposes a new trajectory prediction model: CPSOR-GCN, which predicts vehicle
trajectories under abnormal emotions. At the physical level, the interaction
features between vehicles are extracted by the physical GCN module. At the
cognitive level, SOR cognitive theory is used as prior knowledge to build a
Dynamic Bayesian Network (DBN) structure. The conditional probability and state
transition probability of nodes from the calibrated SOR-DBN quantify the causal
relationship between cognitive factors, which is embedded into the cognitive
GCN module to extract the characteristics of the influence mechanism of
emotions on driving behavior. The CARLA-SUMO joint driving simulation platform
was built to develop dangerous pre-crash scenarios. Methods of recreating
traffic scenes were used to naturally induce abnormal emotions. The experiment
collected data from 26 participants to verify the proposed model. Compared with
the model that only considers physical motion features, the prediction accuracy
of the proposed model is increased by 68.70%. Furthermore, considering the
SOR-DBN reduces the prediction error of the trajectory by 15.93%. Compared with
other advanced trajectory prediction models, the results of CPSOR-GCN also have
lower errors. This model can be integrated into active safety systems to better
adapt to the driver's emotions, which could effectively reduce false alarms. | Artificial Intelligence |
What field is the article from? | Title: Linking Surface Facts to Large-Scale Knowledge Graphs
Abstract: Open Information Extraction (OIE) methods extract facts from natural language
text in the form of ("subject"; "relation"; "object") triples. These facts are,
however, merely surface forms, the ambiguity of which impedes their downstream
usage; e.g., the surface phrase "Michael Jordan" may refer to either the former
basketball player or the university professor. Knowledge Graphs (KGs), on the
other hand, contain facts in a canonical (i.e., unambiguous) form, but their
coverage is limited by a static schema (i.e., a fixed set of entities and
predicates). To bridge this gap, we need the best of both worlds: (i) high
coverage of free-text OIEs, and (ii) semantic precision (i.e., monosemy) of
KGs. In order to achieve this goal, we propose a new benchmark with novel
evaluation protocols that can, for example, measure fact linking performance on
a granular triple slot level, while also measuring if a system has the ability
to recognize that a surface form has no match in the existing KG. Our extensive
evaluation of several baselines show that detection of out-of-KG entities and
predicates is more difficult than accurate linking to existing ones, thus
calling for more research efforts on this difficult task. We publicly release
all resources (data, benchmark and code) on
https://github.com/nec-research/fact-linking. | Computational Linguistics |
What field is the article from? | Title: Language Models, Agent Models, and World Models: The LAW for Machine Reasoning and Planning
Abstract: Despite their tremendous success in many applications, large language models
often fall short of consistent reasoning and planning in various (language,
embodied, and social) scenarios, due to inherent limitations in their
inference, learning, and modeling capabilities. In this position paper, we
present a new perspective of machine reasoning, LAW, that connects the concepts
of Language models, Agent models, and World models, for more robust and
versatile reasoning capabilities. In particular, we propose that world and
agent models are a better abstraction of reasoning, which introduces the crucial
elements of deliberate human-like reasoning, including beliefs about the world
and other agents, anticipation of consequences, goals/rewards, and strategic
planning. Crucially, language models in LAW serve as a backend to implement the
system or its elements and hence provide the computational power and
adaptability. We review the recent studies that have made relevant progress and
discuss future research directions towards operationalizing the LAW framework. | Artificial Intelligence |
What field is the article from? | Title: A Self-enhancement Approach for Domain-specific Chatbot Training via Knowledge Mining and Digest
Abstract: Large Language Models (LLMs), despite their great power in language
generation, often encounter challenges when dealing with intricate and
knowledge-demanding queries in specific domains. This paper introduces a novel
approach to enhance LLMs by effectively extracting the relevant knowledge from
domain-specific textual sources, and the adaptive training of a chatbot with
domain-specific inquiries. Our two-step approach starts from training a
knowledge miner, namely LLMiner, which autonomously extracts Question-Answer
pairs from relevant documents through a chain-of-thought reasoning process.
Subsequently, we blend the mined QA pairs with a conversational dataset to
fine-tune the LLM as a chatbot, thereby enriching its domain-specific expertise
and conversational capabilities. We also developed a new evaluation benchmark
which comprises four domain-specific text corpora and associated human-crafted
QA pairs for testing. Our model shows remarkable performance improvement over
a generally aligned LLM and surpasses domain-adapted models directly fine-tuned
on domain corpus. In particular, LLMiner achieves this with minimal human
intervention, requiring only 600 seed instances, thereby providing a pathway
towards self-improvement of LLMs through model-synthesized training data. | Computational Linguistics |
What field is the article from? | Title: Towards Automated Recipe Genre Classification using Semi-Supervised Learning
Abstract: Sharing cooking recipes is a great way to exchange culinary ideas and provide
instructions for food preparation. However, categorizing raw recipes found
online into appropriate food genres can be challenging due to a lack of
adequate labeled data. In this study, we present a dataset named the
``Assorted, Archetypal, and Annotated Two Million Extended (3A2M+) Cooking
Recipe Dataset" that contains two million culinary recipes labeled in
respective categories with extended named entities extracted from recipe
descriptions. This collection of data includes various features such as title,
NER, directions, and extended NER, as well as nine different labels
representing genres including bakery, drinks, non-veg, vegetables, fast food,
cereals, meals, sides, and fusions. The proposed pipeline named 3A2M+ extends
the size of the Named Entity Recognition (NER) list to address missing named
entities like heat, time or process from the recipe directions using two NER
extraction tools. The 3A2M+ dataset provides a comprehensive solution to
various challenging recipe-related tasks, including classification, named
entity recognition, and recipe generation. Furthermore, we have applied
traditional machine learning, deep learning, and pre-trained language models to
classify the recipes into their corresponding genre and achieved an overall
accuracy of 98.6\%. Our investigation indicates that the title feature played a
more significant role in classifying the genre. | Computational Linguistics |
What field is the article from? | Title: On Tuning Neural ODE for Stability, Consistency and Faster Convergence
Abstract: Neural ODEs parameterize a differential equation using a continuous-depth neural
network and solve it using a numerical ODE integrator. These models offer a
constant memory cost compared to models with a discrete sequence of hidden layers,
in which memory cost increases linearly with the number of layers. In addition
to memory efficiency, other benefits of neural ODEs include adaptability of the
evaluation approach to the input, and flexibility to choose numerical precision or
fast training. However, despite having all these benefits, it still has some
limitations. We identify the ODE-integrator (also called ODE-solver) as the
weakest link in the chain as it may have stability, consistency and convergence
(CCS) issues and may suffer from slower convergence or may not converge at all.
We propose a first-order Nesterov's accelerated gradient (NAG) based ODE-solver
which is proven to be tuned vis-a-vis CCS conditions. We empirically
demonstrate the efficacy of our approach by training faster, while achieving
better or comparable performance against neural ODEs employing other fixed-step
explicit ODE solvers as well as discrete-depth models such as ResNet in three
different tasks including supervised classification, density estimation, and
time-series modelling. | Machine Learning |
What field is the article from? | Title: Mitigating Estimation Errors by Twin TD-Regularized Actor and Critic for Deep Reinforcement Learning
Abstract: We address the issue of estimation bias in deep reinforcement learning (DRL)
by introducing solution mechanisms that include a new, twin TD-regularized
actor-critic (TDR) method. It aims at reducing both over and under-estimation
errors. With TDR and by combining good DRL improvements, such as distributional
learning and long N-step surrogate stage reward (LNSS) method, we show that our
new TDR-based actor-critic learning has enabled DRL methods to outperform their
respective baselines in challenging environments in DeepMind Control Suite.
Furthermore, they elevate TD3 and SAC respectively to a level of performance
comparable to that of D4PG (the current SOTA), and they also improve the
performance of D4PG to a new SOTA level measured by mean reward, convergence
speed, learning success rate, and learning variance. | Machine Learning |
What field is the article from? | Title: An Information-Flow Perspective on Algorithmic Fairness
Abstract: This work presents insights gained by investigating the relationship between
algorithmic fairness and the concept of secure information flow. The problem of
enforcing secure information flow is well-studied in the context of information
security: If secret information may "flow" through an algorithm or program in
such a way that it can influence the program's output, then that is considered
insecure information flow as attackers could potentially observe (parts of) the
secret.
There is a strong correspondence between secure information flow and
algorithmic fairness: if protected attributes such as race, gender, or age are
treated as secret program inputs, then secure information flow means that these
``secret'' attributes cannot influence the result of a program. While most
research in algorithmic fairness evaluation concentrates on studying the impact
of algorithms (often treating the algorithm as a black-box), the concepts
derived from information flow can be used both for the analysis of disparate
treatment as well as disparate impact w.r.t. a structural causal model.
In this paper, we examine the relationship between quantitative as well as
qualitative information-flow properties and fairness. Moreover, based on this
duality, we derive a new quantitative notion of fairness called fairness
spread, which can be easily analyzed using quantitative information flow and
which strongly relates to counterfactual fairness. We demonstrate that
off-the-shelf tools for information-flow properties can be used in order to
formally analyze a program's algorithmic fairness properties, including the new
notion of fairness spread as well as established notions such as demographic
parity. | Cryptography and Security |
What field is the article from? | Title: Corrupting Convolution-based Unlearnable Datasets with Pixel-based Image Transformations
Abstract: Unlearnable datasets (UDs) lead to a drastic drop in the generalization performance
of models trained on them by introducing elaborate and imperceptible
perturbations into clean training sets. Many existing defenses, e.g., JPEG
compression and adversarial training, effectively counter UDs based on
norm-constrained additive noise. However, a brand-new type of convolution-based
UD has been proposed that renders all existing defenses ineffective, presenting
a greater challenge to defenders. To address this, we express the
convolution-based unlearnable sample as the result of multiplying a matrix by a
clean sample in a simplified scenario, and formalize the intra-class matrix
inconsistency as $\Theta_{imi}$, inter-class matrix consistency as
$\Theta_{imc}$ to investigate the working mechanism of the convolution-based
UDs. We conjecture that increasing both of these metrics will mitigate the
unlearnability effect. Through validation experiments that commendably support
our hypothesis, we further design a random matrix to boost both $\Theta_{imi}$
and $\Theta_{imc}$, achieving a notable degree of defense effect. Hence, by
building upon and extending these facts, we first propose a brand-new image
COrruption that employs randomly multiplicative transformation via
INterpolation operation to successfully defend against convolution-based UDs.
Our approach leverages global pixel random interpolations, effectively
suppressing the impact of multiplicative noise in convolution-based UDs.
Additionally, we have also designed two new forms of convolution-based UDs, and
find that our defense is the most effective against them. | Computer Vision |
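A toy sketch of a pixel-wise random-interpolation corruption in the spirit of the defense described above: each pixel is blended with a randomly chosen partner pixel, which disturbs structured multiplicative perturbations. The partner selection and blending weights are assumptions, not the paper's exact operation.

```python
import numpy as np

def random_interpolation(image, rng, max_lambda=0.5):
    """image: (H, W, C) float array in [0, 1]; returns a randomly blended copy."""
    h, w, _ = image.shape
    # Random partner pixel for every location and a random blend weight.
    ys = rng.integers(0, h, size=(h, w))
    xs = rng.integers(0, w, size=(h, w))
    lam = rng.uniform(0, max_lambda, size=(h, w, 1))
    partners = image[ys, xs]                   # gather partner pixels
    return (1 - lam) * image + lam * partners

rng = np.random.default_rng(0)
img = rng.uniform(size=(8, 8, 3))
out = random_interpolation(img, rng)
print(out.shape, float(np.abs(out - img).mean()))
```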
What field is the article from? | Title: Trustworthy AI: Deciding What to Decide
Abstract: When engaging in strategic decision-making, we are frequently confronted with
overwhelming information and data. The situation can be further complicated
when certain pieces of evidence contradict each other or become paradoxical.
The primary challenge is how to determine which information can be trusted when
we adopt Artificial Intelligence (AI) systems for decision-making. This issue
is known as deciding what to decide or Trustworthy AI. However, the AI system
itself is often considered an opaque black box. We propose a new approach to
address this issue by introducing a novel framework of Trustworthy AI (TAI)
encompassing three crucial components of AI: representation space, loss
function, and optimizer. Each component is loosely coupled with four TAI
properties. Altogether, the framework consists of twelve TAI properties. We aim
to use this framework to conduct the TAI experiments by quantitative and
qualitative research methods to satisfy TAI properties for the decision-making
context. The framework allows us to formulate an optimal prediction model
trained by the given dataset for applying the strategic investment decision of
credit default swaps (CDS) in the technology sector. Finally, we provide our
view of the future direction of TAI research. | Artificial Intelligence |
What field is the article from? | Title: PathoDuet: Foundation Models for Pathological Slide Analysis of H&E and IHC Stains
Abstract: Large amounts of digitized histopathological data display a promising future
for developing pathological foundation models via self-supervised learning
methods. Foundation models pretrained with these methods serve as a good basis
for downstream tasks. However, the gap between natural and histopathological
images hinders the direct application of existing methods. In this work, we
present PathoDuet, a series of pretrained models on histopathological images,
and a new self-supervised learning framework in histopathology. The framework
features a newly introduced pretext token and later task raisers to
explicitly utilize certain relations between images, like multiple
magnifications and multiple stains. Based on this, two pretext tasks,
cross-scale positioning and cross-stain transferring, are designed to pretrain
the model on Hematoxylin and Eosin (H\&E) images and transfer the model to
immunohistochemistry (IHC) images, respectively. To validate the efficacy of
our models, we evaluate the performance over a wide variety of downstream
tasks, including patch-level colorectal cancer subtyping and whole slide image
(WSI)-level classification in H\&E field, together with expression level
prediction of IHC marker and tumor identification in IHC field. The
experimental results show the superiority of our models over most tasks and the
efficacy of proposed pretext tasks. The codes and models are available at
https://github.com/openmedlab/PathoDuet. | Computer Vision |
What field is the article from? | Title: Informative Priors Improve the Reliability of Multimodal Clinical Data Classification
Abstract: Machine learning-aided clinical decision support has the potential to
significantly improve patient care. However, existing efforts in this domain
for principled quantification of uncertainty have largely been limited to
applications of ad-hoc solutions that do not consistently improve reliability.
In this work, we consider stochastic neural networks and design a tailor-made
multimodal data-driven (M2D2) prior distribution over network parameters. We
use simple and scalable Gaussian mean-field variational inference to train a
Bayesian neural network using the M2D2 prior. We train and evaluate the
proposed approach using clinical time-series data in MIMIC-IV and corresponding
chest X-ray images in MIMIC-CXR for the classification of acute care
conditions. Our empirical results show that the proposed method produces a more
reliable predictive model compared to deterministic and Bayesian neural network
baselines. | Computer Vision |
What field is the article from? | Title: An Eye on Clinical BERT: Investigating Language Model Generalization for Diabetic Eye Disease Phenotyping
Abstract: Diabetic eye disease is a major cause of blindness worldwide. The ability to
monitor relevant clinical trajectories and detect lapses in care is critical to
managing the disease and preventing blindness. Alas, much of the information
necessary to support these goals is found only in the free text of the
electronic medical record. To fill this information gap, we introduce a system
for extracting evidence from clinical text of 19 clinical concepts related to
diabetic eye disease and inferring relevant attributes for each. In developing
this ophthalmology phenotyping system, we are also afforded a unique
opportunity to evaluate the effectiveness of clinical language models at
adapting to new clinical domains. Across multiple training paradigms, we find
that BERT language models pretrained on out-of-distribution clinical data offer
no significant improvement over BERT language models pretrained on non-clinical
data for our domain. Our study tempers recent claims that language models
pretrained on clinical data are necessary for clinical NLP tasks and highlights
the importance of not treating clinical language data as a single homogeneous
domain. | Computational Linguistics |
What field is the article from? | Title: JaxMARL: Multi-Agent RL Environments in JAX
Abstract: Benchmarks play an important role in the development of machine learning
algorithms. For example, research in reinforcement learning (RL) has been
heavily influenced by available environments and benchmarks. However, RL
environments are traditionally run on the CPU, limiting their scalability with
typical academic compute. Recent advancements in JAX have enabled the wider use
of hardware acceleration to overcome these computational hurdles, enabling
massively parallel RL training pipelines and environments. This is particularly
useful for multi-agent reinforcement learning (MARL) research. First of all,
multiple agents must be considered at each environment step, adding
computational burden, and secondly, the sample complexity is increased due to
non-stationarity, decentralised partial observability, or other MARL
challenges. In this paper, we present JaxMARL, the first open-source code base
that combines ease of use with GPU-enabled efficiency, and supports a large
number of commonly used MARL environments as well as popular baseline
algorithms. When considering wall clock time, our experiments show that per-run
our JAX-based training pipeline is up to 12500x faster than existing
approaches. This enables efficient and thorough evaluations, with the potential
to alleviate the evaluation crisis of the field. We also introduce and
benchmark SMAX, a vectorised, simplified version of the popular StarCraft
Multi-Agent Challenge, which removes the need to run the StarCraft II game
engine. This not only enables GPU acceleration, but also provides a more
flexible MARL environment, unlocking the potential for self-play,
meta-learning, and other future applications in MARL. We provide code at
https://github.com/flairox/jaxmarl. | Machine Learning |
What field is the article from? | Title: Tackling the Abstraction and Reasoning Corpus (ARC) with Object-centric Models and the MDL Principle
Abstract: The Abstraction and Reasoning Corpus (ARC) is a challenging benchmark,
introduced to foster AI research towards human-level intelligence. It is a
collection of unique tasks about generating colored grids, specified by a few
examples only. In contrast to the transformation-based programs of existing
work, we introduce object-centric models that are in line with the natural
programs produced by humans. Our models can not only perform predictions, but
also provide joint descriptions for input/output pairs. The Minimum Description
Length (MDL) principle is used to efficiently search the large model space. A
diverse range of tasks are solved, and the learned models are similar to the
natural programs. We demonstrate the generality of our approach by applying it
to a different domain. | Artificial Intelligence |
What field is the article from? | Title: TempME: Towards the Explainability of Temporal Graph Neural Networks via Motif Discovery
Abstract: Temporal graphs are widely used to model dynamic systems with time-varying
interactions. In real-world scenarios, the underlying mechanisms of generating
future interactions in dynamic systems are typically governed by a set of
recurring substructures within the graph, known as temporal motifs. Despite the
success and prevalence of current temporal graph neural networks (TGNN), it
remains uncertain which temporal motifs are recognized as the significant
indications that trigger a certain prediction from the model, which is a
critical challenge for advancing the explainability and trustworthiness of
current TGNNs. To address this challenge, we propose a novel approach, called
Temporal Motifs Explainer (TempME), which uncovers the most pivotal temporal
motifs guiding the prediction of TGNNs. Derived from the information bottleneck
principle, TempME extracts the most interaction-related motifs while minimizing
the amount of contained information to preserve the sparsity and succinctness
of the explanation. Events in the explanations generated by TempME are verified
to be more spatiotemporally correlated than those of existing approaches,
providing more understandable insights. Extensive experiments validate the
superiority of TempME, with up to 8.21% increase in terms of explanation
accuracy across six real-world datasets and up to 22.96% increase in boosting
the prediction Average Precision of current TGNNs. | Machine Learning |
What field is the article from? | Title: Discriminator Guidance for Autoregressive Diffusion Models
Abstract: We introduce discriminator guidance in the setting of Autoregressive
Diffusion Models. The use of a discriminator to guide a diffusion process has
previously been used for continuous diffusion models, and in this work we
derive ways of using a discriminator together with a pretrained generative
model in the discrete case. First, we show that using an optimal discriminator
will correct the pretrained model and enable exact sampling from the underlying
data distribution. Second, to account for the realistic scenario of using a
sub-optimal discriminator, we derive a sequential Monte Carlo algorithm which
iteratively takes the predictions from the discriminator into account during the
generation process. We test these approaches on the task of generating
molecular graphs and show how the discriminator improves the generative
performance over using only the pretrained model. | Machine Learning |
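A minimal sketch of the sequential Monte Carlo flavor described above: partial sequences (particles) are extended by a toy pretrained model, weighted by a discriminator-derived ratio, and resampled. The generator, discriminator, and weighting rule are placeholders, not the paper's derivation.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, STEPS, N_PARTICLES = 5, 4, 8

def generator_step(prefix):
    """Toy pretrained model: uniform next-token distribution regardless of prefix."""
    return np.full(VOCAB, 1.0 / VOCAB)

def discriminator(seq):
    """Toy discriminator in (0, 1); here it favors sequences containing token 0."""
    return 0.9 if 0 in seq else 0.2

particles = [[] for _ in range(N_PARTICLES)]
for _ in range(STEPS):
    weights = np.zeros(N_PARTICLES)
    for i, seq in enumerate(particles):
        probs = generator_step(seq)
        seq.append(int(rng.choice(VOCAB, p=probs)))
        d = discriminator(seq)
        weights[i] = d / (1.0 - d)        # importance weight from the discriminator
    weights /= weights.sum()
    idx = rng.choice(N_PARTICLES, size=N_PARTICLES, p=weights)   # resample
    particles = [list(particles[i]) for i in idx]

print(particles[0])
```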
What field is the article from? | Title: Causality Analysis for Evaluating the Security of Large Language Models
Abstract: Large Language Models (LLMs) such as GPT and Llama2 are increasingly adopted
in many safety-critical applications. Their security is thus essential. Even
with considerable efforts spent on reinforcement learning from human feedback
(RLHF), recent studies have shown that LLMs are still subject to attacks such
as adversarial perturbation and Trojan attacks. Further research is thus needed
to evaluate their security and/or understand the lack of it. In this work, we
propose a framework for conducting light-weight causality-analysis of LLMs at
the token, layer, and neuron level. We applied our framework to open-source
LLMs such as Llama2 and Vicuna and had multiple interesting discoveries. Based
on a layer-level causality analysis, we show that RLHF has the effect of
overfitting a model to harmful prompts. It implies that such security can be
easily overcome by `unusual' harmful prompts. As evidence, we propose an
adversarial perturbation method that achieves 100\% attack success rate on the
red-teaming tasks of the Trojan Detection Competition 2023. Furthermore, we
show the existence of one mysterious neuron in both Llama2 and Vicuna that has
an unreasonably high causal effect on the output. While we are uncertain on why
such a neuron exists, we show that it is possible to conduct a ``Trojan''
attack targeting that particular neuron to completely cripple the LLM, i.e., we
can generate transferable suffixes to prompts that frequently make the LLM
produce meaningless responses. | Artificial Intelligence |
What field is the article from? | Title: Architecture of Data Anomaly Detection-Enhanced Decentralized Expert System for Early-Stage Alzheimer's Disease Prediction
Abstract: Alzheimer's Disease is a global health challenge that requires early and
accurate detection to improve patient outcomes. Magnetic Resonance Imaging
(MRI) holds significant diagnostic potential, but its effective analysis
remains a formidable task. This study introduces a groundbreaking decentralized
expert system that cleverly combines blockchain technology with Artificial
Intelligence (AI) to integrate robust anomaly detection for patient-submitted
data.
Traditional diagnostic methods often lead to delayed and imprecise
predictions, especially in the early stages of the disease. Centralized data
repositories struggle to manage the immense volumes of MRI data, and persistent
privacy concerns hinder collaborative efforts. Our innovative solution
harnesses decentralization to protect data integrity and patient privacy,
facilitated by blockchain technology. It not only emphasizes AI-driven MRI
analysis but also incorporates a sophisticated data anomaly detection
architecture. These mechanisms scrutinize patient-contributed data for various
issues, including data quality problems and atypical findings within MRI
images.
Conducting an exhaustive check of MRI image correctness and quality directly
on the blockchain is impractical due to computational complexity and cost
constraints. Typically, such checks are performed off-chain, and the blockchain
securely records the results. This comprehensive approach empowers our
decentralized app to provide more precise early-stage Alzheimer's Disease
predictions. By merging the strengths of blockchain, AI, and anomaly detection,
our system represents a pioneering step towards revolutionizing disease
diagnostics. | Cryptography and Security |
What field is the article from? | Title: Defense semantics of argumentation: revisit
Abstract: In this paper we introduce a novel semantics, called defense semantics, for
Dung's abstract argumentation frameworks in terms of a notion of (partial)
defence, which is a triple encoding that one argument is (partially) defended
by another argument via attacking the attacker of the first argument. In terms
of defense semantics, we show that defenses related to self-attacked arguments
and arguments in 3-cycles are unsatifiable under any situation and therefore
can be removed without affecting the defense semantics of an AF. Then, we
introduce a new notion of defense equivalence of AFs, and compare defense
equivalence with standard equivalence and strong equivalence, respectively.
Finally, by exploiting defense semantics, we define two kinds of reasons for
accepting arguments, i.e., direct reasons and root reasons, and a notion of
root equivalence of AFs that can be used in argumentation summarization. | Artificial Intelligence |
What field is the article from? | Title: LiFT: Unsupervised Reinforcement Learning with Foundation Models as Teachers
Abstract: We propose a framework that leverages foundation models as teachers, guiding
a reinforcement learning agent to acquire semantically meaningful behavior
without human feedback. In our framework, the agent receives task instructions
grounded in a training environment from large language models. Then, a
vision-language model guides the agent in learning the multi-task
language-conditioned policy by providing reward feedback. We demonstrate that
our method can learn semantically meaningful skills in a challenging open-ended
MineDojo environment while prior unsupervised skill discovery methods struggle.
Additionally, we discuss observed challenges of using off-the-shelf foundation
models as teachers and our efforts to address them. | Machine Learning |
What field is the article from? | Title: A Method to Improve the Performance of Reinforcement Learning Based on the Y Operator for a Class of Stochastic Differential Equation-Based Child-Mother Systems
Abstract: This paper introduces a novel operator, termed the Y operator, to elevate
control performance in Actor-Critic (AC) based reinforcement learning for
systems governed by stochastic differential equations (SDEs). The Y operator
ingeniously integrates the stochasticity of a class of child-mother systems into
the Critic network's loss function, yielding substantial advancements in the
control performance of RL algorithms. Additionally, the Y operator elegantly
reformulates the challenge of solving partial differential equations for the
state-value function into a parallel problem for the drift and diffusion
functions within the system's SDEs. A rigorous mathematical proof confirms the
operator's validity. This transformation enables the Y Operator-based
Reinforcement Learning (YORL) framework to efficiently tackle optimal control
problems in both model-based and data-driven systems. The superiority of YORL is
demonstrated through linear and nonlinear numerical examples showing its
enhanced performance over existing methods post convergence. | Artificial Intelligence |
What field is the article from? | Title: Gradient Informed Proximal Policy Optimization
Abstract: We introduce a novel policy learning method that integrates analytical
gradients from differentiable environments with the Proximal Policy
Optimization (PPO) algorithm. To incorporate analytical gradients into the PPO
framework, we introduce the concept of an $\alpha$-policy that stands as a
locally superior policy. By adaptively modifying the $\alpha$ value, we can
effectively manage the influence of analytical policy gradients during
learning. To this end, we suggest metrics for assessing the variance and bias
of analytical gradients, reducing dependence on these gradients when high
variance or bias is detected. Our proposed approach outperforms baseline
algorithms in various scenarios, such as function optimization, physics
simulations, and traffic control environments. Our code can be found online:
https://github.com/SonSang/gippo. | Machine Learning |
What field is the article from? | Title: Comparison of metaheuristics for the firebreak placement problem: a simulation-based optimization approach
Abstract: The problem of firebreak placement is crucial for fire prevention, and its
effectiveness at landscape scale will depend on their ability to impede the
progress of future wildfires. To provide an adequate response, it is therefore
necessary to consider the stochastic nature of fires, which are highly
unpredictable from ignition to extinction. Thus, the placement of firebreaks
can be considered a stochastic optimization problem where: (1) the objective
function is to minimize the expected number of burnt cells in the landscape; (2) the
decision variables being the location of firebreaks; and (3) the random
variable being the spatial propagation/behavior of fires. In this paper, we
propose a solution approach for the problem from the perspective of
simulation-based optimization (SbO), where the objective function is not
available (a black-box function), but can be computed (and/or approximated) by
wildfire simulations. For this purpose, Genetic Algorithm and GRASP are
implemented. The final implementation yielded favorable results for the Genetic
Algorithm, demonstrating strong performance in scenarios with medium to high
operational capacity, as well as medium levels of stochasticity. | Artificial Intelligence |
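A rough sketch of the simulation-based optimization loop described above, with a genetic algorithm searching binary firebreak layouts and scoring each candidate by the average burned area over several runs of a mocked stochastic fire simulator. The simulator, the encoding, and the GA settings are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
N_CELLS, BUDGET, POP, GENS, SIMS = 50, 8, 20, 15, 5

def simulate_burned_area(firebreaks):
    """Placeholder for a wildfire simulator: stochastic burned-cell count."""
    protection = firebreaks.sum() + 0.5 * firebreaks[:10].sum()  # pretend cells 0-9 matter more
    return max(0.0, rng.normal(loc=N_CELLS - 3 * protection, scale=2.0))

def fitness(firebreaks):
    # Black-box objective: negative expected burned area, estimated by simulation.
    return -np.mean([simulate_burned_area(firebreaks) for _ in range(SIMS)])

def random_layout():
    layout = np.zeros(N_CELLS, dtype=int)
    layout[rng.choice(N_CELLS, size=BUDGET, replace=False)] = 1
    return layout

def crossover_and_mutate(a, b):
    child = np.where(rng.random(N_CELLS) < 0.5, a, b)
    flip = rng.integers(0, N_CELLS)
    child[flip] = 1 - child[flip]
    ones = np.flatnonzero(child)
    if len(ones) > BUDGET:                     # repair to respect the firebreak budget
        child[rng.choice(ones, size=len(ones) - BUDGET, replace=False)] = 0
    return child

population = [random_layout() for _ in range(POP)]
for _ in range(GENS):
    elite = sorted(population, key=fitness, reverse=True)[: POP // 2]
    population = elite + [crossover_and_mutate(elite[rng.integers(len(elite))],
                                               elite[rng.integers(len(elite))])
                          for _ in range(POP - len(elite))]

best = max(population, key=fitness)
print("best layout:", np.flatnonzero(best), "expected burned cells:", -fitness(best))
```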
What field is the article from? | Title: What Planning Problems Can A Relational Neural Network Solve?
Abstract: Goal-conditioned policies are generally understood to be "feed-forward"
circuits, in the form of neural networks that map from the current state and
the goal specification to the next action to take. However, under what
circumstances such a policy can be learned and how efficient the policy will be
are not well understood. In this paper, we present a circuit complexity
analysis for relational neural networks (such as graph neural networks and
transformers) representing policies for planning problems, by drawing
connections with serialized goal regression search (S-GRS). We show that there
are three general classes of planning problems, in terms of the growth of
circuit width and depth as a function of the number of objects and planning
horizon, providing constructive proofs. We also illustrate the utility of this
analysis for designing neural networks for policy learning. | Machine Learning |
What field is the article from? | Title: New Boolean satisfiability problem heuristic strategy: Minimal Positive Negative Product Strategy
Abstract: This study presents a novel heuristic algorithm called the "Minimal Positive
Negative Product Strategy" to guide the CDCL algorithm in solving the Boolean
satisfiability problem. It provides a mathematical explanation for the
superiority of this algorithm over widely used heuristics such as the Dynamic
Largest Individual Sum (DLIS) and the Variable State Independent Decaying Sum
(VSIDS). Experimental results further confirm the effectiveness of this
heuristic strategy in problem-solving. | Artificial Intelligence |
What field is the article from? | Title: Mixed Distillation Helps Smaller Language Model Better Reasoning
Abstract: Despite the remarkable performance of large language models (LLMs) in recent
NLP tasks, their deployment poses substantial challenges due to high
computational and memory demands. Recent research has concentrated on improving
open-source smaller models through knowledge distillation from LLMs to reduce
computational resource costs with promising outcomes. Nevertheless, they
frequently fall short of attaining LLM-level performance, particularly in tasks
demanding advanced reasoning. In this work, we introduce the \textbf{Mixed
Distillation} framework, which capitalizes on the strengths of
Program-of-Thought (PoT) and Chain-of-Thought (CoT) capabilities within LLMs
and distills these capabilities to smaller models. Regarding these two
capabilities, the PoT is dedicated to enhancing the performance of reasoning
results generated by smaller models, while CoT simultaneously optimizes the
results. Our Mixed Distillation framework offers a promising approach to
enhance the capabilities of smaller models, bridging the gap with LLMs, and
demonstrating better performance across various tasks. Specifically, on the
SVAMP dataset, employing a 7 billion parameter Llama2 and CodeLlama in a mixed
distillation framework not only boosts distillation capabilities beyond
single-path distillation methods but also outperforms the LLM (GPT-3.5-turbo)
in terms of reasoning accuracy. Through sampling in multiple-path reasoning,
the models achieve impressive accuracies of 85% and 85.5%,
respectively, signifying advancements over previous distillation methods. | Computational Linguistics |
What field is the article from? | Title: ACTOR: Active Learning with Annotator-specific Classification Heads to Embrace Human Label Variation
Abstract: Label aggregation such as majority voting is commonly used to resolve
annotator disagreement in dataset creation. However, this may disregard
minority values and opinions. Recent studies indicate that learning from
individual annotations outperforms learning from aggregated labels, though they
require a considerable amount of annotation. Active learning, as an annotation
cost-saving strategy, has not been fully explored in the context of learning
from disagreement. We show that in the active learning setting, a multi-head
model performs significantly better than a single-head model in terms of
uncertainty estimation. By designing and evaluating acquisition functions with
annotator-specific heads on two datasets, we show that group-level entropy
works generally well on both datasets. Importantly, it achieves performance in
terms of both prediction and uncertainty estimation comparable to full-scale
training from disagreement, while saving up to 70% of the annotation budget. | Computational Linguistics |
What field is the article from? | Title: Decomposing Hard SAT Instances with Metaheuristic Optimization
Abstract: In the article, within the framework of the Boolean Satisfiability problem
(SAT), the problem of estimating the hardness of specific Boolean formulas
w.r.t. a specific complete SAT solving algorithm is considered. Based on the
well-known Strong Backdoor Set (SBS) concept, we introduce the notion of
decomposition hardness (d-hardness). If $B$ is an arbitrary subset of the set
of variables occurring in a SAT formula $C$, and $A$ is an arbitrary complete
SAT solver, then the d-hardness expresses an estimate of the hardness of $C$
w.r.t. $A$ and $B$. We show that the d-hardness of $C$ w.r.t. a particular $B$
can be expressed in terms of the expected value of a special random variable
associated with $A$, $B$, and $C$. For its computational evaluation, algorithms
based on the Monte Carlo method can be used. The problem of finding $B$ with
the minimum value of d-hardness is formulated as an optimization problem for a
pseudo-Boolean function whose values are calculated as a result of a
probabilistic experiment. To minimize this function, we use evolutionary
algorithms. In the experimental part, we demonstrate the applicability of the
concept of d-hardness and the methods of its estimation to solving hard
unsatisfiable SAT instances. | Artificial Intelligence |
What field is the article from? | Title: Dates Fruit Disease Recognition using Machine Learning
Abstract: Many countries such as Saudi Arabia, Morocco and Tunisia are among the top
exporters and consumers of palm date fruits. Date fruit production plays a
major role in the economies of the date fruit exporting countries. Date fruits
are susceptible to disease just like any fruit and early detection and
intervention can end up saving the produce. However, with the vast farming
lands, it is nearly impossible for farmers to observe date trees on a frequent
basis for early disease detection. In addition, even with human observation the
process is prone to human error and increases the date fruit cost. With the
recent advances in computer vision, machine learning, drone technology, and
other technologies, an integrated solution can be proposed for the automatic
detection of date fruit disease. In this paper, a hybrid feature-based method
with standard classifiers is proposed based on the extraction of L*a*b
color features, statistical features, and Discrete Wavelet Transform (DWT)
texture features for the early detection and classification of date fruit
disease. A dataset was developed for this work consisting of 871 images divided
into the following classes: Healthy date, Initial stage of disease,
Malnourished date, and Parasite infected. The extracted features were input to
common classifiers such as the Random Forest (RF), Multilayer Perceptron (MLP),
Na\"ive Bayes (NB), and Fuzzy Decision Trees (FDT). The highest average
accuracy was achieved when combining the L*a*b, Statistical, and DWT Features. | Computer Vision |
What field is the article from? | Title: Language Model Agents Suffer from Compositional Generalization in Web Automation
Abstract: Language model agents (LMA) have recently emerged as a promising paradigm for
multi-step decision-making tasks, often outperforming humans and other
reinforcement learning agents. Despite the promise, their performance on
real-world applications that often involve combinations of tasks is still
underexplored. In this work, we introduce a new benchmark, called CompWoB -- 50
new compositional web automation tasks reflecting more realistic assumptions.
We show that while existing prompted LMAs (gpt-3.5-turbo or gpt-4) achieve
94.0% average success rate on base tasks, their performance degrades to 24.9%
success rate on compositional tasks. On the other hand, transferred LMAs
(finetuned only on base tasks) show less generalization gap, dropping from
85.4% to 54.8%. By balancing data distribution across tasks, we train a new
model, HTML-T5++, that surpasses human-level performance (95.2%) on MiniWoB,
and achieves the best zero-shot performance on CompWoB (61.5%). While these
highlight the promise of small-scale finetuned and transferred models for
compositional generalization, their performance degrades further under
instruction compositions that change the combination order. In contrast to
the recent remarkable success of LMA, our benchmark and detailed analysis
emphasize the necessity of building LMAs that are robust and generalizable to
task compositionality for real-world deployment. | Machine Learning |
What field is the article from? | Title: The Linear Representation Hypothesis and the Geometry of Large Language Models
Abstract: Informally, the 'linear representation hypothesis' is the idea that
high-level concepts are represented linearly as directions in some
representation space. In this paper, we address two closely related questions:
What does "linear representation" actually mean? And, how do we make sense of
geometric notions (e.g., cosine similarity or projection) in the representation
space? To answer these, we use the language of counterfactuals to give two
formalizations of "linear representation", one in the output (word)
representation space, and one in the input (sentence) space. We then prove
these connect to linear probing and model steering, respectively. To make sense
of geometric notions, we use the formalization to identify a particular
(non-Euclidean) inner product that respects language structure in a sense we
make precise. Using this causal inner product, we show how to unify all notions
of linear representation. In particular, this allows the construction of probes
and steering vectors using counterfactual pairs. Experiments with LLaMA-2
demonstrate the existence of linear representations of concepts, the connection
to interpretation and control, and the fundamental role of the choice of inner
product. | Computational Linguistics |
What field is the article from? | Title: UWB Based Static Gesture Classification
Abstract: Our paper presents a robust framework for UWB-based static gesture
recognition, leveraging proprietary UWB radar sensor technology. Extensive data
collection efforts were undertaken to compile datasets containing five commonly
used gestures. Our approach involves a comprehensive data pre-processing
pipeline that encompasses outlier handling, aspect ratio-preserving resizing,
and false-color image transformation. Both CNN and MobileNet models were
trained on the processed images. Remarkably, our best-performing model achieved
an accuracy of 96.78%. Additionally, we developed a user-friendly GUI framework
to assess the model's system resource usage and processing times, which
revealed low memory utilization and real-time task completion in under one
second. This research marks a significant step towards enhancing static gesture
recognition using UWB technology, promising practical applications in various
domains. | Computer Vision |
What field is the article from? | Title: Bridging the Gap: Addressing Discrepancies in Diffusion Model Training for Classifier-Free Guidance
Abstract: Diffusion models have emerged as a pivotal advancement in generative models,
setting new standards for the quality of the generated instances. In the current
paper we aim to underscore a discrepancy between conventional training methods
and the desired conditional sampling behavior of these models. While the
prevalent classifier-free guidance technique works well, it's not without
flaws. At higher values of the guidance scale parameter $w$, we often get
out-of-distribution samples and mode collapse, whereas at lower values of $w$ we
may not get the desired specificity. To address these challenges, we introduce
an updated loss function that better aligns training objectives with sampling
behaviors. Experimental validation with FID scores on CIFAR-10 elucidates our
method's ability to produce higher quality samples with fewer sampling
timesteps, and be more robust to the choice of guidance scale $w$. We also
experiment with fine-tuning Stable Diffusion on the proposed loss, to provide
early evidence that large diffusion models may also benefit from this refined
loss function. | Machine Learning |
What field is the article from? | Title: Labeling Neural Representations with Inverse Recognition
Abstract: Deep Neural Networks (DNNs) demonstrated remarkable capabilities in learning
complex hierarchical data representations, but the nature of these
representations remains largely unknown. Existing global explainability
methods, such as Network Dissection, face limitations such as reliance on
segmentation masks, lack of statistical significance testing, and high
computational demands. We propose Inverse Recognition (INVERT), a scalable
approach for connecting learned representations with human-understandable
concepts by leveraging their capacity to discriminate between these concepts.
In contrast to prior work, INVERT is capable of handling diverse types of
neurons, exhibits less computational complexity, and does not rely on the
availability of segmentation masks. Moreover, INVERT provides an interpretable
metric that assesses the alignment between the representation and its corresponding
explanation and delivers a measure of statistical significance, emphasizing
its utility and credibility. We demonstrate the applicability of INVERT in
various scenarios, including the identification of representations affected by
spurious correlations, and the interpretation of the hierarchical structure of
decision-making within the models. | Machine Learning |
What field is the article from? | Title: Chatbots Are Not Reliable Text Annotators
Abstract: Recent research highlights the significant potential of ChatGPT for text
annotation in social science research. However, ChatGPT is a closed-source
product which has major drawbacks with regards to transparency,
reproducibility, cost, and data protection. Recent advances in open-source (OS)
large language models (LLMs) offer alternatives which remedy these challenges.
This means that it is important to evaluate the performance of OS LLMs relative
to ChatGPT and standard approaches to supervised machine learning
classification. We conduct a systematic comparative evaluation of the
performance of a range of OS LLM models alongside ChatGPT, using both zero- and
few-shot learning as well as generic and custom prompts, with results compared
to more traditional supervised classification models. Using a new dataset of
Tweets from US news media, and focusing on simple binary text annotation tasks
for standard social science concepts, we find significant variation in the
performance of ChatGPT and OS models across the tasks, and that supervised
classifiers consistently outperform both. Given the unreliable performance of
ChatGPT and the significant challenges it poses to Open Science we advise
against using ChatGPT for substantive text annotation tasks in social science
research. | Computational Linguistics |
What field is the article from? | Title: Diffused Task-Agnostic Milestone Planner
Abstract: Addressing decision-making problems using sequence modeling to predict future
trajectories has shown promising results in recent years. In this paper, we take a
step further to leverage the sequence predictive method in wider areas such as
long-term planning, vision-based control, and multi-task decision-making. To
this end, we propose a method to utilize a diffusion-based generative sequence
model to plan a series of milestones in a latent space and to have an agent
follow the milestones to accomplish a given task. The proposed method can learn
control-relevant, low-dimensional latent representations of milestones, which
makes it possible to efficiently perform long-term planning and vision-based
control. Furthermore, our approach exploits generation flexibility of the
diffusion model, which makes it possible to plan diverse trajectories for
multi-task decision-making. We demonstrate the proposed method across offline
reinforcement learning (RL) benchmarks and a visual manipulation environment.
The results show that our approach outperforms offline RL methods in solving
long-horizon, sparse-reward tasks and multi-task problems, while also achieving
the state-of-the-art performance on the most challenging vision-based
manipulation benchmark. | Robotics |
What field is the article from? | Title: Smart Home Goal Feature Model -- A guide to support Smart Homes for Ageing in Place
Abstract: Smart technologies are significant in supporting ageing in place for the elderly.
Leveraging Artificial Intelligence (AI) and Machine Learning (ML), they provide
peace of mind, enabling the elderly to continue living independently. The elderly
use smart technologies for entertainment and social interaction; this can be
extended to provide safety and monitor health and environmental conditions,
detect emergencies and notify informal and formal caregivers when care is
needed. This paper provides an overview of the smart home technologies
commercially available to support ageing in place, the advantages and
challenges of smart home technologies, and their usability from the elderly's
perspective. Synthesizing prior knowledge, we created a structured Smart Home
Goal Feature Model (SHGFM) to resolve heuristic approaches used by the Subject
Matter Experts (SMEs) at aged care facilities and healthcare researchers in
adapting smart homes. The SHGFM provides SMEs with the ability to (i) establish
goals and (ii) identify features to set up strategies to design, develop and
deploy smart homes for the elderly based on personalised needs. Our model
provides guidance to healthcare researchers and aged care industries to set up
smart homes based on the needs of the elderly, by defining a set of goals at
different levels mapped to a different set of features. | Human-Computer Interaction |
What field is the article from? | Title: Ego-Exo4D: Understanding Skilled Human Activity from First- and Third-Person Perspectives
Abstract: We present Ego-Exo4D, a diverse, large-scale multimodal multiview video
dataset and benchmark challenge. Ego-Exo4D centers around
simultaneously-captured egocentric and exocentric video of skilled human
activities (e.g., sports, music, dance, bike repair). More than 800
participants from 13 cities worldwide performed these activities in 131
different natural scene contexts, yielding long-form captures from 1 to 42
minutes each and 1,422 hours of video combined. The multimodal nature of the
dataset is unprecedented: the video is accompanied by multichannel audio, eye
gaze, 3D point clouds, camera poses, IMU, and multiple paired language
descriptions -- including a novel "expert commentary" done by coaches and
teachers and tailored to the skilled-activity domain. To push the frontier of
first-person video understanding of skilled human activity, we also present a
suite of benchmark tasks and their annotations, including fine-grained activity
understanding, proficiency estimation, cross-view translation, and 3D hand/body
pose. All resources will be open sourced to fuel new research in the community. | Computer Vision |
What field is the article from? | Title: Past as a Guide: Leveraging Retrospective Learning for Python Code Completion
Abstract: This work presents Past as a Guide (PaG), a simple approach for Large
Language Models (LLMs) to improve the coding capabilities by integrating the
past history with interactive and iterative code refinements. To be specific,
inspired by human cognitive processes, the proposed method enables LLMs to
utilize previous programming and debugging experiences to enhance the Python
code completion tasks. The framework facilitates LLMs to iteratively refine the
Python code based on previous execution and debugging results and optimize
learning and reasoning capabilities. The proposed methodology achieved a 92\%
pass@1 on HumanEval, demonstrating the potential to advance the field by
leveraging retrospection from past experiences and interactive and iterative
refinement processes without external correctness indicators. | Software Engineering |
What field is the article from? | Title: ICRA Roboethics Challenge 2023: Intelligent Disobedience in an Elderly Care Home
Abstract: With the projected surge in the elderly population, service robots offer a
promising avenue to enhance their well-being in elderly care homes. Such robots
will encounter complex scenarios which will require them to make decisions
with ethical consequences. In this report, we propose to leverage the
Intelligent Disobedience framework in order to give the robot the ability to
perform a deliberation process over decisions with potential ethical
implications. We list the issues that this framework can assist with, define it
formally in the context of the specific elderly care home scenario, and
delineate the requirements for implementing an intelligently disobeying robot.
We conclude this report with some critical analysis and suggestions for future
work. | Robotics |
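
For readers who want to prototype against rows like the ones above, the following is a minimal, self-contained Python sketch that treats each row as an (instruction, paper text, field label) triple, builds a zero-shot classification prompt from it, and tallies the label distribution. The dictionary keys and the truncated abstracts are illustrative assumptions; only the question, the titles, and the field labels are taken verbatim from rows shown here.

```python
from collections import Counter

# Two rows transcribed from the preview above. The keys "instruction",
# "input", and "output" are illustrative names for the three columns;
# the preview rows themselves do not label them, and the abstracts are
# truncated here for brevity.
rows = [
    {
        "instruction": "What field is the article from?",
        "input": "Title: UWB Based Static Gesture Classification\n"
                 "Abstract: Our paper presents a robust framework for "
                 "UWB-based static gesture recognition ...",
        "output": "Computer Vision",
    },
    {
        "instruction": "What field is the article from?",
        "input": "Title: Diffused Task-Agnostic Milestone Planner\n"
                 "Abstract: Addressing decision-making problems using "
                 "sequence modeling to predict future trajectories ...",
        "output": "Robotics",
    },
]

def to_prompt(row):
    """Turn one row into a plain zero-shot classification prompt."""
    return f"{row['instruction']}\n\n{row['input']}\n\nAnswer:"

# Tally the label distribution and print an example prompt.
label_counts = Counter(row["output"] for row in rows)
print(label_counts)
print(to_prompt(rows[0]))
```

The same triples could then be fed to any supervised text classifier or prompted language model; the prompt format above is only one plausible choice, not one prescribed by the dataset.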