instruction (stringclasses, 1 value) | input (stringlengths, 260 to 2.07k) | output (stringclasses, 10 values) |
---|---|---|
What field is the article from? | Title: Efficient Large Language Models Fine-Tuning On Graphs
Abstract: Learning from Text-Attributed Graphs (TAGs) has attracted significant
attention due to its wide range of real-world applications. The rapid evolution
of large language models (LLMs) has revolutionized the way we process textual
data, which indicates a strong potential to replace shallow text embedding
generally used in Graph Neural Networks (GNNs). However, we find that existing
LLM approaches that exploit text information in graphs suffer from inferior
computation and data efficiency. In this work, we introduce a novel and
efficient approach for the end-to-end fine-tuning of Large Language Models
(LLMs) on TAGs, named LEADING. The proposed approach maintains computation cost
and memory overhead comparable to the graph-less fine-tuning of LLMs. Moreover,
it transfers the rich knowledge in LLMs to downstream graph learning tasks
effectively with limited labeled data in semi-supervised learning. Its superior
computation and data efficiency are demonstrated through comprehensive
experiments, offering a promising solution for a wide range of LLMs and graph
learning tasks on TAGs. | Machine Learning |
What field is the article from? | Title: MRxaI: Black-Box Explainability for Image Classifiers in a Medical Setting
Abstract: Existing tools for explaining the output of image classifiers can be divided
into white-box, which rely on access to the model internals, and black-box,
agnostic to the model. As the usage of AI in the medical domain grows, so too
does the usage of explainability tools. Existing work on medical image
explanations focuses on white-box tools, such as gradcam. However, there are
clear advantages to switching to a black-box tool, including the ability to use
it with any classifier and the wide selection of black-box tools available. On
standard images, black-box tools are as precise as white-box. In this paper we
compare the performance of several black-box methods against gradcam on a brain
cancer MRI dataset. We demonstrate that most black-box tools are not suitable
for explaining medical image classifications and present a detailed analysis of
the reasons for their shortcomings. We also show that one black-box tool, a
causal explainability-based rex, performs as well as gradcam. | Computer Vision |
What field is the article from? | Title: SparseSpikformer: A Co-Design Framework for Token and Weight Pruning in Spiking Transformer
Abstract: As the third-generation neural network, the Spiking Neural Network (SNN) has
the advantages of low power consumption and high energy efficiency, making it
suitable for implementation on edge devices. More recently, the most advanced
SNN, Spikformer, combines the self-attention module from Transformer with SNN
to achieve remarkable performance. However, it adopts larger channel dimensions
in MLP layers, leading to an increased number of redundant model parameters. To
effectively decrease the computational complexity and weight parameters of the
model, we explore the Lottery Ticket Hypothesis (LTH) and discover a very
sparse ($\ge$90%) subnetwork that achieves comparable performance to the
original network. Furthermore, we also design a lightweight token selector
module, which can remove unimportant background information from images based
on the average spike firing rate of neurons, selecting only essential
foreground image tokens to participate in attention calculation. Based on that,
we present SparseSpikformer, a co-design framework aimed at achieving sparsity
in Spikformer through token and weight pruning techniques. Experimental results
demonstrate that our framework can reduce model parameters by 90% and cut
Giga Floating-Point Operations (GFLOPs) by 20% while maintaining
the accuracy of the original model. | Computer Vision |
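The token selector described in the SparseSpikformer abstract above keeps only the image tokens with the highest average spike firing rate. The NumPy sketch below illustrates that selection step under assumed tensor shapes and a hypothetical keep ratio; the actual module and its coupling to the attention layers are not specified in the abstract.

```python
# Minimal sketch of a firing-rate-based token selector (NumPy).
# Assumes binary spike tensors of shape (timesteps, num_tokens, channels);
# the real SparseSpikformer module may differ in detail.
import numpy as np

def select_tokens(spikes: np.ndarray, keep_ratio: float = 0.5) -> np.ndarray:
    """Return indices of tokens with the highest average spike firing rate."""
    # Average over time steps and channels -> one firing rate per token.
    firing_rate = spikes.mean(axis=(0, 2))
    num_keep = max(1, int(keep_ratio * firing_rate.shape[0]))
    # Keep only the most active (foreground) tokens.
    return np.argsort(firing_rate)[::-1][:num_keep]

# Example: 4 time steps, 196 image tokens, 64 channels of binary spikes.
spikes = (np.random.rand(4, 196, 64) > 0.8).astype(np.float32)
kept = select_tokens(spikes, keep_ratio=0.25)
print(kept.shape)  # (49,) token indices passed on to attention
```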
What field is the article from? | Title: ResMGCN: Residual Message Graph Convolution Network for Fast Biomedical Interactions Discovering
Abstract: Biomedical information graphs are crucial for discovering
interactions in biomedical information in the modern age, such as the
identification of multifarious molecular interactions and drug discovery, which
attracts increasing interest in the biomedicine, bioinformatics, and human
healthcare communities. Nowadays,
more and more graph neural networks have been proposed to learn the entities of
biomedical information and precisely reveal biomedical molecule interactions
with state-of-the-art results. These methods remedy the fading of features over
long distances, but only at the expensive cost of redundant memory and time. In
our paper, we propose a novel Residual Message Graph Convolution Network
(ResMGCN) for fast and precise biomedical interaction prediction, based on a
different idea. Specifically, instead of enhancing the message from far nodes,
ResMGCN aggregates lower-order information with the next round's higher-order
information to guide the node update and obtain a more meaningful node
representation. ResMGCN is able to perceive and preserve various messages from
the previous layer and high-order information in the current layer with minimal
memory and time cost to obtain informative representations of biomedical
entities. We conduct experiments on four biomedical interaction network
datasets, including protein-protein, drug-drug, drug-target, and gene-disease
interactions, which demonstrates that ResMGCN outperforms previous
state-of-the-art models while achieving superb effectiveness on both storage
and time. | Machine Learning |
What field is the article from? | Title: Enhancing Object Coherence in Layout-to-Image Synthesis
Abstract: Layout-to-image synthesis is an emerging technique in conditional image
generation. It aims to generate complex scenes, where users require fine
control over the layout of the objects in a scene. However, it remains
challenging to control the object coherence, including semantic coherence
(e.g., the cat looks at the flowers or not) and physical coherence (e.g., the
hand and the racket should not be misaligned). In this paper, we propose a
novel diffusion model with effective global semantic fusion (GSF) and
self-similarity feature enhancement modules to guide the object coherence for
this task. For semantic coherence, we argue that the image caption contains
rich information for defining the semantic relationship within the objects in
the images. Instead of simply employing cross-attention between captions and
generated images, which addresses the highly relevant layout restriction and
semantic coherence separately and thus leads to unsatisfying results, as shown in
our experiments, we develop GSF to fuse the supervision from the layout
restriction and semantic coherence requirement and exploit it to guide the
image synthesis process. Moreover, to improve the physical coherence, we
develop a Self-similarity Coherence Attention (SCA) module to explicitly
integrate local contextual physical coherence into each pixel's generation
process. Specifically, we adopt a self-similarity map to encode the coherence
restrictions and employ it to extract coherent features from text embedding.
Through visualization of our self-similarity map, we explore the essence of
SCA, revealing that its effectiveness is not only in capturing reliable
physical coherence patterns but also in enhancing complex texture generation.
Extensive experiments demonstrate the superiority of our proposed method in
both image generation quality and controllability. | Computer Vision |
What field is the article from? | Title: Breaking the Token Barrier: Chunking and Convolution for Efficient Long Text Classification with BERT
Abstract: Transformer-based models, specifically BERT, have propelled research in
various NLP tasks. However, these models are limited to a maximum of 512
tokens. Consequently, this makes it non-trivial to apply them in practical
settings with long inputs. Various complex methods have claimed to
overcome this limit, but recent research questions the efficacy of these models
across different classification tasks. These complex architectures evaluated on
carefully curated long datasets perform at par or worse than simple baselines.
In this work, we propose a relatively simple extension to vanilla BERT
architecture called ChunkBERT that allows finetuning of any pretrained models
to perform inference on arbitrarily long text. The proposed method is based on
chunking token representations and CNN layers, making it compatible with any
pre-trained BERT. We evaluate ChunkBERT exclusively on a benchmark for
comparing long-text classification models across a variety of tasks (including
binary classification, multi-class classification, and multi-label
classification). A BERT model finetuned using the ChunkBERT method performs
consistently across long samples in the benchmark while utilizing only a
fraction (6.25\%) of the original memory footprint. These findings suggest that
efficient finetuning and inference can be achieved through simple modifications
to pre-trained BERT models. | Computational Linguistics |
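As a rough illustration of the chunking idea behind ChunkBERT, the sketch below splits a long token sequence into 512-token chunks, encodes each chunk with a hypothetical `encode_chunk` callable, and pools the chunk vectors for classification. Mean pooling stands in for the paper's CNN layers, and all names and shapes here are assumptions rather than the paper's implementation.

```python
# Illustrative sketch of chunked long-text classification (assumptions: a
# 512-token encoder limit and simple mean pooling; the paper's CNN head and
# exact chunk handling are not specified in the abstract).
import numpy as np

MAX_LEN = 512

def chunk_tokens(token_ids, max_len=MAX_LEN):
    """Split an arbitrarily long token id list into encoder-sized chunks."""
    return [token_ids[i:i + max_len] for i in range(0, len(token_ids), max_len)]

def classify_long_text(token_ids, encode_chunk, classifier_weights):
    """encode_chunk: hypothetical callable mapping a chunk to a fixed-size vector."""
    chunk_vecs = np.stack([encode_chunk(c) for c in chunk_tokens(token_ids)])
    doc_vec = chunk_vecs.mean(axis=0)          # pool chunk representations
    logits = doc_vec @ classifier_weights      # linear classification head
    return logits

# Toy usage with a random "encoder" standing in for a pre-trained BERT.
rng = np.random.default_rng(0)
fake_encoder = lambda chunk: rng.standard_normal(768)   # ignores its input; stand-in only
print(classify_long_text(list(range(1300)), fake_encoder, rng.standard_normal((768, 2))))
```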
What field is the article from? | Title: Data Center Audio/Video Intelligence on Device (DAVID) -- An Edge-AI Platform for Smart-Toys
Abstract: An overview is given of the DAVID Smart-Toy platform, one of the first Edge
AI platform designs to incorporate advanced low-power data processing by neural
inference models co-located with the relevant image or audio sensors. There is
also on-board capability for in-device text-to-speech generation. Two
alternative embodiments are presented: a smart Teddy-bear, and a roving
dog-like robot. The platform offers a speech-driven user interface and can
observe and interpret user actions and facial expressions via its computer
vision sensor node. A particular benefit of this design is that no personally
identifiable information passes beyond the neural inference nodes thus
providing inbuilt compliance with data protection regulations. | Artificial Intelligence |
What field is the article from? | Title: Adaptive parameter sharing for multi-agent reinforcement learning
Abstract: Parameter sharing, as an important technique in multi-agent systems, can
effectively solve the scalability issue in large-scale agent problems. However,
the effectiveness of parameter sharing largely depends on the environment
setting. When agents have different identities or tasks, naive parameter
sharing makes it difficult to generate sufficiently differentiated strategies
for agents. Inspired by research pertaining to the brain in biology, we propose
a novel parameter sharing method. It maps each type of agent to different
regions within a shared network based on their identity, resulting in distinct
subnetworks. Therefore, our method can increase the diversity of strategies
among different agents without introducing additional training parameters.
Through experiments conducted in multiple environments, our method has shown
better performance than other parameter sharing methods. | Artificial Intelligence |
What field is the article from? | Title: TaBIIC: Taxonomy Building through Iterative and Interactive Clustering
Abstract: Building taxonomies is often a significant part of building an ontology, and
many attempts have been made to automate the creation of such taxonomies from
relevant data. The idea in such approaches is either that relevant definitions
of the intension of concepts can be extracted as patterns in the data (e.g. in
formal concept analysis) or that their extension can be built from grouping
data objects based on similarity (clustering). In both cases, the process leads
to an automatically constructed structure, which can either be too coarse and
lacking in definition, or too fine-grained and detailed, therefore requiring
refinement into the desired taxonomy. In this paper, we explore a method
that takes inspiration from both approaches in an iterative and interactive
process, so that refinement and definition of the concepts in the taxonomy
occur at the time of identifying those concepts in the data. We show that this
method is applicable to a variety of data sources and leads to taxonomies that
can be more directly integrated into ontologies. | Artificial Intelligence |
What field is the article from? | Title: No prejudice! Fair Federated Graph Neural Networks for Personalized Recommendation
Abstract: Ensuring fairness in Recommendation Systems (RSs) across demographic groups
is critical due to the increased integration of RSs in applications such as
personalized healthcare, finance, and e-commerce. Graph-based RSs play a
crucial role in capturing intricate higher-order interactions among entities.
However, integrating these graph models into the Federated Learning (FL)
paradigm with fairness constraints poses formidable challenges as this requires
access to the entire interaction graph and sensitive user information (such as
gender, age, etc.) at the central server. This paper addresses the pervasive
issue of inherent bias within RSs for different demographic groups without
compromising the privacy of sensitive user attributes in an FL environment with
the graph-based model. To address the group bias, we propose F2PGNN (Fair
Federated Personalized Graph Neural Network), a novel framework that leverages
the power of Personalized Graph Neural Network (GNN) coupled with fairness
considerations. Additionally, we use differential privacy techniques to fortify
privacy protection. Experimental evaluation on three publicly available
datasets showcases the efficacy of F2PGNN in mitigating group unfairness by 47%
- 99% compared to the state-of-the-art while preserving privacy and maintaining
the utility. The results validate the significance of our framework in
achieving equitable and personalized recommendations using GNN within the FL
landscape. | Information Retrieval |
What field is the article from? | Title: Adapting Fake News Detection to the Era of Large Language Models
Abstract: In the age of large language models (LLMs) and the widespread adoption of
AI-driven content creation, the landscape of information dissemination has
witnessed a paradigm shift. With the proliferation of both human-written and
machine-generated real and fake news, robustly and effectively discerning the
veracity of news articles has become an intricate challenge. While substantial
research has been dedicated to fake news detection, this either assumes that
all news articles are human-written or abruptly assumes that all
machine-generated news is fake. Thus, a significant gap exists in
understanding the interplay between machine-(paraphrased) real news,
machine-generated fake news, human-written fake news, and human-written real
news. In this paper, we study this gap by conducting a comprehensive evaluation
of fake news detectors trained in various scenarios. Our primary objectives
revolve around the following pivotal question: How to adapt fake news detectors
to the era of LLMs? Our experiments reveal an interesting pattern that
detectors trained exclusively on human-written articles can indeed perform well
at detecting machine-generated fake news, but not vice versa. Moreover, due to
the bias of detectors against machine-generated texts \cite{su2023fake}, they
should be trained on datasets with a lower machine-generated news ratio than
the test set. Building on our findings, we provide a practical strategy for the
development of robust fake news detectors. | Computational Linguistics |
What field is the article from? | Title: FourierGNN: Rethinking Multivariate Time Series Forecasting from a Pure Graph Perspective
Abstract: Multivariate time series (MTS) forecasting has shown great importance in
numerous industries. Current state-of-the-art graph neural network (GNN)-based
forecasting methods usually require both graph networks (e.g., GCN) and
temporal networks (e.g., LSTM) to capture inter-series (spatial) dynamics and
intra-series (temporal) dependencies, respectively. However, the uncertain
compatibility of the two networks puts an extra burden on handcrafted model
designs. Moreover, the separate spatial and temporal modeling naturally
violates the unified spatiotemporal inter-dependencies in real world, which
largely hinders the forecasting performance. To overcome these problems, we
explore an interesting direction of directly applying graph networks and
rethink MTS forecasting from a pure graph perspective. We first define a novel
data structure, hypervariate graph, which regards each series value (regardless
of variates or timestamps) as a graph node, and represents sliding windows as
space-time fully-connected graphs. This perspective considers spatiotemporal
dynamics unitedly and reformulates classic MTS forecasting into the predictions
on hypervariate graphs. Then, we propose a novel architecture Fourier Graph
Neural Network (FourierGNN) by stacking our proposed Fourier Graph Operator
(FGO) to perform matrix multiplications in Fourier space. FourierGNN
accommodates adequate expressiveness and achieves much lower complexity, which
can effectively and efficiently accomplish the forecasting. Besides, our
theoretical analysis reveals FGO's equivalence to graph convolutions in the
time domain, which further verifies the validity of FourierGNN. Extensive
experiments on seven datasets have demonstrated our superior performance with
higher efficiency and fewer parameters compared with state-of-the-art methods. | Machine Learning |
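The FourierGNN abstract describes stacking a Fourier Graph Operator (FGO) that performs matrix multiplications in Fourier space. The NumPy sketch below shows one plausible reading of that operator: FFT the node features along the node axis, multiply by a learned complex weight matrix, and transform back; the exact FGO parameterization in the paper may differ.

```python
# A minimal sketch of a "Fourier Graph Operator"-style layer (NumPy).
# Assumption: transform node features to the frequency domain over the node
# axis, apply a learned complex matrix multiplication, and transform back.
import numpy as np

def fourier_graph_operator(x: np.ndarray, weight: np.ndarray) -> np.ndarray:
    """x: (num_nodes, dim) real features; weight: (dim, dim) complex weights."""
    x_freq = np.fft.fft(x, axis=0)          # to Fourier space along the node axis
    y_freq = x_freq @ weight                # matrix multiplication in Fourier space
    y = np.fft.ifft(y_freq, axis=0).real    # back to the original domain
    return np.maximum(y, 0.0)               # simple nonlinearity between stacked FGOs

nodes, dim = 96, 16          # e.g. a hypervariate graph of 12 timestamps x 8 variates
x = np.random.randn(nodes, dim)
w = np.random.randn(dim, dim) + 1j * np.random.randn(dim, dim)
out = fourier_graph_operator(x, w)
print(out.shape)  # (96, 16)
```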
What field is the article from? | Title: Distilling Out-of-Distribution Robustness from Vision-Language Foundation Models
Abstract: We propose a conceptually simple and lightweight framework for improving the
robustness of vision models through the combination of knowledge distillation
and data augmentation. We address the conjecture that larger models do not make
for better teachers by showing strong gains in out-of-distribution robustness
when distilling from pretrained foundation models. Following this finding, we
propose Discrete Adversarial Distillation (DAD), which leverages a robust
teacher to generate adversarial examples and a VQGAN to discretize them,
creating more informative samples than standard data augmentation techniques.
We provide a theoretical framework for the use of a robust teacher in the
knowledge distillation with data augmentation setting and demonstrate strong
gains in out-of-distribution robustness and clean accuracy across different
student architectures. Notably, our method adds minor computational overhead
compared to similar techniques and can be easily combined with other data
augmentations for further improvements. | Machine Learning |
What field is the article from? | Title: DiffiT: Diffusion Vision Transformers for Image Generation
Abstract: Diffusion models with their powerful expressivity and high sample quality
have enabled many new applications and use-cases in various domains. For sample
generation, these models rely on a denoising neural network that generates
images by iterative denoising. Yet, the role of denoising network architecture
is not well-studied with most efforts relying on convolutional residual U-Nets.
In this paper, we study the effectiveness of vision transformers in
diffusion-based generative learning. Specifically, we propose a new model,
denoted as Diffusion Vision Transformers (DiffiT), which consists of a hybrid
hierarchical architecture with a U-shaped encoder and decoder. We introduce a
novel time-dependent self-attention module that allows attention layers to
adapt their behavior at different stages of the denoising process in an
efficient manner. We also introduce latent DiffiT which consists of transformer
model with the proposed self-attention layers, for high-resolution image
generation. Our results show that DiffiT is surprisingly effective in
generating high-fidelity images, and it achieves state-of-the-art (SOTA)
benchmarks on a variety of class-conditional and unconditional synthesis tasks.
In the latent space, DiffiT achieves a new SOTA FID score of 1.73 on
ImageNet-256 dataset. Repository: https://github.com/NVlabs/DiffiT | Computer Vision |
What field is the article from? | Title: VIM: Probing Multimodal Large Language Models for Visual Embedded Instruction Following
Abstract: We introduce VISUAL EMBEDDED INSTRUCTION (VIM), a new framework designed to
evaluate the visual instruction following capability of Multimodal Large
Language Models (MLLMs). As illustrated in Figure 2, VIM challenges the MLLMs
by embedding the instructions into the visual scenes, demanding strong visual
interpretative skills for instruction following. We adapt VIM to various
benchmarks, including VQAv2, MME, MM-Vet, and RefCOCO series, compose a VIM
bench, and probe diverse MLLMs across three distinct in-context learning
settings: Zero Shot, One Shot, and Pair Shot. We observe that there is a
significant performance disparity between the open-source MLLMs and GPT-4V,
implying that their proficiency in visual instruction comprehension is not up
to par. Our results highlight a promising direction for the enhancement of
MLLMs capabilities on instruction following. We aim VIM to serve as a useful
norm for advancing the state of the art and driving further progress in the
field. | Computer Vision |
What field is the article from? | Title: Ovarian Cancer Data Analysis using Deep Learning: A Systematic Review from the Perspectives of Key Features of Data Analysis and AI Assurance
Abstract: Background and objectives: By extracting this information, Machine or Deep
Learning (ML/DL)-based autonomous data analysis tools can assist clinicians and
cancer researchers in discovering patterns and relationships from complex data
sets. Many DL-based analyses on ovarian cancer (OC) data have recently been
published. These analyses are highly diverse in various aspects of cancer
(e.g., subdomain(s) and cancer type they address) and data analysis features.
However, a comprehensive understanding of these analyses in terms of these
features and AI assurance (AIA) is currently lacking. This systematic review
aims to fill this gap by examining the existing literature and identifying
important aspects of OC data analysis using DL, explicitly focusing on the key
features and AI assurance perspectives. Methods: The PRISMA framework was used
to conduct comprehensive searches in three journal databases. Only studies
published between 2015 and 2023 in peer-reviewed journals were included in the
analysis. Results: In the review, a total of 96 DL-driven analyses were
examined. The findings reveal several important insights regarding DL-driven
ovarian cancer data analysis:
- Most studies, 71% (68 out of 96), focused on detection and diagnosis, while no study addressed the prediction and prevention of OC.
- The analyses were predominantly based on samples from a non-diverse population (75%; 72/96 studies), limited to a geographic location or country.
- Only a small proportion of studies (only 33%; 32/96) performed integrated analyses, most of which used homogeneous data (clinical or omics).
- Notably, a mere 8.3% (8/96) of the studies validated their models using external and diverse data sets, highlighting the need for enhanced model validation.
- The inclusion of AIA in cancer data analysis is in a very early stage; only 2.1% (2/96) explicitly addressed AIA through explainability. | Machine Learning |
What field is the article from? | Title: Increasing Coverage and Precision of Textual Information in Multilingual Knowledge Graphs
Abstract: Recent work in Natural Language Processing and Computer Vision has been using
textual information -- e.g., entity names and descriptions -- available in
knowledge graphs to ground neural models to high-quality structured data.
However, when it comes to non-English languages, the quantity and quality of
textual information are comparatively scarce. To address this issue, we
introduce the novel task of automatic Knowledge Graph Enhancement (KGE) and
perform a thorough investigation on bridging the gap in both the quantity and
quality of textual information between English and non-English languages. More
specifically, we: i) bring to light the problem of increasing multilingual
coverage and precision of entity names and descriptions in Wikidata; ii)
demonstrate that state-of-the-art methods, namely, Machine Translation (MT),
Web Search (WS), and Large Language Models (LLMs), struggle with this task;
iii) present M-NTA, a novel unsupervised approach that combines MT, WS, and
LLMs to generate high-quality textual information; and, iv) study the impact of
increasing multilingual coverage and precision of non-English textual
information in Entity Linking, Knowledge Graph Completion, and Question
Answering. As part of our effort towards better multilingual knowledge graphs,
we also introduce WikiKGE-10, the first human-curated benchmark to evaluate KGE
approaches in 10 languages across 7 language families. | Artificial Intelligence |
What field is the article from? | Title: Robust Representation Learning for Unified Online Top-K Recommendation
Abstract: In large-scale industrial e-commerce, the efficiency of an online
recommendation system is crucial in delivering highly relevant item/content
advertising that caters to diverse business scenarios. However, most existing
studies focus solely on item advertising, neglecting the significance of
content advertising. This oversight results in inconsistencies within the
multi-entity structure and unfair retrieval. Furthermore, the challenge of
retrieving top-k advertisements from multi-entity advertisements across
different domains adds to the complexity. Recent research proves that
user-entity behaviors within different domains exhibit characteristics of
differentiation and homogeneity. Therefore, the multi-domain matching models
typically rely on the hybrid-experts framework with domain-invariant and
domain-specific representations. Unfortunately, most approaches primarily focus
on optimizing the combination mode of different experts, failing to address the
inherent difficulty in optimizing the expert modules themselves. The existence
of redundant information across different domains introduces interference and
competition among experts, while the distinct learning objectives of each
domain lead to varying optimization challenges among experts. To tackle these
issues, we propose robust representation learning for the unified online top-k
recommendation. Our approach constructs unified modeling in entity space to
ensure data fairness. The robust representation learning employs domain
adversarial learning and multi-view Wasserstein distribution learning to learn
robust representations. Moreover, the proposed method balances conflicting
objectives through the homoscedastic uncertainty weights and orthogonality
constraints. Various experiments validate the effectiveness and rationality of
our proposed method, which has been successfully deployed online to serve real
business scenarios. | Information Retrieval |
What field is the article from? | Title: N-Critics: Self-Refinement of Large Language Models with Ensemble of Critics
Abstract: We propose a self-correction mechanism for Large Language Models (LLMs) to
mitigate issues such as toxicity and fact hallucination. This method involves
refining model outputs through an ensemble of critics and the model's own
feedback. Drawing inspiration from human behavior, we explore whether LLMs can
emulate the self-correction process observed in humans who often engage in
self-reflection and seek input from others to refine their understanding of
complex topics. Our approach is model-agnostic and can be applied across
various domains to enhance trustworthiness by addressing fairness, bias, and
robustness concerns. We consistently observe performance improvements in LLMs
for reducing toxicity and correcting factual errors. | Computational Linguistics |
What field is the article from? | Title: SoftMAC: Differentiable Soft Body Simulation with Forecast-based Contact Model and Two-way Coupling with Articulated Rigid Bodies and Clothes
Abstract: Differentiable physics simulation provides an avenue for tackling previously
intractable challenges through gradient-based optimization, thereby greatly
improving the efficiency of solving robotics-related problems. To apply
differentiable simulation in diverse robotic manipulation scenarios, a key
challenge is to integrate various materials in a unified framework. We present
SoftMAC, a differentiable simulation framework coupling soft bodies with
articulated rigid bodies and clothes. SoftMAC simulates soft bodies with the
continuum-mechanics-based Material Point Method (MPM). We provide a
forecast-based contact model for MPM, which greatly reduces artifacts like
penetration and unnatural rebound. To couple MPM particles with deformable and
non-volumetric clothes meshes, we also propose a penetration tracing algorithm
that reconstructs the signed distance field in the local area. Based on simulators
for each modality and the contact model, we develop a differentiable coupling
mechanism to simulate the interactions between soft bodies and the other two
types of materials. Comprehensive experiments are conducted to validate the
effectiveness and accuracy of the proposed differentiable pipeline in
downstream robotic manipulation applications. Supplementary materials and
videos are available on our project website at
https://sites.google.com/view/softmac. | Robotics |
What field is the article from? | Title: Synaptic Sampling of Neural Networks
Abstract: Probabilistic artificial neural networks offer intriguing prospects for
enabling the uncertainty of artificial intelligence methods to be described
explicitly in their function; however, the development of techniques that
quantify uncertainty by well-understood methods such as Monte Carlo sampling
has been limited by the high costs of stochastic sampling on deterministic
computing hardware. Emerging computing systems that are amenable to
hardware-level probabilistic computing, such as those that leverage stochastic
devices, may make probabilistic neural networks more feasible in the
not-too-distant future. This paper describes the scANN technique --
\textit{sampling (by coinflips) artificial neural networks} -- which enables
neural networks to be sampled directly by treating the weights as Bernoulli
coin flips. This method is natively well suited for probabilistic computing
techniques that focus on tunable stochastic devices, nearly matches fully
deterministic performance while also describing the uncertainty of correct and
incorrect neural network outputs. | Artificial Intelligence |
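The scANN abstract describes sampling a network "by coinflips", treating weights as Bernoulli random variables and reading off uncertainty from repeated samples. The NumPy sketch below shows that Monte Carlo loop for a single linear layer; mapping trained weights into probabilities and the full network structure are assumptions not given in the abstract.

```python
# Sketch of Bernoulli weight sampling for uncertainty estimation (assumption:
# weights are squashed to [0, 1] and treated as coinflip probabilities;
# repeated samples give a Monte Carlo output distribution).
import numpy as np

rng = np.random.default_rng(0)

def bernoulli_forward(x, w_prob, n_samples=100):
    """x: (dim,) input; w_prob: (dim, out) weight 'probabilities' in [0, 1]."""
    outputs = []
    for _ in range(n_samples):
        w_sample = rng.binomial(1, w_prob)       # one coinflip per weight
        outputs.append(x @ w_sample)
    outputs = np.stack(outputs)
    # Mean as the prediction, standard deviation as the uncertainty estimate.
    return outputs.mean(axis=0), outputs.std(axis=0)

x = rng.standard_normal(32)
w_prob = rng.uniform(0.0, 1.0, size=(32, 4))
mean, std = bernoulli_forward(x, w_prob)
print(mean.shape, std.shape)  # (4,) (4,)
```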
What field is the article from? | Title: QuickDrop: Efficient Federated Unlearning by Integrated Dataset Distillation
Abstract: Federated Unlearning (FU) aims to delete specific training data from an ML
model trained using Federated Learning (FL). We introduce QuickDrop, an
efficient and original FU method that utilizes dataset distillation (DD) to
accelerate unlearning and drastically reduces computational overhead compared
to existing approaches. In QuickDrop, each client uses DD to generate a compact
dataset representative of the original training dataset, called a distilled
dataset, and uses this compact dataset during unlearning. To unlearn specific
knowledge from the global model, QuickDrop has clients execute Stochastic
Gradient Ascent with samples from the distilled datasets, thus significantly
reducing computational overhead compared to conventional FU methods. We further
increase the efficiency of QuickDrop by ingeniously integrating DD into the FL
training process. By reusing the gradient updates produced during FL training
for DD, the overhead of creating distilled datasets becomes close to
negligible. Evaluations on three standard datasets show that, with comparable
accuracy guarantees, QuickDrop reduces the duration of unlearning by 463.8x
compared to model retraining from scratch and 65.1x compared to existing FU
approaches. We also demonstrate the scalability of QuickDrop with 100 clients
and show its effectiveness while handling multiple unlearning operations. | Machine Learning |
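QuickDrop's unlearning step, as described above, runs stochastic gradient ascent on samples from a client's distilled dataset. The sketch below shows one such ascent step for a toy linear model with squared error; the model, loss, and the distillation procedure itself are placeholders rather than the paper's implementation.

```python
# Minimal sketch of the unlearning step: gradient ASCENT on distilled "forget"
# samples so the model's fit to that knowledge degrades (NumPy, toy model).
import numpy as np

def sga_unlearn_step(w, x_distilled, y_distilled, lr=0.1):
    """One ascent step on mean squared error for a linear model w."""
    pred = x_distilled @ w
    grad = 2.0 * x_distilled.T @ (pred - y_distilled) / len(y_distilled)
    return w + lr * grad     # ascend (not descend) to erase the knowledge

rng = np.random.default_rng(0)
w = rng.standard_normal(8)
x_forget = rng.standard_normal((16, 8))                    # distilled "forget" set
y_forget = x_forget @ w + 0.1 * rng.standard_normal(16)    # labels the model currently fits well
w_new = sga_unlearn_step(w, x_forget, y_forget)
print(np.mean((x_forget @ w - y_forget) ** 2),       # loss before unlearning
      np.mean((x_forget @ w_new - y_forget) ** 2))   # higher loss after the ascent step
```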
What field is the article from? | Title: Can input reconstruction be used to directly estimate uncertainty of a regression U-Net model? -- Application to proton therapy dose prediction for head and neck cancer patients
Abstract: Estimating the uncertainty of deep learning models in a reliable and
efficient way has remained an open problem, where many different solutions have
been proposed in the literature. Most common methods are based on Bayesian
approximations, like Monte Carlo dropout (MCDO) or Deep ensembling (DE), but
they have a high inference time (i.e. require multiple inference passes) and
might not work for out-of-distribution detection (OOD) data (i.e. similar
uncertainty for in-distribution (ID) and OOD). In safety critical environments,
like medical applications, accurate and fast uncertainty estimation methods,
able to detect OOD data, are crucial, since wrong predictions can jeopardize
patients' safety. In this study, we present an alternative direct uncertainty
estimation method and apply it to a regression U-Net architecture. The method
consists of adding a branch from the bottleneck that reconstructs the input.
The input reconstruction error can be used as a surrogate for the model
uncertainty. For the proof-of-concept, our method is applied to proton therapy
dose prediction in head and neck cancer patients. Accuracy, time-gain, and OOD
detection are analyzed for our method in this particular application and
compared with the popular MCDO and DE. The input reconstruction method showed a
higher Pearson correlation coefficient with the prediction error (0.620) than
DE and MCDO (between 0.447 and 0.612). Moreover, our method allows an easier
identification of OOD (Z-score of 34.05). It estimates the uncertainty
simultaneously with the regression task and therefore requires less time and
fewer computational resources. | Machine Learning |
What field is the article from? | Title: DMLR: Data-centric Machine Learning Research -- Past, Present and Future
Abstract: Drawing from discussions at the inaugural DMLR workshop at ICML 2023 and
meetings prior, in this report we outline the relevance of community engagement
and infrastructure development for the creation of next-generation public
datasets that will advance machine learning science. We chart a path forward as
a collective effort to sustain the creation and maintenance of these datasets
and methods towards positive scientific, societal and business impact. | Machine Learning |
What field is the article from? | Title: Prompt Tuning for Zero-shot Compositional Learning
Abstract: Open World Compositional Zero-Shot Learning (OW-CZSL) is known to be an
extremely challenging task, which aims to recognize unseen compositions formed
from seen attributes and objects without any prior assumption of the output
space. In order to achieve this goal, a model has to be "smart" and
"knowledgeable". To be smart, a model should be good at reasoning the
interactions between attributes and objects from the seen compositions. While
"knowledgeable" means the model owns "common sense" to the open world that can
"foresee" some features of the unseen compositions. Most previous work focuses
on the "smart" part, while few of them provided an effective solution to
achieve the "knowledgeable" goal. In this paper, we proposed a framework named
Multi-Modal Prompt Tuning (MMPT) to inherit the "knowledgeable" property from
the large pre-trained vision-language model. Extensive experiments show that
our proposed MMPT obtains new state-of-the-art results in OW-CZSL task. On the
UT-Zappos dataset, MMPT pushes the AUC score to $29.8$, while the previous best
score is $26.5$. On the more challenging MIT-States dataset, the AUC score of
MMPT is 1.5 times better than the current state-of-the-art. | Computer Vision |
What field is the article from? | Title: Ask One More Time: Self-Agreement Improves Reasoning of Language Models in (Almost) All Scenarios
Abstract: Although chain-of-thought (CoT) prompting combined with language models has
achieved encouraging results on complex reasoning tasks, the naive greedy
decoding used in CoT prompting usually causes repetitiveness and local
optimality. To address this shortcoming, ensemble-optimization tries to obtain
multiple reasoning paths and assemble them into a final answer. However, current
ensemble-optimization methods either simply employ rule-based post-processing
such as \textit{self-consistency}, or train an additional model based on
several task-related human annotations to select the best one among multiple
reasoning paths, yet fail to generalize to realistic settings where the type of
input questions is unknown or the answer format of reasoning paths is unknown.
To avoid their limitations, we propose \textbf{self-agreement}, a generalizable
ensemble-optimization method applicable in almost all scenarios where the type of
input questions and the answer format of reasoning paths may be known or
unknown. Self-agreement firstly samples from language model's decoder to
generate a \textit{diverse} set of reasoning paths, and subsequently prompts
the language model \textit{one more time} to determine the optimal answer by
selecting the most \textit{agreed} answer among the sampled reasoning paths.
Self-agreement simultaneously achieves remarkable performance on six public
reasoning benchmarks and superior generalization capabilities. | Computational Linguistics |
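Self-agreement, as summarized above, first samples a diverse set of reasoning paths and then prompts the language model one more time to pick the most agreed answer. The sketch below mirrors that two-stage flow with a hypothetical `llm_sample` stand-in; the prompt wording and sampling settings are assumptions, not the paper's exact procedure.

```python
# Sketch of the two-stage self-agreement procedure:
# (1) sample diverse reasoning paths, (2) ask the model which answer most
# paths agree on. `llm_sample` is a hypothetical stand-in for a real LLM call.
import random

def llm_sample(prompt: str, temperature: float = 0.8) -> str:
    """Hypothetical LLM call; here it just returns a canned reasoning path."""
    return random.choice(["... so the answer is 12", "... therefore 12", "... giving 15"])

def self_agreement(question: str, n_paths: int = 5) -> str:
    paths = [llm_sample(f"Q: {question}\nLet's think step by step.") for _ in range(n_paths)]
    # Second call: let the model itself judge which answer the paths agree on.
    selection_prompt = (
        f"Q: {question}\n"
        + "\n".join(f"Path {i + 1}: {p}" for i, p in enumerate(paths))
        + "\nWhich final answer do most of these reasoning paths agree on?"
    )
    return llm_sample(selection_prompt, temperature=0.0)

print(self_agreement("What is 3 * 4?"))
```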
What field is the article from? | Title: Long Story Short: a Summarize-then-Search Method for Long Video Question Answering
Abstract: Large language models such as GPT-3 have demonstrated an impressive
capability to adapt to new tasks without requiring task-specific training data.
This capability has been particularly effective in settings such as narrative
question answering, where the diversity of tasks is immense, but the available
supervision data is small. In this work, we investigate if such language models
can extend their zero-shot reasoning abilities to long multimodal narratives in
multimedia content such as drama, movies, and animation, where the story plays
an essential role. We propose Long Story Short, a framework for narrative video
QA that first summarizes the narrative of the video to a short plot and then
searches parts of the video relevant to the question. We also propose to
enhance visual matching with CLIPCheck. Our model outperforms state-of-the-art
supervised models by a large margin, highlighting the potential of zero-shot QA
for long videos. | Computer Vision |
What field is the article from? | Title: New Approach for an Affective Computing-Driven Quality of Experience (QoE) Prediction
Abstract: In human interactions, emotion recognition is crucial. For this reason, the
topic of computer-vision approaches for automatic emotion recognition is
currently being extensively researched. Processing multi-channel
electroencephalogram (EEG) information is one of the most researched methods
for automatic emotion recognition. This paper presents a new model for an
affective computing-driven Quality of Experience (QoE) prediction. In order to
validate the proposed model, a publicly available dataset is used. The dataset
contains EEG, ECG, and respiratory data and is focused on a multimedia QoE
assessment context. The EEG data are retained, on which the differential entropy
and the power spectral density are calculated with an observation window of
three seconds. These two features were extracted to train several deep-learning
models to investigate the possibility of predicting QoE with five different
factors. The performance of these models is compared, and the best model is
optimized to improve the results. The best results were obtained with an
LSTM-based model, presenting an F1-score from 68% to 78%. An analysis of the
model and its features shows that the Delta frequency band is the least
necessary, that two electrodes have a higher importance, and that two other
electrodes have a very low impact on the model's performance. | Computer Vision |
What field is the article from? | Title: Hashing it Out: Predicting Unhealthy Conversations on Twitter
Abstract: Personal attacks in the context of social media conversations often lead to
fast-paced derailment, leading to even more harmful exchanges being made.
State-of-the-art systems for the detection of such conversational derailment
often make use of deep learning approaches for prediction purposes. In this
paper, we show that an Attention-based BERT architecture, pre-trained on a
large Twitter corpus and fine-tuned on our task, is efficient and effective in
making such predictions. This model shows clear advantages in performance to
the existing LSTM model we use as a baseline. Additionally, we show that this
impressive performance can be attained through fine-tuning on a relatively
small, novel dataset, particularly after mitigating overfitting issues through
synthetic oversampling techniques. By introducing the first transformer based
model for forecasting conversational events on Twitter, this work lays the
foundation for a practical tool to encourage better interactions on one of the
most ubiquitous social media platforms. | Computational Linguistics |
What field is the article from? | Title: A Simple yet Efficient Ensemble Approach for AI-generated Text Detection
Abstract: Recent Large Language Models (LLMs) have demonstrated remarkable capabilities
in generating text that closely resembles human writing across a wide range of
styles and genres. However, such capabilities are prone to potential abuse,
such as fake news generation, spam email creation, and misuse in academic
assignments. Hence, it is essential to build automated approaches capable of
distinguishing between artificially generated text and human-authored text. In
this paper, we propose a simple yet efficient solution to this problem by
ensembling predictions from multiple constituent LLMs. Compared to previous
state-of-the-art approaches, which are perplexity-based or use ensembles with
a number of LLMs, our condensed ensembling approach uses only two constituent
LLMs to achieve comparable performance. Experiments conducted on four benchmark
datasets for generative text classification show performance improvements in
the range of 0.5 to 100\% compared to previous state-of-the-art approaches. We
also study the influence that the training data from individual LLMs have on
model performance. We found that substituting commercially-restrictive
Generative Pre-trained Transformer (GPT) data with data generated from other
open language models such as Falcon, Large Language Model Meta AI (LLaMA2), and
Mosaic Pretrained Transformers (MPT) is a feasible alternative when developing
generative text detectors. Furthermore, to demonstrate zero-shot
generalization, we experimented with an English essays dataset, and results
suggest that our ensembling approach can handle new data effectively. | Computational Linguistics |
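The condensed ensemble described above combines predictions from only two constituent LLM-based detectors. A minimal sketch of one way to combine them, simple probability averaging with a threshold, is given below; the abstract does not specify the combination rule, so this averaging scheme is an assumption.

```python
# Sketch of a two-detector ensemble for AI-generated text detection:
# average the per-document "machine-generated" probabilities and threshold.
import numpy as np

def ensemble_detect(p_model_a: np.ndarray, p_model_b: np.ndarray, threshold: float = 0.5):
    """p_model_*: per-document probabilities that the text is machine-generated."""
    p_ensemble = (p_model_a + p_model_b) / 2.0
    return p_ensemble, (p_ensemble >= threshold).astype(int)

# Toy scores from two hypothetical constituent detectors.
p_a = np.array([0.91, 0.12, 0.55, 0.40])
p_b = np.array([0.88, 0.30, 0.70, 0.35])
probs, labels = ensemble_detect(p_a, p_b)
print(probs, labels)   # [0.895 0.21 0.625 0.375] [1 0 1 0]
```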
What field is the article from? | Title: CONTRASTE: Supervised Contrastive Pre-training With Aspect-based Prompts For Aspect Sentiment Triplet Extraction
Abstract: Existing works on Aspect Sentiment Triplet Extraction (ASTE) explicitly focus
on developing more efficient fine-tuning techniques for the task. Instead, our
motivation is to come up with a generic approach that can improve the
downstream performances of multiple ABSA tasks simultaneously. Towards this, we
present CONTRASTE, a novel pre-training strategy using CONTRastive learning to
enhance the ASTE performance. While we primarily focus on ASTE, we also
demonstrate the advantage of our proposed technique on other ABSA tasks such as
ACOS, TASD, and AESC. Given a sentence and its associated (aspect, opinion,
sentiment) triplets, first, we design aspect-based prompts with corresponding
sentiments masked. We then (pre)train an encoder-decoder model by applying
contrastive learning on the decoder-generated aspect-aware sentiment
representations of the masked terms. For fine-tuning the model weights thus
obtained, we then propose a novel multi-task approach where the base
encoder-decoder model is combined with two complementary modules, a
tagging-based Opinion Term Detector, and a regression-based Triplet Count
Estimator. Exhaustive experiments on four benchmark datasets and a detailed
ablation study establish the importance of each of our proposed components as
we achieve new state-of-the-art ASTE results. | Computational Linguistics |
What field is the article from? | Title: LLMEval: A Preliminary Study on How to Evaluate Large Language Models
Abstract: Recently, the evaluation of Large Language Models has emerged as a popular
area of research. The three crucial questions for LLM evaluation are ``what,
where, and how to evaluate''. However, the existing research mainly focuses on
the first two questions, which are basically what tasks to give the LLM during
testing and what kind of knowledge it should deal with. As for the third
question, which is about what standards to use, the types of evaluators, how to
score, and how to rank, there hasn't been much discussion. In this paper, we
analyze evaluation methods by comparing various criteria with both manual and
automatic evaluation, utilizing onsite, crowd-sourced, and public annotators and
GPT-4, with different scoring methods and ranking systems. We propose a new
dataset, LLMEval and conduct evaluations on 20 LLMs. A total of 2,186
individuals participated, leading to the generation of 243,337 manual
annotations and 57,511 automatic evaluation results. We perform comparisons and
analyses of different settings and draw 10 conclusions that can provide some
insights for evaluating LLMs in the future. The dataset and the results are
publicly available at https://github.com/llmeval . | Artificial Intelligence |
What field is the article from? | Title: Detecting value-expressive text posts in Russian social media
Abstract: Basic values are concepts or beliefs which pertain to desirable end-states
and transcend specific situations. Studying personal values in social media can
illuminate how and why societal values evolve especially when the stimuli-based
methods, such as surveys, are inefficient, for instance, in hard-to-reach
populations. On the other hand, user-generated content is driven by the massive
use of stereotyped, culturally defined speech constructions rather than
authentic expressions of personal values. We aimed to find a model that can
accurately detect value-expressive posts in Russian social media VKontakte. A
training dataset of 5,035 posts was annotated by three experts, 304
crowd-workers and ChatGPT. Crowd-workers and experts showed only moderate
agreement in categorizing posts. ChatGPT was more consistent but struggled with
spam detection. We applied an ensemble of human- and AI-assisted annotation
involving an active learning approach, subsequently trained several LLMs and
selected a model based on embeddings from pre-trained fine-tuned rubert-tiny2,
and reached a high quality of value detection with F1 = 0.75 (F1-macro = 0.80).
This model provides a crucial step to a study of values within and between
Russian social media users. | Computational Linguistics |
What field is the article from? | Title: Sample-Efficient and Safe Deep Reinforcement Learning via Reset Deep Ensemble Agents
Abstract: Deep reinforcement learning (RL) has achieved remarkable success in solving
complex tasks through its integration with deep neural networks (DNNs) as
function approximators. However, the reliance on DNNs has introduced a new
challenge called primacy bias, whereby these function approximators tend to
prioritize early experiences, leading to overfitting. To mitigate this primacy
bias, a reset method has been proposed, which performs periodic resets of a
portion or the entirety of a deep RL agent while preserving the replay buffer.
However, the use of the reset method can result in performance collapses after
executing the reset, which can be detrimental from the perspective of safe RL
and regret minimization. In this paper, we propose a new reset-based method
that leverages deep ensemble learning to address the limitations of the vanilla
reset method and enhance sample efficiency. The proposed method is evaluated
through various experiments including those in the domain of safe RL. Numerical
results show its effectiveness in high sample efficiency and safety
considerations. | Machine Learning |
What field is the article from? | Title: PreWoMe: Exploiting Presuppositions as Working Memory for Long Form Question Answering
Abstract: Information-seeking questions in long-form question answering (LFQA) often
prove misleading due to ambiguity or false presupposition in the question.
While many existing approaches handle misleading questions, they are tailored
to limited questions, which are insufficient in a real-world setting with
unpredictable input characteristics. In this work, we propose PreWoMe, a
unified approach capable of handling any type of information-seeking question.
The key idea of PreWoMe involves extracting presuppositions in the question and
exploiting them as working memory to generate feedback and action about the
question. Our experiment shows that PreWoMe is effective not only in tackling
misleading questions but also in handling normal ones, thereby demonstrating
the effectiveness of leveraging presuppositions, feedback, and action for
real-world QA settings. | Computational Linguistics |
What field is the article from? | Title: Inclusive Portraits: Race-Aware Human-in-the-Loop Technology
Abstract: AI has revolutionized the processing of various services, including the
automatic facial verification of people. Automated approaches have demonstrated
their speed and efficiency in verifying a large volume of faces, but they can
face challenges when processing content from certain communities, including
communities of people of color. This challenge has prompted the adoption of
"human-in-the-loop" (HITL) approaches, where human workers collaborate with the
AI to minimize errors. However, most HITL approaches do not consider workers'
individual characteristics and backgrounds. This paper proposes a new approach,
called Inclusive Portraits (IP), that connects with social theories around race
to design a racially-aware human-in-the-loop system. Our experiments have
provided evidence that incorporating race into human-in-the-loop (HITL) systems
for facial verification can significantly enhance performance, especially for
services delivered to people of color. Our findings also highlight the
importance of considering individual worker characteristics in the design of
HITL systems, rather than treating workers as a homogenous group. Our research
has significant design implications for developing AI-enhanced services that
are more inclusive and equitable. | Human-Computer Interaction |
What field is the article from? | Title: Towards Publicly Accountable Frontier LLMs: Building an External Scrutiny Ecosystem under the ASPIRE Framework
Abstract: With the increasing integration of frontier large language models (LLMs) into
society and the economy, decisions related to their training, deployment, and
use have far-reaching implications. These decisions should not be left solely
in the hands of frontier LLM developers. LLM users, civil society and
policymakers need trustworthy sources of information to steer such decisions
for the better. Involving outside actors in the evaluation of these systems -
what we term 'external scrutiny' - via red-teaming, auditing, and external
researcher access, offers a solution. Though there are encouraging signs of
increasing external scrutiny of frontier LLMs, its success is not assured. In
this paper, we survey six requirements for effective external scrutiny of
frontier AI systems and organize them under the ASPIRE framework: Access,
Searching attitude, Proportionality to the risks, Independence, Resources, and
Expertise. We then illustrate how external scrutiny might function throughout
the AI lifecycle and offer recommendations to policymakers. | Computers and Society |
What field is the article from? | Title: Inspecting Model Fairness in Ultrasound Segmentation Tasks
Abstract: With the rapid expansion of machine learning and deep learning (DL),
researchers are increasingly employing learning-based algorithms to alleviate
diagnostic challenges across diverse medical tasks and applications. While
advancements in diagnostic precision are notable, some researchers have
identified a concerning trend: their models exhibit biased performance across
subgroups characterized by different sensitive attributes. This bias not only
infringes upon the rights of patients but also has the potential to lead to
life-altering consequences. In this paper, we inspect a series of DL
segmentation models using two ultrasound datasets, aiming to assess the
presence of model unfairness in these specific tasks. Our findings reveal that
even state-of-the-art DL algorithms demonstrate unfair behavior in ultrasound
segmentation tasks. These results serve as a crucial warning, underscoring the
necessity for careful model evaluation before their deployment in real-world
scenarios. Such assessments are imperative to ensure ethical considerations and
mitigate the risk of adverse impacts on patient outcomes. | Computer Vision |
What field is the article from? | Title: Clinfo.ai: An Open-Source Retrieval-Augmented Large Language Model System for Answering Medical Questions using Scientific Literature
Abstract: The quickly-expanding nature of published medical literature makes it
challenging for clinicians and researchers to keep up with and summarize
recent, relevant findings in a timely manner. While several closed-source
summarization tools based on large language models (LLMs) now exist, rigorous
and systematic evaluations of their outputs are lacking. Furthermore, there is
a paucity of high-quality datasets and appropriate benchmark tasks with which
to evaluate these tools. We address these issues with four contributions: we
release Clinfo.ai, an open-source WebApp that answers clinical questions based
on dynamically retrieved scientific literature; we specify an information
retrieval and abstractive summarization task to evaluate the performance of
such retrieval-augmented LLM systems; we release a dataset of 200 questions and
corresponding answers derived from published systematic reviews, which we name
PubMed Retrieval and Synthesis (PubMedRS-200); and report benchmark results for
Clinfo.ai and other publicly available OpenQA systems on PubMedRS-200. | Information Retrieval |
What field is the article from? | Title: EipFormer: Emphasizing Instance Positions in 3D Instance Segmentation
Abstract: 3D instance segmentation plays a crucial role in comprehending 3D scenes.
Despite recent advancements in this field, existing approaches exhibit certain
limitations. These methods often rely on fixed instance positions obtained from
sampled representative points in vast 3D point clouds, using center prediction
or farthest point sampling. However, these selected positions may deviate from
actual instance centers, posing challenges in precisely grouping instances.
Moreover, the common practice of grouping candidate instances from a single
type of coordinates introduces difficulties in identifying neighboring
instances or incorporating edge points. To tackle these issues, we present a
novel Transformer-based architecture, EipFormer, which comprises progressive
aggregation and dual position embedding. The progressive aggregation mechanism
leverages instance positions to refine instance proposals. It enhances the
initial instance positions through weighted farthest point sampling and further
refines the instance positions and proposals using aggregation averaging and
center matching. Additionally, dual position embedding superposes the original
and centralized position embeddings, thereby enhancing the model performance in
distinguishing adjacent instances. Extensive experiments on popular datasets
demonstrate that EipFormer achieves superior or comparable performance compared
to state-of-the-art approaches. | Computer Vision |
What field is the article from? | Title: Anomalous Behavior Detection in Trajectory Data of Older Drivers
Abstract: Given a road network and a set of trajectory data, the anomalous behavior
detection (ABD) problem is to identify drivers that show significant
directional deviations, hardbrakings, and accelerations in their trips. The ABD
problem is important in many societal applications, including Mild Cognitive
Impairment (MCI) detection and safe route recommendations for older drivers.
The ABD problem is computationally challenging due to the large size of
temporally-detailed trajectory datasets. In this paper, we propose an
Edge-Attributed Matrix that can represent the key properties of
temporally-detailed trajectory datasets and identify abnormal driving
behaviors. Experiments using real-world datasets demonstrated that our approach
identifies abnormal driving behaviors. | Artificial Intelligence |
What field is the article from? | Title: Sample-based Dynamic Hierarchical Transformer with Layer and Head Flexibility via Contextual Bandit
Abstract: Transformer requires a fixed number of layers and heads which makes them
inflexible to the complexity of individual samples and expensive in training
and inference. To address this, we propose a sample-based Dynamic Hierarchical
Transformer (DHT) model whose layers and heads can be dynamically configured
with single data samples via solving contextual bandit problems. To determine
the number of layers and heads, we use the Uniform Confidence Bound while we
deploy combinatorial Thompson Sampling in order to select specific head
combinations given their number. Different from previous work that focuses on
compressing trained networks for inference only, DHT is not only advantageous
for adaptively optimizing the underlying network architecture during training
but also has a flexible network for efficient inference. To the best of our
knowledge, this is the first comprehensive data-driven dynamic transformer
without any additional auxiliary neural networks that implement the dynamic
system. According to the experiment results, we achieve up to 74% computational
savings for both training and inference with a minimal loss of accuracy. | Machine Learning |
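The abstract above selects the number of layers with a confidence-bound rule and head combinations with combinatorial Thompson Sampling. A minimal, non-contextual UCB-style sketch of the layer-count selection is given below; the candidate arms, the reward signal, and the exploration constant are illustrative assumptions, not the paper's implementation.

```python
import math
import random

class UCBLayerSelector:
    """Illustrative UCB selector over candidate layer counts (assumed arms)."""

    def __init__(self, candidate_layers=(2, 4, 6, 8), c=1.0):
        self.arms = list(candidate_layers)
        self.c = c                                   # exploration strength (assumed)
        self.counts = {a: 0 for a in self.arms}
        self.values = {a: 0.0 for a in self.arms}    # running mean reward per arm
        self.t = 0

    def select(self):
        self.t += 1
        # Play every arm once before applying the confidence bound.
        for a in self.arms:
            if self.counts[a] == 0:
                return a
        # UCB score: empirical mean plus a confidence radius.
        return max(
            self.arms,
            key=lambda a: self.values[a]
            + self.c * math.sqrt(2 * math.log(self.t) / self.counts[a]),
        )

    def update(self, arm, reward):
        # Reward could be, e.g., the negative validation loss of the sub-network.
        self.counts[arm] += 1
        self.values[arm] += (reward - self.values[arm]) / self.counts[arm]

# Toy usage: pretend different depths give noisy rewards with different means.
selector = UCBLayerSelector()
for _ in range(200):
    arm = selector.select()
    reward = {2: 0.70, 4: 0.78, 6: 0.80, 8: 0.79}[arm] + random.gauss(0, 0.05)
    selector.update(arm, reward)
print("most-pulled layer count:", max(selector.counts, key=selector.counts.get))
```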
What field is the article from? | Title: Improving Traffic Density Forecasting in Intelligent Transportation Systems Using Gated Graph Neural Networks
Abstract: This study delves into the application of graph neural networks in the realm
of traffic forecasting, a crucial facet of intelligent transportation systems.
Accurate traffic predictions are vital for functions like trip planning,
traffic control, and vehicle routing in such systems. Three prominent GNN
architectures, Graph Convolutional Networks (GCNs), GraphSAGE (Graph Sample and
Aggregation), and Gated Graph Neural Networks (GGNNs), are explored within the
context of traffic prediction. Each architecture's methodology is thoroughly
examined, including layer configurations, activation functions, and
hyperparameters. The primary
goal is to minimize prediction errors, with GGNNs emerging as the most
effective choice among the three models. The research outlines outcomes for
each architecture, elucidating their predictive performance through root mean
squared error and mean absolute error (MAE). Hypothetical results reveal
intriguing insights: GCNs display an RMSE of 9.10 and an MAE of 8.00, while
GraphSAGE shows improvement with an RMSE of 8.3 and an MAE of 7.5. Gated Graph
Neural Networks (GGNNs) exhibit the lowest RMSE at 9.15 and an impressive MAE
of 7.1, positioning them as the frontrunner. | Machine Learning |
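The abstract above compares architectures by RMSE and MAE. For reference, a short sketch of how these two error metrics are computed from observed and predicted traffic densities (the values below are placeholders, not the study's data):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error."""
    return float(np.sqrt(np.mean((np.asarray(y_true) - np.asarray(y_pred)) ** 2)))

def mae(y_true, y_pred):
    """Mean absolute error."""
    return float(np.mean(np.abs(np.asarray(y_true) - np.asarray(y_pred))))

# Placeholder traffic-density values, not the study's data.
observed = np.array([120.0, 95.0, 140.0, 80.0])
predicted = np.array([112.0, 99.0, 133.0, 86.0])
print("RMSE:", rmse(observed, predicted), "MAE:", mae(observed, predicted))
```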
What field is the article from? | Title: HiFi Tuner: High-Fidelity Subject-Driven Fine-Tuning for Diffusion Models
Abstract: This paper explores advancements in high-fidelity personalized image
generation through the utilization of pre-trained text-to-image diffusion
models. While previous approaches have made significant strides in generating
versatile scenes based on text descriptions and a few input images, challenges
persist in maintaining the subject fidelity within the generated images. In
this work, we introduce an innovative algorithm named HiFi Tuner to enhance the
appearance preservation of objects during personalized image generation. Our
proposed method employs a parameter-efficient fine-tuning framework, comprising
a denoising process and a pivotal inversion process. Key enhancements include
the utilization of mask guidance, a novel parameter regularization technique,
and the incorporation of step-wise subject representations to elevate the
sample fidelity. Additionally, we propose a reference-guided generation
approach that leverages the pivotal inversion of a reference image to mitigate
unwanted subject variations and artifacts. We further extend our method to a
novel image editing task: substituting the subject in an image through textual
manipulations. Experimental evaluations conducted on the DreamBooth dataset
using the Stable Diffusion model showcase promising results. Fine-tuning solely
on textual embeddings improves CLIP-T score by 3.6 points and improves DINO
score by 9.6 points over Textual Inversion. When fine-tuning all parameters,
HiFi Tuner improves CLIP-T score by 1.2 points and improves DINO score by 1.2
points over DreamBooth, establishing a new state of the art. | Computer Vision |
What field is the article from? | Title: CPST: Comprehension-Preserving Style Transfer for Multi-Modal Narratives
Abstract: We investigate the challenges of style transfer in multi-modal visual
narratives. Among static visual narratives such as comics and manga, there are
distinct visual styles in terms of presentation. They include style features
across multiple dimensions, such as panel layout, size, shape, and color. They
include both visual and text media elements. The layout of both text and media
elements is also significant in terms of narrative communication. The
sequential transitions between panels are where readers make inferences about
the narrative world. These feature differences provide an interesting challenge
for style transfer in which there are distinctions between the processing of
features for each modality. We introduce the notion of comprehension-preserving
style transfer (CPST) in such multi-modal domains. CPST requires not only
traditional metrics of style transfer but also metrics of narrative
comprehension. To spur further research in this area, we present an annotated
dataset of comics and manga and an initial set of algorithms that utilize
separate style transfer modules for the visual, textual, and layout parameters.
To test whether the style transfer preserves narrative semantics, we evaluate
this algorithm through visual story cloze tests inspired by work in
computational cognition of narrative systems. Understanding the connection
between style and narrative semantics provides insight for applications ranging
from informational brochure designs to data storytelling. | Computer Vision |
What field is the article from? | Title: When Graph Data Meets Multimodal: A New Paradigm for Graph Understanding and Reasoning
Abstract: Graph data is ubiquitous in the physical world, and it has always been a
challenge to efficiently model graph structures using a unified paradigm for
the understanding and reasoning on various graphs. Moreover, in the era of
large language models, integrating complex graph information into text
sequences has become exceptionally difficult, which hinders the ability to
interact with graph data through natural language instructions. The paper
presents a new paradigm for understanding and reasoning about graph data by
integrating image encoding and multimodal technologies. This approach enables
the comprehension of graph data through an instruction-response format,
utilizing GPT-4V's advanced capabilities. The study evaluates this paradigm on
various graph types, highlighting the model's strengths and weaknesses,
particularly in Chinese OCR performance and complex reasoning tasks. The
findings suggest a new direction for enhancing graph data processing and natural
language interaction. | Artificial Intelligence |
What field is the article from? | Title: Fine-tuning pre-trained extractive QA models for clinical document parsing
Abstract: Electronic health records (EHRs) contain a vast amount of high-dimensional
multi-modal data that can accurately represent a patient's medical history.
Unfortunately, most of this data is either unstructured or semi-structured,
rendering it unsuitable for real-time and retrospective analyses. A remote
patient monitoring (RPM) program for Heart Failure (HF) patients needs to have
access to clinical markers like EF (Ejection Fraction) or LVEF (Left
Ventricular Ejection Fraction) in order to ascertain eligibility and
appropriateness for the program. This paper explains a system that can parse
echocardiogram reports and verify EF values. This system helps identify
eligible HF patients who can be enrolled in such a program. At the heart of
this system is a pre-trained extractive QA transformer model that is fine-tuned
on custom-labeled data. The methods used to prepare such a model for deployment
are illustrated by running experiments on a public clinical dataset like
MIMIC-IV-Note. The pipeline can be used to generalize solutions to similar
problems in a low-resource setting. We found that the system saved over 1500
hours for our clinicians over 12 months by automating the task at scale. | Computational Linguistics |
What field is the article from? | Title: Deep Image Semantic Communication Model for Artificial Intelligent Internet of Things
Abstract: With the rapid development of Artificial Intelligent Internet of Things
(AIoT), the image data from AIoT devices has been growing explosively. In this
paper, a novel deep image semantic communication model is
proposed for the efficient image communication in AIoT. Particularly, at the
transmitter side, a high-precision image semantic segmentation algorithm is
proposed to extract the semantic information of the image to achieve
significant compression of the image data. At the receiver side, a semantic
image restoration algorithm based on Generative Adversarial Network (GAN) is
proposed to convert the semantic image to a real scene image with detailed
information. Simulation results demonstrate that the proposed image semantic
communication model can improve the image compression ratio and recovery
accuracy by 71.93% and 25.07% on average in comparison with WebP and CycleGAN,
respectively. More importantly, our demo experiment shows that the proposed
model reduces the total delay by 95.26% in the image communication, when
comparing with the original image transmission. | Computer Vision |
What field is the article from? | Title: Prompt Sketching for Large Language Models
Abstract: Many recent prompting strategies for large language models (LLMs) query the
model multiple times sequentially -- first to produce intermediate results and
then the final answer. However, using these methods, both decoder and model are
unaware of potential follow-up prompts, leading to disconnected and undesirably
wordy intermediate responses. In this work, we address this issue by proposing
prompt sketching, a new prompting paradigm in which an LLM does not only
respond by completing a prompt, but by predicting values for multiple variables
in a template. This way, sketching grants users more control over the
generation process, e.g., by providing a reasoning framework via intermediate
instructions, leading to better overall results. The key idea enabling
sketching with existing, autoregressive models is to adapt the decoding
procedure to also score follow-up instructions during text generation, thus
optimizing overall template likelihood in inference. Our experiments show that
in a zero-shot setting, prompt sketching outperforms existing, sequential
prompting schemes such as direct asking or chain-of-thought on 7 out of 8 LLM
benchmarking tasks, including state tracking, arithmetic reasoning, and general
question answering. To facilitate future use, we release a number of generic,
yet effective sketches applicable to many tasks, and an open source library
called dclib, powering our sketch-aware decoders. | Computational Linguistics |
What field is the article from? | Title: Beyond Two-Tower Matching: Learning Sparse Retrievable Cross-Interactions for Recommendation
Abstract: Two-tower models are a prevalent matching framework for recommendation, which
have been widely deployed in industrial applications. The success of two-tower
matching attributes to its efficiency in retrieval among a large number of
items, since the item tower can be precomputed and used for fast Approximate
Nearest Neighbor (ANN) search. However, it suffers two main challenges,
including limited feature interaction capability and reduced accuracy in online
serving. Existing approaches attempt to design novel late interactions instead
of dot products, but they still fail to support complex feature interactions or
lose retrieval efficiency. To address these challenges, we propose a new
matching paradigm named SparCode, which supports not only sophisticated feature
interactions but also efficient retrieval. Specifically, SparCode introduces an
all-to-all interaction module to model fine-grained query-item interactions.
Besides, we design a discrete code-based sparse inverted index jointly trained
with the model to achieve effective and efficient model inference. Extensive
experiments have been conducted on open benchmark datasets to demonstrate the
superiority of our framework. The results show that SparCode significantly
improves the accuracy of candidate item matching while retaining the same level
of retrieval efficiency with two-tower models. Our source code will be
available at MindSpore/models. | Information Retrieval |
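The abstract above couples an all-to-all interaction module with a discrete code-based sparse inverted index for retrieval. A minimal sketch of the inverted-index side follows; the code assignments and the scoring function are placeholders, since SparCode learns both jointly with the model.

```python
from collections import defaultdict

# Hypothetical discrete codes assigned to items (in SparCode these are learned).
item_codes = {
    "item_1": {3, 17},
    "item_2": {3, 42},
    "item_3": {8, 42},
}

# Build an inverted index: code id -> set of items carrying that code.
inverted_index = defaultdict(set)
for item, codes in item_codes.items():
    for c in codes:
        inverted_index[c].add(item)

def retrieve(query_codes, score_fn):
    """Gather candidates whose codes overlap the query's, then rank them."""
    candidates = set()
    for c in query_codes:
        candidates |= inverted_index.get(c, set())
    return sorted(candidates, key=score_fn, reverse=True)

# Placeholder scorer; the real model applies its all-to-all interaction module here.
print(retrieve({3, 8}, score_fn=lambda item: len(item_codes[item] & {3, 8})))
```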
What field is the article from? | Title: Simple Transferability Estimation for Regression Tasks
Abstract: We consider transferability estimation, the problem of estimating how well
deep learning models transfer from a source to a target task. We focus on
regression tasks, which received little previous attention, and propose two
simple and computationally efficient approaches that estimate transferability
based on the negative regularized mean squared error of a linear regression
model. We prove novel theoretical results connecting our approaches to the
actual transferability of the optimal target models obtained from the transfer
learning process. Despite their simplicity, our approaches significantly
outperform existing state-of-the-art regression transferability estimators in
both accuracy and efficiency. On two large-scale keypoint regression
benchmarks, our approaches yield 12% to 36% better results on average while
being at least 27% faster than previous state-of-the-art methods. | Machine Learning |
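The abstract above scores transferability by the negative regularized mean squared error of a linear regression fitted on source-model features. A self-contained sketch using the ridge closed form is below; the exact regularization form, the feature extraction, and the toy data are assumptions, not the paper's estimator verbatim.

```python
import numpy as np

def neg_regularized_mse(features, targets, lam=1.0):
    """Transferability proxy: negative regularized MSE of a ridge regression.

    features: (n, d) source-model representations of target-task inputs
    targets:  (n, k) regression targets of the target task
    """
    X, Y = np.asarray(features, dtype=float), np.asarray(targets, dtype=float)
    n, d = X.shape
    # Closed-form ridge solution W = (X^T X + lam I)^{-1} X^T Y.
    W = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ Y)
    residual = Y - X @ W
    mse = np.mean(np.sum(residual ** 2, axis=1))
    penalty = lam * np.sum(W ** 2) / n
    return -(mse + penalty)

# Toy data: a less negative score suggests an easier transfer.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 16))
Y_easy = X @ rng.normal(size=(16, 2)) + 0.1 * rng.normal(size=(200, 2))
Y_hard = rng.normal(size=(200, 2))
print(neg_regularized_mse(X, Y_easy), neg_regularized_mse(X, Y_hard))
```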
What field is the article from? | Title: Equivariant Flow Matching with Hybrid Probability Transport
Abstract: The generation of 3D molecules requires simultaneously deciding the
categorical features~(atom types) and continuous features~(atom coordinates).
Deep generative models, especially Diffusion Models (DMs), have demonstrated
effectiveness in generating feature-rich geometries. However, existing DMs
typically suffer from unstable probability dynamics with inefficient sampling
speed. In this paper, we introduce geometric flow matching, which enjoys the
advantages of both equivariant modeling and stabilized probability dynamics.
More specifically, we propose a hybrid probability path where the coordinates
probability path is regularized by an equivariant optimal transport, and the
information between different modalities is aligned. Experimentally, the
proposed method could consistently achieve better performance on multiple
molecule generation benchmarks with 4.75$\times$ speed up of sampling on
average. | Machine Learning |
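The abstract above regularizes the coordinate probability path with an equivariant optimal transport coupling. The sketch below shows only the generic minibatch-OT flow-matching recipe (pair noise with data by a minimum-cost assignment, then regress the straight-line velocity); equivariance handling and the categorical atom features are omitted, so treat it as an assumed simplification rather than the paper's method.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def ot_flow_matching_targets(noise, data, t):
    """Pair noise and data by minimum-cost assignment, then build FM targets.

    noise, data: (batch, n_atoms * 3) flattened coordinates
    t:           (batch,) interpolation times in [0, 1]
    Returns interpolated points x_t and velocity targets (data - paired noise).
    """
    cost = ((noise[:, None, :] - data[None, :, :]) ** 2).sum(-1)
    rows, cols = linear_sum_assignment(cost)        # minibatch OT coupling
    x0, x1 = noise[rows], data[cols]
    t = t[:, None]
    x_t = (1.0 - t) * x0 + t * x1                   # straight-line probability path
    v_target = x1 - x0                              # constant velocity along the path
    return x_t, v_target

rng = np.random.default_rng(0)
noise = rng.normal(size=(8, 9))                     # e.g. 3 atoms * 3 coords, flattened
data = rng.normal(size=(8, 9))
x_t, v = ot_flow_matching_targets(noise, data, rng.uniform(size=8))
print(x_t.shape, v.shape)
```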
What field is the article from? | Title: Competition-Level Problems are Effective LLM Evaluators
Abstract: Large language models (LLMs) have demonstrated impressive reasoning
capabilities, yet there is ongoing debate about these abilities and the
recently raised data contamination problem. This paper aims to evaluate the
reasoning capacities of LLMs, specifically in solving recent competition-level
programming problems in Codeforces, which are expert-crafted and unique,
requiring deep understanding and robust reasoning skills. We first provide a
comprehensive evaluation of GPT-4's perceived zero-shot performance on this
task, considering various aspects such as problems' release time, difficulties,
and types of errors encountered. Surprisingly, the perceived performance of
GPT-4 has experienced a cliff-like decline in problems after September 2021
consistently across all the difficulties and types of problems, which shows the
potential data contamination, as well as the challenges for any existing LLM to
solve unseen complex reasoning problems. We further explore various approaches
such as fine-tuning, Chain-of-Thought prompting and problem description
simplification, unfortunately none of them is able to consistently mitigate the
challenges. Through our work, we emphasize the importance of this excellent data
source for assessing the genuine reasoning capabilities of LLMs, and foster the
development of LLMs with stronger reasoning abilities and better generalization
in the future. | Computational Linguistics |
What field is the article from? | Title: Parameter Exchange for Robust Dynamic Domain Generalization
Abstract: Agnostic domain shift is the main reason for model degradation on the unknown
target domains, which brings an urgent need to develop Domain Generalization
(DG). Recent advances in DG use dynamic networks to achieve training-free
adaptation on the unknown target domains, termed Dynamic Domain Generalization
(DDG), which compensates for the lack of self-adaptability in static models
with fixed weights. The parameters of dynamic networks can be decoupled into a
static and a dynamic component, which are designed to learn domain-invariant
and domain-specific features, respectively. Based on the existing arts, in this
work, we try to push the limits of DDG by disentangling the static and dynamic
components more thoroughly from an optimization perspective. Our main
consideration is that we can enable the static component to learn
domain-invariant features more comprehensively by augmenting the
domain-specific information. As a result, the more comprehensive
domain-invariant features learned by the static component can then enforce the
dynamic component to focus more on learning adaptive domain-specific features.
To this end, we propose a simple yet effective Parameter Exchange (PE) method
to perturb the combination between the static and dynamic components. We
optimize the model using the gradients from both the perturbed and
non-perturbed feed-forward jointly to implicitly achieve the aforementioned
disentanglement. In this way, the two components can be optimized in a
mutually-beneficial manner, which can resist the agnostic domain shifts and
improve the self-adaptability on the unknown target domain. Extensive
experiments show that PE can be easily plugged into existing dynamic networks
to improve their generalization ability without bells and whistles. | Computer Vision |
What field is the article from? | Title: Understanding and Leveraging the Learning Phases of Neural Networks
Abstract: The learning dynamics of deep neural networks are not well understood. The
information bottleneck (IB) theory proclaimed separate fitting and compression
phases. But they have since been heavily debated. We comprehensively analyze
the learning dynamics by investigating a layer's reconstruction ability of the
input and prediction performance based on the evolution of parameters during
training. We empirically show the existence of three phases using common
datasets and architectures such as ResNet and VGG: (i) near constant
reconstruction loss, (ii) decrease, and (iii) increase. We also derive an
empirically grounded data model and prove the existence of phases for
single-layer networks. Technically, our approach leverages classical complexity
analysis. It differs from IB by relying on measuring reconstruction loss rather
than information theoretic measures to relate information of intermediate
layers and inputs. Our work implies a new best practice for transfer learning:
We show empirically that the pre-training of a classifier should stop well
before its performance is optimal. | Machine Learning |
What field is the article from? | Title: Know Your Audience: Do LLMs Adapt to Different Age and Education Levels?
Abstract: Large language models (LLMs) offer a range of new possibilities, including
adapting the text to different audiences and their reading needs. But how well
do they adapt? We evaluate the readability of answers generated by four
state-of-the-art LLMs (commercial and open-source) to science questions when
prompted to target different age groups and education levels. To assess the
adaptability of LLMs to diverse audiences, we compare the readability scores of
the generated responses against the recommended comprehension level of each age
and education group. We find large variations in the readability of the answers
by different LLMs. Our results suggest LLM answers need to be better adapted to
the intended audience demographics to be more comprehensible. They underline
the importance of enhancing the adaptability of LLMs in education settings to
cater to diverse age and education levels. Overall, current LLMs have set
readability ranges and do not adapt well to different audiences, even when
prompted. That limits their potential for educational purposes. | Computational Linguistics |
What field is the article from? | Title: Optimize Planning Heuristics to Rank, not to Estimate Cost-to-Goal
Abstract: In imitation learning for planning, parameters of heuristic functions are
optimized against a set of solved problem instances. This work revisits the
necessary and sufficient conditions of strictly optimally efficient heuristics
for forward search algorithms, mainly A* and greedy best-first search, which
expand only states on the returned optimal path. It then proposes a family of
loss functions based on ranking tailored for a given variant of the forward
search algorithm. Furthermore, from a learning theory point of view, it
discusses why optimizing cost-to-goal $h^*$ is unnecessarily difficult. The
experimental comparison on a diverse set of problems unequivocally supports the
derived theory. | Artificial Intelligence |
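The abstract above proposes loss functions that train the heuristic to rank states for the search algorithm rather than to regress cost-to-goal $h^*$. A minimal sketch of one ranking-style objective (a pairwise hinge loss preferring states on the returned optimal path over off-path ones) follows; the network, the pairing scheme, and the margin are illustrative assumptions, not the paper's exact loss family.

```python
import torch
import torch.nn as nn

# Tiny stand-in heuristic network over 8-dimensional state features (assumed).
heuristic = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))

def ranking_loss(on_path_states, off_path_states, margin=1.0):
    """Hinge loss: on-path states should score lower (better) than off-path ones."""
    h_on = heuristic(on_path_states).squeeze(1)     # (n,)
    h_off = heuristic(off_path_states).squeeze(1)   # (m,)
    diffs = h_on[:, None] - h_off[None, :]          # all on-path / off-path pairs
    return torch.clamp(margin + diffs, min=0).mean()

# Placeholder features standing in for states from solved problem instances.
on_path, off_path = torch.randn(4, 8), torch.randn(6, 8)
optimizer = torch.optim.Adam(heuristic.parameters(), lr=1e-3)
optimizer.zero_grad()
loss = ranking_loss(on_path, off_path)
loss.backward()
optimizer.step()
print(float(loss))
```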
What field is the article from? | Title: Mirror: A Universal Framework for Various Information Extraction Tasks
Abstract: Sharing knowledge between information extraction tasks has always been a
challenge due to the diverse data formats and task variations. Meanwhile, this
divergence leads to information waste and increases difficulties in building
complex applications in real scenarios. Recent studies often formulate IE tasks
as a triplet extraction problem. However, such a paradigm does not support
multi-span and n-ary extraction, leading to weak versatility. To this end, we
reorganize IE problems into unified multi-slot tuples and propose a universal
framework for various IE tasks, namely Mirror. Specifically, we recast existing
IE tasks as a multi-span cyclic graph extraction problem and devise a
non-autoregressive graph decoding algorithm to extract all spans in a single
step. It is worth noting that this graph structure is incredibly versatile, and
it supports not only complex IE tasks, but also machine reading comprehension
and classification tasks. We manually construct a corpus containing 57 datasets
for model pretraining, and conduct experiments on 30 datasets across 8
downstream tasks. The experimental results demonstrate that our model has
decent compatibility and outperforms or reaches competitive performance with
SOTA systems under few-shot and zero-shot settings. The code, model weights,
and pretraining corpus are available at https://github.com/Spico197/Mirror . | Computational Linguistics |
What field is the article from? | Title: Evaluating the Potential of Leading Large Language Models in Reasoning Biology Questions
Abstract: Recent advances in Large Language Models (LLMs) have presented new
opportunities for integrating Artificial General Intelligence (AGI) into
biological research and education. This study evaluated the capabilities of
leading LLMs, including GPT-4, GPT-3.5, PaLM2, Claude2, and SenseNova, in
answering conceptual biology questions. The models were tested on a
108-question multiple-choice exam covering biology topics in molecular biology,
biological techniques, metabolic engineering, and synthetic biology. Among the
models, GPT-4 achieved the highest average score of 90 and demonstrated the
greatest consistency across trials with different prompts. The results
indicated GPT-4's proficiency in logical reasoning and its potential to aid
biology research through capabilities like data analysis, hypothesis
generation, and knowledge integration. However, further development and
validation are still required before the promise of LLMs in accelerating
biological discovery can be realized. | Computational Linguistics |
What field is the article from? | Title: WorldSense: A Synthetic Benchmark for Grounded Reasoning in Large Language Models
Abstract: We propose WorldSense, a benchmark designed to assess the extent to which
LLMs are consistently able to sustain tacit world models, by testing how they
draw simple inferences from descriptions of simple arrangements of entities.
Worldsense is a synthetic benchmark with three problem types, each with their
own trivial control, which explicitly avoids bias by decorrelating the abstract
structure of problems from the vocabulary and expressions, and by decorrelating
all problem subparts with the correct response. We run our benchmark on three
state-of-the-art chat-LLMs (GPT3.5, GPT4 and Llama2-chat) and show that these
models make errors even with as few as three objects. Furthermore, they have
quite heavy response biases, preferring certain responses irrespective of the
question. Errors persist even with chain-of-thought prompting and in-context
learning. Lastly, we show that while finetuning on similar problems does result
in substantial improvements -- within- and out-of-distribution -- the finetuned
models do not generalise beyond a constraint problem space. | Computational Linguistics |
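The abstract above builds synthetic problems whose abstract structure is decorrelated from their vocabulary. A toy generator in that spirit (a random left-to-right arrangement of freshly named entities plus one inference question) is sketched below; it is an assumed illustration, not one of the benchmark's actual problem types.

```python
import random

def make_arrangement_problem(n_entities=3, rng=random.Random(0)):
    """Generate a simple left-to-right arrangement and one inference question."""
    # Fresh, meaningless names decorrelate vocabulary from problem structure.
    names = ["".join(rng.choices("bcdfgklmnprstvz", k=5)) for _ in range(n_entities)]
    order = names[:]                        # hidden ground-truth arrangement
    rng.shuffle(order)
    facts = [f"{order[i]} is immediately left of {order[i + 1]}."
             for i in range(n_entities - 1)]
    a, b = rng.sample(order, 2)
    question = f"Is {a} left of {b}? Answer TRUE or FALSE."
    answer = order.index(a) < order.index(b)
    return " ".join(facts), question, answer

description, question, answer = make_arrangement_problem()
print(description)
print(question, "->", answer)
```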
What field is the article from? | Title: Translating Universal Scene Descriptions into Knowledge Graphs for Robotic Environment
Abstract: Robots performing human-scale manipulation tasks require an extensive amount
of knowledge about their surroundings in order to perform their actions
competently and human-like. In this work, we investigate the use of virtual
reality technology as an implementation for robot environment modeling, and
present a technique for translating scene graphs into knowledge bases. To this
end, we take advantage of the Universal Scene Description (USD) format which is
an emerging standard for the authoring, visualization and simulation of complex
environments. We investigate the conversion of USD-based environment models
into Knowledge Graph (KG) representations that facilitate semantic querying and
integration with additional knowledge sources. | Robotics |
What field is the article from? | Title: Neurosymbolic Value-Inspired AI (Why, What, and How)
Abstract: The rapid progression of Artificial Intelligence (AI) systems, facilitated by
the advent of Large Language Models (LLMs), has resulted in their widespread
application to provide human assistance across diverse industries. This trend
has sparked significant discourse centered around the ever-increasing need for
LLM-based AI systems to function among humans as part of human society, sharing
human values, especially as these systems are deployed in high-stakes settings
(e.g., healthcare, autonomous driving, etc.). Towards this end, neurosymbolic
AI systems are attractive due to their potential to enable easy-to-understand
and interpretable interfaces for facilitating value-based decision-making, by
leveraging explicit representations of shared values. In this paper, we
introduce substantial extensions to Kahneman's System 1/System 2 framework and
propose a neurosymbolic computational framework called Value-Inspired AI (VAI).
It outlines the crucial components essential for the robust and practical
implementation of VAI systems, aiming to represent and integrate various
dimensions of human values. Finally, we further offer insights into the current
progress made in this direction and outline potential future directions for the
field. | Artificial Intelligence |
What field is the article from? | Title: Are "Hierarchical" Visual Representations Hierarchical?
Abstract: Learned visual representations often capture large amounts of semantic
information for accurate downstream applications. Human understanding of the
world is fundamentally grounded in hierarchy. To mimic this and further improve
representation capabilities, the community has explored "hierarchical" visual
representations that aim at modeling the underlying hierarchy of the visual
world. In this work, we set out to investigate if hierarchical visual
representations truly capture the human perceived hierarchy better than
standard learned representations. To this end, we create HierNet, a suite of 12
datasets spanning 3 kinds of hierarchy from the BREEDs subset of ImageNet.
After extensive evaluation of Hyperbolic and Matryoshka Representations across
training setups, we conclude that they do not capture hierarchy any better than
the standard representations but can assist in other aspects like search
efficiency and interpretability. Our benchmark and the datasets are
open-sourced at https://github.com/ethanlshen/HierNet. | Computer Vision |
What field is the article from? | Title: Building a Safer Maritime Environment Through Multi-Path Long-Term Vessel Trajectory Forecasting
Abstract: Maritime transportation is paramount in achieving global economic growth,
entailing concurrent ecological obligations in sustainability and safeguarding
endangered marine species, most notably preserving large whale populations. In
this regard, the Automatic Identification System (AIS) data plays a significant
role by offering real-time streaming data on vessel movement, allowing enhanced
traffic monitoring. This study explores using AIS data to prevent
vessel-to-whale collisions by forecasting long-term vessel trajectories from
engineered AIS data sequences. For such a task, we have developed an
encoder-decoder model architecture using Bidirectional Long Short-Term Memory
Networks (Bi-LSTM) to predict the next 12 hours of vessel trajectories using 1
to 3 hours of AIS data as input. We feed the model with probabilistic features
engineered from historical AIS data that refer to each trajectory's potential
route and destination. The model then predicts the vessel's trajectory,
considering these additional features by leveraging convolutional layers for
spatial feature learning and a position-aware attention mechanism that
increases the importance of recent timesteps of a sequence during temporal
feature learning. The probabilistic features have an F1 Score of approximately
85% and 75% for each feature type, respectively, demonstrating their
effectiveness in augmenting information to the neural network. We test our
model on the Gulf of St. Lawrence, a region known to be the habitat of North
Atlantic Right Whales (NARW). Our model achieved a high R2 score of over 98%
using various techniques and features. It stands out among other approaches as
it can make complex decisions during turnings and path selection. Our study
highlights the potential of data engineering and trajectory forecasting models
for marine life species preservation. | Machine Learning |
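The abstract above uses a Bi-LSTM encoder-decoder that maps 1 to 3 hours of engineered AIS features to a 12-hour trajectory forecast. A bare-bones PyTorch sketch of that encoder-decoder shape follows; the feature sizes and hyperparameters are assumptions, and the paper's convolutional layers and position-aware attention are omitted.

```python
import torch
import torch.nn as nn

class BiLSTMSeq2Seq(nn.Module):
    """Minimal Bi-LSTM encoder + LSTM decoder for multi-step trajectory forecasting."""

    def __init__(self, in_dim=4, hidden=64, out_dim=2, horizon=12):
        super().__init__()
        self.horizon = horizon
        self.encoder = nn.LSTM(in_dim, hidden, batch_first=True, bidirectional=True)
        self.decoder = nn.LSTM(out_dim, 2 * hidden, batch_first=True)
        self.head = nn.Linear(2 * hidden, out_dim)

    def forward(self, x):
        # x: (batch, input_steps, in_dim) engineered AIS features.
        _, (h, c) = self.encoder(x)
        # Merge the forward/backward final states into the decoder's initial state.
        h0 = torch.cat([h[0], h[1]], dim=-1).unsqueeze(0)
        c0 = torch.cat([c[0], c[1]], dim=-1).unsqueeze(0)
        state = (h0, c0)
        step = torch.zeros(x.size(0), 1, self.head.out_features, device=x.device)
        outputs = []
        for _ in range(self.horizon):            # autoregressive 12-step rollout
            dec_out, state = self.decoder(step, state)
            step = self.head(dec_out)
            outputs.append(step)
        return torch.cat(outputs, dim=1)         # (batch, horizon, out_dim)

model = BiLSTMSeq2Seq()
print(model(torch.randn(5, 18, 4)).shape)        # -> torch.Size([5, 12, 2])
```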
What field is the article from? | Title: An Open Source Data Contamination Report for Large Language Models
Abstract: Data contamination in language model evaluation is increasingly prevalent as
the popularity of large language models grows. It allows models to "cheat" via
memorisation instead of displaying true capabilities. Therefore, contamination
analysis has become a crucial part of reliable model evaluation to validate
results. However, existing contamination analysis is usually conducted
internally by LLM developers and often lacks transparency and completeness.
This paper presents an open-source data contamination report for the Llama
series models. We analyse six popular multi-choice QA benchmarks and quantify
their overlapping with the training set of Llama. Various levels of
contamination ranging from 1% to 8.7% are found across benchmarks. Our
comparison also reveals that Llama models can gain over 5% higher accuracy on
contaminated subsets versus clean subsets. Data and code are available at:
https://github.com/liyucheng09/Contamination_Detector. | Computational Linguistics |
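The abstract above quantifies overlap between benchmark questions and the training set. The released detector has its own pipeline; the sketch below is only a generic n-gram-overlap illustration of the idea, using placeholder strings rather than the Llama training data.

```python
def ngrams(text, n=8):
    tokens = text.lower().split()
    return {" ".join(tokens[i:i + n]) for i in range(len(tokens) - n + 1)}

def overlap_ratio(benchmark_items, training_corpus, n=8):
    """Fraction of benchmark items sharing at least one n-gram with the corpus."""
    corpus_ngrams = set()
    for doc in training_corpus:
        corpus_ngrams |= ngrams(doc, n)
    flagged = sum(1 for item in benchmark_items if ngrams(item, n) & corpus_ngrams)
    return flagged / max(len(benchmark_items), 1)

# Toy example with placeholder strings.
corpus = ["the quick brown fox jumps over the lazy dog near the river bank today"]
questions = ["the quick brown fox jumps over the lazy dog near the river bank",
             "what is the capital of france and when was it founded exactly"]
print(overlap_ratio(questions, corpus, n=8))   # -> 0.5 on this toy input
```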
What field is the article from? | Title: Pixel-Superpixel Contrastive Learning and Pseudo-Label Correction for Hyperspectral Image Clustering
Abstract: Hyperspectral image (HSI) clustering is gaining considerable attention owing
to recent methods that overcome the inefficiency and misleading results from
the absence of supervised information. Contrastive learning methods excel at
existing pixel level and super pixel level HSI clustering tasks. The
pixel-level contrastive learning method can effectively improve the ability of
the model to capture fine features of HSI but requires a large time overhead.
The super pixel-level contrastive learning method utilizes the homogeneity of
HSI and reduces computing resources; however, it yields rough classification
results. To exploit the strengths of both methods, we present a pixel-superpixel
contrastive learning and pseudo-label correction (PSCPC) method for the
HSI clustering. PSCPC can reasonably capture domain-specific and fine-grained
features through super pixels and the comparative learning of a small number of
pixels within the super pixels. To improve the clustering performance of super
pixels, this paper proposes a pseudo-label correction module that aligns the
clustering pseudo-labels of pixels and super-pixels. In addition, pixel-level
clustering results are used to supervise super pixel-level clustering,
improving the generalization ability of the model. Extensive experiments
demonstrate the effectiveness and efficiency of PSCPC. | Computer Vision |
What field is the article from? | Title: The Impact of Adversarial Node Placement in Decentralized Federated Learning Networks
Abstract: As Federated Learning (FL) grows in popularity, new decentralized frameworks
are becoming widespread. These frameworks leverage the benefits of
decentralized environments to enable fast and energy-efficient inter-device
communication. However, this growing popularity also intensifies the need for
robust security measures. While existing research has explored various aspects
of FL security, the role of adversarial node placement in decentralized
networks remains largely unexplored. This paper addresses this gap by analyzing
the performance of decentralized FL for various adversarial placement
strategies when adversaries can jointly coordinate their placement within a
network. We establish two baseline strategies for placing adversarial nodes:
random placement and network centrality-based placement. Building on this
foundation, we propose a novel attack algorithm that prioritizes adversarial
spread over adversarial centrality by maximizing the average network distance
between adversaries. We show that the new attack algorithm significantly
impacts key performance metrics such as testing accuracy, outperforming the
baseline frameworks by between 9% and 66.5% for the considered setups. Our
findings provide valuable insights into the vulnerabilities of decentralized FL
systems, setting the stage for future research aimed at developing more secure
and robust decentralized FL frameworks. | Cryptography and Security |
What field is the article from? | Title: In-Context Learning Functions with Varying Number of Minima
Abstract: Large Language Models (LLMs) have proven effective at In-Context Learning
(ICL), an ability that allows them to create predictors from labeled examples.
Few studies have explored the interplay between ICL and specific properties of
functions it attempts to approximate. In our study, we use a formal framework
to explore ICL and propose a new task of approximating functions with varying
number of minima. We implement a method that allows for producing functions
with given inputs as minima. We find that increasing the number of minima
degrades ICL performance. At the same time, our evaluation shows that ICL
outperforms 2-layer Neural Network (2NN) model. Furthermore, ICL learns faster
than 2NN in all settings. We validate the findings through a set of few-shot
experiments across various hyperparameter configurations. | Machine Learning |
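The abstract above relies on a method for producing functions whose minima sit at chosen inputs. One simple construction, a product of squared distances that is zero exactly at each chosen point, is sketched below; it is just one way to realize the requirement, not necessarily the paper's generator.

```python
import numpy as np

def make_function_with_minima(minima):
    """Return f(x) = prod_i (x - m_i)^2, which is zero exactly at each m_i."""
    minima = np.asarray(minima, dtype=float)

    def f(x):
        x = np.asarray(x, dtype=float)
        return np.prod((x[..., None] - minima) ** 2, axis=-1)

    return f

f = make_function_with_minima([-2.0, 0.5, 3.0])
xs = np.linspace(-4, 4, 1601)
ys = f(xs)
# Local minima of the sampled curve: points lower than both neighbours.
local_min_x = xs[1:-1][(ys[1:-1] < ys[:-2]) & (ys[1:-1] < ys[2:])]
print(np.round(local_min_x, 2))   # approximately [-2.   0.5  3. ]
```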
What field is the article from? | Title: Symbol-LLM: Towards Foundational Symbol-centric Interface For Large Language Models
Abstract: Large Language Models (LLMs) have greatly propelled the progress in natural
language (NL)-centric tasks based on the NL interface. However, the NL form is not
enough for world knowledge. Current works focus on this question by injecting
specific symbolic knowledge into LLMs, overlooking two critical challenges: the
interrelations between various symbols and the balance between symbolic-centric
and NL-centric capabilities. In this work, we tackle these challenges from both
a data and framework perspective and introduce Symbol-LLM series models. First,
we collect 34 symbolic tasks, covering ~20 different forms, which are unified
to capture symbol interrelations. Then, a two-stage tuning framework succeeds
in injecting symbolic knowledge without loss of the generality ability.
Extensive experiments on both symbol- and NL-centric tasks demonstrate the
balanced and superior performances of Symbol-LLM series models. | Computational Linguistics |
What field is the article from? | Title: Graph Deep Learning for Time Series Forecasting
Abstract: Graph-based deep learning methods have become popular tools to process
collections of correlated time series. Differently from traditional
multivariate forecasting methods, neural graph-based predictors take advantage
of pairwise relationships by conditioning forecasts on a (possibly dynamic)
graph spanning the time series collection. The conditioning can take the form
of an architectural inductive bias on the neural forecasting architecture,
resulting in a family of deep learning models called spatiotemporal graph
neural networks. Such relational inductive biases enable the training of global
forecasting models on large time-series collections, while at the same time
localizing predictions w.r.t. each element in the set (i.e., graph nodes) by
accounting for local correlations among them (i.e., graph edges). Indeed,
recent theoretical and practical advances in graph neural networks and deep
learning for time series forecasting make the adoption of such processing
frameworks appealing and timely. However, most of the studies in the literature
focus on proposing variations of existing neural architectures by taking
advantage of modern deep learning practices, while foundational and
methodological aspects have not been subject to systematic investigation. To
fill the gap, this paper aims to introduce a comprehensive methodological
framework that formalizes the forecasting problem and provides design
principles for graph-based predictive models and methods to assess their
performance. At the same time, together with an overview of the field, we
provide design guidelines, recommendations, and best practices, as well as an
in-depth discussion of open challenges and future research directions. | Machine Learning |
What field is the article from? | Title: A Multilingual Virtual Guide for Self-Attachment Technique
Abstract: In this work, we propose a computational framework that leverages existing
out-of-language data to create a conversational agent for the delivery of
Self-Attachment Technique (SAT) in Mandarin. Our framework does not require
large-scale human translations, yet it achieves a comparable performance whilst
also maintaining safety and reliability. We propose two different methods of
augmenting available response data through empathetic rewriting. We evaluate
our chatbot against a previous, English-only SAT chatbot through non-clinical
human trials (N=42), each lasting five days, and quantitatively show that we
are able to attain a comparable level of performance to the English SAT
chatbot. We provide qualitative analysis on the limitations of our study and
suggestions with the aim of guiding future improvements. | Computational Linguistics |
What field is the article from? | Title: NeuroWrite: Predictive Handwritten Digit Classification using Deep Neural Networks
Abstract: The rapid evolution of deep neural networks has revolutionized the field of
machine learning, enabling remarkable advancements in various domains. In this
article, we introduce NeuroWrite, a unique method for predicting the
categorization of handwritten digits using deep neural networks. Our model
exhibits outstanding accuracy in identifying and categorising handwritten
digits by utilising the strength of convolutional neural networks (CNNs) and
recurrent neural networks (RNNs). In this article, we give a thorough
examination of the data preparation methods, network design, and training
methods used in NeuroWrite. By implementing state-of-the-art techniques, we
showcase how NeuroWrite can achieve high classification accuracy and robust
generalization on handwritten digit datasets, such as MNIST. Furthermore, we
explore the model's potential for real-world applications, including digit
recognition in digitized documents, signature verification, and automated
postal code recognition. NeuroWrite is a useful tool for computer vision and
pattern recognition because of its performance and adaptability.The
architecture, training procedure, and evaluation metrics of NeuroWrite are
covered in detail in this study, illustrating how it can improve a number of
applications that call for handwritten digit classification. The outcomes show
that NeuroWrite is a promising method for raising the bar for deep neural
network-based handwritten digit recognition. | Computer Vision |
What field is the article from? | Title: Investigating Relative Performance of Transfer and Meta Learning
Abstract: Over the past decade, the field of machine learning has experienced
remarkable advancements. While image recognition systems have achieved
impressive levels of accuracy, they continue to rely on extensive training
datasets. Additionally, a significant challenge has emerged in the form of poor
out-of-distribution performance, which necessitates retraining neural networks
when they encounter conditions that deviate from their training data. This
limitation has notably contributed to the slow progress in self-driving car
technology. These pressing issues have sparked considerable interest in methods
that enable neural networks to learn effectively from limited data. This paper
presents the outcomes of an extensive investigation designed to compare two
distinct approaches, transfer learning and meta learning, as potential
solutions to this problem. The overarching objective was to establish a robust
criterion for selecting the most suitable method in diverse machine learning
scenarios. Building upon prior research, I expanded the comparative analysis by
introducing a new meta learning method into the investigation. Subsequently, I
assessed whether the findings remained consistent under varying conditions.
Finally, I delved into the impact of altering the size of the training dataset
on the relative performance of these methods. This comprehensive exploration
has yielded insights into the conditions favoring each approach, thereby
facilitating the development of a criterion for selecting the most appropriate
method in any given situation. | Machine Learning |
What field is the article from? | Title: Diffusion Cocktail: Fused Generation from Diffusion Models
Abstract: Diffusion models excel at generating high-quality images and are easy to
extend, making them extremely popular among active users who have created an
extensive collection of diffusion models with various styles by fine-tuning
base models such as Stable Diffusion. Recent work has focused on uncovering
semantic and visual information encoded in various components of a diffusion
model, enabling better generation quality and more fine-grained control.
However, those methods target improving a single model and overlook the vastly
available collection of fine-tuned diffusion models. In this work, we study the
combinations of diffusion models. We propose Diffusion Cocktail (Ditail), a
training-free method that can accurately transfer content information between
two diffusion models. This allows us to perform diverse generations using a set
of diffusion models, resulting in novel images that are unlikely to be obtained
by a single model alone. We also explore utilizing Ditail for style transfer,
with the target style set by a diffusion model instead of an image. Ditail
offers a more detailed manipulation of the diffusion generation, thereby
enabling the vast community to integrate various styles and contents seamlessly
and generate any content of any style. | Computer Vision |
What field is the article from? | Title: Utilizing Speech Emotion Recognition and Recommender Systems for Negative Emotion Handling in Therapy Chatbots
Abstract: Emotional well-being significantly influences mental health and overall
quality of life. As therapy chatbots become increasingly prevalent, their
ability to comprehend and respond empathetically to users' emotions remains
limited. This paper addresses this limitation by proposing an approach to
enhance therapy chatbots with auditory perception, enabling them to understand
users' feelings and provide human-like empathy. The proposed method
incorporates speech emotion recognition (SER) techniques using Convolutional
Neural Network (CNN) models and the ShEMO dataset to accurately detect and
classify negative emotions, including anger, fear, and sadness. The SER model
achieves a validation accuracy of 88%, demonstrating its effectiveness in
recognizing emotional states from speech signals. Furthermore, a recommender
system is developed, leveraging the SER model's output to generate personalized
recommendations for managing negative emotions, for which a new bilingual
dataset was generated as well since there is no such dataset available for this
task. The recommender model achieves an accuracy of 98% by employing a
combination of global vectors for word representation (GloVe) and LSTM models.
To provide a more immersive and empathetic user experience, a text-to-speech
model called GlowTTS is integrated, enabling the therapy chatbot to audibly
communicate the generated recommendations to users in both English and Persian.
The proposed approach offers promising potential to enhance therapy chatbots by
providing them with the ability to recognize and respond to users' emotions,
ultimately improving the delivery of mental health support for both English and
Persian-speaking users. | Computational Linguistics |
What field is the article from? | Title: Uplift Modeling based on Graph Neural Network Combined with Causal Knowledge
Abstract: Uplift modeling is a fundamental component of marketing effect modeling,
which is commonly employed to evaluate the effects of treatments on outcomes.
Through uplift modeling, we can identify the treatment with the greatest
benefit. On the other side, we can identify clients who are likely to make
favorable decisions in response to a certain treatment. In the past, uplift
modeling approaches relied heavily on the difference-in-difference (DID)
architecture, paired with a machine learning model as the estimation learner,
while neglecting the link and confidential information between features. We
proposed a framework based on graph neural networks that combine causal
knowledge with an estimate of uplift value. Firstly, we presented a causal
representation technique based on CATE (conditional average treatment effect)
estimation and adjacency matrix structure learning. Secondly, we suggested a
more scalable uplift modeling framework based on graph convolution networks for
combining causal knowledge. Our findings demonstrate that this method works
effectively for predicting uplift values, with small errors in typical
simulated data, and its effectiveness has been verified in actual industry
marketing data. | Machine Learning |
What field is the article from? | Title: Flexible Model Interpretability through Natural Language Model Editing
Abstract: Model interpretability and model editing are crucial goals in the age of
large language models. Interestingly, there exists a link between these two
goals: if a method is able to systematically edit model behavior with regard to
a human concept of interest, this editor method can help make internal
representations more interpretable by pointing towards relevant representations
and systematically manipulating them. | Computational Linguistics |
What field is the article from? | Title: Eliciting Latent Knowledge from Quirky Language Models
Abstract: Eliciting Latent Knowledge (ELK) aims to find patterns in a neural network's
activations which robustly track the true state of the world, even when the
network's overt output is false or misleading. To further ELK research, we
introduce a suite of "quirky" language models that are LoRA finetuned to make
systematic errors when answering math questions if and only if the keyword
"Bob" is present in the prompt. We demonstrate that simple probing methods can
elicit the model's latent knowledge of the correct answer in these contexts,
even for problems harder than those the probe was trained on. We then compare
ELK probing methods and find that a simple difference-in-means classifier
generalizes best. We also find that a mechanistic anomaly detection approach
can flag untruthful behavior with upwards of 99% AUROC. Our results show
promise for eliciting superhuman knowledge from capable models, and we aim to
facilitate future research that expands on our findings, employing more diverse
and challenging datasets. | Machine Learning |
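The abstract above finds that a simple difference-in-means classifier over activations generalizes best. A tiny sketch of that probe on synthetic activation vectors follows; extracting activations from the quirky models themselves is assumed away.

```python
import numpy as np

class DiffInMeansProbe:
    """Score = projection of an activation onto (mean_true - mean_false)."""

    def fit(self, activations, labels):
        X = np.asarray(activations, dtype=float)
        y = np.asarray(labels, dtype=bool)
        self.direction_ = X[y].mean(axis=0) - X[~y].mean(axis=0)
        # Threshold halfway between the two class means along the direction.
        self.bias_ = -0.5 * (X[y].mean(axis=0) + X[~y].mean(axis=0)) @ self.direction_
        return self

    def predict(self, activations):
        return (np.asarray(activations) @ self.direction_ + self.bias_) > 0

# Synthetic "activations": two Gaussian clusters standing in for true/false statements.
rng = np.random.default_rng(0)
true_acts = rng.normal(loc=+0.5, size=(200, 32))
false_acts = rng.normal(loc=-0.5, size=(200, 32))
X = np.vstack([true_acts, false_acts])
y = np.array([True] * 200 + [False] * 200)
probe = DiffInMeansProbe().fit(X, y)
print("train accuracy:", (probe.predict(X) == y).mean())
```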
What field is the article from? | Title: Magicoder: Source Code Is All You Need
Abstract: We introduce Magicoder, a series of fully open-source (code, weights, and
data) Large Language Models (LLMs) for code that significantly closes the gap
with top code models while having no more than 7B parameters. Magicoder models
are trained on 75K synthetic instruction data using OSS-Instruct, a novel
approach to enlightening LLMs with open-source code snippets to generate
high-quality instruction data for code. Our main motivation is to mitigate the
inherent bias of the synthetic data generated by LLMs by empowering them with a
wealth of open-source references for the production of more diverse, realistic,
and controllable data. The orthogonality of OSS-Instruct and other data
generation methods like Evol-Instruct further enables us to build an enhanced
MagicoderS. Both Magicoder and MagicoderS substantially outperform
state-of-the-art code models with similar or even larger sizes on a wide range
of coding benchmarks, including Python text-to-code generation, multilingual
coding, and data-science program completion. Notably, MagicoderS-CL-7B based on
CodeLlama even surpasses the prominent ChatGPT on HumanEval+ (66.5 vs. 65.9 in
pass@1). Overall, OSS-Instruct opens a new direction for low-bias and
high-quality instruction tuning using abundant open-source references. | Computational Linguistics |
What field is the article from? | Title: Low-power, Continuous Remote Behavioral Localization with Event Cameras
Abstract: Researchers in natural science need reliable methods for quantifying animal
behavior. Recently, numerous computer vision methods emerged to automate the
process. However, observing wild species at remote locations remains a
challenging task due to difficult lighting conditions and constraints on power
supply and data storage. Event cameras offer unique advantages for
battery-dependent remote monitoring due to their low power consumption and high
dynamic range capabilities. We use this novel sensor to quantify a behavior in
Chinstrap penguins called ecstatic display. We formulate the problem as a
temporal action detection task, determining the start and end times of the
behavior. For this purpose, we recorded a colony of breeding penguins in
Antarctica during several weeks and labeled event data on 16 nests. The
developed method consists of a generator of candidate time intervals
(proposals) and a classifier of the actions within them. The experiments show
that the event cameras' natural response to motion is effective for continuous
behavior monitoring and detection, reaching a mean average precision (mAP) of
58% (which increases to 63% in good weather conditions). The results also
demonstrate the robustness against various lighting conditions contained in the
challenging dataset. The low-power capabilities of the event camera allow it to
record three times longer than with a conventional camera. This work pioneers
the use of event cameras for remote wildlife observation, opening new
interdisciplinary opportunities. https://tub-rip.github.io/eventpenguins/ | Computer Vision |
What field is the article from? | Title: Efficiently Quantifying Individual Agent Importance in Cooperative MARL
Abstract: Measuring the contribution of individual agents is challenging in cooperative
multi-agent reinforcement learning (MARL). In cooperative MARL, team
performance is typically inferred from a single shared global reward. Arguably,
among the best current approaches to effectively measure individual agent
contributions is to use Shapley values. However, calculating these values is
expensive as the computational complexity grows exponentially with respect to
the number of agents. In this paper, we adapt difference rewards into an
efficient method for quantifying the contribution of individual agents,
referred to as Agent Importance, offering a linear computational complexity
relative to the number of agents. We show empirically that the computed values
are strongly correlated with the true Shapley values, as well as the true
underlying individual agent rewards, used as the ground truth in environments
where these are available. We demonstrate how Agent Importance can be used to
help study MARL systems by diagnosing algorithmic failures discovered in prior
MARL benchmarking work. Our analysis illustrates Agent Importance as a valuable
explainability component for future MARL benchmarks. | Artificial Intelligence |
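The abstract above adapts difference rewards into an Agent Importance measure whose cost is linear in the number of agents. A schematic sketch of a difference-reward computation follows; the environment interface, the default replacement action, and the ability to re-evaluate the global reward are assumptions made for illustration.

```python
def agent_importance(global_reward_fn, joint_action, default_action=None):
    """Difference rewards: importance_i = G(a) - G(a with agent i's action replaced).

    global_reward_fn: callable mapping a joint action (dict agent -> action) to a reward
    joint_action:     the joint action actually taken
    default_action:   counterfactual replacement (e.g. a no-op), assumed here
    """
    baseline = global_reward_fn(joint_action)
    importances = {}
    for agent, action in joint_action.items():     # one extra evaluation per agent
        counterfactual = dict(joint_action)
        counterfactual[agent] = default_action
        importances[agent] = baseline - global_reward_fn(counterfactual)
    return importances

# Toy team task: reward counts agents "a" and "b" picking action 1; "c" is redundant.
def team_reward(actions):
    return float(actions["a"] == 1) + float(actions["b"] == 1)

print(agent_importance(team_reward, {"a": 1, "b": 1, "c": 1}, default_action=0))
# -> {'a': 1.0, 'b': 1.0, 'c': 0.0}
```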
What field is the article from? | Title: Scalable Decentralized Cooperative Platoon using Multi-Agent Deep Reinforcement Learning
Abstract: Cooperative autonomous driving plays a pivotal role in improving road
capacity and safety within intelligent transportation systems, particularly
through the deployment of autonomous vehicles on urban streets. By enabling
vehicle-to-vehicle communication, these systems expand the vehicles'
environmental awareness, allowing them to detect hidden obstacles and thereby
enhancing safety and reducing crash rates compared to human drivers who rely
solely on visual perception. A key application of this technology is vehicle
platooning, where connected vehicles drive in a coordinated formation. This
paper introduces a vehicle platooning approach designed to enhance traffic flow
and safety. Developed using deep reinforcement learning in the Unity 3D game
engine, known for its advanced physics, this approach aims for a high-fidelity
physical simulation that closely mirrors real-world conditions. The proposed
platooning model focuses on scalability, decentralization, and fostering
positive cooperation through the introduced predecessor-follower "sharing and
caring" communication framework. The study demonstrates how these elements
collectively enhance autonomous driving performance and robustness, both for
individual vehicles and for the platoon as a whole, in an urban setting. This
results in improved road safety and reduced traffic congestion. | Robotics |
What field is the article from? | Title: Bayes-enhanced Multi-view Attention Networks for Robust POI Recommendation
Abstract: POI recommendation is practically important to facilitate various
Location-Based Social Network services, and has attracted rising research
attention recently. Existing works generally assume the available POI check-ins
reported by users are the ground-truth depiction of user behaviors. However, in
real application scenarios, the check-in data can be rather unreliable due to
both subjective and objective causes including positioning error and user
privacy concerns, leading to significant negative impacts on the performance of
the POI recommendation. To this end, we investigate a novel problem of robust
POI recommendation by considering the uncertainty factors of the user
check-ins, and propose a Bayes-enhanced Multi-view Attention Network (BayMAN).
Specifically, we construct the personal POI transition graph, the semantic-based
POI graph, and the distance-based POI graph to comprehensively model the
dependencies among the POIs. As the personal POI transition graph is usually
sparse and sensitive to noise, we design a Bayes-enhanced spatial dependency
learning module for data augmentation from the local view. A Bayesian posterior
guided graph augmentation approach is adopted to generate a new graph with
collaborative signals to increase the data diversity. Then both the original
and the augmented graphs are used for POI representation learning to counteract
the data uncertainty issue. Next, the POI representations of the three view
graphs are input into the proposed multi-view attention-based user preference
learning module. By incorporating the semantic and distance correlations of
POIs, the user preference can be effectively refined and finally robust
recommendation results are achieved. The results of extensive experiments show
that BayMAN significantly outperforms the state-of-the-art methods in POI
recommendation when the available check-ins are incomplete and noisy. | Information Retrieval |
What field is the article from? | Title: MTS-DVGAN: Anomaly Detection in Cyber-Physical Systems using a Dual Variational Generative Adversarial Network
Abstract: Deep generative models are promising in detecting novel cyber-physical
attacks, mitigating the vulnerability of Cyber-physical systems (CPSs) without
relying on labeled information. Nonetheless, these generative models face
challenges in identifying attack behaviors that closely resemble normal data,
or deviate from the normal data distribution but are in close proximity to the
manifold of the normal cluster in latent space. To tackle this problem, this
article proposes a novel unsupervised dual variational generative adversarial
model named MTS-DVGAN to perform anomaly detection in multivariate time series
data for CPS security. The central concept is to enhance the model's
discriminative capability by widening the distinction between reconstructed
abnormal samples and their normal counterparts. Specifically, we propose an
augmented module by imposing contrastive constraints on the reconstruction
process to obtain a more compact embedding. Then, by exploiting the
distribution property and modeling the normal patterns of multivariate time
series, a variational autoencoder is introduced to force the generative
adversarial network (GAN) to generate diverse samples. Furthermore, two
augmented loss functions are designed to extract essential characteristics in a
self-supervised manner through mutual guidance between the augmented samples
and original samples. Finally, a specific feature center loss is introduced for
the generator network to enhance its stability. Empirical experiments are
conducted on three public datasets, namely SWAT, WADI, and NSL_KDD. Compared
with state-of-the-art methods, the evaluation results show that the
proposed MTS-DVGAN is more stable and can achieve consistent performance
improvement. | Cryptography and Security |
What field is the article from? | Title: MMDesign: Multi-Modality Transfer Learning for Generative Protein Design
Abstract: Protein design involves generating protein sequences based on their
corresponding protein backbones. While deep generative models show promise for
learning protein design directly from data, the lack of publicly available
structure-sequence pairings limits their generalization capabilities. Previous
efforts of generative protein design have focused on architectural improvements
and pseudo-data augmentation to overcome this bottleneck. To further address
this challenge, we propose a novel protein design paradigm called MMDesign,
which leverages multi-modality transfer learning. To our knowledge, MMDesign is
the first framework that combines a pretrained structural module with a
pretrained contextual module, using an auto-encoder (AE) based language model
to incorporate prior semantic knowledge of protein sequences. We also introduce
a cross-layer cross-modal alignment algorithm to enable the structural module
to learn long-term temporal information and ensure consistency between
structural and contextual modalities. Experimental results, obtained by training
only on the small CATH dataset, demonstrate that our MMDesign framework consistently
outperforms other baselines on various public test sets. To further assess the
biological plausibility of the generated protein sequences and data
distribution, we present systematic quantitative analysis techniques that
provide interpretability and reveal more about the laws of protein design. | Artificial Intelligence |
What field is the article from? | Title: Evaluating the Efficacy of Hybrid Deep Learning Models in Distinguishing AI-Generated Text
Abstract: My research investigates the use of cutting-edge hybrid deep learning models
to accurately differentiate between AI-generated text and human writing. I
applied a robust methodology, utilising a carefully selected dataset comprising
AI and human texts from various sources, each tagged with instructions.
Advanced natural language processing techniques facilitated the analysis of
textual features. By combining sophisticated neural networks, the custom model
was able to detect nuanced differences between AI and human content. | Computational Linguistics |
What field is the article from? | Title: Leveraging AI-derived Data for Carbon Accounting: Information Extraction from Alternative Sources
Abstract: Carbon accounting is a fundamental building block in our global path to
emissions reduction and decarbonization, yet many challenges exist in achieving
reliable and trusted carbon accounting measures. We motivate that carbon
accounting not only needs to be more data-driven, but also more
methodologically sound. We discuss the need for alternative, more diverse data
sources that can play a significant role on our path to trusted carbon
accounting procedures and elaborate on not only why, but how Artificial
Intelligence (AI) in general and Natural Language Processing (NLP) in
particular can unlock reasonable access to a treasure trove of alternative data
sets in light of the recent advances in the field that better enable the
utilization of unstructured data in this process. We present a case study of
the recent developments on real-world data via an NLP-powered analysis using
OpenAI's GPT API on financial and shipping data. We conclude the paper with a
discussion on how these methods and approaches can be integrated into a broader
framework for AI-enabled integrative carbon accounting. | Computational Linguistics |
What field is the article from? | Title: Predict-Then-Optimize by Proxy: Learning Joint Models of Prediction and Optimization
Abstract: Many real-world decision processes are modeled by optimization problems whose
defining parameters are unknown and must be inferred from observable data. The
Predict-Then-Optimize framework uses machine learning models to predict unknown
parameters of an optimization problem from features before solving. Recent
works show that decision quality can be improved in this setting by solving and
differentiating the optimization problem in the training loop, enabling
end-to-end training with loss functions defined directly on the resulting
decisions. However, this approach can be inefficient and requires handcrafted,
problem-specific rules for backpropagation through the optimization step. This
paper proposes an alternative method, in which optimal solutions are learned
directly from the observable features by predictive models. The approach is
generic, and based on an adaptation of the Learning-to-Optimize paradigm, from
which a rich variety of existing techniques can be employed. Experimental
evaluations show the ability of several Learning-to-Optimize methods to provide
efficient, accurate, and flexible solutions to an array of challenging
Predict-Then-Optimize problems. | Machine Learning |
What field is the article from? | Title: Filtered Semi-Markov CRF
Abstract: Semi-Markov CRF has been proposed as an alternative to the traditional Linear
Chain CRF for text segmentation tasks such as Named Entity Recognition (NER).
Unlike CRF, which treats text segmentation as token-level prediction, Semi-CRF
considers segments as the basic unit, making it more expressive. However,
Semi-CRF suffers from two major drawbacks: (1) quadratic complexity over
sequence length, as it operates on every span of the input sequence, and (2)
inferior performance compared to CRF for sequence labeling tasks like NER. In
this paper, we introduce Filtered Semi-Markov CRF, a variant of Semi-CRF that
addresses these issues by incorporating a filtering step to eliminate
irrelevant segments, reducing complexity and search space. Our approach is
evaluated on several NER benchmarks, where it outperforms both CRF and Semi-CRF
while being significantly faster. The implementation of our method is available
on \href{https://github.com/urchade/Filtered-Semi-Markov-CRF}{Github}. | Computational Linguistics |
What field is the article from? | Title: On the Difficulty of Defending Contrastive Learning against Backdoor Attacks
Abstract: Recent studies have shown that contrastive learning, like supervised
learning, is highly vulnerable to backdoor attacks wherein malicious functions
are injected into target models, only to be activated by specific triggers.
However, thus far it remains under-explored how contrastive backdoor attacks
fundamentally differ from their supervised counterparts, which impedes the
development of effective defenses against the emerging threat.
This work represents a solid step toward answering this critical question.
Specifically, we define TRL, a unified framework that encompasses both
supervised and contrastive backdoor attacks. Through the lens of TRL, we
uncover that the two types of attacks operate through distinctive mechanisms:
in supervised attacks, the learning of benign and backdoor tasks tends to occur
independently, while in contrastive attacks, the two tasks are deeply
intertwined both in their representations and throughout their learning
processes. This distinction leads to the disparate learning dynamics and
feature distributions of supervised and contrastive attacks. More importantly,
we reveal that the specificities of contrastive backdoor attacks entail
important implications from a defense perspective: existing defenses for
supervised attacks are often inadequate and not easily retrofitted to
contrastive attacks. We also explore several alternative defenses and discuss
their potential challenges. Our findings highlight the need for defenses
tailored to the specificities of contrastive backdoor attacks, pointing to
promising directions for future research. | Cryptography and Security |
What field is the article from? | Title: The Analysis and Extraction of Structure from Organizational Charts
Abstract: Organizational charts, also known as org charts, are critical representations
of an organization's structure and the hierarchical relationships between its
components and positions. However, manually extracting information from org
charts can be error-prone and time-consuming. To solve this, we present an
automated and end-to-end approach that uses computer vision, deep learning, and
natural language processing techniques. Additionally, we propose a metric to
evaluate the completeness and hierarchical accuracy of the extracted
information. This approach has the potential to improve organizational
restructuring and resource utilization by providing a clear and concise
representation of the organizational structure. Our study lays a foundation for
further research on the topic of hierarchical chart analysis. | Computer Vision |
What field is the article from? | Title: Contact Energy Based Hindsight Experience Prioritization
Abstract: Multi-goal robot manipulation tasks with sparse rewards are difficult for
reinforcement learning (RL) algorithms due to the inefficiency in collecting
successful experiences. Recent algorithms such as Hindsight Experience Replay
(HER) expedite learning by taking advantage of failed trajectories and
replacing the desired goal with one of the achieved states so that any failed
trajectory can be utilized as a contribution to learning. However, HER
uniformly chooses failed trajectories, without taking into account which ones
might be the most valuable for learning. In this paper, we address this problem
and propose a novel approach, Contact Energy Based Prioritization (CEBP), to
select the samples from the replay buffer based on rich information due to
contact, leveraging the touch sensors in the gripper of the robot and object
displacement. Our prioritization scheme favors sampling of contact-rich
experiences, which are arguably the ones providing the largest amount of
information. We evaluate our proposed approach on various sparse reward robotic
tasks and compare it with state-of-the-art methods. We show that our
method surpasses or performs on par with those methods on robot manipulation
tasks. Finally, we deploy the trained policy from our method to a real Franka
robot for a pick-and-place task. We observe that the robot can solve the task
successfully. The videos and code are publicly available at:
https://erdiphd.github.io/HER_force | Robotics |
What field is the article from? | Title: Large Language Models Illuminate a Progressive Pathway to Artificial Healthcare Assistant: A Review
Abstract: With the rapid development of artificial intelligence, large language models
(LLMs) have shown promising capabilities in mimicking human-level language
comprehension and reasoning. This has sparked significant interest in applying
LLMs to enhance various aspects of healthcare, ranging from medical education
to clinical decision support. However, medicine involves multifaceted data
modalities and nuanced reasoning skills, presenting challenges for integrating
LLMs. This paper provides a comprehensive review on the applications and
implications of LLMs in medicine. It begins by examining the fundamental
applications of general-purpose and specialized LLMs, demonstrating their
utilities in knowledge retrieval, research support, clinical workflow
automation, and diagnostic assistance. Recognizing the inherent multimodality
of medicine, the review then focuses on multimodal LLMs, investigating their
ability to process diverse data types like medical imaging and EHRs to augment
diagnostic accuracy. To address LLMs' limitations regarding personalization and
complex clinical reasoning, the paper explores the emerging development of
LLM-powered autonomous agents for healthcare. Furthermore, it summarizes the
evaluation methodologies for assessing LLMs' reliability and safety in medical
contexts. Overall, this review offers an extensive analysis on the
transformative potential of LLMs in modern medicine. It also highlights the
pivotal need for continuous optimizations and ethical oversight before these
models can be effectively integrated into clinical practice. Visit
https://github.com/mingze-yuan/Awesome-LLM-Healthcare for an accompanying
GitHub repository containing the latest papers. | Computational Linguistics |
What field is the article from? | Title: Unlocking the Potential of Federated Learning: The Symphony of Dataset Distillation via Deep Generative Latents
Abstract: Data heterogeneity presents significant challenges for federated learning
(FL). Recently, dataset distillation techniques have been introduced, and
performed at the client level, to attempt to mitigate some of these challenges.
In this paper, we propose a highly efficient FL dataset distillation framework
on the server side, significantly reducing both the computational and
communication demands on local devices while enhancing the clients' privacy.
Unlike previous strategies that perform dataset distillation on local devices
and upload synthetic data to the server, our technique enables the server to
leverage prior knowledge from pre-trained deep generative models to synthesize
essential data representations from a heterogeneous model architecture. This
process allows local devices to train smaller surrogate models while enabling
the training of a larger global model on the server, effectively minimizing
resource utilization. We substantiate our claim with a theoretical analysis,
demonstrating the asymptotic resemblance of the process to the hypothetical
ideal of completely centralized training on a heterogeneous dataset. Empirical
evidence from our comprehensive experiments indicates our method's superiority,
delivering an accuracy enhancement of up to 40% over non-dataset-distillation
techniques in highly heterogeneous FL contexts, and surpassing existing
dataset-distillation methods by 18%. In addition to the high accuracy, our
framework converges faster than the baselines because, rather than training on
several sets of heterogeneous data distributions, the server trains on a single
multi-modal distribution. Our code is available at
https://github.com/FedDG23/FedDG-main.git | Machine Learning |
What field is the article from? | Title: Identification of Knowledge Neurons in Protein Language Models
Abstract: Neural language models have become powerful tools for learning complex
representations of entities in natural language processing tasks. However,
their interpretability remains a significant challenge, particularly in domains
like computational biology where trust in model predictions is crucial. In this
work, we aim to enhance the interpretability of protein language models,
specifically the state-of-the-art ESM model, by identifying and characterizing
knowledge neurons - components that express understanding of key information.
After fine-tuning the ESM model for the task of enzyme sequence classification,
we compare two knowledge neuron selection methods that preserve a subset of
neurons from the original model. The two methods, activation-based and
integrated gradient-based selection, consistently outperform a random baseline.
In particular, these methods show that there is a high density of knowledge
neurons in the key vector prediction networks of self-attention modules. Given
that key vectors specialize in understanding different features of input
sequences, these knowledge neurons could capture knowledge of different enzyme
sequence motifs. In the future, the types of knowledge captured by each neuron
could be characterized. | Machine Learning |
What field is the article from? | Title: Advances in ACL2 Proof Debugging Tools
Abstract: The experience of an ACL2 user generally includes many failed proof attempts.
A key to successful use of the ACL2 prover is the effective use of tools to
debug those failures. We focus on changes made after ACL2 Version 8.5: the
improved break-rewrite utility and the new utility, with-brr-data. | Artificial Intelligence |
What field is the article from? | Title: Supported Trust Region Optimization for Offline Reinforcement Learning
Abstract: Offline reinforcement learning suffers from the out-of-distribution issue and
extrapolation error. Most policy constraint methods regularize the density of
the trained policy towards the behavior policy, which is too restrictive in
most cases. We propose Supported Trust Region optimization (STR) which performs
trust region policy optimization with the policy constrained within the support
of the behavior policy, enjoying the less restrictive support constraint. We
show that, when assuming no approximation and sampling error, STR guarantees
strict policy improvement until convergence to the optimal support-constrained
policy in the dataset. Further with both errors incorporated, STR still
guarantees safe policy improvement for each step. Empirical results validate
the theory of STR and demonstrate its state-of-the-art performance on MuJoCo
locomotion domains and much more challenging AntMaze domains. | Machine Learning |
What field is the article from? | Title: Integrating Language Models into Direct Speech Translation: An Inference-Time Solution to Control Gender Inflection
Abstract: When translating words referring to the speaker, speech translation (ST)
systems should not resort to default masculine generics nor rely on potentially
misleading vocal traits. Rather, they should assign gender according to the
speakers' preference. The existing solutions to do so, though effective, are
hardly feasible in practice as they involve dedicated model re-training on
gender-labeled ST data. To overcome these limitations, we propose the first
inference-time solution to control speaker-related gender inflections in ST.
Our approach partially replaces the (biased) internal language model (LM)
implicitly learned by the ST decoder with gender-specific external LMs.
Experiments on en->es/fr/it show that our solution outperforms the base models
and the best training-time mitigation strategy by up to 31.0 and 1.6 points in
gender accuracy, respectively, for feminine forms. The gains are even larger
(up to 32.0 and 3.4) in the challenging condition where speakers' vocal traits
conflict with their gender. | Computational Linguistics |
What field is the article from? | Title: AI-based Wildfire Prevention, Detection and Suppression System
Abstract: Wildfires pose a serious threat to the environment of the world. The global
wildfire season length has increased by 19% and severe wildfires have besieged
nations around the world. Every year, forests are burned by wildfires, causing
vast amounts of carbon dioxide to be released into the atmosphere, contributing
to climate change. There is a need for a system which prevents, detects, and
suppresses wildfires. The AI based Wildfire Prevention, Detection and
Suppression System (WPDSS) is a novel, fully automated, end to end, AI based
solution to effectively predict hotspots and detect wildfires, deploy drones to
spray fire retardant, preventing and suppressing wildfires. WPDSS consists of
four steps. 1. Preprocessing: WPDSS loads real time satellite data from NASA
and meteorological data from NOAA of vegetation, temperature, precipitation,
wind, soil moisture, and land cover for prevention. For detection, it loads the
real time data of Land Cover, Humidity, Temperature, Vegetation, Burned Area
Index, Ozone, and CO2. It uses masking to eliminate non-hotspots and
non-wildfires, such as water bodies and rainfall. 2. Learning: The AI model
consists of a random forest classifier, which is trained using a labeled
dataset of hotspots, wildfires, non-hotspots, and non-wildfires. 3.
Identification of hotspots and wildfires: WPDSS runs the real time data through
the model to automatically identify hotspots and wildfires. 4. Drone
deployment: The drone flies to the identified hotspot or wildfire location.
WPDSS attained a 98.6% accuracy in identifying hotspots and a 98.7% accuracy in
detecting wildfires. WPDSS will reduce the impacts of climate change, protect
ecosystems and biodiversity, avert huge economic losses, and save human lives.
The WPDSS developed here can be applied to any location globally to prevent
and suppress wildfires, mitigating climate change. | Artificial Intelligence |
What field is the article from? | Title: Assessing AI Chatbots Performance in Comprehensive Standardized Test Preparation; A Case Study with GRE
Abstract: This research paper presents a comprehensive evaluation of the performance of
three artificial intelligence chatbots: Bing, ChatGPT, and GPT-4, in
addressing standardized test questions. The Graduate Record Examination, known as
GRE, serves as a case study in this paper, encompassing both quantitative
reasoning and verbal skills. A total of 137 quantitative reasoning questions
featuring diverse styles, and 157 verbal questions categorized into varying
levels of difficulty (easy, medium, and hard), were administered to assess the
chatbots' capabilities. This paper provides a detailed examination of the
results and their implications for the utilization of artificial intelligence
in standardized test preparation by presenting the performance of each chatbot
across various skills and styles tested in the exam. Additionally, this paper
explores the proficiency of artificial intelligence in addressing image-based
questions and illustrates the uncertainty level of each chatbot. The results
reveal varying degrees of success across the chatbots, demonstrating the
influence of model sophistication and training data. GPT-4 emerged as the most
proficient, especially in complex language understanding tasks, highlighting
the evolution of artificial intelligence in language comprehension and its
ability to pass the exam with a high score. | Computational Linguistics |