instruction
stringclasses 1
value | input
stringlengths 260
2.07k
| output
stringclasses 10
values |
---|---|---|
What field is the article from? | Title: Latent Lab: Large Language Models for Knowledge Exploration
Abstract: This paper investigates the potential of AI models, particularly large
language models (LLMs), to support knowledge exploration and augment human
creativity during ideation. We present "Latent Lab" an interactive tool for
discovering connections among MIT Media Lab research projects, emphasizing
"exploration" over search. The work offers insights into collaborative AI
systems by addressing the challenges of organizing, searching, and synthesizing
content. In a user study, the tool's success was evaluated based on its ability
to introduce users to an unfamiliar knowledge base, ultimately setting the
groundwork for the ongoing advancement of human-AI knowledge exploration
systems. | Artificial Intelligence |
What field is the article from? | Title: Taming Gradient Variance in Federated Learning with Networked Control Variates
Abstract: Federated learning, a decentralized approach to machine learning, faces
significant challenges such as extensive communication overheads, slow
convergence, and unstable improvements. These challenges primarily stem from
the gradient variance due to heterogeneous client data distributions. To
address this, we introduce a novel Networked Control Variates (FedNCV)
framework for Federated Learning. We adopt the REINFORCE Leave-One-Out (RLOO)
as a fundamental control variate unit in the FedNCV framework, implemented at
both client and server levels. At the client level, the RLOO control variate is
employed to optimize local gradient updates, mitigating the variance introduced
by data samples. Once relayed to the server, the RLOO-based estimator further
provides an unbiased and low-variance aggregated gradient, leading to robust
global updates. This dual-side application is formalized as a linear
combination of composite control variates. We provide a mathematical expression
capturing this integration of double control variates within FedNCV and present
three theoretical results with corresponding proofs. This unique dual structure
equips FedNCV to address data heterogeneity and scalability issues, thus
potentially paving the way for large-scale applications. Moreover, we tested
FedNCV on six diverse datasets under a Dirichlet distribution with {\alpha} =
0.1, and benchmarked its performance against six SOTA methods, demonstrating
its superiority. | Machine Learning |
What field is the article from? | Title: Direct Preference-Based Evolutionary Multi-Objective Optimization with Dueling Bandit
Abstract: Optimization problems find widespread use in both single-objective and
multi-objective scenarios. In practical applications, users aspire for
solutions that converge to the region of interest (ROI) along the Pareto front
(PF). While the conventional approach involves approximating a fitness function
or an objective function to reflect user preferences, this paper explores an
alternative avenue. Specifically, we aim to discover a method that sidesteps
the need for calculating the fitness function, relying solely on human
feedback. Our proposed approach entails conducting direct preference learning
facilitated by an active dueling bandit algorithm. The experimental phase is
structured into three sessions. Firstly, we assess the performance of our
active dueling bandit algorithm. Secondly, we implement our proposed method
within the context of Multi-objective Evolutionary Algorithms (MOEAs). Finally,
we deploy our method in a practical problem, specifically in protein structure
prediction (PSP). This research presents a novel interactive preference-based
MOEA framework that not only addresses the limitations of traditional
techniques but also unveils new possibilities for optimization problems. | Artificial Intelligence |
What field is the article from? | Title: MAFALDA: A Benchmark and Comprehensive Study of Fallacy Detection and Classification
Abstract: Fallacies can be used to spread disinformation, fake news, and propaganda,
underlining the importance of their detection. Automated detection and
classification of fallacies, however, remain challenging, mainly because of the
innate subjectivity of the task and the need for a comprehensive, unified
approach in existing research. Addressing these limitations, our study
introduces a novel taxonomy of fallacies that aligns and refines previous
classifications, a new annotation scheme tailored for subjective NLP tasks, and
a new evaluation method designed to handle subjectivity, adapted to precision,
recall, and F1-Score metrics. Using our annotation scheme, the paper introduces
MAFALDA (Multi-level Annotated FALlacy DAtaset), a gold standard dataset.
MAFALDA is based on examples from various previously existing fallacy datasets
under our unified taxonomy across three levels of granularity. We then evaluate
several language models under a zero-shot learning setting using MAFALDA to
assess their fallacy detection and classification capability. Our comprehensive
evaluation not only benchmarks the performance of these models but also
provides valuable insights into their strengths and limitations in addressing
fallacious reasoning. | Computational Linguistics |
What field is the article from? | Title: LSTM Network Analysis of Vehicle-Type Fatalities on Great Britain's Roads
Abstract: This study harnesses the predictive capabilities of Long Short-Term Memory
(LSTM) networks to analyse and predict road traffic accidents in Great Britain.
It addresses the challenge of traffic accident forecasting, which is paramount
for devising effective preventive measures. We utilised an extensive dataset
encompassing reported collisions, casualties, and vehicles involvements from
1926 to 2022, provided by the Department for Transport (DfT). The data
underwent stringent processing to rectify missing values and normalise
features, ensuring robust LSTM network input. | Machine Learning |
What field is the article from? | Title: The Energy Prediction Smart-Meter Dataset: Analysis of Previous Competitions and Beyond
Abstract: This paper presents the real-world smart-meter dataset and offers an analysis
of solutions derived from the Energy Prediction Technical Challenges, focusing
primarily on two key competitions: the IEEE Computational Intelligence Society
(IEEE-CIS) Technical Challenge on Energy Prediction from Smart Meter data in
2020 (named EP) and its follow-up challenge at the IEEE International
Conference on Fuzzy Systems (FUZZ-IEEE) in 2021 (named as XEP). These
competitions focus on accurate energy consumption forecasting and the
importance of interpretability in understanding the underlying factors. The
challenge aims to predict monthly and yearly estimated consumption for
households, addressing the accurate billing problem with limited historical
smart meter data. The dataset comprises 3,248 smart meters, with varying data
availability ranging from a minimum of one month to a year. This paper delves
into the challenges, solutions and analysing issues related to the provided
real-world smart meter data, developing accurate predictions at the household
level, and introducing evaluation criteria for assessing interpretability.
Additionally, this paper discusses aspects beyond the competitions:
opportunities for energy disaggregation and pattern detection applications at
the household level, significance of communicating energy-driven factors for
optimised billing, and emphasising the importance of responsible AI and data
privacy considerations. These aspects provide insights into the broader
implications and potential advancements in energy consumption prediction.
Overall, these competitions provide a dataset for residential energy research
and serve as a catalyst for exploring accurate forecasting, enhancing
interpretability, and driving progress towards the discussion of various
aspects such as energy disaggregation, demand response programs or behavioural
interventions. | Machine Learning |
What field is the article from? | Title: Linear time Evidence Accumulation Clustering with KMeans
Abstract: Among ensemble clustering methods, Evidence Accumulation Clustering is one of
the simplest technics. In this approach, a co-association (CA) matrix
representing the co-clustering frequency is built and then clustered to extract
consensus clusters. Compared to other approaches, this one is simple as there
is no need to find matches between clusters obtained from two different
partitionings. Nevertheless, this method suffers from computational issues, as
it requires to compute and store a matrix of size n x n, where n is the number
of items. Due to the quadratic cost, this approach is reserved for small
datasets. This work describes a trick which mimic the behavior of average
linkage clustering. We found a way of computing efficiently the density of a
partitioning, reducing the cost from a quadratic to linear complexity.
Additionally, we proved that the k-means maximizes naturally the density. We
performed experiments on several benchmark datasets where we compared the
k-means and the bisecting version to other state-of-the-art consensus
algorithms. The k-means results are comparable to the best state of the art in
terms of NMI while keeping the computational cost low. Additionally, the
k-means led to the best results in terms of density. These results provide
evidence that consensus clustering can be solved with simple algorithms. | Machine Learning |
What field is the article from? | Title: Navigating the Ocean of Biases: Political Bias Attribution in Language Models via Causal Structures
Abstract: The rapid advancement of Large Language Models (LLMs) has sparked intense
debate regarding their ability to perceive and interpret complex
socio-political landscapes. In this study, we undertake an exploration of
decision-making processes and inherent biases within LLMs, exemplified by
ChatGPT, specifically contextualizing our analysis within political debates. We
aim not to critique or validate LLMs' values, but rather to discern how they
interpret and adjudicate "good arguments." By applying Activity Dependency
Networks (ADNs), we extract the LLMs' implicit criteria for such assessments
and illustrate how normative values influence these perceptions. We discuss the
consequences of our findings for human-AI alignment and bias mitigation. Our
code and data at https://github.com/david-jenny/LLM-Political-Study. | Computational Linguistics |
What field is the article from? | Title: Improving Pacing in Long-Form Story Planning
Abstract: Existing LLM-based systems for writing long-form stories or story outlines
frequently suffer from unnatural pacing, whether glossing over important events
or over-elaborating on insignificant details, resulting in a jarring experience
for the reader. We propose a CONCrete Outline ConTrol (CONCOCT) system to
improve pacing when automatically generating story outlines. We first train a
concreteness evaluator to judge which of two events is more concrete
(low-level-detailed). This evaluator can then be used to control pacing in
hierarchical outline generation; in this work, we explore a vaguest-first
expansion procedure that aims for uniform pacing. We further use the evaluator
to filter new outline items based on predicted concreteness. Compared to a
baseline hierarchical outline generator, humans judge CONCOCT's pacing to be
more consistent over 57% of the time across multiple outline lengths; the gains
also translate to downstream stories. All code, data, and models are
open-sourced. | Computational Linguistics |
What field is the article from? | Title: From Concept to Manufacturing: Evaluating Vision-Language Models for Engineering Design
Abstract: Engineering Design is undergoing a transformative shift with the advent of
AI, marking a new era in how we approach product, system, and service planning.
Large language models have demonstrated impressive capabilities in enabling
this shift. Yet, with text as their only input modality, they cannot leverage
the large body of visual artifacts that engineers have used for centuries and
are accustomed to. This gap is addressed with the release of multimodal vision
language models, such as GPT-4V, enabling AI to impact many more types of
tasks. In light of these advancements, this paper presents a comprehensive
evaluation of GPT-4V, a vision language model, across a wide spectrum of
engineering design tasks, categorized into four main areas: Conceptual Design,
System-Level and Detailed Design, Manufacturing and Inspection, and Engineering
Education Tasks. Our study assesses GPT-4V's capabilities in design tasks such
as sketch similarity analysis, concept selection using Pugh Charts, material
selection, engineering drawing analysis, CAD generation, topology optimization,
design for additive and subtractive manufacturing, spatial reasoning
challenges, and textbook problems. Through this structured evaluation, we not
only explore GPT-4V's proficiency in handling complex design and manufacturing
challenges but also identify its limitations in complex engineering design
applications. Our research establishes a foundation for future assessments of
vision language models, emphasizing their immense potential for innovating and
enhancing the engineering design and manufacturing landscape. It also
contributes a set of benchmark testing datasets, with more than 1000 queries,
for ongoing advancements and applications in this field. | Artificial Intelligence |
What field is the article from? | Title: Learning From Mistakes Makes LLM Better Reasoner
Abstract: Large language models (LLMs) recently exhibited remarkable reasoning
capabilities on solving math problems. To further improve this capability, this
work proposes Learning from Mistakes (LeMa), akin to human learning processes.
Consider a human student who failed to solve a math problem, he will learn from
what mistake he has made and how to correct it. Mimicking this error-driven
learning process, LeMa fine-tunes LLMs on mistake-correction data pairs
generated by GPT-4. Specifically, we first collect inaccurate reasoning paths
from various LLMs and then employ GPT-4 as a "corrector" to (1) identify the
mistake step, (2) explain the reason for the mistake, and (3) correct the
mistake and generate the final answer. Experimental results demonstrate the
effectiveness of LeMa: across five backbone LLMs and two mathematical reasoning
tasks, LeMa consistently improves the performance compared with fine-tuning on
CoT data alone. Impressively, LeMa can also benefit specialized LLMs such as
WizardMath and MetaMath, achieving 85.4% pass@1 accuracy on GSM8K and 27.1% on
MATH. This surpasses the SOTA performance achieved by non-execution open-source
models on these challenging tasks. Our code, data and models will be publicly
available at https://github.com/microsoft/LEMA. | Computational Linguistics |
What field is the article from? | Title: Which way is `right'?: Uncovering limitations of Vision-and-Language Navigation model
Abstract: The challenging task of Vision-and-Language Navigation (VLN) requires
embodied agents to follow natural language instructions to reach a goal
location or object (e.g. `walk down the hallway and turn left at the piano').
For agents to complete this task successfully, they must be able to ground
objects referenced into the instruction (e.g.`piano') into the visual scene as
well as ground directional phrases (e.g.`turn left') into actions. In this work
we ask the following question -- to what degree are spatial and directional
language cues informing the navigation model's decisions? We propose a series
of simple masking experiments to inspect the model's reliance on different
parts of the instruction. Surprisingly we uncover that certain top performing
models rely only on the noun tokens of the instructions. We propose two
training methods to alleviate this concerning limitation. | Computer Vision |
What field is the article from? | Title: Sub-Sentence Encoder: Contrastive Learning of Propositional Semantic Representations
Abstract: We introduce sub-sentence encoder, a contrastively-learned contextual
embedding model for fine-grained semantic representation of text. In contrast
to the standard practice with sentence embeddings, where the meaning of an
entire sequence of text is encoded into a fixed-length vector, the sub-sentence
encoder learns to produce distinct contextual embeddings corresponding to
different atomic propositions, i.e. atomic units of meaning expressed within a
text sequence. The sub-sentence embeddings are contrastively learned to
recognize (inferred) semantic equivalence between propositions across different
text sequences. Our experiments show the effectiveness of sub-sentence encoders
in applications, such as retrieving supporting facts for fine-grained text
attribution or recognizing the conditional semantic similarity between texts.
In practice, we demonstrate that sub-sentence encoders keep the same level of
inference cost and space complexity compared to sentence encoders. | Computational Linguistics |
What field is the article from? | Title: Replicable Benchmarking of Neural Machine Translation (NMT) on Low-Resource Local Languages in Indonesia
Abstract: Neural machine translation (NMT) for low-resource local languages in
Indonesia faces significant challenges, including the need for a representative
benchmark and limited data availability. This work addresses these challenges
by comprehensively analyzing training NMT systems for four low-resource local
languages in Indonesia: Javanese, Sundanese, Minangkabau, and Balinese. Our
study encompasses various training approaches, paradigms, data sizes, and a
preliminary study into using large language models for synthetic low-resource
languages parallel data generation. We reveal specific trends and insights into
practical strategies for low-resource language translation. Our research
demonstrates that despite limited computational resources and textual data,
several of our NMT systems achieve competitive performances, rivaling the
translation quality of zero-shot gpt-3.5-turbo. These findings significantly
advance NMT for low-resource languages, offering valuable guidance for
researchers in similar contexts. | Computational Linguistics |
What field is the article from? | Title: Morphology-Enhanced CAM-Guided SAM for weakly supervised Breast Lesion Segmentation
Abstract: Breast cancer diagnosis challenges both patients and clinicians, with early
detection being crucial for effective treatment. Ultrasound imaging plays a key
role in this, but its utility is hampered by the need for precise lesion
segmentation-a task that is both time-consuming and labor-intensive. To address
these challenges, we propose a new framework: a morphology-enhanced, Class
Activation Map (CAM)-guided model, which is optimized using a computer vision
foundation model known as SAM. This innovative framework is specifically
designed for weakly supervised lesion segmentation in early-stage breast
ultrasound images. Our approach uniquely leverages image-level annotations,
which removes the requirement for detailed pixel-level annotation. Initially,
we perform a preliminary segmentation using breast lesion morphology knowledge.
Following this, we accurately localize lesions by extracting semantic
information through a CAM-based heatmap. These two elements are then fused
together, serving as a prompt to guide the SAM in performing refined
segmentation. Subsequently, post-processing techniques are employed to rectify
topological errors made by the SAM. Our method not only simplifies the
segmentation process but also attains accuracy comparable to supervised
learning methods that rely on pixel-level annotation. Our framework achieves a
Dice score of 74.39% on the test set, demonstrating compareable performance
with supervised learning methods. Additionally, it outperforms a supervised
learning model, in terms of the Hausdorff distance, scoring 24.27 compared to
Deeplabv3+'s 32.22. These experimental results showcase its feasibility and
superior performance in integrating weakly supervised learning with SAM. The
code is made available at: https://github.com/YueXin18/MorSeg-CAM-SAM. | Computer Vision |
What field is the article from? | Title: A Survey on Large Language Models for Personalized and Explainable Recommendations
Abstract: In recent years, Recommender Systems(RS) have witnessed a transformative
shift with the advent of Large Language Models(LLMs) in the field of Natural
Language Processing(NLP). These models such as OpenAI's GPT-3.5/4, Llama from
Meta, have demonstrated unprecedented capabilities in understanding and
generating human-like text. This has led to a paradigm shift in the realm of
personalized and explainable recommendations, as LLMs offer a versatile toolset
for processing vast amounts of textual data to enhance user experiences. To
provide a comprehensive understanding of the existing LLM-based recommendation
systems, this survey aims to analyze how RS can benefit from LLM-based
methodologies. Furthermore, we describe major challenges in Personalized
Explanation Generating(PEG) tasks, which are cold-start problems, unfairness
and bias problems in RS. | Information Retrieval |
What field is the article from? | Title: An Evaluation of GPT-4V and Gemini in Online VQA
Abstract: A comprehensive evaluation is critical to assess the capabilities of large
multimodal models (LMM). In this study, we evaluate the state-of-the-art LMMs,
namely GPT-4V and Gemini, utilizing the VQAonline dataset. VQAonline is an
end-to-end authentic VQA dataset sourced from a diverse range of everyday
users. Compared previous benchmarks, VQAonline well aligns with real-world
tasks. It enables us to effectively evaluate the generality of an LMM, and
facilitates a direct comparison with human performance. To comprehensively
evaluate GPT-4V and Gemini, we generate seven types of metadata for around
2,000 visual questions, such as image type and the required image processing
capabilities. Leveraging this array of metadata, we analyze the zero-shot
performance of GPT-4V and Gemini, and identify the most challenging questions
for both models. | Computer Vision |
What field is the article from? | Title: Designing AI Support for Human Involvement in AI-assisted Decision Making: A Taxonomy of Human-AI Interactions from a Systematic Review
Abstract: Efforts in levering Artificial Intelligence (AI) in decision support systems
have disproportionately focused on technological advancements, often
overlooking the alignment between algorithmic outputs and human expectations.
To address this, explainable AI promotes AI development from a more
human-centered perspective. Determining what information AI should provide to
aid humans is vital, however, how the information is presented, e. g., the
sequence of recommendations and the solicitation of interpretations, is equally
crucial. This motivates the need to more precisely study Human-AI interaction
as a pivotal component of AI-based decision support. While several empirical
studies have evaluated Human-AI interactions in multiple application domains in
which interactions can take many forms, there is not yet a common vocabulary to
describe human-AI interaction protocols. To address this gap, we describe the
results of a systematic review of the AI-assisted decision making literature,
analyzing 105 selected articles, which grounds the introduction of a taxonomy
of interaction patterns that delineate various modes of human-AI interactivity.
We find that current interactions are dominated by simplistic collaboration
paradigms and report comparatively little support for truly interactive
functionality. Our taxonomy serves as a valuable tool to understand how
interactivity with AI is currently supported in decision-making contexts and
foster deliberate choices of interaction designs. | Human-Computer Interaction |
What field is the article from? | Title: Perceptual Group Tokenizer: Building Perception with Iterative Grouping
Abstract: Human visual recognition system shows astonishing capability of compressing
visual information into a set of tokens containing rich representations without
label supervision. One critical driving principle behind it is perceptual
grouping. Despite being widely used in computer vision in the early 2010s, it
remains a mystery whether perceptual grouping can be leveraged to derive a
neural visual recognition backbone that generates as powerful representations.
In this paper, we propose the Perceptual Group Tokenizer, a model that entirely
relies on grouping operations to extract visual features and perform
self-supervised representation learning, where a series of grouping operations
are used to iteratively hypothesize the context for pixels or superpixels to
refine feature representations. We show that the proposed model can achieve
competitive performance compared to state-of-the-art vision architectures, and
inherits desirable properties including adaptive computation without
re-training, and interpretability. Specifically, Perceptual Group Tokenizer
achieves 80.3% on ImageNet-1K self-supervised learning benchmark with linear
probe evaluation, marking a new progress under this paradigm. | Computer Vision |
What field is the article from? | Title: Brain Networks and Intelligence: A Graph Neural Network Based Approach to Resting State fMRI Data
Abstract: Resting-state functional magnetic resonance imaging (rsfMRI) is a powerful
tool for investigating the relationship between brain function and cognitive
processes as it allows for the functional organization of the brain to be
captured without relying on a specific task or stimuli. In this paper, we
present a novel modeling architecture called BrainRGIN for predicting
intelligence (fluid, crystallized, and total intelligence) using graph neural
networks on rsfMRI derived static functional network connectivity matrices.
Extending from the existing graph convolution networks, our approach
incorporates a clustering-based embedding and graph isomorphism network in the
graph convolutional layer to reflect the nature of the brain sub-network
organization and efficient network expression, in combination with TopK pooling
and attention-based readout functions. We evaluated our proposed architecture
on a large dataset, specifically the Adolescent Brain Cognitive Development
Dataset, and demonstrated its effectiveness in predicting individual
differences in intelligence. Our model achieved lower mean squared errors and
higher correlation scores than existing relevant graph architectures and other
traditional machine learning models for all of the intelligence prediction
tasks. The middle frontal gyrus exhibited a significant contribution to both
fluid and crystallized intelligence, suggesting their pivotal role in these
cognitive processes. Total composite scores identified a diverse set of brain
regions to be relevant which underscores the complex nature of total
intelligence. | Machine Learning |
What field is the article from? | Title: Interpretable Long Term Waypoint-Based Trajectory Prediction Model
Abstract: Predicting the future trajectories of dynamic agents in complex environments
is crucial for a variety of applications, including autonomous driving,
robotics, and human-computer interaction. It is a challenging task as the
behavior of the agent is unknown and intrinsically multimodal. Our key insight
is that the agents behaviors are influenced not only by their past trajectories
and their interaction with their immediate environment but also largely with
their long term waypoint (LTW). In this paper, we study the impact of adding a
long-term goal on the performance of a trajectory prediction framework. We
present an interpretable long term waypoint-driven prediction framework
(WayDCM). WayDCM first predict an agent's intermediate goal (IG) by encoding
his interactions with the environment as well as his LTW using a combination of
a Discrete choice Model (DCM) and a Neural Network model (NN). Then, our model
predicts the corresponding trajectories. This is in contrast to previous work
which does not consider the ultimate intent of the agent to predict his
trajectory. We evaluate and show the effectiveness of our approach on the Waymo
Open dataset. | Artificial Intelligence |
What field is the article from? | Title: Integrating Pre-trained Language Model into Neural Machine Translation
Abstract: Neural Machine Translation (NMT) has become a significant technology in
natural language processing through extensive research and development.
However, the deficiency of high-quality bilingual language pair data still
poses a major challenge to improving NMT performance. Recent studies have been
exploring the use of contextual information from pre-trained language model
(PLM) to address this problem. Yet, the issue of incompatibility between PLM
and NMT model remains unresolved. This study proposes PLM-integrated NMT
(PiNMT) model to overcome the identified problems. PiNMT model consists of
three critical components, PLM Multi Layer Converter, Embedding Fusion, and
Cosine Alignment, each playing a vital role in providing effective PLM
information to NMT. Furthermore, two training strategies, Separate Learning
Rates and Dual Step Training, are also introduced in this paper. By
implementing the proposed PiNMT model and training strategy, we achieve
state-of-the-art performance on the IWSLT'14 En$\leftrightarrow$De dataset.
This study's outcomes are noteworthy as they demonstrate a novel approach for
efficiently integrating PLM with NMT to overcome incompatibility and enhance
performance. | Computational Linguistics |
What field is the article from? | Title: A Vision for Operationalising Diversity and Inclusion in AI
Abstract: The growing presence of Artificial Intelligence (AI) in various sectors
necessitates systems that accurately reflect societal diversity. This study
seeks to envision the operationalization of the ethical imperatives of
diversity and inclusion (D&I) within AI ecosystems, addressing the current
disconnect between ethical guidelines and their practical implementation. A
significant challenge in AI development is the effective operationalization of
D&I principles, which is critical to prevent the reinforcement of existing
biases and ensure equity across AI applications. This paper proposes a vision
of a framework for developing a tool utilizing persona-based simulation by
Generative AI (GenAI). The approach aims to facilitate the representation of
the needs of diverse users in the requirements analysis process for AI
software. The proposed framework is expected to lead to a comprehensive persona
repository with diverse attributes that inform the development process with
detailed user narratives. This research contributes to the development of an
inclusive AI paradigm that ensures future technological advances are designed
with a commitment to the diverse fabric of humanity. | Artificial Intelligence |
What field is the article from? | Title: Active teacher selection for reinforcement learning from human feedback
Abstract: Reinforcement learning from human feedback (RLHF) enables machine learning
systems to learn objectives from human feedback. A core limitation of these
systems is their assumption that all feedback comes from a single human
teacher, despite querying a range of distinct teachers. We propose the Hidden
Utility Bandit (HUB) framework to model differences in teacher rationality,
expertise, and costliness, formalizing the problem of learning from multiple
teachers. We develop a variety of solution algorithms and apply them to two
real-world domains: paper recommendation systems and COVID-19 vaccine testing.
We find that the Active Teacher Selection (ATS) algorithm outperforms baseline
algorithms by actively selecting when and which teacher to query. The HUB
framework and ATS algorithm demonstrate the importance of leveraging
differences between teachers to learn accurate reward models, facilitating
future research on active teacher selection for robust reward modeling. | Artificial Intelligence |
What field is the article from? | Title: Rethinking and Benchmarking Predict-then-Optimize Paradigm for Combinatorial Optimization Problems
Abstract: Numerous web applications rely on solving combinatorial optimization
problems, such as energy cost-aware scheduling, budget allocation on web
advertising, and graph matching on social networks. However, many optimization
problems involve unknown coefficients, and improper predictions of these
factors may lead to inferior decisions which may cause energy wastage,
inefficient resource allocation, inappropriate matching in social networks,
etc. Such a research topic is referred to as "Predict-Then-Optimize (PTO)"
which considers the performance of prediction and decision-making in a unified
system. A noteworthy recent development is the end-to-end methods by directly
optimizing the ultimate decision quality which claims to yield better results
in contrast to the traditional two-stage approach. However, the evaluation
benchmarks in this field are fragmented and the effectiveness of various models
in different scenarios remains unclear, hindering the comprehensive assessment
and fast deployment of these methods. To address these issues, we provide a
comprehensive categorization of current approaches and integrate existing
experimental scenarios to establish a unified benchmark, elucidating the
circumstances under which end-to-end training yields improvements, as well as
the contexts in which it performs ineffectively. We also introduce a new
dataset for the industrial combinatorial advertising problem for inclusive
finance to open-source. We hope the rethinking and benchmarking of PTO could
facilitate more convenient evaluation and deployment, and inspire further
improvements both in the academy and industry within this field. | Machine Learning |
What field is the article from? | Title: BrainWash: A Poisoning Attack to Forget in Continual Learning
Abstract: Continual learning has gained substantial attention within the deep learning
community, offering promising solutions to the challenging problem of
sequential learning. Yet, a largely unexplored facet of this paradigm is its
susceptibility to adversarial attacks, especially with the aim of inducing
forgetting. In this paper, we introduce "BrainWash," a novel data poisoning
method tailored to impose forgetting on a continual learner. By adding the
BrainWash noise to a variety of baselines, we demonstrate how a trained
continual learner can be induced to forget its previously learned tasks
catastrophically, even when using these continual learning baselines. An
important feature of our approach is that the attacker requires no access to
previous tasks' data and is armed merely with the model's current parameters
and the data belonging to the most recent task. Our extensive experiments
highlight the efficacy of BrainWash, showcasing degradation in performance
across various regularization-based continual learning methods. | Machine Learning |
What field is the article from? | Title: Tamil-Llama: A New Tamil Language Model Based on Llama 2
Abstract: Language modeling has witnessed remarkable advancements in recent years, with
Large Language Models (LLMs) like ChatGPT setting unparalleled benchmarks in
human-like text generation. However, a prevailing limitation is the
underrepresentation of languages like Tamil in these cutting-edge models,
leading to suboptimal performance in diverse linguistic contexts. This paper
addresses this lacuna, enhancing the open-source LLaMA model with an addition
of 16,000 Tamil tokens, aiming to achieve superior text generation and
comprehension in the Tamil language. We strategically employ the LoRA
methodology for efficient model training on a comprehensive Tamil corpus,
ensuring computational feasibility and model robustness. Moreover, we introduce
a Tamil-translated version of the Alpaca dataset and a subset of the OpenOrca
dataset tailored for instruction fine-tuning. Our results showcase significant
performance improvements in Tamil text generation, with potential implications
for the broader landscape of LLMs in Indian languages. We further underscore
our commitment to open research by making our models, datasets, and code
publicly accessible, fostering further innovations in language modeling. | Computational Linguistics |
What field is the article from? | Title: RecExplainer: Aligning Large Language Models for Recommendation Model Interpretability
Abstract: Recommender systems are widely used in various online services, with
embedding-based models being particularly popular due to their expressiveness
in representing complex signals. However, these models often lack
interpretability, making them less reliable and transparent for both users and
developers. With the emergence of large language models (LLMs), we find that
their capabilities in language expression, knowledge-aware reasoning, and
instruction following are exceptionally powerful. Based on this, we propose a
new model interpretation approach for recommender systems, by using LLMs as
surrogate models and learn to mimic and comprehend target recommender models.
Specifically, we introduce three alignment methods: behavior alignment,
intention alignment, and hybrid alignment. Behavior alignment operates in the
language space, representing user preferences and item information as text to
learn the recommendation model's behavior; intention alignment works in the
latent space of the recommendation model, using user and item representations
to understand the model's behavior; hybrid alignment combines both language and
latent spaces for alignment training. To demonstrate the effectiveness of our
methods, we conduct evaluation from two perspectives: alignment effect, and
explanation generation ability on three public datasets. Experimental results
indicate that our approach effectively enables LLMs to comprehend the patterns
of recommendation models and generate highly credible recommendation
explanations. | Information Retrieval |
What field is the article from? | Title: Code Models are Zero-shot Precondition Reasoners
Abstract: One of the fundamental skills required for an agent acting in an environment
to complete tasks is the ability to understand what actions are plausible at
any given point. This work explores a novel use of code representations to
reason about action preconditions for sequential decision making tasks. Code
representations offer the flexibility to model procedural activities and
associated constraints as well as the ability to execute and verify constraint
satisfaction. Leveraging code representations, we extract action preconditions
from demonstration trajectories in a zero-shot manner using pre-trained code
models. Given these extracted preconditions, we propose a precondition-aware
action sampling strategy that ensures actions predicted by a policy are
consistent with preconditions. We demonstrate that the proposed approach
enhances the performance of few-shot policy learning approaches across
task-oriented dialog and embodied textworld benchmarks. | Artificial Intelligence |
What field is the article from? | Title: On the Multiple Roles of Ontologies in Explainable AI
Abstract: This paper discusses the different roles that explicit knowledge, in
particular ontologies, can play in Explainable AI and in the development of
human-centric explainable systems and intelligible explanations. We consider
three main perspectives in which ontologies can contribute significantly,
namely reference modelling, common-sense reasoning, and knowledge refinement
and complexity management. We overview some of the existing approaches in the
literature, and we position them according to these three proposed
perspectives. The paper concludes by discussing what challenges still need to
be addressed to enable ontology-based approaches to explanation and to evaluate
their human-understandability and effectiveness. | Artificial Intelligence |
What field is the article from? | Title: A Survey of Generative AI for Intelligent Transportation Systems
Abstract: Intelligent transportation systems play a crucial role in modern traffic
management and optimization, greatly improving traffic efficiency and safety.
With the rapid development of generative artificial intelligence (Generative
AI) technologies in the fields of image generation and natural language
processing, generative AI has also played a crucial role in addressing key
issues in intelligent transportation systems, such as data sparsity, difficulty
in observing abnormal scenarios, and in modeling data uncertainty. In this
review, we systematically investigate the relevant literature on generative AI
techniques in addressing key issues in different types of tasks in intelligent
transportation systems. First, we introduce the principles of different
generative AI techniques, and their potential applications. Then, we classify
tasks in intelligent transportation systems into four types: traffic
perception, traffic prediction, traffic simulation, and traffic
decision-making. We systematically illustrate how generative AI techniques
addresses key issues in these four different types of tasks. Finally, we
summarize the challenges faced in applying generative AI to intelligent
transportation systems, and discuss future research directions based on
different application scenarios. | Artificial Intelligence |
What field is the article from? | Title: Age-Friendly Route Planner: Calculating Comfortable Routes for Senior Citizens
Abstract: The application of routing algorithms to real-world situations is a widely
studied research topic. Despite this, routing algorithms and applications are
usually developed for a general purpose, meaning that certain groups, such as
ageing people, are often marginalized due to the broad approach of the designed
algorithms. This situation may pose a problem in cities which are suffering a
slow but progressive ageing of their populations. With this motivation in mind,
this paper focuses on describing our implemented Age-Friendly Route Planner,
whose goal is to improve the experience in the city for senior citizens. In
order to measure the age-friendliness of a route, several variables have been
deemed, such as the number of amenities along the route, the amount of
comfortable elements found, or the avoidance of sloppy sections. In this paper,
we describe one of the main features of the Age-Friendly Route Planner: the
preference-based routes, and we also demonstrate how it can contribute to the
creation of adapted friendly routes. | Artificial Intelligence |
What field is the article from? | Title: On Meta-Prompting
Abstract: Certain statistical models are capable of interpreting input strings as
instructions, or prompts, and carry out tasks based on them. Many approaches to
prompting and pre-training these models involve the automated generation of
these prompts. We call these approaches meta-prompting, or prompting to obtain
prompts. We propose a theoretical framework based on category theory to
generalize and describe them. This framework is flexible enough to account for
LLM stochasticity; and allows us to obtain formal results around task
agnosticity and equivalence of various meta-prompting approaches. We experiment
with meta-prompting in two active areas of model research: creativity and
ideation. We find that user preference favors (p < 0.01) the prompts generated
under meta-prompting, as well as their corresponding outputs, over a series of
hardcoded baseline prompts that include the original task prompt. Using our
framework, we argue that meta-prompting is more effective than basic prompting
at generating desirable outputs. | Computational Linguistics |
What field is the article from? | Title: Frequency Domain-based Dataset Distillation
Abstract: This paper presents FreD, a novel parameterization method for dataset
distillation, which utilizes the frequency domain to distill a small-sized
synthetic dataset from a large-sized original dataset. Unlike conventional
approaches that focus on the spatial domain, FreD employs frequency-based
transforms to optimize the frequency representations of each data instance. By
leveraging the concentration of spatial domain information on specific
frequency components, FreD intelligently selects a subset of frequency
dimensions for optimization, leading to a significant reduction in the required
budget for synthesizing an instance. Through the selection of frequency
dimensions based on the explained variance, FreD demonstrates both theoretical
and empirical evidence of its ability to operate efficiently within a limited
budget, while better preserving the information of the original dataset
compared to conventional parameterization methods. Furthermore, based on the
orthogonal compatibility of FreD with existing methods, we confirm that FreD
consistently improves the performances of existing distillation methods over
the evaluation scenarios with different benchmark datasets. We release the code
at https://github.com/sdh0818/FreD. | Machine Learning |
What field is the article from? | Title: Divergences between Language Models and Human Brains
Abstract: Do machines and humans process language in similar ways? A recent line of
research has hinted in the affirmative, demonstrating that human brain signals
can be effectively predicted using the internal representations of language
models (LMs). This is thought to reflect shared computational principles
between LMs and human language processing. However, there are also clear
differences in how LMs and humans acquire and use language, even if the final
task they are performing is the same. Despite this, there is little work
exploring systematic differences between human and machine language processing
using brain data. To address this question, we examine the differences between
LM representations and the human brain's responses to language, specifically by
examining a dataset of Magnetoencephalography (MEG) responses to a written
narrative. In doing so we identify three phenomena that, in prior work, LMs
have been found to not capture well: emotional understanding, figurative
language processing, and physical commonsense. By fine-tuning LMs on datasets
related to these phenomena, we observe that fine-tuned LMs show improved
alignment with human brain responses across these tasks. Our study implies that
the observed divergences between LMs and human brains may stem from LMs'
inadequate representation of these specific types of knowledge. | Computational Linguistics |
What field is the article from? | Title: Irreducible Curriculum for Language Model Pretraining
Abstract: Automatic data selection and curriculum design for training large language
models is challenging, with only a few existing methods showing improvements
over standard training. Furthermore, current schemes focus on domain-level
selection, overlooking the more fine-grained contributions of each individual
training point. It is difficult to apply traditional datapoint selection
methods on large language models: most online batch selection methods perform
two-times forward or backward passes, which introduces considerable extra costs
with large-scale models. To mitigate these obstacles, we propose irreducible
curriculum as a curriculum learning algorithm for language model pretraining,
which prioritizes samples with higher learnability. Specifically, to avoid
prohibitive extra computation overhead, we simulate the sample loss along the
main model's training trajectory using a small-scale proxy model. Our
experiments on the RedPajama-1B dataset demonstrate a consistent improvement on
validation perplexity across all 7 domains compared to random uniform baseline
and the anti-curriculum strategy. Our method also reduces the sharpness of the
network and illustrates a better 5-shot accuracy on MMLU benchmarks. | Computational Linguistics |
What field is the article from? | Title: Why "classic" Transformers are shallow and how to make them go deep
Abstract: Since its introduction in 2017, Transformer has emerged as the leading neural
network architecture, catalyzing revolutionary advancements in many AI
disciplines. The key innovation in Transformer is a Self-Attention (SA)
mechanism designed to capture contextual information. However, extending the
original Transformer design to models of greater depth has proven exceedingly
challenging, if not impossible. Even though various modifications have been
proposed in order to stack more layers of SA mechanism into deeper models, a
full understanding of this depth problem remains elusive. In this paper, we
conduct a comprehensive investigation, both theoretically and empirically, to
substantiate the claim that the depth problem is caused by \emph{token
similarity escalation}; that is, tokens grow increasingly alike after repeated
applications of the SA mechanism. Our analysis reveals that, driven by the
invariant leading eigenspace and large spectral gaps of attention matrices,
token similarity provably escalates at a linear rate. Based on the gained
insight, we propose a simple strategy that, unlike most existing methods,
surgically removes excessive similarity without discounting the SA mechanism as
a whole. Preliminary experimental results confirm the effectiveness of the
proposed approach on moderate-scale post-norm Transformer models. | Machine Learning |
What field is the article from? | Title: Deployment of a Robust and Explainable Mortality Prediction Model: The COVID-19 Pandemic and Beyond
Abstract: This study investigated the performance, explainability, and robustness of
deployed artificial intelligence (AI) models in predicting mortality during the
COVID-19 pandemic and beyond. The first study of its kind, we found that
Bayesian Neural Networks (BNNs) and intelligent training techniques allowed our
models to maintain performance amidst significant data shifts. Our results
emphasize the importance of developing robust AI models capable of matching or
surpassing clinician predictions, even under challenging conditions. Our
exploration of model explainability revealed that stochastic models generate
more diverse and personalized explanations thereby highlighting the need for AI
models that provide detailed and individualized insights in real-world clinical
settings. Furthermore, we underscored the importance of quantifying uncertainty
in AI models which enables clinicians to make better-informed decisions based
on reliable predictions. Our study advocates for prioritizing implementation
science in AI research for healthcare and ensuring that AI solutions are
practical, beneficial, and sustainable in real-world clinical environments. By
addressing unique challenges and complexities in healthcare settings,
researchers can develop AI models that effectively improve clinical practice
and patient outcomes. | Machine Learning |
What field is the article from? | Title: UniFolding: Towards Sample-efficient, Scalable, and Generalizable Robotic Garment Folding
Abstract: This paper explores the development of UniFolding, a sample-efficient,
scalable, and generalizable robotic system for unfolding and folding various
garments. UniFolding employs the proposed UFONet neural network to integrate
unfolding and folding decisions into a single policy model that is adaptable to
different garment types and states. The design of UniFolding is based on a
garment's partial point cloud, which aids in generalization and reduces
sensitivity to variations in texture and shape. The training pipeline
prioritizes low-cost, sample-efficient data collection. Training data is
collected via a human-centric process with offline and online stages. The
offline stage involves human unfolding and folding actions via Virtual Reality,
while the online stage utilizes human-in-the-loop learning to fine-tune the
model in a real-world setting. The system is tested on two garment types:
long-sleeve and short-sleeve shirts. Performance is evaluated on 20 shirts with
significant variations in textures, shapes, and materials. More experiments and
videos can be found in the supplementary materials and on the website:
https://unifolding.robotflow.ai | Robotics |
What field is the article from? | Title: DRNet: A Decision-Making Method for Autonomous Lane Changingwith Deep Reinforcement Learning
Abstract: Machine learning techniques have outperformed numerous rule-based methods for
decision-making in autonomous vehicles. Despite recent efforts, lane changing
remains a major challenge, due to the complex driving scenarios and changeable
social behaviors of surrounding vehicles. To help improve the state of the art,
we propose to leveraging the emerging \underline{D}eep
\underline{R}einforcement learning (DRL) approach for la\underline{NE} changing
at the \underline{T}actical level. To this end, we present "DRNet", a novel and
highly efficient DRL-based framework that enables a DRL agent to learn to drive
by executing reasonable lane changing on simulated highways with an arbitrary
number of lanes, and considering driving style of surrounding vehicles to make
better decisions. Furthermore, to achieve a safe policy for decision-making,
DRNet incorporates ideas from safety verification, the most important component
of autonomous driving, to ensure that only safe actions are chosen at any time.
The setting of our state representation and reward function enables the trained
agent to take appropriate actions in a real-world-like simulator. Our DRL agent
has the ability to learn the desired task without causing collisions and
outperforms DDQN and other baseline models. | Robotics |
What field is the article from? | Title: LLM Augmented Hierarchical Agents
Abstract: Solving long-horizon, temporally-extended tasks using Reinforcement Learning
(RL) is challenging, compounded by the common practice of learning without
prior knowledge (or tabula rasa learning). Humans can generate and execute
plans with temporally-extended actions and quickly learn to perform new tasks
because we almost never solve problems from scratch. We want autonomous agents
to have this same ability. Recently, LLMs have been shown to encode a
tremendous amount of knowledge about the world and to perform impressive
in-context learning and reasoning. However, using LLMs to solve real world
problems is hard because they are not grounded in the current task. In this
paper we exploit the planning capabilities of LLMs while using RL to provide
learning from the environment, resulting in a hierarchical agent that uses LLMs
to solve long-horizon tasks. Instead of completely relying on LLMs, they guide
a high-level policy, making learning significantly more sample efficient. This
approach is evaluated in simulation environments such as MiniGrid, SkillHack,
and Crafter, and on a real robot arm in block manipulation tasks. We show that
agents trained using our approach outperform other baselines methods and, once
trained, don't need access to LLMs during deployment. | Machine Learning |
What field is the article from? | Title: Privacy Issues in Large Language Models: A Survey
Abstract: This is the first survey of the active area of AI research that focuses on
privacy issues in Large Language Models (LLMs). Specifically, we focus on work
that red-teams models to highlight privacy risks, attempts to build privacy
into the training or inference process, enables efficient data deletion from
trained models to comply with existing privacy regulations, and tries to
mitigate copyright issues. Our focus is on summarizing technical research that
develops algorithms, proves theorems, and runs empirical evaluations. While
there is an extensive body of legal and policy work addressing these challenges
from a different angle, that is not the focus of our survey. Nevertheless,
these works, along with recent legal developments do inform how these technical
problems are formalized, and so we discuss them briefly in Section 1. While we
have made our best effort to include all the relevant work, due to the fast
moving nature of this research we may have missed some recent work. If we have
missed some of your work please contact us, as we will attempt to keep this
survey relatively up to date. We are maintaining a repository with the list of
papers covered in this survey and any relevant code that was publicly available
at https://github.com/safr-ml-lab/survey-llm. | Artificial Intelligence |
What field is the article from? | Title: Mini Minds: Exploring Bebeshka and Zlata Baby Models
Abstract: In this paper, we describe the University of Lyon 2 submission to the
Strict-Small track of the BabyLM competition. The shared task is created with
an emphasis on small-scale language modelling from scratch on limited-size data
and human language acquisition. Dataset released for the Strict-Small track has
10M words, which is comparable to children's vocabulary size. We approach the
task with an architecture search, minimizing masked language modelling loss on
the data of the shared task. Having found an optimal configuration, we
introduce two small-size language models (LMs) that were submitted for
evaluation, a 4-layer encoder with 8 attention heads and a 6-layer decoder
model with 12 heads which we term Bebeshka and Zlata, respectively. Despite
being half the scale of the baseline LMs, our proposed models achieve
comparable performance. We further explore the applicability of small-scale
language models in tasks involving moral judgment, aligning their predictions
with human values. These findings highlight the potential of compact LMs in
addressing practical language understanding tasks. | Computational Linguistics |
What field is the article from? | Title: Large language models for aspect-based sentiment analysis
Abstract: Large language models (LLMs) offer unprecedented text completion
capabilities. As general models, they can fulfill a wide range of roles,
including those of more specialized models. We assess the performance of GPT-4
and GPT-3.5 in zero shot, few shot and fine-tuned settings on the aspect-based
sentiment analysis (ABSA) task. Fine-tuned GPT-3.5 achieves a state-of-the-art
F1 score of 83.8 on the joint aspect term extraction and polarity
classification task of the SemEval-2014 Task 4, improving upon InstructABSA
[@scaria_instructabsa_2023] by 5.7%. However, this comes at the price of 1000
times more model parameters and thus increased inference cost. We discuss the
the cost-performance trade-offs of different models, and analyze the typical
errors that they make. Our results also indicate that detailed prompts improve
performance in zero-shot and few-shot settings but are not necessary for
fine-tuned models. This evidence is relevant for practioners that are faced
with the choice of prompt engineering versus fine-tuning when using LLMs for
ABSA. | Computational Linguistics |
What field is the article from? | Title: Large-Scale Application of Fault Injection into PyTorch Models -- an Extension to PyTorchFI for Validation Efficiency
Abstract: Transient or permanent faults in hardware can render the output of Neural
Networks (NN) incorrect without user-specific traces of the error, i.e. silent
data errors (SDE). On the other hand, modern NNs also possess an inherent
redundancy that can tolerate specific faults. To establish a safety case, it is
necessary to distinguish and quantify both types of corruptions. To study the
effects of hardware (HW) faults on software (SW) in general and NN models in
particular, several fault injection (FI) methods have been established in
recent years. Current FI methods focus on the methodology of injecting faults
but often fall short of accounting for large-scale FI tests, where many fault
locations based on a particular fault model need to be analyzed in a short
time. Results need to be concise, repeatable, and comparable. To address these
requirements and enable fault injection as the default component in a machine
learning development cycle, we introduce a novel fault injection framework
called PyTorchALFI (Application Level Fault Injection for PyTorch) based on
PyTorchFI. PyTorchALFI provides an efficient way to define randomly generated
and reusable sets of faults to inject into PyTorch models, defines complex test
scenarios, enhances data sets, and generates test KPIs while tightly coupling
fault-free, faulty, and modified NN. In this paper, we provide details about
the definition of test scenarios, software architecture, and several examples
of how to use the new framework to apply iterative changes in fault location
and number, compare different model modifications, and analyze test results. | Artificial Intelligence |
What field is the article from? | Title: User-Like Bots for Cognitive Automation: A Survey
Abstract: Software bots have attracted increasing interest and popularity in both
research and society. Their contributions span automation, digital twins, game
characters with conscious-like behavior, and social media. However, there is
still a lack of intelligent bots that can adapt to web environments'
variability and dynamic nature. Unlike human users, they have difficulty
understanding and exploiting the affordances across multiple virtual
environments.
Despite the hype, bots with human user-like cognition do not currently exist.
Chatbots, for instance, lack situational awareness on the digital platforms
where they operate, preventing them from enacting meaningful and autonomous
intelligent behavior similar to human users.
In this survey, we aim to explore the role of cognitive architectures in
supporting efforts towards engineering software bots with advanced general
intelligence. We discuss how cognitive architectures can contribute to creating
intelligent software bots. Furthermore, we highlight key architectural
recommendations for the future development of autonomous, user-like cognitive
bots. | Human-Computer Interaction |
What field is the article from? | Title: Generating Medical Prescriptions with Conditional Transformer
Abstract: Access to real-world medication prescriptions is essential for medical
research and healthcare quality improvement. However, access to real medication
prescriptions is often limited due to the sensitive nature of the information
expressed. Additionally, manually labelling these instructions for training and
fine-tuning Natural Language Processing (NLP) models can be tedious and
expensive. We introduce a novel task-specific model architecture,
Label-To-Text-Transformer (\textbf{LT3}), tailored to generate synthetic
medication prescriptions based on provided labels, such as a vocabulary list of
medications and their attributes. LT3 is trained on a set of around 2K lines of
medication prescriptions extracted from the MIMIC-III database, allowing the
model to produce valuable synthetic medication prescriptions. We evaluate LT3's
performance by contrasting it with a state-of-the-art Pre-trained Language
Model (PLM), T5, analysing the quality and diversity of generated texts. We
deploy the generated synthetic data to train the SpacyNER model for the Named
Entity Recognition (NER) task over the n2c2-2018 dataset. The experiments show
that the model trained on synthetic data can achieve a 96-98\% F1 score at
Label Recognition on Drug, Frequency, Route, Strength, and Form. LT3 codes and
data will be shared at
\url{https://github.com/HECTA-UoM/Label-To-Text-Transformer} | Computational Linguistics |
What field is the article from? | Title: Causal Structure Representation Learning of Confounders in Latent Space for Recommendation
Abstract: Inferring user preferences from the historical feedback of users is a
valuable problem in recommender systems. Conventional approaches often rely on
the assumption that user preferences in the feedback data are equivalent to the
real user preferences without additional noise, which simplifies the problem
modeling. However, there are various confounders during user-item interactions,
such as weather and even the recommendation system itself. Therefore,
neglecting the influence of confounders will result in inaccurate user
preferences and suboptimal performance of the model. Furthermore, the
unobservability of confounders poses a challenge in further addressing the
problem. To address these issues, we refine the problem and propose a more
rational solution. Specifically, we consider the influence of confounders,
disentangle them from user preferences in the latent space, and employ causal
graphs to model their interdependencies without specific labels. By cleverly
combining local and global causal graphs, we capture the user-specificity of
confounders on user preferences. We theoretically demonstrate the
identifiability of the obtained causal graph. Finally, we propose our model
based on Variational Autoencoders, named Causal Structure representation
learning of Confounders in latent space (CSC). We conducted extensive
experiments on one synthetic dataset and five real-world datasets,
demonstrating the superiority of our model. Furthermore, we demonstrate that
the learned causal representations of confounders are controllable, potentially
offering users fine-grained control over the objectives of their recommendation
lists with the learned causal graphs. | Information Retrieval |
What field is the article from? | Title: An Improved Neural Network Model Based On CNN Using For Fruit Sugar Degree Detection
Abstract: Artificial Intelligence(AI) widely applies in Image Classification and
Recognition, Text Understanding and Natural Language Processing, which makes
great progress. In this paper, we introduced AI into the fruit quality
detection field. We designed a fruit sugar degree regression model using an
Artificial Neural Network based on spectra of fruits within the
visible/near-infrared(V/NIR)range. After analysis of fruit spectra, we
innovatively proposed a new neural network structure: low layers consist of a
Multilayer Perceptron(MLP), a middle layer is a 2-dimensional correlation
matrix layer, and high layers consist of several Convolutional Neural
Network(CNN) layers. In this study, we used fruit sugar value as a detection
target, collecting two fruits called Gan Nan Navel and Tian Shan Pear as
samples, doing experiments respectively, and comparing their results. We used
Analysis of Variance(ANOVA) to evaluate the reliability of the dataset we
collected. Then, we tried multiple strategies to process spectrum data,
evaluating their effects. In this paper, we tried to add Wavelet
Decomposition(WD) to reduce feature dimensions and a Genetic Algorithm(GA) to
find excellent features. Then, we compared Neural Network models with
traditional Partial Least Squares(PLS) based models. We also compared the
neural network structure we designed(MLP-CNN) with other traditional neural
network structures. In this paper, we proposed a new evaluation standard
derived from dataset standard deviation(STD) for evaluating detection
performance, validating the viability of using an artificial neural network
model to do fruit sugar degree nondestructive detection. | Artificial Intelligence |
What field is the article from? | Title: Using linear initialisation to improve speed of convergence and fully-trained error in Autoencoders
Abstract: Good weight initialisation is an important step in successful training of
Artificial Neural Networks. Over time a number of improvements have been
proposed to this process. In this paper we introduce a novel weight
initialisation technique called the Straddled Matrix Initialiser. This
initialisation technique is motivated by our assumption that major,
global-scale relationships in data are linear with only smaller effects
requiring complex non-linearities. Combination of Straddled Matrix and ReLU
activation function initialises a Neural Network as a de facto linear model,
which we postulate should be a better starting point for optimisation given our
assumptions. We test this by training autoencoders on three datasets using
Straddled Matrix and seven other state-of-the-art weight initialisation
techniques. In all our experiments the Straddeled Matrix Initialiser clearly
outperforms all other methods. | Machine Learning |
What field is the article from? | Title: Critical Analysis of 5G Networks Traffic Intrusion using PCA, t-SNE and UMAP Visualization and Classifying Attacks
Abstract: Networks, threat models, and malicious actors are advancing quickly. With the
increased deployment of the 5G networks, the security issues of the attached 5G
physical devices have also increased. Therefore, artificial intelligence based
autonomous end-to-end security design is needed that can deal with incoming
threats by detecting network traffic anomalies. To address this requirement, in
this research, we used a recently published 5G traffic dataset, 5G-NIDD, to
detect network traffic anomalies using machine and deep learning approaches.
First, we analyzed the dataset using three visualization techniques:
t-Distributed Stochastic Neighbor Embedding (t-SNE), Uniform Manifold
Approximation and Projection (UMAP), and Principal Component Analysis (PCA).
Second, we reduced the data dimensionality using mutual information and PCA
techniques. Third, we solve the class imbalance issue by inserting synthetic
records of minority classes. Last, we performed classification using six
different classifiers and presented the evaluation metrics. We received the
best results when K-Nearest Neighbors classifier was used: accuracy (97.2%),
detection rate (96.7%), and false positive rate (2.2%). | Cryptography and Security |
What field is the article from? | Title: Neural Network Models of Becoming a Cardinal Principle Knower
Abstract: As children enter elementary school, their understanding of the ordinal
structure of numbers transitions from a memorized count list of the first
50-100 numbers to knowing the successor function and understanding the
countably infinite. We investigate this developmental change in two neural
network models that learn the successor function on the pairs (N, N+1) for N in
(0, 98). The first uses a one-hot encoding of the input and output values and
corresponds to children memorizing a count list, while the second model uses a
place-value encoding and corresponds to children learning the language rules
for naming numbers. The place-value model showed a predicted drop in
representational similarity across tens boundaries. Counting across a tens
boundary can be understood as a vector operation in 2D space, where the numbers
with the same tens place are organized in a linearly separable manner, whereas
those with the same ones place are grouped together. A curriculum learning
simulation shows that, in the expanding numerical environment of the developing
child, representations of smaller numbers continue to be sharpened even as
larger numbers begin to be learned. These models set the stage for future work
using recurrent architectures to move beyond learning the successor function to
simulating the counting process more generally, and point towards a deeper
understanding of what it means to understand the countably infinite. | Machine Learning |
What field is the article from? | Title: E-Sparse: Boosting the Large Language Model Inference through Entropy-based N:M Sparsity
Abstract: Traditional pruning methods are known to be challenging to work in Large
Language Models (LLMs) for Generative AI because of their unaffordable training
process and large computational demands. For the first time, we introduce the
information entropy of hidden state features into a pruning metric design,
namely E-Sparse, to improve the accuracy of N:M sparsity on LLM. E-Sparse
employs the information richness to leverage the channel importance, and
further incorporates several novel techniques to put it into effect: (1) it
introduces information entropy to enhance the significance of parameter weights
and input feature norms as a novel pruning metric, and performs N:M sparsity
without modifying the remaining weights. (2) it designs global naive shuffle
and local block shuffle to quickly optimize the information distribution and
adequately cope with the impact of N:M sparsity on LLMs' accuracy. E-Sparse is
implemented as a Sparse-GEMM on FasterTransformer and runs on NVIDIA Ampere
GPUs. Extensive experiments on the LLaMA family and OPT models show that
E-Sparse can significantly speed up the model inference over the dense model
(up to 1.53X) and obtain significant memory saving (up to 43.52%), with
acceptable accuracy loss. | Machine Learning |
What field is the article from? | Title: Are cascade dialogue state tracking models speaking out of turn in spoken dialogues?
Abstract: In Task-Oriented Dialogue (TOD) systems, correctly updating the system's
understanding of the user's needs is key to a smooth interaction. Traditionally
TOD systems are composed of several modules that interact with one another.
While each of these components is the focus of active research communities,
their behavior in interaction can be overlooked. This paper proposes a
comprehensive analysis of the errors of state of the art systems in complex
settings such as Dialogue State Tracking which highly depends on the dialogue
context. Based on spoken MultiWoz, we identify that errors on non-categorical
slots' values are essential to address in order to bridge the gap between
spoken and chat-based dialogue systems. We explore potential solutions to
improve transcriptions and help dialogue state tracking generative models
correct such errors. | Computational Linguistics |
What field is the article from? | Title: Vision-Language Foundation Models as Effective Robot Imitators
Abstract: Recent progress in vision language foundation models has shown their ability
to understand multimodal data and resolve complicated vision language tasks,
including robotics manipulation. We seek a straightforward way of making use of
existing vision-language models (VLMs) with simple fine-tuning on robotics
data. To this end, we derive a simple and novel vision-language manipulation
framework, dubbed RoboFlamingo, built upon the open-source VLMs, OpenFlamingo.
Unlike prior works, RoboFlamingo utilizes pre-trained VLMs for single-step
vision-language comprehension, models sequential history information with an
explicit policy head, and is slightly fine-tuned by imitation learning only on
language-conditioned manipulation datasets. Such a decomposition provides
RoboFlamingo the flexibility for open-loop control and deployment on
low-performance platforms. By exceeding the state-of-the-art performance with a
large margin on the tested benchmark, we show RoboFlamingo can be an effective
and competitive alternative to adapt VLMs to robot control. Our extensive
experimental results also reveal several interesting conclusions regarding the
behavior of different pre-trained VLMs on manipulation tasks. We believe
RoboFlamingo has the potential to be a cost-effective and easy-to-use solution
for robotics manipulation, empowering everyone with the ability to fine-tune
their own robotics policy. | Robotics |
What field is the article from? | Title: Brain-Driven Representation Learning Based on Diffusion Model
Abstract: Interpreting EEG signals linked to spoken language presents a complex
challenge, given the data's intricate temporal and spatial attributes, as well
as the various noise factors. Denoising diffusion probabilistic models (DDPMs),
which have recently gained prominence in diverse areas for their capabilities
in representation learning, are explored in our research as a means to address
this issue. Using DDPMs in conjunction with a conditional autoencoder, our new
approach considerably outperforms traditional machine learning algorithms and
established baseline models in accuracy. Our results highlight the potential of
DDPMs as a sophisticated computational method for the analysis of
speech-related EEG signals. This could lead to significant advances in
brain-computer interfaces tailored for spoken communication. | Computational Linguistics |
What field is the article from? | Title: Comparative Knowledge Distillation
Abstract: In the era of large scale pretrained models, Knowledge Distillation (KD)
serves an important role in transferring the wisdom of computationally heavy
teacher models to lightweight, efficient student models while preserving
performance. Traditional KD paradigms, however, assume readily available access
to teacher models for frequent inference -- a notion increasingly at odds with
the realities of costly, often proprietary, large scale models. Addressing this
gap, our paper considers how to minimize the dependency on teacher model
inferences in KD in a setting we term Few Teacher Inference Knowledge
Distillation (FTI KD). We observe that prevalent KD techniques and state of the
art data augmentation strategies fall short in this constrained setting.
Drawing inspiration from educational principles that emphasize learning through
comparison, we propose Comparative Knowledge Distillation (CKD), which
encourages student models to understand the nuanced differences in a teacher
model's interpretations of samples. Critically, CKD provides additional
learning signals to the student without making additional teacher calls. We
also extend the principle of CKD to groups of samples, enabling even more
efficient learning from limited teacher calls. Empirical evaluation across
varied experimental settings indicates that CKD consistently outperforms state
of the art data augmentation and KD techniques. | Machine Learning |
What field is the article from? | Title: A Simple Interpretable Transformer for Fine-Grained Image Classification and Analysis
Abstract: We present a novel usage of Transformers to make image classification
interpretable. Unlike mainstream classifiers that wait until the last
fully-connected layer to incorporate class information to make predictions, we
investigate a proactive approach, asking each class to search for itself in an
image. We realize this idea via a Transformer encoder-decoder inspired by
DEtection TRansformer (DETR). We learn ``class-specific'' queries (one for each
class) as input to the decoder, enabling each class to localize its patterns in
an image via cross-attention. We name our approach INterpretable TRansformer
(INTR), which is fairly easy to implement and exhibits several compelling
properties. We show that INTR intrinsically encourages each class to attend
distinctively; the cross-attention weights thus provide a faithful
interpretation of the prediction. Interestingly, via ``multi-head''
cross-attention, INTR could identify different ``attributes'' of a class,
making it particularly suitable for fine-grained classification and analysis,
which we demonstrate on eight datasets. Our code and pre-trained model are
publicly accessible at https://github.com/Imageomics/INTR. | Computer Vision |
What field is the article from? | Title: A Systematic Review of Deep Graph Neural Networks: Challenges, Classification, Architectures, Applications & Potential Utility in Bioinformatics
Abstract: In recent years, tasks of machine learning ranging from image processing &
audio/video analysis to natural language understanding have been transformed by
deep learning. The data content in all these scenarios are expressed via
Euclidean space. However, a considerable amount of application data is
structured in non-Euclidean space and is expressed as graphs, e.g. dealing with
complicated interactions & object interdependencies. Modelling physical
systems, learning molecular signatures, identifying protein interactions and
predicting diseases involve utilising a model that can adapt from graph data.
Graph neural networks (GNNs), specified as artificial-neural models, employ
message transmission between graph nodes to represent graph dependencies and
are primarily used in the non-Euclidean domain. Variants of GNN like Graph
Recurrent Networks (GRN), Graph Auto Encoder (GAE), Graph Convolution Networks
(GCN), Graph Adversarial Methods & Graph Reinforcement learning have exhibited
breakthrough productivity on a wide range of tasks, especially in the field of
bioinformatics, in recent years as a result of the rapid collection of
biological network data. Apart from presenting all existing GNN models,
mathematical analysis and comparison of the variants of all types of GNN have
been highlighted in this survey. Graph neural networks are investigated for
their potential real-world applications in various fields, focusing on
Bioinformatics. Furthermore, resources for evaluating graph neural network
models and accessing open-source code & benchmark data sets are included.
Ultimately, we provide some (seven) proposals for future research in this
rapidly evolving domain. GNNs have the potential to be an excellent tool for
solving a wide range of biological challenges in bioinformatics research, as
they are best represented as connected complex graphs. | Machine Learning |
What field is the article from? | Title: SC-MIL: Sparsely Coded Multiple Instance Learning for Whole Slide Image Classification
Abstract: Multiple Instance Learning (MIL) has been widely used in weakly supervised
whole slide image (WSI) classification. Typical MIL methods include a feature
embedding part that embeds the instances into features via a pre-trained
feature extractor and the MIL aggregator that combines instance embeddings into
predictions. The current focus has been directed toward improving these parts
by refining the feature embeddings through self-supervised pre-training and
modeling the correlations between instances separately. In this paper, we
proposed a sparsely coded MIL (SC-MIL) that addresses those two aspects at the
same time by leveraging sparse dictionary learning. The sparse dictionary
learning captures the similarities of instances by expressing them as a sparse
linear combination of atoms in an over-complete dictionary. In addition,
imposing sparsity help enhance the instance feature embeddings by suppressing
irrelevant instances while retaining the most relevant ones. To make the
conventional sparse coding algorithm compatible with deep learning, we unrolled
it into an SC module by leveraging deep unrolling. The proposed SC module can
be incorporated into any existing MIL framework in a plug-and-play manner with
an acceptable computation cost. The experimental results on multiple datasets
demonstrated that the proposed SC module could substantially boost the
performance of state-of-the-art MIL methods. The codes are available at
\href{https://github.com/sotiraslab/SCMIL.git}{https://github.com/sotiraslab/SCMIL.git}. | Computer Vision |
What field is the article from? | Title: Hallucination-minimized Data-to-answer Framework for Financial Decision-makers
Abstract: Large Language Models (LLMs) have been applied to build several automation
and personalized question-answering prototypes so far. However, scaling such
prototypes to robust products with minimized hallucinations or fake responses
still remains an open challenge, especially in niche data-table heavy domains
such as financial decision making. In this work, we present a novel
Langchain-based framework that transforms data tables into hierarchical textual
data chunks to enable a wide variety of actionable question answering. First,
the user-queries are classified by intention followed by automated retrieval of
the most relevant data chunks to generate customized LLM prompts per query.
Next, the custom prompts and their responses undergo multi-metric scoring to
assess for hallucinations and response confidence. The proposed system is
optimized with user-query intention classification, advanced prompting, data
scaling capabilities and it achieves over 90% confidence scores for a variety
of user-queries responses ranging from {What, Where, Why, How, predict, trend,
anomalies, exceptions} that are crucial for financial decision making
applications. The proposed data to answers framework can be extended to other
analytical domains such as sales and payroll to ensure optimal hallucination
control guardrails. | Computational Linguistics |
What field is the article from? | Title: No Representation Rules Them All in Category Discovery
Abstract: In this paper we tackle the problem of Generalized Category Discovery (GCD).
Specifically, given a dataset with labelled and unlabelled images, the task is
to cluster all images in the unlabelled subset, whether or not they belong to
the labelled categories. Our first contribution is to recognize that most
existing GCD benchmarks only contain labels for a single clustering of the
data, making it difficult to ascertain whether models are using the available
labels to solve the GCD task, or simply solving an unsupervised clustering
problem. As such, we present a synthetic dataset, named 'Clevr-4', for category
discovery. Clevr-4 contains four equally valid partitions of the data, i.e
based on object shape, texture, color or count. To solve the task, models are
required to extrapolate the taxonomy specified by the labelled set, rather than
simply latching onto a single natural grouping of the data. We use this dataset
to demonstrate the limitations of unsupervised clustering in the GCD setting,
showing that even very strong unsupervised models fail on Clevr-4. We further
use Clevr-4 to examine the weaknesses of existing GCD algorithms, and propose a
new method which addresses these shortcomings, leveraging consistent findings
from the representation learning literature to do so. Our simple solution,
which is based on 'mean teachers' and termed $\mu$GCD, substantially
outperforms implemented baselines on Clevr-4. Finally, when we transfer these
findings to real data on the challenging Semantic Shift Benchmark (SSB), we
find that $\mu$GCD outperforms all prior work, setting a new state-of-the-art.
For the project webpage, see https://www.robots.ox.ac.uk/~vgg/data/clevr4/ | Computer Vision |
What field is the article from? | Title: On the Noise Scheduling for Generating Plausible Designs with Diffusion Models
Abstract: Deep Generative Models (DGMs) are widely used to create innovative designs
across multiple industries, ranging from fashion to the automotive sector. In
addition to generating images of high visual quality, the task of structural
design generation imposes more stringent constrains on the semantic expression,
e.g., no floating material or missing part, which we refer to as plausibility
in this work. We delve into the impact of noise schedules of diffusion models
on the plausibility of the outcome: there exists a range of noise levels at
which the model's performance decides the result plausibility. Also, we propose
two techniques to determine such a range for a given image set and devise a
novel parametric noise schedule for better plausibility. We apply this noise
schedule to the training and sampling of the well-known diffusion model EDM and
compare it to its default noise schedule. Compared to EDM, our schedule
significantly improves the rate of plausible designs from 83.4% to 93.5% and
Fr\'echet Inception Distance (FID) from 7.84 to 4.87. Further applications of
advanced image editing tools demonstrate the model's solid understanding of
structure. | Computer Vision |
What field is the article from? | Title: Continual Learning of Diffusion Models with Generative Distillation
Abstract: Diffusion models are powerful generative models that achieve state-of-the-art
performance in tasks such as image synthesis. However, training them demands
substantial amounts of data and computational resources. Continual learning
would allow for incrementally learning new tasks and accumulating knowledge,
thus reusing already trained models would be possible. One potentially suitable
approach is generative replay, where a copy of a generative model trained on
previous tasks produces synthetic data that are interleaved with data from the
current task. However, standard generative replay applied to diffusion models
results in a catastrophic loss in denoising capabilities. In this paper, we
propose generative distillation, an approach that distils the entire reverse
process of a diffusion model. We demonstrate that our approach significantly
improves the continual learning performance of generative replay with only a
moderate increase in the computational costs. | Machine Learning |
What field is the article from? | Title: IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Generative AI
Abstract: Diffusion-based image generation models, such as Stable Diffusion or DALL-E
2, are able to learn from given images and generate high-quality samples
following the guidance from prompts. For instance, they can be used to create
artistic images that mimic the style of an artist based on his/her original
artworks or to maliciously edit the original images for fake content. However,
such ability also brings serious ethical issues without proper authorization
from the owner of the original images. In response, several attempts have been
made to protect the original images from such unauthorized data usage by adding
imperceptible perturbations, which are designed to mislead the diffusion model
and make it unable to properly generate new samples. In this work, we introduce
a perturbation purification platform, named IMPRESS, to evaluate the
effectiveness of imperceptible perturbations as a protective measure. IMPRESS
is based on the key observation that imperceptible perturbations could lead to
a perceptible inconsistency between the original image and the
diffusion-reconstructed image, which can be used to devise a new optimization
strategy for purifying the image, which may weaken the protection of the
original image from unauthorized data usage (e.g., style mimicking, malicious
editing). The proposed IMPRESS platform offers a comprehensive evaluation of
several contemporary protection methods, and can be used as an evaluation
platform for future protection methods. | Computer Vision |
What field is the article from? | Title: Analyzing Vision Transformers for Image Classification in Class Embedding Space
Abstract: Despite the growing use of transformer models in computer vision, a
mechanistic understanding of these networks is still needed. This work
introduces a method to reverse-engineer Vision Transformers trained to solve
image classification tasks. Inspired by previous research in NLP, we
demonstrate how the inner representations at any level of the hierarchy can be
projected onto the learned class embedding space to uncover how these networks
build categorical representations for their predictions. We use our framework
to show how image tokens develop class-specific representations that depend on
attention mechanisms and contextual information, and give insights on how
self-attention and MLP layers differentially contribute to this categorical
composition. We additionally demonstrate that this method (1) can be used to
determine the parts of an image that would be important for detecting the class
of interest, and (2) exhibits significant advantages over traditional linear
probing approaches. Taken together, our results position our proposed framework
as a powerful tool for mechanistic interpretability and explainability
research. | Computer Vision |
What field is the article from? | Title: Caring Trouble and Musical AI: Considerations towards a Feminist Musical AI
Abstract: The ethics of AI as both material and medium for interaction remains in murky
waters within the context of musical and artistic practice. The
interdisciplinarity of the field is revealing matters of concern and care,
which necessitate interdisciplinary methodologies for evaluation to trouble and
critique the inheritance of "residue-laden" AI-tools in musical applications.
Seeking to unsettle these murky waters, this paper critically examines the
example of Holly+, a deep neural network that generates raw audio in the
likeness of its creator Holly Herndon. Drawing from theoretical concerns and
considerations from speculative feminism and care ethics, we care-fully trouble
the structures, frameworks and assumptions that oscillate within and around
Holly+. We contribute with several considerations and contemplate future
directions for integrating speculative feminism and care into musical-AI agent
and system design, derived from our critical feminist examination. | Human-Computer Interaction |
What field is the article from? | Title: DesignGPT: Multi-Agent Collaboration in Design
Abstract: Generative AI faces many challenges when entering the product design
workflow, such as interface usability and interaction patterns. Therefore,
based on design thinking and design process, we developed the DesignGPT
multi-agent collaboration framework, which uses artificial intelligence agents
to simulate the roles of different positions in the design company and allows
human designers to collaborate with them in natural language. Experimental
results show that compared with separate AI tools, DesignGPT improves the
performance of designers, highlighting the potential of applying multi-agent
systems that integrate design domain knowledge to product scheme design. | Artificial Intelligence |
What field is the article from? | Title: Not All Data Matters: An End-to-End Adaptive Dataset Pruning Framework for Enhancing Model Performance and Efficiency
Abstract: While deep neural networks have demonstrated remarkable performance across
various tasks, they typically require massive training data. Due to the
presence of redundancies and biases in real-world datasets, not all data in the
training dataset contributes to the model performance. To address this issue,
dataset pruning techniques have been introduced to enhance model performance
and efficiency by eliminating redundant training samples and reducing
computational and memory overhead. However, previous works most rely on
manually crafted scalar scores, limiting their practical performance and
scalability across diverse deep networks and datasets. In this paper, we
propose AdaPruner, an end-to-end Adaptive DAtaset PRUNing framEwoRk. AdaPruner
can perform effective dataset pruning without the need for explicitly defined
metrics. Our framework jointly prunes training data and fine-tunes models with
task-specific optimization objectives. AdaPruner leverages (1) An adaptive
dataset pruning (ADP) module, which iteratively prunes redundant samples to an
expected pruning ratio; and (2) A pruning performance controller (PPC) module,
which optimizes the model performance for accurate pruning. Therefore,
AdaPruner exhibits high scalability and compatibility across various datasets
and deep networks, yielding improved dataset distribution and enhanced model
performance. AdaPruner can still significantly enhance model performance even
after pruning up to 10-30\% of the training data. Notably, these improvements
are accompanied by substantial savings in memory and computation costs.
Qualitative and quantitative experiments suggest that AdaPruner outperforms
other state-of-the-art dataset pruning methods by a large margin. | Artificial Intelligence |
What field is the article from? | Title: Structural Information Guided Multimodal Pre-training for Vehicle-centric Perception
Abstract: Understanding vehicles in images is important for various applications such
as intelligent transportation and self-driving system. Existing vehicle-centric
works typically pre-train models on large-scale classification datasets and
then fine-tune them for specific downstream tasks. However, they neglect the
specific characteristics of vehicle perception in different tasks and might
thus lead to sub-optimal performance. To address this issue, we propose a novel
vehicle-centric pre-training framework called VehicleMAE, which incorporates
the structural information including the spatial structure from vehicle profile
information and the semantic structure from informative high-level natural
language descriptions for effective masked vehicle appearance reconstruction.
To be specific, we explicitly extract the sketch lines of vehicles as a form of
the spatial structure to guide vehicle reconstruction. The more comprehensive
knowledge distilled from the CLIP big model based on the similarity between the
paired/unpaired vehicle image-text sample is further taken into consideration
to help achieve a better understanding of vehicles. A large-scale dataset is
built to pre-train our model, termed Autobot1M, which contains about 1M vehicle
images and 12693 text information. Extensive experiments on four vehicle-based
downstream tasks fully validated the effectiveness of our VehicleMAE. The
source code and pre-trained models will be released at
https://github.com/Event-AHU/VehicleMAE. | Computer Vision |
What field is the article from? | Title: Revisiting the Domain Shift and Sample Uncertainty in Multi-source Active Domain Transfer
Abstract: Active Domain Adaptation (ADA) aims to maximally boost model adaptation in a
new target domain by actively selecting a limited number of target data to
annotate.This setting neglects the more practical scenario where training data
are collected from multiple sources. This motivates us to target a new and
challenging setting of knowledge transfer that extends ADA from a single source
domain to multiple source domains, termed Multi-source Active Domain Adaptation
(MADA). Not surprisingly, we find that most traditional ADA methods cannot work
directly in such a setting, mainly due to the excessive domain gap introduced
by all the source domains and thus their uncertainty-aware sample selection can
easily become miscalibrated under the multi-domain shifts. Considering this, we
propose a Dynamic integrated uncertainty valuation framework(Detective) that
comprehensively consider the domain shift between multi-source domains and
target domain to detect the informative target samples. Specifically, the
leverages a dynamic Domain Adaptation(DA) model that learns how to adapt the
model's parameters to fit the union of multi-source domains. This enables an
approximate single-source domain modeling by the dynamic model. We then
comprehensively measure both domain uncertainty and predictive uncertainty in
the target domain to detect informative target samples using evidential deep
learning, thereby mitigating uncertainty miscalibration. Furthermore, we
introduce a contextual diversity-aware calculator to enhance the diversity of
the selected samples. Experiments demonstrate that our solution outperforms
existing methods by a considerable margin on three domain adaptation
benchmarks. | Artificial Intelligence |
What field is the article from? | Title: QMGeo: Differentially Private Federated Learning via Stochastic Quantization with Mixed Truncated Geometric Distribution
Abstract: Federated learning (FL) is a framework which allows multiple users to jointly
train a global machine learning (ML) model by transmitting only model updates
under the coordination of a parameter server, while being able to keep their
datasets local. One key motivation of such distributed frameworks is to provide
privacy guarantees to the users. However, preserving the users' datasets
locally is shown to be not sufficient for privacy. Several differential privacy
(DP) mechanisms have been proposed to provide provable privacy guarantees by
introducing randomness into the framework, and majority of these mechanisms
rely on injecting additive noise. FL frameworks also face the challenge of
communication efficiency, especially as machine learning models grow in
complexity and size. Quantization is a commonly utilized method, reducing the
communication cost by transmitting compressed representation of the underlying
information. Although there have been several studies on DP and quantization in
FL, the potential contribution of the quantization method alone in providing
privacy guarantees has not been extensively analyzed yet. We in this paper
present a novel stochastic quantization method, utilizing a mixed geometric
distribution to introduce the randomness needed to provide DP, without any
additive noise. We provide convergence analysis for our framework and
empirically study its performance. | Machine Learning |
What field is the article from? | Title: Digital Twin Framework for Optimal and Autonomous Decision-Making in Cyber-Physical Systems: Enhancing Reliability and Adaptability in the Oil and Gas Industry
Abstract: The concept of creating a virtual copy of a complete Cyber-Physical System
opens up numerous possibilities, including real-time assessments of the
physical environment and continuous learning from the system to provide
reliable and precise information. This process, known as the twinning process
or the development of a digital twin (DT), has been widely adopted across
various industries. However, challenges arise when considering the
computational demands of implementing AI models, such as those employed in
digital twins, in real-time information exchange scenarios. This work proposes
a digital twin framework for optimal and autonomous decision-making applied to
a gas-lift process in the oil and gas industry, focusing on enhancing the
robustness and adaptability of the DT. The framework combines Bayesian
inference, Monte Carlo simulations, transfer learning, online learning, and
novel strategies to confer cognition to the DT, including model
hyperdimensional reduction and cognitive tack. Consequently, creating a
framework for efficient, reliable, and trustworthy DT identification was
possible. The proposed approach addresses the current gap in the literature
regarding integrating various learning techniques and uncertainty management in
digital twin strategies. This digital twin framework aims to provide a reliable
and efficient system capable of adapting to changing environments and
incorporating prediction uncertainty, thus enhancing the overall
decision-making process in complex, real-world scenarios. Additionally, this
work lays the foundation for further developments in digital twins for process
systems engineering, potentially fostering new advancements and applications
across various industrial sectors. | Artificial Intelligence |
What field is the article from? | Title: AbsPyramid: Benchmarking the Abstraction Ability of Language Models with a Unified Entailment Graph
Abstract: Cognitive research indicates that abstraction ability is essential in human
intelligence, which remains under-explored in language models. In this paper,
we present AbsPyramid, a unified entailment graph of 221K textual descriptions
of abstraction knowledge. While existing resources only touch nouns or verbs
within simplified events or specific domains, AbsPyramid collects abstract
knowledge for three components of diverse events to comprehensively evaluate
the abstraction ability of language models in the open domain. Experimental
results demonstrate that current LLMs face challenges comprehending abstraction
knowledge in zero-shot and few-shot settings. By training on our rich
abstraction knowledge, we find LLMs can acquire basic abstraction abilities and
generalize to unseen events. In the meantime, we empirically show that our
benchmark is comprehensive to enhance LLMs across two previous abstraction
tasks. | Computational Linguistics |
What field is the article from? | Title: Gaussian Mixture Solvers for Diffusion Models
Abstract: Recently, diffusion models have achieved great success in generative tasks.
Sampling from diffusion models is equivalent to solving the reverse diffusion
stochastic differential equations (SDEs) or the corresponding probability flow
ordinary differential equations (ODEs). In comparison, SDE-based solvers can
generate samples of higher quality and are suited for image translation tasks
like stroke-based synthesis. During inference, however, existing SDE-based
solvers are severely constrained by the efficiency-effectiveness dilemma. Our
investigation suggests that this is because the Gaussian assumption in the
reverse transition kernel is frequently violated (even in the case of simple
mixture data) given a limited number of discretization steps. To overcome this
limitation, we introduce a novel class of SDE-based solvers called
\emph{Gaussian Mixture Solvers (GMS)} for diffusion models. Our solver
estimates the first three-order moments and optimizes the parameters of a
Gaussian mixture transition kernel using generalized methods of moments in each
step during sampling. Empirically, our solver outperforms numerous SDE-based
solvers in terms of sample quality in image generation and stroke-based
synthesis in various diffusion models, which validates the motivation and
effectiveness of GMS. Our code is available at
https://github.com/Guohanzhong/GMS. | Machine Learning |
What field is the article from? | Title: Can Large Language Models Follow Concept Annotation Guidelines? A Case Study on Scientific and Financial Domains
Abstract: Although large language models (LLMs) exhibit remarkable capacity to leverage
in-context demonstrations, it is still unclear to what extent they can learn
new concepts or facts from ground-truth labels. To address this question, we
examine the capacity of instruction-tuned LLMs to follow in-context concept
guidelines for sentence labeling tasks. We design guidelines that present
different types of factual and counterfactual concept definitions, which are
used as prompts for zero-shot sentence classification tasks. Our results show
that although concept definitions consistently help in task performance, only
the larger models (with 70B parameters or more) have limited ability to work
under counterfactual contexts. Importantly, only proprietary models such as
GPT-3.5 and GPT-4 can recognize nonsensical guidelines, which we hypothesize is
due to more sophisticated alignment methods. Finally, we find that
Falcon-180B-chat is outperformed by Llama-2-70B-chat is most cases, which
indicates that careful fine-tuning is more effective than increasing model
scale. Altogether, our simple evaluation method reveals significant gaps in
concept understanding between the most capable open-source language models and
the leading proprietary APIs. | Computational Linguistics |
What field is the article from? | Title: MechAgents: Large language model multi-agent collaborations can solve mechanics problems, generate new data, and integrate knowledge
Abstract: Solving mechanics problems using numerical methods requires comprehensive
intelligent capability of retrieving relevant knowledge and theory,
constructing and executing codes, analyzing the results, a task that has thus
far mainly been reserved for humans. While emerging AI methods can provide
effective approaches to solve end-to-end problems, for instance via the use of
deep surrogate models or various data analytics strategies, they often lack
physical intuition since knowledge is baked into the parametric complement
through training, offering less flexibility when it comes to incorporating
mathematical or physical insights. By leveraging diverse capabilities of
multiple dynamically interacting large language models (LLMs), we can overcome
the limitations of conventional approaches and develop a new class of
physics-inspired generative machine learning platform, here referred to as
MechAgents. A set of AI agents can solve mechanics tasks, here demonstrated for
elasticity problems, via autonomous collaborations. A two-agent team can
effectively write, execute and self-correct code, in order to apply finite
element methods to solve classical elasticity problems in various flavors
(different boundary conditions, domain geometries, meshes, small/finite
deformation and linear/hyper-elastic constitutive laws, and others). For more
complex tasks, we construct a larger group of agents with enhanced division of
labor among planning, formulating, coding, executing and criticizing the
process and results. The agents mutually correct each other to improve the
overall team-work performance in understanding, formulating and validating the
solution. Our framework shows the potential of synergizing the intelligence of
language models, the reliability of physics-based modeling, and the dynamic
collaborations among diverse agents, opening novel avenues for automation of
solving engineering problems. | Artificial Intelligence |
What field is the article from? | Title: Smart Traffic Management of Vehicles using Faster R-CNN based Deep Learning Method
Abstract: With constant growth of civilization and modernization of cities all across
the world since past few centuries smart traffic management of vehicles is one
of the most sorted after problem by research community. It is a challenging
problem in computer vision and artificial intelligence domain. Smart traffic
management basically involves segmentation of vehicles, estimation of traffic
density and tracking of vehicles. The vehicle segmentation from traffic videos
helps realization of niche applications such as monitoring of speed and
estimation of traffic. When occlusions, background with clutters and traffic
with density variations are present, this problem becomes more intractable in
nature. Keeping this motivation in this research work, we investigate Faster
R-CNN based deep learning method towards segmentation of vehicles. This problem
is addressed in four steps viz minimization with adaptive background model,
Faster R-CNN based subnet operation, Faster R-CNN initial refinement and result
optimization with extended topological active nets. The computational framework
uses ideas of adaptive background modeling. It also addresses shadow and
illumination related issues. Higher segmentation accuracy is achieved through
topological active net deformable models. The topological and extended
topological active nets help to achieve stated deformations. Mesh deformation
is achieved with minimization of energy. The segmentation accuracy is improved
with modified version of extended topological active net. The experimental
results demonstrate superiority of this computational framework | Computer Vision |
What field is the article from? | Title: ViP-Mixer: A Convolutional Mixer for Video Prediction
Abstract: Video prediction aims to predict future frames from a video's previous
content. Existing methods mainly process video data where the time dimension
mingles with the space and channel dimensions from three distinct angles: as a
sequence of individual frames, as a 3D volume in spatiotemporal coordinates, or
as a stacked image where frames are treated as separate channels. Most of them
generally focus on one of these perspectives and may fail to fully exploit the
relationships across different dimensions. To address this issue, this paper
introduces a convolutional mixer for video prediction, termed ViP-Mixer, to
model the spatiotemporal evolution in the latent space of an autoencoder. The
ViP-Mixers are stacked sequentially and interleave feature mixing at three
levels: frames, channels, and locations. Extensive experiments demonstrate that
our proposed method achieves new state-of-the-art prediction performance on
three benchmark video datasets covering both synthetic and real-world
scenarios. | Computer Vision |
What field is the article from? | Title: Innovative Horizons in Aerial Imagery: LSKNet Meets DiffusionDet for Advanced Object Detection
Abstract: In the realm of aerial image analysis, object detection plays a pivotal role,
with significant implications for areas such as remote sensing, urban planning,
and disaster management. This study addresses the inherent challenges in this
domain, notably the detection of small objects, managing densely packed
elements, and accounting for diverse orientations. We present an in-depth
evaluation of an object detection model that integrates the Large Selective
Kernel Network (LSKNet)as its backbone with the DiffusionDet head, utilizing
the iSAID dataset for empirical analysis. Our approach encompasses the
introduction of novel methodologies and extensive ablation studies. These
studies critically assess various aspects such as loss functions, box
regression techniques, and classification strategies to refine the model's
precision in object detection. The paper details the experimental application
of the LSKNet backbone in synergy with the DiffusionDet heads, a combination
tailored to meet the specific challenges in aerial image object detection. The
findings of this research indicate a substantial enhancement in the model's
performance, especially in the accuracy-time tradeoff. The proposed model
achieves a mean average precision (MAP) of approximately 45.7%, which is a
significant improvement, outperforming the RCNN model by 4.7% on the same
dataset. This advancement underscores the effectiveness of the proposed
modifications and sets a new benchmark in aerial image analysis, paving the way
for more accurate and efficient object detection methodologies. The code is
publicly available at https://github.com/SashaMatsun/LSKDiffDet | Computer Vision |
What field is the article from? | Title: Navigating Complex Search Tasks with AI Copilots
Abstract: As many of us in the information retrieval (IR) research community know and
appreciate, search is far from being a solved problem. Millions of people
struggle with tasks on search engines every day. Often, their struggles relate
to the intrinsic complexity of their task and the failure of search systems to
fully understand the task and serve relevant results. The task motivates the
search, creating the gap/problematic situation that searchers attempt to
bridge/resolve and drives search behavior as they work through different task
facets. Complex search tasks require more than support for rudimentary fact
finding or re-finding. Research on methods to support complex tasks includes
work on generating query and website suggestions, personalizing and
contextualizing search, and developing new search experiences, including those
that span time and space. The recent emergence of generative artificial
intelligence (AI) and the arrival of assistive agents, or copilots, based on
this technology, has the potential to offer further assistance to searchers,
especially those engaged in complex tasks. There are profound implications from
these advances for the design of intelligent systems and for the future of
search itself. This article, based on a keynote by the author at the 2023 ACM
SIGIR Conference, explores these issues and charts a course toward new horizons
in information access guided by AI copilots. | Information Retrieval |
What field is the article from? | Title: Constrained Meta-Reinforcement Learning for Adaptable Safety Guarantee with Differentiable Convex Programming
Abstract: Despite remarkable achievements in artificial intelligence, the deployability
of learning-enabled systems in high-stakes real-world environments still faces
persistent challenges. For example, in safety-critical domains like autonomous
driving, robotic manipulation, and healthcare, it is crucial not only to
achieve high performance but also to comply with given constraints.
Furthermore, adaptability becomes paramount in non-stationary domains, where
environmental parameters are subject to change. While safety and adaptability
are recognized as key qualities for the new generation of AI, current
approaches have not demonstrated effective adaptable performance in constrained
settings. Hence, this paper breaks new ground by studying the unique challenges
of ensuring safety in non-stationary environments by solving constrained
problems through the lens of the meta-learning approach (learning-to-learn).
While unconstrained meta-learning al-ready encounters complexities in
end-to-end differentiation of the loss due to the bi-level nature, its
constrained counterpart introduces an additional layer of difficulty, since the
constraints imposed on task-level updates complicate the differentiation
process. To address the issue, we first employ successive convex-constrained
policy updates across multiple tasks with differentiable convexprogramming,
which allows meta-learning in constrained scenarios by enabling end-to-end
differentiation. This approach empowers the agent to rapidly adapt to new tasks
under non-stationarity while ensuring compliance with safety constraints. | Artificial Intelligence |
What field is the article from? | Title: Scene-Driven Multimodal Knowledge Graph Construction for Embodied AI
Abstract: Embodied AI is one of the most popular studies in artificial intelligence and
robotics, which can effectively improve the intelligence of real-world agents
(i.e. robots) serving human beings. Scene knowledge is important for an agent
to understand the surroundings and make correct decisions in the varied open
world. Currently, knowledge base for embodied tasks is missing and most
existing work use general knowledge base or pre-trained models to enhance the
intelligence of an agent. For conventional knowledge base, it is sparse,
insufficient in capacity and cost in data collection. For pre-trained models,
they face the uncertainty of knowledge and hard maintenance. To overcome the
challenges of scene knowledge, we propose a scene-driven multimodal knowledge
graph (Scene-MMKG) construction method combining conventional knowledge
engineering and large language models. A unified scene knowledge injection
framework is introduced for knowledge representation. To evaluate the
advantages of our proposed method, we instantiate Scene-MMKG considering
typical indoor robotic functionalities (Manipulation and Mobility), named
ManipMob-MMKG. Comparisons in characteristics indicate our instantiated
ManipMob-MMKG has broad superiority in data-collection efficiency and knowledge
quality. Experimental results on typical embodied tasks show that
knowledge-enhanced methods using our instantiated ManipMob-MMKG can improve the
performance obviously without re-designing model structures complexly. Our
project can be found at https://sites.google.com/view/manipmob-mmkg | Artificial Intelligence |
What field is the article from? | Title: Leveraging LLMs for Synthesizing Training Data Across Many Languages in Multilingual Dense Retrieval
Abstract: Dense retrieval models have predominantly been studied for English, where
models have shown great success, due to the availability of human-labeled
training pairs. However, there has been limited success for multilingual
retrieval so far, as training data is uneven or scarcely available across
multiple languages. Synthetic training data generation is promising (e.g.,
InPars or Promptagator), but has been investigated only for English. Therefore,
to study model capabilities across both cross-lingual and monolingual retrieval
tasks, we develop SWIM-IR, a synthetic retrieval training dataset containing 33
(high to very-low resource) languages for training multilingual dense retrieval
models without requiring any human supervision. To construct SWIM-IR, we
propose SAP (summarize-then-ask prompting), where the large language model
(LLM) generates a textual summary prior to the query generation step. SAP
assists the LLM in generating informative queries in the target language. Using
SWIM-IR, we explore synthetic fine-tuning of multilingual dense retrieval
models and evaluate them robustly on three retrieval benchmarks: XOR-Retrieve
(cross-lingual), XTREME-UP (cross-lingual) and MIRACL (monolingual). Our
models, called SWIM-X, are competitive with human-supervised dense retrieval
models, e.g., mContriever, finding that SWIM-IR can cheaply substitute for
expensive human-labeled retrieval training data. | Information Retrieval |
What field is the article from? | Title: Temporal Knowledge Question Answering via Abstract Reasoning Induction
Abstract: In this paper, we tackle the significant challenge of temporal knowledge
reasoning in Large Language Models (LLMs), an area where such models frequently
encounter difficulties. These difficulties often result in the generation of
misleading or incorrect information, primarily due to their limited capacity to
process evolving factual knowledge and complex temporal logic. In response, we
propose a novel, constructivism-based approach that advocates for a paradigm
shift in LLM learning towards an active, ongoing process of knowledge synthesis
and customization. At the heart of our proposal is the Abstract Reasoning
Induction ARI framework, which divides temporal reasoning into two distinct
phases: Knowledge-agnostic and Knowledge-based. This division aims to reduce
instances of hallucinations and improve LLMs' capacity for integrating abstract
methodologies derived from historical data. Our approach achieves remarkable
improvements, with relative gains of 29.7\% and 9.27\% on two temporal QA
datasets, underscoring its efficacy in advancing temporal reasoning in LLMs.
The code will be released at https://github.com/czy1999/ARI. | Computational Linguistics |
What field is the article from? | Title: Multimodal Clinical Benchmark for Emergency Care (MC-BEC): A Comprehensive Benchmark for Evaluating Foundation Models in Emergency Medicine
Abstract: We propose the Multimodal Clinical Benchmark for Emergency Care (MC-BEC), a
comprehensive benchmark for evaluating foundation models in Emergency Medicine
using a dataset of 100K+ continuously monitored Emergency Department visits
from 2020-2022. MC-BEC focuses on clinically relevant prediction tasks at
timescales from minutes to days, including predicting patient decompensation,
disposition, and emergency department (ED) revisit, and includes a standardized
evaluation framework with train-test splits and evaluation metrics. The
multimodal dataset includes a wide range of detailed clinical data, including
triage information, prior diagnoses and medications, continuously measured
vital signs, electrocardiogram and photoplethysmograph waveforms, orders placed
and medications administered throughout the visit, free-text reports of imaging
studies, and information on ED diagnosis, disposition, and subsequent revisits.
We provide performance baselines for each prediction task to enable the
evaluation of multimodal, multitask models. We believe that MC-BEC will
encourage researchers to develop more effective, generalizable, and accessible
foundation models for multimodal clinical data. | Machine Learning |
What field is the article from? | Title: Cross-modal Contrastive Learning with Asymmetric Co-attention Network for Video Moment Retrieval
Abstract: Video moment retrieval is a challenging task requiring fine-grained
interactions between video and text modalities. Recent work in image-text
pretraining has demonstrated that most existing pretrained models suffer from
information asymmetry due to the difference in length between visual and
textual sequences. We question whether the same problem also exists in the
video-text domain with an auxiliary need to preserve both spatial and temporal
information. Thus, we evaluate a recently proposed solution involving the
addition of an asymmetric co-attention network for video grounding tasks.
Additionally, we incorporate momentum contrastive loss for robust,
discriminative representation learning in both modalities. We note that the
integration of these supplementary modules yields better performance compared
to state-of-the-art models on the TACoS dataset and comparable results on
ActivityNet Captions, all while utilizing significantly fewer parameters with
respect to baseline. | Computer Vision |
What field is the article from? | Title: Training Multi-layer Neural Networks on Ising Machine
Abstract: As a dedicated quantum device, Ising machines could solve large-scale binary
optimization problems in milliseconds. There is emerging interest in utilizing
Ising machines to train feedforward neural networks due to the prosperity of
generative artificial intelligence. However, existing methods can only train
single-layer feedforward networks because of the complex nonlinear network
topology. This paper proposes an Ising learning algorithm to train quantized
neural network (QNN), by incorporating two essential techinques, namely binary
representation of topological network and order reduction of loss function. As
far as we know, this is the first algorithm to train multi-layer feedforward
networks on Ising machines, providing an alternative to gradient-based
backpropagation. Firstly, training QNN is formulated as a quadratic constrained
binary optimization (QCBO) problem by representing neuron connection and
activation function as equality constraints. All quantized variables are
encoded by binary bits based on binary encoding protocol. Secondly, QCBO is
converted to a quadratic unconstrained binary optimization (QUBO) problem, that
can be efficiently solved on Ising machines. The conversion leverages both
penalty function and Rosenberg order reduction, who together eliminate equality
constraints and reduce high-order loss function into a quadratic one. With some
assumptions, theoretical analysis shows the space complexity of our algorithm
is $\mathcal{O}(H^2L + HLN\log H)$, quantifying the required number of Ising
spins. Finally, the algorithm effectiveness is validated with a simulated Ising
machine on MNIST dataset. After annealing 700 ms, the classification accuracy
achieves 98.3%. Among 100 runs, the success probability of finding the optimal
solution is 72%. Along with the increasing number of spins on Ising machine,
our algorithm has the potential to train deeper neural networks. | Machine Learning |
What field is the article from? | Title: A Survey on Vulnerability of Federated Learning: A Learning Algorithm Perspective
Abstract: This review paper takes a comprehensive look at malicious attacks against FL,
categorizing them from new perspectives on attack origins and targets, and
providing insights into their methodology and impact. In this survey, we focus
on threat models targeting the learning process of FL systems. Based on the
source and target of the attack, we categorize existing threat models into four
types, Data to Model (D2M), Model to Data (M2D), Model to Model (M2M) and
composite attacks. For each attack type, we discuss the defense strategies
proposed, highlighting their effectiveness, assumptions and potential areas for
improvement. Defense strategies have evolved from using a singular metric to
excluding malicious clients, to employing a multifaceted approach examining
client models at various phases. In this survey paper, our research indicates
that the to-learn data, the learning gradients, and the learned model at
different stages all can be manipulated to initiate malicious attacks that
range from undermining model performance, reconstructing private local data,
and to inserting backdoors. We have also seen these threat are becoming more
insidious. While earlier studies typically amplified malicious gradients,
recent endeavors subtly alter the least significant weights in local models to
bypass defense measures. This literature review provides a holistic
understanding of the current FL threat landscape and highlights the importance
of developing robust, efficient, and privacy-preserving defenses to ensure the
safe and trusted adoption of FL in real-world applications. | Machine Learning |
What field is the article from? | Title: Developing Linguistic Patterns to Mitigate Inherent Human Bias in Offensive Language Detection
Abstract: With the proliferation of social media, there has been a sharp increase in
offensive content, particularly targeting vulnerable groups, exacerbating
social problems such as hatred, racism, and sexism. Detecting offensive
language use is crucial to prevent offensive language from being widely shared
on social media. However, the accurate detection of irony, implication, and
various forms of hate speech on social media remains a challenge. Natural
language-based deep learning models require extensive training with large,
comprehensive, and labeled datasets. Unfortunately, manually creating such
datasets is both costly and error-prone. Additionally, the presence of
human-bias in offensive language datasets is a major concern for deep learning
models. In this paper, we propose a linguistic data augmentation approach to
reduce bias in labeling processes, which aims to mitigate the influence of
human bias by leveraging the power of machines to improve the accuracy and
fairness of labeling processes. This approach has the potential to improve
offensive language classification tasks across multiple languages and reduce
the prevalence of offensive content on social media. | Computational Linguistics |
What field is the article from? | Title: Breaking the Entanglement of Homophily and Heterophily in Semi-supervised Node Classification
Abstract: Recently, graph neural networks (GNNs) have shown prominent performance in
semi-supervised node classification by leveraging knowledge from the graph
database. However, most existing GNNs follow the homophily assumption, where
connected nodes are more likely to exhibit similar feature distributions and
the same labels, and such an assumption has proven to be vulnerable in a
growing number of practical applications. As a supplement, heterophily reflects
dissimilarity in connected nodes, which has gained significant attention in
graph learning. To this end, data engineers aim to develop a powerful GNN model
that can ensure performance under both homophily and heterophily. Despite
numerous attempts, most existing GNNs struggle to achieve optimal node
representations due to the constraints of undirected graphs. The neglect of
directed edges results in sub-optimal graph representations, thereby hindering
the capacity of GNNs. To address this issue, we introduce AMUD, which
quantifies the relationship between node profiles and topology from a
statistical perspective, offering valuable insights for \underline{A}daptively
\underline{M}odeling the natural directed graphs as the \underline{U}ndirected
or \underline{D}irected graph to maximize the benefits from subsequent graph
learning. Furthermore, we propose \underline{A}daptive \underline{D}irected
\underline{P}attern \underline{A}ggregation (ADPA) as a new directed graph
learning paradigm for AMUD. Empirical studies have demonstrated that AMUD
guides efficient graph learning. Meanwhile, extensive experiments on 14
benchmark datasets substantiate the impressive performance of ADPA,
outperforming baselines by significant margins of 3.96\%. | Machine Learning |
What field is the article from? | Title: Geometry-Calibrated DRO: Combating Over-Pessimism with Free Energy Implications
Abstract: Machine learning algorithms minimizing average risk are susceptible to
distributional shifts. Distributionally Robust Optimization (DRO) addresses
this issue by optimizing the worst-case risk within an uncertainty set.
However, DRO suffers from over-pessimism, leading to low-confidence
predictions, poor parameter estimations as well as poor generalization. In this
work, we conduct a theoretical analysis of a probable root cause of
over-pessimism: excessive focus on noisy samples. To alleviate the impact of
noise, we incorporate data geometry into calibration terms in DRO, resulting in
our novel Geometry-Calibrated DRO (GCDRO) for regression. We establish the
connection between our risk objective and the Helmholtz free energy in
statistical physics, and this free-energy-based risk can extend to standard DRO
methods. Leveraging gradient flow in Wasserstein space, we develop an
approximate minimax optimization algorithm with a bounded error ratio and
elucidate how our approach mitigates noisy sample effects. Comprehensive
experiments confirm GCDRO's superiority over conventional DRO methods. | Machine Learning |
What field is the article from? | Title: Contrastive Multi-view Subspace Clustering of Hyperspectral Images based on Graph Convolutional Networks
Abstract: High-dimensional and complex spectral structures make the clustering of
hyperspectral images (HSI) a challenging task. Subspace clustering is an
effective approach for addressing this problem. However, current subspace
clustering algorithms are primarily designed for a single view and do not fully
exploit the spatial or textural feature information in HSI. In this study,
contrastive multi-view subspace clustering of HSI was proposed based on graph
convolutional networks. Pixel neighbor textural and spatial-spectral
information were sent to construct two graph convolutional subspaces to learn
their affinity matrices. To maximize the interaction between different views, a
contrastive learning algorithm was introduced to promote the consistency of
positive samples and assist the model in extracting robust features. An
attention-based fusion module was used to adaptively integrate these affinity
matrices, constructing a more discriminative affinity matrix. The model was
evaluated using four popular HSI datasets: Indian Pines, Pavia University,
Houston, and Xu Zhou. It achieved overall accuracies of 97.61%, 96.69%, 87.21%,
and 97.65%, respectively, and significantly outperformed state-of-the-art
clustering methods. In conclusion, the proposed model effectively improves the
clustering accuracy of HSI. | Computer Vision |
What field is the article from? | Title: Virtual Fusion with Contrastive Learning for Single Sensor-based Activity Recognition
Abstract: Various types of sensors can be used for Human Activity Recognition (HAR),
and each of them has different strengths and weaknesses. Sometimes a single
sensor cannot fully observe the user's motions from its perspective, which
causes wrong predictions. While sensor fusion provides more information for
HAR, it comes with many inherent drawbacks like user privacy and acceptance,
costly set-up, operation, and maintenance. To deal with this problem, we
propose Virtual Fusion - a new method that takes advantage of unlabeled data
from multiple time-synchronized sensors during training, but only needs one
sensor for inference. Contrastive learning is adopted to exploit the
correlation among sensors. Virtual Fusion gives significantly better accuracy
than training with the same single sensor, and in some cases, it even surpasses
actual fusion using multiple sensors at test time. We also extend this method
to a more general version called Actual Fusion within Virtual Fusion (AFVF),
which uses a subset of training sensors during inference. Our method achieves
state-of-the-art accuracy and F1-score on UCI-HAR and PAMAP2 benchmark
datasets. Implementation is available upon request. | Machine Learning |
What field is the article from? | Title: Adaptive Proximal Policy Optimization with Upper Confidence Bound
Abstract: Trust Region Policy Optimization (TRPO) attractively optimizes the policy
while constraining the update of the new policy within a trust region, ensuring
the stability and monotonic optimization. Building on the theoretical
guarantees of trust region optimization, Proximal Policy Optimization (PPO)
successfully enhances the algorithm's sample efficiency and reduces deployment
complexity by confining the update of the new and old policies within a
surrogate trust region. However, this approach is limited by the fixed setting
of surrogate trust region and is not sufficiently adaptive, because there is no
theoretical proof that the optimal clipping bound remains consistent throughout
the entire training process, truncating the ratio of the new and old policies
within surrogate trust region can ensure that the algorithm achieves its best
performance, therefore, exploring and researching a dynamic clip bound for
improving PPO's performance can be quite beneficial. To design an adaptive
clipped trust region and explore the dynamic clip bound's impact on the
performance of PPO, we introduce an adaptive PPO-CLIP (Adaptive-PPO) method
that dynamically explores and exploits the clip bound using a bandit during the
online training process. Furthermore, ample experiments will initially
demonstrate that our Adaptive-PPO exhibits sample efficiency and performance
compared to PPO-CLIP. | Machine Learning |
What field is the article from? | Title: Chatbots as social companions: How people perceive consciousness, human likeness, and social health benefits in machines
Abstract: As artificial intelligence (AI) becomes more widespread, one question that
arises is how human-AI interaction might impact human-human interaction.
Chatbots, for example, are increasingly used as social companions, but little
is known about how their use impacts human relationships. A common hypothesis
is that these companion bots are detrimental to social health by harming or
replacing human interaction. To understand how companion bots impact social
health, we studied people who used companion bots and people who did not.
Contrary to expectations, companion bot users indicated that these
relationships were beneficial to their social health, whereas nonusers viewed
them as harmful. Another common assumption is that people perceive conscious,
humanlike AI as disturbing and threatening. Among both users and nonusers,
however, we found the opposite: perceiving companion bots as more conscious and
humanlike correlated with more positive opinions and better social health
benefits. Humanlike bots may aid social health by supplying reliable and safe
interactions, without necessarily harming human relationships. | Human-Computer Interaction |
What field is the article from? | Title: Efficient Data Fusion using the Tsetlin Machine
Abstract: We propose a novel way of assessing and fusing noisy dynamic data using a
Tsetlin Machine. Our approach consists in monitoring how explanations in form
of logical clauses that a TM learns changes with possible noise in dynamic
data. This way TM can recognize the noise by lowering weights of previously
learned clauses, or reflect it in the form of new clauses. We also perform a
comprehensive experimental study using notably different datasets that
demonstrated high performance of the proposed approach. | Artificial Intelligence |
What field is the article from? | Title: Italian Crossword Generator: Enhancing Education through Interactive Word Puzzles
Abstract: Educational crosswords offer numerous benefits for students, including
increased engagement, improved understanding, critical thinking, and memory
retention. Creating high-quality educational crosswords can be challenging, but
recent advances in natural language processing and machine learning have made
it possible to use language models to generate nice wordplays. The exploitation
of cutting-edge language models like GPT3-DaVinci, GPT3-Curie, GPT3-Babbage,
GPT3-Ada, and BERT-uncased has led to the development of a comprehensive system
for generating and verifying crossword clues. A large dataset of clue-answer
pairs was compiled to fine-tune the models in a supervised manner to generate
original and challenging clues from a given keyword. On the other hand, for
generating crossword clues from a given text, Zero/Few-shot learning techniques
were used to extract clues from the input text, adding variety and creativity
to the puzzles. We employed the fine-tuned model to generate data and labeled
the acceptability of clue-answer parts with human supervision. To ensure
quality, we developed a classifier by fine-tuning existing language models on
the labeled dataset. Conversely, to assess the quality of clues generated from
the given text using zero/few-shot learning, we employed a zero-shot learning
approach to check the quality of generated clues. The results of the evaluation
have been very promising, demonstrating the effectiveness of the approach in
creating high-standard educational crosswords that offer students engaging and
rewarding learning experiences. | Computational Linguistics |
What field is the article from? | Title: Improving Adaptability and Generalizability of Efficient Transfer Learning for Vision-Language Models
Abstract: Vision-Language Models (VLMs) like CLIP have demonstrated remarkable
applicability across a variety of downstream tasks, including zero-shot image
classification. Recently, the use of prompts or adapters for efficient transfer
learning has gained significant attention for effectively adapting to
downstream tasks. However, the roles of vision and text prompts, as well as
adapters in terms of generalization and transfer difficulty, have been
overlooked, limiting performance on unseen tasks. In this paper, we empirically
analyze how VLMs behave when using vision and text prompts, adapters, and a
combination of these components, marking a novel exploration by our study. Our
observations find that utilizing vision prompts for class separability and text
adapters for task adaptation is crucial for adaptability and generalizability.
Moreover, to improve generalization across every domain, we propose an adaptive
ensemble method that effectively combines the general knowledge of VLMs with
task-specific knowledge according to transfer difficulty. Upon experimenting
with extensive benchmarks, our method consistently outperforms all baselines,
particularly on unseen tasks, demonstrating the effectiveness of our proposed
approach. | Computer Vision |
What field is the article from? | Title: GOPlan: Goal-conditioned Offline Reinforcement Learning by Planning with Learned Models
Abstract: Offline goal-conditioned RL (GCRL) offers a feasible paradigm to learn
general-purpose policies from diverse and multi-task offline datasets. Despite
notable recent progress, the predominant offline GCRL methods have been
restricted to model-free approaches, constraining their capacity to tackle
limited data budgets and unseen goal generalization. In this work, we propose a
novel two-stage model-based framework, Goal-conditioned Offline Planning
(GOPlan), including (1) pretraining a prior policy capable of capturing
multi-modal action distribution within the multi-goal dataset; (2) employing
the reanalysis method with planning to generate imagined trajectories for
funetuning policies. Specifically, the prior policy is based on an
advantage-weighted Conditioned Generative Adversarial Networks that exhibits
distinct mode separation to overcome the pitfalls of out-of-distribution (OOD)
actions. For further policy optimization, the reanalysis method generates
high-quality imaginary data by planning with learned models for both
intra-trajectory and inter-trajectory goals. Through experimental evaluations,
we demonstrate that GOPlan achieves state-of-the-art performance on various
offline multi-goal manipulation tasks. Moreover, our results highlight the
superior ability of GOPlan to handle small data budgets and generalize to OOD
goals. | Machine Learning |
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.