id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2503.20749 | Yuxuan Lu | Yuxuan Lu, Jing Huang, Yan Han, Bennet Bei, Yaochen Xie, Dakuo Wang,
Jessie Wang, Qi He | Beyond Believability: Accurate Human Behavior Simulation with Fine-Tuned
LLMs | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent research shows that LLMs can simulate ``believable'' human behaviors
to power LLM agents via prompt-only methods. In this work, we focus on
evaluating and improving LLMs' objective ``accuracy'' rather than the
subjective ``believability'' in the web action generation task, leveraging a
large-scale, real-world dataset collected from online shopping human actions.
We present the first comprehensive quantitative evaluation of state-of-the-art
LLMs (e.g., DeepSeek-R1, Llama, and Claude) on the task of web action
generation. Our results show that fine-tuning LLMs on real-world behavioral
data substantially improves their ability to generate actions compared to
prompt-only methods. Furthermore, incorporating synthesized reasoning traces
into model training leads to additional performance gains, demonstrating the
value of explicit rationale in behavior modeling. This work establishes a new
benchmark for evaluating LLMs in behavior simulation and offers actionable
insights into how real-world action data and reasoning augmentation can enhance
the fidelity of LLM agents.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 17:33:27 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 02:42:03 GMT"
},
{
"version": "v3",
"created": "Sat, 5 Apr 2025 02:45:14 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Lu",
"Yuxuan",
""
],
[
"Huang",
"Jing",
""
],
[
"Han",
"Yan",
""
],
[
"Bei",
"Bennet",
""
],
[
"Xie",
"Yaochen",
""
],
[
"Wang",
"Dakuo",
""
],
[
"Wang",
"Jessie",
""
],
[
"He",
"Qi",
""
]
] | TITLE: Beyond Believability: Accurate Human Behavior Simulation with Fine-Tuned
LLMs
ABSTRACT: Recent research shows that LLMs can simulate ``believable'' human behaviors
to power LLM agents via prompt-only methods. In this work, we focus on
evaluating and improving LLMs' objective ``accuracy'' rather than the
subjective ``believability'' in the web action generation task, leveraging a
large-scale, real-world dataset collected from online shopping human actions.
We present the first comprehensive quantitative evaluation of state-of-the-art
LLMs (e.g., DeepSeek-R1, Llama, and Claude) on the task of web action
generation. Our results show that fine-tuning LLMs on real-world behavioral
data substantially improves their ability to generate actions compared to
prompt-only methods. Furthermore, incorporating synthesized reasoning traces
into model training leads to additional performance gains, demonstrating the
value of explicit rationale in behavior modeling. This work establishes a new
benchmark for evaluating LLMs in behavior simulation and offers actionable
insights into how real-world action data and reasoning augmentation can enhance
the fidelity of LLM agents.
|
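The accuracy-over-believability framing in this abstract invites a concrete metric. Below is a minimal sketch of an exact-match action-accuracy computation; the `WebAction` schema (an action type plus a target) is an illustrative assumption, not the paper's actual action format.

```python
# Minimal sketch of an exact-match "action accuracy" metric, in the spirit of
# the paper's objective evaluation. The action schema is assumed for exposition.
from dataclasses import dataclass

@dataclass(frozen=True)
class WebAction:
    kind: str    # e.g. "click", "type_text", "add_to_cart"
    target: str  # e.g. a product id or page element

def action_accuracy(predicted: list[WebAction], reference: list[WebAction]) -> float:
    """Fraction of steps where the predicted action exactly matches the log."""
    assert len(predicted) == len(reference)
    hits = sum(p == r for p, r in zip(predicted, reference))
    return hits / len(reference) if reference else 0.0

preds = [WebAction("click", "item_42"), WebAction("add_to_cart", "item_42")]
refs  = [WebAction("click", "item_42"), WebAction("add_to_cart", "item_7")]
print(action_accuracy(preds, refs))  # 0.5
```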
2503.20771 | Soufiane Belharbi | Masoumeh Sharafi, Emma Ollivier, Muhammad Osama Zeeshan, Soufiane
Belharbi, Marco Pedersoli, Alessandro Lameiras Koerich, Simon Bacon, Eric
Granger | Disentangled Source-Free Personalization for Facial Expression
Recognition with Neutral Target Data | 14 pages, 9 figures, FG 2025: IEEE Conf. on Automatic Face and
Gesture Recognition | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Facial Expression Recognition (FER) from videos is a crucial task in various
application areas, such as human-computer interaction and health monitoring
(e.g., pain, depression, fatigue, and stress). Beyond the challenges of
recognizing subtle emotional or health states, the effectiveness of deep FER
models is often hindered by the considerable variability of expressions among
subjects. Source-free domain adaptation (SFDA) methods are employed to adapt a
pre-trained source model using only unlabeled target domain data, thereby
avoiding data privacy and storage issues. Typically, SFDA methods adapt to a
target domain dataset corresponding to an entire population and assume it
includes data from all recognition classes. However, collecting such
comprehensive target data can be difficult or even impossible for FER in
healthcare applications. In many real-world scenarios, it may be feasible to
collect a short neutral control video (displaying only neutral expressions) for
target subjects before deployment. These videos can be used to adapt a model to
better handle the variability of expressions among subjects. This paper
introduces the Disentangled Source-Free Domain Adaptation (DSFDA) method to
address the SFDA challenge posed by missing target expression data. DSFDA
leverages data from a neutral target control video for end-to-end generation
and adaptation of target data with missing non-neutral data. Our method learns
to disentangle features related to expressions and identity while generating
the missing non-neutral target data, thereby enhancing model accuracy.
Additionally, our self-supervision strategy improves model adaptation by
reconstructing target images that maintain the same identity and source
expression. Experimental results on the challenging BioVid and UNBC-McMaster
pain datasets indicate that our DSFDA approach can outperform state-of-the-art
adaptation methods.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 17:53:53 GMT"
},
{
"version": "v2",
"created": "Sat, 29 Mar 2025 01:24:17 GMT"
},
{
"version": "v3",
"created": "Sat, 5 Apr 2025 12:55:44 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Sharafi",
"Masoumeh",
""
],
[
"Ollivier",
"Emma",
""
],
[
"Zeeshan",
"Muhammad Osama",
""
],
[
"Belharbi",
"Soufiane",
""
],
[
"Pedersoli",
"Marco",
""
],
[
"Koerich",
"Alessandro Lameiras",
""
],
[
"Bacon",
"Simon",
""
],
[
"Granger",
"Eric",
""
]
] | TITLE: Disentangled Source-Free Personalization for Facial Expression
Recognition with Neutral Target Data
ABSTRACT: Facial Expression Recognition (FER) from videos is a crucial task in various
application areas, such as human-computer interaction and health monitoring
(e.g., pain, depression, fatigue, and stress). Beyond the challenges of
recognizing subtle emotional or health states, the effectiveness of deep FER
models is often hindered by the considerable variability of expressions among
subjects. Source-free domain adaptation (SFDA) methods are employed to adapt a
pre-trained source model using only unlabeled target domain data, thereby
avoiding data privacy and storage issues. Typically, SFDA methods adapt to a
target domain dataset corresponding to an entire population and assume it
includes data from all recognition classes. However, collecting such
comprehensive target data can be difficult or even impossible for FER in
healthcare applications. In many real-world scenarios, it may be feasible to
collect a short neutral control video (displaying only neutral expressions) for
target subjects before deployment. These videos can be used to adapt a model to
better handle the variability of expressions among subjects. This paper
introduces the Disentangled Source-Free Domain Adaptation (DSFDA) method to
address the SFDA challenge posed by missing target expression data. DSFDA
leverages data from a neutral target control video for end-to-end generation
and adaptation of target data with missing non-neutral data. Our method learns
to disentangle features related to expressions and identity while generating
the missing non-neutral target data, thereby enhancing model accuracy.
Additionally, our self-supervision strategy improves model adaptation by
reconstructing target images that maintain the same identity and source
expression. Experimental results on the challenging BioVid and UNBC-McMaster
pain datasets indicate that our DSFDA approach can outperform state-of-the-art
adaptation methods.
|
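The disentanglement idea in this abstract can be sketched as an encoder with separate identity and expression heads whose codes are recombined by a decoder. The layer sizes, the code-swapping mechanism, and the reconstruction loss below are illustrative assumptions, not the DSFDA architecture itself.

```python
# Sketch of identity/expression disentanglement with a code-swap decoder.
# Dimensions and the loss are toy assumptions for exposition.
from typing import Optional
import torch
import torch.nn as nn

class DisentangleAE(nn.Module):
    def __init__(self, d_in: int = 256, d_id: int = 32, d_expr: int = 32):
        super().__init__()
        self.enc_id = nn.Linear(d_in, d_id)      # identity branch
        self.enc_expr = nn.Linear(d_in, d_expr)  # expression branch
        self.dec = nn.Linear(d_id + d_expr, d_in)

    def forward(self, x: torch.Tensor, expr_code: Optional[torch.Tensor] = None):
        z_id, z_expr = self.enc_id(x), self.enc_expr(x)
        if expr_code is not None:        # swap in a non-neutral expression code
            z_expr = expr_code
        return self.dec(torch.cat([z_id, z_expr], dim=-1))

model = DisentangleAE()
neutral = torch.randn(4, 256)                    # features of neutral frames
recon = model(neutral)                           # self-reconstruction target
loss = nn.functional.mse_loss(recon, neutral)    # identity-preserving loss
print(loss.item())
```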
2503.21953 | Sorin Matei | Sorin Adam Matei, Rajesh Kalyanam | Risk-Prone and Risk-Averse Behavior in Natural Emergencies: An Appraisal
Theory Approach | 26 pages, 5 figures | null | null | null | cs.SI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Individuals who shared actionable information during Hurricane Sandy were
significantly more likely to exhibit risk-prone behavior, as measured by a
novel Risk Behavior Quotient (RBQ). Using a dataset of 36,595 geo-located tweets
from 774 users in the New York area, we found that a higher proportion of
actional tweets predicted increased exposure to physical risk, even if users
overall ultimately moved toward lower-risk zones. This counterintuitive finding
suggests that proactivity, manifested in sharing crisis-relevant content,
correlates with greater exposure to risk, possibly due to increased mobility or
engagement in hazardous areas. In contrast, a greater number of social media
peers was associated with reduced risk exposure. This study builds on appraisal
theory, which frames risk-related decisions as outcomes of cognitively mediated
emotional and rational evaluations. We extend this theory to digital crisis
behavior, distinguishing between emotional and actional appraisals expressed
via social media. Tweets were categorized using sentiment analysis and semantic
classification, enabling the isolation of affective and behavioral signals. Our
methodology combines natural language processing with spatial vector analysis
to estimate individual movement paths and risk exposure based on evacuation and
flooding maps. The resulting RBQ captures both direction and intensity of risk
behavior, allowing us to model how online communication reflects and predicts
real-world risk engagement during natural disasters.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 19:59:00 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 16:16:29 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Matei",
"Sorin Adam",
""
],
[
"Kalyanam",
"Rajesh",
""
]
] | TITLE: Risk-Prone and Risk-Averse Behavior in Natural Emergencies: An Appraisal
Theory Approach
ABSTRACT: Individuals who shared actionable information during Hurricane Sandy were
significantly more likely to exhibit risk-prone behavior, as measured by a
novel Risk Behavior Quotient (RBQ). Using a dataset of 36,595 geo-located tweets
from 774 users in the New York area, we found that a higher proportion of
actional tweets predicted increased exposure to physical risk, even if users
overall ultimately moved toward lower-risk zones. This counterintuitive finding
suggests that proactivity, manifested in sharing crisis-relevant content,
correlates with greater exposure to risk, possibly due to increased mobility or
engagement in hazardous areas. In contrast, a greater number of social media
peers was associated with reduced risk exposure. This study builds on appraisal
theory, which frames risk-related decisions as outcomes of cognitively mediated
emotional and rational evaluations. We extend this theory to digital crisis
behavior, distinguishing between emotional and actional appraisals expressed
via social media. Tweets were categorized using sentiment analysis and semantic
classification, enabling the isolation of affective and behavioral signals. Our
methodology combines natural language processing with spatial vector analysis
to estimate individual movement paths and risk exposure based on evacuation and
flooding maps. The resulting RBQ captures both direction and intensity of risk
behavior, allowing us to model how online communication reflects and predicts
real-world risk engagement during natural disasters.
|
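The spatial-vector analysis described here can be illustrated with a toy risk-quotient computation. Everything below is an assumption for exposition: the `risk_at()` lookup stands in for a flood/evacuation-map raster, and this particular signed-difference definition of the RBQ is a guess at the spirit of the measure, not the paper's formula.

```python
# Illustrative sketch (not the authors' code) of a risk-quotient style measure:
# compare the risk-zone value at successive tweet locations along a user track.
import numpy as np

def risk_at(latlon: np.ndarray) -> float:
    """Stand-in for a lookup into an evacuation/flood-risk raster (0 = safe)."""
    lat, lon = latlon
    return max(0.0, 1.0 - abs(lat - 40.7))  # toy risk gradient around 40.7N

def rbq(track: np.ndarray) -> float:
    """Mean signed change in risk along a path: >0 suggests risk-prone movement."""
    risks = np.array([risk_at(p) for p in track])
    return float(np.mean(np.diff(risks)))

track = np.array([[40.60, -74.0], [40.65, -74.0], [40.70, -74.0]])
print(rbq(track))  # positive: this user is moving toward the higher-risk zone
```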
2503.22869 | Alexey Gavryushin | Alexey Gavryushin, Florian Redhardt, Gaia Di Lorenzo, Luc Van Gool,
Marc Pollefeys, Kaichun Mo, Xi Wang | SIGHT: Single-Image Conditioned Generation of Hand Trajectories for
Hand-Object Interaction | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce a novel task of generating realistic and diverse 3D hand
trajectories given a single image of an object, which could be involved in a
hand-object interaction scene or pictured by itself. When humans grasp an
object, appropriate trajectories naturally form in our minds to use it for
specific tasks. Hand-object interaction trajectory priors can greatly benefit
applications in robotics, embodied AI, augmented reality and related fields.
However, synthesizing realistic and appropriate hand trajectories given a
single object or hand-object interaction image is a highly ambiguous task,
requiring one to correctly identify the object of interest and possibly even the
correct interaction among many possible alternatives. To tackle this
challenging problem, we propose the SIGHT-Fusion system, consisting of a
curated pipeline for extracting visual features of hand-object interaction
details from egocentric videos involving object manipulation, and a
diffusion-based conditional motion generation model processing the extracted
features. We train our method given video data with corresponding hand
trajectory annotations, without supervision in the form of action labels. For
the evaluation, we establish benchmarks utilizing the first-person FPHAB and
HOI4D datasets, testing our method against various baselines and using multiple
metrics. We also introduce task simulators for executing the generated hand
trajectories and reporting task success rates as an additional metric.
Experiments show that our method generates more appropriate and realistic hand
trajectories than baselines and presents promising generalization capability on
unseen objects. The accuracy of the generated hand trajectories is confirmed in
a physics simulation setting, showcasing the authenticity of the created
sequences and their applicability in downstream uses.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 20:53:20 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Apr 2025 09:35:08 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Gavryushin",
"Alexey",
""
],
[
"Redhardt",
"Florian",
""
],
[
"Di Lorenzo",
"Gaia",
""
],
[
"Van Gool",
"Luc",
""
],
[
"Pollefeys",
"Marc",
""
],
[
"Mo",
"Kaichun",
""
],
[
"Wang",
"Xi",
""
]
] | TITLE: SIGHT: Single-Image Conditioned Generation of Hand Trajectories for
Hand-Object Interaction
ABSTRACT: We introduce a novel task of generating realistic and diverse 3D hand
trajectories given a single image of an object, which could be involved in a
hand-object interaction scene or pictured by itself. When humans grasp an
object, appropriate trajectories naturally form in our minds to use it for
specific tasks. Hand-object interaction trajectory priors can greatly benefit
applications in robotics, embodied AI, augmented reality and related fields.
However, synthesizing realistic and appropriate hand trajectories given a
single object or hand-object interaction image is a highly ambiguous task,
requiring one to correctly identify the object of interest and possibly even the
correct interaction among many possible alternatives. To tackle this
challenging problem, we propose the SIGHT-Fusion system, consisting of a
curated pipeline for extracting visual features of hand-object interaction
details from egocentric videos involving object manipulation, and a
diffusion-based conditional motion generation model processing the extracted
features. We train our method given video data with corresponding hand
trajectory annotations, without supervision in the form of action labels. For
the evaluation, we establish benchmarks utilizing the first-person FPHAB and
HOI4D datasets, testing our method against various baselines and using multiple
metrics. We also introduce task simulators for executing the generated hand
trajectories and reporting task success rates as an additional metric.
Experiments show that our method generates more appropriate and realistic hand
trajectories than baselines and presents promising generalization capability on
unseen objects. The accuracy of the generated hand trajectories is confirmed in
a physics simulation setting, showcasing the authenticity of the created
sequences and their applicability in downstream uses.
|
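The diffusion-based trajectory generation described above can be sketched as an iterative denoising loop conditioned on an image feature. The stand-in denoiser network and the simplified one-line update rule below are assumptions; the actual SIGHT-Fusion model and its noise schedule are far richer.

```python
# Sketch of conditional diffusion sampling for a hand trajectory: start from
# noise and iteratively denoise, conditioned on an image feature.
import torch
import torch.nn as nn

T, steps, dim = 30, 50, 3                      # 30 waypoints of 3D positions
denoiser = nn.Sequential(nn.Linear(T * dim + 16, 128), nn.ReLU(),
                         nn.Linear(128, T * dim))  # stand-in for the real model
img_feat = torch.randn(16)                     # condition from the input image

x = torch.randn(T * dim)                       # pure-noise trajectory
for t in range(steps, 0, -1):
    eps = denoiser(torch.cat([x, img_feat]))   # predicted noise at this step
    x = x - (1.0 / steps) * eps                # simplified denoising update
trajectory = x.view(T, dim)
print(trajectory.shape)                        # torch.Size([30, 3])
```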
2503.23982 | Boris Hanin | Mike Winer, Boris Hanin | Deep Neural Nets as Hamiltonians | 19+7 pages | null | null | null | cond-mat.dis-nn cond-mat.stat-mech cs.AI cs.LG math.PR | http://creativecommons.org/licenses/by/4.0/ | Neural networks are complex functions of both their inputs and parameters.
Much prior work in deep learning theory analyzes the distribution of network
outputs at a fixed set of inputs (e.g. a training dataset) over random
initializations of the network parameters. The purpose of this article is to
consider the opposite situation: we view a randomly initialized Multi-Layer
Perceptron (MLP) as a Hamiltonian over its inputs. For typical realizations of
the network parameters, we study the properties of the energy landscape induced
by this Hamiltonian, focusing on the structure of near-global minima in the
limit of infinite width. Specifically, we use the replica trick to perform an
exact analytic calculation giving the entropy (log volume of space) at a given
energy. We further derive saddle point equations that describe the overlaps
between inputs sampled iid from the Gibbs distribution induced by the random
MLP. For linear activations we solve these saddle point equations exactly. But
we also solve them numerically for a variety of depths and activation
functions, including $\tanh, \sin, \text{ReLU}$, and shaped non-linearities. We
find even at infinite width a rich range of behaviors. For some
non-linearities, such as $\sin$, for instance, we find that the landscapes of
random MLPs exhibit full replica symmetry breaking, while shallow $\tanh$ and
ReLU networks or deep shaped MLPs are instead replica symmetric.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 11:51:10 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Apr 2025 09:41:03 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Winer",
"Mike",
""
],
[
"Hanin",
"Boris",
""
]
] | TITLE: Deep Neural Nets as Hamiltonians
ABSTRACT: Neural networks are complex functions of both their inputs and parameters.
Much prior work in deep learning theory analyzes the distribution of network
outputs at a fixed set of inputs (e.g. a training dataset) over random
initializations of the network parameters. The purpose of this article is to
consider the opposite situation: we view a randomly initialized Multi-Layer
Perceptron (MLP) as a Hamiltonian over its inputs. For typical realizations of
the network parameters, we study the properties of the energy landscape induced
by this Hamiltonian, focusing on the structure of near-global minima in the
limit of infinite width. Specifically, we use the replica trick to perform an
exact analytic calculation giving the entropy (log volume of space) at a given
energy. We further derive saddle point equations that describe the overlaps
between inputs sampled iid from the Gibbs distribution induced by the random
MLP. For linear activations we solve these saddle point equations exactly. But
we also solve them numerically for a variety of depths and activation
functions, including $\tanh, \sin, \text{ReLU}$, and shaped non-linearities. We
find even at infinite width a rich range of behaviors. For some
non-linearities, such as $\sin$, for instance, we find that the landscapes of
random MLPs exhibit full replica symmetry breaking, while shallow $\tanh$ and
ReLU networks or deep shaped MLPs are instead replica symmetric.
|
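A worked restatement of the setup this abstract describes may help: the MLP output defines a Hamiltonian over inputs, a Gibbs distribution follows, and the replica trick handles the quenched average over parameters. The notation below is a generic reconstruction from the abstract, not the paper's own conventions.

```latex
% Setup sketched from the abstract; notation is a generic reconstruction.
\[
  H(x) = f_\theta(x), \qquad
  p_\beta(x) = \frac{e^{-\beta H(x)}}{Z(\beta)}, \qquad
  Z(\beta) = \int e^{-\beta H(x)}\, dx,
\]
\[
  \mathbb{E}_\theta\!\left[\log Z\right]
  = \lim_{n \to 0} \frac{\mathbb{E}_\theta\!\left[Z^n\right] - 1}{n},
\]
where $f_\theta$ is a randomly initialized MLP, $\beta$ the inverse
temperature, and the entropy at a given energy follows from the Legendre
transform of $\log Z(\beta)$.
```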
2504.00027 | Muhammad Ahmad | Muhammad Ahmad, Humaira Farid, Iqra Ameer, Maaz Amjad, Muhammad
Muzamil, Ameer Hamza, Muhammad Jalal, Ildar Batyrshin, and Grigori Sidorov | Opioid Named Entity Recognition (ONER-2025) from Reddit | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The opioid overdose epidemic remains a critical public health crisis,
particularly in the United States, leading to significant mortality and
societal costs. Social media platforms like Reddit provide vast amounts of
unstructured data that offer insights into public perceptions, discussions, and
experiences related to opioid use. This study leverages Natural Language
Processing (NLP), specifically Opioid Named Entity Recognition (ONER-2025), to
extract actionable information from these platforms. Our research makes four
key contributions. First, we created a unique, manually annotated dataset
sourced from Reddit, where users share self-reported experiences of opioid use
via different administration routes. This dataset contains 331,285 tokens and
includes eight major opioid entity categories. Second, we detail our annotation
process and guidelines while discussing the challenges of labeling the
ONER-2025 dataset. Third, we analyze key linguistic challenges, including
slang, ambiguity, fragmented sentences, and emotionally charged language, in
opioid discussions. Fourth, we propose a real-time monitoring system to process
streaming data from social media, healthcare records, and emergency services to
identify overdose events. Using 5-fold cross-validation in 11 experiments, our
system integrates machine learning, deep learning, and transformer-based
language models with advanced contextual embeddings to enhance understanding.
Our transformer-based models (bert-base-NER and roberta-base) achieved 97%
accuracy and F1-score, outperforming baselines by 10.23% (RF=0.88).
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 20:51:06 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Apr 2025 04:25:58 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Ahmad",
"Muhammad",
""
],
[
"Farid",
"Humaira",
""
],
[
"Ameer",
"Iqra",
""
],
[
"Amjad",
"Maaz",
""
],
[
"Muzamil",
"Muhammad",
""
],
[
"Hamza",
"Ameer",
""
],
[
"Jalal",
"Muhammad",
""
],
[
"Batyrshin",
"Ildar",
""
],
[
"Sidorov",
"Grigori",
""
]
] | TITLE: Opioid Named Entity Recognition (ONER-2025) from Reddit
ABSTRACT: The opioid overdose epidemic remains a critical public health crisis,
particularly in the United States, leading to significant mortality and
societal costs. Social media platforms like Reddit provide vast amounts of
unstructured data that offer insights into public perceptions, discussions, and
experiences related to opioid use. This study leverages Natural Language
Processing (NLP), specifically Opioid Named Entity Recognition (ONER-2025), to
extract actionable information from these platforms. Our research makes four
key contributions. First, we created a unique, manually annotated dataset
sourced from Reddit, where users share self-reported experiences of opioid use
via different administration routes. This dataset contains 331,285 tokens and
includes eight major opioid entity categories. Second, we detail our annotation
process and guidelines while discussing the challenges of labeling the
ONER-2025 dataset. Third, we analyze key linguistic challenges, including
slang, ambiguity, fragmented sentences, and emotionally charged language, in
opioid discussions. Fourth, we propose a real-time monitoring system to process
streaming data from social media, healthcare records, and emergency services to
identify overdose events. Using 5-fold cross-validation in 11 experiments, our
system integrates machine learning, deep learning, and transformer-based
language models with advanced contextual embeddings to enhance understanding.
Our transformer-based models (bert-base-NER and roberta-base) achieved 97%
accuracy and F1-score, outperforming baselines by 10.23% (RF=0.88).
|
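The abstract names bert-base-NER as one of its backbones; a minimal inference sketch with the public generic checkpoint is shown below. The ONER-2025 opioid-specific weights are assumed unavailable here, and the sample sentence is invented.

```python
# Sketch of token-classification inference with the kind of backbone the paper
# fine-tunes. This uses the public generic NER model, not the opioid one.
from transformers import pipeline

ner = pipeline(
    "token-classification",
    model="dslim/bert-base-NER",       # public base; ONER-2025 weights assumed private
    aggregation_strategy="simple",     # merge word-piece tokens into entity spans
)

text = "Started taking oxycodone in Boston last March after surgery."
for ent in ner(text):
    print(ent["entity_group"], ent["word"], round(ent["score"], 2))
```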
2504.00041 | Jos\'e Vinicius De S Souza | J. V. S. Souza, C. B. Vieira, G. D. C. Cavalcanti, R. M. O. Cruz | Imbalanced malware classification: an approach based on dynamic
classifier selection | Short paper accepted at SSCI 2025. 4 pages + 1 reference page, 3
figures, 1 table | null | null | null | cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, the rise of cyber threats has emphasized the need for robust
malware detection systems, especially on mobile devices. Malware, which targets
vulnerabilities in devices and user data, represents a substantial security
risk. A significant challenge in malware detection is the imbalance in
datasets, where most applications are benign, with only a small fraction posing
a threat. This study addresses the often-overlooked issue of class imbalance in
malware detection by evaluating various machine learning strategies for
detecting malware in Android applications. We assess monolithic classifiers and
ensemble methods, focusing on dynamic selection algorithms, which have shown
superior performance compared to traditional approaches. In contrast to
balancing strategies performed on the whole dataset, we propose a balancing
procedure that works individually for each classifier in the pool. Our
empirical analysis demonstrates that the KNOP algorithm obtained the best
results using a pool of Random Forest. Additionally, an instance hardness
assessment revealed that balancing reduces the difficulty of the minority class
(malware) and enhances its detection. The code used for
the experiments is available at
https://github.com/jvss2/Machine-Learning-Empirical-Evaluation.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 19:12:16 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Apr 2025 19:40:14 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Souza",
"J. V. S.",
""
],
[
"Vieira",
"C. B.",
""
],
[
"Cavalcanti",
"G. D. C.",
""
],
[
"Cruz",
"R. M. O.",
""
]
] | TITLE: Imbalanced malware classification: an approach based on dynamic
classifier selection
ABSTRACT: In recent years, the rise of cyber threats has emphasized the need for robust
malware detection systems, especially on mobile devices. Malware, which targets
vulnerabilities in devices and user data, represents a substantial security
risk. A significant challenge in malware detection is the imbalance in
datasets, where most applications are benign, with only a small fraction posing
a threat. This study addresses the often-overlooked issue of class imbalance in
malware detection by evaluating various machine learning strategies for
detecting malware in Android applications. We assess monolithic classifiers and
ensemble methods, focusing on dynamic selection algorithms, which have shown
superior performance compared to traditional approaches. In contrast to
balancing strategies performed on the whole dataset, we propose a balancing
procedure that works individually for each classifier in the pool. Our
empirical analysis demonstrates that the KNOP algorithm obtained the best
results using a pool of Random Forest. Additionally, an instance hardness
assessment revealed that balancing reduces the difficulty of the minority class
(malware) and enhances its detection. The code used for
the experiments is available at
https://github.com/jvss2/Machine-Learning-Empirical-Evaluation.
|
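The per-classifier balancing idea combined with KNOP dynamic selection can be sketched directly with deslib and imbalanced-learn. The pool size, forest size, and choice of random undersampling below are assumptions; the paper reports KNOP over a Random Forest pool performing best.

```python
# Sketch: balance the training data independently for each pool member, then
# combine the pool with KNOP dynamic ensemble selection.
from deslib.des.knop import KNOP
from imblearn.under_sampling import RandomUnderSampler
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=3000, weights=[0.95, 0.05], random_state=0)
X_tr, X_rest, y_tr, y_rest = train_test_split(X, y, stratify=y, random_state=0)
X_dsel, X_te, y_dsel, y_te = train_test_split(X_rest, y_rest, stratify=y_rest,
                                              random_state=0)

pool = []
for i in range(10):
    # Per-classifier balancing: each member sees its own rebalanced sample.
    Xb, yb = RandomUnderSampler(random_state=i).fit_resample(X_tr, y_tr)
    pool.append(RandomForestClassifier(n_estimators=25, random_state=i).fit(Xb, yb))

knop = KNOP(pool_classifiers=pool)   # dynamic selection over the fitted pool
knop.fit(X_dsel, y_dsel)             # DSEL set drives the competence estimates
print("balanced-pool KNOP accuracy:", knop.score(X_te, y_te))
```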
2504.00891 | Runze Liu | Jian Zhao, Runze Liu, Kaiyan Zhang, Zhimu Zhou, Junqi Gao, Dong Li,
Jiafei Lyu, Zhouyi Qian, Biqing Qi, Xiu Li, Bowen Zhou | GenPRM: Scaling Test-Time Compute of Process Reward Models via
Generative Reasoning | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in Large Language Models (LLMs) have shown that it is
promising to utilize Process Reward Models (PRMs) as verifiers to enhance the
performance of LLMs. However, current PRMs face three key challenges: (1)
limited process supervision and generalization capabilities, (2) dependence on
scalar value prediction without leveraging the generative abilities of LLMs,
and (3) inability to scale the test-time compute of PRMs. In this work, we
introduce GenPRM, a generative process reward model that performs explicit
Chain-of-Thought (CoT) reasoning with code verification before providing
judgment for each reasoning step. To obtain high-quality process supervision
labels and rationale data, we propose Relative Progress Estimation (RPE) and a
rationale synthesis framework that incorporates code verification. Experimental
results on ProcessBench and several mathematical reasoning tasks show that
GenPRM significantly outperforms prior PRMs with only 23K training examples
from the MATH dataset. Through test-time scaling, a 1.5B GenPRM outperforms GPT-4o, and
a 7B GenPRM surpasses Qwen2.5-Math-PRM-72B on ProcessBench. Additionally,
GenPRM demonstrates strong abilities to serve as a critic model for policy
model refinement. This work establishes a new paradigm for process supervision
that bridges the gap between PRMs and critic models in LLMs. Our code, model,
and data will be available at https://ryanliu112.github.io/GenPRM.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 15:21:05 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Apr 2025 03:04:37 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Zhao",
"Jian",
""
],
[
"Liu",
"Runze",
""
],
[
"Zhang",
"Kaiyan",
""
],
[
"Zhou",
"Zhimu",
""
],
[
"Gao",
"Junqi",
""
],
[
"Li",
"Dong",
""
],
[
"Lyu",
"Jiafei",
""
],
[
"Qian",
"Zhouyi",
""
],
[
"Qi",
"Biqing",
""
],
[
"Li",
"Xiu",
""
],
[
"Zhou",
"Bowen",
""
]
] | TITLE: GenPRM: Scaling Test-Time Compute of Process Reward Models via
Generative Reasoning
ABSTRACT: Recent advancements in Large Language Models (LLMs) have shown that it is
promising to utilize Process Reward Models (PRMs) as verifiers to enhance the
performance of LLMs. However, current PRMs face three key challenges: (1)
limited process supervision and generalization capabilities, (2) dependence on
scalar value prediction without leveraging the generative abilities of LLMs,
and (3) inability to scale the test-time compute of PRMs. In this work, we
introduce GenPRM, a generative process reward model that performs explicit
Chain-of-Thought (CoT) reasoning with code verification before providing
judgment for each reasoning step. To obtain high-quality process supervision
labels and rationale data, we propose Relative Progress Estimation (RPE) and a
rationale synthesis framework that incorporates code verification. Experimental
results on ProcessBench and several mathematical reasoning tasks show that
GenPRM significantly outperforms prior PRMs with only 23K training examples
from the MATH dataset. Through test-time scaling, a 1.5B GenPRM outperforms GPT-4o, and
a 7B GenPRM surpasses Qwen2.5-Math-PRM-72B on ProcessBench. Additionally,
GenPRM demonstrates strong abilities to serve as a critic model for policy
model refinement. This work establishes a new paradigm for process supervision
that bridges the gap between PRMs and critic models in LLMs. Our code, model,
and data will be available at https://ryanliu112.github.io/GenPRM.
|
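A rough sketch of the test-time scaling mechanism: sample several generative verifications per reasoning step and majority-vote their verdicts. `judge_step()` below is a hypothetical stand-in for one chain-of-thought plus code-verification LLM call, not GenPRM's actual interface.

```python
# Sketch of PRM test-time scaling by majority vote over sampled judgments.
import random
from statistics import mean

def judge_step(problem: str, step: str) -> float:
    """Hypothetical single generative judgment: 1.0 accepts, 0.0 rejects."""
    return float(random.random() > 0.3)  # placeholder for an LLM verdict

def prm_scores(problem: str, steps: list[str], n_samples: int = 8) -> list[float]:
    """Per-step score = fraction of sampled judgments accepting the step."""
    return [mean(judge_step(problem, s) for _ in range(n_samples)) for s in steps]

print(prm_scores("Solve x^2 = 4", ["x = sqrt(4)", "so x = 2"], n_samples=8))
# Raising n_samples spends more test-time compute for a more reliable verdict.
```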
2504.00969 | Giovanni Cioffi | Giovanni Cioffi, Leonard Bauersfeld, Davide Scaramuzza | HDVIO2.0: Wind and Disturbance Estimation with Hybrid Dynamics VIO | null | null | null | null | cs.RO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual-inertial odometry (VIO) is widely used for state estimation in
autonomous micro aerial vehicles using onboard sensors. Current methods improve
VIO by incorporating a model of the translational vehicle dynamics, yet their
performance degrades when faced with low-accuracy vehicle models or continuous
external disturbances, like wind. Additionally, incorporating rotational
dynamics in these models is computationally intractable when they are deployed
in online applications, e.g., in a closed-loop control system. We present
HDVIO2.0, which models full 6-DoF, translational and rotational, vehicle
dynamics and tightly incorporates them into a VIO with minimal impact on the
runtime. HDVIO2.0 builds upon the previous work, HDVIO, and addresses these
challenges through a hybrid dynamics model combining a point-mass vehicle model
with a learning-based component, with access to control commands and IMU
history, to capture complex aerodynamic effects. The key idea behind modeling
the rotational dynamics is to represent them with continuous-time functions.
HDVIO2.0 leverages the divergence between the actual motion and the predicted
motion from the hybrid dynamics model to estimate external forces as well as
the robot state. Our system surpasses the performance of state-of-the-art
methods in experiments using public and new drone dynamics datasets, as well as
real-world flights in winds up to 25 km/h. Unlike existing approaches, we also
show that accurate vehicle dynamics predictions are achievable without precise
knowledge of the full vehicle state.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 17:08:27 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 06:48:15 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Cioffi",
"Giovanni",
""
],
[
"Bauersfeld",
"Leonard",
""
],
[
"Scaramuzza",
"Davide",
""
]
] | TITLE: HDVIO2.0: Wind and Disturbance Estimation with Hybrid Dynamics VIO
ABSTRACT: Visual-inertial odometry (VIO) is widely used for state estimation in
autonomous micro aerial vehicles using onboard sensors. Current methods improve
VIO by incorporating a model of the translational vehicle dynamics, yet their
performance degrades when faced with low-accuracy vehicle models or continuous
external disturbances, like wind. Additionally, incorporating rotational
dynamics in these models is computationally intractable when they are deployed
in online applications, e.g., in a closed-loop control system. We present
HDVIO2.0, which models full 6-DoF, translational and rotational, vehicle
dynamics and tightly incorporates them into a VIO with minimal impact on the
runtime. HDVIO2.0 builds upon the previous work, HDVIO, and addresses these
challenges through a hybrid dynamics model combining a point-mass vehicle model
with a learning-based component, with access to control commands and IMU
history, to capture complex aerodynamic effects. The key idea behind modeling
the rotational dynamics is to represent them with continuous-time functions.
HDVIO2.0 leverages the divergence between the actual motion and the predicted
motion from the hybrid dynamics model to estimate external forces as well as
the robot state. Our system surpasses the performance of state-of-the-art
methods in experiments using public and new drone dynamics datasets, as well as
real-world flights in winds up to 25 km/h. Unlike existing approaches, we also
show that accurate vehicle dynamics predictions are achievable without precise
knowledge of the full vehicle state.
|
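The disturbance-estimation idea here can be illustrated by comparing measured acceleration against a hybrid-model prediction and attributing the residual to an external force. The mass, thrust values, and constant learned residual below are toy assumptions standing in for the network's output.

```python
# Sketch: external force = mass * (measured accel - hybrid-model prediction).
import numpy as np

def predicted_accel(thrust_cmd: np.ndarray, mass: float,
                    learned_residual: np.ndarray) -> np.ndarray:
    """Point-mass model plus a learned aerodynamic residual (here a constant)."""
    g = np.array([0.0, 0.0, -9.81])
    return thrust_cmd / mass + g + learned_residual

mass = 0.8                                   # kg, toy quadrotor
thrust = np.array([0.0, 0.0, 8.5])           # N, commanded collective thrust
residual = np.array([0.05, 0.0, 0.0])        # would come from the learned model
a_meas = np.array([0.9, 0.1, 0.7])           # m/s^2, from VIO/IMU

f_ext = mass * (a_meas - predicted_accel(thrust, mass, residual))
print("estimated external force [N]:", f_ext)  # e.g., a lateral wind gust
```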
2504.00993 | Juncheng Wu | Juncheng Wu, Wenlong Deng, Xingxuan Li, Sheng Liu, Taomian Mi, Yifan
Peng, Ziyang Xu, Yi Liu, Hyunjin Cho, Chang-In Choi, Yihan Cao, Hui Ren,
Xiang Li, Xiaoxiao Li, Yuyin Zhou | MedReason: Eliciting Factual Medical Reasoning Steps in LLMs via
Knowledge Graphs | 18 pages, 11 figures, 6 tables. Project page:
https://github.com/UCSC-VLAA/MedReason | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Medical tasks such as diagnosis and treatment planning require precise and
complex reasoning, particularly in life-critical domains. Unlike mathematical
reasoning, medical reasoning demands meticulous, verifiable thought processes
to ensure reliability and accuracy. However, there is a notable lack of
datasets that provide transparent, step-by-step reasoning to validate and
enhance the medical reasoning ability of AI models. To bridge this gap, we
introduce MedReason, a large-scale high-quality medical reasoning dataset
designed to enable faithful and explainable medical problem-solving in large
language models (LLMs). We utilize a structured medical knowledge graph (KG) to
convert clinical QA pairs into logical chains of reasoning, or ``thinking
paths'', which trace connections from question elements to answers via relevant
KG entities. Each path is validated for consistency with clinical logic and
evidence-based medicine. Our pipeline generates detailed reasoning for various
medical questions from 7 medical datasets, resulting in a dataset of 32,682
question-answer pairs, each with detailed, step-by-step explanations.
Experiments demonstrate that fine-tuning with our dataset consistently boosts
medical problem-solving capabilities, achieving significant gains of up to 7.7%
for DeepSeek-Distill-8B. Our top-performing model, MedReason-8B, outperforms the
Huatuo-o1-8B, a state-of-the-art medical reasoning model, by up to 4.2% on the
clinical benchmark MedBullets. We also engage medical professionals from
diverse specialties to assess our dataset's quality, ensuring MedReason offers
accurate and coherent medical reasoning. Our data, models, and code are
available at https://github.com/UCSC-VLAA/MedReason.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 17:31:44 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 18:29:18 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wu",
"Juncheng",
""
],
[
"Deng",
"Wenlong",
""
],
[
"Li",
"Xingxuan",
""
],
[
"Liu",
"Sheng",
""
],
[
"Mi",
"Taomian",
""
],
[
"Peng",
"Yifan",
""
],
[
"Xu",
"Ziyang",
""
],
[
"Liu",
"Yi",
""
],
[
"Cho",
"Hyunjin",
""
],
[
"Choi",
"Chang-In",
""
],
[
"Cao",
"Yihan",
""
],
[
"Ren",
"Hui",
""
],
[
"Li",
"Xiang",
""
],
[
"Li",
"Xiaoxiao",
""
],
[
"Zhou",
"Yuyin",
""
]
] | TITLE: MedReason: Eliciting Factual Medical Reasoning Steps in LLMs via
Knowledge Graphs
ABSTRACT: Medical tasks such as diagnosis and treatment planning require precise and
complex reasoning, particularly in life-critical domains. Unlike mathematical
reasoning, medical reasoning demands meticulous, verifiable thought processes
to ensure reliability and accuracy. However, there is a notable lack of
datasets that provide transparent, step-by-step reasoning to validate and
enhance the medical reasoning ability of AI models. To bridge this gap, we
introduce MedReason, a large-scale high-quality medical reasoning dataset
designed to enable faithful and explainable medical problem-solving in large
language models (LLMs). We utilize a structured medical knowledge graph (KG) to
convert clinical QA pairs into logical chains of reasoning, or ``thinking
paths'', which trace connections from question elements to answers via relevant
KG entities. Each path is validated for consistency with clinical logic and
evidence-based medicine. Our pipeline generates detailed reasoning for various
medical questions from 7 medical datasets, resulting in a dataset of 32,682
question-answer pairs, each with detailed, step-by-step explanations.
Experiments demonstrate that fine-tuning with our dataset consistently boosts
medical problem-solving capabilities, achieving significant gains of up to 7.7%
for DeepSeek-Distill-8B. Our top-performing model, MedReason-8B, outperforms the
Huatuo-o1-8B, a state-of-the-art medical reasoning model, by up to 4.2% on the
clinical benchmark MedBullets. We also engage medical professionals from
diverse specialties to assess our dataset's quality, ensuring MedReason offers
accurate and coherent medical reasoning. Our data, models, and code are
available at https://github.com/UCSC-VLAA/MedReason.
|
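The "thinking path" construction can be sketched as a shortest-path query over a medical knowledge graph connecting question entities to answer entities. The toy triples below are invented for illustration and are not drawn from the KG the paper actually uses.

```python
# Sketch: link QA entities through a KG via a shortest path (a "thinking path").
import networkx as nx

kg = nx.Graph()
kg.add_edges_from([
    ("chest pain", "myocardial infarction"),
    ("myocardial infarction", "troponin"),
    ("troponin", "cardiac biomarker test"),
])

question_entity, answer_entity = "chest pain", "cardiac biomarker test"
path = nx.shortest_path(kg, question_entity, answer_entity)
print(" -> ".join(path))
# chest pain -> myocardial infarction -> troponin -> cardiac biomarker test
```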
2504.01308 | Jiawei Wang | Jiawei Wang and Yushen Zuo and Yuanjun Chai and Zhendong Liu and
Yicheng Fu and Yichun Feng and Kin-Man Lam | Safeguarding Vision-Language Models: Mitigating Vulnerabilities to
Gaussian Noise in Perturbation-based Attacks | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Vision-Language Models (VLMs) extend the capabilities of Large Language
Models (LLMs) by incorporating visual information, yet they remain vulnerable
to jailbreak attacks, especially when processing noisy or corrupted images.
Although existing VLMs adopt security measures during training to mitigate such
attacks, vulnerabilities associated with noise-augmented visual inputs are
overlooked. In this work, we identify that missing noise-augmented training
causes critical security gaps: many VLMs are susceptible to even simple
perturbations such as Gaussian noise. To address this challenge, we propose
Robust-VLGuard, a multimodal safety dataset with aligned / misaligned
image-text pairs, combined with noise-augmented fine-tuning that reduces attack
success rates while preserving the VLM's functionality. For stronger
optimization-based visual perturbation attacks, we propose DiffPure-VLM,
leveraging diffusion models to convert adversarial perturbations into
Gaussian-like noise, which can be defended by VLMs with noise-augmented safety
fine-tuning. Experimental results demonstrate that the distribution-shifting
property of the diffusion model aligns well with our fine-tuned VLMs, significantly
mitigating adversarial perturbations across varying intensities. The dataset
and code are available at https://github.com/JarvisUSTC/DiffPure-RobustVLM.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 02:35:19 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 02:40:38 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wang",
"Jiawei",
""
],
[
"Zuo",
"Yushen",
""
],
[
"Chai",
"Yuanjun",
""
],
[
"Liu",
"Zhendong",
""
],
[
"Fu",
"Yicheng",
""
],
[
"Feng",
"Yichun",
""
],
[
"Lam",
"Kin-Man",
""
]
] | TITLE: Safeguarding Vision-Language Models: Mitigating Vulnerabilities to
Gaussian Noise in Perturbation-based Attacks
ABSTRACT: Vision-Language Models (VLMs) extend the capabilities of Large Language
Models (LLMs) by incorporating visual information, yet they remain vulnerable
to jailbreak attacks, especially when processing noisy or corrupted images.
Although existing VLMs adopt security measures during training to mitigate such
attacks, vulnerabilities associated with noise-augmented visual inputs are
overlooked. In this work, we identify that missing noise-augmented training
causes critical security gaps: many VLMs are susceptible to even simple
perturbations such as Gaussian noise. To address this challenge, we propose
Robust-VLGuard, a multimodal safety dataset with aligned / misaligned
image-text pairs, combined with noise-augmented fine-tuning that reduces attack
success rates while preserving the VLM's functionality. For stronger
optimization-based visual perturbation attacks, we propose DiffPure-VLM,
leveraging diffusion models to convert adversarial perturbations into
Gaussian-like noise, which can be defended by VLMs with noise-augmented safety
fine-tuning. Experimental results demonstrate that the distribution-shifting
property of the diffusion model aligns well with our fine-tuned VLMs, significantly
mitigating adversarial perturbations across varying intensities. The dataset
and code are available at https://github.com/JarvisUSTC/DiffPure-RobustVLM.
|
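The noise-augmentation ingredient is straightforward to sketch: perturb training images with Gaussian noise of randomly drawn strength. The maximum noise level is an assumed hyperparameter, not one taken from the paper.

```python
# Sketch of Gaussian-noise augmentation for noise-robust safety fine-tuning.
import torch

def add_gaussian_noise(images: torch.Tensor, sigma_max: float = 0.1) -> torch.Tensor:
    """images: (B, C, H, W) in [0, 1]; each sample draws its own noise level."""
    sigma = torch.rand(images.size(0), 1, 1, 1) * sigma_max
    noisy = images + sigma * torch.randn_like(images)
    return noisy.clamp(0.0, 1.0)

batch = torch.rand(4, 3, 224, 224)
print(add_gaussian_noise(batch).shape)  # torch.Size([4, 3, 224, 224])
```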
2504.01553 | Zihao Wu | Zihao Wu | Bhakti: A Lightweight Vector Database Management System for Endowing
Large Language Models with Semantic Search Capabilities and Memory | 17 pages,5 figures | null | null | null | cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid development of big data and artificial intelligence
technologies, the demand for effective processing and retrieval of vector data
is growing. Against this backdrop, I have developed the Bhakti vector database,
aiming to provide a lightweight and easy-to-deploy solution to meet the storage
and semantic search needs of small and medium-sized datasets. Bhakti supports a
variety of similarity calculation methods and a domain-specific language (DSL)
for document-based pattern matching pre-filtering, facilitating migration of
data with its portable data files, flexible data management and seamless
integration with Python3. Furthermore, I propose a memory-enhanced large
language model dialogue solution based on the Bhakti database, which can assign
different weights to the question and answer in dialogue history, achieving
fine-grained control over the semantic importance of each segment in a single
dialogue history. Through experimental validation, my method shows strong
performance in semantic search and question-answering
systems. Although there are limitations in processing large datasets, such as
not supporting approximate calculation methods like HNSW, the lightweight
nature of Bhakti gives it a clear advantage in scenarios involving small and
medium-sized datasets.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 09:52:54 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 02:33:21 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wu",
"Zihao",
""
]
] | TITLE: Bhakti: A Lightweight Vector Database Management System for Endowing
Large Language Models with Semantic Search Capabilities and Memory
ABSTRACT: With the rapid development of big data and artificial intelligence
technologies, the demand for effective processing and retrieval of vector data
is growing. Against this backdrop, I have developed the Bhakti vector database,
aiming to provide a lightweight and easy-to-deploy solution to meet the storage
and semantic search needs of small and medium-sized datasets. Bhakti supports a
variety of similarity calculation methods and a domain-specific language (DSL)
for document-based pattern matching pre-filtering, facilitating migration of
data with its portable data files, flexible data management and seamless
integration with Python3. Furthermore, I propose a memory-enhanced large
language model dialogue solution based on the Bhakti database, which can assign
different weights to the question and answer in dialogue history, achieving
fine-grained control over the semantic importance of each segment in a single
dialogue history. Through experimental validation, my method shows strong
performance in semantic search and question-answering
systems. Although there are limitations in processing large datasets, such as
not supporting approximate calculation methods like HNSW, the lightweight
nature of Bhakti gives it a clear advantage in scenarios involving small and
medium-sized datasets.
|
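The two ideas this abstract pairs, brute-force cosine search and weighting question versus answer turns in dialogue history, can be sketched in a few lines of numpy. The hash-based embedder and the 0.3/0.7 weights below are placeholder assumptions, not Bhakti's actual implementation.

```python
# Sketch: cosine-similarity search plus a weighted dialogue-history memory.
import numpy as np

def embed(text: str, dim: int = 64) -> np.ndarray:
    rng = np.random.default_rng(abs(hash(text)) % (2**32))  # stand-in embedder
    v = rng.normal(size=dim)
    return v / np.linalg.norm(v)

def top_k(query: np.ndarray, index: np.ndarray, k: int = 3) -> np.ndarray:
    scores = index @ query                 # cosine similarity on unit vectors
    return np.argsort(scores)[::-1][:k]

history = [("What is Bhakti?", 0.3), ("A lightweight vector DB.", 0.7)]
memory = sum(w * embed(t) for t, w in history)   # weighted history vector
index = np.stack([embed(d) for d in ["vector search", "pasta recipe", "databases"]])
print(top_k(memory / np.linalg.norm(memory), index))
```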
2504.01636 | Stefan-Razvan Anton | Stefan R. Anton, Denis E. Tranca, Stefan G. Stanciu, Adrian M. Ionescu
and George A. Stanciu | Dataset and Methodology for Material Identification Using AFM Phase
Approach Curves | null | null | null | null | physics.optics | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Atomic force microscopy (AFM) phase approach-curves have significant
potential for nanoscale material characterization; however, the availability of
robust datasets and automated analysis tools has been limited. In this paper,
we introduce a novel methodology for material identification using a
high-dimensional dataset consisting of AFM phase approach-curves collected from
five distinct materials: silicon, silicon dioxide, platinum, silver, and gold.
Each measurement comprises 50 phase values obtained at progressively increasing
tip-sample distances, resulting in 50x50x50 voxel images that represent phase
variations at different depths. Using this dataset, we compare k-nearest
neighbors (KNN), random forest (RF), and feedforward neural network (FNN)
methods for material segmentation. Our results indicate that the FNN provides
the highest accuracy and F1 score, outperforming more traditional approaches.
Finally, we demonstrate the practical value of these segmented maps by
generating simulated scattering-type scanning near-field optical microscopy
(s-SNOM) images, highlighting how AFM phase approach-curves can be leveraged to
produce detailed, predictive tools for nanoscale optical analysis.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 11:42:03 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Apr 2025 19:37:19 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Anton",
"Stefan R.",
""
],
[
"Tranca",
"Denis E.",
""
],
[
"Stanciu",
"Stefan G.",
""
],
[
"Ionescu",
"Adrian M.",
""
],
[
"Stanciu",
"George A.",
""
]
] | TITLE: Dataset and Methodology for Material Identification Using AFM Phase
Approach Curves
ABSTRACT: Atomic force microscopy (AFM) phase approach-curves have significant
potential for nanoscale material characterization; however, the availability of
robust datasets and automated analysis tools has been limited. In this paper,
we introduce a novel methodology for material identification using a
high-dimensional dataset consisting of AFM phase approach-curves collected from
five distinct materials: silicon, silicon dioxide, platinum, silver, and gold.
Each measurement comprises 50 phase values obtained at progressively increasing
tip-sample distances, resulting in 50x50x50 voxel images that represent phase
variations at different depths. Using this dataset, we compare k-nearest
neighbors (KNN), random forest (RF), and feedforward neural network (FNN)
methods for material segmentation. Our results indicate that the FNN provides
the highest accuracy and F1 score, outperforming more traditional approaches.
Finally, we demonstrate the practical value of these segmented maps by
generating simulated scattering-type scanning near-field optical microscopy
(s-SNOM) images, highlighting how AFM phase approach-curves can be leveraged to
produce detailed, predictive tools for nanoscale optical analysis.
|
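The three-way classifier comparison this abstract reports is easy to reproduce in outline with scikit-learn. The synthetic data below merely mimics the 50 phase values per measurement and the five material classes; hidden-layer sizes are an assumption.

```python
# Sketch: KNN vs. random forest vs. a small feedforward net on 50-dim vectors.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier

X, y = make_classification(n_samples=2000, n_features=50, n_informative=20,
                           n_classes=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

for name, clf in [("KNN", KNeighborsClassifier()),
                  ("RF", RandomForestClassifier(random_state=0)),
                  ("FNN", MLPClassifier(hidden_layer_sizes=(64, 32),
                                        max_iter=500, random_state=0))]:
    print(name, clf.fit(X_tr, y_tr).score(X_te, y_te))
```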
2504.01774 | Jiankai Tang | Kegang Wang, Jiankai Tang, Yuxuan Fan, Jiatong Ji, Yuanchun Shi, and
Yuntao Wang | Memory-efficient Low-latency Remote Photoplethysmography through
Temporal-Spatial State Space Duality | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Remote photoplethysmography (rPPG), enabling non-contact physiological
monitoring through facial light reflection analysis, faces critical
computational bottlenecks as deep learning introduces performance gains at the
cost of prohibitive resource demands. This paper proposes ME-rPPG, a
memory-efficient algorithm built on temporal-spatial state space duality, which
resolves the trilemma of model scalability, cross-dataset generalization, and
real-time constraints. Leveraging a transferable state space, ME-rPPG
efficiently captures subtle periodic variations across facial frames while
maintaining minimal computational overhead, enabling training on extended video
sequences and supporting low-latency inference. Achieving cross-dataset MAEs of
5.38 (MMPD), 0.70 (VitalVideo), and 0.25 (PURE), ME-rPPG outperforms all
baselines with improvements ranging from 21.3% to 60.2%. Our solution enables
real-time inference with only 3.6 MB memory usage and 9.46 ms latency --
surpassing existing methods by 19.5%-49.7% accuracy and 43.2% user satisfaction
gains in real-world deployments. The code and demos are released for
reproducibility on https://health-hci-group.github.io/ME-rPPG-demo/.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 14:34:04 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 05:04:14 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wang",
"Kegang",
""
],
[
"Tang",
"Jiankai",
""
],
[
"Fan",
"Yuxuan",
""
],
[
"Ji",
"Jiatong",
""
],
[
"Shi",
"Yuanchun",
""
],
[
"Wang",
"Yuntao",
""
]
] | TITLE: Memory-efficient Low-latency Remote Photoplethysmography through
Temporal-Spatial State Space Duality
ABSTRACT: Remote photoplethysmography (rPPG), enabling non-contact physiological
monitoring through facial light reflection analysis, faces critical
computational bottlenecks as deep learning introduces performance gains at the
cost of prohibitive resource demands. This paper proposes ME-rPPG, a
memory-efficient algorithm built on temporal-spatial state space duality, which
resolves the trilemma of model scalability, cross-dataset generalization, and
real-time constraints. Leveraging a transferable state space, ME-rPPG
efficiently captures subtle periodic variations across facial frames while
maintaining minimal computational overhead, enabling training on extended video
sequences and supporting low-latency inference. Achieving cross-dataset MAEs of
5.38 (MMPD), 0.70 (VitalVideo), and 0.25 (PURE), ME-rPPG outperforms all
baselines with improvements ranging from 21.3% to 60.2%. Our solution enables
real-time inference with only 3.6 MB memory usage and 9.46 ms latency --
surpassing existing methods by 19.5%-49.7% accuracy and 43.2% user satisfaction
gains in real-world deployments. The code and demos are released for
reproducibility on https://health-hci-group.github.io/ME-rPPG-demo/.
|
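For reference, the MAE figures quoted above are the mean absolute error between estimated and ground-truth vital signs; a minimal computation is shown below with invented values.

```python
# Sketch of the reported metric: MAE between rPPG estimates and ground truth.
import numpy as np

hr_est = np.array([72.1, 80.4, 65.0, 90.2])   # bpm from the rPPG model
hr_ref = np.array([71.0, 79.5, 66.2, 95.0])   # bpm from contact-sensor ground truth
mae = np.mean(np.abs(hr_est - hr_ref))
print(f"MAE = {mae:.2f} bpm")
```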
2504.01890 | Shreyank N Gowda | Shreyank N Gowda, Boyan Gao, Xiao Gu, Xiaobo Jin | Is Temporal Prompting All We Need For Limited Labeled Action
Recognition? | Accepted in CVPR-W 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Video understanding has shown remarkable improvements in recent years,
largely dependent on the availability of large-scale labeled datasets. Recent
advancements in visual-language models, especially based on contrastive
pretraining, have shown remarkable generalization in zero-shot tasks, helping
to overcome this dependence on labeled datasets. Adaptations of such models for
videos typically involve modifying the architecture of vision-language models
to cater to video data. However, this is not trivial, since such adaptations
are mostly computationally intensive and struggle with temporal modeling. We
present TP-CLIP, an adaptation of CLIP that leverages temporal visual prompting
for temporal adaptation without modifying the core CLIP architecture. This
preserves its generalization abilities. TP-CLIP efficiently integrates into the
CLIP architecture, leveraging its pre-trained capabilities for video data.
Extensive experiments across various datasets demonstrate its efficacy in
zero-shot and few-shot learning, outperforming existing approaches with fewer
parameters and computational efficiency. In particular, we use just 1/3 the
GFLOPs and 1/28 the number of tuneable parameters in comparison to recent
state-of-the-art and still outperform it by up to 15.8% depending on the task
and dataset.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 16:50:28 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 08:59:15 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Gowda",
"Shreyank N",
""
],
[
"Gao",
"Boyan",
""
],
[
"Gu",
"Xiao",
""
],
[
"Jin",
"Xiaobo",
""
]
] | TITLE: Is Temporal Prompting All We Need For Limited Labeled Action
Recognition?
ABSTRACT: Video understanding has shown remarkable improvements in recent years,
largely dependent on the availability of large-scale labeled datasets. Recent
advancements in visual-language models, especially based on contrastive
pretraining, have shown remarkable generalization in zero-shot tasks, helping
to overcome this dependence on labeled datasets. Adaptations of such models for
videos typically involve modifying the architecture of vision-language models
to cater to video data. However, this is not trivial, since such adaptations
are mostly computationally intensive and struggle with temporal modeling. We
present TP-CLIP, an adaptation of CLIP that leverages temporal visual prompting
for temporal adaptation without modifying the core CLIP architecture. This
preserves its generalization abilities. TP-CLIP efficiently integrates into the
CLIP architecture, leveraging its pre-trained capabilities for video data.
Extensive experiments across various datasets demonstrate its efficacy in
zero-shot and few-shot learning, outperforming existing approaches with fewer
parameters and computational efficiency. In particular, we use just 1/3 the
GFLOPs and 1/28 the number of tuneable parameters in comparison to recent
state-of-the-art and still outperform it by up to 15.8% depending on the task
and dataset.
|
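The temporal-prompting idea, injecting learnable vectors into per-frame features so a frozen backbone absorbs temporal context, can be sketched in a few lines of torch. The feature dimensions and the additive mixing rule below are illustrative assumptions, not TP-CLIP's exact design.

```python
# Sketch: learnable per-frame prompts added to frozen CLIP frame embeddings.
import torch
import torch.nn as nn

class TemporalPrompt(nn.Module):
    def __init__(self, n_frames: int = 8, dim: int = 512):
        super().__init__()
        self.prompts = nn.Parameter(torch.zeros(n_frames, dim))  # one per frame

    def forward(self, frame_feats: torch.Tensor) -> torch.Tensor:
        """frame_feats: (B, T, D) frozen CLIP features; prompts added per frame."""
        return frame_feats + self.prompts.unsqueeze(0)

feats = torch.randn(2, 8, 512)                # frozen CLIP frame embeddings
video_feat = TemporalPrompt()(feats).mean(1)  # pooled video representation
print(video_feat.shape)                       # torch.Size([2, 512])
```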
2504.02052 | Yuetian Mao | Yuetian Mao, Junjie He, Chunyang Chen | From Prompts to Templates: A Systematic Prompt Template Analysis for
Real-world LLMapps | Accepted at ACM International Conference on the Foundations of
Software Engineering (FSE 2025) Industry Track | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have revolutionized human-AI interaction by
enabling intuitive task execution through natural language prompts. Despite
their potential, designing effective prompts remains a significant challenge,
as small variations in structure or wording can result in substantial
differences in output. To address these challenges, LLM-powered applications
(LLMapps) rely on prompt templates to simplify interactions, enhance usability,
and support specialized tasks such as document analysis, creative content
generation, and code synthesis. However, current practices heavily depend on
individual expertise and iterative trial-and-error processes, underscoring the
need for systematic methods to optimize prompt template design in LLMapps. This
paper presents a comprehensive analysis of prompt templates in practical
LLMapps. We construct a dataset of real-world templates from open-source
LLMapps, including those from leading companies like Uber and Microsoft.
Through a combination of LLM-driven analysis and human review, we categorize
template components and placeholders, analyze their distributions, and identify
frequent co-occurrence patterns. Additionally, we evaluate the impact of
identified patterns on LLMs' instruction-following performance through sample
testing. Our findings provide practical insights on prompt template design for
developers, supporting the broader adoption and optimization of LLMapps in
industrial settings.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 18:20:06 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 08:25:21 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Mao",
"Yuetian",
""
],
[
"He",
"Junjie",
""
],
[
"Chen",
"Chunyang",
""
]
] | TITLE: From Prompts to Templates: A Systematic Prompt Template Analysis for
Real-world LLMapps
ABSTRACT: Large Language Models (LLMs) have revolutionized human-AI interaction by
enabling intuitive task execution through natural language prompts. Despite
their potential, designing effective prompts remains a significant challenge,
as small variations in structure or wording can result in substantial
differences in output. To address these challenges, LLM-powered applications
(LLMapps) rely on prompt templates to simplify interactions, enhance usability,
and support specialized tasks such as document analysis, creative content
generation, and code synthesis. However, current practices heavily depend on
individual expertise and iterative trial-and-error processes, underscoring the
need for systematic methods to optimize prompt template design in LLMapps. This
paper presents a comprehensive analysis of prompt templates in practical
LLMapps. We construct a dataset of real-world templates from open-source
LLMapps, including those from leading companies like Uber and Microsoft.
Through a combination of LLM-driven analysis and human review, we categorize
template components and placeholders, analyze their distributions, and identify
frequent co-occurrence patterns. Additionally, we evaluate the impact of
identified patterns on LLMs' instruction-following performance through sample
testing. Our findings provide practical insights on prompt template design for
developers, supporting the broader adoption and optimization of LLMapps in
industrial settings.
|
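As a concrete illustration of the template/placeholder vocabulary used above, the snippet below builds a toy prompt template with role, context, instruction, and output-format components, then extracts its placeholders with a regular expression. The template text and component labels are invented for illustration, not drawn from the paper's dataset.

```python
# Toy prompt template with labeled components and placeholder extraction.
import re

template = (
    "You are a helpful assistant for {domain}.\n"                    # role component
    "Context:\n{context}\n"                                          # context component
    "Task: summarize the document in {num_sentences} sentences.\n"   # instruction
    "Output format: JSON with keys 'summary' and 'keywords'."        # output spec
)

placeholders = re.findall(r"\{(\w+)\}", template)
print(placeholders)  # ['domain', 'context', 'num_sentences']
print(template.format(domain="legal", context="...", num_sentences=3))
```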
2504.02259 | Jinhui Ye | Jinhui Ye, Zihan Wang, Haosen Sun, Keshigeyan Chandrasegaran, Zane
Durante, Cristobal Eyzaguirre, Yonatan Bisk, Juan Carlos Niebles, Ehsan
Adeli, Li Fei-Fei, Jiajun Wu and Manling Li | Re-thinking Temporal Search for Long-Form Video Understanding | Accepted by CVPR 2025; A real-world long video needle-in-haystack
benchmark; long-video QA with human ref frames | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Efficiently understanding long-form videos remains a significant challenge in
computer vision. In this work, we revisit temporal search paradigms for
long-form video understanding and address a fundamental issue pertaining to all
state-of-the-art (SOTA) long-context vision-language models (VLMs). Our
contributions are twofold: First, we frame temporal search as a Long Video
Haystack problem: finding a minimal set of relevant frames (e.g., one to five)
from tens of thousands based on specific queries. Upon this formulation, we
introduce LV-Haystack, the first dataset with 480 hours of videos, 15,092
human-annotated instances for both training and evaluation aiming to improve
temporal search quality and efficiency. Results on LV-Haystack highlight a
significant research gap in temporal search capabilities, with current SOTA
search methods only achieving 2.1% temporal F1 score on the Longvideobench
subset. Next, inspired by visual search in images, we propose a lightweight
temporal search framework, T*, that reframes costly temporal search as spatial
search. T* leverages powerful visual localization techniques commonly used in
images and introduces an adaptive zooming-in mechanism that operates across
both temporal and spatial dimensions. Extensive experiments show that
integrating T* with existing methods significantly improves SOTA long-form
video understanding. Under an inference budget of 32 frames, T* improves
GPT-4o's performance from 50.5% to 53.1% and LLaVA-OneVision-OV-72B's
performance from 56.5% to 62.4% on the Longvideobench XL subset. Our code,
benchmark, and models are provided in the Supplementary material.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 04:03:10 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Apr 2025 14:10:42 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Ye",
"Jinhui",
""
],
[
"Wang",
"Zihan",
""
],
[
"Sun",
"Haosen",
""
],
[
"Chandrasegaran",
"Keshigeyan",
""
],
[
"Durante",
"Zane",
""
],
[
"Eyzaguirre",
"Cristobal",
""
],
[
"Bisk",
"Yonatan",
""
],
[
"Niebles",
"Juan Carlos",
""
],
[
"Adeli",
"Ehsan",
""
],
[
"Fei-Fei",
"Li",
""
],
[
"Wu",
"Jiajun",
""
],
[
"Li",
"Manling",
""
]
] | TITLE: Re-thinking Temporal Search for Long-Form Video Understanding
ABSTRACT: Efficiently understanding long-form videos remains a significant challenge in
computer vision. In this work, we revisit temporal search paradigms for
long-form video understanding and address a fundamental issue pertaining to all
state-of-the-art (SOTA) long-context vision-language models (VLMs). Our
contributions are twofold: First, we frame temporal search as a Long Video
Haystack problem: finding a minimal set of relevant frames (e.g., one to five)
from tens of thousands based on specific queries. Upon this formulation, we
introduce LV-Haystack, the first dataset with 480 hours of videos, 15,092
human-annotated instances for both training and evaluation, aiming to improve
temporal search quality and efficiency. Results on LV-Haystack highlight a
significant research gap in temporal search capabilities, with current SOTA
search methods only achieving 2.1% temporal F1 score on the Longvideobench
subset. Next, inspired by visual search in images, we propose a lightweight
temporal search framework, T*, that reframes costly temporal search as spatial
search. T* leverages powerful visual localization techniques commonly used in
images and introduces an adaptive zooming-in mechanism that operates across
both temporal and spatial dimensions. Extensive experiments show that
integrating T* with existing methods significantly improves SOTA long-form
video understanding. Under an inference budget of 32 frames, T* improves
GPT-4o's performance from 50.5% to 53.1% and LLaVA-OneVision-OV-72B's
performance from 56.5% to 62.4% on the Longvideobench XL subset. Our code,
benchmark, and models are provided in the Supplementary material.
|
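The coarse-to-fine temporal search described above can be sketched as follows: sample a frame budget uniformly, score the frames against the query, and repeatedly zoom the window toward the best-scoring region. The scorer here is a random stub standing in for T*'s visual localization model, and the shrink rule and level count are assumptions.

```python
# Toy sketch of coarse-to-fine temporal search over a very long video.
import numpy as np

def score_frame(frame_idx, query):           # placeholder relevance scorer
    rng = np.random.default_rng(frame_idx)
    return rng.random()

def temporal_search(n_frames, query, budget=32, levels=3):
    lo, hi = 0, n_frames
    for _ in range(levels):
        idxs = np.linspace(lo, hi - 1, num=budget, dtype=int)
        scores = np.array([score_frame(i, query) for i in idxs])
        best = idxs[scores.argmax()]
        span = max((hi - lo) // 4, budget)    # shrink the window each level
        lo, hi = max(0, best - span // 2), min(n_frames, best + span // 2)
    return idxs[np.argsort(scores)[-5:]]      # a few keyframes for the VLM

print(temporal_search(100_000, "person opens the red door"))
```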
2504.02279 | Trung Thanh Nguyen | Trung Thanh Nguyen, Yasutomo Kawanishi, Vijay John, Takahiro Komamizu,
Ichiro Ide | MultiTSF: Transformer-based Sensor Fusion for Human-Centric Multi-view
and Multi-modal Action Recognition | This is a part of article arXiv:2504.02287 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Action recognition from multi-modal and multi-view observations holds
significant potential for applications in surveillance, robotics, and smart
environments. However, existing methods often fall short of addressing
real-world challenges such as diverse environmental conditions, strict sensor
synchronization, and the need for fine-grained annotations. In this study, we
propose the Multi-modal Multi-view Transformer-based Sensor Fusion (MultiTSF).
The proposed method leverages a Transformer-based architecture to dynamically model
inter-view relationships and capture temporal dependencies across multiple
views. Additionally, we introduce a Human Detection Module to generate
pseudo-ground-truth labels, enabling the model to prioritize frames containing
human activity and enhance spatial feature learning. Comprehensive experiments
conducted on our in-house MultiSensor-Home dataset and the existing MM-Office
dataset demonstrate that MultiTSF outperforms state-of-the-art methods in both
video sequence-level and frame-level action recognition settings.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 05:04:05 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 11:53:15 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Nguyen",
"Trung Thanh",
""
],
[
"Kawanishi",
"Yasutomo",
""
],
[
"John",
"Vijay",
""
],
[
"Komamizu",
"Takahiro",
""
],
[
"Ide",
"Ichiro",
""
]
] | TITLE: MultiTSF: Transformer-based Sensor Fusion for Human-Centric Multi-view
and Multi-modal Action Recognition
ABSTRACT: Action recognition from multi-modal and multi-view observations holds
significant potential for applications in surveillance, robotics, and smart
environments. However, existing methods often fall short of addressing
real-world challenges such as diverse environmental conditions, strict sensor
synchronization, and the need for fine-grained annotations. In this study, we
propose the Multi-modal Multi-view Transformer-based Sensor Fusion (MultiTSF).
The proposed method leverages a Transformer-based architecture to dynamically model
inter-view relationships and capture temporal dependencies across multiple
views. Additionally, we introduce a Human Detection Module to generate
pseudo-ground-truth labels, enabling the model to prioritize frames containing
human activity and enhance spatial feature learning. Comprehensive experiments
conducted on our in-house MultiSensor-Home dataset and the existing MM-Office
dataset demonstrate that MultiTSF outperforms state-of-the-art methods in both
video sequence-level and frame-level action recognition settings.
|
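A hedged sketch of the Transformer-based inter-view fusion idea: per-view, per-timestep features are flattened into one token sequence and fused with a standard Transformer encoder. The feature size, class count, and pooling are placeholder choices; the paper's Human Detection Module and pseudo-labeling are omitted.

```python
# Sketch of inter-view fusion: views and timesteps become one token sequence.
import torch
import torch.nn as nn

class ViewFusion(nn.Module):
    def __init__(self, dim=256, n_heads=4, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(dim, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(dim, 10)             # 10 action classes, illustrative

    def forward(self, view_feats):                 # (B, n_views, T, D)
        B, V, T, D = view_feats.shape
        tokens = view_feats.reshape(B, V * T, D)   # views and time as one sequence
        fused = self.encoder(tokens).mean(dim=1)   # pool over views and time
        return self.head(fused)

logits = ViewFusion()(torch.randn(2, 5, 16, 256))
print(logits.shape)  # torch.Size([2, 10])
```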
2504.02559 | Tushar Kataria | Siddharth Khincha, Tushar Kataria, Ankita Anand, Dan Roth, Vivek Gupta | Leveraging LLM For Synchronizing Information Across Multilingual Tables | 17 Pages, 11 Tables, 2 Figures | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The vast amount of online information today poses challenges for non-English
speakers, as much of it is concentrated in high-resource languages such as
English and French. Wikipedia reflects this imbalance, with content in
low-resource languages frequently outdated or incomplete. Recent research has
sought to improve cross-language synchronization of Wikipedia tables using
rule-based methods. These approaches can be effective, but they struggle with
complexity and generalization. This paper explores large language models (LLMs)
for multilingual information synchronization, using zero-shot prompting as a
scalable solution. We introduce the Information Updation dataset, simulating
the real-world process of updating outdated Wikipedia tables, and evaluate LLM
performance. Our findings reveal that single-prompt approaches often produce
suboptimal results, prompting us to introduce a task decomposition strategy
that enhances coherence and accuracy. Our proposed method outperforms existing
baselines, particularly in Information Updation (1.79%) and Information
Addition (20.58%), highlighting the model's strength in dynamically updating and
enriching data across architectures.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 13:15:18 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 19:18:32 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Khincha",
"Siddharth",
""
],
[
"Kataria",
"Tushar",
""
],
[
"Anand",
"Ankita",
""
],
[
"Roth",
"Dan",
""
],
[
"Gupta",
"Vivek",
""
]
] | TITLE: Leveraging LLM For Synchronizing Information Across Multilingual Tables
ABSTRACT: The vast amount of online information today poses challenges for non-English
speakers, as much of it is concentrated in high-resource languages such as
English and French. Wikipedia reflects this imbalance, with content in
low-resource languages frequently outdated or incomplete. Recent research has
sought to improve cross-language synchronization of Wikipedia tables using
rule-based methods. These approaches can be effective, but they struggle with
complexity and generalization. This paper explores large language models (LLMs)
for multilingual information synchronization, using zero-shot prompting as a
scalable solution. We introduce the Information Updation dataset, simulating
the real-world process of updating outdated Wikipedia tables, and evaluate LLM
performance. Our findings reveal that single-prompt approaches often produce
suboptimal results, prompting us to introduce a task decomposition strategy
that enhances coherence and accuracy. Our proposed method outperforms existing
baselines, particularly in Information Updation (1.79%) and Information
Addition (20.58%), highlighting the model's strength in dynamically updating and
enriching data across architectures.
|
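The task-decomposition strategy can be pictured as a small pipeline in which alignment, update detection, addition detection, and application are separate LLM calls instead of one monolithic prompt. `call_llm` below is a stub for any chat-completion client, and this subtask breakdown is one plausible reading of the approach, not the paper's exact prompts.

```python
# Sketch of decomposed zero-shot prompting for cross-language table sync.
def call_llm(prompt: str) -> str:
    return "<llm response>"          # stub; plug in a real client here

def sync_table(src_table: str, tgt_table: str) -> str:
    align = call_llm(f"Align rows between these tables:\n{src_table}\n{tgt_table}")
    updates = call_llm(f"Given the alignment:\n{align}\nList outdated target cells.")
    additions = call_llm(f"Given the alignment:\n{align}\nList rows missing from target.")
    return call_llm(
        "Apply these edits to the target table.\n"
        f"Updates: {updates}\nAdditions: {additions}\nTarget:\n{tgt_table}"
    )

print(sync_table("| city | pop |\n| Paris | 2.1M |", "| ville | pop |\n| Paris | 2.0M |"))
```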
2504.02658 | Beichen Huang | Beichen Huang, Yueming Yuan, Zelei Shao, Minjia Zhang | MiLo: Efficient Quantized MoE Inference with Mixture of Low-Rank
Compensators | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | A critical approach for efficiently deploying Mixture-of-Experts (MoE) models
with massive parameters is quantization. However, state-of-the-art MoE models
suffer from non-negligible accuracy loss with extreme quantization, such as
under 4 bits. To address this, we introduce MiLo, a novel method that augments
highly quantized MoEs with a mixture of low-rank compensators. These
compensators consume only a small amount of additional memory but significantly
recover accuracy loss from extreme quantization. MiLo also identifies that
MoE models exhibit distinctive characteristics across weights due to their
hybrid dense-sparse architectures, and employs adaptive rank selection policies
along with iterative optimizations to close the accuracy gap. MiLo does not
rely on calibration data, allowing it to generalize to different MoE models and
datasets without overfitting to a calibration set. To avoid the hardware
inefficiencies of extreme quantization, such as 3-bit, MiLo develops Tensor
Core-friendly 3-bit kernels, enabling measured latency speedups on 3-bit
quantized MoE models. Our evaluation shows that MiLo outperforms existing
methods on SoTA MoE models across various tasks.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 14:54:17 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 17:09:26 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Huang",
"Beichen",
""
],
[
"Yuan",
"Yueming",
""
],
[
"Shao",
"Zelei",
""
],
[
"Zhang",
"Minjia",
""
]
] | TITLE: MiLo: Efficient Quantized MoE Inference with Mixture of Low-Rank
Compensators
ABSTRACT: A critical approach for efficiently deploying Mixture-of-Experts (MoE) models
with massive parameters is quantization. However, state-of-the-art MoE models
suffer from non-negligible accuracy loss with extreme quantization, such as
under 4 bits. To address this, we introduce MiLo, a novel method that augments
highly quantized MoEs with a mixture of low-rank compensators. These
compensators consume only a small amount of additional memory but significantly
recover accuracy loss from extreme quantization. MiLo also identifies that
MoE models exhibit distinctive characteristics across weights due to their
hybrid dense-sparse architectures, and employs adaptive rank selection policies
along with iterative optimizations to close the accuracy gap. MiLo does not
rely on calibration data, allowing it to generalize to different MoE models and
datasets without overfitting to a calibration set. To avoid the hardware
inefficiencies of extreme quantization, such as 3-bit, MiLo develops Tensor
Core-friendly 3-bit kernels, enabling measured latency speedups on 3-bit
quantized MoE models. Our evaluation shows that MiLo outperforms existing
methods on SoTA MoE models across various tasks.
|
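The core low-rank compensation idea admits a compact sketch: quantize the weight matrix, then fit a rank-r corrector to the quantization residual with a truncated SVD, so inference uses W_q + A @ B. Uniform 3-bit quantization and a fixed rank are illustrative simplifications; MiLo's adaptive rank selection and iterative optimization are not reproduced here.

```python
# Quantize W, then compensate the residual with a truncated-SVD low-rank term.
import torch

def quantize(w, bits=3):
    scale = w.abs().max() / (2 ** (bits - 1) - 1)
    return (w / scale).round().clamp(-(2 ** (bits - 1)), 2 ** (bits - 1) - 1) * scale

def low_rank_compensator(w, bits=3, rank=8):
    wq = quantize(w, bits)
    u, s, vh = torch.linalg.svd(w - wq, full_matrices=False)
    a = u[:, :rank] * s[:rank]        # (out, r)
    b = vh[:rank]                     # (r, in)
    return wq, a, b

w = torch.randn(256, 256)
wq, a, b = low_rank_compensator(w)
print((w - wq).norm().item(), (w - (wq + a @ b)).norm().item())  # error shrinks
```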
2504.02965 | Abhishek Sharma | Abhishek Sharma and Dan Goldwasser | CoLa -- Learning to Interactively Collaborate with Large LMs | null | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | LLMs' remarkable ability to tackle a wide range of language tasks opened new
opportunities for collaborative human-AI problem solving. LLMs can amplify
human capabilities by applying their intuitions and reasoning strategies at
scale. We explore whether human guides can be simulated, by generalizing from
human demonstrations of guiding an AI system to solve complex language
problems. We introduce CoLa, a novel self-guided learning paradigm for training
automated $\textit{guides}$ and evaluate it on two QA datasets, a
puzzle-solving task, and a constrained text generation task. Our empirical
results show that CoLa consistently outperforms competitive approaches across
all domains. Moreover, a small-sized trained guide outperforms a strong model
like GPT-4 when acting as a guide. We compare the strategies employed by humans
and automated guides by conducting a human study on a QA dataset. We show that
automated guides outperform humans by adapting their strategies to reasoners'
capabilities and conduct qualitative analyses highlighting distinct differences
in guiding strategies.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 18:34:36 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 01:08:58 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Sharma",
"Abhishek",
""
],
[
"Goldwasser",
"Dan",
""
]
] | TITLE: CoLa -- Learning to Interactively Collaborate with Large LMs
ABSTRACT: LLMs' remarkable ability to tackle a wide range of language tasks opened new
opportunities for collaborative human-AI problem solving. LLMs can amplify
human capabilities by applying their intuitions and reasoning strategies at
scale. We explore whether human guides can be simulated, by generalizing from
human demonstrations of guiding an AI system to solve complex language
problems. We introduce CoLa, a novel self-guided learning paradigm for training
automated $\textit{guides}$ and evaluate it on two QA datasets, a
puzzle-solving task, and a constrained text generation task. Our empirical
results show that CoLa consistently outperforms competitive approaches across
all domains. Moreover, a small-sized trained guide outperforms a strong model
like GPT-4 when acting as a guide. We compare the strategies employed by humans
and automated guides by conducting a human study on a QA dataset. We show that
automated guides outperform humans by adapting their strategies to reasoners'
capabilities and conduct qualitative analyses highlighting distinct differences
in guiding strategies.
|
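The guide/reasoner interaction can be sketched as a simple loop: the guide proposes a hint, the reasoner attempts the problem, and the exchange continues until success or a turn budget runs out. Both calls are stubs, and the stopping rule is an assumption rather than CoLa's training procedure.

```python
# Toy guide/reasoner loop; both model calls are placeholder stubs.
def guide(problem, history):
    return f"Hint {len(history) + 1}: break the problem into smaller steps."

def reasoner(problem, hint):
    return {"answer": "42", "correct": len(hint) % 2 == 0}  # stub

def guided_solve(problem, max_turns=3):
    history = []
    for _ in range(max_turns):
        hint = guide(problem, history)
        result = reasoner(problem, hint)
        history.append((hint, result))
        if result["correct"]:
            break
    return history

print(guided_solve("What is 6 x 7?"))
```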
2504.03164 | Kexin Tian | Kexin Tian, Jingrui Mao, Yunlong Zhang, Jiwan Jiang, Yang Zhou,
Zhengzhong Tu | NuScenes-SpatialQA: A Spatial Understanding and Reasoning Benchmark for
Vision-Language Models in Autonomous Driving | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advancements in Vision-Language Models (VLMs) have demonstrated strong
potential for autonomous driving tasks. However, their spatial understanding
and reasoning, key capabilities for autonomous driving, still exhibit significant
limitations. Notably, none of the existing benchmarks systematically evaluate
VLMs' spatial reasoning capabilities in driving scenarios. To fill this gap, we
propose NuScenes-SpatialQA, the first large-scale ground-truth-based
Question-Answer (QA) benchmark specifically designed to evaluate the spatial
understanding and reasoning capabilities of VLMs in autonomous driving. Built
upon the NuScenes dataset, the benchmark is constructed through an automated 3D
scene graph generation pipeline and a QA generation pipeline. The benchmark
systematically evaluates VLMs' performance in both spatial understanding and
reasoning across multiple dimensions. Using this benchmark, we conduct
extensive experiments on diverse VLMs, including both general and
spatial-enhanced models, providing the first comprehensive evaluation of their
spatial capabilities in autonomous driving. Surprisingly, the experimental
results show that the spatial-enhanced VLM outperforms in qualitative QA but
does not demonstrate competitiveness in quantitative QA. In general, VLMs still
face considerable challenges in spatial understanding and reasoning.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 04:43:10 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 03:39:02 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Tian",
"Kexin",
""
],
[
"Mao",
"Jingrui",
""
],
[
"Zhang",
"Yunlong",
""
],
[
"Jiang",
"Jiwan",
""
],
[
"Zhou",
"Yang",
""
],
[
"Tu",
"Zhengzhong",
""
]
] | TITLE: NuScenes-SpatialQA: A Spatial Understanding and Reasoning Benchmark for
Vision-Language Models in Autonomous Driving
ABSTRACT: Recent advancements in Vision-Language Models (VLMs) have demonstrated strong
potential for autonomous driving tasks. However, their spatial understanding
and reasoning, key capabilities for autonomous driving, still exhibit significant
limitations. Notably, none of the existing benchmarks systematically evaluate
VLMs' spatial reasoning capabilities in driving scenarios. To fill this gap, we
propose NuScenes-SpatialQA, the first large-scale ground-truth-based
Question-Answer (QA) benchmark specifically designed to evaluate the spatial
understanding and reasoning capabilities of VLMs in autonomous driving. Built
upon the NuScenes dataset, the benchmark is constructed through an automated 3D
scene graph generation pipeline and a QA generation pipeline. The benchmark
systematically evaluates VLMs' performance in both spatial understanding and
reasoning across multiple dimensions. Using this benchmark, we conduct
extensive experiments on diverse VLMs, including both general and
spatial-enhanced models, providing the first comprehensive evaluation of their
spatial capabilities in autonomous driving. Surprisingly, the experimental
results show that the spatial-enhanced VLM outperforms in qualitative QA but
does not demonstrate competitiveness in quantitative QA. In general, VLMs still
face considerable challenges in spatial understanding and reasoning.
|
2504.03197 | Jaewoo Park | Jaewoo Park, Jungyang Park, Dongju Jang, Jiwan Chung, Byungwoo Yoo,
Jaewoo Shin, Seonjoon Park, Taehyeong Kim, Youngjae Yu | Explain with Visual Keypoints Like a Real Mentor! A Benchmark for
Multimodal Solution Explanation | 18 pages, 4 figures | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid advancement of mathematical reasoning capabilities in Large
Language Models (LLMs), AI systems are increasingly being adopted in
educational settings to support students' comprehension of problem-solving
processes. However, a critical component remains underexplored in current
LLM-generated explanations: visual explanation. In real-world instructional
contexts, human tutors routinely employ visual aids - such as diagrams,
markings, and highlights - to enhance conceptual clarity. To bridge this gap,
we introduce a novel task of visual solution explanation, which requires
generating explanations that incorporate newly introduced visual elements
essential for understanding (e.g., auxiliary lines, annotations, or geometric
constructions). To evaluate model performance on this task, we propose
MathExplain, a multimodal benchmark consisting of 997 math problems annotated
with visual keypoints and corresponding explanatory text that references those
elements. Our empirical results show that while some closed-source models
demonstrate promising capabilities on visual solution-explaining, current
open-source general-purpose models perform inconsistently, particularly in
identifying relevant visual components and producing coherent keypoint-based
explanations. We expect that visual solution-explaining and the MathExplain
dataset will catalyze further research on multimodal LLMs in education and
advance their deployment as effective, explanation-oriented AI tutors. Code and
data will be released publicly.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 06:03:13 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 14:23:25 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Park",
"Jaewoo",
""
],
[
"Park",
"Jungyang",
""
],
[
"Jang",
"Dongju",
""
],
[
"Chung",
"Jiwan",
""
],
[
"Yoo",
"Byungwoo",
""
],
[
"Shin",
"Jaewoo",
""
],
[
"Park",
"Seonjoon",
""
],
[
"Kim",
"Taehyeong",
""
],
[
"Yu",
"Youngjae",
""
]
] | TITLE: Explain with Visual Keypoints Like a Real Mentor! A Benchmark for
Multimodal Solution Explanation
ABSTRACT: With the rapid advancement of mathematical reasoning capabilities in Large
Language Models (LLMs), AI systems are increasingly being adopted in
educational settings to support students' comprehension of problem-solving
processes. However, a critical component remains underexplored in current
LLM-generated explanations: visual explanation. In real-world instructional
contexts, human tutors routinely employ visual aids - such as diagrams,
markings, and highlights - to enhance conceptual clarity. To bridge this gap,
we introduce a novel task of visual solution explanation, which requires
generating explanations that incorporate newly introduced visual elements
essential for understanding (e.g., auxiliary lines, annotations, or geometric
constructions). To evaluate model performance on this task, we propose
MathExplain, a multimodal benchmark consisting of 997 math problems annotated
with visual keypoints and corresponding explanatory text that references those
elements. Our empirical results show that while some closed-source models
demonstrate promising capabilities on visual solution-explaining, current
open-source general-purpose models perform inconsistently, particularly in
identifying relevant visual components and producing coherent keypoint-based
explanations. We expect that visual solution-explaining and the MathExplain
dataset will catalyze further research on multimodal LLMs in education and
advance their deployment as effective, explanation-oriented AI tutors. Code and
data will be released publicly.
|
2504.03438 | Shichen Qiao | Sheng Yang, Tong Zhan, Shichen Qiao, Jicheng Gong, Qing Yang, Jian
Wang, Yanfeng Lu | ZFusion: An Effective Fuser of Camera and 4D Radar for 3D Object
Perception in Autonomous Driving | CVPR 2025 WDFM-AD | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reliable 3D object perception is essential in autonomous driving. Owing to
its sensing capabilities in all weather conditions, 4D radar has recently
received much attention. However, compared to LiDAR, 4D radar provides much
sparser point cloud. In this paper, we propose a 3D object detection method,
termed ZFusion, which fuses 4D radar and vision modality. As the core of
ZFusion, our proposed FP-DDCA (Feature Pyramid-Double Deformable Cross
Attention) fuser complements the (sparse) radar information and (dense) vision
information, effectively. Specifically, with a feature-pyramid structure, the
FP-DDCA fuser packs Transformer blocks to interactively fuse multi-modal
features at different scales, thus enhancing perception accuracy. In addition,
we utilize the Depth-Context-Split view transformation module due to the
physical properties of 4D radar. Considering that 4D radar has a much lower
cost than LiDAR, ZFusion is an attractive alternative to LiDAR-based methods.
In typical traffic scenarios like the VoD (View-of-Delft) dataset, experiments
show that with reasonable inference speed, ZFusion achieved the
state-of-the-art mAP (mean average precision) in the region of interest, while
having competitive mAP in the entire area compared to the baseline methods,
which demonstrates performance close to LiDAR and greatly outperforms those
camera-only methods.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 13:29:32 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 12:35:03 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Yang",
"Sheng",
""
],
[
"Zhan",
"Tong",
""
],
[
"Qiao",
"Shichen",
""
],
[
"Gong",
"Jicheng",
""
],
[
"Yang",
"Qing",
""
],
[
"Wang",
"Jian",
""
],
[
"Lu",
"Yanfeng",
""
]
] | TITLE: ZFusion: An Effective Fuser of Camera and 4D Radar for 3D Object
Perception in Autonomous Driving
ABSTRACT: Reliable 3D object perception is essential in autonomous driving. Owing to
its sensing capabilities in all weather conditions, 4D radar has recently
received much attention. However, compared to LiDAR, 4D radar provides a much
sparser point cloud. In this paper, we propose a 3D object detection method,
termed ZFusion, which fuses 4D radar and vision modality. As the core of
ZFusion, our proposed FP-DDCA (Feature Pyramid-Double Deformable Cross
Attention) fuser complements the (sparse) radar information and (dense) vision
information, effectively. Specifically, with a feature-pyramid structure, the
FP-DDCA fuser packs Transformer blocks to interactively fuse multi-modal
features at different scales, thus enhancing perception accuracy. In addition,
we utilize the Depth-Context-Split view transformation module due to the
physical properties of 4D radar. Considering that 4D radar has a much lower
cost than LiDAR, ZFusion is an attractive alternative to LiDAR-based methods.
In typical traffic scenarios like the VoD (View-of-Delft) dataset, experiments
show that with reasonable inference speed, ZFusion achieved the
state-of-the-art mAP (mean average precision) in the region of interest, while
having competitive mAP in the entire area compared to the baseline methods,
which demonstrates performance close to LiDAR and greatly outperforms those
camera-only methods.
|
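A simplified stand-in for the FP-DDCA fuser: bidirectional cross-attention lets dense camera tokens query sparse radar tokens and vice versa, with residual connections. The real FP-DDCA uses deformable attention over a feature pyramid, which this sketch omits; dimensions are illustrative.

```python
# Bidirectional cross-attention fusion of camera and radar feature tokens.
import torch
import torch.nn as nn

class CameraRadarFusion(nn.Module):
    def __init__(self, dim=128, n_heads=4):
        super().__init__()
        self.cam_from_radar = nn.MultiheadAttention(dim, n_heads, batch_first=True)
        self.radar_from_cam = nn.MultiheadAttention(dim, n_heads, batch_first=True)

    def forward(self, cam, radar):             # (B, Nc, D), (B, Nr, D)
        cam2, _ = self.cam_from_radar(cam, radar, radar)   # camera queries radar
        radar2, _ = self.radar_from_cam(radar, cam, cam)   # radar queries camera
        return cam + cam2, radar + radar2                  # residual fusion

cam, radar = torch.randn(2, 1024, 128), torch.randn(2, 64, 128)
fused_cam, fused_radar = CameraRadarFusion()(cam, radar)
print(fused_cam.shape, fused_radar.shape)
```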
2504.03641 | Yi-Fan Zhang | Wulin Xie, Yi-Fan Zhang, Chaoyou Fu, Yang Shi, Bingyan Nie, Hongkai
Chen, Zhang Zhang, Liang Wang, Tieniu Tan | MME-Unify: A Comprehensive Benchmark for Unified Multimodal
Understanding and Generation Models | Project page: https://mme-unify.github.io/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing MLLM benchmarks face significant challenges in evaluating Unified
MLLMs (U-MLLMs) due to: 1) lack of standardized benchmarks for traditional
tasks, leading to inconsistent comparisons; 2) absence of benchmarks for
mixed-modality generation, which fails to assess multimodal reasoning
capabilities. We present a comprehensive evaluation framework designed to
systematically assess U-MLLMs. Our benchmark includes: 1. Standardized
Traditional Task Evaluation. We sample from 12 datasets, covering 10 tasks with
30 subtasks, ensuring consistent and fair comparisons across studies. 2. Unified
Task Assessment. We introduce five novel tasks testing multimodal reasoning,
including image editing, commonsense QA with image generation, and geometric
reasoning. 3. Comprehensive Model Benchmarking. We evaluate 12 leading U-MLLMs,
such as Janus-Pro, EMU3, VILA-U, and Gemini2-flash, alongside specialized
understanding (e.g., Claude-3.5-Sonnet) and generation models (e.g., DALL-E-3).
Our findings reveal substantial performance gaps in existing U-MLLMs,
highlighting the need for more robust models capable of handling mixed-modality
tasks effectively. The code and evaluation data can be found in
https://mme-unify.github.io/.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 17:59:55 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 16:12:54 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Xie",
"Wulin",
""
],
[
"Zhang",
"Yi-Fan",
""
],
[
"Fu",
"Chaoyou",
""
],
[
"Shi",
"Yang",
""
],
[
"Nie",
"Bingyan",
""
],
[
"Chen",
"Hongkai",
""
],
[
"Zhang",
"Zhang",
""
],
[
"Wang",
"Liang",
""
],
[
"Tan",
"Tieniu",
""
]
] | TITLE: MME-Unify: A Comprehensive Benchmark for Unified Multimodal
Understanding and Generation Models
ABSTRACT: Existing MLLM benchmarks face significant challenges in evaluating Unified
MLLMs (U-MLLMs) due to: 1) lack of standardized benchmarks for traditional
tasks, leading to inconsistent comparisons; 2) absence of benchmarks for
mixed-modality generation, which fails to assess multimodal reasoning
capabilities. We present a comprehensive evaluation framework designed to
systematically assess U-MLLMs. Our benchmark includes: 1. Standardized
Traditional Task Evaluation. We sample from 12 datasets, covering 10 tasks with
30 subtasks, ensuring consistent and fair comparisons across studies. 2. Unified
Task Assessment. We introduce five novel tasks testing multimodal reasoning,
including image editing, commonsense QA with image generation, and geometric
reasoning. 3. Comprehensive Model Benchmarking. We evaluate 12 leading U-MLLMs,
such as Janus-Pro, EMU3, VILA-U, and Gemini2-flash, alongside specialized
understanding (e.g., Claude-3.5-Sonnet) and generation models (e.g., DALL-E-3).
Our findings reveal substantial performance gaps in existing U-MLLMs,
highlighting the need for more robust models capable of handling mixed-modality
tasks effectively. The code and evaluation data can be found in
https://mme-unify.github.io/.
|
2504.03649 | Samy JAD | Samy Jad (LGP), Xavier Desforges (LGP), Pierre-Yves Villard, Christian
Caussid\'ery, Kamal Medjaher (LGP) | Diagnostic Method for Hydropower Plant Condition-based Maintenance
combining Autoencoder with Clustering Algorithms | null | Advanced Maintenance Engineering, Services and Technology - 6th
AMEST 2024, International Federation of Automatic Control, Jun 2024, Cagliari
(Sardaigne), Italy. pp.151-156 | 10.1016/j.ifacol.2024.08.065 | null | cs.AI cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The French company EDF uses supervisory control and data acquisition systems
in conjunction with a data management platform to monitor hydropower plants,
allowing engineers and technicians to analyse the time-series collected.
Depending on the strategic importance of the monitored hydropower plant, the
number of time-series collected can vary greatly making it difficult to
generate valuable information from the extracted data. In an attempt to provide
an answer to this particular problem, a condition detection and diagnosis
method combining clustering algorithms and autoencoder neural networks for
pattern recognition has been developed and is presented in this paper. First, a
dimension reduction algorithm is used to create a 2- or 3-dimensional projection
that allows the users to identify unsuspected relationships between datapoints.
Then, a collection of clustering algorithms groups the datapoints into
clusters. For each identified cluster, an autoencoder neural network is trained
on the corresponding dataset. The aim is to measure the reconstruction error
between each autoencoder model and the measured values, thus creating a
proximity index for each state discovered during the clustering stage.
| [
{
"version": "v1",
"created": "Mon, 24 Feb 2025 08:57:47 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Jad",
"Samy",
"",
"LGP"
],
[
"Desforges",
"Xavier",
"",
"LGP"
],
[
"Villard",
"Pierre-Yves",
"",
"LGP"
],
[
"Caussidéry",
"Christian",
"",
"LGP"
],
[
"Medjaher",
"Kamal",
"",
"LGP"
]
] | TITLE: Diagnostic Method for Hydropower Plant Condition-based Maintenance
combining Autoencoder with Clustering Algorithms
ABSTRACT: The French company EDF uses supervisory control and data acquisition systems
in conjunction with a data management platform to monitor hydropower plants,
allowing engineers and technicians to analyse the time-series collected.
Depending on the strategic importance of the monitored hydropower plant, the
number of time-series collected can vary greatly, making it difficult to
generate valuable information from the extracted data. In an attempt to provide
an answer to this particular problem, a condition detection and diagnosis
method combining clustering algorithms and autoencoder neural networks for
pattern recognition has been developed and is presented in this paper. First, a
dimension reduction algorithm is used to create a 2- or 3-dimensional projection
that allows the users to identify unsuspected relationships between datapoints.
Then, a collection of clustering algorithms groups the datapoints into
clusters. For each identified cluster, an autoencoder neural network is trained
on the corresponding dataset. The aim is to measure the reconstruction error
between each autoencoder model and the measured values, thus creating a
proximity index for each state discovered during the clustering stage.
|
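The pipeline described above is small enough to sketch end to end: reduce dimensionality for a 2-D projection, cluster the projected points, train one autoencoder per cluster, and score new data by reconstruction error against each cluster's model. The sizes, cluster count, and MLP-as-autoencoder trick (fitting inputs to themselves) are illustrative choices, not EDF's configuration.

```python
# Project -> cluster -> per-cluster autoencoder -> reconstruction-error index.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

X = np.random.rand(600, 20)                       # stand-in sensor snapshots
X2 = PCA(n_components=2).fit_transform(X)         # 2-D projection for inspection
labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X2)

autoencoders = {}
for k in range(3):                                # one autoencoder per state
    Xk = X[labels == k]
    ae = MLPRegressor(hidden_layer_sizes=(8,), max_iter=500, random_state=0)
    autoencoders[k] = ae.fit(Xk, Xk)              # identity target = autoencoding

x_new = X[:1]
proximity = {k: float(np.mean((ae.predict(x_new) - x_new) ** 2))
             for k, ae in autoencoders.items()}
print(min(proximity, key=proximity.get), proximity)  # closest operating state
```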
2504.03654 | Keondo Park | Keondo Park, You Rim Choi, Inhoe Lee, Hyung-Sin Kim | PointSplit: Towards On-device 3D Object Detection with Heterogeneous
Low-power Accelerators | null | IPSN 23. ACM, 67-81 (2023) | 10.1145/3583120.3587045 | null | cs.DC cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Running deep learning models on resource-constrained edge devices has drawn
significant attention due to its fast response, privacy preservation, and
robust operation regardless of Internet connectivity. While these devices
already cope with various intelligent tasks, the latest edge devices that are
equipped with multiple types of low-power accelerators (i.e., both mobile GPU
and NPU) can bring another opportunity; a task that used to be too heavy for an
edge device in the single-accelerator world might become viable in the upcoming
heterogeneous-accelerator world. To realize the potential in the context of 3D
object detection, we identify several technical challenges and propose
PointSplit, a novel 3D object detection framework for multi-accelerator edge
devices that addresses the problems. Specifically, our PointSplit design
includes (1) 2D semantics-aware biased point sampling, (2) parallelized 3D
feature extraction, and (3) role-based group-wise quantization. We implement
PointSplit on TensorFlow Lite and evaluate it on a customized hardware platform
comprising both mobile GPU and EdgeTPU. Experimental results on representative
RGB-D datasets, SUN RGB-D and Scannet V2, demonstrate that PointSplit on a
multi-accelerator device is 24.7 times faster with similar accuracy compared to
the full-precision, 2D-3D fusion-based 3D detector on a GPU-only device.
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 07:17:04 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Park",
"Keondo",
""
],
[
"Choi",
"You Rim",
""
],
[
"Lee",
"Inhoe",
""
],
[
"Kim",
"Hyung-Sin",
""
]
] | TITLE: PointSplit: Towards On-device 3D Object Detection with Heterogeneous
Low-power Accelerators
ABSTRACT: Running deep learning models on resource-constrained edge devices has drawn
significant attention due to its fast response, privacy preservation, and
robust operation regardless of Internet connectivity. While these devices
already cope with various intelligent tasks, the latest edge devices that are
equipped with multiple types of low-power accelerators (i.e., both mobile GPU
and NPU) can bring another opportunity; a task that used to be too heavy for an
edge device in the single-accelerator world might become viable in the upcoming
heterogeneous-accelerator world. To realize the potential in the context of 3D
object detection, we identify several technical challenges and propose
PointSplit, a novel 3D object detection framework for multi-accelerator edge
devices that addresses the problems. Specifically, our PointSplit design
includes (1) 2D semantics-aware biased point sampling, (2) parallelized 3D
feature extraction, and (3) role-based group-wise quantization. We implement
PointSplit on TensorFlow Lite and evaluate it on a customized hardware platform
comprising both mobile GPU and EdgeTPU. Experimental results on representative
RGB-D datasets, SUN RGB-D and Scannet V2, demonstrate that PointSplit on a
multi-accelerator device is 24.7 times faster with similar accuracy compared to
the full-precision, 2D-3D fusion-based 3D detector on a GPU-only device.
|
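The 2D-semantics-aware biased point sampling component can be illustrated as follows: 3D points projected into the image are sampled with probability weighted up wherever a 2D semantic score map is high. The camera projection and score map here are stand-ins; PointSplit derives them from detector outputs on the RGB frame.

```python
# Biased point sampling weighted by a 2D semantic score map (toy version).
import numpy as np

def biased_sample(points_uv, sem_map, n_sample, bias=4.0, rng=None):
    rng = rng or np.random.default_rng(0)
    u = points_uv[:, 0].clip(0, sem_map.shape[1] - 1).astype(int)
    v = points_uv[:, 1].clip(0, sem_map.shape[0] - 1).astype(int)
    w = 1.0 + bias * sem_map[v, u]            # foreground points weighted up
    p = w / w.sum()
    return rng.choice(len(points_uv), size=n_sample, replace=False, p=p)

sem_map = np.zeros((480, 640)); sem_map[100:300, 200:400] = 1.0  # fake detection
uv = np.random.default_rng(1).uniform([0, 0], [640, 480], size=(5000, 2))
print(biased_sample(uv, sem_map, n_sample=1024).shape)  # (1024,)
```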
2504.03681 | Aseem Subedi | Aseem Subedi, Rahul, Lora Cavuoto, Steven Schwaitzberg, Matthew
Hackett, Jack Norfleet, and Suvranu De | End-to-End Deep Learning for Real-Time Neuroimaging-Based Assessment of
Bimanual Motor Skills | null | null | null | null | eess.SP cs.LG q-bio.NC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The real-time assessment of complex motor skills presents a challenge in
fields such as surgical training and rehabilitation. Recent advancements in
neuroimaging, particularly functional near-infrared spectroscopy (fNIRS), have
enabled objective assessment of such skills with high accuracy. However, these
techniques are hindered by extensive preprocessing requirements to extract
neural biomarkers. This study presents a novel end-to-end deep learning
framework that processes raw fNIRS signals directly, eliminating the need for
intermediate preprocessing steps. The model was evaluated on datasets from
three distinct bimanual motor tasks--suturing, pattern cutting, and
endotracheal intubation (ETI)--using performance metrics derived from both
training and retention datasets. It achieved a mean classification accuracy of
93.9% (SD 4.4) and a generalization accuracy of 92.6% (SD 1.9) on unseen skill
retention datasets, with a leave-one-subject-out cross-validation yielding an
accuracy of 94.1% (SD 3.6). Contralateral prefrontal cortex activations
exhibited task-specific discriminative power, while motor cortex activations
consistently contributed to accurate classification. The model also
demonstrated resilience to neurovascular coupling saturation caused by extended
task sessions, maintaining robust performance across trials. Comparative
analysis confirms that the end-to-end model performs on par with or surpasses
baseline models optimized for fully processed fNIRS data, with statistically
similar (p<0.05) or improved prediction accuracies. By eliminating the need for
extensive signal preprocessing, this work provides a foundation for real-time,
non-invasive assessment of bimanual motor skills in medical training
environments, with potential applications in robotics, rehabilitation, and
sports.
| [
{
"version": "v1",
"created": "Fri, 21 Mar 2025 22:56:54 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Subedi",
"Aseem",
""
],
[
"Rahul",
"",
""
],
[
"Cavuoto",
"Lora",
""
],
[
"Schwaitzberg",
"Steven",
""
],
[
"Hackett",
"Matthew",
""
],
[
"Norfleet",
"Jack",
""
],
[
"De",
"Suvranu",
""
]
] | TITLE: End-to-End Deep Learning for Real-Time Neuroimaging-Based Assessment of
Bimanual Motor Skills
ABSTRACT: The real-time assessment of complex motor skills presents a challenge in
fields such as surgical training and rehabilitation. Recent advancements in
neuroimaging, particularly functional near-infrared spectroscopy (fNIRS), have
enabled objective assessment of such skills with high accuracy. However, these
techniques are hindered by extensive preprocessing requirements to extract
neural biomarkers. This study presents a novel end-to-end deep learning
framework that processes raw fNIRS signals directly, eliminating the need for
intermediate preprocessing steps. The model was evaluated on datasets from
three distinct bimanual motor tasks--suturing, pattern cutting, and
endotracheal intubation (ETI)--using performance metrics derived from both
training and retention datasets. It achieved a mean classification accuracy of
93.9% (SD 4.4) and a generalization accuracy of 92.6% (SD 1.9) on unseen skill
retention datasets, with a leave-one-subject-out cross-validation yielding an
accuracy of 94.1% (SD 3.6). Contralateral prefrontal cortex activations
exhibited task-specific discriminative power, while motor cortex activations
consistently contributed to accurate classification. The model also
demonstrated resilience to neurovascular coupling saturation caused by extended
task sessions, maintaining robust performance across trials. Comparative
analysis confirms that the end-to-end model performs on par with or surpasses
baseline models optimized for fully processed fNIRS data, with statistically
similar (p<0.05) or improved prediction accuracies. By eliminating the need for
extensive signal preprocessing, this work provides a foundation for real-time,
non-invasive assessment of bimanual motor skills in medical training
environments, with potential applications in robotics, rehabilitation, and
sports.
|
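The end-to-end idea, classifying directly from raw signals without extracting neural biomarkers first, reduces to a compact sketch: a 1D CNN consumes raw multichannel windows and emits skill-class logits. The channel count, window length, and architecture below are assumptions, not the paper's network.

```python
# Minimal end-to-end classifier over raw multichannel fNIRS-style windows.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Conv1d(16, 32, kernel_size=7, stride=2), nn.ReLU(),   # 16 raw channels
    nn.Conv1d(32, 64, kernel_size=7, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool1d(1), nn.Flatten(),
    nn.Linear(64, 3),                                        # 3 skill classes
)
raw = torch.randn(8, 16, 1024)        # batch of raw windows, no preprocessing
print(model(raw).shape)               # torch.Size([8, 3])
```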
2504.03687 | Hang Xiao | Hanyu Liu, Ying Yu, Hang Xiao, Siyao Li, Xuze Li, Jiarui Li, Haotian
Tang | Process Optimization and Deployment for Sensor-Based Human Activity
Recognition Based on Deep Learning | null | null | null | null | eess.SP cs.AI cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Sensor-based human activity recognition is a key technology for many
human-centered intelligent applications. However, this research is still in its
infancy and faces many unresolved challenges. To address these, we propose a
comprehensive optimization process approach centered on multi-attention
interaction. We first utilize unsupervised statistical feature-guided diffusion
models for highly adaptive data enhancement, and introduce a novel network
architecture, the Multi-branch Spatiotemporal Interaction Network, which uses
multi-branch features at different levels for effective sequential
spatio-temporal interaction to enhance the ability to mine advanced latent
features. In addition, we adopt a multi-loss function fusion strategy in the
training phase to dynamically adjust the fusion weights between batches to
optimize the training results. Finally, we also conducted actual deployment on
embedded devices to extensively test the practical feasibility of the proposed
method in existing work. We conduct extensive testing on three public datasets,
including ablation studies, comparisons of related work, and embedded
deployments.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 16:48:16 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Liu",
"Hanyu",
""
],
[
"Yu",
"Ying",
""
],
[
"Xiao",
"Hang",
""
],
[
"Li",
"Siyao",
""
],
[
"Li",
"Xuze",
""
],
[
"Li",
"Jiarui",
""
],
[
"Tang",
"Haotian",
""
]
] | TITLE: Process Optimization and Deployment for Sensor-Based Human Activity
Recognition Based on Deep Learning
ABSTRACT: Sensor-based human activity recognition is a key technology for many
human-centered intelligent applications. However, this research is still in its
infancy and faces many unresolved challenges. To address these, we propose a
comprehensive optimization process approach centered on multi-attention
interaction. We first utilize unsupervised statistical feature-guided diffusion
models for highly adaptive data enhancement, and introduce a novel network
architecture, the Multi-branch Spatiotemporal Interaction Network, which uses
multi-branch features at different levels for effective sequential
spatio-temporal interaction to enhance the ability to mine advanced latent
features. In addition, we adopt a multi-loss function fusion strategy in the
training phase to dynamically adjust the fusion weights between batches to
optimize the training results. Finally, we also conducted actual deployment on
embedded devices to extensively test the practical feasibility of the proposed
method in existing work. We conduct extensive testing on three public datasets,
including ablation studies, comparisons of related work, and embedded
deployments.
|
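The multi-loss fusion strategy with per-batch dynamic weights might look like the following, where weights are recomputed from the detached loss values each batch so that currently large losses receive more emphasis. This softmax rule is one plausible reading of "dynamically adjust the fusion weights between batches", not the authors' exact formula.

```python
# Per-batch dynamic loss weighting via softmax over detached loss values.
import torch

def fuse_losses(losses, temperature=1.0):
    vals = torch.stack([l.detach() for l in losses])
    w = torch.softmax(vals / temperature, dim=0)   # recomputed every batch
    return sum(wi * li for wi, li in zip(w, losses))

ce = torch.tensor(1.2, requires_grad=True)
contrastive = torch.tensor(0.3, requires_grad=True)
total = fuse_losses([ce, contrastive])
total.backward()
print(float(total))
```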
2504.03690 | Selim Firat Yilmaz | Selim F. Yilmaz, Can Karamanli, Deniz Gunduz | Learning to Interfere in Non-Orthogonal Multiple-Access Joint
Source-Channel Coding | 18 pages, 19 figures | null | null | null | cs.NI cs.AI cs.IT cs.LG math.IT | http://creativecommons.org/licenses/by/4.0/ | We consider multiple transmitters aiming to communicate their source signals
(e.g., images) over a multiple access channel (MAC). Conventional communication
systems minimize interference by orthogonally allocating resources (time and/or
bandwidth) among users, which limits their capacity. We introduce a machine
learning (ML)-aided wireless image transmission method that merges compression
and channel coding using a multi-view autoencoder, which allows the
transmitters to use all the available channel resources simultaneously,
resulting in a non-orthogonal multiple access (NOMA) scheme. The receiver must
recover all the images from the received superposed signal, while also
associating each image with its transmitter. Traditional ML models deal with
individual samples, whereas our model allows signals from different users to
interfere in order to leverage gains from NOMA under limited bandwidth and
power constraints. We introduce a progressive fine-tuning algorithm that
doubles the number of users at each iteration, maintaining initial performance
with orthogonalized user-specific projections, which is then improved through
fine-tuning steps. Remarkably, our method scales up to 16 users and beyond,
with only a 0.6% increase in the number of trainable parameters compared to a
single-user model, significantly enhancing recovered image quality and
outperforming existing NOMA-based methods over a wide range of datasets,
metrics, and channel conditions. Our approach paves the way for more efficient
and robust multi-user communication systems, leveraging innovative ML
components and strategies.
| [
{
"version": "v1",
"created": "Sun, 23 Mar 2025 12:27:20 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Yilmaz",
"Selim F.",
""
],
[
"Karamanli",
"Can",
""
],
[
"Gunduz",
"Deniz",
""
]
] | TITLE: Learning to Interfere in Non-Orthogonal Multiple-Access Joint
Source-Channel Coding
ABSTRACT: We consider multiple transmitters aiming to communicate their source signals
(e.g., images) over a multiple access channel (MAC). Conventional communication
systems minimize interference by orthogonally allocating resources (time and/or
bandwidth) among users, which limits their capacity. We introduce a machine
learning (ML)-aided wireless image transmission method that merges compression
and channel coding using a multi-view autoencoder, which allows the
transmitters to use all the available channel resources simultaneously,
resulting in a non-orthogonal multiple access (NOMA) scheme. The receiver must
recover all the images from the received superposed signal, while also
associating each image with its transmitter. Traditional ML models deal with
individual samples, whereas our model allows signals from different users to
interfere in order to leverage gains from NOMA under limited bandwidth and
power constraints. We introduce a progressive fine-tuning algorithm that
doubles the number of users at each iteration, maintaining initial performance
with orthogonalized user-specific projections, which is then improved through
fine-tuning steps. Remarkably, our method scales up to 16 users and beyond,
with only a 0.6% increase in the number of trainable parameters compared to a
single-user model, significantly enhancing recovered image quality and
outperforming existing NOMA-based methods over a wide range of datasets,
metrics, and channel conditions. Our approach paves the way for more efficient
and robust multi-user communication systems, leveraging innovative ML
components and strategies.
|
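The non-orthogonal scheme can be sketched with linear maps: each user's encoder produces a channel signal, the signals superpose over a shared noisy channel, and a single decoder recovers and associates all images from the sum. Dimensions, the AWGN noise level, and the linear layers are illustrative; the paper uses a multi-view autoencoder with progressive fine-tuning.

```python
# Toy non-orthogonal multiple access: superposed signals, joint decoder.
import torch
import torch.nn as nn

n_users, img_dim, ch_dim = 2, 784, 128
encoders = nn.ModuleList(nn.Linear(img_dim, ch_dim) for _ in range(n_users))
decoder = nn.Linear(ch_dim, n_users * img_dim)   # recovers and associates images

imgs = torch.randn(4, n_users, img_dim)
tx = sum(enc(imgs[:, i]) for i, enc in enumerate(encoders))  # superposed signal
rx = tx + 0.1 * torch.randn_like(tx)                         # AWGN channel
recon = decoder(rx).view(4, n_users, img_dim)
print(nn.functional.mse_loss(recon, imgs).item())
```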
2504.03695 | Haroon Lone | Nilesh Kumar Sahu, Snehil Gupta, Haroon R Lone | Are Anxiety Detection Models Generalizable? A Cross-Activity and
Cross-Population Study Using Wearables | null | null | null | null | eess.SP cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Anxiety-provoking activities, such as public speaking, can trigger heightened
anxiety responses in individuals with anxiety disorders. Recent research
suggests that physiological signals, including electrocardiogram (ECG) and
electrodermal activity (EDA), collected via wearable devices, can be used to
detect anxiety in such contexts through machine learning models. However, the
generalizability of these anxiety prediction models across different activities
and diverse populations remains underexplored-an essential step for assessing
model bias and fostering user trust in broader applications. To address this
gap, we conducted a study with 111 participants who engaged in three
anxiety-provoking activities. Utilizing both our collected dataset and two
well-known publicly available datasets, we evaluated the generalizability of
anxiety detection models within participants (for both same-activity and
cross-activity scenarios) and across participants (within-activity and
cross-activity). In total, we trained and tested more than 3348 anxiety
detection models (using six classifiers, 31 feature sets, and 18 train-test
configurations). Our results indicate that three key metrics (AUROC, recall for
anxious states, and recall for non-anxious states) were slightly above the
baseline score of 0.5. The best AUROC scores ranged from 0.62 to 0.73, with
recall for the anxious class spanning 35.19% to 74.3%. Interestingly, model
performance (as measured by AUROC) remained relatively stable across different
activities and participant groups, though recall for the anxious class did
exhibit some variation.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 11:43:34 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Sahu",
"Nilesh Kumar",
""
],
[
"Gupta",
"Snehil",
""
],
[
"Lone",
"Haroon R",
""
]
] | TITLE: Are Anxiety Detection Models Generalizable? A Cross-Activity and
Cross-Population Study Using Wearables
ABSTRACT: Anxiety-provoking activities, such as public speaking, can trigger heightened
anxiety responses in individuals with anxiety disorders. Recent research
suggests that physiological signals, including electrocardiogram (ECG) and
electrodermal activity (EDA), collected via wearable devices, can be used to
detect anxiety in such contexts through machine learning models. However, the
generalizability of these anxiety prediction models across different activities
and diverse populations remains underexplored, an essential step for assessing
model bias and fostering user trust in broader applications. To address this
gap, we conducted a study with 111 participants who engaged in three
anxiety-provoking activities. Utilizing both our collected dataset and two
well-known publicly available datasets, we evaluated the generalizability of
anxiety detection models within participants (for both same-activity and
cross-activity scenarios) and across participants (within-activity and
cross-activity). In total, we trained and tested more than 3348 anxiety
detection models (using six classifiers, 31 feature sets, and 18 train-test
configurations). Our results indicate that three key metrics (AUROC, recall for
anxious states, and recall for non-anxious states) were slightly above the
baseline score of 0.5. The best AUROC scores ranged from 0.62 to 0.73, with
recall for the anxious class spanning 35.19% to 74.3%. Interestingly, model
performance (as measured by AUROC) remained relatively stable across different
activities and participant groups, though recall for the anxious class did
exhibit some variation.
|
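The cross-activity evaluation protocol reduces to a simple skeleton: train on one activity's feature windows, test on another's, and report AUROC for every ordered pair. The data below is synthetic and the classifier is only one of many the study tried; this illustrates the train-test configuration grid, not the study's results.

```python
# Cross-activity generalization skeleton on synthetic stand-in data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
data = {a: (rng.normal(size=(200, 12)), rng.integers(0, 2, 200))
        for a in ["speech", "interview", "arithmetic"]}

for train_act in data:
    for test_act in data:
        if train_act == test_act:
            continue
        Xtr, ytr = data[train_act]
        Xte, yte = data[test_act]
        clf = RandomForestClassifier(random_state=0).fit(Xtr, ytr)
        auc = roc_auc_score(yte, clf.predict_proba(Xte)[:, 1])
        print(f"{train_act} -> {test_act}: AUROC={auc:.2f}")
```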
2504.03700 | Xiaohe Li | Xiaohe Li, Haohua Wu, Jiahao Li, Zide Fan, Kaixin Zhang, Xinming Li,
Yunping Ge, Xinyu Zhao | SAFE: Self-Adjustment Federated Learning Framework for Remote Sensing
Collaborative Perception | null | null | null | null | cs.LG cs.AI eess.SP | http://creativecommons.org/licenses/by/4.0/ | The rapid increase in remote sensing satellites has led to the emergence of
distributed space-based observation systems. However, existing distributed
remote sensing models often rely on centralized training, resulting in data
leakage, communication overhead, and reduced accuracy due to data distribution
discrepancies across platforms. To address these challenges, we propose the
\textit{Self-Adjustment FEderated Learning} (SAFE) framework, which
innovatively leverages federated learning to enhance collaborative sensing in
remote sensing scenarios. SAFE introduces four key strategies: (1)
\textit{Class Rectification Optimization}, which autonomously addresses class
imbalance under unknown local and global distributions. (2) \textit{Feature
Alignment Update}, which mitigates Non-IID data issues via locally controlled
EMA updates. (3) \textit{Dual-Factor Modulation Rheostat}, which dynamically
balances optimization effects during training. (4) \textit{Adaptive Context
Enhancement}, which is designed to improve model performance by dynamically
refining foreground regions, ensuring computational efficiency with accuracy
improvement across distributed satellites. Experiments on real-world image
classification and object segmentation datasets validate the effectiveness and
reliability of the SAFE framework in complex remote sensing scenarios.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 06:39:34 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Li",
"Xiaohe",
""
],
[
"Wu",
"Haohua",
""
],
[
"Li",
"Jiahao",
""
],
[
"Fan",
"Zide",
""
],
[
"Zhang",
"Kaixin",
""
],
[
"Li",
"Xinming",
""
],
[
"Ge",
"Yunping",
""
],
[
"Zhao",
"Xinyu",
""
]
] | TITLE: SAFE: Self-Adjustment Federated Learning Framework for Remote Sensing
Collaborative Perception
ABSTRACT: The rapid increase in remote sensing satellites has led to the emergence of
distributed space-based observation systems. However, existing distributed
remote sensing models often rely on centralized training, resulting in data
leakage, communication overhead, and reduced accuracy due to data distribution
discrepancies across platforms. To address these challenges, we propose the
\textit{Self-Adjustment FEderated Learning} (SAFE) framework, which
innovatively leverages federated learning to enhance collaborative sensing in
remote sensing scenarios. SAFE introduces four key strategies: (1)
\textit{Class Rectification Optimization}, which autonomously addresses class
imbalance under unknown local and global distributions. (2) \textit{Feature
Alignment Update}, which mitigates Non-IID data issues via locally controlled
EMA updates. (3) \textit{Dual-Factor Modulation Rheostat}, which dynamically
balances optimization effects during training. (4) \textit{Adaptive Context
Enhancement}, which is designed to improve model performance by dynamically
refining foreground regions, ensuring computational efficiency with accuracy
improvement across distributed satellites. Experiments on real-world image
classification and object segmentation datasets validate the effectiveness and
reliability of the SAFE framework in complex remote sensing scenarios.
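
A minimal sketch of an EMA-style local update in the spirit of SAFE's Feature
Alignment Update; the exact rule is not given in the abstract, so the momentum
value and function names here are assumptions.

```python
import copy
import torch

def ema_local_update(local_model, global_model, momentum=0.95):
    """Blend newly received global weights into the local model with a
    locally controlled exponential moving average, damping Non-IID drift."""
    with torch.no_grad():
        for p_local, p_global in zip(local_model.parameters(),
                                     global_model.parameters()):
            p_local.mul_(momentum).add_(p_global, alpha=1.0 - momentum)

# Toy usage: each satellite keeps its own EMA-smoothed copy of the global model.
global_model = torch.nn.Linear(16, 4)
local_model = copy.deepcopy(global_model)
ema_local_update(local_model, global_model)
```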
|
2504.03701 | Yuqi Li | Yuqi Li, Han Zhang, Xiaofan Gui, Zhao Chen, Yu Li, Xiwen Chi, Quan
Zhou, Shun Zheng, Ziheng Lu, Wei Xu, Jiang Bian, Liquan Chen, Hong Li | Chemistry-aware battery degradation prediction under simulated
real-world cyclic protocols | null | null | null | null | eess.SP cs.LG | http://creativecommons.org/licenses/by/4.0/ | Battery degradation is governed by complex and randomized cyclic conditions,
yet existing modeling and prediction frameworks usually rely on rigid,
unchanging protocols that fail to capture real-world dynamics. The stochastic
electrical signals make such prediction extremely challenging, while, on the
other hand, they provide abundant additional information, such as voltage
fluctuations, which may probe the degradation mechanisms. Here, we present
chemistry-aware battery degradation prediction under dynamic conditions with
machine learning, which integrates hidden Markov processes for realistic power
simulations, an automated batch-testing system that generates a large
electrochemical dataset under randomized conditions, an interfacial chemistry
database derived from high-throughput X-ray photoelectron spectroscopy for
mechanistic probing, and a machine learning model for prediction. By
automatically constructing a polynomial-scale feature space from irregular
electrochemical curves, our model accurately predicts both battery life and
critical knee points. This feature space also predicts the composition of the
solid electrolyte interphase, revealing six distinct failure
mechanisms, demonstrating a viable approach to using electrical signals to infer
interfacial chemistry. This work establishes a scalable and adaptive framework
for integrating chemical engineering and data science to advance noninvasive
diagnostics and optimize processes for more durable and sustainable energy
storage technologies.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 07:01:50 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Li",
"Yuqi",
""
],
[
"Zhang",
"Han",
""
],
[
"Gui",
"Xiaofan",
""
],
[
"Chen",
"Zhao",
""
],
[
"Li",
"Yu",
""
],
[
"Chi",
"Xiwen",
""
],
[
"Zhou",
"Quan",
""
],
[
"Zheng",
"Shun",
""
],
[
"Lu",
"Ziheng",
""
],
[
"Xu",
"Wei",
""
],
[
"Bian",
"Jiang",
""
],
[
"Chen",
"Liquan",
""
],
[
"Li",
"Hong",
""
]
] | TITLE: Chemistry-aware battery degradation prediction under simulated
real-world cyclic protocols
ABSTRACT: Battery degradation is governed by complex and randomized cyclic conditions,
yet existing modeling and prediction frameworks usually rely on rigid,
unchanging protocols that fail to capture real-world dynamics. The stochastic
electrical signals make such prediction extremely challenging, while, on the
other hand, they provide abundant additional information, such as voltage
fluctuations, which may probe the degradation mechanisms. Here, we present
chemistry-aware battery degradation prediction under dynamic conditions with
machine learning, which integrates hidden Markov processes for realistic power
simulations, an automated batch-testing system that generates a large
electrochemical dataset under randomized conditions, an interfacial chemistry
database derived from high-throughput X-ray photoelectron spectroscopy for
mechanistic probing, and a machine learning model for prediction. By
automatically constructing a polynomial-scale feature space from irregular
electrochemical curves, our model accurately predicts both battery life and
critical knee points. This feature space also predicts the composition of the
solid electrolyte interphase, revealing six distinct failure
mechanisms, demonstrating a viable approach to using electrical signals to infer
interfacial chemistry. This work establishes a scalable and adaptive framework
for integrating chemical engineering and data science to advance noninvasive
diagnostics and optimize processes for more durable and sustainable energy
storage technologies.
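
Illustrative only: a small Markov chain over discharge power states, echoing
the hidden-Markov-based power simulation described above; the states and
transition matrix are invented for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
states = np.array([0.5, 1.0, 2.0])   # assumed discharge power levels (W)
P = np.array([[0.7, 0.2, 0.1],       # row-stochastic transition matrix (assumed)
              [0.3, 0.5, 0.2],
              [0.2, 0.3, 0.5]])

def simulate_protocol(n_steps, s0=0):
    """Generate one randomized cyclic power profile."""
    s, profile = s0, []
    for _ in range(n_steps):
        profile.append(states[s])
        s = rng.choice(3, p=P[s])
    return np.array(profile)

print(simulate_protocol(200)[:10])
```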
|
2504.03702 | Zhihan Jiang | Zhihan Jiang, Yujie Huang, Guangba Yu, Junjie Huang, Jiazhen Gu and
Michael R. Lyu | Hierarchical Prediction-based Management for LMaaS Systems | null | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have revolutionized fields such as natural
language processing and software engineering, fueling the growth of
Language-Model-as-a-Service (LMaaS) platforms hosted by industry leaders like
OpenAI. These platforms handle millions of queries daily, requiring efficient
management to reduce serving latency and meet Service Level Objectives (SLOs)
while optimizing resource utilization. However, conventional cloud service
management techniques, originally designed for traditional workloads, are
suboptimal for LMaaS due to its dynamic service workloads and variable request
loads. To address this, we propose PreServe, a tailored LMaaS management
framework centered on hierarchical prediction. PreServe incorporates a service
workload predictor to estimate periodic token density at a coarse granularity
and a novel request load predictor to assess the resource demand of individual
LLM requests, enabling the construction of a load anticipator for each LLM
instance. By integrating both long-term and short-term predictions, PreServe
adjusts resource allocation in advance, mitigating the risks of instance under-
or over-provisioning. Moreover, PreServe optimizes request routing by
considering both current and anticipated future instance loads, ensuring
balanced load distribution across instances. Evaluations on real-world LMaaS
production datasets demonstrate that PreServe outperforms state-of-the-art
approaches, achieving over 45.9% reduction in tail latency, an average 44.5%
decrease in resource consumption, while incurring only 0.23% additional
overhead.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 07:41:28 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Jiang",
"Zhihan",
""
],
[
"Huang",
"Yujie",
""
],
[
"Yu",
"Guangba",
""
],
[
"Huang",
"Junjie",
""
],
[
"Gu",
"Jiazhen",
""
],
[
"Lyu",
"Michael R.",
""
]
] | TITLE: Hierarchical Prediction-based Management for LMaaS Systems
ABSTRACT: Large Language Models (LLMs) have revolutionized fields such as natural
language processing and software engineering, fueling the growth of
Language-Model-as-a-Service (LMaaS) platforms hosted by industry leaders like
OpenAI. These platforms handle millions of queries daily, requiring efficient
management to reduce serving latency and meet Service Level Objectives (SLOs)
while optimizing resource utilization. However, conventional cloud service
management techniques, originally designed for traditional workloads, are
suboptimal for LMaaS due to its dynamic service workloads and variable request
loads. To address this, we propose PreServe, a tailored LMaaS management
framework centered on hierarchical prediction. PreServe incorporates a service
workload predictor to estimate periodic token density at a coarse granularity
and a novel request load predictor to assess the resource demand of individual
LLM requests, enabling the construction of a load anticipator for each LLM
instance. By integrating both long-term and short-term predictions, PreServe
adjusts resource allocation in advance, mitigating the risks of instance under-
or over-provisioning. Moreover, PreServe optimizes request routing by
considering both current and anticipated future instance loads, ensuring
balanced load distribution across instances. Evaluations on real-world LMaaS
production datasets demonstrate that PreServe outperforms state-of-the-art
approaches, achieving over 45.9% reduction in tail latency, an average 44.5%
decrease in resource consumption, while incurring only 0.23% additional
overhead.
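
A hedged sketch of the routing idea: an instance's anticipated load combines
its current load with the predicted demand of its queued requests, and each new
request goes to the instance with the lowest anticipated load. The predictor
and data layout are stand-ins, not PreServe's API.

```python
def anticipated_load(instance, predict_request_load):
    return instance["current_load"] + sum(
        predict_request_load(r) for r in instance["queued_requests"])

def route(request, instances, predict_request_load):
    """Send the request to the instance with the lowest anticipated load."""
    target = min(instances,
                 key=lambda i: anticipated_load(i, predict_request_load))
    target["queued_requests"].append(request)
    return target

# Toy demand model: assume cost scales with prompt length (an assumption).
predict = lambda r: len(r["prompt"]) / 100.0
instances = [{"current_load": 0.4, "queued_requests": []},
             {"current_load": 0.7, "queued_requests": []}]
route({"prompt": "Summarize this document ..."}, instances, predict)
```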
|
2504.03703 | Mohamed Nafea | Mario Padilla Rodriguez and Mohamed Nafea | Hierarchical Attention Network for Interpretable ECG-based Heart Disease
Classification | Work in progress. 7 pages, 4 figures | null | null | null | eess.SP cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Cardiovascular disease remains one of the leading causes of mortality
worldwide, underscoring the need for accurate as well as interpretable
diagnostic machine learning tools. In this work, we investigate heart disease
classification using electrocardiogram (ECG) data from two widely-utilized
datasets: The MIT-BIH Arrhythmia and the PTB-XL datasets. We adapt a
hierarchical attention network (HAN), originally developed for text
classification, into an ECG-based heart-disease classification task. Our
adapted HAN incorporates two attention layers that focus on ECG data segments
of varying sizes. We conduct a comparative analysis between our adapted HAN and
a more sophisticated state-of-the-art architecture, featuring a network with
convolution, attention, and transformer layers (CAT-Net). Our empirical
evaluation encompasses multiple aspects including test accuracy (quantified by
0-1 loss); model complexity (measured by the number of model parameters); and
interpretability (through attention map visualization). Our adapted HAN
demonstrates comparable test accuracy with significant reductions in model
complexity and enhanced interpretability analysis: For the MIT-BIH dataset, our
adapted HAN achieves 98.55\% test accuracy compared to 99.14\% for CAT-Net,
while reducing the number of model parameters by a factor of 15.6. For the
PTB-XL dataset, our adapted HAN achieves a 19.3-fold reduction in model
complexity compared to CAT-Net, with only a 5\% lower test accuracy. From an
interpretability perspective, the significantly simpler architecture and the
hierarchical nature of our adapted HAN model facilitate a more straightforward
interpretability analysis based on visualizing attention weights. Building on
this advantage, we conduct an interpretability analysis of our HAN that
highlights the regions of the ECG signal most relevant to the model's
decisions.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 13:06:06 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Rodriguez",
"Mario Padilla",
""
],
[
"Nafea",
"Mohamed",
""
]
] | TITLE: Hierarchical Attention Network for Interpretable ECG-based Heart Disease
Classification
ABSTRACT: Cardiovascular disease remains one of the leading causes of mortality
worldwide, underscoring the need for accurate as well as interpretable
diagnostic machine learning tools. In this work, we investigate heart disease
classification using electrocardiogram (ECG) data from two widely-utilized
datasets: The MIT-BIH Arrhythmia and the PTB-XL datasets. We adapt a
hierarchical attention network (HAN), originally developed for text
classification, into an ECG-based heart-disease classification task. Our
adapted HAN incorporates two attention layers that focus on ECG data segments
of varying sizes. We conduct a comparative analysis between our adapted HAN and
a more sophisticated state-of-the-art architecture, featuring a network with
convolution, attention, and transformer layers (CAT-Net). Our empirical
evaluation encompasses multiple aspects including test accuracy (quantified by
0-1 loss); model complexity (measured by the number of model parameters); and
interpretability (through attention map visualization). Our adapted HAN
demonstrates comparable test accuracy with significant reductions in model
complexity and enhanced interpretability analysis: For the MIT-BIH dataset, our
adapted HAN achieves 98.55\% test accuracy compared to 99.14\% for CAT-Net,
while reducing the number of model parameters by a factor of 15.6. For the
PTB-XL dataset, our adapted HAN achieves a 19.3-fold reduction in model
complexity compared to CAT-Net, with only a 5\% lower test accuracy. From an
interpretability perspective, the significantly simpler architecture and the
hierarchical nature of our adapted HAN model facilitate a more straightforward
interpretability analysis based on visualizing attention weights. Building on
this advantage, we conduct an interpretability analysis of our HAN that
highlights the regions of the ECG signal most relevant to the model's
decisions.
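
A minimal attention-pooling layer of the kind stacked twice in the adapted HAN
(segment level, then record level); dimensions and names are assumptions, and
the returned weights are what an attention-map visualization would display.

```python
import torch
import torch.nn as nn

class AttentionPool(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.proj = nn.Linear(dim, dim)
        self.context = nn.Linear(dim, 1, bias=False)

    def forward(self, h):                                # h: (batch, segments, dim)
        u = torch.tanh(self.proj(h))
        alpha = torch.softmax(self.context(u), dim=1)    # attention weights
        return (alpha * h).sum(dim=1), alpha             # pooled repr, weights

pool = AttentionPool(64)
pooled, weights = pool(torch.randn(2, 10, 64))           # weights: (2, 10, 1)
```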
|
2504.03706 | Yuzhu Lei | Yuzhu Lei, Guanding Yu | A multi-scale lithium-ion battery capacity prediction using mixture of
experts and patch-based MLP | null | null | null | null | eess.SP cs.LG cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Lithium-ion battery health management has become increasingly important as
the application of batteries expands. Precise forecasting of capacity
degradation is critical for ensuring the healthy usage of batteries. In this
paper, we innovatively propose MSPMLP, a multi-scale capacity prediction model
utilizing the mixture of experts (MoE) architecture and patch-based multi-layer
perceptron (MLP) blocks, to capture both the long-term degradation trend and
local capacity regeneration phenomena. Specifically, we utilize patch-based MLP
blocks with varying patch sizes to extract multi-scale features from the
capacity sequence. Leveraging the MoE architecture, the model adaptively
integrates the extracted features, thereby enhancing its capacity and
expressiveness. Finally, the future battery capacity is predicted based on the
integrated features, achieving high prediction accuracy and generalization.
Experimental results on the public NASA dataset indicate that MSPMLP achieves a
mean absolute error (MAE) of 0.0078, improving by 41.8\% compared to existing
methods. These findings highlight that MSPMLP, owing to its multi-scale
modeling capability and generalizability, provides a promising solution to the
battery capacity prediction challenges caused by capacity regeneration
phenomena and complex usage conditions. The code of this work is provided at
https://github.com/LeiYuzhu/CapacityPredict.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 13:59:48 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Lei",
"Yuzhu",
""
],
[
"Yu",
"Guanding",
""
]
] | TITLE: A multi-scale lithium-ion battery capacity prediction using mixture of
experts and patch-based MLP
ABSTRACT: Lithium-ion battery health management has become increasingly important as
the application of batteries expands. Precise forecasting of capacity
degradation is critical for ensuring the healthy usage of batteries. In this
paper, we innovatively propose MSPMLP, a multi-scale capacity prediction model
utilizing the mixture of experts (MoE) architecture and patch-based multi-layer
perceptron (MLP) blocks, to capture both the long-term degradation trend and
local capacity regeneration phenomena. Specifically, we utilize patch-based MLP
blocks with varying patch sizes to extract multi-scale features from the
capacity sequence. Leveraging the MoE architecture, the model adaptively
integrates the extracted features, thereby enhancing its capacity and
expressiveness. Finally, the future battery capacity is predicted based on the
integrated features, achieving high prediction accuracy and generalization.
Experimental results on the public NASA dataset indicate that MSPMLP achieves a
mean absolute error (MAE) of 0.0078, improving by 41.8\% compared to existing
methods. These findings highlight that MSPMLP, owing to its multi-scale
modeling capability and generalizability, provides a promising solution to the
battery capacity prediction challenges caused by capacity regeneration
phenomena and complex usage conditions. The code of this work is provided at
https://github.com/LeiYuzhu/CapacityPredict.
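
A hedged sketch of the MSPMLP idea: patch-based MLP experts with different
patch sizes, mixed by a softmax gate; layer sizes and the gating form are
assumptions (the linked repository has the actual model).

```python
import torch
import torch.nn as nn

class PatchMLP(nn.Module):
    def __init__(self, seq_len, patch, hidden=32):
        super().__init__()
        assert seq_len % patch == 0
        self.patch = patch
        self.mlp = nn.Sequential(nn.Linear(patch, hidden), nn.GELU(),
                                 nn.Linear(hidden, patch))

    def forward(self, x):                                # x: (batch, seq_len)
        b, L = x.shape
        p = x.view(b, L // self.patch, self.patch)       # split into patches
        return self.mlp(p).reshape(b, L)

class MoECapacity(nn.Module):
    def __init__(self, seq_len, patches=(4, 16)):
        super().__init__()
        self.experts = nn.ModuleList([PatchMLP(seq_len, p) for p in patches])
        self.gate = nn.Linear(seq_len, len(patches))
        self.head = nn.Linear(seq_len, 1)                # next-capacity prediction

    def forward(self, x):
        w = torch.softmax(self.gate(x), dim=-1)                  # (batch, E)
        feats = torch.stack([e(x) for e in self.experts], -1)    # (batch, L, E)
        return self.head((feats * w.unsqueeze(1)).sum(-1))

pred = MoECapacity(seq_len=64)(torch.randn(8, 64))
```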
|
2504.03707 | Naimul Mefraz Khan | Md Niaz Imtiaz and Naimul Khan | Towards Practical Emotion Recognition: An Unsupervised Source-Free
Approach for EEG Domain Adaptation | Under review | null | null | null | eess.SP cs.LG | http://creativecommons.org/licenses/by/4.0/ | Emotion recognition is crucial for advancing mental health, healthcare, and
technologies like brain-computer interfaces (BCIs). However, EEG-based emotion
recognition models face challenges in cross-domain applications due to the high
cost of labeled data and variations in EEG signals from individual differences
and recording conditions. Unsupervised domain adaptation methods typically
require access to source domain data, which may not always be feasible in
real-world scenarios due to privacy and computational constraints. Source-free
unsupervised domain adaptation (SF-UDA) has recently emerged as a solution,
enabling target domain adaptation without source data, but its application in
emotion recognition remains unexplored. We propose a novel SF-UDA approach for
EEG-based emotion classification across domains, introducing a multi-stage
framework that enhances model adaptability without requiring source data. Our
approach incorporates Dual-Loss Adaptive Regularization (DLAR) to minimize
prediction discrepancies on confident samples and align predictions with
expected pseudo-labels. Additionally, we introduce Localized Consistency
Learning (LCL), which enforces local consistency by promoting similar
predictions from reliable neighbors. These techniques together address domain
shift and reduce the impact of noisy pseudo-labels, a key challenge in
traditional SF-UDA models. Experiments on two widely used datasets, DEAP and
SEED, demonstrate the effectiveness of our method. Our approach significantly
outperforms state-of-the-art methods, achieving 65.84% accuracy when trained on
DEAP and tested on SEED, and 58.99% accuracy in the reverse scenario. It excels
at detecting both positive and negative emotions, making it well-suited for
practical emotion recognition applications.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 14:29:20 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Imtiaz",
"Md Niaz",
""
],
[
"Khan",
"Naimul",
""
]
] | TITLE: Towards Practical Emotion Recognition: An Unsupervised Source-Free
Approach for EEG Domain Adaptation
ABSTRACT: Emotion recognition is crucial for advancing mental health, healthcare, and
technologies like brain-computer interfaces (BCIs). However, EEG-based emotion
recognition models face challenges in cross-domain applications due to the high
cost of labeled data and variations in EEG signals from individual differences
and recording conditions. Unsupervised domain adaptation methods typically
require access to source domain data, which may not always be feasible in
real-world scenarios due to privacy and computational constraints. Source-free
unsupervised domain adaptation (SF-UDA) has recently emerged as a solution,
enabling target domain adaptation without source data, but its application in
emotion recognition remains unexplored. We propose a novel SF-UDA approach for
EEG-based emotion classification across domains, introducing a multi-stage
framework that enhances model adaptability without requiring source data. Our
approach incorporates Dual-Loss Adaptive Regularization (DLAR) to minimize
prediction discrepancies on confident samples and align predictions with
expected pseudo-labels. Additionally, we introduce Localized Consistency
Learning (LCL), which enforces local consistency by promoting similar
predictions from reliable neighbors. These techniques together address domain
shift and reduce the impact of noisy pseudo-labels, a key challenge in
traditional SF-UDA models. Experiments on two widely used datasets, DEAP and
SEED, demonstrate the effectiveness of our method. Our approach significantly
outperforms state-of-the-art methods, achieving 65.84% accuracy when trained on
DEAP and tested on SEED, and 58.99% accuracy in the reverse scenario. It excels
at detecting both positive and negative emotions, making it well-suited for
practical emotion recognition applications.
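
A hedged sketch of the DLAR ingredient: align predictions with their
pseudo-labels only on confident target samples; the threshold and loss form are
assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def dlar_loss(logits, conf_threshold=0.9):
    probs = F.softmax(logits, dim=1)
    conf, pseudo = probs.max(dim=1)          # confidence and pseudo-label
    mask = conf >= conf_threshold            # keep confident samples only
    if mask.sum() == 0:
        return logits.new_zeros(())
    return F.cross_entropy(logits[mask], pseudo[mask])

loss = dlar_loss(torch.randn(32, 3))         # 3 emotion classes (assumed)
```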
|
2504.03709 | Suman Raj | Suman Raj, Bhavani A Madhabhavi, Kautuk Astu, Arnav A Rajesh, Pratham
M and Yogesh Simmhan | Ocularone-Bench: Benchmarking DNN Models on GPUs to Assist the Visually
Impaired | 11 pages, 6 figures, To Appear at the IEEE Workshop on Parallel and
Distributed Processing for Computational Social Systems (ParSocial),
Co-located with IEEE IPDPS 2025 | null | null | null | cs.DC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | VIP navigation requires multiple DNN models for identification, posture
analysis, and depth estimation to ensure safe mobility. Using a hazard vest as
a unique identifier enhances visibility, while selecting the right DNN model and
computing device balances accuracy and real-time performance. We present
Ocularone-Bench, which is a benchmark suite designed to address the lack of
curated datasets for uniquely identifying individuals in crowded environments
and the need for benchmarking DNN inference times on resource-constrained edge
devices. The suite evaluates the accuracy-latency trade-offs of YOLO models
retrained on this dataset and benchmarks inference times of situation awareness
models across edge accelerators and high-end GPU workstations. Our study on
NVIDIA Jetson devices and an RTX 4090 workstation demonstrates significant
improvements in detection accuracy, achieving up to 99.4% precision, while also
providing insights into real-time feasibility for mobile deployment. Beyond VIP
navigation, Ocularone-Bench is applicable to senior citizens, children and
worker safety monitoring, and other vision-based applications.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 10:08:18 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Raj",
"Suman",
""
],
[
"Madhabhavi",
"Bhavani A",
""
],
[
"Astu",
"Kautuk",
""
],
[
"Rajesh",
"Arnav A",
""
],
[
"M",
"Pratham",
""
],
[
"Simmhan",
"Yogesh",
""
]
] | TITLE: Ocularone-Bench: Benchmarking DNN Models on GPUs to Assist the Visually
Impaired
ABSTRACT: VIP navigation requires multiple DNN models for identification, posture
analysis, and depth estimation to ensure safe mobility. Using a hazard vest as
a unique identifier enhances visibility, while selecting the right DNN model and
computing device balances accuracy and real-time performance. We present
Ocularone-Bench, which is a benchmark suite designed to address the lack of
curated datasets for uniquely identifying individuals in crowded environments
and the need for benchmarking DNN inference times on resource-constrained edge
devices. The suite evaluates the accuracy-latency trade-offs of YOLO models
retrained on this dataset and benchmarks inference times of situation awareness
models across edge accelerators and high-end GPU workstations. Our study on
NVIDIA Jetson devices and an RTX 4090 workstation demonstrates significant
improvements in detection accuracy, achieving up to 99.4% precision, while also
providing insights into real-time feasibility for mobile deployment. Beyond VIP
navigation, Ocularone-Bench is applicable to senior citizens, children and
worker safety monitoring, and other vision-based applications.
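
A sketch of the latency side of such benchmarking: time repeated inference
calls after a warm-up pass; the model call is a placeholder, not
Ocularone-Bench's API.

```python
import time
import statistics

def benchmark(infer, frames, warmup=10):
    for f in frames[:warmup]:                 # warm up caches / GPU kernels
        infer(f)
    latencies_ms = []
    for f in frames:
        t0 = time.perf_counter()
        infer(f)
        latencies_ms.append((time.perf_counter() - t0) * 1e3)
    return statistics.median(latencies_ms), max(latencies_ms)

median_ms, worst_ms = benchmark(lambda f: None, frames=[object()] * 100)
```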
|
2504.03712 | Jan Lewen | Jan Lewen, Max Pargmann, Jenia Jitsev, Mehdi Cherti, Robert Pitz-Paal,
Daniel Maldonado Quinto | Scalable heliostat surface predictions from focal spots: Sim-to-Real
transfer of inverse Deep Learning Raytracing | null | null | null | null | cs.CV cs.AI cs.CE cs.LG | http://creativecommons.org/licenses/by/4.0/ | Concentrating Solar Power (CSP) plants are a key technology in the transition
toward sustainable energy. A critical factor for their safe and efficient
operation is the distribution of concentrated solar flux on the receiver.
However, flux distributions from individual heliostats are sensitive to surface
imperfections. Measuring these surfaces across many heliostats remains
impractical in real-world deployments. As a result, control systems often
assume idealized heliostat surfaces, leading to suboptimal performance and
potential safety risks. To address this, inverse Deep Learning Raytracing
(iDLR) has been introduced as a novel method for inferring heliostat surface
profiles from target images recorded during standard calibration procedures. In
this work, we present the first successful Sim-to-Real transfer of iDLR,
enabling accurate surface predictions directly from real-world target images.
We evaluate our method on 63 heliostats under real operational conditions. iDLR
surface predictions achieve a median mean absolute error (MAE) of 0.17 mm and
show good agreement with deflectometry ground truth in 84% of cases. When used
in raytracing simulations, it enables flux density predictions with a mean
accuracy of 90% compared to deflectometry over our dataset, and outperforms the
commonly used ideal heliostat surface assumption by 26%. We tested this
approach in a challenging double-extrapolation scenario, involving unseen sun
positions and receiver projection, and found that iDLR maintains high predictive
accuracy, highlighting its generalization capabilities. Our results demonstrate
that iDLR is a scalable, automated, and cost-effective solution for integrating
realistic heliostat surface models into digital twins. This opens the door to
improved flux control, more precise performance modeling, and ultimately,
enhanced efficiency and safety in future CSP plants.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 13:15:05 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Lewen",
"Jan",
""
],
[
"Pargmann",
"Max",
""
],
[
"Jitsev",
"Jenia",
""
],
[
"Cherti",
"Mehdi",
""
],
[
"Pitz-Paal",
"Robert",
""
],
[
"Quinto",
"Daniel Maldonado",
""
]
] | TITLE: Scalable heliostat surface predictions from focal spots: Sim-to-Real
transfer of inverse Deep Learning Raytracing
ABSTRACT: Concentrating Solar Power (CSP) plants are a key technology in the transition
toward sustainable energy. A critical factor for their safe and efficient
operation is the distribution of concentrated solar flux on the receiver.
However, flux distributions from individual heliostats are sensitive to surface
imperfections. Measuring these surfaces across many heliostats remains
impractical in real-world deployments. As a result, control systems often
assume idealized heliostat surfaces, leading to suboptimal performance and
potential safety risks. To address this, inverse Deep Learning Raytracing
(iDLR) has been introduced as a novel method for inferring heliostat surface
profiles from target images recorded during standard calibration procedures. In
this work, we present the first successful Sim-to-Real transfer of iDLR,
enabling accurate surface predictions directly from real-world target images.
We evaluate our method on 63 heliostats under real operational conditions. iDLR
surface predictions achieve a median mean absolute error (MAE) of 0.17 mm and
show good agreement with deflectometry ground truth in 84% of cases. When used
in raytracing simulations, it enables flux density predictions with a mean
accuracy of 90% compared to deflectometry over our dataset, and outperforms the
commonly used ideal heliostat surface assumption by 26%. We tested this
approach in a challenging double-extrapolation scenario, involving unseen sun
positions and receiver projection, and found that iDLR maintains high predictive
accuracy, highlighting its generalization capabilities. Our results demonstrate
that iDLR is a scalable, automated, and cost-effective solution for integrating
realistic heliostat surface models into digital twins. This opens the door to
improved flux control, more precise performance modeling, and ultimately,
enhanced efficiency and safety in future CSP plants.
|
2504.03713 | Weichen Dai | Weichen Dai, Zijie Dai, Zhijie Huang, Yixuan Pan, Xinhe Li, Xi Li, Yi
Zhou, Ji Qi and Wu Jiang | RLDBF: Enhancing LLMs Via Reinforcement Learning With DataBase FeedBack | null | null | null | null | cs.LG cs.AI cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While current large language models (LLMs) demonstrate remarkable linguistic
capabilities through training on massive unstructured text corpora, they remain
inadequate in leveraging structured scientific data (e.g., chemical molecular
properties in databases) that encapsulate centuries of accumulated scientific
expertise. These structured datasets hold strategic significance for advancing
AI for Science yet current approaches merely treat them as auxiliary
supplements to unstructured text. This study pioneers a systematic
investigation into enhancing LLMs with structured scientific data, using
chemical molecular science as a testbed. We investigate the impact of
incorporating molecular property data on LLMs across distinct training phases,
including continual pre-training, supervised fine-tuning, and reinforcement
learning. Notably, to address the inherent limitation of numerical
insensitivity in large models, we propose an innovative methodology termed
"Reinforcement Learning with Database Feedback" (RLDBF). Experimental
evaluations demonstrate the efficacy of the proposed approach, with the model
exhibiting remarkable generalization capabilities on previously unseen data and
other chemical tasks. The results substantiate the potential of our method in
advancing the field of structured scientific data processing within LLMs.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 14:18:29 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Dai",
"Weichen",
""
],
[
"Dai",
"Zijie",
""
],
[
"Huang",
"Zhijie",
""
],
[
"Pan",
"Yixuan",
""
],
[
"Li",
"Xinhe",
""
],
[
"Li",
"Xi",
""
],
[
"Zhou",
"Yi",
""
],
[
"Qi",
"Ji",
""
],
[
"Jiang",
"Wu",
""
]
] | TITLE: RLDBF: Enhancing LLMs Via Reinforcement Learning With DataBase FeedBack
ABSTRACT: While current large language models (LLMs) demonstrate remarkable linguistic
capabilities through training on massive unstructured text corpora, they remain
inadequate in leveraging structured scientific data (e.g., chemical molecular
properties in databases) that encapsulate centuries of accumulated scientific
expertise. These structured datasets hold strategic significance for advancing
AI for Science yet current approaches merely treat them as auxiliary
supplements to unstructured text. This study pioneers a systematic
investigation into enhancing LLMs with structured scientific data, using
chemical molecular science as a testbed. We investigate the impact of
incorporating molecular property data on LLMs across distinct training phases,
including continual pre-training, supervised fine-tuning, and reinforcement
learning. Notably, to address the inherent limitation of numerical
insensitivity in large models, we propose an innovative methodology termed
"Reinforcement Learning with Database Feedback" (RLDBF). Experimental
evaluations demonstrate the efficacy of the proposed approach, with the model
exhibiting remarkable generalization capabilities on previously unseen data and
other chemical tasks. The results substantiate the potential of our method in
advancing the field of structured scientific data processing within LLMs.
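
A hedged sketch of database feedback as a reward: score the model's numeric
answer against the database ground truth with a smooth decay, giving a denser
signal than exact match; the shaping function and tolerance are assumptions.

```python
import math

def db_reward(predicted: float, db_value: float, tol: float = 0.05) -> float:
    """Reward decays smoothly with relative error against the database entry."""
    rel_err = abs(predicted - db_value) / max(abs(db_value), 1e-8)
    return math.exp(-rel_err / tol)

# Illustrative values for a predicted molecular property vs. a database record.
print(db_reward(predicted=1.98, db_value=2.0))   # close answer -> ~0.82
print(db_reward(predicted=3.50, db_value=2.0))   # far answer   -> ~0.0
```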
|
2504.03720 | Lihui Liu | Lihui Liu, Zihao Wang, Dawei Zhou, Ruijie Wang, Yuchen Yan, Bo Xiong,
Sihong He, Kai Shu, Hanghang Tong | TransNet: Transfer Knowledge for Few-shot Knowledge Graph Completion | null | null | null | null | cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Knowledge graphs (KGs) are ubiquitous and widely used in various
applications. However, most real-world knowledge graphs are incomplete, which
significantly degrades their performance on downstream tasks. Additionally, the
relationships in real-world knowledge graphs often follow a long-tail
distribution, meaning that most relations are represented by only a few
training triplets. To address these challenges, few-shot learning has been
introduced. Few-shot KG completion aims to make accurate predictions for
triplets involving novel relations when only a limited number of training
triplets are available. Although many methods have been proposed, they
typically learn each relation individually, overlooking the correlations
between different tasks and the relevant information in previously trained
tasks. In this paper, we propose a transfer learning-based few-shot KG
completion method (TransNet). By learning the relationships between different
tasks, TransNet effectively transfers knowledge from similar tasks to improve
the current task's performance. Furthermore, by employing meta-learning,
TransNet can generalize effectively to new, unseen relations. Extensive
experiments on benchmark datasets demonstrate the superiority of TransNet over
state-of-the-art methods. Code can be found at
https://github.com/lihuiliullh/TransNet/tree/main
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 23:39:11 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Liu",
"Lihui",
""
],
[
"Wang",
"Zihao",
""
],
[
"Zhou",
"Dawei",
""
],
[
"Wang",
"Ruijie",
""
],
[
"Yan",
"Yuchen",
""
],
[
"Xiong",
"Bo",
""
],
[
"He",
"Sihong",
""
],
[
"Shu",
"Kai",
""
],
[
"Tong",
"Hanghang",
""
]
] | TITLE: TransNet: Transfer Knowledge for Few-shot Knowledge Graph Completion
ABSTRACT: Knowledge graphs (KGs) are ubiquitous and widely used in various
applications. However, most real-world knowledge graphs are incomplete, which
significantly degrades their performance on downstream tasks. Additionally, the
relationships in real-world knowledge graphs often follow a long-tail
distribution, meaning that most relations are represented by only a few
training triplets. To address these challenges, few-shot learning has been
introduced. Few-shot KG completion aims to make accurate predictions for
triplets involving novel relations when only a limited number of training
triplets are available. Although many methods have been proposed, they
typically learn each relation individually, overlooking the correlations
between different tasks and the relevant information in previously trained
tasks. In this paper, we propose a transfer learning-based few-shot KG
completion method (TransNet). By learning the relationships between different
tasks, TransNet effectively transfers knowledge from similar tasks to improve
the current task's performance. Furthermore, by employing meta-learning,
TransNet can generalize effectively to new, unseen relations. Extensive
experiments on benchmark datasets demonstrate the superiority of TransNet over
state-of-the-art methods. Code can be found at
https://github.com/lihuiliullh/TransNet/tree/main
|
2504.03724 | Zhiqiang Wang | Zhiqiang Wang, Pengbin Feng, Yanbin Lin, Shuzhang Cai, Zongao Bian,
Jinghua Yan, Xingquan Zhu | CrowdVLM-R1: Expanding R1 Ability to Vision Language Model for Crowd
Counting using Fuzzy Group Relative Policy Reward | 11 pages, 6 figures and 4 tables | null | null | null | cs.CV cs.CL | http://creativecommons.org/licenses/by/4.0/ | We propose Fuzzy Group Relative Policy Reward (FGRPR), a novel framework that
integrates Group Relative Policy Optimization (GRPO) with a fuzzy reward
function to enhance learning efficiency. Unlike the conventional binary 0/1
accuracy reward, our fuzzy reward model provides nuanced incentives,
encouraging more precise outputs. Experimental results demonstrate that GRPO
with a standard 0/1 accuracy reward underperforms compared to supervised
fine-tuning (SFT). In contrast, FGRPR, applied to Qwen2.5-VL(3B and 7B),
surpasses all baseline models, including GPT4o, LLaMA2(90B), and SFT, across
five in-domain datasets. On an out-of-domain dataset, FGRPR achieves
performance comparable to SFT but excels when target values are larger, as its
fuzzy reward function assigns higher rewards to closer approximations. This
approach is broadly applicable to tasks where the precision of the answer is
critical. Code and data: https://github.com/yeyimilk/CrowdVLM-R1
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 03:57:16 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wang",
"Zhiqiang",
""
],
[
"Feng",
"Pengbin",
""
],
[
"Lin",
"Yanbin",
""
],
[
"Cai",
"Shuzhang",
""
],
[
"Bian",
"Zongao",
""
],
[
"Yan",
"Jinghua",
""
],
[
"Zhu",
"Xingquan",
""
]
] | TITLE: CrowdVLM-R1: Expanding R1 Ability to Vision Language Model for Crowd
Counting using Fuzzy Group Relative Policy Reward
ABSTRACT: We propose Fuzzy Group Relative Policy Reward (FGRPR), a novel framework that
integrates Group Relative Policy Optimization (GRPO) with a fuzzy reward
function to enhance learning efficiency. Unlike the conventional binary 0/1
accuracy reward, our fuzzy reward model provides nuanced incentives,
encouraging more precise outputs. Experimental results demonstrate that GRPO
with a standard 0/1 accuracy reward underperforms compared to supervised
fine-tuning (SFT). In contrast, FGRPR, applied to Qwen2.5-VL(3B and 7B),
surpasses all baseline models, including GPT4o, LLaMA2(90B), and SFT, across
five in-domain datasets. On an out-of-domain dataset, FGRPR achieves
performance comparable to SFT but excels when target values are larger, as its
fuzzy reward function assigns higher rewards to closer approximations. This
approach is broadly applicable to tasks where the precision of the answer is
critical. Code and data: https://github.com/yeyimilk/CrowdVLM-R1
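
A minimal sketch consistent with the description above: a fuzzy reward that
peaks at the exact count and decays with relative error; the exact functional
form used by FGRPR is an assumption (the linked repository has the real one).

```python
def fuzzy_reward(pred_count: int, true_count: int) -> float:
    if true_count == 0:
        return 1.0 if pred_count == 0 else 0.0
    rel_err = abs(pred_count - true_count) / true_count
    return max(0.0, 1.0 - rel_err)      # 1 at exact match, linear decay

print(fuzzy_reward(95, 100))    # 0.95 -- a near miss still earns signal
print(fuzzy_reward(100, 100))   # 1.0, same as the binary 0/1 reward
```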
|
2504.03725 | Anita Graser | Anita Graser | Timeseries Foundation Models for Mobility: A Benchmark Comparison with
Traditional and Deep Learning Models | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Crowd and flow predictions have been extensively studied in mobility data
science. Traditional forecasting methods have relied on statistical models such
as ARIMA, later supplemented by deep learning approaches like ST-ResNet. More
recently, foundation models for time series forecasting, such as TimeGPT,
Chronos, and LagLlama, have emerged. A key advantage of these models is their
ability to generate zero-shot predictions, allowing them to be applied directly
to new tasks without retraining. This study evaluates the performance of
TimeGPT compared to traditional approaches for predicting city-wide mobility
timeseries using two bike-sharing datasets from New York City and Vienna,
Austria. Model performance is assessed across short (1-hour), medium (12-hour),
and long-term (24-hour) forecasting horizons. The results highlight the
potential of foundation models for mobility forecasting while also identifying
limitations of our experiments.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 07:20:31 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Graser",
"Anita",
""
]
] | TITLE: Timeseries Foundation Models for Mobility: A Benchmark Comparison with
Traditional and Deep Learning Models
ABSTRACT: Crowd and flow predictions have been extensively studied in mobility data
science. Traditional forecasting methods have relied on statistical models such
as ARIMA, later supplemented by deep learning approaches like ST-ResNet. More
recently, foundation models for time series forecasting, such as TimeGPT,
Chronos, and LagLlama, have emerged. A key advantage of these models is their
ability to generate zero-shot predictions, allowing them to be applied directly
to new tasks without retraining. This study evaluates the performance of
TimeGPT compared to traditional approaches for predicting city-wide mobility
timeseries using two bike-sharing datasets from New York City and Vienna,
Austria. Model performance is assessed across short (1-hour), medium (12-hour),
and long-term (24-hour) forecasting horizons. The results highlight the
potential of foundation models for mobility forecasting while also identifying
limitations of our experiments.
|
2504.03729 | G. Niklas Noren | Jim W. Barrett, Nils Erlanson, Joana F\'elix China, G. Niklas Nor\'en | A Scalable Predictive Modelling Approach to Identifying Duplicate
Adverse Event Reports for Drugs and Vaccines | 26 pages, 11 figures | null | null | null | cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The practice of pharmacovigilance relies on large databases of individual
case safety reports to detect and evaluate potential new causal associations
between medicines or vaccines and adverse events. Duplicate reports are
separate and unlinked reports referring to the same case of an adverse event
involving a specific patient at a certain time. They impede statistical
analysis and mislead clinical assessment. The large size of such databases
precludes a manual identification of duplicates, and so a computational method
must be employed. This paper builds upon a hitherto state-of-the-art model,
vigiMatch, modifying existing features and introducing new ones to target known
shortcomings of the original model. Two support vector machine classifiers, one
for medicines and one for vaccines, classify report pairs as duplicates and
non-duplicates. Recall was measured using a diverse collection of 5 independent
labelled test sets. Precision was measured by having each model classify a
randomly selected stream of pairs of reports until each model classified 100
pairs as duplicates. These pairs were assessed by a medical doctor without
indicating which method(s) had flagged each pair. Performance on individual
countries was measured by having a medical doctor assess a subset of pairs
classified as duplicates for three different countries. The new model achieved
higher precision and higher recall for all labelled datasets compared to the
previous state of the art model, with comparable performance for medicines and
vaccines. The model was shown to produce substantially fewer false positives
than the comparator model on pairs from individual countries. The method
presented here advances the state of the art for duplicate detection in adverse
event reports for medicines and vaccines.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 15:24:29 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Barrett",
"Jim W.",
""
],
[
"Erlanson",
"Nils",
""
],
[
"China",
"Joana Félix",
""
],
[
"Norén",
"G. Niklas",
""
]
] | TITLE: A Scalable Predictive Modelling Approach to Identifying Duplicate
Adverse Event Reports for Drugs and Vaccines
ABSTRACT: The practice of pharmacovigilance relies on large databases of individual
case safety reports to detect and evaluate potential new causal associations
between medicines or vaccines and adverse events. Duplicate reports are
separate and unlinked reports referring to the same case of an adverse event
involving a specific patient at a certain time. They impede statistical
analysis and mislead clinical assessment. The large size of such databases
precludes a manual identification of duplicates, and so a computational method
must be employed. This paper builds upon a hitherto state-of-the-art model,
vigiMatch, modifying existing features and introducing new ones to target known
shortcomings of the original model. Two support vector machine classifiers, one
for medicines and one for vaccines, classify report pairs as duplicates and
non-duplicates. Recall was measured using a diverse collection of 5 independent
labelled test sets. Precision was measured by having each model classify a
randomly selected stream of pairs of reports until each model classified 100
pairs as duplicates. These pairs were assessed by a medical doctor without
indicating which method(s) had flagged each pair. Performance on individual
countries was measured by having a medical doctor assess a subset of pairs
classified as duplicates for three different countries. The new model achieved
higher precision and higher recall for all labelled datasets compared to the
previous state of the art model, with comparable performance for medicines and
vaccines. The model was shown to produce substantially fewer false positives
than the comparator model on pairs from individual countries. The method
presented here advances the state of the art for duplicate detection in adverse
event reports for medicines and vaccines.
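
A sketch of the precision protocol described above: stream randomly sampled
report pairs through the classifier until 100 are flagged as duplicates, then
pass those to blinded expert assessment; the classifier below is a random
stand-in for the SVM.

```python
import random

def collect_flagged(classify, sample_pair, n_flagged=100):
    flagged = []
    while len(flagged) < n_flagged:
        pair = sample_pair()
        if classify(pair):                # SVM decision in the actual model
            flagged.append(pair)
    return flagged                        # sent for blinded expert review

random.seed(0)
flagged = collect_flagged(lambda p: random.random() < 0.01,
                          lambda: (random.random(), random.random()))
```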
|
2504.03732 | Nika Mansouri Ghiasi | Nika Mansouri Ghiasi, Talu G\"uloglu, Harun Mustafa, Can Firtina,
Konstantina Koliogeorgi, Konstantinos Kanellopoulos, Haiyu Mao, Rakesh Nadig,
Mohammad Sadrosadati, Jisung Park, Onur Mutlu | SAGe: A Lightweight Algorithm-Architecture Co-Design for Alleviating
Data Preparation Overheads in Large-Scale Genome Analysis | null | null | null | null | cs.AR cs.DC q-bio.GN | http://creativecommons.org/licenses/by/4.0/ | There have been extensive efforts to accelerate genome analysis, given the
exponentially growing volumes of genomic data. Prior works typically assume
that the data is ready to be analyzed in the desired format; in real usage
scenarios, however, it is common practice to store genomic data in storage
systems in a compressed format. Unfortunately, preparing genomic data (i.e.,
accessing compressed data from storage, and decompressing and reformatting it)
for an accelerator leads to large performance and energy overheads,
significantly diminishing the accelerator's intended benefits. To harness the
benefits of acceleration, without needing to store massive genomic data
uncompressed, there is a critical need to effectively address data preparation
overheads. The solution must meet three criteria: (i) high performance and
energy efficiency, (ii) high compression ratios, comparable to state-of-the-art
genomic compression, and (iii) be lightweight for seamless integration with a
broad range of genomics systems. This is challenging, particularly due to the
high decompression complexity of state-of-the-art genomic compressors and the
resource constraints of a wide range of genomics systems. We propose SAGe, an
algorithm-architecture co-design for highly-compressed storage and
high-performance access of large-scale genomic data in desired formats. With
our rigorous analysis of genomic datasets' features, we propose a co-design of
a new (de)compression algorithm, hardware, storage data layout, and interface
commands. SAGe encodes data in structures decodable by efficient sequential
scans and lightweight hardware. To still maintain high compression ratios, SAGe
exploits unique features of genomic data. SAGe improves the average performance
(energy efficiency) of state-of-the-art genomics accelerators by 3.0-12.3x
(18.8-49.6x), compared to when the accelerators rely on state-of-the-art
decompressors.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 23:36:26 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Ghiasi",
"Nika Mansouri",
""
],
[
"Güloglu",
"Talu",
""
],
[
"Mustafa",
"Harun",
""
],
[
"Firtina",
"Can",
""
],
[
"Koliogeorgi",
"Konstantina",
""
],
[
"Kanellopoulos",
"Konstantinos",
""
],
[
"Mao",
"Haiyu",
""
],
[
"Nadig",
"Rakesh",
""
],
[
"Sadrosadati",
"Mohammad",
""
],
[
"Park",
"Jisung",
""
],
[
"Mutlu",
"Onur",
""
]
] | TITLE: SAGe: A Lightweight Algorithm-Architecture Co-Design for Alleviating
Data Preparation Overheads in Large-Scale Genome Analysis
ABSTRACT: There have been extensive efforts to accelerate genome analysis, given the
exponentially growing volumes of genomic data. Prior works typically assume
that the data is ready to be analyzed in the desired format; in real usage
scenarios, however, it is common practice to store genomic data in storage
systems in a compressed format. Unfortunately, preparing genomic data (i.e.,
accessing compressed data from storage, and decompressing and reformatting it)
for an accelerator leads to large performance and energy overheads,
significantly diminishing the accelerator's intended benefits. To harness the
benefits of acceleration, without needing to store massive genomic data
uncompressed, there is a critical need to effectively address data preparation
overheads. The solution must meet three criteria: (i) high performance and
energy efficiency, (ii) high compression ratios, comparable to state-of-the-art
genomic compression, and (iii) be lightweight for seamless integration with a
broad range of genomics systems. This is challenging, particularly due to the
high decompression complexity of state-of-the-art genomic compressors and the
resource constraints of a wide range of genomics systems. We propose SAGe, an
algorithm-architecture co-design for highly-compressed storage and
high-performance access of large-scale genomic data in desired formats. With
our rigorous analysis of genomic datasets' features, we propose a co-design of
a new (de)compression algorithm, hardware, storage data layout, and interface
commands. SAGe encodes data in structures decodable by efficient sequential
scans and lightweight hardware. To still maintain high compression ratios, SAGe
exploits unique features of genomic data. SAGe improves the average performance
(energy efficiency) of state-of-the-art genomics accelerators by 3.0-12.3x
(18.8-49.6x), compared to when the accelerators rely on state-of-the-art
decompressors.
|
2504.03734 | Jianfei Cao | Jianfei Cao, Dongchao Wang | Artificial Geographically Weighted Neural Network: A Novel Framework for
Spatial Analysis with Geographically Weighted Layers | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Geographically Weighted Regression (GWR) is a widely recognized technique for
modeling spatial heterogeneity. However, it is commonly assumed that the
relationships between dependent and independent variables are linear. To
overcome this limitation, we propose an Artificial Geographically Weighted
Neural Network (AGWNN), a novel framework that integrates geographically
weighted techniques with neural networks to capture complex nonlinear spatial
relationships. Central to this framework is the Geographically Weighted Layer
(GWL), a specialized component designed to encode spatial heterogeneity within
the neural network architecture. To rigorously evaluate the performance of
AGWNN, we conducted comprehensive experiments using both simulated datasets and
real-world case studies. Our results demonstrate that AGWNN significantly
outperforms traditional GWR and standard Artificial Neural Networks (ANNs) in
terms of model fitting accuracy. Notably, AGWNN excels in modeling intricate
nonlinear relationships and effectively identifies complex spatial
heterogeneity patterns, offering a robust and versatile tool for advanced
spatial analysis.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 01:48:46 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Cao",
"Jianfei",
""
],
[
"Wang",
"Dongchao",
""
]
] | TITLE: Artificial Geographically Weighted Neural Network: A Novel Framework for
Spatial Analysis with Geographically Weighted Layers
ABSTRACT: Geographically Weighted Regression (GWR) is a widely recognized technique for
modeling spatial heterogeneity. However, it is commonly assumed that the
relationships between dependent and independent variables are linear. To
overcome this limitation, we propose an Artificial Geographically Weighted
Neural Network (AGWNN), a novel framework that integrates geographically
weighted techniques with neural networks to capture complex nonlinear spatial
relationships. Central to this framework is the Geographically Weighted Layer
(GWL), a specialized component designed to encode spatial heterogeneity within
the neural network architecture. To rigorously evaluate the performance of
AGWNN, we conducted comprehensive experiments using both simulated datasets and
real-world case studies. Our results demonstrate that AGWNN significantly
outperforms traditional GWR and standard Artificial Neural Networks (ANNs) in
terms of model fitting accuracy. Notably, AGWNN excels in modeling intricate
nonlinear relationships and effectively identifies complex spatial
heterogeneity patterns, offering a robust and versatile tool for advanced
spatial analysis.
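
A hedged sketch of a Geographically Weighted Layer: inputs are modulated by a
Gaussian spatial kernel around a learned anchor before a shared linear map; the
paper's exact parameterization is not reproduced, and the bandwidth is assumed.

```python
import torch
import torch.nn as nn

class GeoWeightedLayer(nn.Module):
    def __init__(self, in_dim, out_dim, bandwidth=1.0):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)
        self.anchor = nn.Parameter(torch.zeros(2))     # learned spatial anchor
        self.bandwidth = bandwidth

    def forward(self, x, coords):                      # coords: (batch, 2)
        d2 = ((coords - self.anchor) ** 2).sum(dim=1, keepdim=True)
        w = torch.exp(-d2 / (2 * self.bandwidth ** 2)) # Gaussian spatial kernel
        return self.linear(x * w)

layer = GeoWeightedLayer(in_dim=5, out_dim=8)
out = layer(torch.randn(16, 5), torch.rand(16, 2))
```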
|
2504.03736 | Teodor Chiaburu | Teodor Chiaburu, Felix Bie{\ss}mann, Frank Hau{\ss}er | Uncertainty Propagation in XAI: A Comparison of Analytical and Empirical
Estimators | 23 pages, 10 figures, accepted at WCXAI 2025 Istanbul | null | null | null | cs.LG cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Understanding uncertainty in Explainable AI (XAI) is crucial for building
trust and ensuring reliable decision-making in Machine Learning models. This
paper introduces a unified framework for quantifying and interpreting
Uncertainty in XAI by defining a general explanation function $e_{\theta}(x,
f)$ that captures the propagation of uncertainty from key sources:
perturbations in input data and model parameters. By using both analytical and
empirical estimates of explanation variance, we provide a systematic means of
assessing the impact of uncertainty on explanations. We illustrate the approach
using a first-order uncertainty propagation as the analytical estimator. In a
comprehensive evaluation across heterogeneous datasets, we compare analytical
and empirical estimates of uncertainty propagation and evaluate their
robustness. Extending previous work on inconsistencies in explanations, our
experiments identify XAI methods that do not reliably capture and propagate
uncertainty. Our findings underscore the importance of uncertainty-aware
explanations in high-stakes applications and offer new insights into the
limitations of current XAI methods. The code for the experiments can be found
in our repository at https://github.com/TeodorChiaburu/UXAI
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 07:06:31 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Chiaburu",
"Teodor",
""
],
[
"Bießmann",
"Felix",
""
],
[
"Haußer",
"Frank",
""
]
] | TITLE: Uncertainty Propagation in XAI: A Comparison of Analytical and Empirical
Estimators
ABSTRACT: Understanding uncertainty in Explainable AI (XAI) is crucial for building
trust and ensuring reliable decision-making in Machine Learning models. This
paper introduces a unified framework for quantifying and interpreting
Uncertainty in XAI by defining a general explanation function $e_{\theta}(x,
f)$ that captures the propagation of uncertainty from key sources:
perturbations in input data and model parameters. By using both analytical and
empirical estimates of explanation variance, we provide a systematic means of
assessing the impact of uncertainty on explanations. We illustrate the approach
using a first-order uncertainty propagation as the analytical estimator. In a
comprehensive evaluation across heterogeneous datasets, we compare analytical
and empirical estimates of uncertainty propagation and evaluate their
robustness. Extending previous work on inconsistencies in explanations, our
experiments identify XAI methods that do not reliably capture and propagate
uncertainty. Our findings underscore the importance of uncertainty-aware
explanations in high-stakes applications and offer new insights into the
limitations of current XAI methods. The code for the experiments can be found
in our repository at https://github.com/TeodorChiaburu/UXAI
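
A sketch of first-order (delta-method) propagation of input noise through an
explanation function, $\mathrm{Var}[e] \approx J\,\mathrm{diag}(\sigma^2)\,J^\top$
with $J$ the input Jacobian; the explanation below is a toy stand-in for
$e_{\theta}(x, f)$.

```python
import torch

def explanation(x):                  # toy stand-in for e_theta(x, f)
    return (x ** 2).sum().unsqueeze(0)

x = torch.randn(4)
sigma2 = torch.full((4,), 0.01)      # assumed input noise variances
J = torch.autograd.functional.jacobian(explanation, x)   # shape (1, 4)
var_e = (J ** 2 * sigma2).sum(dim=1)                     # first-order variance
print(var_e)
```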
|
2504.03740 | ZhiTeng Zhu | ZhiTeng Zhu and Lan Yao (School of Mathematics, Hunan University) | Brain Network Classification Based on Graph Contrastive Learning and
Graph Transformer | 10 pages, 5 figures, uses tikz.sty | unpublished (2025) | null | HNU-MATH-2025-04 | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The dynamic characterization of functional brain networks is of great
significance for elucidating the mechanisms of human brain function. Although
graph neural networks have achieved remarkable progress in functional network
analysis, challenges such as data scarcity and insufficient supervision
persist. To address the limitations of limited training data and inadequate
supervision, this paper proposes a novel model named PHGCL-DDGformer that
integrates graph contrastive learning with graph transformers, effectively
enhancing the representation learning capability for brain network
classification tasks. To overcome the constraints of existing graph contrastive
learning methods in brain network feature extraction, an adaptive graph
augmentation strategy combining attribute masking and edge perturbation is
implemented for data enhancement. Subsequently, a dual-domain graph transformer
(DDGformer) module is constructed to integrate local and global information,
where graph convolutional networks aggregate neighborhood features to capture
local patterns while attention mechanisms extract global dependencies. Finally,
a graph contrastive learning framework is established to maximize the
consistency between positive and negative pairs, thereby obtaining high-quality
graph representations. Experimental results on real-world datasets demonstrate
that the PHGCL-DDGformer model outperforms existing state-of-the-art approaches
in brain network classification tasks.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 13:26:03 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Zhu",
"ZhiTeng",
"",
"School of Mathematics, Hunan University"
],
[
"Yao",
"Lan",
"",
"School of Mathematics, Hunan University"
]
] | TITLE: Brain Network Classification Based on Graph Contrastive Learning and
Graph Transformer
ABSTRACT: The dynamic characterization of functional brain networks is of great
significance for elucidating the mechanisms of human brain function. Although
graph neural networks have achieved remarkable progress in functional network
analysis, challenges such as data scarcity and insufficient supervision
persist. To address the limitations of limited training data and inadequate
supervision, this paper proposes a novel model named PHGCL-DDGformer that
integrates graph contrastive learning with graph transformers, effectively
enhancing the representation learning capability for brain network
classification tasks. To overcome the constraints of existing graph contrastive
learning methods in brain network feature extraction, an adaptive graph
augmentation strategy combining attribute masking and edge perturbation is
implemented for data enhancement. Subsequently, a dual-domain graph transformer
(DDGformer) module is constructed to integrate local and global information,
where graph convolutional networks aggregate neighborhood features to capture
local patterns while attention mechanisms extract global dependencies. Finally,
a graph contrastive learning framework is established to maximize the
consistency between positive and negative pairs, thereby obtaining high-quality
graph representations. Experimental results on real-world datasets demonstrate
that the PHGCL-DDGformer model outperforms existing state-of-the-art approaches
in brain network classification tasks.
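The two augmentations named above can be sketched in a few lines; the following is an illustrative stand-in (shapes, rates, and the edge-list representation are our assumptions, not the paper's implementation):

```python
import numpy as np

rng = np.random.default_rng(42)

def attribute_mask(X, p=0.2):
    """Zero out a random fraction p of node-feature columns."""
    X = X.copy()
    X[:, rng.random(X.shape[1]) < p] = 0.0
    return X

def edge_perturb(edges, n_nodes, p=0.1):
    """Drop a fraction p of edges and add the same expected number at random."""
    kept = [e for e in edges if rng.random() >= p]
    added = [tuple(rng.integers(0, n_nodes, 2)) for _ in range(int(p * len(edges)))]
    return kept + added

X = rng.standard_normal((5, 8))                   # 5 brain regions, 8 features
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0)]  # functional connections
view1 = (attribute_mask(X), edge_perturb(edges, 5))
view2 = (attribute_mask(X), edge_perturb(edges, 5))
# A contrastive objective would then pull the embeddings of view1 and view2
# of the same brain network together and push other networks apart.
```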
|
2504.03742 | Jiajun Zhou | Songtao Peng, Lei Wang, Wu Shuai, Hao Song, Jiajun Zhou, Shanqing Yu,
Qi Xuan | Hierarchical Local-Global Feature Learning for Few-shot Malicious
Traffic Detection | null | null | null | null | cs.CR cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid growth of internet traffic, malicious network attacks have
become increasingly frequent and sophisticated, posing significant threats to
global cybersecurity. Traditional detection methods, including rule-based and
machine learning-based approaches, struggle to accurately identify emerging
threats, particularly in scenarios with limited samples. While recent advances
in few-shot learning have partially addressed the data scarcity issue, existing
methods still exhibit high false positive rates and lack the capability to
effectively capture crucial local traffic patterns. In this paper, we propose
HLoG, a novel hierarchical few-shot malicious traffic detection framework that
leverages both local and global features extracted from network sessions. HLoG
employs a sliding-window approach to segment sessions into phases, capturing
fine-grained local interaction patterns through hierarchical bidirectional GRU
encoding, while simultaneously modeling global contextual dependencies. We
further design a session similarity assessment module that integrates local
similarity with global self-attention-enhanced representations, achieving
accurate and robust few-shot traffic classification. Comprehensive experiments
on three meticulously reconstructed datasets demonstrate that HLoG
significantly outperforms existing state-of-the-art methods. Particularly, HLoG
achieves superior recall rates while substantially reducing false positives,
highlighting its effectiveness and practical value in real-world cybersecurity
applications.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 14:56:44 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Peng",
"Songtao",
""
],
[
"Wang",
"Lei",
""
],
[
"Shuai",
"Wu",
""
],
[
"Song",
"Hao",
""
],
[
"Zhou",
"Jiajun",
""
],
[
"Yu",
"Shanqing",
""
],
[
"Xuan",
"Qi",
""
]
] | TITLE: Hierarchical Local-Global Feature Learning for Few-shot Malicious
Traffic Detection
ABSTRACT: With the rapid growth of internet traffic, malicious network attacks have
become increasingly frequent and sophisticated, posing significant threats to
global cybersecurity. Traditional detection methods, including rule-based and
machine learning-based approaches, struggle to accurately identify emerging
threats, particularly in scenarios with limited samples. While recent advances
in few-shot learning have partially addressed the data scarcity issue, existing
methods still exhibit high false positive rates and lack the capability to
effectively capture crucial local traffic patterns. In this paper, we propose
HLoG, a novel hierarchical few-shot malicious traffic detection framework that
leverages both local and global features extracted from network sessions. HLoG
employs a sliding-window approach to segment sessions into phases, capturing
fine-grained local interaction patterns through hierarchical bidirectional GRU
encoding, while simultaneously modeling global contextual dependencies. We
further design a session similarity assessment module that integrates local
similarity with global self-attention-enhanced representations, achieving
accurate and robust few-shot traffic classification. Comprehensive experiments
on three meticulously reconstructed datasets demonstrate that HLoG
significantly outperforms existing state-of-the-art methods. Particularly, HLoG
achieves superior recall rates while substantially reducing false positives,
highlighting its effectiveness and practical value in real-world cybersecurity
applications.
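As a rough sketch of the hierarchical encoding (shapes and hyperparameters are our assumptions, not the released code), a session's packet-feature sequence can be segmented by a sliding window into phases, each phase encoded by a bidirectional GRU, and the phase vectors encoded by a second bidirectional GRU for global context:

```python
import torch
import torch.nn as nn

class HierarchicalSessionEncoder(nn.Module):
    def __init__(self, feat_dim=16, hidden=32):
        super().__init__()
        self.local_gru = nn.GRU(feat_dim, hidden, bidirectional=True, batch_first=True)
        self.global_gru = nn.GRU(2 * hidden, hidden, bidirectional=True, batch_first=True)

    def forward(self, session, win=8, stride=4):
        # session: (T, feat_dim) packet-level features of one network session
        phases = session.unfold(0, win, stride)      # (n_phases, feat_dim, win)
        phases = phases.permute(0, 2, 1)             # (n_phases, win, feat_dim)
        _, h = self.local_gru(phases)                # h: (2, n_phases, hidden)
        local = torch.cat([h[0], h[1]], dim=-1)      # per-phase local patterns
        _, g = self.global_gru(local.unsqueeze(0))   # global context over phases
        return torch.cat([g[0], g[1]], dim=-1).squeeze(0)  # session embedding

enc = HierarchicalSessionEncoder()
print(enc(torch.randn(40, 16)).shape)   # a 40-packet session -> torch.Size([64])
```

Few-shot classification would then compare such session embeddings, e.g., via the similarity-assessment module the abstract describes.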
|
2504.03748 | Kaiyuan Hou | Kaiyuan Hou, Minghui Zhao, Lilin Xu, Yuang Fan and Xiaofan Jiang | TDBench: Benchmarking Vision-Language Models in Understanding Top-Down
Images | null | null | null | null | cs.LG cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid emergence of Vision-Language Models (VLMs) has significantly
advanced multimodal understanding, enabling applications in scene comprehension
and visual reasoning. While these models have been primarily evaluated and
developed for front-view image understanding, their capabilities in
interpreting top-down images have received limited attention, partly due to the
scarcity of diverse top-down datasets and the challenges in collecting such
data. In contrast, top-down vision provides explicit spatial overviews and
improved contextual understanding of scenes, making it particularly valuable
for tasks like autonomous navigation, aerial imaging, and spatial planning. In
this work, we address this gap by introducing TDBench, a comprehensive
benchmark for VLMs in top-down image understanding. TDBench is constructed from
public top-down view datasets and high-quality simulated images, including
diverse real-world and synthetic scenarios. TDBench consists of visual
question-answer pairs across ten evaluation dimensions of image understanding.
Moreover, we conduct four case studies on situations that commonly occur in
real-world scenarios but are less explored. By revealing the strengths and
limitations of existing VLMs through evaluation results, we hope TDBench will
provide insights for motivating future research. Project homepage:
https://github.com/Columbia-ICSL/TDBench
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 19:01:13 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Hou",
"Kaiyuan",
""
],
[
"Zhao",
"Minghui",
""
],
[
"Xu",
"Lilin",
""
],
[
"Fan",
"Yuang",
""
],
[
"Jiang",
"Xiaofan",
""
]
] | TITLE: TDBench: Benchmarking Vision-Language Models in Understanding Top-Down
Images
ABSTRACT: The rapid emergence of Vision-Language Models (VLMs) has significantly
advanced multimodal understanding, enabling applications in scene comprehension
and visual reasoning. While these models have been primarily evaluated and
developed for front-view image understanding, their capabilities in
interpreting top-down images have received limited attention, partly due to the
scarcity of diverse top-down datasets and the challenges in collecting such
data. In contrast, top-down vision provides explicit spatial overviews and
improved contextual understanding of scenes, making it particularly valuable
for tasks like autonomous navigation, aerial imaging, and spatial planning. In
this work, we address this gap by introducing TDBench, a comprehensive
benchmark for VLMs in top-down image understanding. TDBench is constructed from
public top-down view datasets and high-quality simulated images, including
diverse real-world and synthetic scenarios. TDBench consists of visual
question-answer pairs across ten evaluation dimensions of image understanding.
Moreover, we conduct four case studies on situations that commonly occur in
real-world scenarios but are less explored. By revealing the strengths and
limitations of existing VLMs through evaluation results, we hope TDBench will
provide insights for motivating future research. Project homepage:
https://github.com/Columbia-ICSL/TDBench
|
2504.03750 | Diego Vallarino Dr. | Diego Vallarino | Detecting Financial Fraud with Hybrid Deep Learning: A Mix-of-Experts
Approach to Sequential and Anomalous Patterns | null | null | null | null | cs.CR cs.LG | http://creativecommons.org/licenses/by/4.0/ | Financial fraud detection remains a critical challenge due to the dynamic and
adversarial nature of fraudulent behavior. As fraudsters evolve their tactics,
detection systems must combine robustness, adaptability, and precision. This
study presents a hybrid architecture for credit card fraud detection that
integrates a Mixture of Experts (MoE) framework with Recurrent Neural Networks
(RNNs), Transformer encoders, and Autoencoders. Each expert module contributes
a specialized capability: RNNs capture sequential behavior, Transformers
extract high-order feature interactions, and Autoencoders detect anomalies
through reconstruction loss. The MoE framework dynamically assigns predictive
responsibility among the experts, enabling adaptive and context-sensitive
decision-making.
Trained on a high-fidelity synthetic dataset that simulates real-world
transaction patterns and fraud typologies, the hybrid model achieved 98.7
percent accuracy, 94.3 percent precision, and 91.5 percent recall,
outperforming standalone models and classical machine learning baselines. The
Autoencoder component significantly enhanced the system's ability to identify
emerging fraud strategies and atypical behaviors.
Beyond technical performance, the model contributes to broader efforts in
financial governance and crime prevention. It supports regulatory compliance
with Anti-Money Laundering (AML) and Know Your Customer (KYC) protocols and
aligns with routine activity theory by operationalizing AI as a capable
guardian within financial ecosystems. The proposed hybrid system offers a
scalable, modular, and regulation-aware approach to detecting increasingly
sophisticated fraud patterns, contributing both to the advancement of
intelligent systems and to the strengthening of institutional fraud defense
infrastructures.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 20:47:18 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Vallarino",
"Diego",
""
]
] | TITLE: Detecting Financial Fraud with Hybrid Deep Learning: A Mix-of-Experts
Approach to Sequential and Anomalous Patterns
ABSTRACT: Financial fraud detection remains a critical challenge due to the dynamic and
adversarial nature of fraudulent behavior. As fraudsters evolve their tactics,
detection systems must combine robustness, adaptability, and precision. This
study presents a hybrid architecture for credit card fraud detection that
integrates a Mixture of Experts (MoE) framework with Recurrent Neural Networks
(RNNs), Transformer encoders, and Autoencoders. Each expert module contributes
a specialized capability: RNNs capture sequential behavior, Transformers
extract high-order feature interactions, and Autoencoders detect anomalies
through reconstruction loss. The MoE framework dynamically assigns predictive
responsibility among the experts, enabling adaptive and context-sensitive
decision-making.
Trained on a high-fidelity synthetic dataset that simulates real-world
transaction patterns and fraud typologies, the hybrid model achieved 98.7
percent accuracy, 94.3 percent precision, and 91.5 percent recall,
outperforming standalone models and classical machine learning baselines. The
Autoencoder component significantly enhanced the system's ability to identify
emerging fraud strategies and atypical behaviors.
Beyond technical performance, the model contributes to broader efforts in
financial governance and crime prevention. It supports regulatory compliance
with Anti-Money Laundering (AML) and Know Your Customer (KYC) protocols and
aligns with routine activity theory by operationalizing AI as a capable
guardian within financial ecosystems. The proposed hybrid system offers a
scalable, modular, and regulation-aware approach to detecting increasingly
sophisticated fraud patterns, contributing both to the advancement of
intelligent systems and to the strengthening of institutional fraud defense
infrastructures.
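The expert/gating structure described above can be sketched compactly; the following is an illustrative PyTorch toy (dimensions, heads, and the gating input are our assumptions, not the paper's architecture):

```python
import torch
import torch.nn as nn

class FraudMoE(nn.Module):
    def __init__(self, feat=8, hidden=16):
        super().__init__()
        self.rnn = nn.GRU(feat, hidden, batch_first=True)
        self.tfm = nn.TransformerEncoderLayer(feat, nhead=2, dim_feedforward=32,
                                              batch_first=True)
        self.ae = nn.Sequential(nn.Linear(feat, 4), nn.ReLU(), nn.Linear(4, feat))
        self.heads = nn.ModuleList([nn.Linear(hidden, 1), nn.Linear(feat, 1)])
        self.gate = nn.Linear(feat, 3)             # per-sample expert weights

    def forward(self, seq):                        # seq: (B, T, feat) transactions
        _, h = self.rnn(seq)
        s_rnn = self.heads[0](h[-1])               # sequential-behavior score
        s_tfm = self.heads[1](self.tfm(seq).mean(dim=1))   # interaction score
        last = seq[:, -1]
        s_ae = ((self.ae(last) - last) ** 2).mean(-1, keepdim=True)  # anomaly
        w = torch.softmax(self.gate(last), dim=-1)
        return (w * torch.cat([s_rnn, s_tfm, s_ae], dim=-1)).sum(-1)

model = FraudMoE()
print(model(torch.randn(4, 10, 8)).shape)          # fused scores: torch.Size([4])
```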
|
2504.03753 | Juhua Chen | Juhua Chen, Karson shi, Jialing He, North Chen, Kele Jiang | MMCE: A Framework for Deep Monotonic Modeling of Multiple Causal Effects | null | null | null | null | cs.LG stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | When we plan to use money as an incentive to change the behavior of a person
(such as making riders deliver more orders or making consumers buy more
items), the common approach to this problem is to adopt a two-stage framework
in order to maximize ROI under cost constraints. In the first stage, the
individual price response curve is obtained. In the second stage, business
goals and resource constraints are formally expressed and modeled as an
optimization problem. The first stage is critical: it answers the key question
of how much incremental benefit incentives can bring, which is the basis of
the second stage. Usually, causal modeling is used to obtain the curve. In the
case of only observational data,
causal modeling and evaluation are very challenging. In some business
scenarios, multiple causal effects need to be obtained at the same time. This
paper proposes a new observational data modeling and evaluation framework,
which can simultaneously model multiple causal effects and greatly improve the
modeling accuracy under some abnormal distributions. In the absence of RCT
data, evaluation seems impossible. This paper summarizes three priors to
illustrate the necessity and feasibility of qualitative evaluation of cognitive
testing. At the same time, this paper innovatively proposes the conditions
under which observational data can be considered as an evaluation dataset. To
our knowledge, our approach is the first to propose a modeling framework that
simultaneously obtains multiple causal effects. The offline
analysis and online experimental results demonstrate its effectiveness and
significantly improve the allocation strategies generated in real-world
marketing activities.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 01:51:58 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Chen",
"Juhua",
""
],
[
"shi",
"Karson",
""
],
[
"He",
"Jialing",
""
],
[
"Chen",
"North",
""
],
[
"Jiang",
"Kele",
""
]
] | TITLE: MMCE: A Framework for Deep Monotonic Modeling of Multiple Causal Effects
ABSTRACT: When we plan to use money as an incentive to change the behavior of a person
(such as making riders deliver more orders or making consumers buy more
items), the common approach to this problem is to adopt a two-stage framework
in order to maximize ROI under cost constraints. In the first stage, the
individual price response curve is obtained. In the second stage, business
goals and resource constraints are formally expressed and modeled as an
optimization problem. The first stage is critical: it answers the key question
of how much incremental benefit incentives can bring, which is the basis of
the second stage. Usually, causal modeling is used to obtain the curve. In the
case of only observational data,
causal modeling and evaluation are very challenging. In some business
scenarios, multiple causal effects need to be obtained at the same time. This
paper proposes a new observational data modeling and evaluation framework,
which can simultaneously model multiple causal effects and greatly improve the
modeling accuracy under some abnormal distributions. In the absence of RCT
data, evaluation seems impossible. This paper summarizes three priors to
illustrate the necessity and feasibility of qualitative evaluation of cognitive
testing. At the same time, this paper innovatively proposes the conditions
under which observational data can be considered as an evaluation dataset. To
our knowledge, our approach is the first to propose a modeling framework that
simultaneously obtains multiple causal effects. The offline
analysis and online experimental results demonstrate its effectiveness and
significantly improve the allocation strategies generated in real-world
marketing activities.
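As a toy illustration of the two-stage framework (the per-user response curves below are synthetic placeholders, not outputs of the paper's causal model), stage two can be read as greedy budget allocation by marginal ROI over stage one's incremental-response curves:

```python
import numpy as np

incentives = np.array([0.0, 1.0, 2.0, 3.0])   # candidate spend levels
# uplift[u, k]: stage-one estimate of incremental response for user u at
# spend incentives[k]; monotone with diminishing returns.
uplift = np.array([[0.0, 0.5, 0.8, 0.9],
                   [0.0, 0.2, 0.5, 0.7],
                   [0.0, 0.4, 0.5, 0.55]])

def allocate(budget):
    level = np.zeros(len(uplift), dtype=int)  # current incentive level per user
    while budget > 0:
        roi = np.full(len(uplift), -np.inf)
        for u, k in enumerate(level):         # marginal ROI of one step up
            if k + 1 < len(incentives):
                roi[u] = (uplift[u, k + 1] - uplift[u, k]) / \
                         (incentives[k + 1] - incentives[k])
        best = int(np.argmax(roi))
        if roi[best] <= 0:
            break
        step = incentives[level[best] + 1] - incentives[level[best]]
        if step > budget:
            break
        budget -= step
        level[best] += 1
    return level

print(allocate(budget=4.0))   # [2 1 1]: spend flows to the highest marginal ROI
```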
|
2504.03755 | Shijie Ma | Shijie Ma, Fei Zhu, Xu-Yao Zhang, Cheng-Lin Liu | ProtoGCD: Unified and Unbiased Prototype Learning for Generalized
Category Discovery | Accepted to IEEE TPAMI 2025 | null | 10.1109/TPAMI.2025.3557502 | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generalized category discovery (GCD) is a pragmatic but underexplored
problem, which requires models to automatically cluster and discover novel
categories by leveraging the labeled samples from old classes. The challenge is
that unlabeled data contain both old and new classes. Early works leveraging
pseudo-labeling with parametric classifiers handle old and new classes
separately, which brings about imbalanced accuracy between them. Recent methods
employing contrastive learning neglect potential positives and are decoupled
from the clustering objective, leading to biased representations and
sub-optimal results. To address these issues, we introduce a unified and
unbiased prototype learning framework, namely ProtoGCD, wherein old and new
classes are modeled with joint prototypes and unified learning objectives,
enabling unified modeling between old and new classes. Specifically, we
propose a dual-level adaptive pseudo-labeling mechanism to mitigate
confirmation bias, together with two regularization terms to collectively help
learn more suitable representations for GCD. Moreover, for practical
considerations, we devise a criterion to estimate the number of new classes.
Furthermore, we extend ProtoGCD to detect unseen outliers, achieving task-level
unification. Comprehensive experiments show that ProtoGCD achieves
state-of-the-art performance on both generic and fine-grained datasets. The
code is available at https://github.com/mashijie1028/ProtoGCD.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 06:13:14 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Ma",
"Shijie",
""
],
[
"Zhu",
"Fei",
""
],
[
"Zhang",
"Xu-Yao",
""
],
[
"Liu",
"Cheng-Lin",
""
]
] | TITLE: ProtoGCD: Unified and Unbiased Prototype Learning for Generalized
Category Discovery
ABSTRACT: Generalized category discovery (GCD) is a pragmatic but underexplored
problem, which requires models to automatically cluster and discover novel
categories by leveraging the labeled samples from old classes. The challenge is
that unlabeled data contain both old and new classes. Early works leveraging
pseudo-labeling with parametric classifiers handle old and new classes
separately, which brings about imbalanced accuracy between them. Recent methods
employing contrastive learning neglect potential positives and are decoupled
from the clustering objective, leading to biased representations and
sub-optimal results. To address these issues, we introduce a unified and
unbiased prototype learning framework, namely ProtoGCD, wherein old and new
classes are modeled with joint prototypes and unified learning objectives,
enabling unified modeling between old and new classes. Specifically, we
propose a dual-level adaptive pseudo-labeling mechanism to mitigate
confirmation bias, together with two regularization terms to collectively help
learn more suitable representations for GCD. Moreover, for practical
considerations, we devise a criterion to estimate the number of new classes.
Furthermore, we extend ProtoGCD to detect unseen outliers, achieving task-level
unification. Comprehensive experiments show that ProtoGCD achieves
state-of-the-art performance on both generic and fine-grained datasets. The
code is available at https://github.com/mashijie1028/ProtoGCD.
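The unified-prototype idea admits a very small sketch (our reading of the setup; dimensions and temperature are assumed): one learnable prototype per class, old and new alike, serves as both classifier weight and anchor, with logits given by scaled cosine similarity.

```python
import torch
import torch.nn.functional as F

n_classes, dim, tau = 7, 32, 0.1           # e.g. 4 old + 3 discovered classes
prototypes = torch.nn.Parameter(torch.randn(n_classes, dim))

def prototype_logits(features):
    z = F.normalize(features, dim=-1)       # L2-normalized features
    p = F.normalize(prototypes, dim=-1)     # L2-normalized prototypes
    return z @ p.T / tau                    # (B, n_classes) scaled cosine sims

feats = torch.randn(5, dim)
probs = prototype_logits(feats).softmax(-1)
pseudo = probs.argmax(-1)                   # pseudo-labels for unlabeled samples
print(probs.shape, pseudo)
```

A dual-level pseudo-labeling mechanism, as in the abstract, would then filter or reweight these pseudo-labels to mitigate confirmation bias.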
|
2504.03756 | Yan-Ann Chen | Yu-Lin Kuo, Yu-Chee Tseng, Ting-Hui Chiang, Yan-Ann Chen | Semi-Self Representation Learning for Crowdsourced WiFi Trajectories | Accepted by VTC2025-Spring | null | null | null | cs.LG cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | WiFi fingerprint-based localization has been studied intensively. Point-based
solutions rely on position annotations of WiFi fingerprints. Trajectory-based
solutions, however, require end-position annotations of WiFi trajectories,
where a WiFi trajectory is a multivariate time series of signal features. A
trajectory dataset is much larger than a pointwise dataset as the number of
potential trajectories in a field may grow exponentially with respect to the
size of the field. This work presents a semi-self representation learning
solution, where a large dataset $C$ of crowdsourced unlabeled WiFi trajectories
can be automatically labeled by a much smaller dataset $\tilde C$ of labeled
WiFi trajectories. The size of $\tilde C$ only needs to be proportional to the
size of the physical field, while the unlabeled $C$ could be much larger. This
is made possible through a novel ``cut-and-flip'' augmentation scheme based on
the meet-in-the-middle paradigm. A two-stage learning scheme consisting of trajectory
embedding followed by endpoint embedding is proposed for the unlabeled $C$.
Then the learned representations are labeled by $\tilde C$ and connected to a
neural-based localization network. The result, while delivering promising
accuracy, significantly relieves the burden of human annotations for
trajectory-based localization.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 06:19:43 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Kuo",
"Yu-Lin",
""
],
[
"Tseng",
"Yu-Chee",
""
],
[
"Chiang",
"Ting-Hui",
""
],
[
"Chen",
"Yan-Ann",
""
]
] | TITLE: Semi-Self Representation Learning for Crowdsourced WiFi Trajectories
ABSTRACT: WiFi fingerprint-based localization has been studied intensively. Point-based
solutions rely on position annotations of WiFi fingerprints. Trajectory-based
solutions, however, require end-position annotations of WiFi trajectories,
where a WiFi trajectory is a multivariate time series of signal features. A
trajectory dataset is much larger than a pointwise dataset as the number of
potential trajectories in a field may grow exponentially with respect to the
size of the field. This work presents a semi-self representation learning
solution, where a large dataset $C$ of crowdsourced unlabeled WiFi trajectories
can be automatically labeled by a much smaller dataset $\tilde C$ of labeled
WiFi trajectories. The size of $\tilde C$ only needs to be proportional to the
size of the physical field, while the unlabeled $C$ could be much larger. This
is made possible through a novel ``cut-and-flip'' augmentation scheme based on
the meet-in-the-middle paradigm. A two-stage learning scheme consisting of trajectory
embedding followed by endpoint embedding is proposed for the unlabeled $C$.
Then the learned representations are labeled by $\tilde C$ and connected to a
neural-based localization network. The result, while delivering promising
accuracy, significantly relieves the burden of human annotations for
trajectory-based localization.
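Under our own reading of "cut-and-flip" (the paper's exact scheme may differ), cutting a trajectory at index k yields a prefix ending at position k and a time-reversed suffix that also ends at position k, so the two derived trajectories meet in the middle and can share an endpoint label:

```python
import numpy as np

def cut_and_flip(traj, k):
    """traj: (T, n_aps) RSSI time series; returns two sub-trajectories that
    both terminate at the physical location of sample k."""
    prefix = traj[: k + 1]             # walk up to the cut point
    flipped_suffix = traj[k:][::-1]    # remainder of the walk, reversed in time
    return prefix, flipped_suffix

rng = np.random.default_rng(1)
traj = rng.normal(-60, 5, size=(20, 4))    # 20 steps, 4 access points
a, b = cut_and_flip(traj, k=12)
print(a.shape, b.shape)                    # (13, 4) (8, 4)
assert np.allclose(a[-1], b[-1])           # both end at the cut point
```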
|
2504.03757 | Xi Fu | Xi Fu, Rui Liu, Aung Aung Phyo Wai, Hannah Pulferer, Neethu Robinson,
Gernot R M\"uller-Putz, Cuntai Guan | EEG2GAIT: A Hierarchical Graph Convolutional Network for EEG-based Gait
Decoding | null | null | null | null | eess.SP cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Decoding gait dynamics from EEG signals presents significant challenges due
to the complex spatial dependencies of motor processes, the need for accurate
temporal and spectral feature extraction, and the scarcity of high-quality gait
EEG datasets. To address these issues, we propose EEG2GAIT, a novel
hierarchical graph-based model that captures multi-level spatial embeddings of
EEG channels using a Hierarchical Graph Convolutional Network (GCN) Pyramid. To
further improve decoding accuracy, we introduce a Hybrid Temporal-Spectral
Reward (HTSR) loss function, which combines time-domain, frequency-domain, and
reward-based loss components. Moreover, we contribute a new Gait-EEG Dataset
(GED), consisting of synchronized EEG and lower-limb joint angle data collected
from 50 participants over two lab visits. Validation experiments on both the
GED and the publicly available Mobile Brain-body imaging (MoBI) dataset
demonstrate that EEG2GAIT outperforms state-of-the-art methods and achieves the
best joint angle prediction. Ablation studies validate the contributions of the
hierarchical GCN modules and HTSR Loss, while saliency maps reveal the
significance of motor-related brain regions in decoding tasks. These findings
underscore EEG2GAIT's potential for advancing brain-computer interface
applications, particularly in lower-limb rehabilitation and assistive
technologies.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 07:48:21 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Fu",
"Xi",
""
],
[
"Liu",
"Rui",
""
],
[
"Wai",
"Aung Aung Phyo",
""
],
[
"Pulferer",
"Hannah",
""
],
[
"Robinson",
"Neethu",
""
],
[
"Müller-Putz",
"Gernot R",
""
],
[
"Guan",
"Cuntai",
""
]
] | TITLE: EEG2GAIT: A Hierarchical Graph Convolutional Network for EEG-based Gait
Decoding
ABSTRACT: Decoding gait dynamics from EEG signals presents significant challenges due
to the complex spatial dependencies of motor processes, the need for accurate
temporal and spectral feature extraction, and the scarcity of high-quality gait
EEG datasets. To address these issues, we propose EEG2GAIT, a novel
hierarchical graph-based model that captures multi-level spatial embeddings of
EEG channels using a Hierarchical Graph Convolutional Network (GCN) Pyramid. To
further improve decoding accuracy, we introduce a Hybrid Temporal-Spectral
Reward (HTSR) loss function, which combines time-domain, frequency-domain, and
reward-based loss components. Moreover, we contribute a new Gait-EEG Dataset
(GED), consisting of synchronized EEG and lower-limb joint angle data collected
from 50 participants over two lab visits. Validation experiments on both the
GED and the publicly available Mobile Brain-body imaging (MoBI) dataset
demonstrate that EEG2GAIT outperforms state-of-the-art methods and achieves the
best joint angle prediction. Ablation studies validate the contributions of the
hierarchical GCN modules and HTSR Loss, while saliency maps reveal the
significance of motor-related brain regions in decoding tasks. These findings
underscore EEG2GAIT's potential for advancing brain-computer interface
applications, particularly in lower-limb rehabilitation and assistive
technologies.
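The hybrid loss can be sketched as follows (an illustrative formulation under our assumptions; the weights and the reward term are placeholders, not the authors' exact HTSR definition): a time-domain MSE, a frequency-domain term on FFT magnitudes, and a reward that pays for predicting the correct direction of joint-angle change.

```python
import torch

def htsr_loss(pred, target, lam_t=1.0, lam_f=0.5, lam_r=0.1):
    # pred, target: (B, T) predicted / true joint-angle trajectories
    time_term = torch.mean((pred - target) ** 2)
    spec_term = torch.mean(
        (torch.fft.rfft(pred, dim=-1).abs() -
         torch.fft.rfft(target, dim=-1).abs()) ** 2)
    # reward: fraction of steps where the predicted change has the right
    # sign; subtracted so that a higher reward lowers the loss
    reward = (torch.sign(pred.diff(dim=-1)) ==
              torch.sign(target.diff(dim=-1))).float().mean()
    return lam_t * time_term + lam_f * spec_term - lam_r * reward

pred, target = torch.randn(8, 100), torch.randn(8, 100)
print(htsr_loss(pred, target))
```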
|
2504.03760 | Florian Heinrichs | Tiago Vasconcelos Afonso, Florian Heinrichs | EEG-EyeTrack: A Benchmark for Time Series and Functional Data Analysis
with Open Challenges and Baselines | Keywords: Functional data analysis, functional neural networks, EEG
data, eye-tracking. 18 pages, 2 figures, 9 tables | null | null | null | eess.SP cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A new benchmark dataset for functional data analysis (FDA) is presented,
focusing on the reconstruction of eye movements from EEG data. The contribution
is twofold: first, open challenges and evaluation metrics tailored to FDA
applications are proposed. Second, functional neural networks are used to
establish baseline results for the primary regression task of reconstructing
eye movements from EEG signals. Baseline results are reported for the new
dataset, based on consumer-grade hardware, and the EEGEyeNet dataset, based on
research-grade hardware.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 08:33:38 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Afonso",
"Tiago Vasconcelos",
""
],
[
"Heinrichs",
"Florian",
""
]
] | TITLE: EEG-EyeTrack: A Benchmark for Time Series and Functional Data Analysis
with Open Challenges and Baselines
ABSTRACT: A new benchmark dataset for functional data analysis (FDA) is presented,
focusing on the reconstruction of eye movements from EEG data. The contribution
is twofold: first, open challenges and evaluation metrics tailored to FDA
applications are proposed. Second, functional neural networks are used to
establish baseline results for the primary regression task of reconstructing
eye movements from EEG signals. Baseline results are reported for the new
dataset, based on consumer-grade hardware, and the EEGEyeNet dataset, based on
research-grade hardware.
|
2504.03761 | Nina Moutonnet | Nina Moutonnet, Gregory Scott, Danilo P. Mandic | Augmentation of EEG and ECG Time Series for Deep Learning Applications:
Integrating Changepoint Detection into the iAAFT Surrogates | null | null | null | null | eess.SP cs.LG | http://creativecommons.org/licenses/by/4.0/ | The performance of deep learning methods critically depends on the quality
and quantity of the available training data. This is especially the case for
physiological time series, which are both noisy and scarce, calling for
data augmentation to artificially increase the size of datasets. Another issue
is that the time-evolving statistical properties of nonstationary signals
prevent the use of standard data augmentation techniques. To this end, we
introduce a novel method for augmenting nonstationary time series. This is
achieved by combining offline changepoint detection with the iterative
amplitude-adjusted Fourier transform (iAAFT), which ensures that the
time-frequency properties of the original signal are preserved during
augmentation. The proposed method is validated through comparisons of the
performance of i) a deep learning seizure detection algorithm on both the
original and augmented versions of the CHB-MIT and Siena scalp
electroencephalography (EEG) databases, and ii) a deep learning atrial
fibrillation (AF) detection algorithm on the original and augmented versions of
the Computing in Cardiology Challenge 2017 dataset. By virtue of the proposed
method, for the CHB-MIT and Siena datasets respectively, accuracy rose by 4.4%
and 1.9%, precision by 10% and 5.5%, recall by 3.6% and 0.9%, and F1 by 4.2%
and 1.4%. For the AF classification task, accuracy rose by 0.3%, precision by
2.1%, recall by 0.8%, and F1 by 2.1%.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 09:40:04 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Moutonnet",
"Nina",
""
],
[
"Scott",
"Gregory",
""
],
[
"Mandic",
"Danilo P.",
""
]
] | TITLE: Augmentation of EEG and ECG Time Series for Deep Learning Applications:
Integrating Changepoint Detection into the iAAFT Surrogates
ABSTRACT: The performance of deep learning methods critically depends on the quality
and quantity of the available training data. This is especially the case for
physiological time series, which are both noisy and scarce, calling for
data augmentation to artificially increase the size of datasets. Another issue
is that the time-evolving statistical properties of nonstationary signals
prevent the use of standard data augmentation techniques. To this end, we
introduce a novel method for augmenting nonstationary time series. This is
achieved by combining offline changepoint detection with the iterative
amplitude-adjusted Fourier transform (iAAFT), which ensures that the
time-frequency properties of the original signal are preserved during
augmentation. The proposed method is validated through comparisons of the
performance of i) a deep learning seizure detection algorithm on both the
original and augmented versions of the CHB-MIT and Siena scalp
electroencephalography (EEG) databases, and ii) a deep learning atrial
fibrillation (AF) detection algorithm on the original and augmented versions of
the Computing in Cardiology Challenge 2017 dataset. By virtue of the proposed
method, for the CHB-MIT and Siena datasets respectively, accuracy rose by 4.4%
and 1.9%, precision by 10% and 5.5%, recall by 3.6% and 0.9%, and F1 by 4.2%
and 1.4%. For the AF classification task, accuracy rose by 0.3%, precision by
2.1%, recall by 0.8%, and F1 by 2.1%.
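The core combination is compact enough to sketch (with our own simplifications: changepoints are passed in as if produced by an offline detector such as PELT, and the demo uses the known changepoint): an iAAFT surrogate is generated independently within each quasi-stationary segment, preserving the piecewise time-frequency structure.

```python
import numpy as np

rng = np.random.default_rng(0)

def iaaft(x, n_iter=50):
    """Iterative amplitude-adjusted Fourier transform surrogate of 1-D x."""
    amp = np.abs(np.fft.rfft(x))              # target Fourier amplitudes
    ranks = np.sort(x)                        # target value distribution
    y = rng.permutation(x)
    for _ in range(n_iter):
        spec = np.fft.rfft(y)
        y = np.fft.irfft(amp * np.exp(1j * np.angle(spec)), n=len(x))
        y = ranks[np.argsort(np.argsort(y))]  # restore the value distribution
    return y

def segment_iaaft(x, changepoints):
    """changepoints: interior indices from an offline changepoint detector;
    each quasi-stationary segment gets its own surrogate."""
    bounds = [0, *changepoints, len(x)]
    return np.concatenate([iaaft(x[a:b]) for a, b in zip(bounds, bounds[1:])])

x = np.concatenate([rng.normal(0, 1, 500), rng.normal(5, 2, 500)])  # nonstationary
print(segment_iaaft(x, changepoints=[500]).shape)   # (1000,)
```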
|
2504.03762 | Muyun Jiang | Muyun Jiang, Yi Ding, Wei Zhang, Kok Ann Colin Teo, LaiGuan Fong,
Shuailei Zhang, Zhiwei Guo, Chenyu Liu, Raghavan Bhuvanakantham, Wei Khang
Jeremy Sim, Chuan Huat Vince Foo, Rong Hui Jonathan Chua, Parasuraman
Padmanabhan, Victoria Leong, Jia Lu, Balazs Gulyas, Cuntai Guan | Decoding Covert Speech from EEG Using a Functional Areas Spatio-Temporal
Transformer | null | null | null | null | eess.SP cs.LG | http://creativecommons.org/licenses/by/4.0/ | Covert speech involves imagining speaking without audible sound or any
movements. Decoding covert speech from electroencephalogram (EEG) is
challenging due to a limited understanding of neural pronunciation mapping and
the low signal-to-noise ratio of the signal. In this study, we developed a
large-scale multi-utterance speech EEG dataset from 57 right-handed native
English-speaking subjects, each performing covert and overt speech tasks by
repeating the same word in five utterances within a ten-second duration. Given
the spatio-temporal nature of the neural activation process during speech
pronunciation, we developed a Functional Areas Spatio-temporal Transformer
(FAST), an effective framework for converting EEG signals into tokens and
utilizing transformer architecture for sequence encoding. Our results reveal
distinct and interpretable speech neural features by the visualization of
FAST-generated activation maps across frontal and temporal brain regions with
each word being covertly spoken, providing new insights into the discriminative
features of the neural representation of covert speech. This is the first
report of such a study, which provides interpretable evidence for speech
decoding from EEG. The code for this work has been made public at
https://github.com/Jiang-Muyun/FAST
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 10:38:08 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Jiang",
"Muyun",
""
],
[
"Ding",
"Yi",
""
],
[
"Zhang",
"Wei",
""
],
[
"Teo",
"Kok Ann Colin",
""
],
[
"Fong",
"LaiGuan",
""
],
[
"Zhang",
"Shuailei",
""
],
[
"Guo",
"Zhiwei",
""
],
[
"Liu",
"Chenyu",
""
],
[
"Bhuvanakantham",
"Raghavan",
""
],
[
"Sim",
"Wei Khang Jeremy",
""
],
[
"Foo",
"Chuan Huat Vince",
""
],
[
"Chua",
"Rong Hui Jonathan",
""
],
[
"Padmanabhan",
"Parasuraman",
""
],
[
"Leong",
"Victoria",
""
],
[
"Lu",
"Jia",
""
],
[
"Gulyas",
"Balazs",
""
],
[
"Guan",
"Cuntai",
""
]
] | TITLE: Decoding Covert Speech from EEG Using a Functional Areas Spatio-Temporal
Transformer
ABSTRACT: Covert speech involves imagining speaking without audible sound or any
movements. Decoding covert speech from electroencephalogram (EEG) is
challenging due to a limited understanding of neural pronunciation mapping and
the low signal-to-noise ratio of the signal. In this study, we developed a
large-scale multi-utterance speech EEG dataset from 57 right-handed native
English-speaking subjects, each performing covert and overt speech tasks by
repeating the same word in five utterances within a ten-second duration. Given
the spatio-temporal nature of the neural activation process during speech
pronunciation, we developed a Functional Areas Spatio-temporal Transformer
(FAST), an effective framework for converting EEG signals into tokens and
utilizing transformer architecture for sequence encoding. Our results reveal
distinct and interpretable speech neural features by the visualization of
FAST-generated activation maps across frontal and temporal brain regions with
each word being covertly spoken, providing new insights into the discriminative
features of the neural representation of covert speech. This is the first
report of such a study, which provides interpretable evidence for speech
decoding from EEG. The code for this work has been made public at
https://github.com/Jiang-Muyun/FAST
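Schematically (all shapes, channel groupings, and names below are assumptions, not the released FAST code), the tokenization step can be read as: group channels into functional areas, embed each (area, time-window) patch as one token, and feed the token sequence to a standard transformer encoder.

```python
import torch
import torch.nn as nn

areas = {"frontal": [0, 1, 2], "temporal": [3, 4], "occipital": [5, 6, 7]}
win, d_model = 50, 64

embed = nn.ModuleDict({name: nn.Linear(len(chs) * win, d_model)
                       for name, chs in areas.items()})
encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)

def tokenize(eeg):                                   # eeg: (n_channels, T)
    tokens = []
    for name, chs in areas.items():
        patches = eeg[chs].unfold(1, win, win)       # (n_ch, n_win, win)
        flat = patches.permute(1, 0, 2).reshape(patches.shape[1], -1)
        tokens.append(embed[name](flat))             # one token per time window
    return torch.cat(tokens, dim=0).unsqueeze(0)     # (1, n_tokens, d_model)

eeg = torch.randn(8, 500)                 # 8 channels, 500 samples
print(encoder(tokenize(eeg)).shape)       # torch.Size([1, 30, 64])
```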
|
2504.03772 | Anton Lambrecht | Anton Lambrecht, Stijn Luchie, Jaron Fontaine, Ben Van Herbruggen,
Adnan Shahid and Eli De Poorter | Low-cost Embedded Breathing Rate Determination Using 802.15.4z IR-UWB
Hardware for Remote Healthcare | This paper has been submitted to IEEE Sensors Journal and is
currently undergoing review | null | null | null | eess.SP cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Respiratory diseases account for a significant portion of global mortality.
Affordable and early detection is an effective way of addressing these
ailments. To this end, a low-cost commercial off-the-shelf (COTS), IEEE
802.15.4z standard compliant impulse-radio ultra-wideband (IR-UWB) radar system
is exploited to estimate human respiration rates. We propose a convolutional
neural network (CNN) to predict breathing rates from ultra-wideband (UWB)
channel impulse response (CIR) data, and compare its performance with other
rule-based algorithms. The study uses a diverse dataset of 16 individuals,
incorporating various real-life environments to evaluate system robustness.
Results show that the CNN achieves a mean absolute error (MAE) of 1.73 breaths
per minute (BPM) in unseen situations, significantly outperforming rule-based
methods (3.40 BPM). By incorporating calibration data from other individuals in
the unseen situations, the error is further reduced to 0.84 BPM. In addition,
this work evaluates the feasibility of running the pipeline on a low-cost
embedded device. Applying 8-bit quantization to both the weights and
input/ouput tensors, reduces memory requirements by 67% and inference time by
64% with only a 3% increase in MAE. As a result, we show it is feasible to
deploy the algorithm on an nRF52840 system-on-chip (SoC) requiring only 46 KB
of memory and operating with an inference time of only 192 ms. Once deployed,
the system can last up to 268 days without recharging using a 20 000 mAh
battery pack. For breathing monitoring in bed, the sampling rate can be
lowered, extending battery life to 313 days, making the solution highly
efficient for real-world, low-cost deployments.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 07:54:25 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Lambrecht",
"Anton",
""
],
[
"Luchie",
"Stijn",
""
],
[
"Fontaine",
"Jaron",
""
],
[
"Van Herbruggen",
"Ben",
""
],
[
"Shahid",
"Adnan",
""
],
[
"De Poorter",
"Eli",
""
]
] | TITLE: Low-cost Embedded Breathing Rate Determination Using 802.15.4z IR-UWB
Hardware for Remote Healthcare
ABSTRACT: Respiratory diseases account for a significant portion of global mortality.
Affordable and early detection is an effective way of addressing these
ailments. To this end, a low-cost commercial off-the-shelf (COTS), IEEE
802.15.4z standard compliant impulse-radio ultra-wideband (IR-UWB) radar system
is exploited to estimate human respiration rates. We propose a convolutional
neural network (CNN) to predict breathing rates from ultra-wideband (UWB)
channel impulse response (CIR) data, and compare its performance with other
rule-based algorithms. The study uses a diverse dataset of 16 individuals,
incorporating various real-life environments to evaluate system robustness.
Results show that the CNN achieves a mean absolute error (MAE) of 1.73 breaths
per minute (BPM) in unseen situations, significantly outperforming rule-based
methods (3.40 BPM). By incorporating calibration data from other individuals in
the unseen situations, the error is further reduced to 0.84 BPM. In addition,
this work evaluates the feasibility of running the pipeline on a low-cost
embedded device. Applying 8-bit quantization to both the weights and
input/output tensors reduces memory requirements by 67% and inference time by
64% with only a 3% increase in MAE. As a result, we show it is feasible to
deploy the algorithm on an nRF52840 system-on-chip (SoC) requiring only 46 KB
of memory and operating with an inference time of only 192 ms. Once deployed,
the system can last up to 268 days without recharging using a 20 000 mAh
battery pack. For breathing monitoring in bed, the sampling rate can be
lowered, extending battery life to 313 days, making the solution highly
efficient for real-world, low-cost deployments.
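The quoted battery figures are easy to sanity-check with a back-of-the-envelope calculation (a worked arithmetic sketch; the implied currents are derived here, not measured values from the paper):

```python
CAPACITY_MAH = 20_000                          # battery pack capacity

# 268 days on 20,000 mAh implies an average current draw of about:
avg_current_ma = CAPACITY_MAH / (268 * 24)
print(f"implied average draw: {avg_current_ma:.2f} mA")   # ~3.11 mA

# The lower sampling rate for bed monitoring (313 days) implies roughly:
low_rate_ma = CAPACITY_MAH / (313 * 24)
print(f"bed-monitoring draw : {low_rate_ma:.2f} mA")      # ~2.66 mA
```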
|
2504.03775 | Guochao Jiang | Weiqing Li, Guochao Jiang, Xiangyong Ding, Zhangcheng Tao, Chuzhan
Hao, Chenfeng Xu, Yuewei Zhang, Hao Wang | FlowKV: A Disaggregated Inference Framework with Low-Latency KV Cache
Transfer and Load-Aware Scheduling | null | null | null | null | cs.DC cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Disaggregated inference has become an essential framework that separates the
prefill (P) and decode (D) stages in large language model inference to improve
throughput. However, the KV cache transfer faces significant delays between
prefill and decode nodes. The block-wise calling method and discontinuous KV
cache memory allocation increase the number of calls to the transmission
kernel. Additionally, existing frameworks often fix the roles of P and D nodes,
leading to computational imbalances. In this paper, we propose FlowKV, a novel
disaggregated inference framework, which reduces the average transmission
latency of KV cache by 96%, from 0.944s to 0.053s, by optimizing the KV cache
transfer, almost eliminating transfer time relative to the total request
latency. FlowKV introduces the Load-Aware Scheduler for balanced request
scheduling and flexible PD node allocation. This design maximizes hardware
resource utilization, achieving peak system throughput across various
scenarios, including normal, computational imbalance, and extreme overload
conditions. Experimental results demonstrate that FlowKV significantly
accelerates inference by 15.2%-48.9% on LongBench dataset compared to the
baseline and supports applications with heterogeneous GPUs.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 08:58:05 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Li",
"Weiqing",
""
],
[
"Jiang",
"Guochao",
""
],
[
"Ding",
"Xiangyong",
""
],
[
"Tao",
"Zhangcheng",
""
],
[
"Hao",
"Chuzhan",
""
],
[
"Xu",
"Chenfeng",
""
],
[
"Zhang",
"Yuewei",
""
],
[
"Wang",
"Hao",
""
]
] | TITLE: FlowKV: A Disaggregated Inference Framework with Low-Latency KV Cache
Transfer and Load-Aware Scheduling
ABSTRACT: Disaggregated inference has become an essential framework that separates the
prefill (P) and decode (D) stages in large language model inference to improve
throughput. However, the KV cache transfer faces significant delays between
prefill and decode nodes. The block-wise calling method and discontinuous KV
cache memory allocation increase the number of calls to the transmission
kernel. Additionally, existing frameworks often fix the roles of P and D nodes,
leading to computational imbalances. In this paper, we propose FlowKV, a novel
disaggregated inference framework, which reduces the average transmission
latency of KV cache by 96%, from 0.944s to 0.053s, by optimizing the KV cache
transfer, almost eliminating transfer time relative to the total request
latency. FlowKV introduces the Load-Aware Scheduler for balanced request
scheduling and flexible PD node allocation. This design maximizes hardware
resource utilization, achieving peak system throughput across various
scenarios, including normal, computational imbalance, and extreme overload
conditions. Experimental results demonstrate that FlowKV significantly
accelerates inference by 15.2%-48.9% on LongBench dataset compared to the
baseline and supports applications with heterogeneous GPUs.
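A toy sketch of load-aware scheduling with flexible roles (our abstraction, not FlowKV's implementation): each node carries a load estimate, a request's prefill goes to the least-loaded node, and its decode goes to the least-loaded remaining node, so no node is permanently pinned as P or D.

```python
class LoadAwareScheduler:
    def __init__(self, node_ids):
        self.load = {n: 0.0 for n in node_ids}   # estimated outstanding work

    def assign(self, prefill_cost, decode_cost):
        p = min(self.load, key=self.load.get)    # prefill: least-loaded node
        self.load[p] += prefill_cost
        d = min((n for n in self.load if n != p), key=self.load.get)
        self.load[d] += decode_cost              # decode: next least-loaded
        return p, d

    def complete(self, node, cost):
        self.load[node] -= cost                  # release finished work

sched = LoadAwareScheduler(["gpu0", "gpu1", "gpu2"])
for i, (pc, dc) in enumerate([(5, 1), (5, 1), (2, 4)]):
    print(f"req{i} ->", sched.assign(pc, dc))
```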
|
2504.03776 | Mehran Behjati | Huam Ming Ken, Mehran Behjati | Advancing Air Quality Monitoring: TinyML-Based Real-Time Ozone
Prediction with Cost-Effective Edge Devices | This is a preprint version of a paper accepted and published in
Springer Lecture Notes in Networks and Systems. The final version is
available at https://doi.org/10.1007/978-981-96-3949-6_42 | Selected Proceedings from the 2nd ICIMR 2024. Lecture Notes in
Networks and Systems, vol 1316. Springer, Singapore | 10.1007/978-981-96-3949-6_42 | null | eess.SP cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The escalation of urban air pollution necessitates innovative solutions for
real-time air quality monitoring and prediction. This paper introduces a novel
TinyML-based system designed to predict ozone concentration in real-time. The
system employs an Arduino Nano 33 BLE Sense microcontroller equipped with an
MQ7 sensor for carbon monoxide (CO) detection and built-in sensors for
temperature and pressure measurements. The data, sourced from a Kaggle dataset
on air quality parameters from India, underwent thorough cleaning and
preprocessing. Model training and evaluation were performed using Edge Impulse,
considering various combinations of input parameters (CO, temperature, and
pressure). The optimal model, incorporating all three variables, achieved a
mean squared error (MSE) of 0.03 and an R-squared value of 0.95, indicating
high predictive accuracy. The regression model was deployed on the
microcontroller via the Arduino IDE, showcasing robust real-time performance.
Sensitivity analysis identified CO levels as the most critical predictor of
ozone concentration, followed by pressure and temperature. The system's
low-cost and low-power design makes it suitable for widespread implementation,
particularly in resource-constrained settings. This TinyML approach provides
precise real-time predictions of ozone levels, enabling prompt responses to
pollution events and enhancing public health protection.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 10:48:24 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Ken",
"Huam Ming",
""
],
[
"Behjati",
"Mehran",
""
]
] | TITLE: Advancing Air Quality Monitoring: TinyML-Based Real-Time Ozone
Prediction with Cost-Effective Edge Devices
ABSTRACT: The escalation of urban air pollution necessitates innovative solutions for
real-time air quality monitoring and prediction. This paper introduces a novel
TinyML-based system designed to predict ozone concentration in real-time. The
system employs an Arduino Nano 33 BLE Sense microcontroller equipped with an
MQ7 sensor for carbon monoxide (CO) detection and built-in sensors for
temperature and pressure measurements. The data, sourced from a Kaggle dataset
on air quality parameters from India, underwent thorough cleaning and
preprocessing. Model training and evaluation were performed using Edge Impulse,
considering various combinations of input parameters (CO, temperature, and
pressure). The optimal model, incorporating all three variables, achieved a
mean squared error (MSE) of 0.03 and an R-squared value of 0.95, indicating
high predictive accuracy. The regression model was deployed on the
microcontroller via the Arduino IDE, showcasing robust real-time performance.
Sensitivity analysis identified CO levels as the most critical predictor of
ozone concentration, followed by pressure and temperature. The system's
low-cost and low-power design makes it suitable for widespread implementation,
particularly in resource-constrained settings. This TinyML approach provides
precise real-time predictions of ozone levels, enabling prompt responses to
pollution events and enhancing public health protection.
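The modeling step itself is small; the sketch below stands in for it with synthetic data and scikit-learn (the actual pipeline used the Kaggle dataset and Edge Impulse, so the numbers here are not the paper's):

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 1000
co = rng.uniform(0.2, 2.5, n)        # carbon monoxide, ppm
temp = rng.uniform(10, 40, n)        # temperature, deg C
pres = rng.uniform(990, 1030, n)     # pressure, hPa
# synthetic ground truth in which CO dominates, mirroring the sensitivity
# analysis described above
ozone = 30 * co + 0.5 * temp - 0.05 * (pres - 1000) + rng.normal(0, 2, n)

X = np.column_stack([co, temp, pres])
X_tr, X_te, y_tr, y_te = train_test_split(X, ozone, random_state=0)
model = LinearRegression().fit(X_tr, y_tr)
pred = model.predict(X_te)
print("MSE:", mean_squared_error(y_te, pred), "R2:", r2_score(y_te, pred))
```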
|
2504.03778 | Giandomenico Solimando | Stefano Cirillo, Domenico Desiato, Giuseppe Polese, Monica Maria Lucia
Sebillo, Giandomenico Solimando | Augmenting Anonymized Data with AI: Exploring the Feasibility and
Limitations of Large Language Models in Data Enrichment | Stefano Cirillo, Domenico Desiato, Giuseppe Polese, Monica Maria
Lucia Sebillo, Giandomenico Solimando: Augmenting Anonymized Data with AI:
Exploring the Feasibility and Limitations of Large Language Models in Data
Enrichment. In proceedings of the 3rd Italian Conference on Big Data and Data
Science (ITADATA 2024), 17-19 September 2024, Pisa, Italy | 3rd Italian Conference on Big Data and Data Science (ITADATA 2024) | null | ITADATA/2024/18 | cs.CR cs.ET | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have demonstrated advanced capabilities in both
text generation and comprehension, and their application to data archives might
help keep sensitive information about the data subjects private.
In fact, the information contained in data often includes sensitive and
personally identifiable details. This data, if not safeguarded, may bring
privacy risks in terms of both disclosure and identification. Furthermore, the
application of anonymisation techniques, such as k-anonymity, can lead to a
significant reduction in the amount of data within data sources, which may
reduce the efficacy of predictive processes. In our study, we investigate the
capabilities offered by LLMs to enrich anonymized data sources without
affecting their anonymity. To this end, we designed new ad-hoc prompt template
engineering strategies to perform anonymized Data Augmentation and assess the
effectiveness of LLM-based approaches in providing anonymized data. To validate
the anonymization guarantees provided by LLMs, we exploited the pyCanon
library, designed to assess the values of the parameters associated with the
most common privacy-preserving techniques via anonymization. Our experiments
conducted on real-world datasets demonstrate that LLMs yield promising results
for this goal.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 13:26:59 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Cirillo",
"Stefano",
""
],
[
"Desiato",
"Domenico",
""
],
[
"Polese",
"Giuseppe",
""
],
[
"Sebillo",
"Monica Maria Lucia",
""
],
[
"Solimando",
"Giandomenico",
""
]
] | TITLE: Augmenting Anonymized Data with AI: Exploring the Feasibility and
Limitations of Large Language Models in Data Enrichment
ABSTRACT: Large Language Models (LLMs) have demonstrated advanced capabilities in both
text generation and comprehension, and their application to data archives might
help keep sensitive information about the data subjects private.
In fact, the information contained in data often includes sensitive and
personally identifiable details. This data, if not safeguarded, may bring
privacy risks in terms of both disclosure and identification. Furthermore, the
application of anonymisation techniques, such as k-anonymity, can lead to a
significant reduction in the amount of data within data sources, which may
reduce the efficacy of predictive processes. In our study, we investigate the
capabilities offered by LLMs to enrich anonymized data sources without
affecting their anonymity. To this end, we designed new ad-hoc prompt template
engineering strategies to perform anonymized Data Augmentation and assess the
effectiveness of LLM-based approaches in providing anonymized data. To validate
the anonymization guarantees provided by LLMs, we exploited the pyCanon
library, designed to assess the values of the parameters associated with the
most common privacy-preserving techniques via anonymization. Our experiments
conducted on real-world datasets demonstrate that LLMs yield promising results
for this goal.
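The validation idea can be illustrated without assuming any library API (the study used pyCanon; the hand-rolled k-anonymity check below is our stand-in): after LLM-based enrichment, the smallest group size over the quasi-identifiers must not shrink.

```python
import pandas as pd

def k_anonymity(df, quasi_identifiers):
    """k = size of the smallest group sharing one quasi-identifier combination."""
    return int(df.groupby(quasi_identifiers).size().min())

original = pd.DataFrame({
    "age_band":  ["20-30", "20-30", "30-40", "30-40"],
    "zip3":      ["100",   "100",   "210",   "210"],
    "diagnosis": ["A",     "B",     "A",     "C"],
})
# Suppose an LLM enriched each row with a synthetic attribute; anonymity
# over the quasi-identifiers must be preserved by the augmentation.
augmented = original.assign(llm_note=["n1", "n2", "n3", "n4"])

qi = ["age_band", "zip3"]
assert k_anonymity(augmented, qi) >= k_anonymity(original, qi)
print("k before:", k_anonymity(original, qi),
      "k after:", k_anonymity(augmented, qi))
```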
|
2504.03782 | Ramin Zarei Sabzevar | Ramin Zarei Sabzevar, Hamed Mohammadzadeh, Tahmineh Tavakoli, Ahad
Harati | A Study on Adversarial Robustness of Discriminative Prototypical
Learning | null | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks demonstrate significant vulnerability to adversarial
perturbations, posing risks for critical applications. Current adversarial
training methods predominantly focus on robustness against attacks without
explicitly leveraging geometric structures in the latent space, usually
resulting in reduced accuracy on the original clean data. To address these
issues, we propose a novel adversarial training framework named Adversarial
Deep Positive-Negative Prototypes (Adv-DPNP), which integrates discriminative
prototype-based learning with adversarial training. Adv-DPNP uses unified class
prototypes serving dual roles as classifier weights and robust anchors,
enhancing both intra-class compactness and inter-class separation in the latent
space. Moreover, a novel dual-branch training mechanism maintains stable
prototypes by updating them exclusively with clean data; while the feature
extractor layers are learned using both clean and adversarial data to remain
invariant against adversarial perturbations. In addition, our approach utilizes
a composite loss function combining positive prototype alignment, negative
prototype repulsion, and consistency regularization to further enhance
discrimination, adversarial robustness, and clean accuracy. Extensive
experiments conducted on standard benchmark datasets confirm the effectiveness
of Adv-DPNP compared to state-of-the-art methods, achieving higher clean
accuracy and competitive robustness under adversarial perturbations and common
corruptions. Our code is available at https://github.com/fum-rpl/adv-dpnp
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 15:42:58 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Sabzevar",
"Ramin Zarei",
""
],
[
"Mohammadzadeh",
"Hamed",
""
],
[
"Tavakoli",
"Tahmineh",
""
],
[
"Harati",
"Ahad",
""
]
] | TITLE: A Study on Adversarial Robustness of Discriminative Prototypical
Learning
ABSTRACT: Deep neural networks demonstrate significant vulnerability to adversarial
perturbations, posing risks for critical applications. Current adversarial
training methods predominantly focus on robustness against attacks without
explicitly leveraging geometric structures in the latent space, usually
resulting in reduced accuracy on the original clean data. To address these
issues, we propose a novel adversarial training framework named Adversarial
Deep Positive-Negative Prototypes (Adv-DPNP), which integrates discriminative
prototype-based learning with adversarial training. Adv-DPNP uses unified class
prototypes serving dual roles as classifier weights and robust anchors,
enhancing both intra-class compactness and inter-class separation in the latent
space. Moreover, a novel dual-branch training mechanism maintains stable
prototypes by updating them exclusively with clean data; while the feature
extractor layers are learned using both clean and adversarial data to remain
invariant against adversarial perturbations. In addition, our approach utilizes
a composite loss function combining positive prototype alignment, negative
prototype repulsion, and consistency regularization to further enhance
discrimination, adversarial robustness, and clean accuracy. Extensive
experiments conducted on standard benchmark datasets confirm the effectiveness
of Adv-DPNP compared to state-of-the-art methods, achieving higher clean
accuracy and competitive robustness under adversarial perturbations and common
corruptions. Our code is available at https://github.com/fum-rpl/adv-dpnp
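A composite loss in the spirit described above can be sketched as follows (our formulation; the margin, weights, and distance choices are assumptions, not the paper's exact objective):

```python
import torch
import torch.nn.functional as F

def dpnp_loss(z_clean, z_adv, labels, prototypes, margin=1.0,
              w_pos=1.0, w_neg=0.5, w_cons=0.5):
    # z_clean, z_adv: (B, D) features of clean / adversarial inputs;
    # prototypes: (C, D), updated elsewhere with clean data only, per the
    # dual-branch scheme the abstract describes.
    pos = prototypes[labels]                             # class prototypes
    d_pos = ((z_adv - pos) ** 2).sum(-1)                 # positive alignment
    dists = torch.cdist(z_adv, prototypes)               # (B, C)
    dists.scatter_(1, labels[:, None], float("inf"))     # mask own class
    d_neg = dists.min(dim=1).values                      # nearest negative
    repel = F.relu(margin + d_pos.sqrt() - d_neg)        # hinge repulsion
    cons = ((z_clean - z_adv) ** 2).sum(-1)              # consistency term
    return (w_pos * d_pos + w_neg * repel + w_cons * cons).mean()

protos = torch.randn(10, 64)
zc, za = torch.randn(4, 64), torch.randn(4, 64)
print(dpnp_loss(zc, za, torch.tensor([0, 3, 7, 9]), protos))
```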
|
2504.03790 | Gon\c{c}alo Faria | Gon\c{c}alo Faria, Noah A. Smith | Sample, Don't Search: Rethinking Test-Time Alignment for Language Models | null | null | null | null | cs.CL cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Increasing test-time computation has emerged as a promising direction for
improving language model performance, particularly in scenarios where model
finetuning is impractical or impossible due to computational constraints or
private model weights. However, existing test-time search methods using a
reward model (RM) often degrade in quality as compute scales, due to the
over-optimization of what are inherently imperfect reward proxies. We introduce
QAlign, a new test-time alignment approach. As we scale test-time compute,
QAlign converges to sampling from the optimal aligned distribution for each
individual prompt. By adopting recent advances in Markov chain Monte Carlo for
text generation, our method enables better-aligned outputs without modifying
the underlying model or even requiring logit access. We demonstrate the
effectiveness of QAlign on mathematical reasoning benchmarks (GSM8K and
GSM-Symbolic) using a task-specific RM, showing consistent improvements over
existing test-time compute methods like best-of-n and majority voting.
Furthermore, when applied with more realistic RMs trained on the Tulu 3
preference dataset, QAlign outperforms direct preference optimization (DPO),
best-of-n, majority voting, and weighted majority voting on a diverse range of
datasets (GSM8K, MATH500, IFEval, MMLU-Redux, and TruthfulQA). A practical
solution to aligning language models at test time using additional computation
without degradation, our approach expands the limits of the capability that can
be obtained from off-the-shelf language models without further training.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 00:41:40 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Faria",
"Gonçalo",
""
],
[
"Smith",
"Noah A.",
""
]
] | TITLE: Sample, Don't Search: Rethinking Test-Time Alignment for Language Models
ABSTRACT: Increasing test-time computation has emerged as a promising direction for
improving language model performance, particularly in scenarios where model
finetuning is impractical or impossible due to computational constraints or
private model weights. However, existing test-time search methods using a
reward model (RM) often degrade in quality as compute scales, due to the
over-optimization of what are inherently imperfect reward proxies. We introduce
QAlign, a new test-time alignment approach. As we scale test-time compute,
QAlign converges to sampling from the optimal aligned distribution for each
individual prompt. By adopting recent advances in Markov chain Monte Carlo for
text generation, our method enables better-aligned outputs without modifying
the underlying model or even requiring logit access. We demonstrate the
effectiveness of QAlign on mathematical reasoning benchmarks (GSM8K and
GSM-Symbolic) using a task-specific RM, showing consistent improvements over
existing test-time compute methods like best-of-n and majority voting.
Furthermore, when applied with more realistic RMs trained on the Tulu 3
preference dataset, QAlign outperforms direct preference optimization (DPO),
best-of-n, majority voting, and weighted majority voting on a diverse range of
datasets (GSM8K, MATH500, IFEval, MMLU-Redux, and TruthfulQA). A practical
solution to aligning language models at test time using additional computation
without degradation, our approach expands the limits of the capability that can
be obtained from off-the-shelf language models without further training.
|
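The QAlign abstract describes converging, as test-time compute grows, to samples from a reward-tilted distribution via Markov chain Monte Carlo. Below is a minimal sketch of that idea, assuming an independence proposal drawn from the base model; the stand-in sampler and reward are hypothetical placeholders, not the paper's algorithm.

```python
import math
import random

# Hypothetical stand-ins for a base LM sampler and a reward model.
def sample_from_base_model(prompt: str) -> str:
    return prompt + " " + " ".join(random.choices("ABCD", k=random.randint(1, 6)))

def reward(prompt: str, response: str) -> float:
    return float(len(response) % 7)  # placeholder reward

def reward_tilted_mcmc(prompt: str, n_steps: int = 1000, beta: float = 1.0) -> str:
    """Metropolis chain whose stationary law is proportional to
    p_base(y) * exp(r(y) / beta). With an independence proposal drawn from
    p_base, the base densities cancel and acceptance depends only on the
    reward gap between the proposed and current responses."""
    y = sample_from_base_model(prompt)
    for _ in range(n_steps):
        y_new = sample_from_base_model(prompt)
        gap = (reward(prompt, y_new) - reward(prompt, y)) / beta
        if random.random() < min(1.0, math.exp(gap)):
            y = y_new
    return y
```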
2504.03799 | Hengyu Lin | Hengyu Lin | Experimental Study on Time Series Analysis of Lower Limb Rehabilitation
Exercise Data Driven by Novel Model Architecture and Large Models | null | null | null | null | eess.SP cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study investigates the application of novel model architectures and
large-scale foundational models in temporal series analysis of lower limb
rehabilitation motion data, aiming to leverage advancements in machine learning
and artificial intelligence to empower active rehabilitation guidance
strategies for post-stroke patients in limb motor function recovery. Utilizing
the SIAT-LLMD dataset of lower limb movement data proposed by the Shenzhen
Institute of Advanced Technology, Chinese Academy of Sciences, we
systematically elucidate the implementation and analytical outcomes of the
innovative xLSTM architecture and the foundational model Lag-Llama in
short-term temporal prediction tasks involving joint kinematics and dynamics
parameters. The research provides novel insights for AI-enabled medical
rehabilitation applications, demonstrating the potential of cutting-edge model
architectures and large-scale models in rehabilitation medicine temporal
prediction. These findings establish theoretical foundations for future
applications of personalized rehabilitation regimens, offering significant
implications for the development of customized therapeutic interventions in
clinical practice.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 05:40:13 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Lin",
"Hengyu",
""
]
] | TITLE: Experimental Study on Time Series Analysis of Lower Limb Rehabilitation
Exercise Data Driven by Novel Model Architecture and Large Models
ABSTRACT: This study investigates the application of novel model architectures and
large-scale foundational models in temporal series analysis of lower limb
rehabilitation motion data, aiming to leverage advancements in machine learning
and artificial intelligence to empower active rehabilitation guidance
strategies for post-stroke patients in limb motor function recovery. Utilizing
the SIAT-LLMD dataset of lower limb movement data proposed by the Shenzhen
Institute of Advanced Technology, Chinese Academy of Sciences, we
systematically elucidate the implementation and analytical outcomes of the
innovative xLSTM architecture and the foundational model Lag-Llama in
short-term temporal prediction tasks involving joint kinematics and dynamics
parameters. The research provides novel insights for AI-enabled medical
rehabilitation applications, demonstrating the potential of cutting-edge model
architectures and large-scale models in rehabilitation medicine temporal
prediction. These findings establish theoretical foundations for future
applications of personalized rehabilitation regimens, offering significant
implications for the development of customized therapeutic interventions in
clinical practice.
|
2504.03804 | Eslam Eldeeb | Eslam Eldeeb and Hirley Alves | Offline and Distributional Reinforcement Learning for Wireless
Communications | null | null | null | null | cs.LG cs.MA cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid growth of heterogeneous and massive wireless connectivity in 6G
networks demands intelligent solutions to ensure scalability, reliability,
privacy, ultra-low latency, and effective control. Although artificial
intelligence (AI) and machine learning (ML) have demonstrated their potential
in this domain, traditional online reinforcement learning (RL) and deep RL
methods face limitations in real-time wireless networks. For instance, these
methods rely on online interaction with the environment, which might be
unfeasible, costly, or unsafe. In addition, they cannot handle the inherent
uncertainties in real-time wireless applications. We focus on offline and
distributional RL, two advanced RL techniques that can overcome these
challenges by training on static datasets and accounting for network
uncertainties. We introduce a novel framework that combines offline and
distributional RL for wireless communication applications. Through case studies
on unmanned aerial vehicle (UAV) trajectory optimization and radio resource
management (RRM), we demonstrate that our proposed Conservative Quantile
Regression (CQR) algorithm outperforms conventional RL approaches regarding
convergence speed and risk management. Finally, we discuss open challenges and
potential future directions for applying these techniques in 6G networks,
paving the way for safer and more efficient real-time wireless systems.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 09:24:39 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Eldeeb",
"Eslam",
""
],
[
"Alves",
"Hirley",
""
]
] | TITLE: Offline and Distributional Reinforcement Learning for Wireless
Communications
ABSTRACT: The rapid growth of heterogeneous and massive wireless connectivity in 6G
networks demands intelligent solutions to ensure scalability, reliability,
privacy, ultra-low latency, and effective control. Although artificial
intelligence (AI) and machine learning (ML) have demonstrated their potential
in this domain, traditional online reinforcement learning (RL) and deep RL
methods face limitations in real-time wireless networks. For instance, these
methods rely on online interaction with the environment, which might be
unfeasible, costly, or unsafe. In addition, they cannot handle the inherent
uncertainties in real-time wireless applications. We focus on offline and
distributional RL, two advanced RL techniques that can overcome these
challenges by training on static datasets and accounting for network
uncertainties. We introduce a novel framework that combines offline and
distributional RL for wireless communication applications. Through case studies
on unmanned aerial vehicle (UAV) trajectory optimization and radio resource
management (RRM), we demonstrate that our proposed Conservative Quantile
Regression (CQR) algorithm outperforms conventional RL approaches regarding
convergence speed and risk management. Finally, we discuss open challenges and
potential future directions for applying these techniques in 6G networks,
paving the way for safer and more efficient real-time wireless systems.
|
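The abstract's Conservative Quantile Regression combines distributional value estimates with offline-RL pessimism. A hedged PyTorch sketch pairing QR-DQN's quantile Huber loss with a CQL(H)-style penalty follows; how the paper actually couples the two terms may differ.

```python
import torch

def quantile_huber_loss(pred, target, taus, kappa=1.0):
    """QR-DQN quantile Huber loss. pred/target: (B, N) quantile estimates,
    taus: (N,) quantile fractions for the predicted quantiles."""
    td = target.unsqueeze(1) - pred.unsqueeze(2)   # (B, N, N) pairwise TD errors
    huber = torch.where(td.abs() <= kappa,
                        0.5 * td.pow(2),
                        kappa * (td.abs() - 0.5 * kappa))
    weight = torch.abs(taus.view(1, -1, 1) - (td.detach() < 0).float())
    return (weight * huber / kappa).mean()

def conservative_penalty(q_all_actions, q_data_action, alpha=1.0):
    """CQL(H)-style pessimism: push down values of actions the offline
    dataset never logged, relative to the logged action's value."""
    return alpha * (torch.logsumexp(q_all_actions, dim=1) - q_data_action).mean()

# Total objective sketch: distributional TD fit plus conservatism, e.g.
# loss = quantile_huber_loss(pred_q, target_q, taus) + conservative_penalty(q_all, q_data)
```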
2504.03818 | Muhammed Adil Yatkin Ph.D. | Muhammed Adil Yatkin, Mihkel Korgesaar, Jani Romanoff, Umit Islak,
Hasan Kurban | Exploring Various Sequential Learning Methods for Deformation History
Modeling | Engineering Applications of Neural Networks | null | null | null | cs.LG cs.AI cs.CE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Current neural network (NN) models can learn patterns from data points with
historical dependence. Specifically, in natural language processing (NLP),
sequential learning has transitioned from recurrence-based architectures to
transformer-based architectures. However, it is unknown which NN architectures
will perform the best on datasets containing deformation history due to
mechanical loading. Thus, this study ascertains the appropriateness of
1D-convolutional, recurrent, and transformer-based architectures for predicting
deformation localization based on the earlier states in the form of deformation
history. Following this investigation, the crucial incompatibility issues
between the mathematical computation of the prediction process in the
best-performing NN architectures and the actual values derived from the natural
physical properties of the deformation paths are examined in detail.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 15:52:24 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Yatkin",
"Muhammed Adil",
""
],
[
"Korgesaar",
"Mihkel",
""
],
[
"Romanoff",
"Jani",
""
],
[
"Islak",
"Umit",
""
],
[
"Kurban",
"Hasan",
""
]
] | TITLE: Exploring Various Sequential Learning Methods for Deformation History
Modeling
ABSTRACT: Current neural network (NN) models can learn patterns from data points with
historical dependence. Specifically, in natural language processing (NLP),
sequential learning has transitioned from recurrence-based architectures to
transformer-based architectures. However, it is unknown which NN architectures
will perform the best on datasets containing deformation history due to
mechanical loading. Thus, this study ascertains the appropriateness of
1D-convolutional, recurrent, and transformer-based architectures for predicting
deformation localization based on the earlier states in the form of deformation
history. Following this investigation, the crucial incompatibility issues
between the mathematical computation of the prediction process in the
best-performing NN architectures and the actual values derived from the natural
physical properties of the deformation paths are examined in detail.
|
2504.03847 | Xianyuan Liu | Xiaokun Liu, Sayedmohammadreza Rastegari, Yijun Huang, Sxe Chang
Cheong, Weikang Liu, Wenjie Zhao, Qihao Tian, Hongming Wang, Shuo Zhou,
Yingjie Guo, Sina Tabakhi, Xianyuan Liu, Zheqing Zhu, Wei Sang, Haiping Lu | Interpretable Multimodal Learning for Tumor Protein-Metal Binding:
Progress, Challenges, and Perspectives | null | null | null | null | q-bio.QM cs.LG q-bio.BM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In cancer therapeutics, protein-metal binding mechanisms critically govern
drug pharmacokinetics and targeting efficacy, thereby fundamentally shaping the
rational design of anticancer metallodrugs. While conventional laboratory
methods used to study such mechanisms are often costly, low throughput, and
limited in capturing dynamic biological processes, machine learning (ML) has
emerged as a promising alternative. Despite increasing efforts to develop
protein-metal binding datasets and ML algorithms, the application of ML in
tumor protein-metal binding remains limited. Key challenges include a shortage
of high-quality, tumor-specific datasets, insufficient consideration of
multiple data modalities, and the complexity of interpreting results due to the
''black box'' nature of complex ML models. This paper summarizes recent
progress and ongoing challenges in using ML to predict tumor protein-metal
binding, focusing on data, modeling, and interpretability. We present
multimodal protein-metal binding datasets and outline strategies for acquiring,
curating, and preprocessing them for training ML models. Moreover, we explore
the complementary value provided by different data modalities and examine
methods for their integration. We also review approaches for improving model
interpretability to support more trustworthy decisions in cancer research.
Finally, we offer our perspective on research opportunities and propose
strategies to address the scarcity of tumor protein data and the limited number
of predictive models for tumor protein-metal binding. We also highlight two
promising directions for effective metal-based drug design: integrating
protein-protein interaction data to provide structural insights into
metal-binding events and predicting structural changes in tumor proteins after
metal binding.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 18:10:00 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Liu",
"Xiaokun",
""
],
[
"Rastegari",
"Sayedmohammadreza",
""
],
[
"Huang",
"Yijun",
""
],
[
"Cheong",
"Sxe Chang",
""
],
[
"Liu",
"Weikang",
""
],
[
"Zhao",
"Wenjie",
""
],
[
"Tian",
"Qihao",
""
],
[
"Wang",
"Hongming",
""
],
[
"Zhou",
"Shuo",
""
],
[
"Guo",
"Yingjie",
""
],
[
"Tabakhi",
"Sina",
""
],
[
"Liu",
"Xianyuan",
""
],
[
"Zhu",
"Zheqing",
""
],
[
"Sang",
"Wei",
""
],
[
"Lu",
"Haiping",
""
]
] | TITLE: Interpretable Multimodal Learning for Tumor Protein-Metal Binding:
Progress, Challenges, and Perspectives
ABSTRACT: In cancer therapeutics, protein-metal binding mechanisms critically govern
drug pharmacokinetics and targeting efficacy, thereby fundamentally shaping the
rational design of anticancer metallodrugs. While conventional laboratory
methods used to study such mechanisms are often costly, low throughput, and
limited in capturing dynamic biological processes, machine learning (ML) has
emerged as a promising alternative. Despite increasing efforts to develop
protein-metal binding datasets and ML algorithms, the application of ML in
tumor protein-metal binding remains limited. Key challenges include a shortage
of high-quality, tumor-specific datasets, insufficient consideration of
multiple data modalities, and the complexity of interpreting results due to the
''black box'' nature of complex ML models. This paper summarizes recent
progress and ongoing challenges in using ML to predict tumor protein-metal
binding, focusing on data, modeling, and interpretability. We present
multimodal protein-metal binding datasets and outline strategies for acquiring,
curating, and preprocessing them for training ML models. Moreover, we explore
the complementary value provided by different data modalities and examine
methods for their integration. We also review approaches for improving model
interpretability to support more trustworthy decisions in cancer research.
Finally, we offer our perspective on research opportunities and propose
strategies to address the scarcity of tumor protein data and the limited number
of predictive models for tumor protein-metal binding. We also highlight two
promising directions for effective metal-based drug design: integrating
protein-protein interaction data to provide structural insights into
metal-binding events and predicting structural changes in tumor proteins after
metal binding.
|
2504.03850 | Ved Umrajkar | Ved Umrajkar and Aakash Kumar Singh | Detection Limits and Statistical Separability of Tree Ring Watermarks in
Rectified Flow-based Text-to-Image Generation Models | null | null | null | null | cs.CV cs.AI cs.CR cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Tree-Ring Watermarking is a significant technique for authenticating
AI-generated images. However, its effectiveness in rectified flow-based models
remains unexplored, particularly given the inherent challenges of these models
with noise latent inversion. Through extensive experimentation, we evaluated
and compared the detection and separability of watermarks between SD 2.1 and
FLUX.1-dev models. By analyzing various text guidance configurations and
augmentation attacks, we demonstrate how inversion limitations affect both
watermark recovery and the statistical separation between watermarked and
unwatermarked images. Our findings provide valuable insights into the current
limitations of Tree-Ring Watermarking in the current SOTA models and highlight
the critical need for improved inversion methods to achieve reliable watermark
detection and separability. The official implementation, dataset release and
all experimental results are available at this
\href{https://github.com/dsgiitr/flux-watermarking}{\textbf{link}}.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 18:24:23 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Umrajkar",
"Ved",
""
],
[
"Singh",
"Aakash Kumar",
""
]
] | TITLE: Detection Limits and Statistical Separability of Tree Ring Watermarks in
Rectified Flow-based Text-to-Image Generation Models
ABSTRACT: Tree-Ring Watermarking is a significant technique for authenticating
AI-generated images. However, its effectiveness in rectified flow-based models
remains unexplored, particularly given the inherent challenges of these models
with noise latent inversion. Through extensive experimentation, we evaluated
and compared the detection and separability of watermarks between SD 2.1 and
FLUX.1-dev models. By analyzing various text guidance configurations and
augmentation attacks, we demonstrate how inversion limitations affect both
watermark recovery and the statistical separation between watermarked and
unwatermarked images. Our findings provide valuable insights into the current
limitations of Tree-Ring Watermarking in the current SOTA models and highlight
the critical need for improved inversion methods to achieve reliable watermark
detection and separability. The official implementation, dataset release and
all experimental results are available at this
\href{https://github.com/dsgiitr/flux-watermarking}{\textbf{link}}.
|
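Tree-Ring detection, as described above, hinges on inverting an image back to its initial noise latent and checking for a ring-shaped key in Fourier space. A minimal NumPy sketch of the scoring step, assuming the inverted latent and key are already available as equally sized 2D arrays; the radii and the L1 score are illustrative conventions.

```python
import numpy as np

def ring_mask(size, r_min=4, r_max=16):
    # Boolean mask selecting an annulus around the spectrum's center.
    yy, xx = np.mgrid[:size, :size]
    r = np.hypot(yy - size / 2.0, xx - size / 2.0)
    return (r >= r_min) & (r <= r_max)

def tree_ring_score(inverted_latent, key):
    """L1 distance between the recovered latent's (shifted) Fourier spectrum
    and the ring-shaped key; lower scores indicate a likely watermark. The
    hard step in practice -- inverting the image back to its noise latent,
    which the abstract identifies as the bottleneck for rectified-flow
    models -- is assumed done upstream."""
    spec = np.fft.fftshift(np.fft.fft2(inverted_latent))
    m = ring_mask(inverted_latent.shape[0])
    return float(np.abs(spec[m] - key[m]).mean())
```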
2504.03865 | Ishika Ghosh | Erin Wolf Chambers, Ishika Ghosh, Elizabeth Munch, Sarah Percival, Bei
Wang | Towards an Optimal Bound for the Interleaving Distance on Mapper Graphs | null | null | null | null | cs.CG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mapper graphs are a widely used tool in topological data analysis and
visualization. They can be viewed as discrete approximations of Reeb graphs,
offering insight into the shape and connectivity of complex data. Given a
high-dimensional point cloud $\mathbb{X}$ equipped with a function $f:
\mathbb{X} \to \mathbb{R}$, a mapper graph provides a summary of the
topological structure of $\mathbb{X}$ induced by $f$, where each node
represents a local neighborhood, and edges connect nodes whose corresponding
neighborhoods overlap. Our focus is the interleaving distance for mapper
graphs, arising from a discretization of the version for Reeb graphs, which is
NP-hard to compute. This distance quantifies the similarity between two mapper
graphs by measuring the extent to which they must be ``stretched'' to become
comparable. Recent work introduced a loss function that provides an upper bound
on the interleaving distance for mapper graphs, which evaluates how far a given
assignment is from being a true interleaving. Finding the loss is
computationally tractable, offering a practical way to estimate the distance.
In this paper, we employ a categorical formulation of mapper graphs and
develop the first framework for computing the associated loss function. Since
the quality of the bound depends on the chosen assignment, we optimize this
loss function by formulating the problem of finding the best assignment as an
integer linear programming problem. To evaluate the effectiveness of our
optimization, we apply it to small mapper graphs where the interleaving
distance is known, demonstrating that the optimized upper bound successfully
matches the interleaving distance in these cases. Additionally, we conduct an
experiment on the MPEG-7 dataset, computing the pairwise optimal loss on a
collection of mapper graphs derived from images and leveraging the distance
bound for image classification.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 18:43:01 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Chambers",
"Erin Wolf",
""
],
[
"Ghosh",
"Ishika",
""
],
[
"Munch",
"Elizabeth",
""
],
[
"Percival",
"Sarah",
""
],
[
"Wang",
"Bei",
""
]
] | TITLE: Towards an Optimal Bound for the Interleaving Distance on Mapper Graphs
ABSTRACT: Mapper graphs are a widely used tool in topological data analysis and
visualization. They can be viewed as discrete approximations of Reeb graphs,
offering insight into the shape and connectivity of complex data. Given a
high-dimensional point cloud $\mathbb{X}$ equipped with a function $f:
\mathbb{X} \to \mathbb{R}$, a mapper graph provides a summary of the
topological structure of $\mathbb{X}$ induced by $f$, where each node
represents a local neighborhood, and edges connect nodes whose corresponding
neighborhoods overlap. Our focus is the interleaving distance for mapper
graphs, arising from a discretization of the version for Reeb graphs, which is
NP-hard to compute. This distance quantifies the similarity between two mapper
graphs by measuring the extent to which they must be ``stretched'' to become
comparable. Recent work introduced a loss function that provides an upper bound
on the interleaving distance for mapper graphs, which evaluates how far a given
assignment is from being a true interleaving. Finding the loss is
computationally tractable, offering a practical way to estimate the distance.
In this paper, we employ a categorical formulation of mapper graphs and
develop the first framework for computing the associated loss function. Since
the quality of the bound depends on the chosen assignment, we optimize this
loss function by formulating the problem of finding the best assignment as an
integer linear programming problem. To evaluate the effectiveness of our
optimization, we apply it to small mapper graphs where the interleaving
distance is known, demonstrating that the optimized upper bound successfully
matches the interleaving distance in these cases. Additionally, we conduct an
experiment on the MPEG-7 dataset, computing the pairwise optimal loss on a
collection of mapper graphs derived from images and leveraging the distance
bound for image classification.
|
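The abstract casts finding the best assignment between two mapper graphs as an integer linear program. As a toy illustration only, here is a generic minimum-cost assignment ILP in PuLP; the paper's real formulation adds constraints encoding the interleaving conditions, which are omitted here.

```python
# pip install pulp
import pulp

def best_assignment(cost):
    """Toy ILP: map each source node to exactly one target node, minimizing
    total cost. cost: n x m matrix of assignment costs. A stand-in for the
    paper's loss-minimizing assignment search."""
    n, m = len(cost), len(cost[0])
    prob = pulp.LpProblem("assignment", pulp.LpMinimize)
    x = [[pulp.LpVariable(f"x_{i}_{j}", cat="Binary") for j in range(m)]
         for i in range(n)]
    prob += pulp.lpSum(cost[i][j] * x[i][j] for i in range(n) for j in range(m))
    for i in range(n):
        prob += pulp.lpSum(x[i][j] for j in range(m)) == 1  # each node mapped once
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return [[int(v.value()) for v in row] for row in x]
```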
2504.03877 | Yuchen Wei | Yuchen Wei, Dennis Pearl, Matthew Beckman, and Rebecca J. Passonneau | Concept-based Rubrics Improve LLM Formative Assessment and Data
Synthesis | 13 pages excluding references. 9 tables and 4 figures | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Formative assessment in STEM topics aims to promote student learning by
identifying students' current understanding, thus targeting how to promote
further learning. Previous studies suggest that the assessment performance of
current generative large language models (LLMs) on constructed responses to
open-ended questions is significantly lower than that of supervised classifiers
trained on high-quality labeled data. However, we demonstrate that
concept-based rubrics can significantly enhance LLM performance, which narrows
the gap between LLMs as off-the-shelf assessment tools and smaller supervised
models, which need large amounts of training data. For datasets where
concept-based rubrics allow LLMs to achieve strong performance, we show that
the concept-based rubrics help the same LLMs generate high quality synthetic
data for training lightweight, high-performance supervised models. Our
experiments span diverse STEM student response datasets with labels of varying
quality, including a new real-world dataset that contains some AI-assisted
responses, which introduces additional considerations.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 19:02:07 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wei",
"Yuchen",
""
],
[
"Pearl",
"Dennis",
""
],
[
"Beckman",
"Matthew",
""
],
[
"Passonneau",
"Rebecca J.",
""
]
] | TITLE: Concept-based Rubrics Improve LLM Formative Assessment and Data
Synthesis
ABSTRACT: Formative assessment in STEM topics aims to promote student learning by
identifying students' current understanding, thus targeting how to promote
further learning. Previous studies suggest that the assessment performance of
current generative large language models (LLMs) on constructed responses to
open-ended questions is significantly lower than that of supervised classifiers
trained on high-quality labeled data. However, we demonstrate that
concept-based rubrics can significantly enhance LLM performance, which narrows
the gap between LLMs as off-the-shelf assessment tools and smaller supervised
models, which need large amounts of training data. For datasets where
concept-based rubrics allow LLMs to achieve strong performance, we show that
the concept-based rubrics help the same LLMs generate high quality synthetic
data for training lightweight, high-performance supervised models. Our
experiments span diverse STEM student response datasets with labels of varying
quality, including a new real-world dataset that contains some AI-assisted
responses, which introduces additional considerations.
|
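A concept-based rubric, in the sense of the abstract above, can be operationalized as a structured grading prompt. A minimal sketch; the rubric wording and scoring scale are illustrative assumptions, not the paper's instrument.

```python
def build_rubric_prompt(question: str, response: str, concepts: list[str]) -> str:
    """Assemble a grading prompt around a concept-based rubric."""
    rubric = "\n".join(f"- {c}" for c in concepts)
    return (
        "You are grading a student's short answer.\n"
        f"Question: {question}\n"
        f"Student response: {response}\n"
        "Rubric: a complete answer should demonstrate these concepts:\n"
        f"{rubric}\n"
        "For each concept, state present/absent with a brief justification, "
        "then give an overall score from 0 to 3."
    )

print(build_rubric_prompt(
    "Why does the sample mean have lower variance than a single observation?",
    "Because averaging cancels out noise.",
    ["variance of the mean scales as sigma^2/n", "independence of observations"],
))
```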
2504.03886 | Jianhao Zheng | Jianhao Zheng, Zihan Zhu, Valentin Bieri, Marc Pollefeys, Songyou
Peng, Iro Armeni | WildGS-SLAM: Monocular Gaussian Splatting SLAM in Dynamic Environments | null | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We present WildGS-SLAM, a robust and efficient monocular RGB SLAM system
designed to handle dynamic environments by leveraging uncertainty-aware
geometric mapping. Unlike traditional SLAM systems, which assume static scenes,
our approach integrates depth and uncertainty information to enhance tracking,
mapping, and rendering performance in the presence of moving objects. We
introduce an uncertainty map, predicted by a shallow multi-layer perceptron and
DINOv2 features, to guide dynamic object removal during both tracking and
mapping. This uncertainty map enhances dense bundle adjustment and Gaussian map
optimization, improving reconstruction accuracy. Our system is evaluated on
multiple datasets and demonstrates artifact-free view synthesis. Results
showcase WildGS-SLAM's superior performance in dynamic environments compared to
state-of-the-art methods.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 19:19:40 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Zheng",
"Jianhao",
""
],
[
"Zhu",
"Zihan",
""
],
[
"Bieri",
"Valentin",
""
],
[
"Pollefeys",
"Marc",
""
],
[
"Peng",
"Songyou",
""
],
[
"Armeni",
"Iro",
""
]
] | TITLE: WildGS-SLAM: Monocular Gaussian Splatting SLAM in Dynamic Environments
ABSTRACT: We present WildGS-SLAM, a robust and efficient monocular RGB SLAM system
designed to handle dynamic environments by leveraging uncertainty-aware
geometric mapping. Unlike traditional SLAM systems, which assume static scenes,
our approach integrates depth and uncertainty information to enhance tracking,
mapping, and rendering performance in the presence of moving objects. We
introduce an uncertainty map, predicted by a shallow multi-layer perceptron and
DINOv2 features, to guide dynamic object removal during both tracking and
mapping. This uncertainty map enhances dense bundle adjustment and Gaussian map
optimization, improving reconstruction accuracy. Our system is evaluated on
multiple datasets and demonstrates artifact-free view synthesis. Results
showcase WildGS-SLAM's superior performance in dynamic environments compared to
state-of-the-art methods.
|
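The abstract's uncertainty map comes from a shallow MLP over DINOv2 features and is used to down-weight unreliable (dynamic) pixels. A hedged PyTorch sketch of that pattern; the feature dimension (384, as in DINOv2 ViT-S) and the exact residual weighting are assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class UncertaintyMLP(nn.Module):
    """Shallow MLP mapping per-pixel DINOv2 features to uncertainty in (0, 1)."""
    def __init__(self, feat_dim=384, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),
        )

    def forward(self, feats):  # feats: (..., feat_dim)
        return self.net(feats).squeeze(-1)

def weighted_photometric_loss(residuals, uncertainty):
    # Down-weight pixels the MLP flags as dynamic/unreliable, so moving
    # objects contribute little to tracking and map optimization.
    return ((1.0 - uncertainty) * residuals.pow(2)).mean()
```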
2504.03889 | Pedro Sandoval-Segura | Pedro Sandoval-Segura, Xijun Wang, Ashwinee Panda, Micah Goldblum,
Ronen Basri, Tom Goldstein, David Jacobs | Using Attention Sinks to Identify and Evaluate Dormant Heads in
Pretrained LLMs | 22 pages, 14 figures | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Multi-head attention is foundational to large language models (LLMs),
enabling different heads to have diverse focus on relevant input tokens.
However, learned behaviors like attention sinks, where the first token receives
most attention despite limited semantic importance, challenge our understanding
of multi-head attention. To analyze this phenomenon, we propose a new
definition for attention heads dominated by attention sinks, known as dormant
attention heads. We compare our definition to prior work in a model
intervention study where we test whether dormant heads matter for inference by
zeroing out the output of dormant attention heads. Using six pretrained models
and five benchmark datasets, we find our definition to be more model- and
dataset-agnostic. Using our definition on most models, more than 4% of a
model's attention heads can be zeroed while maintaining average accuracy, and
zeroing more than 14% of a model's attention heads can keep accuracy to within
1% of the pretrained model's average accuracy. Further analysis reveals that
dormant heads emerge early in pretraining and can transition between dormant
and active states during pretraining. Additionally, we provide evidence that
they depend on characteristics of the input text.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 19:28:23 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Sandoval-Segura",
"Pedro",
""
],
[
"Wang",
"Xijun",
""
],
[
"Panda",
"Ashwinee",
""
],
[
"Goldblum",
"Micah",
""
],
[
"Basri",
"Ronen",
""
],
[
"Goldstein",
"Tom",
""
],
[
"Jacobs",
"David",
""
]
] | TITLE: Using Attention Sinks to Identify and Evaluate Dormant Heads in
Pretrained LLMs
ABSTRACT: Multi-head attention is foundational to large language models (LLMs),
enabling different heads to have diverse focus on relevant input tokens.
However, learned behaviors like attention sinks, where the first token receives
most attention despite limited semantic importance, challenge our understanding
of multi-head attention. To analyze this phenomenon, we propose a new
definition for attention heads dominated by attention sinks, known as dormant
attention heads. We compare our definition to prior work in a model
intervention study where we test whether dormant heads matter for inference by
zeroing out the output of dormant attention heads. Using six pretrained models
and five benchmark datasets, we find our definition to be more model- and
dataset-agnostic. Using our definition on most models, more than 4% of a
model's attention heads can be zeroed while maintaining average accuracy, and
zeroing more than 14% of a model's attention heads can keep accuracy to within
1% of the pretrained model's average accuracy. Further analysis reveals that
dormant heads emerge early in pretraining and can transition between dormant
and active states during pretraining. Additionally, we provide evidence that
they depend on characteristics of the input text.
|
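One way to operationalize "dormant" heads as described above is to flag heads whose attention mass concentrates on the first (sink) token and zero their outputs. A sketch follows; the 0.9 threshold and the averaging scheme are placeholders, not the paper's exact definition.

```python
import torch

def dormant_head_mask(attn, sink_threshold=0.9):
    """attn: (batch, heads, tgt_len, src_len) post-softmax attention weights.
    A head is flagged dormant when, averaged over queries and the batch,
    almost all of its attention mass lands on the first (sink) token."""
    sink_mass = attn[..., 0]               # (batch, heads, tgt_len)
    per_head = sink_mass.mean(dim=(0, 2))  # (heads,) mean mass on token 0
    return per_head > sink_threshold

def zero_dormant_heads(head_outputs, mask):
    # head_outputs: (batch, heads, tgt_len, head_dim); zero flagged heads,
    # mimicking the intervention study described in the abstract.
    return head_outputs * (~mask).float().view(1, -1, 1, 1)
```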
2504.03894 | Haiqing Li | Haiqing Li, Yuzhi Guo, Feng Jiang, Qifeng Zhou, Hehuan Ma, Junzhou
Huang | Leveraging Gait Patterns as Biomarkers: An attention-guided Deep
Multiple Instance Learning Network for Scoliosis Classification | 6 pages, 3 figures | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Scoliosis is a spinal curvature disorder that is difficult to detect early
and can compress the chest cavity, impacting respiratory function and cardiac
health. Especially for adolescents, delayed detection and treatment result in
worsening compression. Traditional scoliosis detection methods heavily rely on
clinical expertise, and X-ray imaging poses radiation risks, limiting
large-scale early screening. We propose an Attention-Guided Deep Multi-Instance
Learning method (Gait-MIL) to effectively capture discriminative features from
gait patterns, which is inspired by ScoNet-MT's pioneering use of gait patterns
for scoliosis detection. We evaluate our method on the first large-scale
dataset based on gait patterns for scoliosis classification. The results
demonstrate that our study improves the performance of using gait as a
biomarker for scoliosis detection, and significantly enhances detection accuracy
for the particularly challenging Neutral cases, where subtle indicators are
often overlooked. Our Gait-MIL also performs robustly in imbalanced scenarios,
making it a promising tool for large-scale scoliosis screening.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 19:35:33 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Li",
"Haiqing",
""
],
[
"Guo",
"Yuzhi",
""
],
[
"Jiang",
"Feng",
""
],
[
"Zhou",
"Qifeng",
""
],
[
"Ma",
"Hehuan",
""
],
[
"Huang",
"Junzhou",
""
]
] | TITLE: Leveraging Gait Patterns as Biomarkers: An attention-guided Deep
Multiple Instance Learning Network for Scoliosis Classification
ABSTRACT: Scoliosis is a spinal curvature disorder that is difficult to detect early
and can compress the chest cavity, impacting respiratory function and cardiac
health. Especially for adolescents, delayed detection and treatment result in
worsening compression. Traditional scoliosis detection methods heavily rely on
clinical expertise, and X-ray imaging poses radiation risks, limiting
large-scale early screening. We propose an Attention-Guided Deep Multi-Instance
Learning method (Gait-MIL) to effectively capture discriminative features from
gait patterns, which is inspired by ScoNet-MT's pioneering use of gait patterns
for scoliosis detection. We evaluate our method on the first large-scale
dataset based on gait patterns for scoliosis classification. The results
demonstrate that our study improves the performance of using gait as a
biomarker for scoliosis detection, and significantly enhances detection accuracy
for the particularly challenging Neutral cases, where subtle indicators are
often overlooked. Our Gait-MIL also performs robustly in imbalanced scenarios,
making it a promising tool for large-scale scoliosis screening.
|
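Attention-guided multiple instance learning, as in Gait-MIL, typically scores each instance (here, a gait-segment embedding) and pools a bag by the attention-weighted sum. A minimal PyTorch sketch in the style of Ilse et al.'s attention-based MIL pooling; dimensions are illustrative and this is not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    """Attention-based MIL pooling: each instance gets a learned weight,
    and the bag embedding is the weighted sum of instance embeddings."""
    def __init__(self, in_dim=128, attn_dim=64):
        super().__init__()
        self.V = nn.Linear(in_dim, attn_dim)
        self.w = nn.Linear(attn_dim, 1)

    def forward(self, instances):  # instances: (bag_size, in_dim)
        scores = self.w(torch.tanh(self.V(instances)))  # (bag_size, 1)
        alpha = torch.softmax(scores, dim=0)            # attention over instances
        bag = (alpha * instances).sum(dim=0)            # (in_dim,) bag embedding
        return bag, alpha.squeeze(-1)                   # weights reveal key segments
```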
2504.03902 | John Paisley | John Paisley, Ghazal Fazelnia, Brian Barr | Stochastic Variational Inference with Tuneable Stochastic Annealing | null | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | In this paper, we exploit the observation that stochastic variational
inference (SVI) is a form of annealing and present a modified SVI approach --
applicable to both large and small datasets -- that allows the amount of
annealing done by SVI to be tuned. We are motivated by the fact that, in SVI,
the larger the batch size the more approximately Gaussian is the intrinsic
noise, but the smaller its variance. This low variance reduces the amount of
annealing which is needed to escape bad local optimal solutions. We propose a
simple method for achieving both goals of having larger variance noise to
escape bad local optimal solutions and more data information to obtain more
accurate gradient directions. The idea is to set an actual batch size, which
may be the size of the data set, and a smaller effective batch size that
matches the larger level of variance at this smaller batch size. The result is
an approximation to the maximum entropy stochastic gradient at this variance
level. We theoretically motivate our approach for the framework of conjugate
exponential family models and illustrate the method empirically on the
probabilistic matrix factorization collaborative filter, the Latent Dirichlet
Allocation topic model, and the Gaussian mixture model.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 19:46:10 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Paisley",
"John",
""
],
[
"Fazelnia",
"Ghazal",
""
],
[
"Barr",
"Brian",
""
]
] | TITLE: Stochastic Variational Inference with Tuneable Stochastic Annealing
ABSTRACT: In this paper, we exploit the observation that stochastic variational
inference (SVI) is a form of annealing and present a modified SVI approach --
applicable to both large and small datasets -- that allows the amount of
annealing done by SVI to be tuned. We are motivated by the fact that, in SVI,
the larger the batch size the more approximately Gaussian is the intrinsic
noise, but the smaller its variance. This low variance reduces the amount of
annealing which is needed to escape bad local optimal solutions. We propose a
simple method for achieving both goals of having larger variance noise to
escape bad local optimal solutions and more data information to obtain more
accurate gradient directions. The idea is to set an actual batch size, which
may be the size of the data set, and a smaller effective batch size that
matches the larger level of variance at this smaller batch size. The result is
an approximation to the maximum entropy stochastic gradient at this variance
level. We theoretically motivate our approach for the framework of conjugate
exponential family models and illustrate the method empirically on the
probabilistic matrix factorization collaborative filter, the Latent Dirichlet
Allocation topic model, and the Gaussian mixture model.
|
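The effective-batch-size idea above can be sketched directly: compute the gradient on the actual batch, then add Gaussian noise so the total variance matches what a smaller batch would produce. A NumPy illustration; the paper derives the exact maximum-entropy construction for conjugate exponential family models, which this toy does not.

```python
import numpy as np

def annealed_gradient(per_example_grads, b_eff):
    """per_example_grads: (B, D) gradients for an actual batch of size B.
    The mean of B samples has variance sigma^2/B; to mimic an effective
    batch of size b_eff < B, inject noise with the missing variance
    sigma^2 * (1/b_eff - 1/B), estimated per coordinate from the batch."""
    B, _ = per_example_grads.shape
    g_mean = per_example_grads.mean(axis=0)
    g_std = per_example_grads.std(axis=0, ddof=1)     # per-coordinate spread
    extra_var = max(1.0 / b_eff - 1.0 / B, 0.0)
    noise = np.random.normal(0.0, g_std * np.sqrt(extra_var))
    return g_mean + noise                              # annealed gradient estimate
```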
2504.03906 | Abhilekh Borah | Abhilekh Borah, Hasnat Md Abdullah, Kangda Wei, Ruihong Huang | CliME: Evaluating Multimodal Climate Discourse on Social Media and the
Climate Alignment Quotient (CAQ) | 16 pages, 9 figures | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | The rise of Large Language Models (LLMs) has raised questions about their
ability to understand climate-related contexts. Though climate change dominates
social media, analyzing its multimodal expressions is understudied, and current
tools have failed to determine whether LLMs amplify credible solutions or
spread unsubstantiated claims. To address this, we introduce CliME (Climate
Change Multimodal Evaluation), a first-of-its-kind multimodal dataset,
comprising 2579 Twitter and Reddit posts. The benchmark features a diverse
collection of humorous memes and skeptical posts, capturing how these formats
distill complex issues into viral narratives that shape public opinion and
policy discussions. To systematically evaluate LLM performance, we present the
Climate Alignment Quotient (CAQ), a novel metric comprising five distinct
dimensions: Articulation, Evidence, Resonance, Transition, and Specificity.
Additionally, we propose three analytical lenses: Actionability, Criticality,
and Justice, to guide the assessment of LLM-generated climate discourse using
CAQ. Our findings, based on the CAQ metric, indicate that while most evaluated
LLMs perform relatively well in Criticality and Justice, they consistently
underperform on the Actionability axis. Among the models evaluated, Claude 3.7
Sonnet achieves the highest overall performance. We publicly release our CliME
dataset and code to foster further research in this domain.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 20:01:00 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Borah",
"Abhilekh",
""
],
[
"Abdullah",
"Hasnat Md",
""
],
[
"Wei",
"Kangda",
""
],
[
"Huang",
"Ruihong",
""
]
] | TITLE: CliME: Evaluating Multimodal Climate Discourse on Social Media and the
Climate Alignment Quotient (CAQ)
ABSTRACT: The rise of Large Language Models (LLMs) has raised questions about their
ability to understand climate-related contexts. Though climate change dominates
social media, analyzing its multimodal expressions is understudied, and current
tools have failed to determine whether LLMs amplify credible solutions or
spread unsubstantiated claims. To address this, we introduce CliME (Climate
Change Multimodal Evaluation), a first-of-its-kind multimodal dataset,
comprising 2579 Twitter and Reddit posts. The benchmark features a diverse
collection of humorous memes and skeptical posts, capturing how these formats
distill complex issues into viral narratives that shape public opinion and
policy discussions. To systematically evaluate LLM performance, we present the
Climate Alignment Quotient (CAQ), a novel metric comprising five distinct
dimensions: Articulation, Evidence, Resonance, Transition, and Specificity.
Additionally, we propose three analytical lenses: Actionability, Criticality,
and Justice, to guide the assessment of LLM-generated climate discourse using
CAQ. Our findings, based on the CAQ metric, indicate that while most evaluated
LLMs perform relatively well in Criticality and Justice, they consistently
underperform on the Actionability axis. Among the models evaluated, Claude 3.7
Sonnet achieves the highest overall performance. We publicly release our CliME
dataset and code to foster further research in this domain.
|
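As a worked illustration of aggregating the Climate Alignment Quotient's five dimensions, assuming equal weighting and scores in [0, 1] (the paper may weight or scale the dimensions differently):

```python
def climate_alignment_quotient(scores):
    """scores: mapping from the five CAQ dimensions to values in [0, 1]."""
    dims = ("articulation", "evidence", "resonance", "transition", "specificity")
    return sum(scores[d] for d in dims) / len(dims)

example = {"articulation": 0.8, "evidence": 0.6, "resonance": 0.7,
           "transition": 0.5, "specificity": 0.4}
print(climate_alignment_quotient(example))  # 0.6
```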
2504.03909 | Ziyue Xu | Ziyue Xu, Yuan-Ting Hsieh, Zhihong Zhang, Holger R. Roth, Chester
Chen, Yan Cheng, and Andrew Feng | Secure Federated XGBoost with CUDA-accelerated Homomorphic Encryption
via NVIDIA FLARE | null | null | null | null | cs.CR cs.DC cs.ET | http://creativecommons.org/licenses/by/4.0/ | Federated learning (FL) enables collaborative model training across
decentralized datasets. NVIDIA FLARE's Federated XGBoost extends the popular
XGBoost algorithm to both vertical and horizontal federated settings,
facilitating joint model development without direct data sharing. However, the
initial implementation assumed mutual trust over the sharing of intermediate
gradient statistics produced by the XGBoost algorithm, leaving potential
vulnerabilities to honest-but-curious adversaries. This work introduces "Secure
Federated XGBoost", an efficient solution to mitigate these risks. We implement
secure federated algorithms for both vertical and horizontal scenarios,
addressing diverse data security patterns. To secure the messages, we leverage
homomorphic encryption (HE) to protect sensitive information during training. A
novel plugin and processor interface seamlessly integrates HE into the
Federated XGBoost pipeline, enabling secure aggregation over ciphertexts. We
present both CPU-based and CUDA-accelerated HE plugins, demonstrating
significant performance gains. Notably, our CUDA-accelerated HE implementation
achieves up to 30x speedups in vertical Federated XGBoost compared to existing
third-party solutions. By securing critical computation steps and encrypting
sensitive assets, Secure Federated XGBoost provides robust data privacy
guarantees, reinforcing the fundamental benefits of federated learning while
maintaining high performance.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 20:08:24 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Xu",
"Ziyue",
""
],
[
"Hsieh",
"Yuan-Ting",
""
],
[
"Zhang",
"Zhihong",
""
],
[
"Roth",
"Holger R.",
""
],
[
"Chen",
"Chester",
""
],
[
"Cheng",
"Yan",
""
],
[
"Feng",
"Andrew",
""
]
] | TITLE: Secure Federated XGBoost with CUDA-accelerated Homomorphic Encryption
via NVIDIA FLARE
ABSTRACT: Federated learning (FL) enables collaborative model training across
decentralized datasets. NVIDIA FLARE's Federated XGBoost extends the popular
XGBoost algorithm to both vertical and horizontal federated settings,
facilitating joint model development without direct data sharing. However, the
initial implementation assumed mutual trust over the sharing of intermediate
gradient statistics produced by the XGBoost algorithm, leaving potential
vulnerabilities to honest-but-curious adversaries. This work introduces "Secure
Federated XGBoost", an efficient solution to mitigate these risks. We implement
secure federated algorithms for both vertical and horizontal scenarios,
addressing diverse data security patterns. To secure the messages, we leverage
homomorphic encryption (HE) to protect sensitive information during training. A
novel plugin and processor interface seamlessly integrates HE into the
Federated XGBoost pipeline, enabling secure aggregation over ciphertexts. We
present both CPU-based and CUDA-accelerated HE plugins, demonstrating
significant performance gains. Notably, our CUDA-accelerated HE implementation
achieves up to 30x speedups in vertical Federated XGBoost compared to existing
third-party solutions. By securing critical computation steps and encrypting
sensitive assets, Secure Federated XGBoost provides robust data privacy
guarantees, reinforcing the fundamental benefits of federated learning while
maintaining high performance.
|
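The core pattern the abstract relies on, additive homomorphic aggregation of per-party gradient statistics, can be illustrated with the CPU-based python-paillier library. This is a plain illustration of the HE primitive, not NVIDIA FLARE's plugin or its CUDA-accelerated implementation.

```python
# pip install phe  (python-paillier)
from phe import paillier

# Each party encrypts its local gradient sum for a candidate tree split;
# the aggregator adds ciphertexts without ever seeing the plaintexts.
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

local_grad_sums = [0.52, -1.37, 0.89]  # one value per party (toy numbers)
encrypted = [public_key.encrypt(g) for g in local_grad_sums]

# Additive homomorphism: the sum of ciphertexts decrypts to the plaintext sum.
agg = encrypted[0]
for c in encrypted[1:]:
    agg = agg + c

print(private_key.decrypt(agg))  # ~0.04, the aggregate gradient statistic
```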
2504.03913 | Majdi Radaideh | Nataly R. Panczyk, Omer F. Erdem, Majdi I. Radaideh | Opening the Black-Box: Symbolic Regression with Kolmogorov-Arnold
Networks for Energy Applications | 35 pages, 11 Figures, 14 Tables | null | null | null | cs.LG cs.SC stat.ML | http://creativecommons.org/licenses/by-nc-nd/4.0/ | While most modern machine learning methods offer speed and accuracy, few
promise interpretability or explainability -- two key features necessary for
highly sensitive industries, like medicine, finance, and engineering. Using
eight datasets representative of one especially sensitive industry, nuclear
power, this work compares a traditional feedforward neural network (FNN) to a
Kolmogorov-Arnold Network (KAN). We consider not only model performance and
accuracy, but also interpretability through model architecture and
explainability through a post-hoc SHAP analysis. In terms of accuracy, we find
KANs and FNNs comparable across all datasets, when output dimensionality is
limited. KANs, which transform into symbolic equations after training, yield
perfectly interpretable models while FNNs remain black-boxes. Finally, using
the post-hoc explainability results from Kernel SHAP, we find that KANs learn
real, physical relations from experimental data, while FNNs simply produce
statistically accurate results. Overall, this analysis finds KANs a promising
alternative to traditional machine learning methods, particularly in
applications requiring both accuracy and comprehensibility.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 20:23:33 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Panczyk",
"Nataly R.",
""
],
[
"Erdem",
"Omer F.",
""
],
[
"Radaideh",
"Majdi I.",
""
]
] | TITLE: Opening the Black-Box: Symbolic Regression with Kolmogorov-Arnold
Networks for Energy Applications
ABSTRACT: While most modern machine learning methods offer speed and accuracy, few
promise interpretability or explainability -- two key features necessary for
highly sensitive industries, like medicine, finance, and engineering. Using
eight datasets representative of one especially sensitive industry, nuclear
power, this work compares a traditional feedforward neural network (FNN) to a
Kolmogorov-Arnold Network (KAN). We consider not only model performance and
accuracy, but also interpretability through model architecture and
explainability through a post-hoc SHAP analysis. In terms of accuracy, we find
KANs and FNNs comparable across all datasets, when output dimensionality is
limited. KANs, which transform into symbolic equations after training, yield
perfectly interpretable models while FNNs remain black-boxes. Finally, using
the post-hoc explainability results from Kernel SHAP, we find that KANs learn
real, physical relations from experimental data, while FNNs simply produce
statistically accurate results. Overall, this analysis finds KANs a promising
alternative to traditional machine learning methods, particularly in
applications requiring both accuracy and comprehensibility.
|
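The post-hoc explainability step mentioned above uses Kernel SHAP. A self-contained sketch on synthetic data, with a small scikit-learn FNN standing in for the paper's models; the data, architecture, and sample counts are placeholders.

```python
import numpy as np
import shap
from sklearn.neural_network import MLPRegressor

# Synthetic stand-in for one of the paper's regression datasets.
X = np.random.rand(200, 4)
y = X[:, 0] + 2.0 * X[:, 1] ** 2

model = MLPRegressor(hidden_layer_sizes=(32,), max_iter=2000).fit(X, y)

background = shap.sample(X, 50)                 # background set for Kernel SHAP
explainer = shap.KernelExplainer(model.predict, background)
shap_values = explainer.shap_values(X[:5])      # attributions for 5 samples
print(np.round(shap_values, 3))                 # features 0 and 1 should dominate
```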
2504.03915 | Rufei Ma | Rufei Ma and Chao Chen | RF-BayesPhysNet: A Bayesian rPPG Uncertainty Estimation Method for
Complex Scenarios | 11 pages, 4 figures | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Remote photoplethysmography (rPPG) technology infers heart rate by capturing
subtle color changes in facial skin using a camera, demonstrating great
potential in non-contact heart rate measurement. However, measurement accuracy
significantly decreases in complex scenarios such as lighting changes and head
movements compared to ideal laboratory conditions. Existing deep learning
models often neglect the quantification of measurement uncertainty, limiting
their credibility in dynamic scenes. To address the issue of insufficient rPPG
measurement reliability in complex scenarios, this paper introduces Bayesian
neural networks to the rPPG field for the first time, proposing the Robust
Fusion Bayesian Physiological Network (RF-BayesPhysNet), which can model both
aleatoric and epistemic uncertainty. It leverages variational inference to
balance accuracy and computational efficiency. Due to the current lack of
uncertainty estimation metrics in the rPPG field, this paper also proposes a
new set of methods, using Spearman correlation coefficient, prediction interval
coverage, and confidence interval width, to measure the effectiveness of
uncertainty estimation methods under different noise conditions. Experiments
show that the model, with only double the parameters compared to traditional
network models, achieves a MAE of 2.56 on the UBFC-RPPG dataset, surpassing
most models. It demonstrates good uncertainty estimation capability in no-noise
and low-noise conditions, providing prediction confidence and significantly
enhancing robustness in real-world applications. We have open-sourced the code
at https://github.com/AIDC-rPPG/RF-Net
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 20:24:57 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Ma",
"Rufei",
""
],
[
"Chen",
"Chao",
""
]
] | TITLE: RF-BayesPhysNet: A Bayesian rPPG Uncertainty Estimation Method for
Complex Scenarios
ABSTRACT: Remote photoplethysmography (rPPG) technology infers heart rate by capturing
subtle color changes in facial skin using a camera, demonstrating great
potential in non-contact heart rate measurement. However, measurement accuracy
significantly decreases in complex scenarios such as lighting changes and head
movements compared to ideal laboratory conditions. Existing deep learning
models often neglect the quantification of measurement uncertainty, limiting
their credibility in dynamic scenes. To address the issue of insufficient rPPG
measurement reliability in complex scenarios, this paper introduces Bayesian
neural networks to the rPPG field for the first time, proposing the Robust
Fusion Bayesian Physiological Network (RF-BayesPhysNet), which can model both
aleatoric and epistemic uncertainty. It leverages variational inference to
balance accuracy and computational efficiency. Due to the current lack of
uncertainty estimation metrics in the rPPG field, this paper also proposes a
new set of methods, using Spearman correlation coefficient, prediction interval
coverage, and confidence interval width, to measure the effectiveness of
uncertainty estimation methods under different noise conditions. Experiments
show that the model, with only double the parameters compared to traditional
network models, achieves a MAE of 2.56 on the UBFC-RPPG dataset, surpassing
most models. It demonstrates good uncertainty estimation capability in no-noise
and low-noise conditions, providing prediction confidence and significantly
enhancing robustness in real-world applications. We have open-sourced the code
at https://github.com/AIDC-rPPG/RF-Net
|
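The three uncertainty metrics the abstract proposes can be computed from Monte Carlo predictions of any stochastic model (e.g., repeated forward passes of a variational network). A sketch, assuming an (S, N) array of sampled predictions; constructing intervals from empirical quantiles is an assumption.

```python
import numpy as np
from scipy.stats import spearmanr

def uncertainty_metrics(y_true, mc_preds, alpha=0.05):
    """y_true: (N,) targets; mc_preds: (S, N) Monte Carlo predictions.
    Returns the three quantities the abstract lists: Spearman correlation
    between predictive spread and absolute error, prediction interval
    coverage, and mean interval width."""
    mean = mc_preds.mean(axis=0)
    std = mc_preds.std(axis=0)
    lo = np.quantile(mc_preds, alpha / 2, axis=0)
    hi = np.quantile(mc_preds, 1 - alpha / 2, axis=0)
    err = np.abs(y_true - mean)
    rho, _ = spearmanr(std, err)                       # does uncertainty track error?
    coverage = np.mean((y_true >= lo) & (y_true <= hi))
    width = np.mean(hi - lo)
    return rho, coverage, width
```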
2504.03918 | Mahsa Bazzaz | Mahsa Bazzaz, Seth Cooper | Analysis of Uncertainty in Procedural Maps in Slay the Spire | null | null | 10.1145/3723498.3723846 | null | cs.IT math.IT | http://creativecommons.org/licenses/by/4.0/ | This work investigates the role of uncertainty in Slay the Spire using an
information-theoretic framework. Focusing on the entropy of game paths (which
are based on procedurally-generated maps), we analyze how randomness influences
player decision-making and success. By examining a dataset of 20,000 game runs,
we quantify the entropy of paths taken by players and relate it to their
outcomes and skill levels. The results show that victorious runs are associated
with higher normalized entropy, suggesting more risk-taking. Additionally,
higher-skill players tend to exhibit distinct patterns of risk-taking behavior
in later game stages.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 20:29:04 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Bazzaz",
"Mahsa",
""
],
[
"Cooper",
"Seth",
""
]
] | TITLE: Analysis of Uncertainty in Procedural Maps in Slay the Spire
ABSTRACT: This work investigates the role of uncertainty in Slay the Spire using an
information-theoretic framework. Focusing on the entropy of game paths (which
are based on procedurally-generated maps), we analyze how randomness influences
player decision-making and success. By examining a dataset of 20,000 game runs,
we quantify the entropy of paths taken by players and relate it to their
outcomes and skill levels. The results show that victorious runs are associated
with higher normalized entropy, suggesting more risk-taking. Additionally,
higher-skill players tend to exhibit distinct patterns of risk-taking behavior
in later game stages.
|
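A minimal sketch of normalized path entropy as the abstract uses it, treating a run as a sequence of path choices; the exact path encoding in the paper may differ.

```python
import math
from collections import Counter

def normalized_path_entropy(choices):
    """choices: sequence of node types (or branch picks) along one run's path.
    Shannon entropy of the empirical distribution, normalized by log2 of the
    number of distinct options so runs of different lengths are comparable."""
    counts = Counter(choices)
    n = sum(counts.values())
    h = -sum((c / n) * math.log2(c / n) for c in counts.values())
    k = len(counts)
    return h / math.log2(k) if k > 1 else 0.0

print(normalized_path_entropy(["fight", "shop", "rest", "fight", "elite"]))
```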
2504.03928 | Abde Rrafik Laakel Hemdanou | Abderrafik Laakel Hemdanou and Youssef Achtoun and Mohammed Lamarti
Sefian and Ismail Tahiri and Abdellatif El Afia | Random Normed k-Means: A Paradigm-Shift in Clustering within
Probabilistic Metric Spaces | 27 pages, 16 figures | null | null | null | cs.LG math.PR stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing approaches remain largely constrained by traditional distance
metrics, limiting their effectiveness in handling random data. In this work, we
introduce the first k-means variant in the literature that operates within a
probabilistic metric space, replacing conventional distance measures with a
well-defined distance distribution function. This pioneering approach enables
more flexible and robust clustering in both deterministic and random datasets,
establishing a new foundation for clustering in stochastic environments. By
adopting a probabilistic perspective, our method not only introduces a fresh
paradigm but also establishes a rigorous theoretical framework that is expected
to serve as a key reference for future clustering research involving random
data. Extensive experiments on diverse real and synthetic datasets assess our
model's effectiveness using widely recognized evaluation metrics, including
Silhouette, Davies-Bouldin, Calinski-Harabasz, the adjusted Rand index, and
distortion. Comparative analyses against established methods such as k-means++,
fuzzy c-means, and kernel probabilistic k-means demonstrate the superior
performance of our proposed random normed k-means (RNKM) algorithm. Notably,
RNKM exhibits a remarkable ability to identify nonlinearly separable
structures, making it highly effective in complex clustering scenarios. These
findings position RNKM as a groundbreaking advancement in clustering research,
offering a powerful alternative to traditional techniques while addressing a
long-standing gap in the literature. By bridging probabilistic metrics with
clustering, this study provides a foundational reference for future
developments and opens new avenues for advanced data analysis in dynamic,
data-driven applications.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 20:48:43 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Hemdanou",
"Abderrafik Laakel",
""
],
[
"Achtoun",
"Youssef",
""
],
[
"Sefian",
"Mohammed Lamarti",
""
],
[
"Tahiri",
"Ismail",
""
],
[
"Afia",
"Abdellatif El",
""
]
] | TITLE: Random Normed k-Means: A Paradigm-Shift in Clustering within
Probabilistic Metric Spaces
ABSTRACT: Existing approaches remain largely constrained by traditional distance
metrics, limiting their effectiveness in handling random data. In this work, we
introduce the first k-means variant in the literature that operates within a
probabilistic metric space, replacing conventional distance measures with a
well-defined distance distribution function. This pioneering approach enables
more flexible and robust clustering in both deterministic and random datasets,
establishing a new foundation for clustering in stochastic environments. By
adopting a probabilistic perspective, our method not only introduces a fresh
paradigm but also establishes a rigorous theoretical framework that is expected
to serve as a key reference for future clustering research involving random
data. Extensive experiments on diverse real and synthetic datasets assess our
model's effectiveness using widely recognized evaluation metrics, including
Silhouette, Davies-Bouldin, Calinski-Harabasz, the adjusted Rand index, and
distortion. Comparative analyses against established methods such as k-means++,
fuzzy c-means, and kernel probabilistic k-means demonstrate the superior
performance of our proposed random normed k-means (RNKM) algorithm. Notably,
RNKM exhibits a remarkable ability to identify nonlinearly separable
structures, making it highly effective in complex clustering scenarios. These
findings position RNKM as a groundbreaking advancement in clustering research,
offering a powerful alternative to traditional techniques while addressing a
long-standing gap in the literature. By bridging probabilistic metrics with
clustering, this study provides a foundational reference for future
developments and opens new avenues for advanced data analysis in dynamic,
data-driven applications.
|
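The random normed k-means record above centers on replacing point-to-center distances with distance distribution functions. As a toy illustration of that idea (not the authors' RNKM algorithm), the sketch below clusters random observations, each represented by a small sample of noisy replicates, using a summary functional of the empirical distance distribution; all names and constants are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "random data": each observation is a distribution, represented by a
# small sample of noisy replicates around a latent point.
latent = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(5, 1, (50, 2))])
replicates = latent[:, None, :] + rng.normal(0, 0.3, (100, 8, 2))

def distance_summary(reps, center):
    """Summarize the *distribution* of distances between a random
    observation (its replicates) and a center by its empirical mean."""
    return np.linalg.norm(reps - center, axis=1).mean()

# Lloyd-style iterations driven by the distributional distance summary.
centers = replicates.mean(axis=1)[[0, 99]]
for _ in range(10):
    d = np.stack([[distance_summary(r, c) for c in centers]
                  for r in replicates])
    labels = d.argmin(axis=1)
    centers = np.array([replicates[labels == k].reshape(-1, 2).mean(axis=0)
                        for k in range(2)])
print(np.bincount(labels))   # roughly [50 50] for well-separated clusters
```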
2504.03940 | Mahsa Bazzaz | Mahsa Bazzaz, Seth Cooper | Analysis of Robustness of a Large Game Corpus | null | null | 10.1145/3723498.3723820 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Procedural content generation via machine learning (PCGML) in games involves
using machine learning techniques to create game content such as maps and
levels. 2D tile-based game levels have consistently served as a standard
dataset for PCGML because they are a simplified version of game levels while
maintaining the specific constraints typical of games, such as being solvable.
In this work, we highlight the unique characteristics of game levels, including
their structured discrete data nature, the local and global constraints
inherent in the games, and the sensitivity of the game levels to small changes
in input. We define the robustness of data as a measure of sensitivity to small
changes in input that cause a change in output, and we use this measure to
analyze and compare these levels to state-of-the-art machine learning datasets,
showcasing the subtle differences in their nature. We also constructed a large
dataset from four games inspired by popular classic tile-based games that
showcase these characteristics and address the challenge of sparse data in
PCGML by providing a significantly larger dataset than those currently
available.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 21:15:13 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Bazzaz",
"Mahsa",
""
],
[
"Cooper",
"Seth",
""
]
] | TITLE: Analysis of Robustness of a Large Game Corpus
ABSTRACT: Procedural content generation via machine learning (PCGML) in games involves
using machine learning techniques to create game content such as maps and
levels. 2D tile-based game levels have consistently served as a standard
dataset for PCGML because they are a simplified version of game levels while
maintaining the specific constraints typical of games, such as being solvable.
In this work, we highlight the unique characteristics of game levels, including
their structured discrete data nature, the local and global constraints
inherent in the games, and the sensitivity of the game levels to small changes
in input. We define the robustness of data as a measure of sensitivity to small
changes in input that cause a change in output, and we use this measure to
analyze and compare these levels to state-of-the-art machine learning datasets,
showcasing the subtle differences in their nature. We also constructed a large
dataset from four games inspired by popular classic tile-based games that
showcase these characteristics and address the challenge of sparse data in
PCGML by providing a significantly larger dataset than those currently
available.
|
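The robustness definition in the record above (sensitivity of an output to small input changes) lends itself to a direct computation. Below is a minimal, self-contained sketch on a toy tile level, using path-existence as a stand-in for solvability; the level layout, tile alphabet, and trial count are all illustrative.

```python
from collections import deque
import random

# Toy 2D tile level: '.' floor, '#' wall; "solvable" = path from S to G.
level = ["S..#....",
         ".#.#.##.",
         ".#......",
         "....##.G"]

def solvable(grid):
    rows, cols = len(grid), len(grid[0])
    start = next((r, c) for r in range(rows) for c in range(cols)
                 if grid[r][c] == "S")
    seen, queue = {start}, deque([start])
    while queue:
        r, c = queue.popleft()
        if grid[r][c] == "G":
            return True
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] != "#" \
                    and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    return False

def robustness(grid, trials=200, rng=random.Random(0)):
    """Fraction of single-tile flips that change solvability: one way to
    operationalize 'sensitivity to small changes in input'."""
    flips = 0
    for _ in range(trials):
        r = rng.randrange(len(grid)); c = rng.randrange(len(grid[0]))
        if grid[r][c] in "SG":
            continue
        new = "#" if grid[r][c] == "." else "."
        mutated = [row[:c] + new + row[c + 1:] if i == r else row
                   for i, row in enumerate(grid)]
        flips += solvable(mutated) != solvable(grid)
    return flips / trials

print(f"sensitivity: {robustness(level):.2f}")
```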
2504.03948 | Sathyanarayanan Aakur | Sanjoy Kundu, Shanmukha Vellamchetti, Sathyanarayanan N. Aakur | ProbRes: Probabilistic Jump Diffusion for Open-World Egocentric Activity
Recognition | 17 pages, 6 figures, 3 tables. Under review | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Open-world egocentric activity recognition poses a fundamental challenge due
to its unconstrained nature, requiring models to infer unseen activities from
an expansive, partially observed search space. We introduce ProbRes, a
Probabilistic Residual search framework based on jump-diffusion that
efficiently navigates this space by balancing prior-guided exploration with
likelihood-driven exploitation. Our approach integrates structured commonsense
priors to construct a semantically coherent search space, adaptively refines
predictions using Vision-Language Models (VLMs), and employs a stochastic search
mechanism to locate high-likelihood activity labels while efficiently avoiding
exhaustive enumeration. We systematically evaluate ProbRes across multiple
openness levels (L0 - L3), demonstrating its adaptability to increasing search
space complexity. In addition to achieving state-of-the-art performance on
benchmark datasets (GTEA Gaze, GTEA Gaze+, EPIC-Kitchens, and Charades-Ego), we
establish a clear taxonomy for open-world recognition, delineating the
challenges and methodological advancements necessary for egocentric activity
understanding. Our results highlight the importance of structured search
strategies, paving the way for scalable and efficient open-world activity
recognition.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 21:30:45 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Kundu",
"Sanjoy",
""
],
[
"Vellamchetti",
"Shanmukha",
""
],
[
"Aakur",
"Sathyanarayanan N.",
""
]
] | TITLE: ProbRes: Probabilistic Jump Diffusion for Open-World Egocentric Activity
Recognition
ABSTRACT: Open-world egocentric activity recognition poses a fundamental challenge due
to its unconstrained nature, requiring models to infer unseen activities from
an expansive, partially observed search space. We introduce ProbRes, a
Probabilistic Residual search framework based on jump-diffusion that
efficiently navigates this space by balancing prior-guided exploration with
likelihood-driven exploitation. Our approach integrates structured commonsense
priors to construct a semantically coherent search space, adaptively refines
predictions using Vision-Language Models (VLMs), and employs a stochastic search
mechanism to locate high-likelihood activity labels while efficiently avoiding
exhaustive enumeration. We systematically evaluate ProbRes across multiple
openness levels (L0 - L3), demonstrating its adaptability to increasing search
space complexity. In addition to achieving state-of-the-art performance on
benchmark datasets (GTEA Gaze, GTEA Gaze+, EPIC-Kitchens, and Charades-Ego), we
establish a clear taxonomy for open-world recognition, delineating the
challenges and methodological advancements necessary for egocentric activity
understanding. Our results highlight the importance of structured search
strategies, paving the way for scalable and efficient open-world activity
recognition.
|
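The ProbRes record describes a search that alternates prior-guided jumps with likelihood-driven local moves. The sketch below shows only that exploration/exploitation skeleton on a synthetic label space; the prior, the `vlm_score` stand-in, and all constants are assumptions, not the paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy open-world label space: a prior over labels and an (expensive) scorer.
n_labels = 1000
prior = rng.dirichlet(np.ones(n_labels))          # structured prior (stand-in)
true_scores = rng.normal(0, 1, n_labels)
true_scores[rng.choice(n_labels, 5)] += 4         # a few high-likelihood labels

def vlm_score(label):          # stand-in for a VLM likelihood call
    return true_scores[label]

def jump_diffusion_search(steps=300, jump_prob=0.3):
    """Alternate prior-guided jumps (exploration) with local diffusion
    around the incumbent label (exploitation)."""
    current = rng.choice(n_labels, p=prior)
    best = (vlm_score(current), current)
    for _ in range(steps):
        if rng.random() < jump_prob:               # jump: resample from prior
            cand = rng.choice(n_labels, p=prior)
        else:                                      # diffuse: small local move
            cand = (current + rng.integers(-5, 6)) % n_labels
        if vlm_score(cand) >= vlm_score(current):  # greedy accept
            current = cand
        best = max(best, (vlm_score(current), current))
    return best

print(jump_diffusion_search())   # evaluates ~300 labels instead of all 1000
```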
2504.03957 | Baolei Zhang | Baolei Zhang, Yuxi Chen, Minghong Fang, Zhuqing Liu, Lihai Nie, Tong
Li, Zheli Liu | Practical Poisoning Attacks against Retrieval-Augmented Generation | null | null | null | null | cs.CR cs.IR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) have demonstrated impressive natural language
processing abilities but face challenges such as hallucination and outdated
knowledge. Retrieval-Augmented Generation (RAG) has emerged as a
state-of-the-art approach to mitigate these issues. While RAG enhances LLM
outputs, it remains vulnerable to poisoning attacks. Recent studies show that
injecting poisoned text into the knowledge database can compromise RAG systems,
but most existing attacks assume that the attacker can insert a sufficient
number of poisoned texts per query to outnumber correct-answer texts in
retrieval, an assumption that is often unrealistic. To address this limitation,
we propose CorruptRAG, a practical poisoning attack against RAG systems in
which the attacker injects only a single poisoned text, enhancing both
feasibility and stealth. Extensive experiments across multiple datasets
demonstrate that CorruptRAG achieves higher attack success rates compared to
existing baselines.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 21:49:42 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Zhang",
"Baolei",
""
],
[
"Chen",
"Yuxi",
""
],
[
"Fang",
"Minghong",
""
],
[
"Liu",
"Zhuqing",
""
],
[
"Nie",
"Lihai",
""
],
[
"Li",
"Tong",
""
],
[
"Liu",
"Zheli",
""
]
] | TITLE: Practical Poisoning Attacks against Retrieval-Augmented Generation
ABSTRACT: Large language models (LLMs) have demonstrated impressive natural language
processing abilities but face challenges such as hallucination and outdated
knowledge. Retrieval-Augmented Generation (RAG) has emerged as a
state-of-the-art approach to mitigate these issues. While RAG enhances LLM
outputs, it remains vulnerable to poisoning attacks. Recent studies show that
injecting poisoned text into the knowledge database can compromise RAG systems,
but most existing attacks assume that the attacker can insert a sufficient
number of poisoned texts per query to outnumber correct-answer texts in
retrieval, an assumption that is often unrealistic. To address this limitation,
we propose CorruptRAG, a practical poisoning attack against RAG systems in
which the attacker injects only a single poisoned text, enhancing both
feasibility and stealth. Extensive experiments across multiple datasets
demonstrate that CorruptRAG achieves higher attack success rates compared to
existing baselines.
|
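The CorruptRAG record hinges on a single injected passage dominating retrieval for a targeted query. The toy below demonstrates that failure mode with a simple TF-IDF retriever; the corpus, the crafted passage, and the lexical retriever are illustrative stand-ins, not the paper's attack construction.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Tiny knowledge base plus ONE attacker-inserted passage crafted to be
# lexically close to the target query (toy stand-in for a learned retriever).
corpus = [
    "The Eiffel Tower is located in Paris, France.",
    "Mount Everest is the tallest mountain on Earth.",
    "Water boils at 100 degrees Celsius at sea level.",
]
poison = "Eiffel Tower located where? The Eiffel Tower is located in Berlin."
corpus_poisoned = corpus + [poison]

query = "Where is the Eiffel Tower located?"

vec = TfidfVectorizer().fit(corpus_poisoned + [query])
sims = cosine_similarity(vec.transform([query]),
                         vec.transform(corpus_poisoned))[0]
for i in sims.argsort()[::-1][:2]:
    print(f"{sims[i]:.2f}  {corpus_poisoned[i]}")
# The single poisoned passage typically outranks the correct one, so the
# generator is conditioned on attacker-chosen "evidence".
```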
2504.03978 | Philippe Bich | Francesco De Santis, Gabriele Ciravegna, Philippe Bich, Danilo
Giordano, Tania Cerquitelli | V-CEM: Bridging Performance and Intervenability in Concept-based Models | Paper accepted at: The 3rd World Conference on Explainable Artificial
Intelligence | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Concept-based eXplainable AI (C-XAI) is a rapidly growing research field that
enhances AI model interpretability by leveraging intermediate,
human-understandable concepts. This approach not only enhances model
transparency but also enables human intervention, allowing users to interact
with these concepts to refine and improve the model's performance. Concept
Bottleneck Models (CBMs) explicitly predict concepts before making final
decisions, enabling interventions to correct misclassified concepts. While CBMs
remain effective in Out-Of-Distribution (OOD) settings with intervention, they
struggle to match the performance of black-box models. Concept Embedding Models
(CEMs) address this by learning concept embeddings from both concept
predictions and input data, enhancing In-Distribution (ID) accuracy but
reducing the effectiveness of interventions, especially in OOD scenarios. In
this work, we propose the Variational Concept Embedding Model (V-CEM), which
leverages variational inference to improve intervention responsiveness in CEMs.
We evaluated our model on various textual and visual datasets in terms of ID
performance, intervention responsiveness in both ID and OOD settings, and
Concept Representation Cohesiveness (CRC), a metric we propose to assess the
quality of the concept embedding representations. The results demonstrate that
V-CEM retains CEM-level ID performance while achieving intervention
effectiveness similar to CBM in OOD settings, effectively reducing the gap
between interpretability (intervention) and generalization (performance).
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 22:43:04 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"De Santis",
"Francesco",
""
],
[
"Ciravegna",
"Gabriele",
""
],
[
"Bich",
"Philippe",
""
],
[
"Giordano",
"Danilo",
""
],
[
"Cerquitelli",
"Tania",
""
]
] | TITLE: V-CEM: Bridging Performance and Intervenability in Concept-based Models
ABSTRACT: Concept-based eXplainable AI (C-XAI) is a rapidly growing research field that
enhances AI model interpretability by leveraging intermediate,
human-understandable concepts. This approach not only enhances model
transparency but also enables human intervention, allowing users to interact
with these concepts to refine and improve the model's performance. Concept
Bottleneck Models (CBMs) explicitly predict concepts before making final
decisions, enabling interventions to correct misclassified concepts. While CBMs
remain effective in Out-Of-Distribution (OOD) settings with intervention, they
struggle to match the performance of black-box models. Concept Embedding Models
(CEMs) address this by learning concept embeddings from both concept
predictions and input data, enhancing In-Distribution (ID) accuracy but
reducing the effectiveness of interventions, especially in OOD scenarios. In
this work, we propose the Variational Concept Embedding Model (V-CEM), which
leverages variational inference to improve intervention responsiveness in CEMs.
We evaluated our model on various textual and visual datasets in terms of ID
performance, intervention responsiveness in both ID and OOD settings, and
Concept Representation Cohesiveness (CRC), a metric we propose to assess the
quality of the concept embedding representations. The results demonstrate that
V-CEM retains CEM-level ID performance while achieving intervention
effectiveness similar to CBM in OOD settings, effectively reducing the gap
between interpretability (intervention) and generalization (performance).
|
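V-CEM, per the record above, combines concept probabilities with variational embeddings. A minimal PyTorch sketch of one plausible such layer follows, with intervention modeled as overwriting predicted concept probabilities; the module name, shapes, and gating choice are assumptions rather than the published architecture.

```python
import torch
import torch.nn as nn

class VariationalConceptEmbedding(nn.Module):
    """Minimal sketch: each concept gets a predicted probability plus a
    Gaussian posterior over its embedding (reparameterization trick).
    Intervening = overwriting probabilities with ground-truth values."""
    def __init__(self, in_dim, n_concepts, emb_dim):
        super().__init__()
        self.prob = nn.Linear(in_dim, n_concepts)
        self.mu = nn.Linear(in_dim, n_concepts * emb_dim)
        self.logvar = nn.Linear(in_dim, n_concepts * emb_dim)
        self.n, self.d = n_concepts, emb_dim

    def forward(self, h, interventions=None):
        p = torch.sigmoid(self.prob(h))                      # (B, n)
        if interventions is not None:                        # human correction
            mask, truth = interventions
            p = torch.where(mask, truth, p)
        mu = self.mu(h).view(-1, self.n, self.d)
        std = (0.5 * self.logvar(h).view(-1, self.n, self.d)).exp()
        z = mu + std * torch.randn_like(std)                 # sample embedding
        return p.unsqueeze(-1) * z, p                        # gate by concept prob

layer = VariationalConceptEmbedding(16, n_concepts=4, emb_dim=8)
h = torch.randn(2, 16)
emb, p = layer(h)
print(emb.shape, p.shape)        # torch.Size([2, 4, 8]) torch.Size([2, 4])

# Intervening on the first concept of every sample:
mask = torch.tensor([[True, False, False, False]] * 2)
emb_i, p_i = layer(h, interventions=(mask, torch.ones(2, 4)))
```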
2504.03979 | Amit Kumar | Amit K Verma, Zhisong Zhang, Junwon Seo, Robin Kuo, Runbo Jiang, Emma
Strubell, Anthony D Rollett | Structured Extraction of Process Structure Properties Relationships in
Materials Science | 16 pages, 3 figures, 13 tables | null | null | null | cs.CL cond-mat.mtrl-sci cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the advent of large language models (LLMs), the vast unstructured text
within millions of academic papers is increasingly accessible for materials
discovery, although significant challenges remain. While LLMs offer promising
few- and zero-shot learning capabilities, particularly valuable in the
materials domain where expert annotations are scarce, general-purpose LLMs
often fail to address key materials-specific queries without further
adaptation. To bridge this gap, fine-tuning LLMs on human-labeled data is
essential for effective structured knowledge extraction. In this study, we
introduce a novel annotation schema designed to extract generic
process-structure-properties relationships from scientific literature. We
demonstrate the utility of this approach using a dataset of 128 abstracts, with
annotations drawn from two distinct domains: high-temperature materials (Domain
I) and uncertainty quantification in simulating materials microstructure
(Domain II). Initially, we developed a conditional random field (CRF) model
based on MatBERT, a domain-specific BERT variant, and evaluated its performance
on Domain I. Subsequently, we compared this model with a fine-tuned LLM (GPT-4o
from OpenAI) under identical conditions. Our results indicate that fine-tuning
LLMs can significantly improve entity extraction performance over the BERT-CRF
baseline on Domain I. However, when additional examples from Domain II were
incorporated, the performance of the BERT-CRF model became comparable to that
of the GPT-4o model. These findings underscore the potential of our schema for
structured knowledge extraction and highlight the complementary strengths of
both modeling approaches.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 22:44:02 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Verma",
"Amit K",
""
],
[
"Zhang",
"Zhisong",
""
],
[
"Seo",
"Junwon",
""
],
[
"Kuo",
"Robin",
""
],
[
"Jiang",
"Runbo",
""
],
[
"Strubell",
"Emma",
""
],
[
"Rollett",
"Anthony D",
""
]
] | TITLE: Structured Extraction of Process Structure Properties Relationships in
Materials Science
ABSTRACT: With the advent of large language models (LLMs), the vast unstructured text
within millions of academic papers is increasingly accessible for materials
discovery, although significant challenges remain. While LLMs offer promising
few- and zero-shot learning capabilities, particularly valuable in the
materials domain where expert annotations are scarce, general-purpose LLMs
often fail to address key materials-specific queries without further
adaptation. To bridge this gap, fine-tuning LLMs on human-labeled data is
essential for effective structured knowledge extraction. In this study, we
introduce a novel annotation schema designed to extract generic
process-structure-properties relationships from scientific literature. We
demonstrate the utility of this approach using a dataset of 128 abstracts, with
annotations drawn from two distinct domains: high-temperature materials (Domain
I) and uncertainty quantification in simulating materials microstructure
(Domain II). Initially, we developed a conditional random field (CRF) model
based on MatBERT, a domain-specific BERT variant, and evaluated its performance
on Domain I. Subsequently, we compared this model with a fine-tuned LLM (GPT-4o
from OpenAI) under identical conditions. Our results indicate that fine-tuning
LLMs can significantly improve entity extraction performance over the BERT-CRF
baseline on Domain I. However, when additional examples from Domain II were
incorporated, the performance of the BERT-CRF model became comparable to that
of the GPT-4o model. These findings underscore the potential of our schema for
structured knowledge extraction and highlight the complementary strengths of
both modeling approaches.
|
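The annotation schema in the record above ultimately feeds token-level taggers such as BERT-CRF. The snippet below shows the standard BIO conversion such pipelines rely on; the sentence, spans, and entity labels (PROCESS/STRUCTURE/PROPERTY) are illustrative and may not match the paper's exact label inventory.

```python
# Turning a span-annotated sentence into BIO tags, the usual input format
# for BERT-CRF style entity extractors.
sentence = ("Annealing at 900 C increased the grain size "
            "and the yield strength").split()
spans = {(0, 1): "PROCESS", (6, 8): "STRUCTURE", (10, 12): "PROPERTY"}

tags = ["O"] * len(sentence)
for (start, end), label in spans.items():
    tags[start] = f"B-{label}"
    for i in range(start + 1, end):
        tags[i] = f"I-{label}"

for tok, tag in zip(sentence, tags):
    print(f"{tok:10s} {tag}")
```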
2504.03980 | Roberta Mota | Roberta Mota, Ehud Sharlin, Usman Alim | Virtual Reality Lensing for Surface Approximation in Feature-driven
Volume Visualization | null | null | null | null | cs.GR | http://creativecommons.org/licenses/by/4.0/ | We present a novel lens technique to support the identification of
heterogeneous features in direct volume rendering (DVR) visualizations. In
contrast to data-centric transfer function (TF) design, our image-driven
approach enables users to specify target features directly within the
visualization using deformable quadric surfaces. The lens leverages quadrics
for their expressive yet simple parametrization, enabling users to sculpt
feature approximations by composing multiple quadric lenses. By doing so, the
lens offers greater versatility than traditional rigid-shape lenses for
selecting and bringing into focus features with irregular geometry. We discuss
the lens visualization and interaction design, advocating for bimanual spatial
virtual reality (VR) input for reducing cognitive and physical strain. We also
report findings from a pilot qualitative evaluation with a domain specialist
using a public asteroid impact dataset. These insights not only shed light on
the benefits and pitfalls of using deformable lenses but also suggest
directions for future research.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 22:47:05 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Mota",
"Roberta",
""
],
[
"Sharlin",
"Ehud",
""
],
[
"Alim",
"Usman",
""
]
] | TITLE: Virtual Reality Lensing for Surface Approximation in Feature-driven
Volume Visualization
ABSTRACT: We present a novel lens technique to support the identification of
heterogeneous features in direct volume rendering (DVR) visualizations. In
contrast to data-centric transfer function (TF) design, our image-driven
approach enables users to specify target features directly within the
visualization using deformable quadric surfaces. The lens leverages quadrics
for their expressive yet simple parametrization, enabling users to sculpt
feature approximations by composing multiple quadric lenses. By doing so, the
lens offers greater versatility than traditional rigid-shape lenses for
selecting and bringing into focus features with irregular geometry. We discuss
the lens visualization and interaction design, advocating for bimanual spatial
virtual reality (VR) input for reducing cognitive and physical strain. We also
report findings from a pilot qualitative evaluation with a domain specialist
using a public asteroid impact dataset. These insights not only shed light on
the benefits and pitfalls of using deformable lenses but also suggest
directions for future research.
|
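A quadric lens, as in the record above, is attractive because "inside the lens" reduces to the sign of a quadratic form x^T A x + b^T x + c. The sketch below shows that membership test and how editing (A, b, c) deforms the lens; it is a geometric toy, not the paper's VR interaction code.

```python
import numpy as np

# A quadric is the zero set of x^T A x + b^T x + c; take "inside the lens"
# to be where the form is non-positive. Composing lenses = intersecting
# (or uniting) these regions.
def inside_quadric(points, A, b, c):
    q = np.einsum("ni,ij,nj->n", points, A, points) + points @ b + c
    return q <= 0

# Unit sphere at the origin: x^2 + y^2 + z^2 - 1 <= 0.
A, b, c = np.eye(3), np.zeros(3), -1.0
pts = np.array([[0.0, 0.0, 0.0], [2.0, 0.0, 0.0], [0.5, 0.5, 0.5]])
print(inside_quadric(pts, A, b, c))           # [ True False  True]

# Deforming the lens = editing A, b, c; stretching along x gives an ellipsoid.
A_stretched = np.diag([0.25, 1.0, 1.0])       # x semi-axis of length 2
print(inside_quadric(pts, A_stretched, b, c)) # (2,0,0) now on the boundary
```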
2504.03999 | Bryan Fichera | Karna A. Morey, Bryan T. Fichera, Baiqing Lv, Zonqi Shen, Nuh Gedik | Automated Polarization Rotation for Multi-Axis Rotational-Anisotropy
Second Harmonic Generation Experiments | 7 pages, 5 figures | Rev. Sci. Instrum. 96, 043002 (2025) | 10.1063/5.0233827 | null | cond-mat.mtrl-sci physics.ins-det | http://creativecommons.org/licenses/by/4.0/ | Rotational anisotropy second harmonic generation (RA-SHG) is a nonlinear
optical technique used to probe the symmetry of condensed matter systems.
Measuring the dependence of the SHG susceptibility on one or more external
parameters, notably strain, field, temperature, or time delay, is an extremely
powerful way to probe complex phases of quantum materials. Experimentally,
extracting maximal information about the SHG susceptibility tensor requires
measurements of S and P polarized input and output combinations, which
naturally involves the rotation of the polarizers during data collection. For
multi-axis experiments, this has proved challenging since polarization rotation
is typically done manually. Automating this process eliminates labor
constraints, reduces uncertainty due to low-frequency noise, and expands the
type of multi-axis datasets that can be collected; however, it is difficult due
to geometrical constraints within the setup. In this work, we design and
implement low-cost, high-fidelity automated polarization rotators for use in
multi-axis RA-SHG. These polarization rotators utilize an electrical slip ring
to transfer power to the rotating RA-SHG optical setup as well as a miniature
stepper motor to perform the polarization rotation. We demonstrate this
automated system in time-resolved RA-SHG measurements in the
non-centrosymmetric semiconductor GaAs. For the multi-axis measurements
described above, this automated system permits data averaging over longer
periods, vastly expedites data collection, and expands the setup measurement
capability. This ultimately opens new frontiers in probing quantum materials
using multiple tunable external parameters.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 00:07:10 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Morey",
"Karna A.",
""
],
[
"Fichera",
"Bryan T.",
""
],
[
"Lv",
"Baiqing",
""
],
[
"Shen",
"Zonqi",
""
],
[
"Gedik",
"Nuh",
""
]
] | TITLE: Automated Polarization Rotation for Multi-Axis Rotational-Anisotropy
Second Harmonic Generation Experiments
ABSTRACT: Rotational anisotropy second harmonic generation (RA-SHG) is a nonlinear
optical technique used to probe the symmetry of condensed matter systems.
Measuring the dependence of the SHG susceptibility on one or more external
parameters, notably strain, field, temperature, or time delay, is an extremely
powerful way to probe complex phases of quantum materials. Experimentally,
extracting maximal information about the SHG susceptibility tensor requires
measurements of S and P polarized input and output combinations, which
naturally involves the rotation of the polarizers during data collection. For
multi-axis experiments, this has proved challenging since polarization rotation
is typically done manually. Automating this process eliminates labor
constraints, reduces uncertainty due to low-frequency noise, and expands the
type of multi-axis datasets that can be collected; however, it is difficult due
to geometrical constraints within the setup. In this work, we design and
implement low-cost, high-fidelity automated polarization rotators for use in
multi-axis RA-SHG. These polarization rotators utilize an electrical slip ring
to transfer power to the rotating RA-SHG optical setup as well as a miniature
stepper motor to perform the polarization rotation. We demonstrate this
automated system in time-resolved RA-SHG measurements in the
non-centrosymmetric semiconductor GaAs. For the multi-axis measurements
described above, this automated system permits data averaging over longer
periods, vastly expedites data collection, and expands the setup measurement
capability. This ultimately opens new frontiers in probing quantum materials
using multiple tunable external parameters.
|
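The automation described above boils down to a nested acquisition loop over polarizer settings and sample angles. The hedged sketch below shows that loop shape only; move_polarizer, rotate_sample, and read_shg_intensity are placeholder functions, not a real instrument API.

```python
import time

def move_polarizer(which, angle_deg):
    time.sleep(0.01)                      # stand-in for motor settling

def rotate_sample(angle_deg):
    time.sleep(0.01)                      # stand-in for stage motion

def read_shg_intensity():
    return 0.0                            # stand-in for detector readout

POLARIZATIONS = {"S": 0.0, "P": 90.0}

def ra_shg_scan(n_angles=36, repeats=10):
    """Sweep the azimuthal angle for every S/P input-output combination,
    averaging repeated readings to suppress low-frequency drift."""
    data = {}
    for in_name, in_angle in POLARIZATIONS.items():
        for out_name, out_angle in POLARIZATIONS.items():
            move_polarizer("input", in_angle)
            move_polarizer("analyzer", out_angle)
            trace = []
            for k in range(n_angles):
                rotate_sample(360.0 * k / n_angles)
                trace.append(sum(read_shg_intensity()
                                 for _ in range(repeats)) / repeats)
            data[in_name + out_name] = trace
    return data

print(sorted(ra_shg_scan(n_angles=4)))   # ['PP', 'PS', 'SP', 'SS']
```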
2504.04008 | Adel Chehade | Adel Chehade, Edoardo Ragusa, Paolo Gastaldo, and Rodolfo Zunino | Tiny Neural Networks for Session-Level Traffic Classification | null | null | null | null | cs.NI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | This paper presents a system for session-level traffic classification on
endpoint devices, developed using a Hardware-aware Neural Architecture Search
(HW-NAS) framework. HW-NAS optimizes Convolutional Neural Network (CNN)
architectures by integrating hardware constraints, ensuring efficient
deployment on resource-constrained devices. Tested on the ISCX VPN-nonVPN
dataset, the method achieves 97.06% accuracy while reducing parameters by over
200 times and FLOPs by nearly 4 times compared to leading models. The proposed
model requires up to 15.5 times less RAM and 26.4 times fewer FLOPs than the
most hardware-demanding models. This system enhances compatibility across
network architectures and ensures efficient deployment on diverse hardware,
making it suitable for applications like firewall policy enforcement and
traffic monitoring.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 01:16:21 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Chehade",
"Adel",
""
],
[
"Ragusa",
"Edoardo",
""
],
[
"Gastaldo",
"Paolo",
""
],
[
"Zunino",
"Rodolfo",
""
]
] | TITLE: Tiny Neural Networks for Session-Level Traffic Classification
ABSTRACT: This paper presents a system for session-level traffic classification on
endpoint devices, developed using a Hardware-aware Neural Architecture Search
(HW-NAS) framework. HW-NAS optimizes Convolutional Neural Network (CNN)
architectures by integrating hardware constraints, ensuring efficient
deployment on resource-constrained devices. Tested on the ISCX VPN-nonVPN
dataset, the method achieves 97.06% accuracy while reducing parameters by over
200 times and FLOPs by nearly 4 times compared to leading models. The proposed
model requires up to 15.5 times less RAM and 26.4 times fewer FLOPs than the
most hardware-demanding models. This system enhances compatibility across
network architectures and ensures efficient deployment on diverse hardware,
making it suitable for applications like firewall policy enforcement and
traffic monitoring.
|
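HW-NAS, as used in the record above, needs cheap analytical estimates of a candidate network's footprint to enforce device budgets. Below is a minimal sketch of that constraint check with rough parameter and FLOP formulas; the layer specs and budget numbers are illustrative.

```python
# Estimate a candidate CNN's parameter count and FLOPs, then reject
# candidates that exceed a device budget (the hard-constraint side of HW-NAS).
def conv_cost(c_in, c_out, k, h, w):
    params = c_in * c_out * k * k + c_out     # weights + bias
    flops = 2 * params * h * w                # rough multiply-accumulate count
    return params, flops

def candidate_cost(layers, h=32, w=32):
    total_p = total_f = 0
    for c_in, c_out, k, stride in layers:
        h, w = h // stride, w // stride
        p, f = conv_cost(c_in, c_out, k, h, w)
        total_p, total_f = total_p + p, total_f + f
    return total_p, total_f

BUDGET = {"params": 50_000, "flops": 20_000_000}
candidate = [(1, 8, 3, 1), (8, 16, 3, 2), (16, 32, 3, 2)]

params, flops = candidate_cost(candidate)
ok = params <= BUDGET["params"] and flops <= BUDGET["flops"]
print(f"params={params:,} flops={flops:,} feasible={ok}")
```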
2504.04010 | Maksim Siniukov | Maksim Siniukov and Di Chang and Minh Tran and Hongkun Gong and
Ashutosh Chaubey and Mohammad Soleymani | DiTaiListener: Controllable High Fidelity Listener Video Generation with
Diffusion | Project page: https://havent-invented.github.io/DiTaiListener | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generating naturalistic and nuanced listener motions for extended
interactions remains an open problem. Existing methods often rely on
low-dimensional motion codes for facial behavior generation followed by
photorealistic rendering, limiting both visual fidelity and expressive
richness. To address these challenges, we introduce DiTaiListener, powered by a
video diffusion model with multimodal conditions. Our approach first generates
short segments of listener responses conditioned on the speaker's speech and
facial motions with DiTaiListener-Gen. It then refines the transitional frames
via DiTaiListener-Edit for a seamless transition. Specifically,
DiTaiListener-Gen adapts a Diffusion Transformer (DiT) for the task of listener
head portrait generation by introducing a Causal Temporal Multimodal Adapter
(CTM-Adapter) to process speakers' auditory and visual cues. CTM-Adapter
integrates speakers' input in a causal manner into the video generation process
to ensure temporally coherent listener responses. For long-form video
generation, we introduce DiTaiListener-Edit, a transition refinement
video-to-video diffusion model. The model fuses video segments into smooth and
continuous videos, ensuring temporal consistency in facial expressions and
image quality when merging short video segments produced by DiTaiListener-Gen.
Quantitatively, DiTaiListener achieves state-of-the-art performance on
benchmark datasets in both photorealism (+73.8% in FID on RealTalk) and motion
representation (+6.1% in FD metric on VICO) spaces. User studies confirm the
superior performance of DiTaiListener, with the model being the clear
preference in terms of feedback, diversity, and smoothness, outperforming
competitors by a significant margin.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 01:19:46 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Siniukov",
"Maksim",
""
],
[
"Chang",
"Di",
""
],
[
"Tran",
"Minh",
""
],
[
"Gong",
"Hongkun",
""
],
[
"Chaubey",
"Ashutosh",
""
],
[
"Soleymani",
"Mohammad",
""
]
] | TITLE: DiTaiListener: Controllable High Fidelity Listener Video Generation with
Diffusion
ABSTRACT: Generating naturalistic and nuanced listener motions for extended
interactions remains an open problem. Existing methods often rely on
low-dimensional motion codes for facial behavior generation followed by
photorealistic rendering, limiting both visual fidelity and expressive
richness. To address these challenges, we introduce DiTaiListener, powered by a
video diffusion model with multimodal conditions. Our approach first generates
short segments of listener responses conditioned on the speaker's speech and
facial motions with DiTaiListener-Gen. It then refines the transitional frames
via DiTaiListener-Edit for a seamless transition. Specifically,
DiTaiListener-Gen adapts a Diffusion Transformer (DiT) for the task of listener
head portrait generation by introducing a Causal Temporal Multimodal Adapter
(CTM-Adapter) to process speakers' auditory and visual cues. CTM-Adapter
integrates speakers' input in a causal manner into the video generation process
to ensure temporally coherent listener responses. For long-form video
generation, we introduce DiTaiListener-Edit, a transition refinement
video-to-video diffusion model. The model fuses video segments into smooth and
continuous videos, ensuring temporal consistency in facial expressions and
image quality when merging short video segments produced by DiTaiListener-Gen.
Quantitatively, DiTaiListener achieves state-of-the-art performance on
benchmark datasets in both photorealism (+73.8% in FID on RealTalk) and motion
representation (+6.1% in FD metric on VICO) spaces. User studies confirm the
superior performance of DiTaiListener, with the model being the clear
preference in terms of feedback, diversity, and smoothness, outperforming
competitors by a significant margin.
|
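The long-form generation problem DiTaiListener-Edit addresses is that independently generated segments disagree at their seams. The numpy sketch below shows the naive crossfade baseline such a learned refiner improves on; it does not implement the diffusion model itself, and the clip shapes are arbitrary.

```python
import numpy as np

def stitch(segments, overlap=4):
    """Linearly crossfade consecutive clips over `overlap` frames: a naive
    stand-in for a learned transition-refinement model."""
    out = segments[0]
    for seg in segments[1:]:
        w = np.linspace(0, 1, overlap)[:, None, None, None]   # blend weights
        blended = (1 - w) * out[-overlap:] + w * seg[:overlap]
        out = np.concatenate([out[:-overlap], blended, seg[overlap:]])
    return out

rng = np.random.default_rng(0)
segments = [rng.random((16, 8, 8, 3)) for _ in range(3)]   # (T, H, W, C) clips
video = stitch(segments)
print(video.shape)   # (40, 8, 8, 3): 16 + 12 + 12 frames after overlaps
```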
2504.04012 | Xiaolin Wang | Houzhang Fang, Xiaolin Wang, Zengyang Li, Lu Wang, Qingshan Li, Yi
Chang, Luxin Yan | Detection-Friendly Nonuniformity Correction: A Union Framework for
Infrared UAV Target Detection | Accepted by CVPR2025 | null | null | null | cs.CV eess.IV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Infrared unmanned aerial vehicle (UAV) images captured using thermal
detectors are often affected by temperature-dependent low-frequency
nonuniformity, which significantly reduces the contrast of the images.
Detecting UAV targets under nonuniform conditions is crucial in UAV
surveillance applications. Existing methods typically treat infrared
nonuniformity correction (NUC) as a preprocessing step for detection, which
leads to suboptimal performance. Balancing the two tasks while enhancing
detection beneficial information remains challenging. In this paper, we present
a detection-friendly union framework, termed UniCD, that simultaneously
addresses both infrared NUC and UAV target detection tasks in an end-to-end
manner. We first model NUC as the estimation of a small number of parameters,
jointly driven by priors and data, to generate detection-conducive images. Then,
we incorporate a new auxiliary loss with target mask supervision into the
backbone of the infrared UAV target detection network to strengthen target
features while suppressing the background. To better balance correction and
detection, we introduce a detection-guided self-supervised loss to reduce
feature discrepancies between the two tasks, thereby enhancing detection
robustness to varying nonuniformity levels. Additionally, we construct a new
benchmark composed of 50,000 infrared images in various nonuniformity types,
multi-scale UAV targets and rich backgrounds with target annotations, called
IRBFD. Extensive experiments on IRBFD demonstrate that our UniCD is a robust
union framework for NUC and UAV target detection while achieving real-time
processing capabilities. The dataset is available at
https://github.com/IVPLaboratory/UniCD.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 01:29:22 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Fang",
"Houzhang",
""
],
[
"Wang",
"Xiaolin",
""
],
[
"Li",
"Zengyang",
""
],
[
"Wang",
"Lu",
""
],
[
"Li",
"Qingshan",
""
],
[
"Chang",
"Yi",
""
],
[
"Yan",
"Luxin",
""
]
] | TITLE: Detection-Friendly Nonuniformity Correction: A Union Framework for
Infrared UAV Target Detection
ABSTRACT: Infrared unmanned aerial vehicle (UAV) images captured using thermal
detectors are often affected by temperature dependent low-frequency
nonuniformity, which significantly reduces the contrast of the images.
Detecting UAV targets under nonuniform conditions is crucial in UAV
surveillance applications. Existing methods typically treat infrared
nonuniformity correction (NUC) as a preprocessing step for detection, which
leads to suboptimal performance. Balancing the two tasks while enhancing
detection beneficial information remains challenging. In this paper, we present
a detection-friendly union framework, termed UniCD, that simultaneously
addresses both infrared NUC and UAV target detection tasks in an end-to-end
manner. We first model NUC as the estimation of a small number of parameters,
jointly driven by priors and data, to generate detection-conducive images. Then,
we incorporate a new auxiliary loss with target mask supervision into the
backbone of the infrared UAV target detection network to strengthen target
features while suppressing the background. To better balance correction and
detection, we introduce a detection-guided self-supervised loss to reduce
feature discrepancies between the two tasks, thereby enhancing detection
robustness to varying nonuniformity levels. Additionally, we construct a new
benchmark composed of 50,000 infrared images in various nonuniformity types,
multi-scale UAV targets and rich backgrounds with target annotations, called
IRBFD. Extensive experiments on IRBFD demonstrate that our UniCD is a robust
union framework for NUC and UAV target detection while achieving real-time
processing capabilities. The dataset is available at
https://github.com/IVPLaboratory/UniCD.
|
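Modeling NUC as the estimation of a few parameters, as the record above does, can be illustrated with a classic baseline: fit a low-order 2-D polynomial bias field by least squares and subtract it. The sketch below does exactly that on synthetic data; the paper estimates its parameters jointly with detection, which this stand-in omits.

```python
import numpy as np

rng = np.random.default_rng(0)
H, W = 64, 64
yy, xx = np.mgrid[0:H, 0:W] / np.array([H, W]).reshape(2, 1, 1)

clean = rng.random((H, W)) * 0.1
clean[30:34, 30:34] += 1.0                       # small "UAV target"
bias = 0.8 * xx + 0.5 * yy - 0.6 * xx * yy       # low-frequency nonuniformity
observed = clean + bias

# Design matrix for a 6-parameter polynomial: 1, x, y, xy, x^2, y^2.
X = np.stack([np.ones(H * W), xx.ravel(), yy.ravel(),
              (xx * yy).ravel(), (xx ** 2).ravel(), (yy ** 2).ravel()], axis=1)
coef, *_ = np.linalg.lstsq(X, observed.ravel(), rcond=None)
corrected = observed - (X @ coef).reshape(H, W)

# The fitted field tracks the injected bias up to a constant offset.
print(np.abs(bias - (X @ coef).reshape(H, W)).mean())
```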
2504.04015 | Xuechun Li | Xuechun Li, Shan Gao, Susu Xu | Multi-resolution Score-Based Variational Graphical Diffusion for Causal
Disaster System Modeling and Inference | null | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Complex systems with intricate causal dependencies challenge accurate
prediction. Effective modeling requires precise physical process
representation, integration of interdependent factors, and incorporation of
multi-resolution observational data. These systems manifest in both static
scenarios with instantaneous causal chains and temporal scenarios with evolving
dynamics, complicating modeling efforts. Current methods struggle to
simultaneously handle varying resolutions, capture physical relationships,
model causal dependencies, and incorporate temporal dynamics, especially with
inconsistently sampled data from diverse sources. We introduce Temporal-SVGDM:
Score-based Variational Graphical Diffusion Model for Multi-resolution
observations. Our framework constructs individual SDEs for each variable at its
native resolution, then couples these SDEs through a causal score mechanism
where parent nodes inform child nodes' evolution. This enables unified modeling
of both immediate causal effects in static scenarios and evolving dependencies
in temporal scenarios. In temporal models, state representations are processed
through a sequence prediction model to predict future states based on
historical patterns and causal relationships. Experiments on real-world
datasets demonstrate improved prediction accuracy and causal understanding
compared to existing methods, with robust performance under varying levels of
background knowledge. Our model exhibits graceful degradation across different
disaster types, successfully handling both static earthquake scenarios and
temporal hurricane and wildfire scenarios, while maintaining superior
performance even with limited data.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 01:36:23 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Li",
"Xuechun",
""
],
[
"Gao",
"Shan",
""
],
[
"Xu",
"Susu",
""
]
] | TITLE: Multi-resolution Score-Based Variational Graphical Diffusion for Causal
Disaster System Modeling and Inference
ABSTRACT: Complex systems with intricate causal dependencies challenge accurate
prediction. Effective modeling requires precise physical process
representation, integration of interdependent factors, and incorporation of
multi-resolution observational data. These systems manifest in both static
scenarios with instantaneous causal chains and temporal scenarios with evolving
dynamics, complicating modeling efforts. Current methods struggle to
simultaneously handle varying resolutions, capture physical relationships,
model causal dependencies, and incorporate temporal dynamics, especially with
inconsistently sampled data from diverse sources. We introduce Temporal-SVGDM:
Score-based Variational Graphical Diffusion Model for Multi-resolution
observations. Our framework constructs individual SDEs for each variable at its
native resolution, then couples these SDEs through a causal score mechanism
where parent nodes inform child nodes' evolution. This enables unified modeling
of both immediate causal effects in static scenarios and evolving dependencies
in temporal scenarios. In temporal models, state representations are processed
through a sequence prediction model to predict future states based on
historical patterns and causal relationships. Experiments on real-world
datasets demonstrate improved prediction accuracy and causal understanding
compared to existing methods, with robust performance under varying levels of
background knowledge. Our model exhibits graceful degradation across different
disaster types, successfully handling both static earthquake scenarios and
temporal hurricane and wildfire scenarios, while maintaining superior
performance even with limited data.
|
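The coupling idea in the record above (per-variable SDEs at native resolutions, with parents informing children's drift) can be sketched with plain Euler-Maruyama integration. The toy below hard-codes a mean-reverting parent on a coarse grid and a child whose drift tracks the parent's latest value; the dynamics and this causal-drift stand-in are assumptions, with no learning involved.

```python
import numpy as np

rng = np.random.default_rng(0)
T, dt_parent, dt_child = 10.0, 0.1, 0.02     # child sampled 5x finer

def euler_maruyama(x0, drift, dt, n, noise=0.2):
    xs = [x0]
    for k in range(n):
        xs.append(xs[-1] + drift(xs[-1], k) * dt
                  + noise * np.sqrt(dt) * rng.normal())
    return np.array(xs)

# Parent: mean-reverting process on its own coarse grid.
parent = euler_maruyama(1.0, lambda x, k: -0.5 * x, dt_parent,
                        int(T / dt_parent))

# Child: its drift pulls it toward the parent's most recent coarse value.
def child_drift(x, k):
    parent_now = parent[min(int(k * dt_child / dt_parent), len(parent) - 1)]
    return 2.0 * (parent_now - x)

child = euler_maruyama(0.0, child_drift, dt_child, int(T / dt_child))
print(parent.shape, child.shape)   # (101,) (501,)
```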
2504.04017 | Sudip Vhaduri | Andrea Gajic, Sudip Vhaduri | A Comprehensive Survey of Challenges and Opportunities of Few-Shot
Learning Across Multiple Domains | Under Review | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In a world where new domains are constantly discovered and machine learning
(ML) is applied to automate new tasks every day, challenges arise with the
number of samples available to train ML models. While the traditional ML
training relies heavily on data volume, finding a large dataset with a lot of
usable samples is not always easy, and often the process takes time. For
instance, when a new human-transmissible disease such as COVID-19 breaks out
and there is an immediate surge in demand for rapid diagnosis, followed by rapid
isolation of infected individuals from healthy ones to contain the spread,
there is an immediate need to create tools/automation using machine learning
models. At the early stage of an outbreak, it is not only difficult to obtain a
lot of samples, but also difficult to understand the details about the disease,
to process the data needed to train a traditional ML model. A solution for this
can be a few-shot learning approach. This paper presents challenges and
opportunities of few-shot approaches that vary across major domains, i.e.,
audio, image, text, and their combinations, with their strengths and
weaknesses. This detailed understanding can help to adopt appropriate
approaches applicable to different domains and applications.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 01:46:32 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Gajic",
"Andrea",
""
],
[
"Vhaduri",
"Sudip",
""
]
] | TITLE: A Comprehensive Survey of Challenges and Opportunities of Few-Shot
Learning Across Multiple Domains
ABSTRACT: In a world where new domains are constantly discovered and machine learning
(ML) is applied to automate new tasks every day, challenges arise with the
number of samples available to train ML models. While the traditional ML
training relies heavily on data volume, finding a large dataset with a lot of
usable samples is not always easy, and often the process takes time. For
instance, when a new human-transmissible disease such as COVID-19 breaks out
and there is an immediate surge in demand for rapid diagnosis, followed by rapid
isolation of infected individuals from healthy ones to contain the spread,
there is an immediate need to create tools/automation using machine learning
models. At the early stage of an outbreak, it is not only difficult to obtain a
lot of samples, but also difficult to understand the details about the disease,
to process the data needed to train a traditional ML model. A solution for this
can be a few-shot learning approach. This paper presents challenges and
opportunities of few-shot approaches that vary across major domains, i.e.,
audio, image, text, and their combinations, with their strengths and
weaknesses. This detailed understanding can help to adopt appropriate
approaches applicable to different domains and applications.
|
2504.04018 | Mingyu Yang | Mingyu Yang, Wentao Li, Zhitao Shen, Chuan Xiao and Wei Wang | ESG: Elastic Graphs for Range-Filtering Approximate k-Nearest Neighbor
Search | 14 pages | null | null | null | cs.DB | http://creativecommons.org/licenses/by/4.0/ | Range-filtering approximate $k$-nearest neighbor (RFAKNN) search takes as
input a vector and a numeric value, returning $k$ points from a database of $N$
high-dimensional points. The returned points must satisfy two criteria: their
numeric values must lie within the specified query range, and they must be
approximately the $k$ nearest points to the query vector. To strike a better
balance between query accuracy and efficiency, we propose novel methods that
relax the strict requirement for subranges to \textit{exactly} match the query
range. This elastic relaxation is based on a theoretical insight: allowing the
controlled inclusion of out-of-range points during the search does not
compromise the bounded complexity of the search process. Building on this
insight, we prove that our methods reduce the number of required subranges to
at most \textit{two}, eliminating the $O(\log N)$ query overhead inherent in
existing methods. Extensive experiments on real-world datasets demonstrate that
our proposed methods outperform state-of-the-art approaches, achieving
performance improvements of 1.5x to 6x while maintaining high accuracy.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 01:50:09 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Yang",
"Mingyu",
""
],
[
"Li",
"Wentao",
""
],
[
"Shen",
"Zhitao",
""
],
[
"Xiao",
"Chuan",
""
],
[
"Wang",
"Wei",
""
]
] | TITLE: ESG: Elastic Graphs for Range-Filtering Approximate k-Nearest Neighbor
Search
ABSTRACT: Range-filtering approximate $k$-nearest neighbor (RFAKNN) search takes as
input a vector and a numeric value, returning $k$ points from a database of $N$
high-dimensional points. The returned points must satisfy two criteria: their
numeric values must lie within the specified query range, and they must be
approximately the $k$ nearest points to the query vector. To strike a better
balance between query accuracy and efficiency, we propose novel methods that
relax the strict requirement for subranges to \textit{exactly} match the query
range. This elastic relaxation is based on a theoretical insight: allowing the
controlled inclusion of out-of-range points during the search does not
compromise the bounded complexity of the search process. Building on this
insight, we prove that our methods reduce the number of required subranges to
at most \textit{two}, eliminating the $O(\log N)$ query overhead inherent in
existing methods. Extensive experiments on real-world datasets demonstrate that
our proposed methods outperform state-of-the-art approaches, achieving
performance improvements of 1.5x to 6x while maintaining high accuracy.
|
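A useful ground-truth reference for RFAKNN methods like ESG is the exact brute-force answer: filter by the numeric attribute, then rank the survivors by distance. The sketch below implements that baseline, which graph indexes aim to match at far lower cost; the sizes and ranges are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, k = 10_000, 16, 5
vectors = rng.normal(size=(N, D))
values = rng.uniform(0, 100, size=N)          # per-point numeric attribute

def rfaknn_bruteforce(query, lo, hi):
    """Exact range-filtered kNN: the recall reference for approximate
    indexes such as ESG."""
    mask = (values >= lo) & (values <= hi)    # range filter
    idx = np.flatnonzero(mask)
    d = np.linalg.norm(vectors[idx] - query, axis=1)
    return idx[np.argsort(d)[:k]]

query = rng.normal(size=D)
print(rfaknn_bruteforce(query, lo=20, hi=40))
```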
2504.04025 | Nghia (Andy) Nguyen | Daniel Rivera, Jacob Huddin, Alexander Banerjee, Rongzhen Zhang,
Brenda Mai, Hanadi El Achi, Jacob Armstrong, Amer Wahed, and Andy Nguyen | Artificial intelligence application in lymphoma diagnosis: from
Convolutional Neural Network to Vision Transformer | 14 pages, 6 figures, 1 table | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, vision transformers were shown to be capable of outperforming
convolutional neural networks when pretrained on sufficiently large datasets.
Vision transformer models show good accuracy on large scale datasets, with
features of multi-modal training. Due to their promising feature detection, we
aim to explore vision transformer models for diagnosis of anaplastic large cell
lymphoma versus classical Hodgkin lymphoma using pathology whole slide images
of H&E slides. We compared the classification performance of the vision
transformer to our previously designed convolutional neural network on the same
dataset. The dataset includes whole slide images of H&E slides for 20 cases,
including 10 cases in each diagnostic category. From each whole slide image, 60
image patches of size 100 by 100 pixels at 20x magnification were
extracted, yielding 1200 image patches, of which 90 percent were used for
training, 9 percent for validation, and 10 percent for testing. The test
results from the convolutional neural network model had previously shown an
excellent diagnostic accuracy of 100 percent. The test results from the vision
transformer model also showed a comparable accuracy at 100 percent. To the best
of the authors' knowledge, this is the first direct comparison of predictive
performance between a vision transformer model and a convolutional neural
network model using the same dataset of lymphoma. Overall, convolutional neural
network has a more mature architecture than vision transformer and is usually
the best choice when large scale pretraining is not an available option.
Nevertheless, our current study shows comparable and excellent accuracy of
vision transformer compared to that of convolutional neural network even with a
relatively small dataset of anaplastic large cell lymphoma and classical
Hodgkin lymphoma.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 02:33:34 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Rivera",
"Daniel",
""
],
[
"Huddin",
"Jacob",
""
],
[
"Banerjee",
"Alexander",
""
],
[
"Zhang",
"Rongzhen",
""
],
[
"Mai",
"Brenda",
""
],
[
"Achi",
"Hanadi El",
""
],
[
"Armstrong",
"Jacob",
""
],
[
"Wahed",
"Amer",
""
],
[
"Nguyen",
"Andy",
""
]
] | TITLE: Artificial intelligence application in lymphoma diagnosis: from
Convolutional Neural Network to Vision Transformer
ABSTRACT: Recently, vision transformers were shown to be capable of outperforming
convolutional neural networks when pretrained on sufficiently large datasets.
Vision transformer models show good accuracy on large scale datasets, with
features of multi-modal training. Due to their promising feature detection, we
aim to explore vision transformer models for diagnosis of anaplastic large cell
lymphoma versus classical Hodgkin lymphoma using pathology whole slide images
of H&E slides. We compared the classification performance of the vision
transformer to our previously designed convolutional neural network on the same
dataset. The dataset includes whole slide images of H&E slides for 20 cases,
including 10 cases in each diagnostic category. From each whole slide image, 60
image patches of size 100 by 100 pixels at 20x magnification were
extracted, yielding 1200 image patches, of which 90 percent were used for
training, 9 percent for validation, and 10 percent for testing. The test
results from the convolutional neural network model had previously shown an
excellent diagnostic accuracy of 100 percent. The test results from the vision
transformer model also showed a comparable accuracy at 100 percent. To the best
of the authors' knowledge, this is the first direct comparison of predictive
performance between a vision transformer model and a convolutional neural
network model using the same dataset of lymphoma. Overall, convolutional neural
network has a more mature architecture than vision transformer and is usually
the best choice when large scale pretraining is not an available option.
Nevertheless, our current study shows comparable and excellent accuracy of
vision transformer compared to that of convolutional neural network even with a
relatively small dataset of anaplastic large cell lymphoma and classical
Hodgkin lymphoma.
|
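The patching step described in the record above is mechanical enough to sketch directly: tile each whole slide image into non-overlapping 100-by-100 patches and shuffle before splitting into training, validation, and test pools. The stand-in below uses a random array in place of a real slide and reproduces the 60-patches-per-image count; the slide dimensions are assumed.

```python
import numpy as np

rng = np.random.default_rng(0)
slide = rng.integers(0, 256, size=(600, 1000, 3), dtype=np.uint8)

# Non-overlapping 100x100 tiles, shuffled before the train/val/test split.
patches = np.array([slide[r:r + 100, c:c + 100]
                    for r in range(0, slide.shape[0], 100)
                    for c in range(0, slide.shape[1], 100)])
patches = rng.permutation(patches)
print(patches.shape)   # (60, 100, 100, 3): 60 patches per slide, as in the study
```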
2504.04030 | Wasi Uddin Ahmad | Wasi Uddin Ahmad, Aleksander Ficek, Mehrzad Samadi, Jocelyn Huang,
Vahid Noroozi, Somshubra Majumdar, Boris Ginsburg | OpenCodeInstruct: A Large-scale Instruction Tuning Dataset for Code LLMs | Work in progress | null | null | null | cs.SE cs.CL | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have transformed software development by
enabling code generation, automated debugging, and complex reasoning. However,
their continued advancement is constrained by the scarcity of high-quality,
publicly available supervised fine-tuning (SFT) datasets tailored for coding
tasks. To bridge this gap, we introduce OpenCodeInstruct, the largest
open-access instruction tuning dataset, comprising 5 million diverse samples.
Each sample includes a programming question, solution, test cases, execution
feedback, and LLM-generated quality assessments. We fine-tune various base
models, including LLaMA and Qwen, across multiple scales (1B+, 3B+, and 7B+)
using our dataset. Comprehensive evaluations on popular benchmarks (HumanEval,
MBPP, LiveCodeBench, and BigCodeBench) demonstrate substantial performance
improvements achieved by SFT with OpenCodeInstruct. We also present a detailed
methodology encompassing seed data curation, synthetic instruction and solution
generation, and filtering.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 02:52:16 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Ahmad",
"Wasi Uddin",
""
],
[
"Ficek",
"Aleksander",
""
],
[
"Samadi",
"Mehrzad",
""
],
[
"Huang",
"Jocelyn",
""
],
[
"Noroozi",
"Vahid",
""
],
[
"Majumdar",
"Somshubra",
""
],
[
"Ginsburg",
"Boris",
""
]
] | TITLE: OpenCodeInstruct: A Large-scale Instruction Tuning Dataset for Code LLMs
ABSTRACT: Large Language Models (LLMs) have transformed software development by
enabling code generation, automated debugging, and complex reasoning. However,
their continued advancement is constrained by the scarcity of high-quality,
publicly available supervised fine-tuning (SFT) datasets tailored for coding
tasks. To bridge this gap, we introduce OpenCodeInstruct, the largest
open-access instruction tuning dataset, comprising 5 million diverse samples.
Each sample includes a programming question, solution, test cases, execution
feedback, and LLM-generated quality assessments. We fine-tune various base
models, including LLaMA and Qwen, across multiple scales (1B+, 3B+, and 7B+)
using our dataset. Comprehensive evaluations on popular benchmarks (HumanEval,
MBPP, LiveCodeBench, and BigCodeBench) demonstrate substantial performance
improvements achieved by SFT with OpenCodeInstruct. We also present a detailed
methodology encompassing seed data curation, synthetic instruction and solution
generation, and filtering.
|
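The per-sample structure the OpenCodeInstruct record lists (question, solution, tests, execution feedback, quality score) suggests a simple filtering pass before SFT. The sketch below encodes that schema and one plausible filter; the field names and threshold are illustrative, not the dataset's actual columns.

```python
from dataclasses import dataclass

@dataclass
class CodeInstructSample:
    question: str
    solution: str
    test_cases: list[str]
    execution_feedback: str
    quality_score: float        # LLM-generated quality assessment

def keep_for_sft(sample: CodeInstructSample, min_quality=0.8) -> bool:
    """Toy filtering pass: keep samples whose tests passed and whose
    LLM-judged quality clears a threshold."""
    return (sample.execution_feedback == "pass"
            and sample.quality_score >= min_quality)

s = CodeInstructSample(
    question="Return the sum of a list.",
    solution="def total(xs):\n    return sum(xs)",
    test_cases=["assert total([1, 2, 3]) == 6"],
    execution_feedback="pass",
    quality_score=0.92,
)
print(keep_for_sft(s))   # True
```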
2504.04033 | Ehsanul Kabir | Ehsanul Kabir, Lucas Craig, Shagufta Mehnaz | Disparate Privacy Vulnerability: Targeted Attribute Inference Attacks
and Defenses | Selected for publication at 34th USENIX Security Symposium | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As machine learning (ML) technologies become more prevalent in
privacy-sensitive areas like healthcare and finance, eventually incorporating
sensitive information in building data-driven algorithms, it is vital to
scrutinize whether these data face any privacy leakage risks. One potential
threat arises from an adversary querying trained models using the public,
non-sensitive attributes of entities in the training data to infer their
private, sensitive attributes, a technique known as the attribute inference
attack. This attack is particularly deceptive because, while it may perform
poorly in predicting sensitive attributes across the entire dataset, it excels
at predicting the sensitive attributes of records from a few vulnerable groups,
a phenomenon known as disparate vulnerability. This paper illustrates that an
adversary can take advantage of this disparity to carry out a series of new
attacks, revealing a threat level well beyond what was previously recognized. We first
develop a novel inference attack called the disparity inference attack, which
targets the identification of high-risk groups within the dataset. We then
introduce two targeted variations of the attribute inference attack that can
identify and exploit a vulnerable subset of the training data, marking the
first instances of targeted attacks in this category, achieving significantly
higher accuracy than untargeted versions. We are also the first to introduce a
novel and effective disparity mitigation technique that simultaneously
preserves model performance and prevents any risk of targeted attacks.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 02:58:37 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Kabir",
"Ehsanul",
""
],
[
"Craig",
"Lucas",
""
],
[
"Mehnaz",
"Shagufta",
""
]
] | TITLE: Disparate Privacy Vulnerability: Targeted Attribute Inference Attacks
and Defenses
ABSTRACT: As machine learning (ML) technologies become more prevalent in
privacy-sensitive areas like healthcare and finance, eventually incorporating
sensitive information in building data-driven algorithms, it is vital to
scrutinize whether these data face any privacy leakage risks. One potential
threat arises from an adversary querying trained models using the public,
non-sensitive attributes of entities in the training data to infer their
private, sensitive attributes, a technique known as the attribute inference
attack. This attack is particularly deceptive because, while it may perform
poorly in predicting sensitive attributes across the entire dataset, it excels
at predicting the sensitive attributes of records from a few vulnerable groups,
a phenomenon known as disparate vulnerability. This paper illustrates that an
adversary can take advantage of this disparity to carry out a series of new
attacks, revealing a threat level well beyond what was previously recognized. We first
develop a novel inference attack called the disparity inference attack, which
targets the identification of high-risk groups within the dataset. We then
introduce two targeted variations of the attribute inference attack that can
identify and exploit a vulnerable subset of the training data, marking the
first instances of targeted attacks in this category, achieving significantly
higher accuracy than untargeted versions. We are also the first to introduce a
novel and effective disparity mitigation technique that simultaneously
preserves model performance and prevents any risk of targeted attacks.
|
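The attribute inference attack described above has a compact black-box form: try every candidate sensitive value alongside the known public attributes and label, and keep the value the model is most confident about. The sketch below runs that untargeted baseline on synthetic data; it does not implement the paper's disparity or targeted variants.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 2000
public = rng.normal(size=(n, 3))
sensitive = rng.integers(0, 2, size=n)                 # binary sensitive attr
label = ((public[:, 0] + 1.5 * sensitive
          + rng.normal(0, 0.5, n)) > 1).astype(int)

model = RandomForestClassifier(random_state=0).fit(
    np.column_stack([public, sensitive]), label)

def infer_sensitive(pub_row, true_label):
    """Query the model with each candidate sensitive value and pick the
    one assigning the highest probability to the known label."""
    probs = [model.predict_proba(
                 np.append(pub_row, s).reshape(1, -1))[0][true_label]
             for s in (0, 1)]
    return int(np.argmax(probs))

guesses = [infer_sensitive(public[i], label[i]) for i in range(300)]
print((np.array(guesses) == sensitive[:300]).mean())   # well above 0.5
```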
2504.04034 | Dianshuo Li | Dianshuo Li, Li Chen, Yunxiang Cao, Kai Zhu, Jun Cheng | UCS: A Universal Model for Curvilinear Structure Segmentation | 11 pages, 9 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Curvilinear structure segmentation (CSS) is vital in various domains,
including medical imaging, landscape analysis, industrial surface inspection,
and plant analysis. While existing methods achieve high performance within
specific domains, their generalizability is limited. On the other hand,
large-scale models such as Segment Anything Model (SAM) exhibit strong
generalization but are not optimized for curvilinear structures. Existing
adaptations of SAM primarily focus on general object segmentation and lack
specialized design for CSS tasks. To bridge this gap, we propose the Universal
Curvilinear structure Segmentation (\textit{UCS}) model, which adapts SAM to
CSS tasks while enhancing its generalization. \textit{UCS} features a novel
encoder architecture integrating a pretrained SAM encoder with two innovations:
a Sparse Adapter, strategically inserted to inherit the pre-trained SAM
encoder's generalization capability while minimizing the number of fine-tuning
parameters, and a Prompt Generation module, which leverages Fast Fourier
Transform with a high-pass filter to generate curve-specific prompts.
Furthermore, the \textit{UCS} incorporates a mask decoder that eliminates
reliance on manual interaction through a dual-compression module: a
Hierarchical Feature Compression module, which aggregates the outputs of the
sampled encoder to enhance detail preservation, and a Guidance Feature
Compression module, which extracts and compresses image-driven guidance
features. Evaluated on a comprehensive multi-domain dataset, including an
in-house dataset covering eight natural curvilinear structures, \textit{UCS}
demonstrates state-of-the-art generalization and open-set segmentation
performance across medical, engineering, natural, and plant imagery,
establishing a new benchmark for universal CSS.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 03:05:04 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Li",
"Dianshuo",
""
],
[
"Chen",
"Li",
""
],
[
"Cao",
"Yunxiang",
""
],
[
"Zhu",
"Kai",
""
],
[
"Cheng",
"Jun",
""
]
] | TITLE: UCS: A Universal Model for Curvilinear Structure Segmentation
ABSTRACT: Curvilinear structure segmentation (CSS) is vital in various domains,
including medical imaging, landscape analysis, industrial surface inspection,
and plant analysis. While existing methods achieve high performance within
specific domains, their generalizability is limited. On the other hand,
large-scale models such as Segment Anything Model (SAM) exhibit strong
generalization but are not optimized for curvilinear structures. Existing
adaptations of SAM primarily focus on general object segmentation and lack
specialized design for CSS tasks. To bridge this gap, we propose the Universal
Curvilinear structure Segmentation (\textit{UCS}) model, which adapts SAM to
CSS tasks while enhancing its generalization. \textit{UCS} features a novel
encoder architecture integrating a pretrained SAM encoder with two innovations:
a Sparse Adapter, strategically inserted to inherit the pre-trained SAM
encoder's generalization capability while minimizing the number of fine-tuning
parameters, and a Prompt Generation module, which leverages Fast Fourier
Transform with a high-pass filter to generate curve-specific prompts.
Furthermore, \textit{UCS} incorporates a mask decoder that eliminates
reliance on manual interaction through a dual-compression module: a
Hierarchical Feature Compression module, which aggregates the outputs of the
sampled encoder to enhance detail preservation, and a Guidance Feature
Compression module, which extracts and compresses image-driven guidance
features. Evaluated on a comprehensive multi-domain dataset, including an
in-house dataset covering eight natural curvilinear structures, \textit{UCS}
demonstrates state-of-the-art generalization and open-set segmentation
performance across medical, engineering, natural, and plant imagery,
establishing a new benchmark for universal CSS.
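As a concrete illustration of the FFT-with-high-pass-filter idea behind the Prompt Generation module, the PyTorch sketch below suppresses low frequencies so that thin, curve-like structures dominate the result. The circular cutoff mask and the `cutoff_ratio` value are assumptions, and the step that turns this map into actual prompt embeddings is omitted.

```python
# Hedged sketch of FFT high-pass filtering for curvilinear structure emphasis;
# not the paper's exact Prompt Generation design.
import torch

def highpass_prompt_map(images: torch.Tensor, cutoff_ratio: float = 0.1) -> torch.Tensor:
    """images: (B, C, H, W) real tensor. Returns a high-frequency map in which
    low frequencies near the spectrum center have been zeroed out."""
    B, C, H, W = images.shape
    freq = torch.fft.fftshift(torch.fft.fft2(images), dim=(-2, -1))

    # Circular high-pass mask: zero a central disk of low frequencies.
    yy, xx = torch.meshgrid(torch.arange(H, device=images.device),
                            torch.arange(W, device=images.device), indexing="ij")
    dist = torch.sqrt((yy - H / 2) ** 2 + (xx - W / 2) ** 2)
    mask = (dist > cutoff_ratio * min(H, W)).to(images.dtype)

    filtered = freq * mask  # mask broadcasts over batch and channel dims
    return torch.fft.ifft2(torch.fft.ifftshift(filtered, dim=(-2, -1))).real
```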
|
2504.04038 | Ye Kyaw Thu | Kaung Lwin Thant, Kwankamol Nongpong, Ye Kyaw Thu, Thura Aung, Khaing
Hsu Wai, Thazin Myint Oo | myNER: Contextualized Burmese Named Entity Recognition with
Bidirectional LSTM and fastText Embeddings via Joint Training with POS
Tagging | 7 pages, 2 figures, 5 tables, to be published in the proceedings of
IEEE ICCI-2025 | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Named Entity Recognition (NER) involves identifying and categorizing named
entities within textual data. Despite its significance, NER research has often
overlooked low-resource languages like Myanmar (Burmese), primarily due to the
lack of publicly available annotated datasets. To address this, we introduce
myNER, a novel word-level NER corpus featuring a 7-tag annotation scheme,
enriched with Part-of-Speech (POS) tagging to provide additional syntactic
information. Alongside the corpus, we conduct a comprehensive evaluation of NER
models, including Conditional Random Fields (CRF), Bidirectional LSTM
(BiLSTM)-CRF, and their combinations with fastText embeddings in different
settings. Our experiments reveal the effectiveness of contextualized word
embeddings and the impact of joint training with POS tagging, demonstrating
significant performance improvements across models. The traditional CRF
joint-task model with fastText embeddings as a feature achieved the best
result, with 0.9818 accuracy, a 0.9811 weighted F1 score, and a 0.7429 macro
F1 score. BiLSTM-CRF with fine-tuned fastText embeddings achieved the next-best
result, with 0.9791 accuracy, a 0.9776 weighted F1 score, and a 0.7395 macro F1
score.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 03:13:33 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Thant",
"Kaung Lwin",
""
],
[
"Nongpong",
"Kwankamol",
""
],
[
"Thu",
"Ye Kyaw",
""
],
[
"Aung",
"Thura",
""
],
[
"Wai",
"Khaing Hsu",
""
],
[
"Oo",
"Thazin Myint",
""
]
] | TITLE: myNER: Contextualized Burmese Named Entity Recognition with
Bidirectional LSTM and fastText Embeddings via Joint Training with POS
Tagging
ABSTRACT: Named Entity Recognition (NER) involves identifying and categorizing named
entities within textual data. Despite its significance, NER research has often
overlooked low-resource languages like Myanmar (Burmese), primarily due to the
lack of publicly available annotated datasets. To address this, we introduce
myNER, a novel word-level NER corpus featuring a 7-tag annotation scheme,
enriched with Part-of-Speech (POS) tagging to provide additional syntactic
information. Alongside the corpus, we conduct a comprehensive evaluation of NER
models, including Conditional Random Fields (CRF), Bidirectional LSTM
(BiLSTM)-CRF, and their combinations with fastText embeddings in different
settings. Our experiments reveal the effectiveness of contextualized word
embeddings and the impact of joint training with POS tagging, demonstrating
significant performance improvements across models. The traditional CRF
joint-task model with fastText embeddings as a feature achieved the best
result, with 0.9818 accuracy, a 0.9811 weighted F1 score, and a 0.7429 macro
F1 score. BiLSTM-CRF with fine-tuned fastText embeddings achieved the next-best
result, with 0.9791 accuracy, a 0.9776 weighted F1 score, and a 0.7395 macro F1
score.
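A minimal PyTorch sketch of the joint-training idea follows: a single BiLSTM encoder over (assumed pre-computed) fastText word vectors feeds separate NER and POS heads whose losses are summed, so the POS signal acts as auxiliary supervision for NER. The CRF decoding layer, the tag inventory sizes, and the `pos_weight` mixing factor are omissions or assumptions, not the paper's exact configuration.

```python
# Illustrative joint NER+POS tagger; CRF layer omitted for brevity.
import torch
import torch.nn as nn

class JointBiLSTMTagger(nn.Module):
    def __init__(self, emb_dim=300, hidden=256, n_ner_tags=7, n_pos_tags=16):
        super().__init__()
        self.encoder = nn.LSTM(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.ner_head = nn.Linear(2 * hidden, n_ner_tags)  # 7-tag NER scheme
        self.pos_head = nn.Linear(2 * hidden, n_pos_tags)  # auxiliary POS task

    def forward(self, fasttext_vecs):               # (B, T, emb_dim)
        h, _ = self.encoder(fasttext_vecs)          # (B, T, 2 * hidden)
        return self.ner_head(h), self.pos_head(h)   # per-token tag scores

def joint_loss(ner_logits, pos_logits, ner_gold, pos_gold, pos_weight=0.5):
    # Sum of per-task cross-entropies; the mixing weight is an assumption.
    ce = nn.CrossEntropyLoss()
    return (ce(ner_logits.flatten(0, 1), ner_gold.flatten())
            + pos_weight * ce(pos_logits.flatten(0, 1), pos_gold.flatten()))
```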
|
2504.04050 | Kang Xue | Kang Xue, Ming Dong, Xinhui Tu, Tingting He | FISH-Tuning: Enhancing PEFT Methods with Fisher Information | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | The rapid growth in the parameter size of Large Language Models (LLMs) has
led to the development of Parameter-Efficient Fine-Tuning (PEFT) methods to
alleviate the computational costs of fine-tuning. Among these, Fisher Induced
Sparse uncHanging (FISH) Mask is a selection-based PEFT technique that
identifies a subset of pre-trained parameters for fine-tuning based on
approximate Fisher information. However, the integration of FISH Mask with
other PEFT methods, such as LoRA and Adapters, remains underexplored. In this
paper, we propose FISH-Tuning, a novel approach that incorporates FISH Mask
into addition-based and reparameterization-based PEFT methods, including LoRA,
Adapters, and their variants. By leveraging Fisher information to select
critical parameters within these methods, FISH-Tuning achieves superior
performance without additional memory overhead or inference latency.
Experimental results across various datasets and pre-trained models demonstrate
that FISH-Tuning consistently outperforms the vanilla PEFT methods with the
same proportion of trainable parameters.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 04:05:55 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Xue",
"Kang",
""
],
[
"Dong",
"Ming",
""
],
[
"Tu",
"Xinhui",
""
],
[
"He",
"Tingting",
""
]
] | TITLE: FISH-Tuning: Enhancing PEFT Methods with Fisher Information
ABSTRACT: The rapid growth in the parameter size of Large Language Models (LLMs) has
led to the development of Parameter-Efficient Fine-Tuning (PEFT) methods to
alleviate the computational costs of fine-tuning. Among these, Fisher Induced
Sparse uncHanging (FISH) Mask is a selection-based PEFT technique that
identifies a subset of pre-trained parameters for fine-tuning based on
approximate Fisher information. However, the integration of FISH Mask with
other PEFT methods, such as LoRA and Adapters, remains underexplored. In this
paper, we propose FISH-Tuning, a novel approach that incorporates FISH Mask
into addition-based and reparameterization-based PEFT methods, including LoRA,
Adapters, and their variants. By leveraging Fisher information to select
critical parameters within these methods, FISH-Tuning achieves superior
performance without additional memory overhead or inference latency.
Experimental results across various datasets and pre-trained models demonstrate
that FISH-Tuning consistently outperforms the vanilla PEFT methods with the
same proportion of trainable parameters.
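The FISH Mask selection step can be sketched as follows: accumulate squared gradients over a few batches as an empirical Fisher approximation, keep the top fraction of parameters by score, and zero the gradients of everything else before each optimizer step. How FISH-Tuning weaves this mask into LoRA and Adapter parameters is the paper's contribution and is not reproduced here; `loss_fn` and `batches` are placeholders.

```python
# Rough sketch of FISH-Mask parameter selection (empirical Fisher = sum of
# squared gradients); not the paper's exact FISH-Tuning integration.
import torch

def fish_masks(model, loss_fn, batches, keep_ratio=0.01):
    # Accumulate squared gradients as an approximate Fisher score per parameter.
    fisher = {n: torch.zeros_like(p) for n, p in model.named_parameters()}
    for batch in batches:
        model.zero_grad()
        loss_fn(model, batch).backward()
        for n, p in model.named_parameters():
            if p.grad is not None:
                fisher[n] += p.grad.detach() ** 2

    # Keep only the top keep_ratio fraction of parameters by Fisher score.
    scores = torch.cat([f.flatten() for f in fisher.values()])
    k = max(1, int(keep_ratio * scores.numel()))
    threshold = torch.topk(scores, k).values.min()
    return {n: (f >= threshold).float() for n, f in fisher.items()}

@torch.no_grad()
def apply_masks(model, masks):
    # Call after backward() and before optimizer.step() so that only the
    # selected (high-Fisher) parameters receive updates.
    for name, p in model.named_parameters():
        if p.grad is not None:
            p.grad.mul_(masks[name])
```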
|