id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2504.04322 | Pei Xu | Pei Xu, Yulei Sui, Mark Staples | Towards Source Mapping for Zero-Knowledge Smart Contracts: Design and
Preliminary Evaluation | null | null | null | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | Debugging and auditing zero-knowledge-compatible smart contracts remains a
significant challenge due to the lack of source mapping in compilers such as
zkSolc. In this work, we present a preliminary source mapping framework that
establishes traceability between Solidity source code, LLVM IR, and zkEVM
bytecode within the zkSolc compilation pipeline. Our approach addresses the
traceability challenges introduced by non-linear transformations and
proof-friendly optimizations in zero-knowledge compilation. To improve the
reliability of mappings, we incorporate lightweight consistency checks based on
static analysis and structural validation. We evaluate the framework on a
dataset of 50 benchmark contracts and 500 real-world zkSync contracts,
observing a mapping accuracy of approximately 97.2% for standard Solidity
constructs. Expected limitations arise in complex scenarios such as inline
assembly and deep inheritance hierarchies. The measured compilation overhead
remains modest, at approximately 8.6%. Our initial results suggest that source
mapping support in zero-knowledge compilation pipelines is feasible and can
benefit debugging, auditing, and development workflows. We hope that this work
serves as a foundation for further research and tool development aimed at
improving developer experience in zk-Rollup environments.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 01:42:07 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 10:31:46 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Xu",
"Pei",
""
],
[
"Sui",
"Yulei",
""
],
[
"Staples",
"Mark",
""
]
] | TITLE: Towards Source Mapping for Zero-Knowledge Smart Contracts: Design and
Preliminary Evaluation
ABSTRACT: Debugging and auditing zero-knowledge-compatible smart contracts remains a
significant challenge due to the lack of source mapping in compilers such as
zkSolc. In this work, we present a preliminary source mapping framework that
establishes traceability between Solidity source code, LLVM IR, and zkEVM
bytecode within the zkSolc compilation pipeline. Our approach addresses the
traceability challenges introduced by non-linear transformations and
proof-friendly optimizations in zero-knowledge compilation. To improve the
reliability of mappings, we incorporate lightweight consistency checks based on
static analysis and structural validation. We evaluate the framework on a
dataset of 50 benchmark contracts and 500 real-world zkSync contracts,
observing a mapping accuracy of approximately 97.2% for standard Solidity
constructs. Expected limitations arise in complex scenarios such as inline
assembly and deep inheritance hierarchies. The measured compilation overhead
remains modest, at approximately 8.6%. Our initial results suggest that source
mapping support in zero-knowledge compilation pipelines is feasible and can
benefit debugging, auditing, and development workflows. We hope that this work
serves as a foundation for further research and tool development aimed at
improving developer experience in zk-Rollup environments.
|
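The abstract above describes the framework but no code accompanies this record. As a purely illustrative sketch, the traceability record it describes (Solidity source range, the LLVM IR instructions lowered from it, and the resulting zkEVM bytecode offsets) might look like the following Python, with a lightweight structural check in the spirit of the validation the authors mention. Every type, field, and rule here is hypothetical, not the paper's actual schema.

```python
from dataclasses import dataclass

@dataclass
class MappingEntry:
    # Half-open character range in the Solidity source file (assumed layout).
    source_file: str
    source_start: int
    source_end: int
    # Identifiers of the LLVM IR instructions lowered from this range.
    ir_ids: list[int]
    # Byte offsets of the corresponding zkEVM bytecode instructions.
    bytecode_offsets: list[int]

def structurally_consistent(entries: list[MappingEntry]) -> bool:
    """Toy structural validation: every entry must map a non-empty source
    range and at least one IR instruction, and no two entries may claim the
    same bytecode offset (each instruction has one source origin)."""
    seen: set[int] = set()
    for e in entries:
        if e.source_end <= e.source_start or not e.ir_ids:
            return False
        for off in e.bytecode_offsets:
            if off in seen:
                return False
            seen.add(off)
    return True
```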
2504.04401 | Zhou Youyang | Zhou Youyang, Shi Wenren, Xie Yun, Zhao Bianli, Luo Xinyu, Yao
Mingjie, Zhang Rui, Tan Xin, Li Kui, Yang Hao, Liu Qi, Nan Yinggang, Bao Jie,
Zhang Yuping, Shu Feng, Li Shaopan and Zhang Xiaoshi | Super-Resolution Coherent Diffractive Imaging via Tilted-Incidence
Multi-Rotation-Angle Fusion Ptychography | 18 pages, 6 figures | null | null | null | physics.optics | http://creativecommons.org/licenses/by/4.0/ | Coherent diffractive imaging (CDI) enables lensless imaging with experimental
simplicity and a flexible field of view, yet its resolution is fundamentally
constrained by the Abbe diffraction limit. To overcome this limitation, we
introduce a novel Tilted-Incidence Multi-Rotation-Angle Fusion Ptychography
technique. This approach leverages a tilted-incidence geometry to extend the
collection angle beyond the Abbe limit, achieving up to a -fold resolution
enhancement. By acquiring diffraction patterns at multiple sample rotation
angles, we capture complementary spatial frequency information. A
tilted-incidence multi-rotation-angle fusion ptychographic iterative engine
(tmf-PIE) algorithm is then employed to integrate these datasets, enabling
super-resolution image reconstruction. Additionally, this method mitigates the
anisotropic resolution artifacts inherent to tilted CDI geometries. Our
technique represents an advancement in super-resolution imaging, providing a
new alternative alongside established methods such as STED, SIM, and SMLM.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 08:03:43 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 08:24:46 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Youyang",
"Zhou",
""
],
[
"Wenren",
"Shi",
""
],
[
"Yun",
"Xie",
""
],
[
"Bianli",
"Zhao",
""
],
[
"Xinyu",
"Luo",
""
],
[
"Mingjie",
"Yao",
""
],
[
"Rui",
"Zhang",
""
],
[
"Xin",
"Tan",
""
],
[
"Kui",
"Li",
""
],
[
"Hao",
"Yang",
""
],
[
"Qi",
"Liu",
""
],
[
"Yinggang",
"Nan",
""
],
[
"Jie",
"Bao",
""
],
[
"Yuping",
"Zhang",
""
],
[
"Feng",
"Shu",
""
],
[
"Shaopan",
"Li",
""
],
[
"Xiaoshi",
"Zhang",
""
]
] | TITLE: Super-Resolution Coherent Diffractive Imaging via Tilted-Incidence
Multi-Rotation-Angle Fusion Ptychography
ABSTRACT: Coherent diffractive imaging (CDI) enables lensless imaging with experimental
simplicity and a flexible field of view, yet its resolution is fundamentally
constrained by the Abbe diffraction limit. To overcome this limitation, we
introduce a novel Tilted-Incidence Multi-Rotation-Angle Fusion Ptychography
technique. This approach leverages a tilted-incidence geometry to extend the
collection angle beyond the Abbe limit, achieving up to a -fold resolution
enhancement. By acquiring diffraction patterns at multiple sample rotation
angles, we capture complementary spatial frequency information. A
tilted-incidence multi-rotation-angle fusion ptychographic iterative engine
(tmf-PIE) algorithm is then employed to integrate these datasets, enabling
super-resolution image reconstruction. Additionally, this method mitigates the
anisotropic resolution artifacts inherent to tilted CDI geometries. Our
technique represents an advancement in super-resolution imaging, providing a
new alternative alongside established methods such as STED, SIM, and SMLM.
|
2504.04582 | Eugenio Lomurno | Nicolo Resmini, Eugenio Lomurno, Cristian Sbrolli, Matteo Matteucci | Your Image Generator Is Your New Private Dataset | null | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Generative diffusion models have emerged as powerful tools to synthetically
produce training data, offering potential solutions to data scarcity and
reducing labelling costs for downstream supervised deep learning applications.
However, effectively leveraging text-conditioned image generation for building
classifier training sets requires addressing key issues: constructing
informative textual prompts, adapting generative models to specific domains,
and ensuring robust performance. This paper proposes the Text-Conditioned
Knowledge Recycling (TCKR) pipeline to tackle these challenges. TCKR combines
dynamic image captioning, parameter-efficient diffusion model fine-tuning, and
Generative Knowledge Distillation techniques to create synthetic datasets
tailored for image classification. The pipeline is rigorously evaluated on ten
diverse image classification benchmarks. The results demonstrate that models
trained solely on TCKR-generated data achieve classification accuracies on par
with (and in several cases exceeding) models trained on real images.
Furthermore, the evaluation reveals that these synthetic-data-trained models
exhibit substantially enhanced privacy characteristics: their vulnerability to
Membership Inference Attacks is significantly reduced, with the membership
inference AUC lowered by 5.49 points on average compared to using real training
data, demonstrating a substantial improvement in the performance-privacy
trade-off. These findings indicate that high-fidelity synthetic data can
effectively replace real data for training classifiers, yielding strong
performance whilst simultaneously providing improved privacy protection as a
valuable emergent property. The code and trained models are available in the
accompanying open-source repository.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 18:46:08 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 08:35:53 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Resmini",
"Nicolo",
""
],
[
"Lomurno",
"Eugenio",
""
],
[
"Sbrolli",
"Cristian",
""
],
[
"Matteucci",
"Matteo",
""
]
] | TITLE: Your Image Generator Is Your New Private Dataset
ABSTRACT: Generative diffusion models have emerged as powerful tools to synthetically
produce training data, offering potential solutions to data scarcity and
reducing labelling costs for downstream supervised deep learning applications.
However, effectively leveraging text-conditioned image generation for building
classifier training sets requires addressing key issues: constructing
informative textual prompts, adapting generative models to specific domains,
and ensuring robust performance. This paper proposes the Text-Conditioned
Knowledge Recycling (TCKR) pipeline to tackle these challenges. TCKR combines
dynamic image captioning, parameter-efficient diffusion model fine-tuning, and
Generative Knowledge Distillation techniques to create synthetic datasets
tailored for image classification. The pipeline is rigorously evaluated on ten
diverse image classification benchmarks. The results demonstrate that models
trained solely on TCKR-generated data achieve classification accuracies on par
with (and in several cases exceeding) models trained on real images.
Furthermore, the evaluation reveals that these synthetic-data-trained models
exhibit substantially enhanced privacy characteristics: their vulnerability to
Membership Inference Attacks is significantly reduced, with the membership
inference AUC lowered by 5.49 points on average compared to using real training
data, demonstrating a substantial improvement in the performance-privacy
trade-off. These findings indicate that high-fidelity synthetic data can
effectively replace real data for training classifiers, yielding strong
performance whilst simultaneously providing improved privacy protection as a
valuable emergent property. The code and trained models are available in the
accompanying open-source repository.
|
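For readers unfamiliar with the privacy metric quoted above: in the common loss-threshold formulation, membership-inference AUC is the ROC AUC of an attacker who scores each example by the model's loss on it (training-set members tend to have lower loss). The sketch below computes that metric from two loss arrays; it is a generic illustration of the metric, not the paper's attack.

```python
import numpy as np

def membership_inference_auc(member_losses, nonmember_losses):
    """AUC of a loss-threshold membership attack: score = -loss, label = 1
    for training members. AUC near 0.5 means the attacker cannot tell
    members from non-members, i.e. better privacy."""
    scores = np.concatenate([-np.asarray(member_losses),
                             -np.asarray(nonmember_losses)])
    labels = np.concatenate([np.ones(len(member_losses)),
                             np.zeros(len(nonmember_losses))])
    # Rank-based AUC, equivalent to the Mann-Whitney U statistic (ties ignored).
    order = scores.argsort()
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```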
2504.04717 | Yubo Li | Yubo Li, Xiaobin Shen, Xinyu Yao, Xueying Ding, Yidi Miao, Ramayya
Krishnan, Rema Padman | Beyond Single-Turn: A Survey on Multi-Turn Interactions with Large
Language Models | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Recent advancements in large language models (LLMs) have revolutionized their
ability to handle single-turn tasks, yet real-world applications demand
sophisticated multi-turn interactions. This survey provides a comprehensive
review of recent advancements in evaluating and enhancing multi-turn
interactions in LLMs. Focusing on task-specific scenarios, from instruction
following in diverse domains such as math and coding to complex conversational
engagements in roleplay, healthcare, education, and even adversarial jailbreak
settings, we systematically examine the challenges of maintaining context,
coherence, fairness, and responsiveness over prolonged dialogues. The paper
organizes current benchmarks and datasets into coherent categories that reflect
the evolving landscape of multi-turn dialogue evaluation. In addition, we
review a range of enhancement methodologies under multi-turn settings,
including model-centric strategies (contextual learning, supervised
fine-tuning, reinforcement learning, and new architectures), external
integration approaches (memory-augmented, retrieval-based methods, and
knowledge graph), and agent-based techniques for collaborative interactions.
Finally, we discuss open challenges and propose future directions for research
to further advance the robustness and effectiveness of multi-turn interactions
in LLMs. Related resources and papers are available at
https://github.com/yubol-cmu/Awesome-Multi-Turn-LLMs.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 04:00:08 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 03:58:37 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Li",
"Yubo",
""
],
[
"Shen",
"Xiaobin",
""
],
[
"Yao",
"Xinyu",
""
],
[
"Ding",
"Xueying",
""
],
[
"Miao",
"Yidi",
""
],
[
"Krishnan",
"Ramayya",
""
],
[
"Padman",
"Rema",
""
]
] | TITLE: Beyond Single-Turn: A Survey on Multi-Turn Interactions with Large
Language Models
ABSTRACT: Recent advancements in large language models (LLMs) have revolutionized their
ability to handle single-turn tasks, yet real-world applications demand
sophisticated multi-turn interactions. This survey provides a comprehensive
review of recent advancements in evaluating and enhancing multi-turn
interactions in LLMs. Focusing on task-specific scenarios, from instruction
following in diverse domains such as math and coding to complex conversational
engagements in roleplay, healthcare, education, and even adversarial jailbreak
settings, we systematically examine the challenges of maintaining context,
coherence, fairness, and responsiveness over prolonged dialogues. The paper
organizes current benchmarks and datasets into coherent categories that reflect
the evolving landscape of multi-turn dialogue evaluation. In addition, we
review a range of enhancement methodologies under multi-turn settings,
including model-centric strategies (contextual learning, supervised
fine-tuning, reinforcement learning, and new architectures), external
integration approaches (memory-augmented, retrieval-based methods, and
knowledge graph), and agent-based techniques for collaborative interactions.
Finally, we discuss open challenges and propose future directions for research
to further advance the robustness and effectiveness of multi-turn interactions
in LLMs. Related resources and papers are available at
https://github.com/yubol-cmu/Awesome-Multi-Turn-LLMs.
|
2504.04749 | Ahmad Hussein | Ahmad Hussein, Mukesh Prasad, Ali Anaissi and Ali Braytee | Vision Transformers with Autoencoders and Explainable AI for Cancer
Patient Risk Stratification Using Whole Slide Imaging | 11 pages | null | null | null | eess.IV cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Cancer remains one of the leading causes of mortality worldwide,
necessitating accurate diagnosis and prognosis. Whole Slide Imaging (WSI) has
become an integral part of clinical workflows with advancements in digital
pathology. While various studies have utilized WSIs, their extracted features
may not fully capture the most relevant pathological information, and their
lack of interpretability limits clinical adoption.
In this paper, we propose PATH-X, a framework that integrates Vision
Transformers (ViT) and Autoencoders with SHAP (Shapley Additive Explanations)
to enhance model explainability for patient stratification and risk prediction
using WSIs from The Cancer Genome Atlas (TCGA). A representative image slice is
selected from each WSI, and numerical feature embeddings are extracted using
Google's pre-trained ViT. These features are then compressed via an autoencoder
and used for unsupervised clustering and classification tasks. Kaplan-Meier
survival analysis is applied to evaluate stratification into two and three risk
groups. SHAP is used to identify key contributing features, which are mapped
onto histopathological slices to provide spatial context.
PATH-X demonstrates strong performance in breast and glioma cancers, where a
sufficient number of WSIs enabled robust stratification. However, performance
in lung cancer was limited due to data availability, emphasizing the need for
larger datasets to enhance model reliability and clinical applicability.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 05:48:42 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 03:59:22 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Hussein",
"Ahmad",
""
],
[
"Prasad",
"Mukesh",
""
],
[
"Anaissi",
"Ali",
""
],
[
"Braytee",
"Ali",
""
]
] | TITLE: Vision Transformers with Autoencoders and Explainable AI for Cancer
Patient Risk Stratification Using Whole Slide Imaging
ABSTRACT: Cancer remains one of the leading causes of mortality worldwide,
necessitating accurate diagnosis and prognosis. Whole Slide Imaging (WSI) has
become an integral part of clinical workflows with advancements in digital
pathology. While various studies have utilized WSIs, their extracted features
may not fully capture the most relevant pathological information, and their
lack of interpretability limits clinical adoption.
In this paper, we propose PATH-X, a framework that integrates Vision
Transformers (ViT) and Autoencoders with SHAP (Shapley Additive Explanations)
to enhance model explainability for patient stratification and risk prediction
using WSIs from The Cancer Genome Atlas (TCGA). A representative image slice is
selected from each WSI, and numerical feature embeddings are extracted using
Google's pre-trained ViT. These features are then compressed via an autoencoder
and used for unsupervised clustering and classification tasks. Kaplan-Meier
survival analysis is applied to evaluate stratification into two and three risk
groups. SHAP is used to identify key contributing features, which are mapped
onto histopathological slices to provide spatial context.
PATH-X demonstrates strong performance in breast and glioma cancers, where a
sufficient number of WSIs enabled robust stratification. However, performance
in lung cancer was limited due to data availability, emphasizing the need for
larger datasets to enhance model reliability and clinical applicability.
|
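The stratification pipeline above (ViT embeddings, compression, clustering, Kaplan-Meier analysis) can be approximated end to end in a few lines. The sketch below is a toy stand-in, not the paper's code: it substitutes PCA for the autoencoder, clusters with k-means, and checks two-group survival separation with a log-rank test from lifelines. Array shapes and hyperparameters are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
from lifelines.statistics import logrank_test

def stratify_and_test(embeddings, durations, events, dim=32):
    """Compress per-slide feature embeddings (PCA here, autoencoder in the
    paper), split patients into two risk groups, and test whether their
    survival curves differ. All inputs are numpy arrays."""
    compressed = PCA(n_components=dim).fit_transform(embeddings)
    groups = KMeans(n_clusters=2, n_init=10).fit_predict(compressed)
    a, b = groups == 0, groups == 1
    result = logrank_test(durations[a], durations[b],
                          event_observed_A=events[a],
                          event_observed_B=events[b])
    return groups, result.p_value
```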
2504.04924 | Changqing Su | Changqing Su, Yanqin Chen, Zihan Lin, Zhen Cheng, You Zhou, Bo Xiong,
Zhaofei Yu, Tiejun Huang | Inter-event Interval Microscopy for Event Cameras | null | null | null | null | cs.CV eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Event cameras, innovative bio-inspired sensors, differ from traditional
cameras by sensing changes in intensity rather than directly perceiving
intensity and recording these variations as a continuous stream of "events".
The intensity reconstruction from these sparse events has long been a
challenging problem. Previous approaches mainly focused on transforming
motion-induced events into videos or achieving intensity imaging for static
scenes by integrating modulation devices at the event camera acquisition end.
In this paper, for the first time, we achieve event-to-intensity conversion
using a static event camera for both static and dynamic scenes in fluorescence
microscopy. Unlike conventional methods that primarily rely on event
integration, the proposed Inter-event Interval Microscopy (IEIM) quantifies the
time interval between consecutive events at each pixel. With a fixed threshold
in the event camera, the time interval can precisely represent the intensity.
At the hardware level, the proposed IEIM integrates a pulse light modulation
device within a microscope equipped with an event camera, termed Pulse
Modulation-based Event-driven Fluorescence Microscopy. Additionally, we have
collected the IEIMat dataset under various scenes, including high dynamic range and
high-speed scenarios. Experimental results on the IEIMat dataset demonstrate
that the proposed IEIM achieves superior spatial and temporal resolution, as
well as a higher dynamic range, with lower bandwidth compared to other methods.
The code and the IEIMat dataset will be made publicly available.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 11:05:13 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 02:46:44 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Su",
"Changqing",
""
],
[
"Chen",
"Yanqin",
""
],
[
"Lin",
"Zihan",
""
],
[
"Cheng",
"Zhen",
""
],
[
"Zhou",
"You",
""
],
[
"Xiong",
"Bo",
""
],
[
"Yu",
"Zhaofei",
""
],
[
"Huang",
"Tiejun",
""
]
] | TITLE: Inter-event Interval Microscopy for Event Cameras
ABSTRACT: Event cameras, innovative bio-inspired sensors, differ from traditional
cameras by sensing changes in intensity rather than directly perceiving
intensity and recording these variations as a continuous stream of "events".
The intensity reconstruction from these sparse events has long been a
challenging problem. Previous approaches mainly focused on transforming
motion-induced events into videos or achieving intensity imaging for static
scenes by integrating modulation devices at the event camera acquisition end.
In this paper, for the first time, we achieve event-to-intensity conversion
using a static event camera for both static and dynamic scenes in fluorescence
microscopy. Unlike conventional methods that primarily rely on event
integration, the proposed Inter-event Interval Microscopy (IEIM) quantifies the
time interval between consecutive events at each pixel. With a fixed threshold
in the event camera, the time interval can precisely represent the intensity.
At the hardware level, the proposed IEIM integrates a pulse light modulation
device within a microscope equipped with an event camera, termed Pulse
Modulation-based Event-driven Fluorescence Microscopy. Additionally, we have
collected the IEIMat dataset under various scenes, including high dynamic range and
high-speed scenarios. Experimental results on the IEIMat dataset demonstrate
that the proposed IEIM achieves superior spatial and temporal resolution, as
well as a higher dynamic range, with lower bandwidth compared to other methods.
The code and the IEIMat dataset will be made publicly available.
|
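The key conversion in IEIM is from inter-event intervals to intensity: with a fixed event threshold, a shorter interval between consecutive events at a pixel implies a proportionally brighter signal. Below is a hedged numpy sketch of that per-pixel conversion; the I ≈ theta/Δt model and the averaging of intervals are assumptions for illustration, not the authors' released code.

```python
import numpy as np

def intensity_from_intervals(event_times, shape, theta=1.0, eps=1e-9):
    """Estimate per-pixel intensity from inter-event intervals: with a fixed
    event threshold theta, intensity ~ theta / dt (assumed model).

    event_times: dict mapping (row, col) -> sorted sequence of timestamps.
    shape: (height, width) of the output intensity image.
    """
    intensity = np.zeros(shape)
    for (r, c), ts in event_times.items():
        if len(ts) < 2:
            continue  # need at least one interval to estimate intensity
        dt = np.diff(np.asarray(ts, dtype=float)).mean()
        intensity[r, c] = theta / (dt + eps)
    return intensity
```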
2504.05118 | Yu Yue | Yu Yue, Yufeng Yuan, Qiying Yu, Xiaochen Zuo, Ruofei Zhu, Wenyuan Xu,
Jiaze Chen, Chengyi Wang, TianTian Fan, Zhengyin Du, Xiangpeng Wei, Xiangyu
Yu, Gaohong Liu, Juncai Liu, Lingjun Liu, Haibin Lin, Zhiqi Lin, Bole Ma, Chi
Zhang, Mofan Zhang, Wang Zhang, Hang Zhu, Ru Zhang, Xin Liu, Mingxuan Wang,
Yonghui Wu, Lin Yan | VAPO: Efficient and Reliable Reinforcement Learning for Advanced
Reasoning Tasks | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present VAPO (Value-based Augmented Proximal Policy Optimization), a novel
framework tailored for reasoning models within the value-based paradigm.
Benchmarked on the AIME 2024 dataset, VAPO, built on the
Qwen 32B pre-trained model, attains a state-of-the-art score of
$\mathbf{60.4}$. In direct comparison under identical experimental settings,
VAPO outperforms the previously reported results of DeepSeek-R1-Zero-Qwen-32B
and DAPO by more than 10 points. The training process of VAPO stands out for
its stability and efficiency. It reaches state-of-the-art performance within a
mere 5,000 steps. Moreover, across multiple independent runs, no training
crashes occur, underscoring its reliability. This research delves into long
chain-of-thought (long-CoT) reasoning using a value-based reinforcement
learning framework. We pinpoint three key challenges that plague value-based
methods: value model bias, the presence of heterogeneous sequence lengths, and
the sparsity of reward signals. Through systematic design, VAPO offers an
integrated solution that effectively alleviates these challenges, enabling
enhanced performance in long-CoT reasoning tasks.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 14:21:11 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 03:06:22 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Yue",
"Yu",
""
],
[
"Yuan",
"Yufeng",
""
],
[
"Yu",
"Qiying",
""
],
[
"Zuo",
"Xiaochen",
""
],
[
"Zhu",
"Ruofei",
""
],
[
"Xu",
"Wenyuan",
""
],
[
"Chen",
"Jiaze",
""
],
[
"Wang",
"Chengyi",
""
],
[
"Fan",
"TianTian",
""
],
[
"Du",
"Zhengyin",
""
],
[
"Wei",
"Xiangpeng",
""
],
[
"Yu",
"Xiangyu",
""
],
[
"Liu",
"Gaohong",
""
],
[
"Liu",
"Juncai",
""
],
[
"Liu",
"Lingjun",
""
],
[
"Lin",
"Haibin",
""
],
[
"Lin",
"Zhiqi",
""
],
[
"Ma",
"Bole",
""
],
[
"Zhang",
"Chi",
""
],
[
"Zhang",
"Mofan",
""
],
[
"Zhang",
"Wang",
""
],
[
"Zhu",
"Hang",
""
],
[
"Zhang",
"Ru",
""
],
[
"Liu",
"Xin",
""
],
[
"Wang",
"Mingxuan",
""
],
[
"Wu",
"Yonghui",
""
],
[
"Yan",
"Lin",
""
]
] | TITLE: VAPO: Efficient and Reliable Reinforcement Learning for Advanced
Reasoning Tasks
ABSTRACT: We present VAPO (Value-based Augmented Proximal Policy Optimization),
a novel framework tailored for reasoning models within the value-based
paradigm. Benchmarked on the AIME 2024 dataset, VAPO, built on the
Qwen 32B pre-trained model, attains a state-of-the-art score of
$\mathbf{60.4}$. In direct comparison under identical experimental settings,
VAPO outperforms the previously reported results of DeepSeek-R1-Zero-Qwen-32B
and DAPO by more than 10 points. The training process of VAPO stands out for
its stability and efficiency. It reaches state-of-the-art performance within a
mere 5,000 steps. Moreover, across multiple independent runs, no training
crashes occur, underscoring its reliability. This research delves into long
chain-of-thought (long-CoT) reasoning using a value-based reinforcement
learning framework. We pinpoint three key challenges that plague value-based
methods: value model bias, the presence of heterogeneous sequence lengths, and
the sparsity of reward signals. Through systematic design, VAPO offers an
integrated solution that effectively alleviates these challenges, enabling
enhanced performance in long-CoT reasoning tasks.
|
2504.05250 | Mustafa Burak Gurbuz | Mustafa Burak Gurbuz, Xingyu Zheng, Constantine Dovrolis | PEAKS: Selecting Key Training Examples Incrementally via Prediction
Error Anchored by Kernel Similarity | null | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | As deep learning continues to be driven by ever-larger datasets,
understanding which examples are most important for generalization has become a
critical question. While progress in data selection continues, emerging
applications require studying this problem in dynamic contexts. To bridge this
gap, we pose the Incremental Data Selection (IDS) problem, where examples
arrive as a continuous stream and must be selected without access to the
full data source. In this setting, the learner must incrementally build a
training dataset of predefined size while simultaneously learning the
underlying task. We find that in IDS, the impact of a new sample on the model
state depends fundamentally on both its geometric relationship in the feature
space and its prediction error. Leveraging this insight, we propose PEAKS
(Prediction Error Anchored by Kernel Similarity), an efficient data selection
method tailored for IDS. Our comprehensive evaluations demonstrate that PEAKS
consistently outperforms existing selection strategies. Furthermore, PEAKS
yields increasingly better performance returns than random selection as
training data size grows on real-world datasets.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 16:42:09 GMT"
},
{
"version": "v2",
"created": "Tue, 8 Apr 2025 02:48:22 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Gurbuz",
"Mustafa Burak",
""
],
[
"Zheng",
"Xingyu",
""
],
[
"Dovrolis",
"Constantine",
""
]
] | TITLE: PEAKS: Selecting Key Training Examples Incrementally via Prediction
Error Anchored by Kernel Similarity
ABSTRACT: As deep learning continues to be driven by ever-larger datasets,
understanding which examples are most important for generalization has become a
critical question. While progress in data selection continues, emerging
applications require studying this problem in dynamic contexts. To bridge this
gap, we pose the Incremental Data Selection (IDS) problem, where examples
arrive as a continuous stream and must be selected without access to the
full data source. In this setting, the learner must incrementally build a
training dataset of predefined size while simultaneously learning the
underlying task. We find that in IDS, the impact of a new sample on the model
state depends fundamentally on both its geometric relationship in the feature
space and its prediction error. Leveraging this insight, we propose PEAKS
(Prediction Error Anchored by Kernel Similarity), an efficient data selection
method tailored for IDS. Our comprehensive evaluations demonstrate that PEAKS
consistently outperforms existing selection strategies. Furthermore, PEAKS
yields increasingly better performance returns than random selection as
training data size grows on real-world datasets.
|
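The abstract states that a sample's value depends on both its feature-space geometry and its prediction error, but does not give the formula here. The sketch below encodes one plausible reading: score each incoming sample by its prediction error, discounted by kernel similarity to examples already kept, so novel regions of feature space are favored. The multiplicative combination and the RBF kernel are assumptions, not the paper's exact criterion.

```python
import numpy as np

def rbf_similarity(x, stored, gamma=1.0):
    # Kernel similarity between one feature vector and each stored vector.
    return np.exp(-gamma * ((stored - x) ** 2).sum(axis=1))

def peaks_style_score(feature, pred_error, selected_features, gamma=1.0):
    """Hedged sketch of a PEAKS-like selection score: high prediction error
    is valuable, but less so if a very similar example is already in the
    training buffer."""
    if len(selected_features) == 0:
        return pred_error
    redundancy = rbf_similarity(feature, np.asarray(selected_features), gamma).max()
    return pred_error * (1.0 - redundancy)
```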
2504.05307 | Sowmya S. Sundaram | Sowmya S Sundaram, Mark A Musen | Toward Total Recall: Enhancing FAIRness through AI-Driven Metadata
Standardization | null | null | null | null | cs.IR cs.AI | http://creativecommons.org/licenses/by/4.0/ | Current metadata often suffer from incompleteness, inconsistency, and
incorrect formatting, hindering effective data reuse and discovery. Using GPT-4
and a metadata knowledge base (CEDAR), we devised a method that standardizes
metadata in scientific data sets, ensuring adherence to community
standards. The standardization process involves correcting and refining
metadata entries to conform to established guidelines, significantly improving
search performance and recall metrics. The investigation uses BioSample and GEO
repositories to demonstrate the impact of these enhancements, showcasing how
standardized metadata lead to better retrieval outcomes. The average recall
improves significantly, rising from 17.65\% with the baseline raw datasets of
BioSample and GEO to 62.87\% with our proposed metadata standardization
pipeline. This finding highlights the transformative impact of integrating
advanced AI models with structured metadata curation tools in achieving more
effective and reliable data retrieval.
| [
{
"version": "v1",
"created": "Thu, 13 Feb 2025 21:58:27 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Sundaram",
"Sowmya S",
""
],
[
"Musen",
"Mark A",
""
]
] | TITLE: Toward Total Recall: Enhancing FAIRness through AI-Driven Metadata
Standardization
ABSTRACT: Current metadata often suffer from incompleteness, inconsistency, and
incorrect formatting, hindering effective data reuse and discovery. Using GPT-4
and a metadata knowledge base (CEDAR), we devised a method that standardizes
metadata in scientific data sets, ensuring adherence to community
standards. The standardization process involves correcting and refining
metadata entries to conform to established guidelines, significantly improving
search performance and recall metrics. The investigation uses BioSample and GEO
repositories to demonstrate the impact of these enhancements, showcasing how
standardized metadata lead to better retrieval outcomes. The average recall
improves significantly, rising from 17.65\% with the baseline raw datasets of
BioSample and GEO to 62.87\% with our proposed metadata standardization
pipeline. This finding highlights the transformative impact of integrating
advanced AI models with structured metadata curation tools in achieving more
effective and reliable data retrieval.
|
2504.05308 | Aleksandr Katrutsa | Ekaterina Solodneva, Alexandra Khirianova, Aleksandr Katrutsa, Roman
Loginov, Andrey Tikhanov, Egor Samosvat, Yuriy Dorn | RARe: Raising Ad Revenue Framework with Context-Aware Reranking | null | null | null | null | cs.IR | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Modern recommender systems excel at optimizing search result relevance for
e-commerce platforms. While maintaining this relevance, platforms seek
opportunities to maximize revenue through search result adjustments. To address
the trade-off between relevance and revenue, we propose the $\mathsf{RARe}$
($\textbf{R}$aising $\textbf{A}$dvertisement $\textbf{Re}$venue) framework.
$\mathsf{RARe}$ stacks a click model and a reranking model. We train the
$\mathsf{RARe}$ framework with a loss function to find revenue and relevance
trade-offs. In our experience, the click model is crucial in the
$\mathsf{RARe}$ framework. We propose and compare two different click models
that take into account the context of items in a search result. The first click
model is a Gradient-Boosting Decision Tree with Concatenation (GBDT-C), which
includes a context in the traditional GBDT model for click prediction. The
second model, SAINT-Q, adapts the Sequential Attention model to capture
influences between search results. Our experiments indicate that the proposed
click models outperform baselines and improve the overall quality of our
framework. Experiments on the industrial dataset, which will be released
publicly, show $\mathsf{RARe}$'s significant revenue improvements while
preserving high relevance.
| [
{
"version": "v1",
"created": "Sat, 15 Feb 2025 20:55:54 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Solodneva",
"Ekaterina",
""
],
[
"Khirianova",
"Alexandra",
""
],
[
"Katrutsa",
"Aleksandr",
""
],
[
"Loginov",
"Roman",
""
],
[
"Tikhanov",
"Andrey",
""
],
[
"Samosvat",
"Egor",
""
],
[
"Dorn",
"Yuriy",
""
]
] | TITLE: RARe: Raising Ad Revenue Framework with Context-Aware Reranking
ABSTRACT: Modern recommender systems excel at optimizing search result relevance for
e-commerce platforms. While maintaining this relevance, platforms seek
opportunities to maximize revenue through search result adjustments. To address
the trade-off between relevance and revenue, we propose the $\mathsf{RARe}$
($\textbf{R}$aising $\textbf{A}$dvertisement $\textbf{Re}$venue) framework.
$\mathsf{RARe}$ stacks a click model and a reranking model. We train the
$\mathsf{RARe}$ framework with a loss function to find revenue and relevance
trade-offs. In our experience, the click model is crucial in the
$\mathsf{RARe}$ framework. We propose and compare two different click models
that take into account the context of items in a search result. The first click
model is a Gradient-Boosting Decision Tree with Concatenation (GBDT-C), which
includes a context in the traditional GBDT model for click prediction. The
second model, SAINT-Q, adapts the Sequential Attention model to capture
influences between search results. Our experiments indicate that the proposed
click models outperform baselines and improve the overall quality of our
framework. Experiments on the industrial dataset, which will be released
publicly, show $\mathsf{RARe}$'s significant revenue improvements while
preserving high relevance.
|
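As a concrete illustration of the relevance/revenue trade-off the framework optimizes, the toy reranker below scores each candidate as relevance plus a weighted expected ad revenue (click probability times revenue if clicked). The actual framework learns this trade-off end to end through its loss and a context-aware click model; the fixed linear utility here is only a stand-in.

```python
def rerank_with_revenue(items, lam=0.3):
    """Toy relevance/revenue reranker: each item carries a relevance score,
    a click-model probability, and a revenue if clicked. lam controls how
    far the slate may drift from pure relevance ordering.

    items: list of dicts with keys 'relevance', 'p_click', 'revenue'.
    """
    def utility(item):
        return item["relevance"] + lam * item["p_click"] * item["revenue"]
    return sorted(items, key=utility, reverse=True)

# Example: a slightly less relevant sponsored item can outrank an organic one.
ranked = rerank_with_revenue([
    {"relevance": 0.9, "p_click": 0.10, "revenue": 0.0},   # organic result
    {"relevance": 0.7, "p_click": 0.08, "revenue": 2.5},   # sponsored result
])
```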
2504.05310 | Hrishikesh Kulkarni | Hrishikesh Kulkarni and Surya Kallumadi and Sean MacAvaney and Nazli
Goharian and Ophir Frieder | GRIT: Graph-based Recall Improvement for Task-oriented E-commerce
Queries | LLM4ECommerce at WWW 2025 | Companion Proceedings of the ACM Web Conference 2025 (WWW
Companion 25), April 28-May 2, 2025, Sydney, NSW, Australia. ACM, New York,
NY, USA, 10 pages | 10.1145/3701716.3717859 | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Many e-commerce search pipelines have four stages, namely: retrieval,
filtering, ranking, and personalized-reranking. The retrieval stage must be
efficient and yield high recall because relevant products missed in the first
stage cannot be considered in later stages. This is challenging for
task-oriented queries (queries with actionable intent) where user requirements
are contextually intensive and difficult to understand. To foster research in
the domain of e-commerce, we created a novel benchmark for Task-oriented
Queries (TQE) by using LLM, which operates over the existing ESCI product
search dataset. Furthermore, we propose a novel method 'Graph-based Recall
Improvement for Task-oriented queries' (GRIT) to address the most crucial
first-stage recall improvement needs. GRIT leads to robust and statistically
significant improvements over state-of-the-art lexical, dense, and
learned-sparse baselines. Our system supports both traditional and
task-oriented e-commerce queries, yielding up to 6.3% recall improvement. In
the indexing stage, GRIT first builds a product-product similarity graph using
user clicks or manual annotation data. During retrieval, it locates neighbors
with higher contextual and action relevance and prioritizes them over the less
relevant candidates from the initial retrieval. This leads to a more
comprehensive and relevant first-stage result set that improves overall system
recall. Overall, GRIT leverages the locality relationships and contextual
insights provided by the graph using neighboring nodes to enrich the
first-stage retrieval results. We show that the method is not only robust
across all introduced parameters, but also works effectively on top of a
variety of first-stage retrieval methods.
| [
{
"version": "v1",
"created": "Sun, 16 Feb 2025 16:21:49 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Kulkarni",
"Hrishikesh",
""
],
[
"Kallumadi",
"Surya",
""
],
[
"MacAvaney",
"Sean",
""
],
[
"Goharian",
"Nazli",
""
],
[
"Frieder",
"Ophir",
""
]
] | TITLE: GRIT: Graph-based Recall Improvement for Task-oriented E-commerce
Queries
ABSTRACT: Many e-commerce search pipelines have four stages, namely: retrieval,
filtering, ranking, and personalized-reranking. The retrieval stage must be
efficient and yield high recall because relevant products missed in the first
stage cannot be considered in later stages. This is challenging for
task-oriented queries (queries with actionable intent) where user requirements
are contextually intensive and difficult to understand. To foster research in
the domain of e-commerce, we created a novel benchmark for Task-oriented
Queries (TQE) by using an LLM, which operates over the existing ESCI product
search dataset. Furthermore, we propose a novel method 'Graph-based Recall
Improvement for Task-oriented queries' (GRIT) to address the most crucial
first-stage recall improvement needs. GRIT leads to robust and statistically
significant improvements over state-of-the-art lexical, dense, and
learned-sparse baselines. Our system supports both traditional and
task-oriented e-commerce queries, yielding up to 6.3% recall improvement. In
the indexing stage, GRIT first builds a product-product similarity graph using
user clicks or manual annotation data. During retrieval, it locates neighbors
with higher contextual and action relevance and prioritizes them over the less
relevant candidates from the initial retrieval. This leads to a more
comprehensive and relevant first-stage result set that improves overall system
recall. Overall, GRIT leverages the locality relationships and contextual
insights provided by the graph using neighboring nodes to enrich the
first-stage retrieval results. We show that the method is not only robust
across all introduced parameters, but also works effectively on top of a
variety of first-stage retrieval methods.
|
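The indexing/retrieval split described above lends itself to a compact sketch: given a click-derived product-product similarity graph, expand the top first-stage results with their neighbors and merge them ahead of the tail of the ranking. The scoring rule below (seed score plus a boosted edge weight) is an assumption for illustration, not GRIT's actual scoring.

```python
def graph_expand(initial_ranking, graph, top_seed=10, boost=0.5):
    """Hedged sketch of graph-based recall expansion.

    initial_ranking: list of (product_id, retrieval_score), best first.
    graph: dict product_id -> list of (neighbor_id, edge_weight), built
           offline from user clicks or manual annotations.
    """
    scores = dict(initial_ranking)
    for pid, score in initial_ranking[:top_seed]:
        for nbr, weight in graph.get(pid, []):
            # Neighbors inherit a boosted fraction of their seed's score,
            # pulling contextually related products above the long tail.
            scores[nbr] = max(scores.get(nbr, 0.0), score + boost * weight)
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
```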
2504.05312 | Qin Qitao | Qitao Qin, Yucong Luo, Yihang Lu, Zhibo Chu, Xianwei Meng | Towards Adaptive Memory-Based Optimization for Enhanced
Retrieval-Augmented Generation | 8pages | null | null | null | cs.IR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Retrieval-Augmented Generation (RAG), by integrating non-parametric knowledge
from external knowledge bases into models, has emerged as a promising approach
to enhancing response accuracy while mitigating factual errors and
hallucinations. This method has been widely applied in tasks such as Question
Answering (QA). However, existing RAG methods struggle with open-domain QA
tasks because they perform independent retrieval operations and directly
incorporate the retrieved information into generation without maintaining a
summarizing memory or using adaptive retrieval strategies, leading to noise
from redundant information and insufficient information integration. To address
these challenges, we propose Adaptive memory-based optimization for enhanced
RAG (Amber) for open-domain QA tasks, which comprises an Agent-based Memory
Updater, an Adaptive Information Collector, and a Multi-granular Content
Filter, working together within an iterative memory updating paradigm.
Specifically, Amber integrates and optimizes the language model's memory
through a multi-agent collaborative approach, ensuring comprehensive knowledge
integration from previous retrieval steps. It dynamically adjusts retrieval
queries and decides when to stop retrieval based on the accumulated knowledge,
enhancing retrieval efficiency and effectiveness. Additionally, it reduces
noise by filtering irrelevant content at multiple levels, retaining essential
information to improve overall model performance. We conduct extensive
experiments on several open-domain QA datasets, and the results demonstrate the
superiority and effectiveness of our method and its components. The source code
is available\footnote{https://anonymous.4open.science/r/Amber-B203/}.
| [
{
"version": "v1",
"created": "Wed, 19 Feb 2025 04:23:12 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Qin",
"Qitao",
""
],
[
"Luo",
"Yucong",
""
],
[
"Lu",
"Yihang",
""
],
[
"Chu",
"Zhibo",
""
],
[
"Meng",
"Xianwei",
""
]
] | TITLE: Towards Adaptive Memory-Based Optimization for Enhanced
Retrieval-Augmented Generation
ABSTRACT: Retrieval-Augmented Generation (RAG), by integrating non-parametric knowledge
from external knowledge bases into models, has emerged as a promising approach
to enhancing response accuracy while mitigating factual errors and
hallucinations. This method has been widely applied in tasks such as Question
Answering (QA). However, existing RAG methods struggle with open-domain QA
tasks because they perform independent retrieval operations and directly
incorporate the retrieved information into generation without maintaining a
summarizing memory or using adaptive retrieval strategies, leading to noise
from redundant information and insufficient information integration. To address
these challenges, we propose Adaptive memory-based optimization for enhanced
RAG (Amber) for open-domain QA tasks, which comprises an Agent-based Memory
Updater, an Adaptive Information Collector, and a Multi-granular Content
Filter, working together within an iterative memory updating paradigm.
Specifically, Amber integrates and optimizes the language model's memory
through a multi-agent collaborative approach, ensuring comprehensive knowledge
integration from previous retrieval steps. It dynamically adjusts retrieval
queries and decides when to stop retrieval based on the accumulated knowledge,
enhancing retrieval efficiency and effectiveness. Additionally, it reduces
noise by filtering irrelevant content at multiple levels, retaining essential
information to improve overall model performance. We conduct extensive
experiments on several open-domain QA datasets, and the results demonstrate the
superiority and effectiveness of our method and its components. The source code
is available\footnote{https://anonymous.4open.science/r/Amber-B203/}.
|
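Abstracting away the three named components, Amber's iterative memory-updating paradigm is a control loop: retrieve with the current query, fold the results into a summarizing memory, and let the collector decide whether to issue a refined query or stop. A minimal sketch of that loop, with the paper's components reduced to injected callables whose internals are not reproduced here:

```python
def iterative_rag(question, retrieve, update_memory, answer, max_rounds=5):
    """Control-flow sketch of an iterative, memory-updating RAG loop.

    retrieve(query, memory)      -> list of retrieved passages
    update_memory(memory, docs)  -> (new_memory, next_query_or_None)
    answer(question, memory)     -> final answer string
    """
    memory, query = "", question
    for _ in range(max_rounds):
        docs = retrieve(query, memory)
        memory, query = update_memory(memory, docs)
        if query is None:  # collector decides enough knowledge is gathered
            break
    return answer(question, memory)
```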
2504.05314 | Jianyang Zhai | Jianyang Zhai, Zi-Feng Mai, Chang-Dong Wang, Feidiao Yang, Xiawu
Zheng, Hui Li, Yonghong Tian | Multimodal Quantitative Language for Generative Recommendation | null | null | null | null | cs.IR cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Generative recommendation has emerged as a promising paradigm aiming at
directly generating the identifiers of the target candidates. Most existing
methods attempt to leverage prior knowledge embedded in Pre-trained Language
Models (PLMs) to improve the recommendation performance. However, they often
fail to accommodate the differences between the general linguistic knowledge of
PLMs and the specific needs of recommendation systems. Moreover, they rarely
consider the complementary knowledge between the multimodal information of
items, which represents the multi-faceted preferences of users. To facilitate
efficient recommendation knowledge transfer, we propose a novel approach called
Multimodal Quantitative Language for Generative Recommendation (MQL4GRec). Our
key idea is to transform items from different domains and modalities into a
unified language, which can serve as a bridge for transferring recommendation
knowledge. Specifically, we first introduce quantitative translators to convert
the text and image content of items from various domains into a new and concise
language, known as quantitative language, with all items sharing the same
vocabulary. Then, we design a series of quantitative language generation tasks
to enrich quantitative language with semantic information and prior knowledge.
Finally, we achieve the transfer of recommendation knowledge from different
domains and modalities to the recommendation task through pre-training and
fine-tuning. We evaluate the effectiveness of MQL4GRec through extensive
experiments and comparisons with existing methods, achieving improvements over
the baseline by 11.18\%, 14.82\%, and 7.95\% on the NDCG metric across three
different datasets, respectively.
| [
{
"version": "v1",
"created": "Thu, 20 Feb 2025 09:29:30 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Zhai",
"Jianyang",
""
],
[
"Mai",
"Zi-Feng",
""
],
[
"Wang",
"Chang-Dong",
""
],
[
"Yang",
"Feidiao",
""
],
[
"Zheng",
"Xiawu",
""
],
[
"Li",
"Hui",
""
],
[
"Tian",
"Yonghong",
""
]
] | TITLE: Multimodal Quantitative Language for Generative Recommendation
ABSTRACT: Generative recommendation has emerged as a promising paradigm aiming at
directly generating the identifiers of the target candidates. Most existing
methods attempt to leverage prior knowledge embedded in Pre-trained Language
Models (PLMs) to improve the recommendation performance. However, they often
fail to accommodate the differences between the general linguistic knowledge of
PLMs and the specific needs of recommendation systems. Moreover, they rarely
consider the complementary knowledge between the multimodal information of
items, which represents the multi-faceted preferences of users. To facilitate
efficient recommendation knowledge transfer, we propose a novel approach called
Multimodal Quantitative Language for Generative Recommendation (MQL4GRec). Our
key idea is to transform items from different domains and modalities into a
unified language, which can serve as a bridge for transferring recommendation
knowledge. Specifically, we first introduce quantitative translators to convert
the text and image content of items from various domains into a new and concise
language, known as quantitative language, with all items sharing the same
vocabulary. Then, we design a series of quantitative language generation tasks
to enrich quantitative language with semantic information and prior knowledge.
Finally, we achieve the transfer of recommendation knowledge from different
domains and modalities to the recommendation task through pre-training and
fine-tuning. We evaluate the effectiveness of MQL4GRec through extensive
experiments and comparisons with existing methods, achieving improvements over
the baseline by 11.18\%, 14.82\%, and 7.95\% on the NDCG metric across three
different datasets, respectively.
|
2504.05315 | Wei Zhang | Shijie Liu, Ruixing Ding, Weihai Lu, Jun Wang, Mo Yu, Xiaoming Shi,
Wei Zhang | Coherency Improved Explainable Recommendation via Large Language Model | Accepted by AAAI 2025, with 9 pages | null | null | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | Explainable recommender systems are designed to elucidate the explanation
behind each recommendation, enabling users to comprehend the underlying logic.
Previous works perform rating prediction and explanation generation in a
multi-task manner. However, these works suffer from incoherence between
predicted ratings and explanations. To address the issue, we propose a novel
framework that employs a large language model (LLM) to generate a rating,
transforms it into a rating vector, and finally generates an explanation based
on the rating vector and user-item information. Moreover, we propose utilizing
publicly available LLMs and pre-trained sentiment analysis models to
automatically evaluate the coherence without human annotations. Extensive
experimental results on three datasets of explainable recommendation show that
the proposed framework is effective, outperforming state-of-the-art baselines
with improvements of 7.3\% in explainability and 4.4\% in text quality.
| [
{
"version": "v1",
"created": "Fri, 21 Feb 2025 00:55:57 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Liu",
"Shijie",
""
],
[
"Ding",
"Ruixing",
""
],
[
"Lu",
"Weihai",
""
],
[
"Wang",
"Jun",
""
],
[
"Yu",
"Mo",
""
],
[
"Shi",
"Xiaoming",
""
],
[
"Zhang",
"Wei",
""
]
] | TITLE: Coherency Improved Explainable Recommendation via Large Language Model
ABSTRACT: Explainable recommender systems are designed to elucidate the explanation
behind each recommendation, enabling users to comprehend the underlying logic.
Previous works perform rating prediction and explanation generation in a
multi-task manner. However, these works suffer from incoherence between
predicted ratings and explanations. To address the issue, we propose a novel
framework that employs a large language model (LLM) to generate a rating,
transforms it into a rating vector, and finally generates an explanation based
on the rating vector and user-item information. Moreover, we propose utilizing
publicly available LLMs and pre-trained sentiment analysis models to
automatically evaluate the coherence without human annotations. Extensive
experimental results on three datasets of explainable recommendation show that
the proposed framework is effective, outperforming state-of-the-art baselines
with improvements of 7.3\% in explainability and 4.4\% in text quality.
|
2504.05320 | Bayode Ogunleye | Laurence Hirsch, Robin Hirsch, Bayode Ogunleye | Document clustering with evolved multiword search queries | 15 pages | Evol. Intel. 18, 37. (2025) | 10.1007/s12065-025-01018-w | null | cs.IR cs.LG cs.NE | http://creativecommons.org/licenses/by/4.0/ | Text clustering holds significant value across various domains due to its
ability to identify patterns and group related information. Current approaches,
which rely heavily on a computed similarity measure between documents, are often
limited in accuracy and interpretability. We present a novel approach to the
problem based on a set of evolved search queries. Clusters are formed as the
set of documents matched by a single search query in the set of queries. The
queries are optimized to maximize the number of documents returned and to
minimize the overlap between clusters (documents returned by more than one
query). Where queries contain more than one word they are interpreted
disjunctively. We have found it useful to assign one word to be the root and
constrain the query construction such that the set of documents returned by any
additional query words intersect with the set returned by the root word. Not
all documents in a collection are returned by any of the search queries in a
set, so once the search query evolution is completed a second stage is
performed whereby a KNN algorithm is applied to assign all unassigned documents
to their nearest cluster. We describe the method and present results using 8
text datasets comparing effectiveness with well-known existing algorithms. We
note that as well as achieving the highest accuracy on these datasets the
search query format provides the qualitative benefits of being interpretable
and modifiable whilst providing a causal explanation of cluster construction.
| [
{
"version": "v1",
"created": "Mon, 24 Feb 2025 16:23:29 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Hirsch",
"Laurence",
""
],
[
"Hirsch",
"Robin",
""
],
[
"Ogunleye",
"Bayode",
""
]
] | TITLE: Document clustering with evolved multiword search queries
ABSTRACT: Text clustering holds significant value across various domains due to its
ability to identify patterns and group related information. Current approaches,
which rely heavily on a computed similarity measure between documents, are often
limited in accuracy and interpretability. We present a novel approach to the
problem based on a set of evolved search queries. Clusters are formed as the
set of documents matched by a single search query in the set of queries. The
queries are optimized to maximize the number of documents returned and to
minimize the overlap between clusters (documents returned by more than one
query). Where queries contain more than one word they are interpreted
disjunctively. We have found it useful to assign one word to be the root and
constrain the query construction such that the set of documents returned by any
additional query words intersect with the set returned by the root word. Not
all documents in a collection are returned by any of the search queries in a
set, so once the search query evolution is completed a second stage is
performed whereby a KNN algorithm is applied to assign all unassigned documents
to their nearest cluster. We describe the method and present results using 8
text datasets comparing effectiveness with well-known existing algorithms. We
note that as well as achieving the highest accuracy on these datasets the
search query format provides the qualitative benefits of being interpretable
and modifiable whilst providing a causal explanation of cluster construction.
|
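The optimization target described above (maximize documents returned, minimize inter-cluster overlap, with multiword queries read disjunctively) can be stated compactly. The fitness sketch below uses an unweighted coverage-minus-overlap objective; the paper's exact weighting and its root-word constraint are omitted, so treat it only as a reading of the abstract.

```python
def query_set_fitness(queries, documents):
    """Sketch of the implied objective: a disjunctive query matches a
    document if any of its words occurs in it; fitness rewards total
    coverage and penalizes documents claimed by more than one query.

    queries: list of word lists; documents: list of token sets.
    """
    matched_by = [
        {i for i, doc in enumerate(documents) if any(w in doc for w in q)}
        for q in queries
    ]
    covered = set().union(*matched_by) if matched_by else set()
    overlap = sum(len(a & b)
                  for i, a in enumerate(matched_by)
                  for b in matched_by[i + 1:])
    return len(covered) - overlap
```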
2504.05324 | Chandana Sree Mala | Chandana Sree Mala, Gizem Gezici, Fosca Giannotti | Hybrid Retrieval for Hallucination Mitigation in Large Language Models:
A Comparative Analysis | null | null | null | null | cs.IR cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) excel in language comprehension and generation
but are prone to hallucinations, producing factually incorrect or unsupported
outputs. Retrieval Augmented Generation (RAG) systems address this issue by
grounding LLM responses with external knowledge. This study evaluates the
relationship between retriever effectiveness and hallucination reduction in
LLMs using three retrieval approaches: sparse retrieval based on BM25 keyword
search, dense retrieval using semantic search with Sentence Transformers, and a
proposed hybrid retrieval module. The hybrid module incorporates query
expansion and combines the results of sparse and dense retrievers through a
dynamically weighted Reciprocal Rank Fusion score. Using the HaluBench dataset,
a benchmark for hallucinations in question answering tasks, we assess retrieval
performance with metrics such as mean average precision and normalised
discounted cumulative gain, focusing on the relevance of the top three
retrieved documents. Results show that the hybrid retriever achieves better
relevance scores, outperforming both sparse and dense retrievers. Further
evaluation of LLM-generated answers against ground truth using metrics such as
accuracy, hallucination rate, and rejection rate reveals that the hybrid
retriever achieves the highest accuracy on fails, the lowest hallucination
rate, and the lowest rejection rate. These findings highlight the hybrid
retriever's ability to enhance retrieval relevance, reduce hallucination rates,
and improve LLM reliability, emphasising the importance of advanced retrieval
techniques in mitigating hallucinations and improving response accuracy.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 10:13:33 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Mala",
"Chandana Sree",
""
],
[
"Gezici",
"Gizem",
""
],
[
"Giannotti",
"Fosca",
""
]
] | TITLE: Hybrid Retrieval for Hallucination Mitigation in Large Language Models:
A Comparative Analysis
ABSTRACT: Large Language Models (LLMs) excel in language comprehension and generation
but are prone to hallucinations, producing factually incorrect or unsupported
outputs. Retrieval Augmented Generation (RAG) systems address this issue by
grounding LLM responses with external knowledge. This study evaluates the
relationship between retriever effectiveness and hallucination reduction in
LLMs using three retrieval approaches: sparse retrieval based on BM25 keyword
search, dense retrieval using semantic search with Sentence Transformers, and a
proposed hybrid retrieval module. The hybrid module incorporates query
expansion and combines the results of sparse and dense retrievers through a
dynamically weighted Reciprocal Rank Fusion score. Using the HaluBench dataset,
a benchmark for hallucinations in question answering tasks, we assess retrieval
performance with metrics such as mean average precision and normalised
discounted cumulative gain, focusing on the relevance of the top three
retrieved documents. Results show that the hybrid retriever achieves better
relevance scores, outperforming both sparse and dense retrievers. Further
evaluation of LLM-generated answers against ground truth using metrics such as
accuracy, hallucination rate, and rejection rate reveals that the hybrid
retriever achieves the highest accuracy on fails, the lowest hallucination
rate, and the lowest rejection rate. These findings highlight the hybrid
retriever's ability to enhance retrieval relevance, reduce hallucination rates,
and improve LLM reliability, emphasising the importance of advanced retrieval
techniques in mitigating hallucinations and improving response accuracy.
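
To make the fusion step concrete, below is a minimal Python sketch of a
weighted Reciprocal Rank Fusion over a sparse (BM25) and a dense ranking; the
weight alpha, the constant k=60, and the helper name are illustrative
assumptions, not the paper's exact formulation.

# Hypothetical sketch of weighted Reciprocal Rank Fusion (RRF), assuming each
# retriever returns document IDs ranked best-first.
def weighted_rrf(sparse_ranking, dense_ranking, alpha=0.5, k=60):
    """Fuse two rankings; alpha weights the sparse (BM25) contribution."""
    scores = {}
    for rank, doc_id in enumerate(sparse_ranking, start=1):
        scores[doc_id] = scores.get(doc_id, 0.0) + alpha / (k + rank)
    for rank, doc_id in enumerate(dense_ranking, start=1):
        scores[doc_id] = scores.get(doc_id, 0.0) + (1.0 - alpha) / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

fused = weighted_rrf(["d3", "d1", "d7"], ["d1", "d9", "d3"], alpha=0.6)

In a dynamically weighted variant, alpha would be chosen per query, for
example from query statistics, rather than fixed as here.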
|
2504.05345 | Wei Ni | Wei Ni, Kaihang Zhang, Xiaoye Miao, Xiangyu Zhao, Yangyang Wu, Yaoshu
Wang, Jianwei Yin | ZeroED: Hybrid Zero-shot Error Detection through Large Language Model
Reasoning | 12 pages | null | null | null | cs.LG cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Error detection (ED) in tabular data is crucial yet challenging due to
diverse error types and the need for contextual understanding. Traditional ED
methods often rely heavily on manual criteria and labels, making them
labor-intensive. Large language models (LLM) can minimize human effort but
struggle with errors requiring a comprehensive understanding of data context.
In this paper, we propose ZeroED, a novel hybrid zero-shot error detection
framework, which combines LLM reasoning ability with the manual label-based ED
pipeline. ZeroED operates in four steps, i.e., feature representation, error
labeling, training data construction, and detector training. Initially, to
enhance error distinction, ZeroED generates rich data representations using
error reason-aware binary features, pre-trained embeddings, and statistical
features. Then, ZeroED employs LLM to label errors holistically through
in-context learning, guided by a two-step reasoning process for detailed error
detection guidelines. To reduce token costs, LLMs are applied only to
representative data selected via clustering-based sampling. High-quality
training data is constructed through in-cluster label propagation and LLM
augmentation with verification. Finally, a classifier is trained to detect all
errors. Extensive experiments on seven public datasets demonstrate that ZeroED
substantially outperforms state-of-the-art methods, with up to a 30% improvement
in F1 score and up to 90% token cost reduction.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 10:28:41 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Ni",
"Wei",
""
],
[
"Zhang",
"Kaihang",
""
],
[
"Miao",
"Xiaoye",
""
],
[
"Zhao",
"Xiangyu",
""
],
[
"Wu",
"Yangyang",
""
],
[
"Wang",
"Yaoshu",
""
],
[
"Yin",
"Jianwei",
""
]
] | TITLE: ZeroED: Hybrid Zero-shot Error Detection through Large Language Model
Reasoning
ABSTRACT: Error detection (ED) in tabular data is crucial yet challenging due to
diverse error types and the need for contextual understanding. Traditional ED
methods often rely heavily on manual criteria and labels, making them
labor-intensive. Large language models (LLM) can minimize human effort but
struggle with errors requiring a comprehensive understanding of data context.
In this paper, we propose ZeroED, a novel hybrid zero-shot error detection
framework, which combines LLM reasoning ability with the manual label-based ED
pipeline. ZeroED operates in four steps, i.e., feature representation, error
labeling, training data construction, and detector training. Initially, to
enhance error distinction, ZeroED generates rich data representations using
error reason-aware binary features, pre-trained embeddings, and statistical
features. Then, ZeroED employs LLM to label errors holistically through
in-context learning, guided by a two-step reasoning process for detailed error
detection guidelines. To reduce token costs, LLMs are applied only to
representative data selected via clustering-based sampling. High-quality
training data is constructed through in-cluster label propagation and LLM
augmentation with verification. Finally, a classifier is trained to detect all
errors. Extensive experiments on seven public datasets demonstrate that ZeroED
substantially outperforms state-of-the-art methods, with up to a 30% improvement
in F1 score and up to 90% token cost reduction.
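
As a rough illustration of the clustering-based sampling step, the sketch
below (our own, not the authors' code) picks one representative row per
k-means cluster for LLM labeling; the cluster count and feature dimensions are
arbitrary.

import numpy as np
from sklearn.cluster import KMeans

def select_representatives(features, n_clusters=10):
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(features)
    reps = []
    for c in range(n_clusters):
        idx = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(features[idx] - km.cluster_centers_[c], axis=1)
        reps.append(idx[np.argmin(dists)])  # row closest to its centroid
    return reps, km.labels_

features = np.random.rand(200, 32)  # stand-in for the rich representations
reps, labels = select_representatives(features)
# An LLM would label only `reps`; labels then propagate within each cluster.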
|
2504.05356 | HongKuo Niu | Yunxiang Liu, Hongkuo Niu | DyTTP: Trajectory Prediction with Normalization-Free Transformers | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate trajectory prediction is a cornerstone for the safe operation of
autonomous driving systems, where understanding the dynamic behavior of
surrounding agents is crucial. Transformer-based architectures have
demonstrated significant promise in capturing complex spatio-temporal
dependencies. However, their reliance on normalization layers can lead to
computational overhead and training instabilities. In this work, we present a
two-fold approach to address these challenges. First, we integrate DynamicTanh
(DyT), a recently proposed normalization-free technique for transformers, into
the backbone, replacing traditional layer normalization. This modification
simplifies the network architecture and improves inference stability. To our
knowledge, ours is the first work to apply DyT to the trajectory prediction
task. Complementing
this, we employ a snapshot ensemble strategy to further boost trajectory
prediction performance. Using cyclical learning rate scheduling, multiple model
snapshots are captured during a single training run. These snapshots are then
aggregated via simple averaging at inference time, allowing the model to
benefit from diverse hypotheses without incurring substantial additional
computational cost. Extensive experiments on Argoverse datasets demonstrate
that our combined approach significantly improves prediction accuracy,
inference speed and robustness in diverse driving scenarios. This work
underscores the potential of normalization-free transformer designs augmented
with lightweight ensemble techniques in advancing trajectory forecasting for
autonomous vehicles.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 09:26:25 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Liu",
"Yunxiang",
""
],
[
"Niu",
"Hongkuo",
""
]
] | TITLE: DyTTP: Trajectory Prediction with Normalization-Free Transformers
ABSTRACT: Accurate trajectory prediction is a cornerstone for the safe operation of
autonomous driving systems, where understanding the dynamic behavior of
surrounding agents is crucial. Transformer-based architectures have
demonstrated significant promise in capturing complex spatio-temporal
dependencies. However, their reliance on normalization layers can lead to
computational overhead and training instabilities. In this work, we present a
two-fold approach to address these challenges. First, we integrate DynamicTanh
(DyT), a recently proposed normalization-free technique for transformers, into
the backbone, replacing traditional layer normalization. This modification
simplifies the network architecture and improves inference stability. To our
knowledge, ours is the first work to apply DyT to the trajectory prediction
task. Complementing
this, we employ a snapshot ensemble strategy to further boost trajectory
prediction performance. Using cyclical learning rate scheduling, multiple model
snapshots are captured during a single training run. These snapshots are then
aggregated via simple averaging at inference time, allowing the model to
benefit from diverse hypotheses without incurring substantial additional
computational cost. Extensive experiments on Argoverse datasets demonstrate
that our combined approach significantly improves prediction accuracy,
inference speed and robustness in diverse driving scenarios. This work
underscores the potential of normalization-free transformer designs augmented
with lightweight ensemble techniques in advancing trajectory forecasting for
autonomous vehicles.
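
For readers unfamiliar with DyT, a minimal PyTorch sketch of DynamicTanh as a
drop-in replacement for LayerNorm follows, using the published form
y = weight * tanh(alpha * x) + bias; the initial alpha value here is an
assumption.

import torch
import torch.nn as nn

class DynamicTanh(nn.Module):
    def __init__(self, dim, alpha_init=0.5):
        super().__init__()
        self.alpha = nn.Parameter(torch.tensor(alpha_init))  # learnable scalar
        self.weight = nn.Parameter(torch.ones(dim))          # per-channel scale
        self.bias = nn.Parameter(torch.zeros(dim))           # per-channel shift
    def forward(self, x):
        return self.weight * torch.tanh(self.alpha * x) + self.bias

block_norm = DynamicTanh(dim=256)  # used where nn.LayerNorm(256) would go
out = block_norm(torch.randn(8, 10, 256))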
|
2504.05357 | Junghun Oh | Junghun Oh, Sungyong Baik, Kyoung Mu Lee | Find A Winning Sign: Sign Is All We Need to Win the Lottery | Accepted at ICLR2025 | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The Lottery Ticket Hypothesis (LTH) posits the existence of a sparse
subnetwork (a.k.a. winning ticket) that can generalize comparably to its
over-parameterized counterpart when trained from scratch. The common approach
to finding a winning ticket is to preserve the original strong generalization
through Iterative Pruning (IP) and transfer information useful for achieving
the learned generalization by applying the resulting sparse mask to an
untrained network. However, existing IP methods still struggle to generalize
their observations beyond ad-hoc initialization and small-scale architectures
or datasets, or they bypass these challenges by applying their mask to trained
weights instead of initialized ones. In this paper, we demonstrate that the
parameter sign configuration plays a crucial role in conveying useful
information for generalization to any randomly initialized network. Through
linear mode connectivity analysis, we observe that a sparse network trained by
an existing IP method can retain its basin of attraction if its parameter signs
and normalization layer parameters are preserved. To take a step closer to
finding a winning ticket, we alleviate the reliance on normalization layer
parameters by preventing high error barriers along the linear path between the
sparse network trained by our method and its counterpart with initialized
normalization layer parameters. Interestingly, across various architectures and
datasets, we observe that any randomly initialized network can be optimized to
exhibit low error barriers along the linear path to the sparse network trained
by our method by inheriting its sparsity and parameter sign information,
potentially achieving performance comparable to the original. The code is
available at https://github.com/JungHunOh/AWS_ICLR2025.git
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 09:30:38 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Oh",
"Junghun",
""
],
[
"Baik",
"Sungyong",
""
],
[
"Lee",
"Kyoung Mu",
""
]
] | TITLE: Find A Winning Sign: Sign Is All We Need to Win the Lottery
ABSTRACT: The Lottery Ticket Hypothesis (LTH) posits the existence of a sparse
subnetwork (a.k.a. winning ticket) that can generalize comparably to its
over-parameterized counterpart when trained from scratch. The common approach
to finding a winning ticket is to preserve the original strong generalization
through Iterative Pruning (IP) and transfer information useful for achieving
the learned generalization by applying the resulting sparse mask to an
untrained network. However, existing IP methods still struggle to generalize
their observations beyond ad-hoc initialization and small-scale architectures
or datasets, or they bypass these challenges by applying their mask to trained
weights instead of initialized ones. In this paper, we demonstrate that the
parameter sign configuration plays a crucial role in conveying useful
information for generalization to any randomly initialized network. Through
linear mode connectivity analysis, we observe that a sparse network trained by
an existing IP method can retain its basin of attraction if its parameter signs
and normalization layer parameters are preserved. To take a step closer to
finding a winning ticket, we alleviate the reliance on normalization layer
parameters by preventing high error barriers along the linear path between the
sparse network trained by our method and its counterpart with initialized
normalization layer parameters. Interestingly, across various architectures and
datasets, we observe that any randomly initialized network can be optimized to
exhibit low error barriers along the linear path to the sparse network trained
by our method by inheriting its sparsity and parameter sign information,
potentially achieving performance comparable to the original. The code is
available at https://github.com/JungHunOh/AWS_ICLR2025.git
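
A hedged sketch of the core transfer step, inheriting sparsity and parameter
signs from a trained sparse network into a random initialization (function and
variable names are ours, not the paper's):

import torch

def inherit_sign_and_sparsity(random_weight, trained_weight, mask):
    """Keep random magnitudes but adopt trained signs, under the sparse mask."""
    return mask * torch.sign(trained_weight) * random_weight.abs()

w_rand = torch.randn(128, 128)
w_trained = torch.randn(128, 128)            # stand-in for an IP-trained layer
mask = (torch.rand(128, 128) > 0.8).float()  # stand-in sparse mask
w_init = inherit_sign_and_sparsity(w_rand, w_trained, mask)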
|
2504.05358 | Xi Chen | Xi Chen, Mao Mao, Shuo Li, Haotian Shangguan | Debate-Feedback: A Multi-Agent Framework for Efficient Legal Judgment
Prediction | null | null | null | null | cs.MA cs.AI | http://creativecommons.org/licenses/by/4.0/ | The use of AI in legal analysis and prediction (LegalAI) has gained
widespread attention, with past research focusing on retrieval-based methods
and fine-tuning large models. However, these approaches often require large
datasets and underutilize the capabilities of modern large language models
(LLMs). In this paper, inspired by the debate phase of real courtroom trials,
we propose a novel legal judgment prediction model based on the Debate-Feedback
architecture, which integrates LLM multi-agent debate and reliability
evaluation models. Unlike traditional methods, our model achieves significant
improvements in efficiency by minimizing the need for large historical
datasets, thus offering a lightweight yet robust solution. Comparative
experiments show that it outperforms several general-purpose and
domain-specific legal models, offering a dynamic reasoning process and a
promising direction for future LegalAI research.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 09:34:14 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Chen",
"Xi",
""
],
[
"Mao",
"Mao",
""
],
[
"Li",
"Shuo",
""
],
[
"Shangguan",
"Haotian",
""
]
] | TITLE: Debate-Feedback: A Multi-Agent Framework for Efficient Legal Judgment
Prediction
ABSTRACT: The use of AI in legal analysis and prediction (LegalAI) has gained
widespread attention, with past research focusing on retrieval-based methods
and fine-tuning large models. However, these approaches often require large
datasets and underutilize the capabilities of modern large language models
(LLMs). In this paper, inspired by the debate phase of real courtroom trials,
we propose a novel legal judgment prediction model based on the Debate-Feedback
architecture, which integrates LLM multi-agent debate and reliability
evaluation models. Unlike traditional methods, our model achieves significant
improvements in efficiency by minimizing the need for large historical
datasets, thus offering a lightweight yet robust solution. Comparative
experiments show that it outperforms several general-purpose and
domain-specific legal models, offering a dynamic reasoning process and a
promising direction for future LegalAI research.
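
A purely illustrative skeleton of such a debate-then-feedback loop is sketched
below; call_llm, the prompts, and the round count are placeholders rather than
the authors' implementation.

def call_llm(prompt):
    raise NotImplementedError("plug in an LLM client here")

def debate_judgment(case_facts, n_rounds=2):
    prosecution, defense = "", ""
    for _ in range(n_rounds):
        prosecution = call_llm(
            f"Argue for liability.\nFacts: {case_facts}\nOpposing: {defense}")
        defense = call_llm(
            f"Argue against liability.\nFacts: {case_facts}\nOpposing: {prosecution}")
    verdict = call_llm(
        f"As judge, weigh both sides and predict the judgment.\n{prosecution}\n{defense}")
    reliability = call_llm(f"Rate the verdict's reliability from 0 to 1.\n{verdict}")
    return verdict, reliability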
|
2504.05366 | Giacomo Lancia | G. Lancia, D. Falanga, S. Alam, and G. Lulli | Handling Weather Uncertainty in Air Traffic Prediction through an
Inverse Approach | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Adverse weather conditions, particularly convective phenomena, pose
significant challenges to Air Traffic Management, often requiring real-time
rerouting decisions that impact efficiency and safety. This study introduces a
3-D Gaussian Mixture Model to predict long lead-time flight trajectory changes,
incorporating comprehensive weather and traffic data. Utilizing high-resolution
meteorological datasets, including convective weather maps and wind data,
alongside traffic records, the model demonstrates robust performance in
forecasting reroutes up to 60 minutes. The novel 3-D Gaussian Mixture Model
framework employs a probabilistic approach to capture uncertainty while
providing accurate forecasts of altitude, latitude, and longitude. Extensive
evaluation revealed a Mean Absolute Percentage Error below 0.02 across varying
lead times, highlighting the model's accuracy and scalability. By integrating
explainability techniques such as the Vanilla Gradient algorithm, the study
provides insights into feature contributions, informing Air Traffic Management
strategies that mitigate weather-induced disruptions.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 15:42:09 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Lancia",
"G.",
""
],
[
"Falanga",
"D.",
""
],
[
"Alam",
"S.",
""
],
[
"Lulli",
"G.",
""
]
] | TITLE: Handling Weather Uncertainty in Air Traffic Prediction through an
Inverse Approach
ABSTRACT: Adverse weather conditions, particularly convective phenomena, pose
significant challenges to Air Traffic Management, often requiring real-time
rerouting decisions that impact efficiency and safety. This study introduces a
3-D Gaussian Mixture Model to predict long lead-time flight trajectory changes,
incorporating comprehensive weather and traffic data. Utilizing high-resolution
meteorological datasets, including convective weather maps and wind data,
alongside traffic records, the model demonstrates robust performance in
forecasting reroutes up to 60 minutes. The novel 3-D Gaussian Mixture Model
framework employs a probabilistic approach to capture uncertainty while
providing accurate forecasts of altitude, latitude, and longitude. Extensive
evaluation revealed a Mean Absolute Percentage Error below 0.02 across varying
lead times, highlighting the model's accuracy and scalability. By integrating
explainability techniques such as the Vanilla Gradient algorithm, the study
provides insights into feature contributions, informing Air Traffic Management
strategies that mitigate weather-induced disruptions.
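
As a simplified stand-in for the proposed model, the following sketch fits a
3-D Gaussian mixture over (altitude, latitude, longitude) samples with
scikit-learn; the component count and random inputs are assumptions.

import numpy as np
from sklearn.mixture import GaussianMixture

points = np.random.rand(1000, 3)  # stand-in (altitude, lat, lon) samples
gmm = GaussianMixture(n_components=5, covariance_type="full", random_state=0)
gmm.fit(points)

probs = gmm.predict_proba(points[:3])            # uncertainty over components
mean_pred = gmm.means_[gmm.predict(points[:3])]  # point forecast per sample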
|
2504.05368 | Sneha Das | Maja J. Hjuler and Line H. Clemmensen and Sneha Das | Exploring Local Interpretable Model-Agnostic Explanations for Speech
Emotion Recognition with Distribution-Shift | Published in the proceedings of ICASSP 2025 | null | null | null | cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | We introduce EmoLIME, a version of local interpretable model-agnostic
explanations (LIME) for black-box Speech Emotion Recognition (SER) models. To
the best of our knowledge, this is the first attempt to apply LIME in SER.
EmoLIME generates high-level interpretable explanations and identifies which
specific frequency ranges are most influential in determining emotional states.
The approach aids in interpreting complex, high-dimensional embeddings such as
those generated by end-to-end speech models. We evaluate EmoLIME,
qualitatively, quantitatively, and statistically, across three emotional speech
datasets, using classifiers trained on both hand-crafted acoustic features and
Wav2Vec 2.0 embeddings. We find that EmoLIME exhibits stronger robustness
across different models than across datasets with distribution shifts,
highlighting its potential for more consistent explanations in SER tasks within
a dataset.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 17:38:21 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Hjuler",
"Maja J.",
""
],
[
"Clemmensen",
"Line H.",
""
],
[
"Das",
"Sneha",
""
]
] | TITLE: Exploring Local Interpretable Model-Agnostic Explanations for Speech
Emotion Recognition with Distribution-Shift
ABSTRACT: We introduce EmoLIME, a version of local interpretable model-agnostic
explanations (LIME) for black-box Speech Emotion Recognition (SER) models. To
the best of our knowledge, this is the first attempt to apply LIME in SER.
EmoLIME generates high-level interpretable explanations and identifies which
specific frequency ranges are most influential in determining emotional states.
The approach aids in interpreting complex, high-dimensional embeddings such as
those generated by end-to-end speech models. We evaluate EmoLIME,
qualitatively, quantitatively, and statistically, across three emotional speech
datasets, using classifiers trained on both hand-crafted acoustic features and
Wav2Vec 2.0 embeddings. We find that EmoLIME exhibits stronger robustness
across different models than across datasets with distribution shifts,
highlighting its potential for more consistent explanations in SER tasks within
a dataset.
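
A conceptual sketch of the band-level LIME idea follows: perturb a spectrogram
by silencing frequency bands and fit a linear surrogate to the classifier's
responses. The band split, sample count, and the toy classifier are our
assumptions, not EmoLIME's exact procedure.

import numpy as np
from sklearn.linear_model import Ridge

def explain_bands(spec, predict_emotion, n_bands=8, n_samples=200,
                  rng=np.random.default_rng(0)):
    bands = np.array_split(np.arange(spec.shape[0]), n_bands)  # rows = freq bins
    Z = rng.integers(0, 2, size=(n_samples, n_bands))  # random band on/off masks
    preds = []
    for z in Z:
        pert = spec.copy()
        for b, keep in zip(bands, z):
            if not keep:
                pert[b, :] = 0.0                       # silence the band
        preds.append(predict_emotion(pert))
    surrogate = Ridge(alpha=1.0).fit(Z, preds)
    return surrogate.coef_                             # per-band influence

spec = np.abs(np.random.default_rng(1).normal(size=(64, 100)))  # toy spectrogram
coefs = explain_bands(spec, lambda s: float(s.mean()))          # toy "classifier"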
|
2504.05370 | Xueqiao Zhang | Xueqiao Zhang and Chao Zhang and Jianwen Sun and Jun Xiao and Yi Yang
and Yawei Luo | EduPlanner: LLM-Based Multi-Agent Systems for Customized and Intelligent
Instructional Design | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have significantly advanced smart education in
the Artificial General Intelligence (AGI) era. A promising application lies in
the automatic generation of instructional design for curriculum and
learning activities, focusing on two key aspects: (1) Customized Generation:
generating niche-targeted teaching content based on students' varying learning
abilities and states, and (2) Intelligent Optimization: iteratively optimizing
content based on feedback from learning effectiveness or test scores.
Currently, a single large LLM cannot effectively manage the entire process,
posing a challenge for designing intelligent teaching plans. To address these
issues, we developed EduPlanner, an LLM-based multi-agent system comprising an
evaluator agent, an optimizer agent, and a question analyst, working in
adversarial collaboration to generate customized and intelligent instructional
design for curriculum and learning activities. Taking mathematics lessons as
our example, EduPlanner employs a novel Skill-Tree structure to accurately
model the background mathematics knowledge of student groups, personalizing
instructional design for curriculum and learning activities according to
students' knowledge levels and learning abilities. Additionally, we introduce
the CIDDP, an LLM-based five-dimensional evaluation module encompassing
Clarity, Integrity, Depth, Practicality, and Pertinence, to comprehensively
assess mathematics lesson plan quality and bootstrap intelligent optimization.
Experiments conducted on the GSM8K and Algebra datasets demonstrate that
EduPlanner excels in evaluating and optimizing instructional design for
curriculum and learning activities. Ablation studies further validate the
significance and effectiveness of each component within the framework. Our code
is publicly available at https://github.com/Zc0812/Edu_Planner
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 17:49:12 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Zhang",
"Xueqiao",
""
],
[
"Zhang",
"Chao",
""
],
[
"Sun",
"Jianwen",
""
],
[
"Xiao",
"Jun",
""
],
[
"Yang",
"Yi",
""
],
[
"Luo",
"Yawei",
""
]
] | TITLE: EduPlanner: LLM-Based Multi-Agent Systems for Customized and Intelligent
Instructional Design
ABSTRACT: Large Language Models (LLMs) have significantly advanced smart education in
the Artificial General Intelligence (AGI) era. A promising application lies in
the automatic generation of instructional design for curriculum and
learning activities, focusing on two key aspects: (1) Customized Generation:
generating niche-targeted teaching content based on students' varying learning
abilities and states, and (2) Intelligent Optimization: iteratively optimizing
content based on feedback from learning effectiveness or test scores.
Currently, a single large LLM cannot effectively manage the entire process,
posing a challenge for designing intelligent teaching plans. To address these
issues, we developed EduPlanner, an LLM-based multi-agent system comprising an
evaluator agent, an optimizer agent, and a question analyst, working in
adversarial collaboration to generate customized and intelligent instructional
design for curriculum and learning activities. Taking mathematics lessons as
our example, EduPlanner employs a novel Skill-Tree structure to accurately
model the background mathematics knowledge of student groups, personalizing
instructional design for curriculum and learning activities according to
students' knowledge levels and learning abilities. Additionally, we introduce
the CIDDP, an LLM-based five-dimensional evaluation module encompassing
Clarity, Integrity, Depth, Practicality, and Pertinence, to comprehensively
assess mathematics lesson plan quality and bootstrap intelligent optimization.
Experiments conducted on the GSM8K and Algebra datasets demonstrate that
EduPlanner excels in evaluating and optimizing instructional design for
curriculum and learning activities. Ablation studies further validate the
significance and effectiveness of each component within the framework. Our code
is publicly available at https://github.com/Zc0812/Edu_Planner
|
2504.05400 | Sihang Li | Sihang Li, Zeyu Jiang, Grace Chen, Chenyang Xu, Siqi Tan, Xue Wang,
Irving Fang, Kristof Zyskowski, Shannon P. McPherron, Radu Iovita, Chen Feng
and Jing Zhang | GARF: Learning Generalizable 3D Reassembly for Real-World Fractures | 15 pages, 11 figures. Project Page https://ai4ce.github.io/GARF/ | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D reassembly is a challenging spatial intelligence task with broad
applications across scientific domains. While large-scale synthetic datasets
have fueled promising learning-based approaches, their generalizability to
different domains is limited. Critically, it remains uncertain whether models
trained on synthetic datasets can generalize to real-world fractures where
breakage patterns are more complex. To bridge this gap, we propose GARF, a
generalizable 3D reassembly framework for real-world fractures. GARF leverages
fracture-aware pretraining to learn fracture features from individual
fragments, with flow matching enabling precise 6-DoF alignments. At inference
time, we introduce one-step preassembly, improving robustness to unseen objects
and varying numbers of fractures. In collaboration with archaeologists,
paleoanthropologists, and ornithologists, we curate Fractura, a diverse dataset
for vision and learning communities, featuring real-world fracture types across
ceramics, bones, eggshells, and lithics. Comprehensive experiments have shown
our approach consistently outperforms state-of-the-art methods on both
synthetic and real-world datasets, achieving 82.87% lower rotation error and
25.15% higher part accuracy. This sheds light on training on synthetic data to
advance real-world 3D puzzle solving, demonstrating its strong generalization
across unseen object shapes and diverse fracture types.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 18:13:16 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Li",
"Sihang",
""
],
[
"Jiang",
"Zeyu",
""
],
[
"Chen",
"Grace",
""
],
[
"Xu",
"Chenyang",
""
],
[
"Tan",
"Siqi",
""
],
[
"Wang",
"Xue",
""
],
[
"Fang",
"Irving",
""
],
[
"Zyskowski",
"Kristof",
""
],
[
"McPherron",
"Shannon P.",
""
],
[
"Iovita",
"Radu",
""
],
[
"Feng",
"Chen",
""
],
[
"Zhang",
"Jing",
""
]
] | TITLE: GARF: Learning Generalizable 3D Reassembly for Real-World Fractures
ABSTRACT: 3D reassembly is a challenging spatial intelligence task with broad
applications across scientific domains. While large-scale synthetic datasets
have fueled promising learning-based approaches, their generalizability to
different domains is limited. Critically, it remains uncertain whether models
trained on synthetic datasets can generalize to real-world fractures where
breakage patterns are more complex. To bridge this gap, we propose GARF, a
generalizable 3D reassembly framework for real-world fractures. GARF leverages
fracture-aware pretraining to learn fracture features from individual
fragments, with flow matching enabling precise 6-DoF alignments. At inference
time, we introduce one-step preassembly, improving robustness to unseen objects
and varying numbers of fractures. In collaboration with archaeologists,
paleoanthropologists, and ornithologists, we curate Fractura, a diverse dataset
for vision and learning communities, featuring real-world fracture types across
ceramics, bones, eggshells, and lithics. Comprehensive experiments have shown
our approach consistently outperforms state-of-the-art methods on both
synthetic and real-world datasets, achieving 82.87% lower rotation error and
25.15% higher part accuracy. This sheds light on training on synthetic data to
advance real-world 3D puzzle solving, demonstrating its strong generalization
across unseen object shapes and diverse fracture types.
|
2504.05407 | Yazan Youssef | Yazan Youssef, Paulo Ricardo Marques de Araujo, Aboelmagd Noureldin,
and Sidney Givigi | TRATSS: Transformer-Based Task Scheduling System for Autonomous Vehicles | 9 pages | null | null | null | cs.RO cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Efficient scheduling remains a critical challenge in various domains,
requiring solutions to complex NP-hard optimization problems to achieve optimal
resource allocation and maximize productivity. In this paper, we introduce a
framework called Transformer-Based Task Scheduling System (TRATSS), designed to
address the intricacies of single-agent scheduling in graph-based environments.
By integrating the latest advancements in reinforcement learning and
transformer architecture, TRATSS provides a novel system that outputs optimized
task scheduling decisions while dynamically adapting to evolving task
requirements and resource availability. Leveraging the self-attention mechanism
in transformers, TRATSS effectively captures complex task dependencies, thereby
providing solutions with enhanced resource utilization and task completion
efficiency. Experimental evaluations on benchmark datasets demonstrate TRATSS's
effectiveness in providing high-quality solutions to scheduling problems that
involve multiple action profiles.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 18:23:13 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Youssef",
"Yazan",
""
],
[
"de Araujo",
"Paulo Ricardo Marques",
""
],
[
"Noureldin",
"Aboelmagd",
""
],
[
"Givigi",
"Sidney",
""
]
] | TITLE: TRATSS: Transformer-Based Task Scheduling System for Autonomous Vehicles
ABSTRACT: Efficient scheduling remains a critical challenge in various domains,
requiring solutions to complex NP-hard optimization problems to achieve optimal
resource allocation and maximize productivity. In this paper, we introduce a
framework called Transformer-Based Task Scheduling System (TRATSS), designed to
address the intricacies of single-agent scheduling in graph-based environments.
By integrating the latest advancements in reinforcement learning and
transformer architecture, TRATSS provides a novel system that outputs optimized
task scheduling decisions while dynamically adapting to evolving task
requirements and resource availability. Leveraging the self-attention mechanism
in transformers, TRATSS effectively captures complex task dependencies, thereby
providing solutions with enhanced resource utilization and task completion
efficiency. Experimental evaluations on benchmark datasets demonstrate TRATSS's
effectiveness in providing high-quality solutions to scheduling problems that
involve multiple action profiles.
|
2504.05411 | Lingzhi Shen | Lingzhi Shen, Yunfei Long, Xiaohao Cai, Guanming Chen, Imran Razzak,
Shoaib Jameel | Less but Better: Parameter-Efficient Fine-Tuning of Large Language
Models for Personality Detection | null | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Personality detection automatically identifies an individual's personality
from various data sources, such as social media texts. However, as the
parameter scale of language models continues to grow, the computational cost
becomes increasingly difficult to manage. Fine-tuning also grows more complex,
making it harder to justify the effort and reliably predict outcomes. We
introduce a novel parameter-efficient fine-tuning framework, PersLLM, to
address these challenges. In PersLLM, a large language model (LLM) extracts
high-dimensional representations from raw data and stores them in a dynamic
memory layer. PersLLM then updates the downstream layers with a replaceable
output network, enabling flexible adaptation to various personality detection
scenarios. By storing the features in the memory layer, we eliminate the need
for repeated complex computations by the LLM. Meanwhile, the lightweight output
network serves as a proxy for evaluating the overall effectiveness of the
framework, improving the predictability of results. Experimental results on key
benchmark datasets like Kaggle and Pandora show that PersLLM significantly
reduces computational cost while maintaining competitive performance and strong
adaptability.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 18:30:39 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Shen",
"Lingzhi",
""
],
[
"Long",
"Yunfei",
""
],
[
"Cai",
"Xiaohao",
""
],
[
"Chen",
"Guanming",
""
],
[
"Razzak",
"Imran",
""
],
[
"Jameel",
"Shoaib",
""
]
] | TITLE: Less but Better: Parameter-Efficient Fine-Tuning of Large Language
Models for Personality Detection
ABSTRACT: Personality detection automatically identifies an individual's personality
from various data sources, such as social media texts. However, as the
parameter scale of language models continues to grow, the computational cost
becomes increasingly difficult to manage. Fine-tuning also grows more complex,
making it harder to justify the effort and reliably predict outcomes. We
introduce a novel parameter-efficient fine-tuning framework, PersLLM, to
address these challenges. In PersLLM, a large language model (LLM) extracts
high-dimensional representations from raw data and stores them in a dynamic
memory layer. PersLLM then updates the downstream layers with a replaceable
output network, enabling flexible adaptation to various personality detection
scenarios. By storing the features in the memory layer, we eliminate the need
for repeated complex computations by the LLM. Meanwhile, the lightweight output
network serves as a proxy for evaluating the overall effectiveness of the
framework, improving the predictability of results. Experimental results on key
benchmark datasets like Kaggle and Pandora show that PersLLM significantly
reduces computational cost while maintaining competitive performance and strong
adaptability.
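
A hedged PyTorch sketch of the overall pattern, freezing the LLM, caching its
embeddings in a memory store, and training only a small replaceable head, is
given below; all dimensions and names are assumptions.

import torch
import torch.nn as nn

class OutputHead(nn.Module):
    def __init__(self, dim, n_traits):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 128), nn.ReLU(),
                                 nn.Linear(128, n_traits))
    def forward(self, h):
        return self.net(h)

memory = {}  # post id -> cached LLM embedding (the "memory layer")
def embed_once(post_id, text, llm_encode):
    if post_id not in memory:  # the LLM runs at most once per post
        with torch.no_grad():
            memory[post_id] = llm_encode(text)
    return memory[post_id]

head = OutputHead(dim=4096, n_traits=4)  # e.g., personality axes; dims assumed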
|
2504.05418 | Sara Silva | Rui Menoita, Sara Silva | Evolving Financial Trading Strategies with Vectorial Genetic Programming | 9 pages, 6 figures | null | null | null | cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Establishing profitable trading strategies in financial markets is a
challenging task. While traditional methods like technical analysis have long
served as foundational tools for traders to recognize and act upon market
patterns, the evolving landscape has called for more advanced techniques. We
explore the use of Vectorial Genetic Programming (VGP) for this task,
introducing two new variants of VGP, one that allows operations with complex
numbers and another that implements a strongly-typed version of VGP. We
evaluate the different variants on three financial instruments, with datasets
spanning more than seven years. Despite the inherent difficulty of this task,
it was possible to evolve profitable trading strategies. A comparative analysis
of the three VGP variants and standard GP revealed that standard GP is always
among the worst whereas strongly-typed VGP is always among the best.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 18:41:31 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Menoita",
"Rui",
""
],
[
"Silva",
"Sara",
""
]
] | TITLE: Evolving Financial Trading Strategies with Vectorial Genetic Programming
ABSTRACT: Establishing profitable trading strategies in financial markets is a
challenging task. While traditional methods like technical analysis have long
served as foundational tools for traders to recognize and act upon market
patterns, the evolving landscape has called for more advanced techniques. We
explore the use of Vectorial Genetic Programming (VGP) for this task,
introducing two new variants of VGP, one that allows operations with complex
numbers and another that implements a strongly-typed version of VGP. We
evaluate the different variants on three financial instruments, with datasets
spanning more than seven years. Despite the inherent difficulty of this task,
it was possible to evolve profitable trading strategies. A comparative analysis
of the three VGP variants and standard GP revealed that standard GP is always
among the worst whereas strongly-typed VGP is always among the best.
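
To give a flavor of vector-valued primitives in such a setting, here is a toy
sketch (not the paper's implementation) of a moving-average-crossover node
operating on whole price vectors, plus a complex-valued variant.

import numpy as np

prices = np.cumsum(np.random.default_rng(0).normal(0, 1, 500)) + 100

def sma(x, w):  # vector-in, vector-out primitive, as in vectorial GP
    return np.convolve(x, np.ones(w) / w, mode="same")

signal = np.where(sma(prices, 5) > sma(prices, 20), 1, -1)  # long/short signal
z = prices + 1j * sma(prices, 5)  # complex-number variant pairing price and trend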
|
2504.05420 | Ori Ernst | Steven Koniaev, Ori Ernst, and Jackie Chi Kit Cheung | PreSumm: Predicting Summarization Performance Without Summarizing | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite recent advancements in automatic summarization, state-of-the-art
models do not summarize all documents equally well, raising the question: why?
While prior research has extensively analyzed summarization models, little
attention has been given to the role of document characteristics in influencing
summarization performance. In this work, we explore two key research questions.
First, do documents exhibit consistent summarization quality across multiple
systems? If so, can we predict a document's summarization performance without
generating a summary? We answer both questions affirmatively and introduce
PreSumm, a novel task in which a system predicts summarization performance
based solely on the source document. Our analysis sheds light on common
properties of documents with low PreSumm scores, revealing that they often
suffer from coherence issues, complex content, or a lack of a clear main theme.
In addition, we demonstrate PreSumm's practical utility in two key
applications: improving hybrid summarization workflows by identifying documents
that require manual summarization and enhancing dataset quality by filtering
outliers and noisy documents. Overall, our findings highlight the critical role
of document properties in summarization performance and offer insights into the
limitations of current systems that could serve as the basis for future
improvements.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 18:43:00 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Koniaev",
"Steven",
""
],
[
"Ernst",
"Ori",
""
],
[
"Cheung",
"Jackie Chi Kit",
""
]
] | TITLE: PreSumm: Predicting Summarization Performance Without Summarizing
ABSTRACT: Despite recent advancements in automatic summarization, state-of-the-art
models do not summarize all documents equally well, raising the question: why?
While prior research has extensively analyzed summarization models, little
attention has been given to the role of document characteristics in influencing
summarization performance. In this work, we explore two key research questions.
First, do documents exhibit consistent summarization quality across multiple
systems? If so, can we predict a document's summarization performance without
generating a summary? We answer both questions affirmatively and introduce
PreSumm, a novel task in which a system predicts summarization performance
based solely on the source document. Our analysis sheds light on common
properties of documents with low PreSumm scores, revealing that they often
suffer from coherence issues, complex content, or a lack of a clear main theme.
In addition, we demonstrate PreSumm's practical utility in two key
applications: improving hybrid summarization workflows by identifying documents
that require manual summarization and enhancing dataset quality by filtering
outliers and noisy documents. Overall, our findings highlight the critical role
of document properties in summarization performance and offer insights into the
limitations of current systems that could serve as the basis for future
improvements.
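
A minimal sketch of the PreSumm setup, under the assumption that simple
surface features and a linear model suffice for illustration: regress expected
summary quality from the source document alone.

import numpy as np
from sklearn.linear_model import LinearRegression

def doc_features(text):
    words = text.split()
    sents = [s for s in text.split(".") if s.strip()]
    return [len(words), len(sents), len(set(words)) / max(len(words), 1)]

docs = ["A short document. It has two sentences.",
        "Another example, with one long rambling sentence and no clear theme."]
X = np.array([doc_features(d) for d in docs])
y = np.array([0.42, 0.35])  # stand-in quality scores from past system runs
model = LinearRegression().fit(X, y)
predicted_quality = model.predict(X)  # no summary is ever generated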
|
2504.05422 | Yue Yao | Yue Yao, Mohamed-Khalil Bouzidi, Daniel Goehring, Joerg Reichardt | EP-Diffuser: An Efficient Diffusion Model for Traffic Scene Generation
and Prediction via Polynomial Representations | null | null | null | null | cs.CV cs.LG cs.RO | http://creativecommons.org/licenses/by/4.0/ | As the prediction horizon increases, predicting the future evolution of
traffic scenes becomes increasingly difficult due to the multi-modal nature of
agent motion. Most state-of-the-art (SotA) prediction models primarily focus on
forecasting the most likely future. However, for the safe operation of
autonomous vehicles, it is equally important to cover the distribution of
plausible motion alternatives. To address this, we introduce EP-Diffuser, a
novel parameter-efficient diffusion-based generative model designed to capture
the distribution of possible traffic scene evolutions. Conditioned on road
layout and agent history, our model acts as a predictor and generates diverse,
plausible scene continuations. We benchmark EP-Diffuser against two SotA models
in terms of accuracy and plausibility of predictions on the Argoverse 2
dataset. Despite its significantly smaller model size, our approach achieves
both highly accurate and plausible traffic scene predictions. We further
evaluate model generalization ability in an out-of-distribution (OoD) test
setting using the Waymo Open dataset and show superior robustness of our approach.
The code and model checkpoints can be found here:
https://github.com/continental/EP-Diffuser.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 18:45:49 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Yao",
"Yue",
""
],
[
"Bouzidi",
"Mohamed-Khalil",
""
],
[
"Goehring",
"Daniel",
""
],
[
"Reichardt",
"Joerg",
""
]
] | TITLE: EP-Diffuser: An Efficient Diffusion Model for Traffic Scene Generation
and Prediction via Polynomial Representations
ABSTRACT: As the prediction horizon increases, predicting the future evolution of
traffic scenes becomes increasingly difficult due to the multi-modal nature of
agent motion. Most state-of-the-art (SotA) prediction models primarily focus on
forecasting the most likely future. However, for the safe operation of
autonomous vehicles, it is equally important to cover the distribution of
plausible motion alternatives. To address this, we introduce EP-Diffuser, a
novel parameter-efficient diffusion-based generative model designed to capture
the distribution of possible traffic scene evolutions. Conditioned on road
layout and agent history, our model acts as a predictor and generates diverse,
plausible scene continuations. We benchmark EP-Diffuser against two SotA models
in terms of accuracy and plausibility of predictions on the Argoverse 2
dataset. Despite its significantly smaller model size, our approach achieves
both highly accurate and plausible traffic scene predictions. We further
evaluate model generalization ability in an out-of-distribution (OoD) test
setting using the Waymo Open dataset and show superior robustness of our approach.
The code and model checkpoints can be found here:
https://github.com/continental/EP-Diffuser.
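
To illustrate the polynomial representation in the title, the sketch below
compresses a toy agent track into low-order polynomial coefficients; the
degree and the track itself are assumptions.

import numpy as np

t = np.linspace(0.0, 1.0, 50)                        # normalized timestamps
xy = np.stack([3 * t + 0.5 * t**2, 2 * t], axis=1)   # toy agent track (x, y)
coeffs = np.polyfit(t, xy, deg=4)                    # shape (5, 2): x and y polys
recon_x = np.polyval(coeffs[:, 0], t)                # compact, smooth reconstruction
recon_y = np.polyval(coeffs[:, 1], t)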
|
2504.05424 | Raffi Khatchadourian | Raffi Khatchadourian, Tatiana Castro V\'elez, Mehdi Bagherzadeh, Nan
Jia, Anita Raja | Safe Automated Refactoring for Efficient Migration of Imperative Deep
Learning Programs to Graph Execution | null | null | null | null | cs.SE cs.AI cs.PL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Efficiency is essential to support responsiveness w.r.t. ever-growing
datasets, especially for Deep Learning (DL) systems. DL frameworks have
traditionally embraced deferred execution-style DL code -- supporting symbolic,
graph-based Deep Neural Network (DNN) computation. While scalable, such
development is error-prone, non-intuitive, and difficult to debug.
Consequently, more natural, imperative DL frameworks encouraging eager
execution have emerged at the expense of run-time performance. Though hybrid
approaches aim for the "best of both worlds," using them effectively requires
subtle considerations to make code amenable to safe, accurate, and efficient
graph execution. We present an automated refactoring approach that assists
developers in specifying whether their otherwise eagerly-executed imperative DL
code could be reliably and efficiently executed as graphs while preserving
semantics. The approach, based on a novel imperative tensor analysis,
automatically determines when it is safe and potentially advantageous to
migrate imperative DL code to graph execution. The approach is implemented as a
PyDev Eclipse IDE plug-in that integrates the WALA Ariadne analysis framework
and evaluated on 19 Python projects consisting of 132.05 KLOC. We found that
326 of 766 candidate functions (42.56%) were refactorable, and an average
speedup of 2.16 on performance tests was observed. The results indicate that
the approach is useful in optimizing imperative DL code to its full potential.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 18:48:43 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Khatchadourian",
"Raffi",
""
],
[
"Vélez",
"Tatiana Castro",
""
],
[
"Bagherzadeh",
"Mehdi",
""
],
[
"Jia",
"Nan",
""
],
[
"Raja",
"Anita",
""
]
] | TITLE: Safe Automated Refactoring for Efficient Migration of Imperative Deep
Learning Programs to Graph Execution
ABSTRACT: Efficiency is essential to support responsiveness w.r.t. ever-growing
datasets, especially for Deep Learning (DL) systems. DL frameworks have
traditionally embraced deferred execution-style DL code -- supporting symbolic,
graph-based Deep Neural Network (DNN) computation. While scalable, such
development is error-prone, non-intuitive, and difficult to debug.
Consequently, more natural, imperative DL frameworks encouraging eager
execution have emerged at the expense of run-time performance. Though hybrid
approaches aim for the "best of both worlds," using them effectively requires
subtle considerations to make code amenable to safe, accurate, and efficient
graph execution. We present an automated refactoring approach that assists
developers in specifying whether their otherwise eagerly-executed imperative DL
code could be reliably and efficiently executed as graphs while preserving
semantics. The approach, based on a novel imperative tensor analysis,
automatically determines when it is safe and potentially advantageous to
migrate imperative DL code to graph execution. The approach is implemented as a
PyDev Eclipse IDE plug-in that integrates the WALA Ariadne analysis framework
and evaluated on 19 Python projects consisting of 132.05 KLOC. We found that
326 of 766 candidate functions (42.56%) were refactorable, and an average
speedup of 2.16 on performance tests was observed. The results indicate that
the approach is useful in optimizing imperative DL code to its full potential.
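
The migration the refactoring targets can be pictured with TensorFlow's hybrid
API: decorating an eager function with tf.function traces it into a graph,
which is safe only when semantics are preserved, the property the analysis
checks. A minimal sketch:

import tensorflow as tf

def eager_step(x, w):  # imperative, eagerly executed: easy to debug
    return tf.reduce_sum(tf.matmul(x, w))

graph_step = tf.function(eager_step)  # same code, traced into a graph

x = tf.random.normal((64, 128))
w = tf.random.normal((128, 10))
print(float(eager_step(x, w)), float(graph_step(x, w)))  # should agree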
|
2504.05443 | Wuzhe Xu | Wuzhe Xu, Yulong Lu, Lian shen, Anqing Xuan and Ali Barzegari | Diffusion-based Models for Unpaired Super-resolution in Fluid Dynamics | null | null | null | null | math.NA cs.NA physics.flu-dyn | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | High-fidelity, high-resolution numerical simulations are crucial for studying
complex multiscale phenomena in fluid dynamics, such as turbulent flows and
ocean waves. However, direct numerical simulations with high-resolution solvers
are computationally prohibitive. As an alternative, super-resolution techniques
enable the enhancement of low-fidelity, low-resolution simulations. However,
traditional super-resolution approaches rely on paired low-fidelity,
low-resolution and high-fidelity, high-resolution datasets for training, which
are often impossible to acquire in complex flow systems. To address this
challenge, we propose a novel two-step approach that eliminates the need for
paired datasets. First, we perform unpaired domain translation at the
low-resolution level using an Enhanced Denoising Diffusion Implicit Bridge.
This process transforms low-fidelity, low-resolution inputs into high-fidelity,
low-resolution outputs, and we provide a theoretical analysis to highlight the
advantages of this enhanced diffusion-based approach. Second, we employ the
cascaded Super-Resolution via Repeated Refinement model to upscale the
high-fidelity, low-resolution prediction to the high-resolution result. We
demonstrate the effectiveness of our approach across three fluid dynamics
problems. Moreover, by incorporating a neural operator to learn system
dynamics, our method can be extended to improve evolutionary simulations of
low-fidelity, low-resolution data.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 19:08:28 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Xu",
"Wuzhe",
""
],
[
"Lu",
"Yulong",
""
],
[
"shen",
"Lian",
""
],
[
"Xuan",
"Anqing",
""
],
[
"Barzegari",
"Ali",
""
]
] | TITLE: Diffusion-based Models for Unpaired Super-resolution in Fluid Dynamics
ABSTRACT: High-fidelity, high-resolution numerical simulations are crucial for studying
complex multiscale phenomena in fluid dynamics, such as turbulent flows and
ocean waves. However, direct numerical simulations with high-resolution solvers
are computationally prohibitive. As an alternative, super-resolution techniques
enable the enhancement of low-fidelity, low-resolution simulations. However,
traditional super-resolution approaches rely on paired low-fidelity,
low-resolution and high-fidelity, high-resolution datasets for training, which
are often impossible to acquire in complex flow systems. To address this
challenge, we propose a novel two-step approach that eliminates the need for
paired datasets. First, we perform unpaired domain translation at the
low-resolution level using an Enhanced Denoising Diffusion Implicit Bridge.
This process transforms low-fidelity, low-resolution inputs into high-fidelity,
low-resolution outputs, and we provide a theoretical analysis to highlight the
advantages of this enhanced diffusion-based approach. Second, we employ the
cascaded Super-Resolution via Repeated Refinement model to upscale the
high-fidelity, low-resolution prediction to the high-resolution result. We
demonstrate the effectiveness of our approach across three fluid dynamics
problems. Moreover, by incorporating a neural operator to learn system
dynamics, our method can be extended to improve evolutionary simulations of
low-fidelity, low-resolution data.
|
2504.05461 | Arnas Uselis | Arnas Uselis, Seong Joon Oh | Intermediate Layer Classifiers for OOD generalization | ICLR 2025 | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Deep classifiers are known to be sensitive to data distribution shifts,
primarily due to their reliance on spurious correlations in training data. It
has been suggested that these classifiers can still find useful features in the
network's last layer that hold up under such shifts. In this work, we question
the use of last-layer representations for out-of-distribution (OOD)
generalisation and explore the utility of intermediate layers. To this end, we
introduce \textit{Intermediate Layer Classifiers} (ILCs). We discover that
intermediate layer representations frequently offer substantially better
generalisation than those from the penultimate layer. In many cases, zero-shot
OOD generalisation using earlier-layer representations approaches the few-shot
performance of retraining on penultimate layer representations. This is
confirmed across multiple datasets, architectures, and types of distribution
shifts. Our analysis suggests that intermediate layers are less sensitive to
distribution shifts compared to the penultimate layer. These findings highlight
the importance of understanding how information is distributed across network
layers and its role in OOD generalisation, while also pointing to the limits of
penultimate layer representation utility. Code is available at
https://github.com/oshapio/intermediate-layer-generalization
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 19:50:50 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Uselis",
"Arnas",
""
],
[
"Oh",
"Seong Joon",
""
]
] | TITLE: Intermediate Layer Classifiers for OOD generalization
ABSTRACT: Deep classifiers are known to be sensitive to data distribution shifts,
primarily due to their reliance on spurious correlations in training data. It
has been suggested that these classifiers can still find useful features in the
network's last layer that hold up under such shifts. In this work, we question
the use of last-layer representations for out-of-distribution (OOD)
generalisation and explore the utility of intermediate layers. To this end, we
introduce \textit{Intermediate Layer Classifiers} (ILCs). We discover that
intermediate layer representations frequently offer substantially better
generalisation than those from the penultimate layer. In many cases, zero-shot
OOD generalisation using earlier-layer representations approaches the few-shot
performance of retraining on penultimate layer representations. This is
confirmed across multiple datasets, architectures, and types of distribution
shifts. Our analysis suggests that intermediate layers are less sensitive to
distribution shifts compared to the penultimate layer. These findings highlight
the importance of understanding how information is distributed across network
layers and its role in OOD generalisation, while also pointing to the limits of
penultimate layer representation utility. Code is available at
https://github.com/oshapio/intermediate-layer-generalization
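
A hedged PyTorch sketch of an intermediate layer classifier: freeze a
backbone, hook an intermediate layer, and train a linear probe on its
features. The architecture and dimensions are assumptions, not the paper's
exact setup.

import torch
import torch.nn as nn
from torchvision.models import resnet18

backbone = resnet18(weights=None).eval()
feats = {}
backbone.layer3.register_forward_hook(lambda m, i, o: feats.update(h=o))

probe = nn.Linear(256 * 14 * 14, 10)  # dims assume 224x224 inputs
x = torch.randn(4, 3, 224, 224)
with torch.no_grad():
    backbone(x)                       # backbone stays frozen
logits = probe(feats["h"].flatten(1))  # only the probe is trained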
|
2504.05466 | Jaise Johnson | Jaise Johnson, Chinmayi R Galigekere and Manoj M Varma | A Solid-State Nanopore Signal Generator for Training Machine Learning
Models | Main text and supplementary information combined: 47 pages. Main
text: 13 pages, 4 figures. Supplementary Information: 34 pages, 29 figures | null | null | null | eess.SP physics.bio-ph q-bio.BM stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Translocation event detection from raw nanopore current signals is a
fundamental step in nanopore signal analysis. Traditional data analysis methods
rely on user-defined parameters to extract event information, making the
interpretation of experimental results sensitive to parameter choice. While
Machine Learning (ML) has seen widespread adoption across various scientific
fields, its potential remains underexplored in solid-state nanopore research.
In this work, we introduce a nanopore signal generator capable of producing
extensive synthetic datasets for machine learning applications and benchmarking
nanopore signal analysis platforms. Using this generator, we train deep
learning models to detect translocation events directly from raw signals,
achieving over 99% true event detection with minimal false positives.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 19:56:35 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Johnson",
"Jaise",
""
],
[
"Galigekere",
"Chinmayi R",
""
],
[
"Varma",
"Manoj M",
""
]
] | TITLE: A Solid-State Nanopore Signal Generator for Training Machine Learning
Models
ABSTRACT: Translocation event detection from raw nanopore current signals is a
fundamental step in nanopore signal analysis. Traditional data analysis methods
rely on user-defined parameters to extract event information, making the
interpretation of experimental results sensitive to parameter choice. While
Machine Learning (ML) has seen widespread adoption across various scientific
fields, its potential remains underexplored in solid-state nanopore research.
In this work, we introduce a nanopore signal generator capable of producing
extensive synthetic datasets for machine learning applications and benchmarking
nanopore signal analysis platforms. Using this generator, we train deep
learning models to detect translocation events directly from raw signals,
achieving over 99% true event detection with minimal false positives.
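
As a flavor of what such a generator might produce, the sketch below
synthesizes a noisy open-pore baseline with rectangular current blockades and
ground-truth event labels; all parameters are arbitrary stand-ins, not the
authors' generator.

import numpy as np

def synth_trace(n=100_000, baseline=1.0, noise=0.02, n_events=20,
                rng=np.random.default_rng(0)):
    trace = baseline + rng.normal(0, noise, n)
    labels = np.zeros(n, dtype=int)
    for _ in range(n_events):
        start = rng.integers(0, n - 500)
        dur = rng.integers(50, 500)               # event dwell time (samples)
        depth = rng.uniform(0.2, 0.6) * baseline  # blockade depth
        trace[start:start + dur] -= depth
        labels[start:start + dur] = 1
    return trace, labels                          # (signal, ground-truth events)

signal, events = synth_trace()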
|
2504.05468 | Thanos Delatolas | Thanos Delatolas, Vicky Kalogeiton, Dim P. Papadopoulos | Studying Image Diffusion Features for Zero-Shot Video Object
Segmentation | Accepted to CVPRW2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper investigates the use of large-scale diffusion models for Zero-Shot
Video Object Segmentation (ZS-VOS) without fine-tuning on video data or
training on any image segmentation data. While diffusion models have
demonstrated strong visual representations across various tasks, their direct
application to ZS-VOS remains underexplored. Our goal is to find the optimal
feature extraction process for ZS-VOS by identifying the most suitable time
step and layer from which to extract features. We further analyze the affinity
of these features and observe a strong correlation with point correspondences.
Through extensive experiments on DAVIS-17 and MOSE, we find that diffusion
models trained on ImageNet outperform those trained on larger, more diverse
datasets for ZS-VOS. Additionally, we highlight the importance of point
correspondences in achieving high segmentation accuracy, and we yield
state-of-the-art results in ZS-VOS. Finally, our approach performs on par with
models trained on expensive image segmentation datasets.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 19:58:25 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Delatolas",
"Thanos",
""
],
[
"Kalogeiton",
"Vicky",
""
],
[
"Papadopoulos",
"Dim P.",
""
]
] | TITLE: Studying Image Diffusion Features for Zero-Shot Video Object
Segmentation
ABSTRACT: This paper investigates the use of large-scale diffusion models for Zero-Shot
Video Object Segmentation (ZS-VOS) without fine-tuning on video data or
training on any image segmentation data. While diffusion models have
demonstrated strong visual representations across various tasks, their direct
application to ZS-VOS remains underexplored. Our goal is to find the optimal
feature extraction process for ZS-VOS by identifying the most suitable time
step and layer from which to extract features. We further analyze the affinity
of these features and observe a strong correlation with point correspondences.
Through extensive experiments on DAVIS-17 and MOSE, we find that diffusion
models trained on ImageNet outperform those trained on larger, more diverse
datasets for ZS-VOS. Additionally, we highlight the importance of point
correspondences in achieving high segmentation accuracy, and we yield
state-of-the-art results in ZS-VOS. Finally, our approach performs on par with
models trained on expensive image segmentation datasets.
|
2504.05491 | Sakib Reza | Sakib Reza, Xiyun Song, Heather Yu, Zongfang Lin, Mohsen Moghaddam,
Octavia Camps | REEF: Relevance-Aware and Efficient LLM Adapter for Video Understanding | Accepted at CVPRW'25 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Integrating vision models into large language models (LLMs) has sparked
significant interest in creating vision-language foundation models, especially
for video understanding. Recent methods often utilize memory banks to handle
untrimmed videos for video-level understanding. However, they typically
compress visual memory using similarity-based greedy approaches, which can
overlook the contextual importance of individual tokens. To address this, we
introduce an efficient LLM adapter designed for video-level understanding of
untrimmed videos that prioritizes the contextual relevance of spatio-temporal
tokens. Our framework leverages scorer networks to selectively compress the
visual memory bank and filter spatial tokens based on relevance, using a
differentiable Top-K operator for end-to-end training. Across three key
video-level understanding tasks (untrimmed video classification, video
question answering, and video captioning), our method achieves competitive or
superior results
on four large-scale datasets while reducing computational overhead by up to
34%. The code will be available soon on GitHub.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 20:36:34 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Reza",
"Sakib",
""
],
[
"Song",
"Xiyun",
""
],
[
"Yu",
"Heather",
""
],
[
"Lin",
"Zongfang",
""
],
[
"Moghaddam",
"Mohsen",
""
],
[
"Camps",
"Octavia",
""
]
] | TITLE: REEF: Relevance-Aware and Efficient LLM Adapter for Video Understanding
ABSTRACT: Integrating vision models into large language models (LLMs) has sparked
significant interest in creating vision-language foundation models, especially
for video understanding. Recent methods often utilize memory banks to handle
untrimmed videos for video-level understanding. However, they typically
compress visual memory using similarity-based greedy approaches, which can
overlook the contextual importance of individual tokens. To address this, we
introduce an efficient LLM adapter designed for video-level understanding of
untrimmed videos that prioritizes the contextual relevance of spatio-temporal
tokens. Our framework leverages scorer networks to selectively compress the
visual memory bank and filter spatial tokens based on relevance, using a
differentiable Top-K operator for end-to-end training. Across three key
video-level understanding tasks (untrimmed video classification, video
question answering, and video captioning), our method achieves competitive or
superior results
on four large-scale datasets while reducing computational overhead by up to
34%. The code will be available soon on GitHub.
|
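The record above hinges on a differentiable Top-K operator for selecting
relevant tokens. One common construction is a straight-through softmax
relaxation, sketched below; REEF's exact operator may differ:

```python
import torch

def soft_topk_mask(scores: torch.Tensor, k: int, tau: float = 0.1) -> torch.Tensor:
    """Straight-through Top-K: hard 0/1 mask forward, soft gradients backward."""
    soft = torch.softmax(scores / tau, dim=-1)            # differentiable surrogate
    hard = torch.zeros_like(scores)
    hard.scatter_(-1, scores.topk(k, dim=-1).indices, 1.0)
    return hard + (soft - soft.detach())                  # straight-through trick

scores = torch.randn(2, 16, requires_grad=True)           # e.g. token relevance
mask = soft_topk_mask(scores, k=4)                        # keeps 4 tokens per row
(mask * scores).sum().backward()                          # gradients reach scores
```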
2504.05499 | Ruoyu Xue | Ruoyu Xue, Jingyi Xu, Sounak Mondal, Hieu Le, Gregory Zelinsky, Minh
Hoai, Dimitris Samaras | Few-shot Personalized Scanpath Prediction | Accepted by CVPR 2025,20 pages, 10 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | A personalized model for scanpath prediction provides insights into the
visual preferences and attention patterns of individual subjects. However,
existing methods for training scanpath prediction models are data-intensive and
cannot be effectively personalized to new individuals with only a few available
examples. In this paper, we propose the few-shot personalized scanpath prediction
task (FS-PSP) and a novel method to address it, which aims to predict scanpaths
for an unseen subject using minimal support data of that subject's scanpath
behavior. The key to our method's adaptability is the Subject-Embedding Network
(SE-Net), specifically designed to capture unique, individualized
representations for each subject's scanpaths. SE-Net generates subject
embeddings that effectively distinguish between subjects while minimizing
variability among scanpaths from the same individual. The personalized scanpath
prediction model is then conditioned on these subject embeddings to produce
accurate, personalized results. Experiments on multiple eye-tracking datasets
demonstrate that our method excels in FS-PSP settings and does not require any
fine-tuning steps at test time. Code is available at:
https://github.com/cvlab-stonybrook/few-shot-scanpath
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 20:48:41 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Xue",
"Ruoyu",
""
],
[
"Xu",
"Jingyi",
""
],
[
"Mondal",
"Sounak",
""
],
[
"Le",
"Hieu",
""
],
[
"Zelinsky",
"Gregory",
""
],
[
"Hoai",
"Minh",
""
],
[
"Samaras",
"Dimitris",
""
]
] | TITLE: Few-shot Personalized Scanpath Prediction
ABSTRACT: A personalized model for scanpath prediction provides insights into the
visual preferences and attention patterns of individual subjects. However,
existing methods for training scanpath prediction models are data-intensive and
cannot be effectively personalized to new individuals with only a few available
examples. In this paper, we propose the few-shot personalized scanpath prediction
task (FS-PSP) and a novel method to address it, which aims to predict scanpaths
for an unseen subject using minimal support data of that subject's scanpath
behavior. The key to our method's adaptability is the Subject-Embedding Network
(SE-Net), specifically designed to capture unique, individualized
representations for each subject's scanpaths. SE-Net generates subject
embeddings that effectively distinguish between subjects while minimizing
variability among scanpaths from the same individual. The personalized scanpath
prediction model is then conditioned on these subject embeddings to produce
accurate, personalized results. Experiments on multiple eye-tracking datasets
demonstrate that our method excels in FS-PSP settings and does not require any
fine-tuning steps at test time. Code is available at:
https://github.com/cvlab-stonybrook/few-shot-scanpath
|
2504.05504 | Marija Ivanovska | Marija Ivanovska, Leon Todorov, Naser Damer, Deepak Kumar Jain, Peter
Peer, Vitomir \v{S}truc | SelfMAD: Enhancing Generalization and Robustness in Morphing Attack
Detection via Self-Supervised Learning | Accepted at IEEE International Conference on Automatic Face and
Gesture Recognition (FG 2025) | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | With the continuous advancement of generative models, face morphing attacks
have become a significant challenge for existing face verification systems due
to their potential use in identity fraud and other malicious activities.
Contemporary Morphing Attack Detection (MAD) approaches frequently rely on
supervised, discriminative models trained on examples of bona fide and morphed
images. These models typically perform well with morphs generated with
techniques seen during training, but often lead to sub-optimal performance when
subjected to novel unseen morphing techniques. While unsupervised models have
been shown to perform better in terms of generalizability, they typically
result in higher error rates, as they struggle to effectively capture features
of subtle artifacts. To address these shortcomings, we present SelfMAD, a novel
self-supervised approach that simulates general morphing attack artifacts,
allowing classifiers to learn generic and robust decision boundaries without
overfitting to the specific artifacts induced by particular face morphing
methods. Through extensive experiments on widely used datasets, we demonstrate
that SelfMAD significantly outperforms current state-of-the-art MADs, reducing
the detection error by more than 64% in terms of EER when compared to the
strongest unsupervised competitor, and by more than 66%, when compared to the
best performing discriminative MAD model, tested in cross-morph settings. The
source code for SelfMAD is available at https://github.com/LeonTodorov/SelfMAD.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 21:03:00 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Ivanovska",
"Marija",
""
],
[
"Todorov",
"Leon",
""
],
[
"Damer",
"Naser",
""
],
[
"Jain",
"Deepak Kumar",
""
],
[
"Peer",
"Peter",
""
],
[
"Štruc",
"Vitomir",
""
]
] | TITLE: SelfMAD: Enhancing Generalization and Robustness in Morphing Attack
Detection via Self-Supervised Learning
ABSTRACT: With the continuous advancement of generative models, face morphing attacks
have become a significant challenge for existing face verification systems due
to their potential use in identity fraud and other malicious activities.
Contemporary Morphing Attack Detection (MAD) approaches frequently rely on
supervised, discriminative models trained on examples of bona fide and morphed
images. These models typically perform well with morphs generated with
techniques seen during training, but often lead to sub-optimal performance when
subjected to novel unseen morphing techniques. While unsupervised models have
been shown to perform better in terms of generalizability, they typically
result in higher error rates, as they struggle to effectively capture features
of subtle artifacts. To address these shortcomings, we present SelfMAD, a novel
self-supervised approach that simulates general morphing attack artifacts,
allowing classifiers to learn generic and robust decision boundaries without
overfitting to the specific artifacts induced by particular face morphing
methods. Through extensive experiments on widely used datasets, we demonstrate
that SelfMAD significantly outperforms current state-of-the-art MADs, reducing
the detection error by more than 64% in terms of EER when compared to the
strongest unsupervised competitor, and by more than 66%, when compared to the
best performing discriminative MAD model, tested in cross-morph settings. The
source code for SelfMAD is available at https://github.com/LeonTodorov/SelfMAD.
|
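A toy illustration of the self-supervised idea in the SelfMAD record:
synthesize attack-like training samples by blending two bona fide faces. The
real method simulates richer, more general morphing artifacts; this sketch only
conveys the label-free data-generation principle:

```python
import numpy as np

def simulate_pseudo_morph(img_a, img_b, alpha_range=(0.3, 0.7), seed=None):
    """Blend two bona fide face images (float arrays in [0, 1], same shape)
    into an attack-like training sample with label 1."""
    rng = np.random.default_rng(seed)
    alpha = float(rng.uniform(*alpha_range))
    blended = np.clip(alpha * img_a + (1.0 - alpha) * img_b, 0.0, 1.0)
    return blended, 1                                     # 1 = simulated attack
```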
2504.05515 | Gerardo Iuliano | Gerardo Iuliano, Davide Corradini, Michele Pasqua, Mariano Ceccato,
Dario Di Nucci | How Do Solidity Versions Affect Vulnerability Detection Tools? An
Empirical Study | null | null | null | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | Context: Smart contract vulnerabilities pose significant security risks for
the Ethereum ecosystem, driving the development of automated tools for
detection and mitigation. Smart contracts are written in Solidity, a
programming language that is rapidly evolving to add features and improvements
to enhance smart contract security. New versions of Solidity change the
compilation process, potentially affecting how tools interpret and analyze
smart contract code. Objective: In such a continuously evolving landscape, we
aim to investigate the compatibility of detection tools with Solidity versions.
More specifically, we present a plan to study detection tools by empirically
assessing (i) their compatibility with the Solidity pragma directives, (ii)
their detection effectiveness, and (iii) their execution time across different
versions of Solidity. Method: We will conduct an exploratory study by running
several tools and collecting a large number of real-world smart contracts to
create a balanced dataset. We will track and analyze the tool execution through
SmartBugs, a framework that facilitates the tool execution and allows the
integration of new tools.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 21:15:59 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Iuliano",
"Gerardo",
""
],
[
"Corradini",
"Davide",
""
],
[
"Pasqua",
"Michele",
""
],
[
"Ceccato",
"Mariano",
""
],
[
"Di Nucci",
"Dario",
""
]
] | TITLE: How Do Solidity Versions Affect Vulnerability Detection Tools? An
Empirical Study
ABSTRACT: Context: Smart contract vulnerabilities pose significant security risks for
the Ethereum ecosystem, driving the development of automated tools for
detection and mitigation. Smart contracts are written in Solidity, a
programming language that is rapidly evolving to add features and improvements
to enhance smart contract security. New versions of Solidity change the
compilation process, potentially affecting how tools interpret and analyze
smart contract code. Objective: In such a continuously evolving landscape, we
aim to investigate the compatibility of detection tools with Solidity versions.
More specifically, we present a plan to study detection tools by empirically
assessing (i) their compatibility with the Solidity pragma directives, (ii)
their detection effectiveness, and (iii) their execution time across different
versions of Solidity. Method: We will conduct an exploratory study by running
several tools and collecting a large number of real-world smart contracts to
create a balanced dataset. We will track and analyze the tool execution through
SmartBugs, a framework that facilitates the tool execution and allows the
integration of new tools.
|
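A small helper of the kind such a study needs in order to bucket contracts by
declared compiler version; this regex-based extractor is illustrative and not
the authors' tooling:

```python
import re

PRAGMA_RE = re.compile(r"pragma\s+solidity\s+([^;]+);")

def solidity_pragmas(source: str) -> list:
    """Version constraints declared via `pragma solidity ...;` in a contract."""
    return [m.strip() for m in PRAGMA_RE.findall(source)]

print(solidity_pragmas("pragma solidity ^0.8.19;\ncontract C {}"))  # ['^0.8.19']
```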
2504.05520 | Taiwei Shi | Taiwei Shi, Yiyang Wu, Linxin Song, Tianyi Zhou, Jieyu Zhao | Efficient Reinforcement Finetuning via Adaptive Curriculum Learning | 18 pages, 4 figures, 2 tables | null | null | null | cs.LG cs.CL | http://creativecommons.org/licenses/by/4.0/ | Reinforcement finetuning (RFT) has shown great potential for enhancing the
mathematical reasoning capabilities of large language models (LLMs), but it is
often sample- and compute-inefficient, requiring extensive training. In this
work, we introduce AdaRFT (Adaptive Curriculum Reinforcement Finetuning), a
method that significantly improves both the efficiency and final accuracy of
RFT through adaptive curriculum learning. AdaRFT dynamically adjusts the
difficulty of training problems based on the model's recent reward signals,
ensuring that the model consistently trains on tasks that are challenging but
solvable. This adaptive sampling strategy accelerates learning by maintaining
an optimal difficulty range, avoiding wasted computation on problems that are
too easy or too hard. AdaRFT requires only a lightweight extension to standard
RFT algorithms like Proximal Policy Optimization (PPO), without modifying the
reward function or model architecture. Experiments on competition-level math
datasets-including AMC, AIME, and IMO-style problems-demonstrate that AdaRFT
significantly improves both training efficiency and reasoning performance. We
evaluate AdaRFT across multiple data distributions and model sizes, showing
that it reduces the number of training steps by up to 2x and improves accuracy
by a considerable margin, offering a more scalable and effective RFT framework.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 21:31:31 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Shi",
"Taiwei",
""
],
[
"Wu",
"Yiyang",
""
],
[
"Song",
"Linxin",
""
],
[
"Zhou",
"Tianyi",
""
],
[
"Zhao",
"Jieyu",
""
]
] | TITLE: Efficient Reinforcement Finetuning via Adaptive Curriculum Learning
ABSTRACT: Reinforcement finetuning (RFT) has shown great potential for enhancing the
mathematical reasoning capabilities of large language models (LLMs), but it is
often sample- and compute-inefficient, requiring extensive training. In this
work, we introduce AdaRFT (Adaptive Curriculum Reinforcement Finetuning), a
method that significantly improves both the efficiency and final accuracy of
RFT through adaptive curriculum learning. AdaRFT dynamically adjusts the
difficulty of training problems based on the model's recent reward signals,
ensuring that the model consistently trains on tasks that are challenging but
solvable. This adaptive sampling strategy accelerates learning by maintaining
an optimal difficulty range, avoiding wasted computation on problems that are
too easy or too hard. AdaRFT requires only a lightweight extension to standard
RFT algorithms like Proximal Policy Optimization (PPO), without modifying the
reward function or model architecture. Experiments on competition-level math
datasets-including AMC, AIME, and IMO-style problems-demonstrate that AdaRFT
significantly improves both training efficiency and reasoning performance. We
evaluate AdaRFT across multiple data distributions and model sizes, showing
that it reduces the number of training steps by up to 2x and improves accuracy
by a considerable margin, offering a more scalable and effective RFT framework.
|
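The adaptive-curriculum idea in the AdaRFT record can be sketched as a small
difficulty controller driven by recent reward signals; the update rule and
hyperparameters below are assumptions for illustration, not the paper's exact
algorithm:

```python
def update_difficulty(target, recent_rewards, goal=0.5, step=0.05):
    """Raise the target difficulty when recent success exceeds `goal`,
    lower it otherwise; clamp to [0, 1]."""
    success = sum(recent_rewards) / max(len(recent_rewards), 1)
    return min(max(target + step * (success - goal), 0.0), 1.0)

difficulty = 0.3
for rewards in ([1, 1, 1, 0], [0, 0, 1, 0]):              # batches of 0/1 rewards
    difficulty = update_difficulty(difficulty, rewards)   # 0.3125, then 0.3
    print(difficulty)
```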
2504.05521 | Andrei Neagu | Andrei Neagu, Fr\'ed\'eric Godin, Leila Kosseim | Deep Reinforcement Learning Algorithms for Option Hedging | null | null | null | null | q-fin.CP cs.AI cs.CE | http://creativecommons.org/licenses/by/4.0/ | Dynamic hedging is a financial strategy that consists in periodically
transacting one or multiple financial assets to offset the risk associated with
a correlated liability. Deep Reinforcement Learning (DRL) algorithms have been
used to find optimal solutions to dynamic hedging problems by framing them as
sequential decision-making problems. However, most previous work assesses the
performance of only one or two DRL algorithms, making an objective comparison
across algorithms difficult. In this paper, we compare the performance of eight
DRL algorithms in the context of dynamic hedging: Monte Carlo Policy Gradient
(MCPG), Proximal Policy Optimization (PPO), along with four variants of Deep
Q-Learning (DQL) and two variants of Deep Deterministic Policy Gradient (DDPG).
Two of these variants represent a novel application to the task of dynamic
hedging. In our experiments, we use the Black-Scholes delta hedge as a baseline
and simulate the dataset using a GJR-GARCH(1,1) model. Results show that MCPG,
followed by PPO, obtain the best performance in terms of the root
semi-quadratic penalty. Moreover, MCPG is the only algorithm to outperform the
Black-Scholes delta hedge baseline with the allotted computational budget,
possibly due to the sparsity of rewards in our environment.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 21:32:14 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Neagu",
"Andrei",
""
],
[
"Godin",
"Frédéric",
""
],
[
"Kosseim",
"Leila",
""
]
] | TITLE: Deep Reinforcement Learning Algorithms for Option Hedging
ABSTRACT: Dynamic hedging is a financial strategy that consists in periodically
transacting one or multiple financial assets to offset the risk associated with
a correlated liability. Deep Reinforcement Learning (DRL) algorithms have been
used to find optimal solutions to dynamic hedging problems by framing them as
sequential decision-making problems. However, most previous work assesses the
performance of only one or two DRL algorithms, making an objective comparison
across algorithms difficult. In this paper, we compare the performance of eight
DRL algorithms in the context of dynamic hedging: Monte Carlo Policy Gradient
(MCPG), Proximal Policy Optimization (PPO), along with four variants of Deep
Q-Learning (DQL) and two variants of Deep Deterministic Policy Gradient (DDPG).
Two of these variants represent a novel application to the task of dynamic
hedging. In our experiments, we use the Black-Scholes delta hedge as a baseline
and simulate the dataset using a GJR-GARCH(1,1) model. Results show that MCPG,
followed by PPO, obtain the best performance in terms of the root
semi-quadratic penalty. Moreover, MCPG is the only algorithm to outperform the
Black-Scholes delta hedge baseline with the allotted computational budget,
possibly due to the sparsity of rewards in our environment.
|
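The Black-Scholes delta hedge used as the baseline in the record above is easy
to sketch. The snippet assumes constant volatility, no transaction costs, and a
plain geometric-Brownian test path, whereas the study simulates paths with a
GJR-GARCH(1,1) model:

```python
import numpy as np
from scipy.stats import norm

def bs_delta(S, K, tau, r, sigma):
    """Black-Scholes delta of a European call."""
    d1 = (np.log(S / K) + (r + 0.5 * sigma**2) * tau) / (sigma * np.sqrt(tau))
    return norm.cdf(d1)

def delta_hedge_pnl(path, K, T, r, sigma, dt):
    """Terminal P&L of delta-hedging a short call along one price path."""
    cash, shares = 0.0, 0.0
    for i, S in enumerate(path[:-1]):
        new_shares = bs_delta(S, K, T - i * dt, r, sigma)
        cash -= (new_shares - shares) * S                 # trade the adjustment
        cash *= np.exp(r * dt)                            # accrue interest
        shares = new_shares
    return cash + shares * path[-1] - max(path[-1] - K, 0.0)

rng = np.random.default_rng(0)
n, T, sigma, r = 252, 1.0, 0.2, 0.02
dt = T / n
steps = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal(n)
path = 100.0 * np.exp(np.concatenate(([0.0], np.cumsum(steps))))
print(delta_hedge_pnl(path, K=100.0, T=T, r=r, sigma=sigma, dt=dt))
```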
2504.05530 | Rishav Mukherjee | Rishav Mukherjee, Jeffrey Ahearn Thompson | FORCE: Feature-Oriented Representation with Clustering and Explanation | 12 pages, 3 figures | null | null | null | cs.LG cs.AI stat.AP | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Learning about underlying patterns in data using latent unobserved structures
to improve the accuracy of predictive models has become an active avenue of
deep learning research. Most approaches cluster the original features to
capture certain latent structures. However, the information gained in the
process can often be implicitly derived by sufficiently complex models. Thus,
such approaches often provide minimal benefits. We propose a SHAP (Shapley
Additive exPlanations) based supervised deep learning framework FORCE which
relies on two-stage usage of SHAP values in the neural network architecture,
(i) an additional latent feature to guide model training, based on clustering
SHAP values, and (ii) initiating an attention mechanism within the architecture
using latent information. This approach gives a neural network an indication
about the effect of unobserved values that modify feature importance for an
observation. The proposed framework is evaluated on three real-life datasets.
Our results demonstrate that FORCE led to dramatic improvements in overall
performance as compared to networks that did not incorporate the latent feature
and attention framework (e.g., F1 score for presence of heart disease 0.80 vs
0.72). Using cluster assignments and attention based on SHAP values guides deep
learning, enhancing latent pattern learning and overall discriminative
capability.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 22:05:50 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Mukherjee",
"Rishav",
""
],
[
"Thompson",
"Jeffrey Ahearn",
""
]
] | TITLE: FORCE: Feature-Oriented Representation with Clustering and Explanation
ABSTRACT: Learning about underlying patterns in data using latent unobserved structures
to improve the accuracy of predictive models has become an active avenue of
deep learning research. Most approaches cluster the original features to
capture certain latent structures. However, the information gained in the
process can often be implicitly derived by sufficiently complex models. Thus,
such approaches often provide minimal benefits. We propose a SHAP (Shapley
Additive exPlanations) based supervised deep learning framework, FORCE, which
relies on a two-stage usage of SHAP values in the neural network architecture:
(i) an additional latent feature to guide model training, based on clustering
SHAP values, and (ii) initiating an attention mechanism within the architecture
using latent information. This approach gives a neural network an indication
about the effect of unobserved values that modify feature importance for an
observation. The proposed framework is evaluated on three real-life datasets.
Our results demonstrate that FORCE led to dramatic improvements in overall
performance as compared to networks that did not incorporate the latent feature
and attention framework (e.g., F1 score for presence of heart disease 0.80 vs
0.72). Using cluster assignments and attention based on SHAP values guides deep
learning, enhancing latent pattern learning and overall discriminative
capability.
|
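Stage one of the FORCE idea, deriving a latent feature by clustering
per-observation SHAP vectors, can be sketched with scikit-learn; the cluster
count and the downstream attention wiring are the paper's design choices and
are not reproduced here:

```python
import numpy as np
from sklearn.cluster import KMeans

def shap_cluster_feature(shap_values, n_clusters=4, seed=0):
    """Cluster per-observation SHAP vectors; the cluster id becomes an
    extra latent input feature when the network is retrained."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    return km.fit_predict(shap_values)                    # one label per row

# shap_values: (n_samples, n_features) array from any SHAP explainer
latent = shap_cluster_feature(np.random.rand(100, 8))
```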
2504.05534 | Arnau Marin-Llobet | Arnau Marin-Llobet, Arnau Manasanch, Sergio Sanchez-Manso, Lluc
Tresserras, Xinhe Zhang, Yining Hua, Hao Zhao, Melody Torao-Angosto, Maria V
Sanchez-Vives, Leonardo Dalla Porta | Riemannian Geometry for the classification of brain states with
intracortical brain-computer interfaces | Preprint | null | null | null | q-bio.NC cs.LG | http://creativecommons.org/licenses/by/4.0/ | This study investigates the application of Riemannian geometry-based methods
for brain decoding using invasive electrophysiological recordings. Although
previously employed in non-invasive, the utility of Riemannian geometry for
invasive datasets, which are typically smaller and scarcer, remains less
explored. Here, we propose a Minimum Distance to Mean (MDM) classifier using a
Riemannian geometry approach based on covariance matrices extracted from
intracortical Local Field Potential (LFP) recordings across various regions
during different brain state dynamics. For benchmarking, we evaluated the
performance of our approach against Convolutional Neural Networks (CNNs) and
Euclidean MDM classifiers. Our results indicate that the Riemannian
geometry-based classification not only achieves a superior mean F1
macro-averaged score across different channel configurations but also requires
up to two orders of magnitude less computational training time. Additionally,
the geometric framework reveals distinct spatial contributions of brain regions
across varying brain states, suggesting a state-dependent organization that
traditional time series-based methods often fail to capture. Our findings align
with previous studies supporting the efficacy of geometry-based methods and
extending their application to invasive brain recordings, highlighting their
potential for broader clinical use, such as brain computer interface
applications.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 22:11:59 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Marin-Llobet",
"Arnau",
""
],
[
"Manasanch",
"Arnau",
""
],
[
"Sanchez-Manso",
"Sergio",
""
],
[
"Tresserras",
"Lluc",
""
],
[
"Zhang",
"Xinhe",
""
],
[
"Hua",
"Yining",
""
],
[
"Zhao",
"Hao",
""
],
[
"Torao-Angosto",
"Melody",
""
],
[
"Sanchez-Vives",
"Maria V",
""
],
[
"Porta",
"Leonardo Dalla",
""
]
] | TITLE: Riemannian Geometry for the classification of brain states with
intracortical brain-computer interfaces
ABSTRACT: This study investigates the application of Riemannian geometry-based methods
for brain decoding using invasive electrophysiological recordings. Although
previously employed in non-invasive settings, the utility of Riemannian geometry for
invasive datasets, which are typically smaller and scarcer, remains less
explored. Here, we propose a Minimum Distance to Mean (MDM) classifier using a
Riemannian geometry approach based on covariance matrices extracted from
intracortical Local Field Potential (LFP) recordings across various regions
during different brain state dynamics. For benchmarking, we evaluated the
performance of our approach against Convolutional Neural Networks (CNNs) and
Euclidean MDM classifiers. Our results indicate that the Riemannian
geometry-based classification not only achieves a superior mean F1
macro-averaged score across different channel configurations but also requires
up to two orders of magnitude less computational training time. Additionally,
the geometric framework reveals distinct spatial contributions of brain regions
across varying brain states, suggesting a state-dependent organization that
traditional time series-based methods often fail to capture. Our findings align
with previous studies supporting the efficacy of geometry-based methods and
extending their application to invasive brain recordings, highlighting their
potential for broader clinical use, such as brain computer interface
applications.
|
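A compact, numpy-only sketch of a Minimum Distance to Mean classifier over
covariance matrices. It uses the log-Euclidean metric for brevity; the paper's
classifier may use the affine-invariant metric, and libraries such as pyRiemann
provide full implementations:

```python
import numpy as np

def logm_spd(C):
    """Matrix logarithm of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(C)
    return (V * np.log(w)) @ V.T

def expm_sym(L):
    w, V = np.linalg.eigh(L)
    return (V * np.exp(w)) @ V.T

class MDMClassifier:
    """Minimum Distance to Mean over covariance matrices (log-Euclidean)."""

    def fit(self, covs, y):
        self.classes_ = np.unique(y)
        self.means_ = {
            c: expm_sym(np.mean([logm_spd(C) for C, yi in zip(covs, y)
                                 if yi == c], axis=0))
            for c in self.classes_
        }
        return self

    def predict(self, covs):
        def dist(A, M):                                   # log-Euclidean distance
            return np.linalg.norm(logm_spd(A) - logm_spd(M), "fro")
        return np.array([min(self.classes_, key=lambda c: dist(C, self.means_[c]))
                         for C in covs])
```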
2504.05535 | Siwei Wu | M-A-P Team, Siwei Wu, Jincheng Ren, Xinrun Du, Shuyue Guo, Xingwei Qu,
Yiming Liang, Jie Liu, Yunwen Li, Tianyu Zheng, Boyu Feng, Huaqing Yuan,
Zenith Wang, Jiaheng Liu, Wenhao Huang, Chenglin Cai, Haoran Que, Jian Yang,
Yuelin Bai, Zekun Moore Wang, Zhouliang Yu, Qunshu Lin, Ding Pan, Yuchen
Jiang, Tiannan Wang, Wangchunshu Zhou, Shenzhi Wang, Xingyuan Bu, Minghao
Liu, Guoyin Wang, Ge Zhang, Chenghua Lin | COIG-P: A High-Quality and Large-Scale Chinese Preference Dataset for
Alignment with Human Values | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Aligning large language models (LLMs) with human preferences has achieved
remarkable success. However, existing Chinese preference datasets are limited
by small scale, narrow domain coverage, and lack of rigorous data validation.
Additionally, the reliance on human annotators for instruction and response
labeling significantly constrains the scalability of human preference datasets.
To address these challenges, we design an LLM-based Chinese preference dataset
annotation pipeline with no human intervention. Specifically, we crawled and
carefully filtered 92k high-quality Chinese queries and employed 15 mainstream
LLMs to generate and score chosen-rejected response pairs. Based on this
pipeline, we introduce COIG-P (Chinese Open Instruction Generalist -
Preference), a high-quality, large-scale Chinese preference dataset comprising
1,009k Chinese
preference pairs spanning 6 diverse domains: Chat, Code, Math, Logic, Novel,
and Role. Building upon COIG-P, to reduce the overhead of using LLMs for
scoring, we trained an 8B-sized Chinese Reward Model (CRM) and meticulously
constructed a Chinese Reward Benchmark (CRBench). Evaluation results based on
AlignBench \citep{liu2024alignbenchbenchmarkingchinesealignment} show that
COIG-P significantly outperforms other Chinese preference datasets, and it
brings significant performance improvements ranging from 2% to 12% for the
Qwen2/2.5 and Infinity-Instruct-3M-0625 model series, respectively. The results
on CRBench demonstrate that our CRM has a strong and robust scoring ability. We
apply it to filter chosen-rejected response pairs in a test split of COIG-P,
and our experiments show that it is comparable to GPT-4o in identifying
low-quality samples while maintaining efficiency and cost-effectiveness. Our
code and data are released at
https://github.com/multimodal-art-projection/COIG-P.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 22:15:51 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"P Team",
"",
""
],
[
"Wu",
"Siwei",
""
],
[
"Ren",
"Jincheng",
""
],
[
"Du",
"Xinrun",
""
],
[
"Guo",
"Shuyue",
""
],
[
"Qu",
"Xingwei",
""
],
[
"Liang",
"Yiming",
""
],
[
"Liu",
"Jie",
""
],
[
"Li",
"Yunwen",
""
],
[
"Zheng",
"Tianyu",
""
],
[
"Feng",
"Boyu",
""
],
[
"Yuan",
"Huaqing",
""
],
[
"Wang",
"Zenith",
""
],
[
"Liu",
"Jiaheng",
""
],
[
"Huang",
"Wenhao",
""
],
[
"Cai",
"Chenglin",
""
],
[
"Que",
"Haoran",
""
],
[
"Yang",
"Jian",
""
],
[
"Bai",
"Yuelin",
""
],
[
"Wang",
"Zekun Moore",
""
],
[
"Yu",
"Zhouliang",
""
],
[
"Lin",
"Qunshu",
""
],
[
"Pan",
"Ding",
""
],
[
"Jiang",
"Yuchen",
""
],
[
"Wang",
"Tiannan",
""
],
[
"Zhou",
"Wangchunshu",
""
],
[
"Wang",
"Shenzhi",
""
],
[
"Bu",
"Xingyuan",
""
],
[
"Liu",
"Minghao",
""
],
[
"Wang",
"Guoyin",
""
],
[
"Zhang",
"Ge",
""
],
[
"Lin",
"Chenghua",
""
]
] | TITLE: COIG-P: A High-Quality and Large-Scale Chinese Preference Dataset for
Alignment with Human Values
ABSTRACT: Aligning large language models (LLMs) with human preferences has achieved
remarkable success. However, existing Chinese preference datasets are limited
by small scale, narrow domain coverage, and lack of rigorous data validation.
Additionally, the reliance on human annotators for instruction and response
labeling significantly constrains the scalability of human preference datasets.
To address these challenges, we design an LLM-based Chinese preference dataset
annotation pipeline with no human intervention. Specifically, we crawled and
carefully filtered 92k high-quality Chinese queries and employed 15 mainstream
LLMs to generate and score chosen-rejected response pairs. Based on this
pipeline, we introduce COIG-P (Chinese Open Instruction Generalist -
Preference), a high-quality, large-scale Chinese preference dataset comprising
1,009k Chinese
preference pairs spanning 6 diverse domains: Chat, Code, Math, Logic, Novel,
and Role. Building upon COIG-P, to reduce the overhead of using LLMs for
scoring, we trained an 8B-sized Chinese Reward Model (CRM) and meticulously
constructed a Chinese Reward Benchmark (CRBench). Evaluation results based on
AlignBench \citep{liu2024alignbenchbenchmarkingchinesealignment} show that
COIG-P significantly outperforms other Chinese preference datasets, and it
brings significant performance improvements ranging from 2% to 12% for the
Qwen2/2.5 and Infinity-Instruct-3M-0625 model series, respectively. The results
on CRBench demonstrate that our CRM has a strong and robust scoring ability. We
apply it to filter chosen-rejected response pairs in a test split of COIG-P,
and our experiments show that it is comparable to GPT-4o in identifying
low-quality samples while maintaining efficiency and cost-effectiveness. Our
code and data are released at
https://github.com/multimodal-art-projection/COIG-P.
|
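The chosen-rejected construction described in the COIG-P record reduces to
picking the best- and worst-scored responses per query. A hedged sketch, where
the gap threshold and tuple format are illustrative assumptions:

```python
def build_preference_pair(query, scored_responses, min_gap=1.0):
    """Pick the best- and worst-scored responses as (chosen, rejected);
    drop the pair when the judge scores are too close to be reliable."""
    ranked = sorted(scored_responses, key=lambda rs: rs[1], reverse=True)
    (chosen, s_hi), (rejected, s_lo) = ranked[0], ranked[-1]
    if s_hi - s_lo < min_gap:
        return None                                       # ambiguous, discard
    return {"prompt": query, "chosen": chosen, "rejected": rejected}

pair = build_preference_pair("<some Chinese query>",
                             [("response A", 8.5), ("response B", 4.0)])
```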
2504.05537 | Tasmiah Haque | Tasmiah Haque, Md. Asif Bin Syed, Byungheon Jeong, Xue Bai, Sumit
Mohan, Somdyuti Paul, Imtiaz Ahmed and Srinjoy Das | Towards Efficient Real-Time Video Motion Transfer via Generative Time
Series Modeling | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a deep learning framework designed to significantly optimize
bandwidth for motion-transfer-enabled video applications, including video
conferencing, virtual reality interactions, health monitoring systems, and
vision-based real-time anomaly detection. To capture complex motion
effectively, we utilize the First Order Motion Model (FOMM), which encodes
dynamic objects by detecting keypoints and their associated local affine
transformations. These keypoints are identified using a self-supervised
keypoint detector and arranged into a time series corresponding to the
successive frames. Forecasting is performed on these keypoints by integrating
two advanced generative time series models into the motion transfer pipeline,
namely the Variational Recurrent Neural Network (VRNN) and the Gated Recurrent
Unit with Normalizing Flow (GRU-NF). The predicted keypoints are subsequently
synthesized into realistic video frames using an optical flow estimator paired
with a generator network, thereby facilitating accurate video forecasting and
enabling efficient, low-frame-rate video transmission. We validate our results
across three datasets for video animation and reconstruction using the
following metrics: Mean Absolute Error, Joint Embedding Predictive Architecture
Embedding Distance, Structural Similarity Index, and Average Pair-wise
Displacement. Our results confirm that by utilizing the superior reconstruction
property of the Variational Autoencoder, the VRNN integrated FOMM excels in
applications involving multi-step ahead forecasts such as video conferencing.
On the other hand, by leveraging the Normalizing Flow architecture for exact
likelihood estimation, and enabling efficient latent space sampling, the GRU-NF
based FOMM exhibits superior capabilities for producing diverse future samples
while maintaining high visual quality for tasks like real-time video-based
anomaly detection.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 22:21:54 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Haque",
"Tasmiah",
""
],
[
"Syed",
"Md. Asif Bin",
""
],
[
"Jeong",
"Byungheon",
""
],
[
"Bai",
"Xue",
""
],
[
"Mohan",
"Sumit",
""
],
[
"Paul",
"Somdyuti",
""
],
[
"Ahmed",
"Imtiaz",
""
],
[
"Das",
"Srinjoy",
""
]
] | TITLE: Towards Efficient Real-Time Video Motion Transfer via Generative Time
Series Modeling
ABSTRACT: We propose a deep learning framework designed to significantly optimize
bandwidth for motion-transfer-enabled video applications, including video
conferencing, virtual reality interactions, health monitoring systems, and
vision-based real-time anomaly detection. To capture complex motion
effectively, we utilize the First Order Motion Model (FOMM), which encodes
dynamic objects by detecting keypoints and their associated local affine
transformations. These keypoints are identified using a self-supervised
keypoint detector and arranged into a time series corresponding to the
successive frames. Forecasting is performed on these keypoints by integrating
two advanced generative time series models into the motion transfer pipeline,
namely the Variational Recurrent Neural Network (VRNN) and the Gated Recurrent
Unit with Normalizing Flow (GRU-NF). The predicted keypoints are subsequently
synthesized into realistic video frames using an optical flow estimator paired
with a generator network, thereby facilitating accurate video forecasting and
enabling efficient, low-frame-rate video transmission. We validate our results
across three datasets for video animation and reconstruction using the
following metrics: Mean Absolute Error, Joint Embedding Predictive Architecture
Embedding Distance, Structural Similarity Index, and Average Pair-wise
Displacement. Our results confirm that by utilizing the superior reconstruction
property of the Variational Autoencoder, the VRNN integrated FOMM excels in
applications involving multi-step ahead forecasts such as video conferencing.
On the other hand, by leveraging the Normalizing Flow architecture for exact
likelihood estimation, and enabling efficient latent space sampling, the GRU-NF
based FOMM exhibits superior capabilities for producing diverse future samples
while maintaining high visual quality for tasks like real-time video-based
anomaly detection.
|
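A plain deterministic GRU forecaster for keypoint time series gives a feel for
the pipeline's prediction stage; the paper instead uses generative models
(VRNN, GRU-NF) so that diverse futures can be sampled rather than a single
point estimate:

```python
import torch
import torch.nn as nn

class KeypointForecaster(nn.Module):
    """GRU that predicts the next frame's keypoints from a short history."""

    def __init__(self, n_keypoints=10, hidden=128):
        super().__init__()
        self.gru = nn.GRU(n_keypoints * 2, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_keypoints * 2)

    def forward(self, kp_seq):                            # (B, T, K, 2)
        B, T, K, _ = kp_seq.shape
        h, _ = self.gru(kp_seq.reshape(B, T, K * 2))
        return self.head(h[:, -1]).reshape(B, K, 2)       # next-frame keypoints

model = KeypointForecaster()
pred = model(torch.randn(4, 16, 10, 2))                   # -> (4, 10, 2)
```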
2504.05559 | Erzhuo Shao | Erzhuo Shao, Yifang Wang, Yifan Qian, Zhenyu Pan, Han Liu, Dashun Wang | SciSciGPT: Advancing Human-AI Collaboration in the Science of Science | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The increasing availability of large-scale datasets has fueled rapid progress
across many scientific fields, creating unprecedented opportunities for
research and discovery while posing significant analytical challenges. Recent
advances in large language models (LLMs) and AI agents have opened new
possibilities for human-AI collaboration, offering powerful tools to navigate
this complex research landscape. In this paper, we introduce SciSciGPT, an
open-source, prototype AI collaborator that uses the science of science as a
testbed to explore the potential of LLM-powered research tools. SciSciGPT
automates complex workflows, supports diverse analytical approaches,
accelerates research prototyping and iteration, and facilitates
reproducibility. Through case studies, we demonstrate its ability to streamline
a wide range of empirical and analytical research tasks while highlighting its
broader potential to advance research. We further propose an LLM Agent
capability maturity model for human-AI collaboration, envisioning a roadmap to
further improve and expand upon frameworks like SciSciGPT. As AI capabilities
continue to evolve, frameworks like SciSciGPT may play increasingly pivotal
roles in scientific research and discovery, unlocking further opportunities. At
the same time, these new advances also raise critical challenges, from ensuring
transparency and ethical use to balancing human and AI contributions.
Addressing these issues may shape the future of scientific inquiry and inform
how we train the next generation of scientists to thrive in an increasingly
AI-integrated research ecosystem.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 23:19:39 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Shao",
"Erzhuo",
""
],
[
"Wang",
"Yifang",
""
],
[
"Qian",
"Yifan",
""
],
[
"Pan",
"Zhenyu",
""
],
[
"Liu",
"Han",
""
],
[
"Wang",
"Dashun",
""
]
] | TITLE: SciSciGPT: Advancing Human-AI Collaboration in the Science of Science
ABSTRACT: The increasing availability of large-scale datasets has fueled rapid progress
across many scientific fields, creating unprecedented opportunities for
research and discovery while posing significant analytical challenges. Recent
advances in large language models (LLMs) and AI agents have opened new
possibilities for human-AI collaboration, offering powerful tools to navigate
this complex research landscape. In this paper, we introduce SciSciGPT, an
open-source, prototype AI collaborator that uses the science of science as a
testbed to explore the potential of LLM-powered research tools. SciSciGPT
automates complex workflows, supports diverse analytical approaches,
accelerates research prototyping and iteration, and facilitates
reproducibility. Through case studies, we demonstrate its ability to streamline
a wide range of empirical and analytical research tasks while highlighting its
broader potential to advance research. We further propose an LLM Agent
capability maturity model for human-AI collaboration, envisioning a roadmap to
further improve and expand upon frameworks like SciSciGPT. As AI capabilities
continue to evolve, frameworks like SciSciGPT may play increasingly pivotal
roles in scientific research and discovery, unlocking further opportunities. At
the same time, these new advances also raise critical challenges, from ensuring
transparency and ethical use to balancing human and AI contributions.
Addressing these issues may shape the future of scientific inquiry and inform
how we train the next generation of scientists to thrive in an increasingly
AI-integrated research ecosystem.
|
2504.05565 | Xu Huang | Xu Huang, Bowen Deng, Peichen Zhong, Aaron D. Kaplan, Kristin A.
Persson, Gerbrand Ceder | Cross-functional transferability in universal machine learning
interatomic potentials | null | null | null | null | cond-mat.mtrl-sci cs.LG | http://creativecommons.org/licenses/by/4.0/ | The rapid development of universal machine learning interatomic potentials
(uMLIPs) has demonstrated the possibility for generalizable learning of the
universal potential energy surface. In principle, the accuracy of uMLIPs can be
further improved by bridging the model from lower-fidelity datasets to
high-fidelity ones. In this work, we analyze the challenge of this transfer
learning problem within the CHGNet framework. We show that significant energy
scale shifts and poor correlations between GGA and r$^2$SCAN pose challenges to
cross-functional data transferability in uMLIPs. By benchmarking different
transfer learning approaches on the MP-r$^2$SCAN dataset of 0.24 million
structures, we demonstrate the importance of elemental energy referencing in
the transfer learning of uMLIPs. By comparing the scaling law with and without
the pre-training on a low-fidelity dataset, we show that significant data
efficiency can still be achieved through transfer learning, even with a target
dataset of sub-million structures. We highlight the importance of proper
transfer learning and multi-fidelity learning in creating next-generation
uMLIPs on high-fidelity data.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 23:45:40 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Huang",
"Xu",
""
],
[
"Deng",
"Bowen",
""
],
[
"Zhong",
"Peichen",
""
],
[
"Kaplan",
"Aaron D.",
""
],
[
"Persson",
"Kristin A.",
""
],
[
"Ceder",
"Gerbrand",
""
]
] | TITLE: Cross-functional transferability in universal machine learning
interatomic potentials
ABSTRACT: The rapid development of universal machine learning interatomic potentials
(uMLIPs) has demonstrated the possibility for generalizable learning of the
universal potential energy surface. In principle, the accuracy of uMLIPs can be
further improved by bridging the model from lower-fidelity datasets to
high-fidelity ones. In this work, we analyze the challenge of this transfer
learning problem within the CHGNet framework. We show that significant energy
scale shifts and poor correlations between GGA and r$^2$SCAN pose challenges to
cross-functional data transferability in uMLIPs. By benchmarking different
transfer learning approaches on the MP-r$^2$SCAN dataset of 0.24 million
structures, we demonstrate the importance of elemental energy referencing in
the transfer learning of uMLIPs. By comparing the scaling law with and without
the pre-training on a low-fidelity dataset, we show that significant data
efficiency can still be achieved through transfer learning, even with a target
dataset of sub-million structures. We highlight the importance of proper
transfer learning and multi-fidelity learning in creating next-generation
uMLIPs on high-fidelity data.
|
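Elemental energy referencing, highlighted in the record above, amounts to
fitting per-element reference energies by least squares and subtracting the
composition model before transfer learning. A sketch under that reading; the
paper's exact recipe may differ:

```python
import numpy as np

def fit_elemental_references(compositions, energies):
    """Least-squares per-element reference energies.

    compositions: (n_structures, n_elements) element-count matrix
    energies:     (n_structures,) total energies for one functional
    """
    ref, *_ = np.linalg.lstsq(compositions, energies, rcond=None)
    return ref

comps = np.array([[2.0, 1.0], [1.0, 1.0], [0.0, 2.0]])   # counts of elements A, B
E = np.array([-12.1, -8.0, -6.2])
referenced = E - comps @ fit_elemental_references(comps, E)
```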
2504.05571 | Menachem Brief | Oded Ovadia, Meni Brief, Rachel Lemberg, Eitam Sheetrit | Knowledge-Instruct: Effective Continual Pre-training from Limited Data
using Instructions | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | While Large Language Models (LLMs) acquire vast knowledge during
pre-training, they often lack domain-specific, new, or niche information.
Continual pre-training (CPT) attempts to address this gap but suffers from
catastrophic forgetting and inefficiencies in low-data regimes. We introduce
Knowledge-Instruct, a novel approach to efficiently inject knowledge from
limited corpora through pure instruction-tuning. By generating
information-dense synthetic instruction data, it effectively integrates new
knowledge while preserving general reasoning and instruction-following
abilities. Knowledge-Instruct demonstrates superior factual memorization,
minimizes catastrophic forgetting, and remains scalable by leveraging synthetic
data from relatively small language models. Additionally, it enhances
contextual understanding, including complex multi-hop reasoning, facilitating
integration with retrieval systems. We validate its effectiveness across
diverse benchmarks, including Companies, a new dataset that we release to
measure knowledge injection capabilities.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 00:00:36 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Ovadia",
"Oded",
""
],
[
"Brief",
"Meni",
""
],
[
"Lemberg",
"Rachel",
""
],
[
"Sheetrit",
"Eitam",
""
]
] | TITLE: Knowledge-Instruct: Effective Continual Pre-training from Limited Data
using Instructions
ABSTRACT: While Large Language Models (LLMs) acquire vast knowledge during
pre-training, they often lack domain-specific, new, or niche information.
Continual pre-training (CPT) attempts to address this gap but suffers from
catastrophic forgetting and inefficiencies in low-data regimes. We introduce
Knowledge-Instruct, a novel approach to efficiently inject knowledge from
limited corpora through pure instruction-tuning. By generating
information-dense synthetic instruction data, it effectively integrates new
knowledge while preserving general reasoning and instruction-following
abilities. Knowledge-Instruct demonstrates superior factual memorization,
minimizes catastrophic forgetting, and remains scalable by leveraging synthetic
data from relatively small language models. Additionally, it enhances
contextual understanding, including complex multi-hop reasoning, facilitating
integration with retrieval systems. We validate its effectiveness across
diverse benchmarks, including Companies, a new dataset that we release to
measure knowledge injection capabilities.
|
2504.05575 | Chris McCarthy | Belal Alsinglawi, Chris McCarthy, Sara Webb, Christopher Fluke, Navid
Toosy Saidy | A Lightweight Large Vision-language Model for Multimodal Medical Images | 10 pages, 4 figures | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Medical Visual Question Answering (VQA) enhances clinical decision-making by
enabling systems to interpret medical images and answer clinical queries.
However, developing efficient, high-performance VQA models is challenging due
to the complexity of medical imagery and diverse modalities. In this paper, we
introduce a lightweight, multimodal VQA model integrating BiomedCLIP for image
feature extraction and LLaMA-3 for text processing. Designed for medical VQA
tasks, our model achieves state-of-the-art performance on the OmniMedVQA
dataset. With approximately 8 billion parameters, it requires only two NVIDIA
40 GB A100 GPUs, demonstrating superior efficiency over larger models. Our
results show 73.4% accuracy for open-ended questions, surpassing existing models
and validating its potential for real-world medical applications. Key
contributions include a specialized multimodal VQA model, a resource-efficient
architecture, and strong performance in answering open-ended clinical
questions.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 00:19:48 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Alsinglawi",
"Belal",
""
],
[
"McCarthy",
"Chris",
""
],
[
"Webb",
"Sara",
""
],
[
"Fluke",
"Christopher",
""
],
[
"Saidy",
"Navid Toosy",
""
]
] | TITLE: A Lightweight Large Vision-language Model for Multimodal Medical Images
ABSTRACT: Medical Visual Question Answering (VQA) enhances clinical decision-making by
enabling systems to interpret medical images and answer clinical queries.
However, developing efficient, high-performance VQA models is challenging due
to the complexity of medical imagery and diverse modalities. In this paper, we
introduce a lightweight, multimodal VQA model integrating BiomedCLIP for image
feature extraction and LLaMA-3 for text processing. Designed for medical VQA
tasks, our model achieves state-of-the-art performance on the OmniMedVQA
dataset. With approximately 8 billion parameters, it requires only two NVIDIA
40 GB A100 GPUs, demonstrating superior efficiency over larger models. Our
results show 73.4% accuracy for open-ended questions, surpassing existing models
and validating its potential for real-world medical applications. Key
contributions include a specialized multimodal VQA model, a resource-efficient
architecture, and strong performance in answering open-ended clinical
questions.
|
2504.05583 | Jiahang Li | Jiahang Li, Shibo Xue and Yong Su | Gaze-Guided Learning: Avoiding Shortcut Bias in Visual Classification | 10 pages, 5 figures, 3 tables, URL:
https://szyyjl.github.io/eye_tracking_data.github.io/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Inspired by human visual attention, deep neural networks have widely adopted
attention mechanisms to learn locally discriminative attributes for challenging
visual classification tasks. However, existing approaches primarily emphasize
the representation of such features while neglecting their precise
localization, which often leads to misclassification caused by shortcut biases.
This limitation becomes even more pronounced when models are evaluated on
transfer or out-of-distribution datasets. In contrast, humans are capable of
leveraging prior object knowledge to quickly localize and compare fine-grained
attributes, a capability that is especially crucial in complex and
high-variance classification scenarios. Motivated by this, we introduce
Gaze-CIFAR-10, a human gaze time-series dataset, along with a dual-sequence
gaze encoder that models the precise sequential localization of human attention
on distinct local attributes. In parallel, a Vision Transformer (ViT) is
employed to learn the sequential representation of image content. Through
cross-modal fusion, our framework integrates human gaze priors with
machine-derived visual sequences, effectively correcting inaccurate
localization in image feature representations. Extensive qualitative and
quantitative experiments demonstrate that gaze-guided cognitive cues
significantly enhance classification accuracy.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 00:40:46 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Li",
"Jiahang",
""
],
[
"Xue",
"Shibo",
""
],
[
"Su",
"Yong",
""
]
] | TITLE: Gaze-Guided Learning: Avoiding Shortcut Bias in Visual Classification
ABSTRACT: Inspired by human visual attention, deep neural networks have widely adopted
attention mechanisms to learn locally discriminative attributes for challenging
visual classification tasks. However, existing approaches primarily emphasize
the representation of such features while neglecting their precise
localization, which often leads to misclassification caused by shortcut biases.
This limitation becomes even more pronounced when models are evaluated on
transfer or out-of-distribution datasets. In contrast, humans are capable of
leveraging prior object knowledge to quickly localize and compare fine-grained
attributes, a capability that is especially crucial in complex and
high-variance classification scenarios. Motivated by this, we introduce
Gaze-CIFAR-10, a human gaze time-series dataset, along with a dual-sequence
gaze encoder that models the precise sequential localization of human attention
on distinct local attributes. In parallel, a Vision Transformer (ViT) is
employed to learn the sequential representation of image content. Through
cross-modal fusion, our framework integrates human gaze priors with
machine-derived visual sequences, effectively correcting inaccurate
localization in image feature representations. Extensive qualitative and
quantitative experiments demonstrate that gaze-guided cognitive cues
significantly enhance classification accuracy.
|
2504.05591 | Tejas Sudharshan Mathai | Peter D. Erickson, Tejas Sudharshan Mathai, Ronald M. Summers | Class Imbalance Correction for Improved Universal Lesion Detection and
Tagging in CT | Published at MICCAI MILLAND Workshop 2022 | null | null | null | eess.IV cs.AI cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Radiologists routinely detect and size lesions in CT to stage cancer and
assess tumor burden. To potentially aid their efforts, multiple lesion
detection algorithms have been developed with a large public dataset called
DeepLesion (32,735 lesions, 32,120 CT slices, 10,594 studies, 4,427 patients, 8
body part labels). However, this dataset contains missing measurements and
lesion tags, and exhibits a severe imbalance in the number of lesions per label
category. In this work, we utilize a limited subset of DeepLesion (6\%, 1331
lesions, 1309 slices) containing lesion annotations and body part label tags to
train a VFNet model to detect lesions and tag them. We address the class
imbalance by conducting three experiments: 1) Balancing data by the body part
labels, 2) Balancing data by the number of lesions per patient, and 3)
Balancing data by the lesion size. In contrast to a randomly sampled
(unbalanced) data subset, our results indicated that balancing the body part
labels always increased sensitivity for lesions >= 1cm for classes with low
data quantities (Bone: 80\% vs. 46\%, Kidney: 77\% vs. 61\%, Soft Tissue: 70\%
vs. 60\%, Pelvis: 83\% vs. 76\%). Similar trends were seen for three other
models tested (FasterRCNN, RetinaNet, FoveaBox). Balancing data by lesion size
also helped the VFNet model improve recalls for all classes in contrast to an
unbalanced dataset. We also provide a structured reporting guideline for a
``Lesions'' subsection to be entered into the ``Findings'' section of a
radiology report. To our knowledge, we are the first to report the class
imbalance in DeepLesion, and have taken data-driven steps to address it in the
context of joint lesion detection and tagging.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 00:58:26 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Erickson",
"Peter D.",
""
],
[
"Mathai",
"Tejas Sudharshan",
""
],
[
"Summers",
"Ronald M.",
""
]
] | TITLE: Class Imbalance Correction for Improved Universal Lesion Detection and
Tagging in CT
ABSTRACT: Radiologists routinely detect and size lesions in CT to stage cancer and
assess tumor burden. To potentially aid their efforts, multiple lesion
detection algorithms have been developed with a large public dataset called
DeepLesion (32,735 lesions, 32,120 CT slices, 10,594 studies, 4,427 patients, 8
body part labels). However, this dataset contains missing measurements and
lesion tags, and exhibits a severe imbalance in the number of lesions per label
category. In this work, we utilize a limited subset of DeepLesion (6\%, 1331
lesions, 1309 slices) containing lesion annotations and body part label tags to
train a VFNet model to detect lesions and tag them. We address the class
imbalance by conducting three experiments: 1) Balancing data by the body part
labels, 2) Balancing data by the number of lesions per patient, and 3)
Balancing data by the lesion size. In contrast to a randomly sampled
(unbalanced) data subset, our results indicated that balancing the body part
labels always increased sensitivity for lesions >= 1cm for classes with low
data quantities (Bone: 80\% vs. 46\%, Kidney: 77\% vs. 61\%, Soft Tissue: 70\%
vs. 60\%, Pelvis: 83\% vs. 76\%). Similar trends were seen for three other
models tested (FasterRCNN, RetinaNet, FoveaBox). Balancing data by lesion size
also helped the VFNet model improve recalls for all classes in contrast to an
unbalanced dataset. We also provide a structured reporting guideline for a
``Lesions'' subsection to be entered into the ``Findings'' section of a
radiology report. To our knowledge, we are the first to report the class
imbalance in DeepLesion, and have taken data-driven steps to address it in the
context of joint lesion detection and tagging.
|
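As a minimal illustration of the label-balancing strategy described in the entry above, the sketch below upsamples rare body-part classes with inverse-frequency weights. It is a hypothetical Python example, not the authors' code; the sample records and label names are placeholders.

import random
from collections import Counter

# Hypothetical lesion records; only the body-part label matters here.
samples = [
    {"id": 0, "label": "bone"}, {"id": 1, "label": "kidney"},
    {"id": 2, "label": "lung"}, {"id": 3, "label": "lung"},
    {"id": 4, "label": "lung"}, {"id": 5, "label": "soft_tissue"},
]

counts = Counter(s["label"] for s in samples)          # per-class frequencies
weights = [1.0 / counts[s["label"]] for s in samples]  # inverse-frequency weights

# Draw one balanced training epoch: rare classes are upsampled and common
# classes downsampled, equalizing the expected label counts.
balanced_epoch = random.choices(samples, weights=weights, k=len(samples))
print(Counter(s["label"] for s in balanced_epoch))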
2504.05601 | Zhenteng Li | Zhenteng Li, Sheng Lian, Dengfeng Pan, Youlin Wang, Wei Liu | AD-Det: Boosting Object Detection in UAV Images with Focused Small
Objects and Balanced Tail Classes | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Object detection in Unmanned Aerial Vehicle (UAV) images poses significant
challenges due to complex scale variations and class imbalance among objects.
Existing methods often address these challenges separately, overlooking the
intricate nature of UAV images and the potential synergy between them. In
response, this paper proposes AD-Det, a novel framework employing a coherent
coarse-to-fine strategy that seamlessly integrates two pivotal components:
Adaptive Small Object Enhancement (ASOE) and Dynamic Class-balanced Copy-paste
(DCC). ASOE utilizes a high-resolution feature map to identify and cluster
regions containing small objects. These regions are subsequently enlarged and
processed by a fine-grained detector. On the other hand, DCC conducts
object-level resampling by dynamically pasting tail classes around the cluster
centers obtained by ASOE, maintaining a dynamic memory bank for each tail
class. This approach enables AD-Det to not only extract regions with small
objects for precise detection but also dynamically perform reasonable
resampling for tail-class objects. Consequently, AD-Det enhances the overall
detection performance by addressing the challenges of scale variations and
class imbalance in UAV images through a synergistic and adaptive framework. We
extensively evaluate our approach on two public datasets, i.e., VisDrone and
UAVDT, and demonstrate that AD-Det significantly outperforms existing
competitive alternatives. Notably, AD-Det achieves a 37.5% Average Precision
(AP) on the VisDrone dataset, surpassing its counterparts by at least 3.1%.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 01:22:52 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Li",
"Zhenteng",
""
],
[
"Lian",
"Sheng",
""
],
[
"Pan",
"Dengfeng",
""
],
[
"Wang",
"Youlin",
""
],
[
"Liu",
"Wei",
""
]
] | TITLE: AD-Det: Boosting Object Detection in UAV Images with Focused Small
Objects and Balanced Tail Classes
ABSTRACT: Object detection in Unmanned Aerial Vehicle (UAV) images poses significant
challenges due to complex scale variations and class imbalance among objects.
Existing methods often address these challenges separately, overlooking the
intricate nature of UAV images and the potential synergy between them. In
response, this paper proposes AD-Det, a novel framework employing a coherent
coarse-to-fine strategy that seamlessly integrates two pivotal components:
Adaptive Small Object Enhancement (ASOE) and Dynamic Class-balanced Copy-paste
(DCC). ASOE utilizes a high-resolution feature map to identify and cluster
regions containing small objects. These regions are subsequently enlarged and
processed by a fine-grained detector. On the other hand, DCC conducts
object-level resampling by dynamically pasting tail classes around the cluster
centers obtained by ASOE, maintaining a dynamic memory bank for each tail
class. This approach enables AD-Det to not only extract regions with small
objects for precise detection but also dynamically perform reasonable
resampling for tail-class objects. Consequently, AD-Det enhances the overall
detection performance by addressing the challenges of scale variations and
class imbalance in UAV images through a synergistic and adaptive framework. We
extensively evaluate our approach on two public datasets, i.e., VisDrone and
UAVDT, and demonstrate that AD-Det significantly outperforms existing
competitive alternatives. Notably, AD-Det achieves a 37.5% Average Precision
(AP) on the VisDrone dataset, surpassing its counterparts by at least 3.1%.
|
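The dynamic class-balanced copy-paste idea in the AD-Det entry above can be sketched as follows: keep a bounded per-class memory bank of tail-object crops and paste banked crops near cluster centers. This is an assumed simplification in Python/NumPy, not the paper's implementation; the tail-class name and all helper names are hypothetical.

import numpy as np
from collections import defaultdict, deque

memory_bank = defaultdict(lambda: deque(maxlen=50))   # dynamic per-class bank

def store(label, crop):
    memory_bank[label].append(crop)

def paste_tail_objects(image, centers, tail_labels, rng=np.random.default_rng(0)):
    # Paste one banked crop of each tail class near a random cluster center.
    out = image.copy()
    for label in tail_labels:
        bank = memory_bank[label]
        if not bank:
            continue
        crop = bank[int(rng.integers(len(bank)))]
        cy, cx = centers[int(rng.integers(len(centers)))]
        h, w = crop.shape[:2]
        y0, x0 = max(cy - h // 2, 0), max(cx - w // 2, 0)
        y1, x1 = min(y0 + h, out.shape[0]), min(x0 + w, out.shape[1])
        out[y0:y1, x0:x1] = crop[: y1 - y0, : x1 - x0]
    return out

store("awning-tricycle", np.full((8, 8, 3), 255, dtype=np.uint8))
augmented = paste_tail_objects(np.zeros((64, 64, 3), dtype=np.uint8),
                               centers=[(32, 32)], tail_labels=["awning-tricycle"])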
2504.05603 | Majdi Radaideh | Naman Bhargava, Mohammed I. Radaideh, O Hwang Kwon, Aditi Verma, Majdi
I. Radaideh | On the Impact of Language Nuances on Sentiment Analysis with Large
Language Models: Paraphrasing, Sarcasm, and Emojis | 21 pages, 10 Tables, 5 figures | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Large Language Models (LLMs) have demonstrated impressive performance across
various tasks, including sentiment analysis. However, data
quality--particularly when sourced from social media--can significantly impact
their accuracy. This research explores how textual nuances, including emojis
and sarcasm, affect sentiment analysis, with a particular focus on improving
data quality through text paraphrasing techniques. To address the lack of
labeled sarcasm data, the authors created a human-labeled dataset of 5929
tweets that enabled the assessment of LLMs in various sarcasm contexts. The
results show that when topic-specific datasets, such as those related to
nuclear power, are used to fine-tune LLMs, these models are not able to
accurately comprehend sentiment in the presence of sarcasm due to less diverse text,
requiring external interventions like sarcasm removal to boost model accuracy.
Sarcasm removal led to up to 21% improvement in sentiment accuracy, as LLMs
trained on nuclear power-related content struggled with sarcastic tweets,
achieving only 30% accuracy. In contrast, LLMs trained on general tweet
datasets, covering a broader range of topics, showed considerable improvements
in predicting sentiment for sarcastic tweets (60% accuracy), indicating that
incorporating general text data can enhance sarcasm detection. The study also
utilized adversarial text augmentation, showing that creating synthetic text
variants by making minor changes significantly increased model robustness and
accuracy for sarcastic tweets (approximately 85%). Additionally, text
paraphrasing of tweets with fragmented language transformed around 40% of the
tweets with low-confidence labels into high-confidence ones, improving LLMs'
sentiment analysis accuracy by 6%.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 01:29:58 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Bhargava",
"Naman",
""
],
[
"Radaideh",
"Mohammed I.",
""
],
[
"Kwon",
"O Hwang",
""
],
[
"Verma",
"Aditi",
""
],
[
"Radaideh",
"Majdi I.",
""
]
] | TITLE: On the Impact of Language Nuances on Sentiment Analysis with Large
Language Models: Paraphrasing, Sarcasm, and Emojis
ABSTRACT: Large Language Models (LLMs) have demonstrated impressive performance across
various tasks, including sentiment analysis. However, data
quality--particularly when sourced from social media--can significantly impact
their accuracy. This research explores how textual nuances, including emojis
and sarcasm, affect sentiment analysis, with a particular focus on improving
data quality through text paraphrasing techniques. To address the lack of
labeled sarcasm data, the authors created a human-labeled dataset of 5929
tweets that enabled the assessment of LLMs in various sarcasm contexts. The
results show that when topic-specific datasets, such as those related to
nuclear power, are used to fine-tune LLMs, these models are not able to
accurately comprehend sentiment in the presence of sarcasm due to less diverse text,
requiring external interventions like sarcasm removal to boost model accuracy.
Sarcasm removal led to up to 21% improvement in sentiment accuracy, as LLMs
trained on nuclear power-related content struggled with sarcastic tweets,
achieving only 30% accuracy. In contrast, LLMs trained on general tweet
datasets, covering a broader range of topics, showed considerable improvements
in predicting sentiment for sarcastic tweets (60% accuracy), indicating that
incorporating general text data can enhance sarcasm detection. The study also
utilized adversarial text augmentation, showing that creating synthetic text
variants by making minor changes significantly increased model robustness and
accuracy for sarcastic tweets (approximately 85%). Additionally, text
paraphrasing of tweets with fragmented language transformed around 40% of the
tweets with low-confidence labels into high-confidence ones, improving LLMs'
sentiment analysis accuracy by 6%.
|
2504.05607 | Qian-Wen Zhang | Qian-Wen Zhang, Fang Li, Jie Wang, Lingfeng Qiao, Yifei Yu, Di Yin and
Xing Sun | FactGuard: Leveraging Multi-Agent Systems to Generate Answerable and
Unanswerable Questions for Enhanced Long-Context LLM Extraction | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Extractive reading comprehension systems are designed to locate the correct
answer to a question within a given text. However, a persistent challenge lies
in ensuring these models maintain high accuracy in answering questions while
reliably recognizing unanswerable queries. Despite significant advances in
large language models (LLMs) for reading comprehension, this issue remains
critical, particularly as the length of supported contexts continues to expand.
To address this challenge, we propose an innovative data augmentation
methodology grounded in a multi-agent collaborative framework. Unlike
traditional methods, such as the costly human annotation process required for
datasets like SQuAD 2.0, our method autonomously generates evidence-based
question-answer pairs and systematically constructs unanswerable questions.
Using this methodology, we developed the FactGuard-Bench dataset, which
comprises 25,220 examples of both answerable and unanswerable question
scenarios, with context lengths ranging from 8K to 128K. Experimental
evaluations conducted on seven popular LLMs reveal that even the most advanced
models achieve only 61.79% overall accuracy. Furthermore, we emphasize the
importance of a model's ability to reason about unanswerable questions to avoid
generating plausible but incorrect answers. By implementing efficient data
selection and generation within the multi-agent collaborative framework, our
method significantly reduces the traditionally high costs associated with
manual annotation and provides valuable insights for the training and
optimization of LLMs.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 01:45:16 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Zhang",
"Qian-Wen",
""
],
[
"Li",
"Fang",
""
],
[
"Wang",
"Jie",
""
],
[
"Qiao",
"Lingfeng",
""
],
[
"Yu",
"Yifei",
""
],
[
"Yin",
"Di",
""
],
[
"Sun",
"Xing",
""
]
] | TITLE: FactGuard: Leveraging Multi-Agent Systems to Generate Answerable and
Unanswerable Questions for Enhanced Long-Context LLM Extraction
ABSTRACT: Extractive reading comprehension systems are designed to locate the correct
answer to a question within a given text. However, a persistent challenge lies
in ensuring these models maintain high accuracy in answering questions while
reliably recognizing unanswerable queries. Despite significant advances in
large language models (LLMs) for reading comprehension, this issue remains
critical, particularly as the length of supported contexts continues to expand.
To address this challenge, we propose an innovative data augmentation
methodology grounded in a multi-agent collaborative framework. Unlike
traditional methods, such as the costly human annotation process required for
datasets like SQuAD 2.0, our method autonomously generates evidence-based
question-answer pairs and systematically constructs unanswerable questions.
Using this methodology, we developed the FactGuard-Bench dataset, which
comprises 25,220 examples of both answerable and unanswerable question
scenarios, with context lengths ranging from 8K to 128K. Experimental
evaluations conducted on seven popular LLMs reveal that even the most advanced
models achieve only 61.79% overall accuracy. Furthermore, we emphasize the
importance of a model's ability to reason about unanswerable questions to avoid
generating plausible but incorrect answers. By implementing efficient data
selection and generation within the multi-agent collaborative framework, our
method significantly reduces the traditionally high costs associated with
manual annotation and provides valuable insights for the training and
optimization of LLMs.
|
2504.05610 | Seokhyun Chung | Arafat Rahman, Sol Lim, Seokhyun Chung | Fairness in Machine Learning-based Hand Load Estimation: A Case Study on
Load Carriage Tasks | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Predicting external hand load from sensor data is essential for ergonomic
exposure assessments, as obtaining this information typically requires direct
observation or supplementary data. While machine learning methods have been
used to estimate external hand load from worker postures or force exertion
data, our findings reveal systematic bias in these predictions due to
individual differences such as age and biological sex. To explore this issue,
we examined bias in hand load prediction by varying the sex ratio in the
training dataset. We found substantial sex disparity in predictive performance,
especially when the training dataset is more sex-imbalanced. To address this
bias, we developed and evaluated a fair predictive model for hand load
estimation that leverages a Variational Autoencoder (VAE) with feature
disentanglement. This approach is designed to separate sex-agnostic and
sex-specific latent features, minimizing feature overlap. The disentanglement
capability enables the model to make predictions based solely on sex-agnostic
features of motion patterns, ensuring fair prediction for both biological
sexes. Our proposed fair algorithm outperformed conventional machine learning
methods (e.g., Random Forests) in both fairness and predictive accuracy,
achieving a lower mean absolute error (MAE) difference across male and female
sets and improved fairness metrics such as statistical parity (SP) and positive
and negative residual differences (PRD and NRD), even when trained on
imbalanced sex datasets. These findings emphasize the importance of
fairness-aware machine learning algorithms to prevent potential disadvantages
in workplace health and safety for certain worker populations.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 01:55:40 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Rahman",
"Arafat",
""
],
[
"Lim",
"Sol",
""
],
[
"Chung",
"Seokhyun",
""
]
] | TITLE: Fairness in Machine Learning-based Hand Load Estimation: A Case Study on
Load Carriage Tasks
ABSTRACT: Predicting external hand load from sensor data is essential for ergonomic
exposure assessments, as obtaining this information typically requires direct
observation or supplementary data. While machine learning methods have been
used to estimate external hand load from worker postures or force exertion
data, our findings reveal systematic bias in these predictions due to
individual differences such as age and biological sex. To explore this issue,
we examined bias in hand load prediction by varying the sex ratio in the
training dataset. We found substantial sex disparity in predictive performance,
especially when the training dataset is more sex-imbalanced. To address this
bias, we developed and evaluated a fair predictive model for hand load
estimation that leverages a Variational Autoencoder (VAE) with feature
disentanglement. This approach is designed to separate sex-agnostic and
sex-specific latent features, minimizing feature overlap. The disentanglement
capability enables the model to make predictions based solely on sex-agnostic
features of motion patterns, ensuring fair prediction for both biological
sexes. Our proposed fair algorithm outperformed conventional machine learning
methods (e.g., Random Forests) in both fairness and predictive accuracy,
achieving a lower mean absolute error (MAE) difference across male and female
sets and improved fairness metrics such as statistical parity (SP) and positive
and negative residual differences (PRD and NRD), even when trained on
imbalanced sex datasets. These findings emphasize the importance of
fairness-aware machine learning algorithms to prevent potential disadvantages
in workplace health and safety for certain worker populations.
|
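The latent-split idea in the fairness entry above can be illustrated with a minimal PyTorch sketch: the encoder emits a sex-agnostic block and a sex-specific block, and the load predictor reads only the agnostic block. Training losses (reconstruction, KL, and any disentanglement penalty) are omitted, and all dimensions and names are assumptions, not the authors' architecture.

import torch
import torch.nn as nn

class SplitLatentVAE(nn.Module):
    def __init__(self, in_dim=64, z_agnostic=8, z_specific=4):
        super().__init__()
        z = z_agnostic + z_specific
        self.enc = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, z)
        self.logvar = nn.Linear(128, z)
        self.dec = nn.Sequential(nn.Linear(z, 128), nn.ReLU(), nn.Linear(128, in_dim))
        self.head = nn.Linear(z_agnostic, 1)  # load predicted from agnostic block only
        self.z_agnostic = z_agnostic

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterize
        z_a, z_s = z[:, : self.z_agnostic], z[:, self.z_agnostic :]
        return self.dec(z), self.head(z_a), (mu, logvar, z_a, z_s)

x = torch.randn(16, 64)
recon, load_pred, _ = SplitLatentVAE()(x)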
2504.05618 | Jiawei Duan | Jiawei Duan, Haibo Hu, Qingqing Ye and Xinyue Sun | Technical Report: Full Version of Analyzing and Optimizing Perturbation
of DP-SGD Geometrically | This is the full version of our paper "Analyzing and Optimizing
Perturbation of DP-SGD Geometrically", which will appear in ICDE 2025 as a
regular research paper | International Conference of Data Engineering (ICDE 2025) | null | null | cs.LG cs.AI cs.CV cs.DB | http://creativecommons.org/licenses/by/4.0/ | Differential privacy (DP) has become a prevalent privacy model in a wide
range of machine learning tasks, especially after the debut of DP-SGD. However,
DP-SGD, which directly perturbs gradients in the training iterations, fails to
mitigate the negative impacts of noise on gradient direction. As a result,
DP-SGD is often inefficient. Although various solutions (e.g., clipping to
reduce the sensitivity of gradients and amplifying privacy bounds to save
privacy budgets) are proposed to trade privacy for model efficiency, the root
cause of its inefficiency has yet to be unveiled.
In this work, we first generalize DP-SGD and theoretically derive the impact
of DP noise on the training process. Our analysis reveals that, in terms of a
perturbed gradient, only the noise on the direction has a prominent impact on the model
efficiency while that on magnitude can be mitigated by optimization techniques,
i.e., fine-tuning gradient clipping and learning rate. Besides, we confirm that
traditional DP introduces biased noise on the direction when adding unbiased
noise to the gradient itself. Overall, the perturbation of DP-SGD is actually
sub-optimal from a geometric perspective. Motivated by this, we design a
geometric perturbation strategy GeoDP within the DP framework, which perturbs
the direction and the magnitude of a gradient, respectively. By directly
reducing the noise on the direction, GeoDP mitigates the negative impact of DP
noise on model efficiency with the same DP guarantee. Extensive experiments on
two public datasets (i.e., MNIST and CIFAR-10), one synthetic dataset and three
prevalent models (i.e., Logistic Regression, CNN and ResNet) confirm the
effectiveness and generality of our strategy.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 02:26:10 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Duan",
"Jiawei",
""
],
[
"Hu",
"Haibo",
""
],
[
"Ye",
"Qingqing",
""
],
[
"Sun",
"Xinyue",
""
]
] | TITLE: Technical Report: Full Version of Analyzing and Optimizing Perturbation
of DP-SGD Geometrically
ABSTRACT: Differential privacy (DP) has become a prevalent privacy model in a wide
range of machine learning tasks, especially after the debut of DP-SGD. However,
DP-SGD, which directly perturbs gradients in the training iterations, fails to
mitigate the negative impacts of noise on gradient direction. As a result,
DP-SGD is often inefficient. Although various solutions (e.g., clipping to
reduce the sensitivity of gradients and amplifying privacy bounds to save
privacy budgets) are proposed to trade privacy for model efficiency, the root
cause of its inefficiency has yet to be unveiled.
In this work, we first generalize DP-SGD and theoretically derive the impact
of DP noise on the training process. Our analysis reveals that, in terms of a
perturbed gradient, only the noise on the direction has a prominent impact on the model
efficiency while that on magnitude can be mitigated by optimization techniques,
i.e., fine-tuning gradient clipping and learning rate. Besides, we confirm that
traditional DP introduces biased noise on the direction when adding unbiased
noise to the gradient itself. Overall, the perturbation of DP-SGD is actually
sub-optimal from a geometric perspective. Motivated by this, we design a
geometric perturbation strategy GeoDP within the DP framework, which perturbs
the direction and the magnitude of a gradient, respectively. By directly
reducing the noise on the direction, GeoDP mitigates the negative impact of DP
noise on model efficiency with the same DP guarantee. Extensive experiments on
two public datasets (i.e., MNIST and CIFAR-10), one synthetic dataset and three
prevalent models (i.e., Logistic Regression, CNN and ResNet) confirm the
effectiveness and generality of our strategy.
|
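A rough NumPy sketch of the geometric perturbation described above: rather than adding noise to the raw gradient as in DP-SGD, perturb its direction and its magnitude separately. The noise scales here are placeholders and are not calibrated to any (epsilon, delta) guarantee; this illustrates the idea, not the GeoDP mechanism itself.

import numpy as np

def geo_perturb(grad, sigma_dir=0.05, sigma_mag=0.05, rng=np.random.default_rng(0)):
    mag = np.linalg.norm(grad)
    direction = grad / (mag + 1e-12)
    noisy_dir = direction + rng.normal(0.0, sigma_dir, size=grad.shape)
    noisy_dir /= np.linalg.norm(noisy_dir) + 1e-12   # re-project to the unit sphere
    noisy_mag = mag + rng.normal(0.0, sigma_mag)
    return noisy_mag * noisy_dir

g = np.array([3.0, 4.0])
print(geo_perturb(g))   # close in direction to g, with mild magnitude noise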
2504.05623 | Mahmoud Afifi | Mahmoud Afifi, Luxi Zhao, Abhijith Punnappurath, Mohammed A.
Abdelsalam, Ran Zhang, Michael S. Brown | Time-Aware Auto White Balance in Mobile Photography | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cameras rely on auto white balance (AWB) to correct undesirable color casts
caused by scene illumination and the camera's spectral sensitivity. This is
typically achieved using an illuminant estimator that determines the global
color cast solely from the color information in the camera's raw sensor image.
Mobile devices provide valuable additional metadata (such as capture timestamp
and geolocation) that offers strong contextual clues to help narrow down the
possible illumination solutions. This paper proposes a lightweight illuminant
estimation method that incorporates such contextual metadata, along with
additional capture information and image colors, into a compact model (~5K
parameters), achieving promising results, matching or surpassing larger models.
To validate our method, we introduce a dataset of 3,224 smartphone images with
contextual metadata collected at various times of day and under diverse
lighting conditions. The dataset includes ground-truth illuminant colors,
determined using a color chart, and user-preferred illuminants validated
through a user study, providing a comprehensive benchmark for AWB evaluation.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 02:45:37 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Afifi",
"Mahmoud",
""
],
[
"Zhao",
"Luxi",
""
],
[
"Punnappurath",
"Abhijith",
""
],
[
"Abdelsalam",
"Mohammed A.",
""
],
[
"Zhang",
"Ran",
""
],
[
"Brown",
"Michael S.",
""
]
] | TITLE: Time-Aware Auto White Balance in Mobile Photography
ABSTRACT: Cameras rely on auto white balance (AWB) to correct undesirable color casts
caused by scene illumination and the camera's spectral sensitivity. This is
typically achieved using an illuminant estimator that determines the global
color cast solely from the color information in the camera's raw sensor image.
Mobile devices provide valuable additional metadata (such as capture timestamp
and geolocation) that offers strong contextual clues to help narrow down the
possible illumination solutions. This paper proposes a lightweight illuminant
estimation method that incorporates such contextual metadata, along with
additional capture information and image colors, into a compact model (~5K
parameters), achieving promising results, matching or surpassing larger models.
To validate our method, we introduce a dataset of 3,224 smartphone images with
contextual metadata collected at various times of day and under diverse
lighting conditions. The dataset includes ground-truth illuminant colors,
determined using a color chart, and user-preferred illuminants validated
through a user study, providing a comprehensive benchmark for AWB evaluation.
|
2504.05636 | Jungkyu Park | Jungkyu Park, Jan Witowski, Yanqi Xu, Hari Trivedi, Judy Gichoya,
Beatrice Brown-Mulry, Malte Westerhoff, Linda Moy, Laura Heacock, Alana
Lewin, Krzysztof J. Geras | A Multi-Modal AI System for Screening Mammography: Integrating 2D and 3D
Imaging to Improve Breast Cancer Detection in a Prospective Clinical Study | null | null | null | null | eess.IV cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Although digital breast tomosynthesis (DBT) improves diagnostic performance
over full-field digital mammography (FFDM), false-positive recalls remain a
concern in breast cancer screening. We developed a multi-modal artificial
intelligence system integrating FFDM, synthetic mammography, and DBT to provide
breast-level predictions and bounding-box localizations of suspicious findings.
Our AI system, trained on approximately 500,000 mammography exams, achieved
0.945 AUROC on an internal test set. It demonstrated the capacity to reduce recalls
by 31.7% and radiologist workload by 43.8% while maintaining 100% sensitivity,
underscoring its potential to improve clinical workflows. External validation
confirmed strong generalizability, reducing the gap to a perfect AUROC by
35.31%-69.14% relative to strong baselines. In prospective deployment across 18
sites, the system reduced recall rates for low-risk cases. An improved version,
trained on over 750,000 exams with additional labels, further reduced the gap
by 18.86%-56.62% across large external datasets. Overall, these results
underscore the importance of utilizing all available imaging modalities,
demonstrate the potential for clinical impact, and indicate feasibility of
further reduction of the test error with increased training set when using
large-capacity neural networks.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 03:29:40 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Park",
"Jungkyu",
""
],
[
"Witowski",
"Jan",
""
],
[
"Xu",
"Yanqi",
""
],
[
"Trivedi",
"Hari",
""
],
[
"Gichoya",
"Judy",
""
],
[
"Brown-Mulry",
"Beatrice",
""
],
[
"Westerhoff",
"Malte",
""
],
[
"Moy",
"Linda",
""
],
[
"Heacock",
"Laura",
""
],
[
"Lewin",
"Alana",
""
],
[
"Geras",
"Krzysztof J.",
""
]
] | TITLE: A Multi-Modal AI System for Screening Mammography: Integrating 2D and 3D
Imaging to Improve Breast Cancer Detection in a Prospective Clinical Study
ABSTRACT: Although digital breast tomosynthesis (DBT) improves diagnostic performance
over full-field digital mammography (FFDM), false-positive recalls remain a
concern in breast cancer screening. We developed a multi-modal artificial
intelligence system integrating FFDM, synthetic mammography, and DBT to provide
breast-level predictions and bounding-box localizations of suspicious findings.
Our AI system, trained on approximately 500,000 mammography exams, achieved
0.945 AUROC on an internal test set. It demonstrated the capacity to reduce recalls
by 31.7% and radiologist workload by 43.8% while maintaining 100% sensitivity,
underscoring its potential to improve clinical workflows. External validation
confirmed strong generalizability, reducing the gap to a perfect AUROC by
35.31%-69.14% relative to strong baselines. In prospective deployment across 18
sites, the system reduced recall rates for low-risk cases. An improved version,
trained on over 750,000 exams with additional labels, further reduced the gap
by 18.86%-56.62% across large external datasets. Overall, these results
underscore the importance of utilizing all available imaging modalities,
demonstrate the potential for clinical impact, and indicate the feasibility of
further reducing the test error with an increased training set when using
large-capacity neural networks.
|
2504.05640 | Mingyang Zhu | Mingyang Zhu, Yuqiu Liang, Jiacheng Wang | CTI-Unet: Cascaded Threshold Integration for Improved U-Net Segmentation
of Pathology Images | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Chronic kidney disease (CKD) is a growing global health concern,
necessitating precise and efficient image analysis to aid diagnosis and
treatment planning. Automated segmentation of kidney pathology images plays a
central role in facilitating clinical workflows, yet conventional segmentation
models often require delicate threshold tuning. This paper proposes a novel
\textit{Cascaded Threshold-Integrated U-Net (CTI-Unet)} to overcome the
limitations of single-threshold segmentation. By sequentially integrating
multiple thresholded outputs, our approach can reconcile noise suppression with
the preservation of finer structural details. Experiments on the challenging
KPIs2024 dataset demonstrate that CTI-Unet outperforms state-of-the-art
architectures such as nnU-Net, Swin-Unet, and CE-Net, offering a robust and
flexible framework for kidney pathology image segmentation.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 03:35:09 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Zhu",
"Mingyang",
""
],
[
"Liang",
"Yuqiu",
""
],
[
"Wang",
"Jiacheng",
""
]
] | TITLE: CTI-Unet: Cascaded Threshold Integration for Improved U-Net Segmentation
of Pathology Images
ABSTRACT: Chronic kidney disease (CKD) is a growing global health concern,
necessitating precise and efficient image analysis to aid diagnosis and
treatment planning. Automated segmentation of kidney pathology images plays a
central role in facilitating clinical workflows, yet conventional segmentation
models often require delicate threshold tuning. This paper proposes a novel
\textit{Cascaded Threshold-Integrated U-Net (CTI-Unet)} to overcome the
limitations of single-threshold segmentation. By sequentially integrating
multiple thresholded outputs, our approach can reconcile noise suppression with
the preservation of finer structural details. Experiments on the challenging
KPIs2024 dataset demonstrate that CTI-Unet outperforms state-of-the-art
architectures such as nnU-Net, Swin-Unet, and CE-Net, offering a robust and
flexible framework for kidney pathology image segmentation.
|
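One plausible reading of the cascaded threshold integration above is sketched below: start from the most confident mask and sequentially admit lower-threshold pixels that touch the current mask, trading noise suppression against fine detail. The integration rule is an assumption made for illustration, not the paper's exact scheme.

import numpy as np

def cascaded_threshold(prob_map, thresholds=(0.7, 0.5, 0.3)):
    # Start from the most confident mask and grow it with lower-threshold
    # pixels whose 4-neighbourhood already contains mask pixels.
    mask = prob_map >= thresholds[0]
    for t in thresholds[1:]:
        candidate = prob_map >= t
        grown = np.zeros_like(mask)
        grown[1:, :] |= mask[:-1, :]
        grown[:-1, :] |= mask[1:, :]
        grown[:, 1:] |= mask[:, :-1]
        grown[:, :-1] |= mask[:, 1:]
        mask = mask | (candidate & grown)
    return mask

rng = np.random.default_rng(0)
print(cascaded_threshold(rng.random((8, 8))).sum())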
2504.05644 | Yan Zhang | Yan Zhang, Zhong Ji, Changxu Meng, Yanwei Pang, Jungong Han | iEBAKER: Improved Remote Sensing Image-Text Retrieval Framework via
Eliminate Before Align and Keyword Explicit Reasoning | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent studies focus on the Remote Sensing Image-Text Retrieval (RSITR),
which aims at searching for the corresponding targets based on the given query.
Among these efforts, the application of Foundation Models (FMs), such as CLIP,
to the domain of remote sensing has yielded encouraging outcomes. However,
existing FM based methodologies neglect the negative impact of weakly
correlated sample pairs and fail to account for the key distinctions among
remote sensing texts, leading to biased and superficial exploration of sample
pairs. To address these challenges, we propose an approach named iEBAKER (an
Improved Eliminate Before Align strategy with Keyword Explicit Reasoning
framework) for RSITR. Specifically, we propose an innovative Eliminate Before
Align (EBA) strategy to filter out the weakly correlated sample pairs, thereby
mitigating their deviations from optimal embedding space during
alignment. Further, two specific schemes are introduced from the perspective of
whether local similarity and global similarity affect each other. On this
basis, we introduce an alternative Sort After Reversed Retrieval (SAR)
strategy, which aims at optimizing the similarity matrix via reverse retrieval.
Additionally, we incorporate a Keyword Explicit Reasoning (KER) module to
facilitate the beneficial impact of subtle key concept distinctions. Without
bells and whistles, our approach enables a direct transition from FM to RSITR
task, eliminating the need for additional pretraining on remote sensing data.
Extensive experiments conducted on three popular benchmark datasets demonstrate
that our proposed iEBAKER method surpasses the state-of-the-art models while
requiring less training data. Our source code will be released at
https://github.com/zhangy0822/iEBAKER.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 03:40:19 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Zhang",
"Yan",
""
],
[
"Ji",
"Zhong",
""
],
[
"Meng",
"Changxu",
""
],
[
"Pang",
"Yanwei",
""
],
[
"Han",
"Jungong",
""
]
] | TITLE: iEBAKER: Improved Remote Sensing Image-Text Retrieval Framework via
Eliminate Before Align and Keyword Explicit Reasoning
ABSTRACT: Recent studies focus on the Remote Sensing Image-Text Retrieval (RSITR),
which aims at searching for the corresponding targets based on the given query.
Among these efforts, the application of Foundation Models (FMs), such as CLIP,
to the domain of remote sensing has yielded encouraging outcomes. However,
existing FM based methodologies neglect the negative impact of weakly
correlated sample pairs and fail to account for the key distinctions among
remote sensing texts, leading to biased and superficial exploration of sample
pairs. To address these challenges, we propose an approach named iEBAKER (an
Improved Eliminate Before Align strategy with Keyword Explicit Reasoning
framework) for RSITR. Specifically, we propose an innovative Eliminate Before
Align (EBA) strategy to filter out the weakly correlated sample pairs, thereby
mitigating their deviations from optimal embedding space during
alignment. Further, two specific schemes are introduced from the perspective of
whether local similarity and global similarity affect each other. On this
basis, we introduce an alternative Sort After Reversed Retrieval (SAR)
strategy, which aims at optimizing the similarity matrix via reverse retrieval.
Additionally, we incorporate a Keyword Explicit Reasoning (KER) module to
facilitate the beneficial impact of subtle key concept distinctions. Without
bells and whistles, our approach enables a direct transition from FM to RSITR
task, eliminating the need for additional pretraining on remote sensing data.
Extensive experiments conducted on three popular benchmark datasets demonstrate
that our proposed iEBAKER method surpasses the state-of-the-art models while
requiring less training data. Our source code will be released at
https://github.com/zhangy0822/iEBAKER.
|
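The Eliminate Before Align step described above can be pictured as a simple pre-filter: score candidate image-text pairs with some similarity model and drop weakly correlated pairs before alignment. The scores and threshold below are placeholders; this is a hypothetical sketch, not the iEBAKER code.

import numpy as np

def eliminate_before_align(pairs, sim_scores, threshold=0.2):
    # Keep only pairs whose (precomputed) similarity clears the threshold.
    keep = sim_scores >= threshold
    return [p for p, k in zip(pairs, keep) if k]

pairs = [("img0.png", "a river"), ("img1.png", "random caption"),
         ("img2.png", "an airport")]
scores = np.array([0.61, 0.05, 0.48])
print(eliminate_before_align(pairs, scores))  # the weak middle pair is dropped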
2504.05649 | Yining Shi | Yining Shi, Kun Jiang, Xin Zhao, Kangan Qian, Chuchu Xie, Tuopu Wen,
Mengmeng Yang, Diange Yang | POD: Predictive Object Detection with Single-Frame FMCW LiDAR Point
Cloud | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | LiDAR-based 3D object detection is a fundamental task in the field of
autonomous driving. This paper explores the unique advantage of Frequency
Modulated Continuous Wave (FMCW) LiDAR in autonomous perception. Given a single
frame FMCW point cloud with radial velocity measurements, we expect that our
object detector can detect the short-term future locations of objects using
only the current frame sensor data and demonstrate a fast ability to respond to
intermediate danger. To achieve this, we extend the standard object detection
task to a novel task named predictive object detection (POD), which aims to
predict the short-term future location and dimensions of objects based solely
on current observations. Typically, a motion prediction task requires
historical sensor information to process the temporal contexts of each object,
while our detector's avoidance of multi-frame historical information enables a
much faster response time to potential dangers. The core advantage of FMCW
LiDAR lies in the radial velocity associated with every reflected point. We
propose a novel POD framework, the core idea of which is to generate a virtual
future point using a ray casting mechanism, create virtual two-frame point
clouds with the current and virtual future frames, and encode these two-frame
voxel features with a sparse 4D encoder. Subsequently, the 4D voxel features
are separated by temporal indices and remapped into two Bird's Eye View (BEV)
features: one decoded for standard current frame object detection and the other
for future predictive object detection. Extensive experiments on our in-house
dataset demonstrate the state-of-the-art standard and predictive detection
performance of the proposed POD framework.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 03:53:28 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Shi",
"Yining",
""
],
[
"Jiang",
"Kun",
""
],
[
"Zhao",
"Xin",
""
],
[
"Qian",
"Kangan",
""
],
[
"Xie",
"Chuchu",
""
],
[
"Wen",
"Tuopu",
""
],
[
"Yang",
"Mengmeng",
""
],
[
"Yang",
"Diange",
""
]
] | TITLE: POD: Predictive Object Detection with Single-Frame FMCW LiDAR Point
Cloud
ABSTRACT: LiDAR-based 3D object detection is a fundamental task in the field of
autonomous driving. This paper explores the unique advantage of Frequency
Modulated Continuous Wave (FMCW) LiDAR in autonomous perception. Given a single
frame FMCW point cloud with radial velocity measurements, we expect that our
object detector can detect the short-term future locations of objects using
only the current frame sensor data and demonstrate a fast ability to respond to
imminent danger. To achieve this, we extend the standard object detection
task to a novel task named predictive object detection (POD), which aims to
predict the short-term future location and dimensions of objects based solely
on current observations. Typically, a motion prediction task requires
historical sensor information to process the temporal contexts of each object,
while our detector's avoidance of multi-frame historical information enables a
much faster response time to potential dangers. The core advantage of FMCW
LiDAR lies in the radial velocity associated with every reflected point. We
propose a novel POD framework, the core idea of which is to generate a virtual
future point using a ray casting mechanism, create virtual two-frame point
clouds with the current and virtual future frames, and encode these two-frame
voxel features with a sparse 4D encoder. Subsequently, the 4D voxel features
are separated by temporal indices and remapped into two Bird's Eye View (BEV)
features: one decoded for standard current frame object detection and the other
for future predictive object detection. Extensive experiments on our in-house
dataset demonstrate the state-of-the-art standard and predictive detection
performance of the proposed POD framework.
|
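The ray-casting construction in the POD entry above admits a compact sketch: advance each FMCW point along its viewing ray by its measured radial velocity to synthesize a virtual future frame. This NumPy snippet is an assumed simplification (constant velocity over a short horizon), not the paper's implementation.

import numpy as np

def cast_virtual_future(points, radial_vel, dt=0.5):
    # points: (N, 3) xyz in the sensor frame; radial_vel: (N,) m/s along the ray.
    ray_dirs = points / (np.linalg.norm(points, axis=1, keepdims=True) + 1e-12)
    return points + ray_dirs * radial_vel[:, None] * dt

pts = np.array([[10.0, 0.0, 0.0], [0.0, 5.0, 0.0]])
vel = np.array([2.0, -1.0])           # positive: moving away; negative: approaching
print(cast_virtual_future(pts, vel))  # [[11, 0, 0], [0, 4.5, 0]]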
2504.05651 | Narine Kokhlikyan | Narine Kokhlikyan, Bargav Jayaraman, Florian Bordes, Chuan Guo,
Kamalika Chaudhuri | Measuring D\'ej\`a vu Memorization Efficiently | null | null | null | null | cs.LG cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Recent research has shown that representation learning models may
accidentally memorize their training data. For example, the d\'ej\`a vu method
shows that for certain representation learning models and training images, it
is sometimes possible to correctly predict the foreground label given only the
representation of the background - better than through dataset-level
correlations. However, their measurement method requires training two models -
one to estimate dataset-level correlations and the other to estimate
memorization. This multiple model setup becomes infeasible for large
open-source models. In this work, we propose alternative simple methods to
estimate dataset-level correlations, and show that these can be used to
approximate an off-the-shelf model's memorization ability without any
retraining. This enables, for the first time, the measurement of memorization
in pre-trained open-source image representation and vision-language
representation models. Our results show that different ways of measuring
memorization yield very similar aggregate results. We also find that
open-source models typically have lower aggregate memorization than similar
models trained on a subset of the data. The code is available both for vision
and vision language models.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 03:55:20 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Kokhlikyan",
"Narine",
""
],
[
"Jayaraman",
"Bargav",
""
],
[
"Bordes",
"Florian",
""
],
[
"Guo",
"Chuan",
""
],
[
"Chaudhuri",
"Kamalika",
""
]
] | TITLE: Measuring D\'ej\`a vu Memorization Efficiently
ABSTRACT: Recent research has shown that representation learning models may
accidentally memorize their training data. For example, the d\'ej\`a vu method
shows that for certain representation learning models and training images, it
is sometimes possible to correctly predict the foreground label given only the
representation of the background - better than through dataset-level
correlations. However, their measurement method requires training two models -
one to estimate dataset-level correlations and the other to estimate
memorization. This multiple model setup becomes infeasible for large
open-source models. In this work, we propose alternative simple methods to
estimate dataset-level correlations, and show that these can be used to
approximate an off-the-shelf model's memorization ability without any
retraining. This enables, for the first time, the measurement of memorization
in pre-trained open-source image representation and vision-language
representation models. Our results show that different ways of measuring
memorization yield very similar aggregate results. We also find that
open-source models typically have lower aggregate memorization than similar
models trained on a subset of the data. The code is available both for vision
and vision language models.
|
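The measurement idea above can be approximated as follows: test whether background-only representations predict foreground labels better than a dataset-level baseline (here, the majority class). Embeddings and labels are random placeholders, so both accuracies should be near chance; a large gap on real data would suggest memorization. This is a hypothetical sketch, not the proposed method.

import numpy as np
from collections import Counter
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)
bg_embed = rng.normal(size=(200, 32))   # background-crop representations
labels = rng.integers(0, 5, size=200)   # foreground labels

knn = KNeighborsClassifier(n_neighbors=5).fit(bg_embed[:150], labels[:150])
knn_acc = (knn.predict(bg_embed[150:]) == labels[150:]).mean()
majority = Counter(labels[:150]).most_common(1)[0][0]
baseline_acc = (labels[150:] == majority).mean()

# knn_acc much larger than baseline_acc would suggest instance-level memorization.
print(knn_acc, baseline_acc)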
2504.05657 | Tianchi Liu | Tianchi Liu, Duc-Tuan Truong, Rohan Kumar Das, Kong Aik Lee, Haizhou
Li | Nes2Net: A Lightweight Nested Architecture for Foundation Model Driven
Speech Anti-spoofing | This manuscript has been submitted for peer review | null | null | null | eess.AS cs.AI cs.SD | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Speech foundation models have significantly advanced various speech-related
tasks by providing exceptional representation capabilities. However, their
high-dimensional output features often create a mismatch with downstream task
models, which typically require lower-dimensional inputs. A common solution is
to apply a dimensionality reduction (DR) layer, but this approach increases
parameter overhead, computational costs, and risks losing valuable information.
To address these issues, we propose Nested Res2Net (Nes2Net), a lightweight
back-end architecture designed to directly process high-dimensional features
without DR layers. The nested structure enhances multi-scale feature
extraction, improves feature interaction, and preserves high-dimensional
information. We first validate Nes2Net on CtrSVDD, a singing voice deepfake
detection dataset, and report a 22% performance improvement and an 87% back-end
computational cost reduction over the state-of-the-art baseline. Additionally,
extensive testing across four diverse datasets: ASVspoof 2021, ASVspoof 5,
PartialSpoof, and In-the-Wild, covering fully spoofed speech, adversarial
attacks, partial spoofing, and real-world scenarios, consistently highlights
Nes2Net's superior robustness and generalization capabilities. The code package
and pre-trained models are available at https://github.com/Liu-Tianchi/Nes2Net.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 04:11:28 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Liu",
"Tianchi",
""
],
[
"Truong",
"Duc-Tuan",
""
],
[
"Das",
"Rohan Kumar",
""
],
[
"Lee",
"Kong Aik",
""
],
[
"Li",
"Haizhou",
""
]
] | TITLE: Nes2Net: A Lightweight Nested Architecture for Foundation Model Driven
Speech Anti-spoofing
ABSTRACT: Speech foundation models have significantly advanced various speech-related
tasks by providing exceptional representation capabilities. However, their
high-dimensional output features often create a mismatch with downstream task
models, which typically require lower-dimensional inputs. A common solution is
to apply a dimensionality reduction (DR) layer, but this approach increases
parameter overhead, computational costs, and risks losing valuable information.
To address these issues, we propose Nested Res2Net (Nes2Net), a lightweight
back-end architecture designed to directly process high-dimensional features
without DR layers. The nested structure enhances multi-scale feature
extraction, improves feature interaction, and preserves high-dimensional
information. We first validate Nes2Net on CtrSVDD, a singing voice deepfake
detection dataset, and report a 22% performance improvement and an 87% back-end
computational cost reduction over the state-of-the-art baseline. Additionally,
extensive testing across four diverse datasets: ASVspoof 2021, ASVspoof 5,
PartialSpoof, and In-the-Wild, covering fully spoofed speech, adversarial
attacks, partial spoofing, and real-world scenarios, consistently highlights
Nes2Net's superior robustness and generalization capabilities. The code package
and pre-trained models are available at https://github.com/Liu-Tianchi/Nes2Net.
|
2504.05662 | Shunsuke Sakai | Shunsuke Sakai, Tatsuhito Hasegawa | Reconstruction-Free Anomaly Detection with Diffusion Models via Direct
Latent Likelihood Evaluation | Code is available at https://github.com/SkyShunsuke/InversionAD | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Diffusion models, with their robust distribution approximation capabilities,
have demonstrated excellent performance in anomaly detection. However,
conventional reconstruction-based approaches rely on computing the
reconstruction error between the original and denoised images, which requires
careful noise-strength tuning and over ten network evaluations per
input, leading to significantly slower detection speeds. To address these
limitations, we propose a novel diffusion-based anomaly detection method that
circumvents the need for resource-intensive reconstruction. Instead of
reconstructing the input image, we directly infer its corresponding latent
variables and measure their density under the Gaussian prior distribution.
Remarkably, the prior density proves effective as an anomaly score even when
using a short partial diffusion process of only 2-5 steps. We evaluate our
method on the MVTecAD dataset, achieving an AUC of 0.991 at 15 FPS, thereby
setting a new state-of-the-art speed-AUC anomaly detection trade-off.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 04:23:43 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Sakai",
"Shunsuke",
""
],
[
"Hasegawa",
"Tatsuhito",
""
]
] | TITLE: Reconstruction-Free Anomaly Detection with Diffusion Models via Direct
Latent Likelihood Evaluation
ABSTRACT: Diffusion models, with their robust distribution approximation capabilities,
have demonstrated excellent performance in anomaly detection. However,
conventional reconstruction-based approaches rely on computing the
reconstruction error between the original and denoised images, which requires
careful noise-strength tuning and over ten network evaluations per
input, leading to significantly slower detection speeds. To address these
limitations, we propose a novel diffusion-based anomaly detection method that
circumvents the need for resource-intensive reconstruction. Instead of
reconstructing the input image, we directly infer its corresponding latent
variables and measure their density under the Gaussian prior distribution.
Remarkably, the prior density proves effective as an anomaly score even when
using a short partial diffusion process of only 2-5 steps. We evaluate our
method on the MVTecAD dataset, achieving an AUC of 0.991 at 15 FPS, thereby
setting a new state-of-the-art speed-AUC anomaly detection trade-off.
|
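Assuming a short partial diffusion/inversion step has already mapped an input image to a latent vector z, the scoring rule described above reduces to evaluating z under the standard Gaussian prior. The sketch below shows only that final scoring step; the inversion itself is out of scope here.

import numpy as np

def gaussian_prior_logpdf(z):
    # Log-density of z under an isotropic standard normal prior.
    d = z.size
    return -0.5 * (z @ z) - 0.5 * d * np.log(2.0 * np.pi)

z_normal = np.random.default_rng(0).normal(size=128)   # typical latent
z_anomalous = z_normal * 3.0                           # unusually large norm
print(gaussian_prior_logpdf(z_normal) > gaussian_prior_logpdf(z_anomalous))  # True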
2504.05670 | Renda Han | John Smith, Wenxuan Tu, Junlong Wu, Wenxin Zhang, Jingxin Liu, Haotian
Wang, Jieren Cheng, Huajie Lei, Guangzhen Yao, Lingren Wang, Mengfei Li,
Renda Han, and Yu Li | Dual Boost-Driven Graph-Level Clustering Network | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Graph-level clustering remains a pivotal yet formidable challenge in graph
learning. Recently, the integration of deep learning with representation
learning has demonstrated notable advancements, yielding performance
enhancements to a certain degree. However, existing methods suffer from at
least one of the following issues: 1. the original graph structure has noise,
and 2. during feature propagation and pooling processes, noise is gradually
aggregated into the graph-level embeddings through information propagation.
Consequently, these two limitations mask clustering-friendly information,
leading to suboptimal graph-level clustering performance. To this end, we
propose a novel Dual Boost-Driven Graph-Level Clustering Network (DBGCN) to
alternately promote graph-level clustering and filtering out interference
information in a unified framework. Specifically, in the pooling step, we
evaluate the contribution of features at the global level and optimize them using a
learnable transformation matrix to obtain high-quality graph-level
representation, such that the model's reasoning capability can be improved.
Moreover, to enable reliable graph-level clustering, we first identify and
suppress information detrimental to clustering by evaluating similarities
between graph-level representations, providing more accurate guidance for
multi-view fusion. Extensive experiments demonstrated that DBGCN outperforms
the state-of-the-art graph-level clustering methods on six benchmark datasets.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 04:32:46 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Smith",
"John",
""
],
[
"Tu",
"Wenxuan",
""
],
[
"Wu",
"Junlong",
""
],
[
"Zhang",
"Wenxin",
""
],
[
"Liu",
"Jingxin",
""
],
[
"Wang",
"Haotian",
""
],
[
"Cheng",
"Jieren",
""
],
[
"Lei",
"Huajie",
""
],
[
"Yao",
"Guangzhen",
""
],
[
"Wang",
"Lingren",
""
],
[
"Li",
"Mengfei",
""
],
[
"Han",
"Renda",
""
],
[
"Li",
"Yu",
""
]
] | TITLE: Dual Boost-Driven Graph-Level Clustering Network
ABSTRACT: Graph-level clustering remains a pivotal yet formidable challenge in graph
learning. Recently, the integration of deep learning with representation
learning has demonstrated notable advancements, yielding performance
enhancements to a certain degree. However, existing methods suffer from at
least one of the following issues: 1. the original graph structure has noise,
and 2. during feature propagation and pooling processes, noise is gradually
aggregated into the graph-level embeddings through information propagation.
Consequently, these two limitations mask clustering-friendly information,
leading to suboptimal graph-level clustering performance. To this end, we
propose a novel Dual Boost-Driven Graph-Level Clustering Network (DBGCN) to
alternately promote graph-level clustering and filtering out interference
information in a unified framework. Specifically, in the pooling step, we
evaluate the contribution of features at the global level and optimize them using a
learnable transformation matrix to obtain high-quality graph-level
representation, such that the model's reasoning capability can be improved.
Moreover, to enable reliable graph-level clustering, we first identify and
suppress information detrimental to clustering by evaluating similarities
between graph-level representations, providing more accurate guidance for
multi-view fusion. Extensive experiments demonstrated that DBGCN outperforms
the state-of-the-art graph-level clustering methods on six benchmark datasets.
|
2504.05673 | Dongjun Qian | Dongjun Qian, Kai Su, Yiming Tan, Qishuai Diao, Xian Wu, Chang Liu,
Bingyue Peng, Zehuan Yuan | VC-LLM: Automated Advertisement Video Creation from Raw Footage using
Multi-modal LLMs | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | As short videos have risen in popularity, the role of video content in
advertising has become increasingly significant. Typically, advertisers record
a large amount of raw footage about the product and then create numerous
different short-form advertisement videos based on this raw footage. Creating
such videos mainly involves editing raw footage and writing advertisement
scripts, which requires a certain level of creative ability. It is usually
challenging to create many different video contents for the same product, and
manual efficiency is often low. In this paper, we present VC-LLM, a framework
powered by Large Language Models for the automatic creation of high-quality
short-form advertisement videos. Our approach leverages high-resolution spatial
input and low-resolution temporal input to represent video clips more
effectively, capturing both fine-grained visual details and broader temporal
dynamics. In addition, during training, we incorporate supplementary
information generated by rewriting the ground truth text, ensuring that all key
output information can be directly traced back to the input, thereby reducing
model hallucinations. We also designed a benchmark to evaluate the quality of
the created videos. Experiments show that VC-LLM based on GPT-4o can produce
videos comparable to those created by humans. Furthermore, we collected
numerous high-quality short advertisement videos to create a pre-training
dataset and manually cleaned a portion of the data to construct a high-quality
fine-tuning dataset. Experiments indicate that, on the benchmark, the VC-LLM
based on fine-tuned LLM can produce videos with superior narrative logic
compared to those created by the VC-LLM based on GPT-4o.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 04:35:23 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Qian",
"Dongjun",
""
],
[
"Su",
"Kai",
""
],
[
"Tan",
"Yiming",
""
],
[
"Diao",
"Qishuai",
""
],
[
"Wu",
"Xian",
""
],
[
"Liu",
"Chang",
""
],
[
"Peng",
"Bingyue",
""
],
[
"Yuan",
"Zehuan",
""
]
] | TITLE: VC-LLM: Automated Advertisement Video Creation from Raw Footage using
Multi-modal LLMs
ABSTRACT: As short videos have risen in popularity, the role of video content in
advertising has become increasingly significant. Typically, advertisers record
a large amount of raw footage about the product and then create numerous
different short-form advertisement videos based on this raw footage. Creating
such videos mainly involves editing raw footage and writing advertisement
scripts, which requires a certain level of creative ability. It is usually
challenging to create many different videos for the same product, and
manual efficiency is often low. In this paper, we present VC-LLM, a framework
powered by Large Language Models for the automatic creation of high-quality
short-form advertisement videos. Our approach leverages high-resolution spatial
input and low-resolution temporal input to represent video clips more
effectively, capturing both fine-grained visual details and broader temporal
dynamics. In addition, during training, we incorporate supplementary
information generated by rewriting the ground truth text, ensuring that all key
output information can be directly traced back to the input, thereby reducing
model hallucinations. We also designed a benchmark to evaluate the quality of
the created videos. Experiments show that VC-LLM based on GPT-4o can produce
videos comparable to those created by humans. Furthermore, we collected
numerous high-quality short advertisement videos to create a pre-training
dataset and manually cleaned a portion of the data to construct a high-quality
fine-tuning dataset. Experiments indicate that, on the benchmark, the VC-LLM
based on a fine-tuned LLM can produce videos with superior narrative logic
compared to those created by the VC-LLM based on GPT-4o.
|
2504.05677 | Shunsuke Sakai | Shunsuke Sakai, Shunsuke Tsuge, Tatsuhito Hasegawa | Noisy Deep Ensemble: Accelerating Deep Ensemble Learning via Noise
Injection | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Neural network ensembling is a simple yet effective approach for enhancing
generalization capabilities. The most common method involves independently
training multiple neural networks initialized with different weights and then
averaging their predictions during inference. However, this approach increases
training time linearly with the number of ensemble members. To address this
issue, we propose the novel ``\textbf{Noisy Deep Ensemble}'' method,
significantly reducing the training time required for neural network ensembles.
In this method, a \textit{parent model} is trained until convergence, and then
the weights of the \textit{parent model} are perturbed in various ways to
construct multiple \textit{child models}. This perturbation of the
\textit{parent model} weights facilitates the exploration of different local
minima while significantly reducing the training time for each ensemble member.
We evaluated our method using diverse CNN architectures on CIFAR-10 and
CIFAR-100 datasets, surpassing conventional efficient ensemble methods and
achieving test accuracy comparable to standard ensembles. Code is available at
\href{https://github.com/TSTB-dev/NoisyDeepEnsemble}{https://github.com/TSTB-dev/NoisyDeepEnsemble}
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 04:36:39 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Sakai",
"Shunsuke",
""
],
[
"Tsuge",
"Shunsuke",
""
],
[
"Hasegawa",
"Tatsuhito",
""
]
] | TITLE: Noisy Deep Ensemble: Accelerating Deep Ensemble Learning via Noise
Injection
ABSTRACT: Neural network ensembling is a simple yet effective approach for enhancing
generalization capabilities. The most common method involves independently
training multiple neural networks initialized with different weights and then
averaging their predictions during inference. However, this approach increases
training time linearly with the number of ensemble members. To address this
issue, we propose the novel ``\textbf{Noisy Deep Ensemble}'' method,
significantly reducing the training time required for neural network ensembles.
In this method, a \textit{parent model} is trained until convergence, and then
the weights of the \textit{parent model} are perturbed in various ways to
construct multiple \textit{child models}. This perturbation of the
\textit{parent model} weights facilitates the exploration of different local
minima while significantly reducing the training time for each ensemble member.
We evaluated our method using diverse CNN architectures on CIFAR-10 and
CIFAR-100 datasets, surpassing conventional efficient ensemble methods and
achieving test accuracy comparable to standard ensembles. Code is available at
\href{https://github.com/TSTB-dev/NoisyDeepEnsemble}{https://github.com/TSTB-dev/NoisyDeepEnsemble}
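The parent-to-child perturbation is simple to reproduce; the PyTorch sketch below deep-copies a converged parent and adds Gaussian noise to its weights. The noise scale and child count are illustrative assumptions, and the brief retraining each child receives in the paper is omitted here.

```python
import copy
import torch

def make_child_models(parent, n_children=5, noise_std=0.01):
    """Spawn ensemble members by perturbing a converged parent's weights."""
    children = []
    for _ in range(n_children):
        child = copy.deepcopy(parent)
        with torch.no_grad():
            for p in child.parameters():
                p.add_(torch.randn_like(p) * noise_std)  # Gaussian weight noise
        children.append(child)
    return children

def ensemble_predict(children, x):
    """Average softmax predictions over the perturbed children."""
    with torch.no_grad():
        probs = torch.stack([torch.softmax(c(x), dim=-1) for c in children])
    return probs.mean(dim=0)
```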
|
2504.05679 | Kashita Niranjan Udayanga Gangoda Withana Gamage | Udayanga G.W.K.N. Gamage, Xuanni Huo, Luca Zanatta, T Delbruck, Cesar
Cadena, Matteo Fumagalli, Silvia Tolu | Event-based Civil Infrastructure Visual Defect Detection: ev-CIVIL
Dataset and Benchmark | A journal paper submitted to the Sage SHM journal,
currently under review. It consists of 25 pages, with 19 figures and 5 tables.
Keywords Event-based vision, civil structural health monitoring, defect
detection, crack, spalling, DVS, dataset, YOLOv6, SSD, 2D event histograms | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Small Unmanned Aerial Vehicle (UAV) based visual inspections are a more
efficient alternative to manual methods for examining civil structural defects,
offering safe access to hazardous areas and significant cost savings by
reducing labor requirements. However, traditional frame-based cameras, widely
used in UAV-based inspections, often struggle to capture defects under low or
dynamic lighting conditions. In contrast, Dynamic Vision Sensors (DVS), or
event-based cameras, excel in such scenarios by minimizing motion blur,
enhancing power efficiency, and maintaining high-quality imaging across diverse
lighting conditions without saturation or information loss. Despite these
advantages, existing research lacks studies exploring the feasibility of using
DVS for detecting civil structural defects. Moreover, there is no dedicated
event-based dataset tailored for this purpose. Addressing this gap, this study
introduces the first event-based civil infrastructure defect detection dataset,
capturing defective surfaces as a spatio-temporal event stream using DVS. In
addition to event-based data, the dataset includes grayscale intensity image
frames captured simultaneously using an Active Pixel Sensor (APS). Both data
types were collected using the DAVIS346 camera, which integrates DVS and APS
sensors. The dataset focuses on two types of defects: cracks and spalling, and
includes data from both field and laboratory environments. The field dataset
comprises 318 recording sequences, documenting 458 distinct cracks and 121
distinct spalling instances. The laboratory dataset includes 362 recording
sequences, covering 220 distinct cracks and 308 spalling instances. Four
real-time object detection models were evaluated on the dataset to validate its
effectiveness. The results demonstrate the dataset's robustness in enabling
accurate defect detection and classification, even under challenging lighting
conditions.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 04:44:33 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Gamage",
"Udayanga G. W. K. N.",
""
],
[
"Huo",
"Xuanni",
""
],
[
"Zanatta",
"Luca",
""
],
[
"Delbruck",
"T",
""
],
[
"Cadena",
"Cesar",
""
],
[
"Fumagalli",
"Matteo",
""
],
[
"Tolu",
"Silvia",
""
]
] | TITLE: Event-based Civil Infrastructure Visual Defect Detection: ev-CIVIL
Dataset and Benchmark
ABSTRACT: Small Unmanned Aerial Vehicle (UAV) based visual inspections are a more
efficient alternative to manual methods for examining civil structural defects,
offering safe access to hazardous areas and significant cost savings by
reducing labor requirements. However, traditional frame-based cameras, widely
used in UAV-based inspections, often struggle to capture defects under low or
dynamic lighting conditions. In contrast, Dynamic Vision Sensors (DVS), or
event-based cameras, excel in such scenarios by minimizing motion blur,
enhancing power efficiency, and maintaining high-quality imaging across diverse
lighting conditions without saturation or information loss. Despite these
advantages, existing research lacks studies exploring the feasibility of using
DVS for detecting civil structural defects. Moreover, there is no dedicated
event-based dataset tailored for this purpose. Addressing this gap, this study
introduces the first event-based civil infrastructure defect detection dataset,
capturing defective surfaces as a spatio-temporal event stream using DVS. In
addition to event-based data, the dataset includes grayscale intensity image
frames captured simultaneously using an Active Pixel Sensor (APS). Both data
types were collected using the DAVIS346 camera, which integrates DVS and APS
sensors. The dataset focuses on two types of defects: cracks and spalling, and
includes data from both field and laboratory environments. The field dataset
comprises 318 recording sequences, documenting 458 distinct cracks and 121
distinct spalling instances. The laboratory dataset includes 362 recording
sequences, covering 220 distinct cracks and 308 spalling instances. Four
real-time object detection models were evaluated on the dataset to validate its
effectiveness. The results demonstrate the dataset's robustness in enabling
accurate defect detection and classification, even under challenging lighting
conditions.
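Since the comments field lists 2D event histograms among the keywords, a minimal NumPy sketch of that representation may help: DVS events (integer pixel coordinates plus polarity) are accumulated into a two-channel image at the DAVIS346 resolution of 346x260. The exact representation fed to the detectors is an assumption.

```python
import numpy as np

def events_to_histogram(xs, ys, ps, width=346, height=260):
    """Accumulate events into a 2-channel histogram (pos/neg polarity).
    xs, ys are integer pixel coordinates; ps holds polarities (+1/-1)."""
    hist = np.zeros((2, height, width), dtype=np.float32)
    np.add.at(hist[0], (ys[ps > 0], xs[ps > 0]), 1.0)
    np.add.at(hist[1], (ys[ps <= 0], xs[ps <= 0]), 1.0)
    return hist
```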
|
2504.05683 | Subhankar Maity | Subhankar Maity, Aniket Deroy, Sudeshna Sarkar | Towards Smarter Hiring: Are Zero-Shot and Few-Shot Pre-trained LLMs
Ready for HR Spoken Interview Transcript Analysis? | 32 pages, 24 figures | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This research paper presents a comprehensive analysis of the performance of
prominent pre-trained large language models (LLMs), including GPT-4 Turbo,
GPT-3.5 Turbo, text-davinci-003, text-babbage-001, text-curie-001,
text-ada-001, llama-2-7b-chat, llama-2-13b-chat, and llama-2-70b-chat, in
comparison to expert human evaluators in providing scores, identifying errors,
and offering feedback and improvement suggestions to candidates during mock HR
(Human Resources) interviews. We introduce a dataset called HURIT (Human
Resource Interview Transcripts), which comprises 3,890 HR interview transcripts
sourced from real-world HR interview scenarios. Our findings reveal that
pre-trained LLMs, particularly GPT-4 Turbo and GPT-3.5 Turbo, exhibit
commendable performance and are capable of producing evaluations comparable to
those of expert human evaluators. Although these LLMs demonstrate proficiency
in providing scores comparable to human experts in terms of human evaluation
metrics, they frequently fail to identify errors and offer specific actionable
advice for candidate performance improvement in HR interviews. Our research
suggests that current state-of-the-art pre-trained LLMs are not yet suitable
for fully automatic deployment in HR interview assessment. Instead, our
findings advocate a human-in-the-loop approach that incorporates manual checks
for inconsistencies and provisions for improving feedback quality as a more
suitable strategy.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 04:46:10 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Maity",
"Subhankar",
""
],
[
"Deroy",
"Aniket",
""
],
[
"Sarkar",
"Sudeshna",
""
]
] | TITLE: Towards Smarter Hiring: Are Zero-Shot and Few-Shot Pre-trained LLMs
Ready for HR Spoken Interview Transcript Analysis?
ABSTRACT: This research paper presents a comprehensive analysis of the performance of
prominent pre-trained large language models (LLMs), including GPT-4 Turbo,
GPT-3.5 Turbo, text-davinci-003, text-babbage-001, text-curie-001,
text-ada-001, llama-2-7b-chat, llama-2-13b-chat, and llama-2-70b-chat, in
comparison to expert human evaluators in providing scores, identifying errors,
and offering feedback and improvement suggestions to candidates during mock HR
(Human Resources) interviews. We introduce a dataset called HURIT (Human
Resource Interview Transcripts), which comprises 3,890 HR interview transcripts
sourced from real-world HR interview scenarios. Our findings reveal that
pre-trained LLMs, particularly GPT-4 Turbo and GPT-3.5 Turbo, exhibit
commendable performance and are capable of producing evaluations comparable to
those of expert human evaluators. Although these LLMs demonstrate proficiency
in providing scores comparable to human experts in terms of human evaluation
metrics, they frequently fail to identify errors and offer specific actionable
advice for candidate performance improvement in HR interviews. Our research
suggests that current state-of-the-art pre-trained LLMs are not yet suitable
for fully automatic deployment in HR interview assessment. Instead, our
findings advocate a human-in-the-loop approach that incorporates manual checks
for inconsistencies and provisions for improving feedback quality as a more
suitable strategy.
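For readers unfamiliar with the zero-/few-shot setup, the sketch below shows one plausible way to elicit a score, error list, and feedback from a chat model via the OpenAI Python client; the rubric wording, output format, and model choice are assumptions, not the paper's actual prompts.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def evaluate_transcript(transcript, examples=None):
    """Zero-shot when examples is None; few-shot when (transcript,
    evaluation) pairs are supplied (a hypothetical interface)."""
    messages = [{"role": "system",
                 "content": "You are an expert HR interview evaluator. "
                            "Score the candidate from 1-10, list errors, "
                            "and suggest concrete improvements."}]
    for ex_transcript, ex_evaluation in examples or []:
        messages.append({"role": "user", "content": ex_transcript})
        messages.append({"role": "assistant", "content": ex_evaluation})
    messages.append({"role": "user", "content": transcript})
    resp = client.chat.completions.create(model="gpt-4-turbo", messages=messages)
    return resp.choices[0].message.content
```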
|
2504.05684 | Tri Ton | Tri Ton, Ji Woo Hong, Chang D. Yoo | TARO: Timestep-Adaptive Representation Alignment with Onset-Aware
Conditioning for Synchronized Video-to-Audio Synthesis | 10 pages, 6 figures | null | null | null | cs.SD cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | This paper introduces Timestep-Adaptive Representation Alignment with
Onset-Aware Conditioning (TARO), a novel framework for high-fidelity and
temporally coherent video-to-audio synthesis. Built upon flow-based
transformers, which offer stable training and continuous transformations for
enhanced synchronization and audio quality, TARO introduces two key
innovations: (1) Timestep-Adaptive Representation Alignment (TRA), which
dynamically aligns latent representations by adjusting alignment strength based
on the noise schedule, ensuring smooth evolution and improved fidelity, and (2)
Onset-Aware Conditioning (OAC), which integrates onset cues that serve as sharp
event-driven markers of audio-relevant visual moments to enhance
synchronization with dynamic visual events. Extensive experiments on the
VGGSound and Landscape datasets demonstrate that TARO outperforms prior
methods, achieving a relative 53% reduction in Frechet Distance (FD), a 29%
reduction in Frechet Audio Distance (FAD), and 97.19% alignment accuracy,
highlighting its superior audio quality and synchronization precision.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 04:49:36 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Ton",
"Tri",
""
],
[
"Hong",
"Ji Woo",
""
],
[
"Yoo",
"Chang D.",
""
]
] | TITLE: TARO: Timestep-Adaptive Representation Alignment with Onset-Aware
Conditioning for Synchronized Video-to-Audio Synthesis
ABSTRACT: This paper introduces Timestep-Adaptive Representation Alignment with
Onset-Aware Conditioning (TARO), a novel framework for high-fidelity and
temporally coherent video-to-audio synthesis. Built upon flow-based
transformers, which offer stable training and continuous transformations for
enhanced synchronization and audio quality, TARO introduces two key
innovations: (1) Timestep-Adaptive Representation Alignment (TRA), which
dynamically aligns latent representations by adjusting alignment strength based
on the noise schedule, ensuring smooth evolution and improved fidelity, and (2)
Onset-Aware Conditioning (OAC), which integrates onset cues that serve as sharp
event-driven markers of audio-relevant visual moments to enhance
synchronization with dynamic visual events. Extensive experiments on the
VGGSound and Landscape datasets demonstrate that TARO outperforms prior
methods, achieving a relative 53% reduction in Frechet Distance (FD), a 29%
reduction in Frechet Audio Distance (FAD), and 97.19% alignment accuracy,
highlighting its superior audio quality and synchronization precision.
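A rough sketch of the timestep-adaptive alignment idea: weight an alignment term by the flow noise level so that cleaner timesteps are aligned more strongly. The linear weighting and cosine objective below are assumptions; TARO's actual schedule-dependent strength is not reproduced here.

```python
import torch
import torch.nn.functional as F

def timestep_adaptive_alignment(latent, target_repr, t):
    """Alignment loss weighted by noise level t in [0, 1], shape (B,)."""
    weight = 1.0 - t                         # stronger alignment at low noise
    cos = F.cosine_similarity(latent.flatten(1), target_repr.flatten(1), dim=-1)
    return (weight * (1.0 - cos)).mean()
```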
|
2504.05687 | Kevin Tian | Arun Jambulapati and Jonathan Li and Kevin Tian | Radial Isotropic Position via an Implicit Newton's Method | null | null | null | null | cs.DS math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Placing a dataset $A = \{\mathbf{a}_i\}_{i \in [n]} \subset \mathbb{R}^d$ in
radial isotropic position, i.e., finding an invertible $\mathbf{R} \in
\mathbb{R}^{d \times d}$ such that the unit vectors $\{(\mathbf{R}
\mathbf{a}_i) \|\mathbf{R} \mathbf{a}_i\|_2^{-1}\}_{i \in [n]}$ are in
isotropic position, is a powerful tool with applications in functional
analysis, communication complexity, coding theory, and the design of learning
algorithms. When the transformed dataset has a second moment matrix within a
$\exp(\pm \epsilon)$ factor of a multiple of $\mathbf{I}_d$, we call
$\mathbf{R}$ an $\epsilon$-approximate Forster transform.
We give a faster algorithm for computing approximate Forster transforms,
based on optimizing an objective defined by Barthe [Barthe98]. When the
transform has a polynomially-bounded aspect ratio, our algorithm uses
$O(nd^{\omega - 1}(\frac n \epsilon)^{o(1)})$ time to output an
$\epsilon$-approximate Forster transform with high probability, when one
exists. This is almost the natural limit of this approach, as even evaluating
Barthe's objective takes $O(nd^{\omega - 1})$ time. Previously, the
state-of-the-art runtime in this regime was based on cutting-plane methods, and
scaled at least as $\approx n^3 + n^2 d^{\omega - 1}$. We also provide explicit
estimates on the aspect ratio in the smoothed analysis setting, and show that
our algorithm similarly improves upon those in the literature.
To obtain our results, we develop a subroutine of potential broader interest:
a reduction from almost-linear time sparsification of graph Laplacians to the
ability to support almost-linear time matrix-vector products. We combine this
tool with new stability bounds on Barthe's objective to implicitly implement a
box-constrained Newton's method [CMTV17, ALOW17].
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 05:00:28 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Jambulapati",
"Arun",
""
],
[
"Li",
"Jonathan",
""
],
[
"Tian",
"Kevin",
""
]
] | TITLE: Radial Isotropic Position via an Implicit Newton's Method
ABSTRACT: Placing a dataset $A = \{\mathbf{a}_i\}_{i \in [n]} \subset \mathbb{R}^d$ in
radial isotropic position, i.e., finding an invertible $\mathbf{R} \in
\mathbb{R}^{d \times d}$ such that the unit vectors $\{(\mathbf{R}
\mathbf{a}_i) \|\mathbf{R} \mathbf{a}_i\|_2^{-1}\}_{i \in [n]}$ are in
isotropic position, is a powerful tool with applications in functional
analysis, communication complexity, coding theory, and the design of learning
algorithms. When the transformed dataset has a second moment matrix within a
$\exp(\pm \epsilon)$ factor of a multiple of $\mathbf{I}_d$, we call
$\mathbf{R}$ an $\epsilon$-approximate Forster transform.
We give a faster algorithm for computing approximate Forster transforms,
based on optimizing an objective defined by Barthe [Barthe98]. When the
transform has a polynomially-bounded aspect ratio, our algorithm uses
$O(nd^{\omega - 1}(\frac n \epsilon)^{o(1)})$ time to output an
$\epsilon$-approximate Forster transform with high probability, when one
exists. This is almost the natural limit of this approach, as even evaluating
Barthe's objective takes $O(nd^{\omega - 1})$ time. Previously, the
state-of-the-art runtime in this regime was based on cutting-plane methods, and
scaled at least as $\approx n^3 + n^2 d^{\omega - 1}$. We also provide explicit
estimates on the aspect ratio in the smoothed analysis setting, and show that
our algorithm similarly improves upon those in the literature.
To obtain our results, we develop a subroutine of potential broader interest:
a reduction from almost-linear time sparsification of graph Laplacians to the
ability to support almost-linear time matrix-vector products. We combine this
tool with new stability bounds on Barthe's objective to implicitly implement a
box-constrained Newton's method [CMTV17, ALOW17].
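The definition of an $\epsilon$-approximate Forster transform above translates directly into a NumPy check: normalize the transformed points and test whether the eigenvalues of their second moment matrix lie within exp(±ε) of a common multiple of the identity (taking the mean eigenvalue as that multiple is one simple choice).

```python
import numpy as np

def is_approx_forster(R, A, eps=0.1):
    """Check rows a_i of A: eigenvalues of the second moment matrix of
    (R a_i)/||R a_i||_2 must lie within exp(+/- eps) of a multiple of I_d."""
    V = A @ R.T
    V = V / np.linalg.norm(V, axis=1, keepdims=True)   # unit vectors
    n, d = V.shape
    M = (V.T @ V) / n                                  # second moment matrix
    eigs = np.linalg.eigvalsh(M)
    scale = np.trace(M) / d                            # candidate multiple of I_d
    return bool(np.all(eigs >= scale * np.exp(-eps)) and
                np.all(eigs <= scale * np.exp(eps)))
```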
|
2504.05691 | Sudeshna Jana | Sudeshna Jana, Manjira Sinha and Tirthankar Dasgupta | StayLTC: A Cost-Effective Multimodal Framework for Hospital Length of
Stay Forecasting | 4 pages, 3 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate prediction of Length of Stay (LOS) in hospitals is crucial for
improving healthcare services, resource management, and cost efficiency. This
paper presents StayLTC, a multimodal deep learning framework developed to
forecast real-time hospital LOS using Liquid Time-Constant Networks (LTCs).
LTCs, with their continuous-time recurrent dynamics, are evaluated against
traditional models using structured data from Electronic Health Records (EHRs)
and clinical notes. Our evaluation, conducted on the MIMIC-III dataset,
demonstrated that LTCs significantly outperform most of the other time series
models, offering enhanced accuracy, robustness, and efficiency in resource
utilization. Additionally, LTCs achieve LOS prediction performance comparable
to that of time-series large language models, while requiring
significantly less computational power and memory, underscoring their potential
to advance Natural Language Processing (NLP) tasks in healthcare.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 05:27:53 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Jana",
"Sudeshna",
""
],
[
"Sinha",
"Manjira",
""
],
[
"Dasgupta",
"Tirthankar",
""
]
] | TITLE: StayLTC: A Cost-Effective Multimodal Framework for Hospital Length of
Stay Forecasting
ABSTRACT: Accurate prediction of Length of Stay (LOS) in hospitals is crucial for
improving healthcare services, resource management, and cost efficiency. This
paper presents StayLTC, a multimodal deep learning framework developed to
forecast real-time hospital LOS using Liquid Time-Constant Networks (LTCs).
LTCs, with their continuous-time recurrent dynamics, are evaluated against
traditional models using structured data from Electronic Health Records (EHRs)
and clinical notes. Our evaluation, conducted on the MIMIC-III dataset,
demonstrated that LTCs significantly outperform most of the other time series
models, offering enhanced accuracy, robustness, and efficiency in resource
utilization. Additionally, LTCs achieve LOS prediction performance comparable
to that of time-series large language models, while requiring
significantly less computational power and memory, underscoring their potential
to advance Natural Language Processing (NLP) tasks in healthcare.
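For orientation, a minimal liquid time-constant cell can be written by hand; the sketch below follows the fused explicit-implicit Euler step from Hasani et al.'s LTC formulation, with illustrative sizes and gating, not StayLTC's actual configuration.

```python
import torch
import torch.nn as nn

class LiquidCell(nn.Module):
    """One LTC-style recurrent step: dh/dt = -h/tau + f(x, h) * (A - h),
    discretized with a fused Euler step (a simplified reading)."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.gate = nn.Linear(in_dim + hid_dim, hid_dim)
        self.tau = nn.Parameter(torch.ones(hid_dim))   # base time constants
        self.A = nn.Parameter(torch.zeros(hid_dim))    # equilibrium targets
    def forward(self, x, h, dt=1.0):
        f = torch.sigmoid(self.gate(torch.cat([x, h], dim=-1)))
        tau = self.tau.abs().clamp(min=1e-3)
        # fused step: h' = (h + dt * f * A) / (1 + dt * (1/tau + f))
        return (h + dt * f * self.A) / (1.0 + dt * (1.0 / tau + f))
```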
|
2504.05696 | Adi Wijaya | Sidhiq Mardianta, Affandy, Catur Supriyanto, Catur Supriyanto, Adi
Wijaya | Diabetic Retinopathy Detection Based on Convolutional Neural Networks
with SMOTE and CLAHE Techniques Applied to Fundus Images | 6 pages, 6 figures, 2 tables | null | null | null | eess.IV cs.CV cs.LG q-bio.NC | http://creativecommons.org/licenses/by/4.0/ | Diabetic retinopathy (DR) is one of the major complications in diabetic
patients' eyes, potentially leading to permanent blindness if not detected
timely. This study aims to evaluate the accuracy of artificial intelligence
(AI) in diagnosing DR. The method employed is the Synthetic Minority
Over-sampling Technique (SMOTE) algorithm, applied to identify DR and its
severity stages from fundus images using the public dataset "APTOS 2019
Blindness Detection." Literature was reviewed via ScienceDirect, ResearchGate,
Google Scholar, and IEEE Xplore. Classification results using Convolutional
Neural Network (CNN) showed the best performance for the binary classes normal
(0) and DR (1) with an accuracy of 99.55%, precision of 99.54%, recall of
99.54%, and F1-score of 99.54%. For the multiclass classification No_DR (0),
Mild (1), Moderate (2), Severe (3), Proliferate_DR (4), the accuracy was
95.26%, precision 95.26%, recall 95.17%, and F1-score 95.23%. Evaluation using
the confusion matrix yielded results of 99.68% for binary classification and
96.65% for multiclass. This study highlights AI's significant potential to
enhance the accuracy of DR diagnosis compared to traditional human analysis.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 05:38:53 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Mardianta",
"Sidhiq",
""
],
[
"Affandy",
"",
""
],
[
"Supriyanto",
"Catur",
""
],
[
"Supriyanto",
"Catur",
""
],
[
"Wijaya",
"Adi",
""
]
] | TITLE: Diabetic Retinopathy Detection Based on Convolutional Neural Networks
with SMOTE and CLAHE Techniques Applied to Fundus Images
ABSTRACT: Diabetic retinopathy (DR) is one of the major complications in diabetic
patients' eyes, potentially leading to permanent blindness if not detected
timely. This study aims to evaluate the accuracy of artificial intelligence
(AI) in diagnosing DR. The method employed is the Synthetic Minority
Over-sampling Technique (SMOTE) algorithm, applied to identify DR and its
severity stages from fundus images using the public dataset "APTOS 2019
Blindness Detection." Literature was reviewed via ScienceDirect, ResearchGate,
Google Scholar, and IEEE Xplore. Classification results using Convolutional
Neural Network (CNN) showed the best performance for the binary classes normal
(0) and DR (1) with an accuracy of 99.55%, precision of 99.54%, recall of
99.54%, and F1-score of 99.54%. For the multiclass classification No_DR (0),
Mild (1), Moderate (2), Severe (3), Proliferate_DR (4), the accuracy was
95.26%, precision 95.26%, recall 95.17%, and F1-score 95.23%. Evaluation using
the confusion matrix yielded results of 99.68% for binary classification and
96.65% for multiclass. This study highlights AI's significant potential to
enhance the accuracy of DR diagnosis compared to traditional human analysis.
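Both named preprocessing techniques have standard open-source implementations; the sketch below applies CLAHE (via OpenCV) on the lightness channel and SMOTE (via imbalanced-learn) on feature vectors. Parameter values and the toy data are assumptions, not the paper's settings.

```python
import cv2
import numpy as np
from imblearn.over_sampling import SMOTE

def apply_clahe(img_bgr):
    """CLAHE on the L channel in LAB space, a common fundus-image recipe."""
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    return cv2.cvtColor(cv2.merge((clahe.apply(l), a, b)), cv2.COLOR_LAB2BGR)

# SMOTE balances classes at the feature level; X/y below are stand-ins
# with a 9:1 class imbalance.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 64))
y = np.array([0] * 90 + [1] * 10)
X_res, y_res = SMOTE(random_state=42).fit_resample(X, y)
```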
|
2504.05697 | Rui Qiu | Rui Qiu, Yamei Tu, Po-Yin Yen, Han-Wei Shen | VADIS: A Visual Analytics Pipeline for Dynamic Document Representation
and Information-Seeking | null | null | null | null | cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the biomedical domain, visualizing the document embeddings of an extensive
corpus has been widely used in information-seeking tasks. However, three key
challenges with existing visualizations make it difficult for clinicians to
find information efficiently. First, the document embeddings used in these
visualizations are generated statically by pretrained language models, which
cannot adapt to the user's evolving interest. Second, existing document
visualization techniques cannot effectively display how the documents are
relevant to users' interest, making it difficult for users to identify the most
pertinent information. Third, existing embedding generation and visualization
processes suffer from a lack of interpretability, making it difficult to
understand, trust and use the result for decision-making. In this paper, we
present a novel visual analytics pipeline for user driven document
representation and iterative information seeking (VADIS). VADIS introduces a
prompt-based attention model (PAM) that generates dynamic document embedding
and document relevance adjusted to the user's query. To effectively visualize
these two pieces of information, we design a new document map that leverages a
circular grid layout to display documents based on both their relevance to the
query and the semantic similarity. Additionally, to improve the
interpretability, we introduce a corpus-level attention visualization method to
improve the user's understanding of the model focus and to enable the users to
identify potential oversight. This visualization, in turn, empowers users to
refine, update and introduce new queries, thereby facilitating a dynamic and
iterative information-seeking experience. We evaluated VADIS quantitatively and
qualitatively on a real-world dataset of biomedical research papers to
demonstrate its effectiveness.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 05:39:11 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Qiu",
"Rui",
""
],
[
"Tu",
"Yamei",
""
],
[
"Yen",
"Po-Yin",
""
],
[
"Shen",
"Han-Wei",
""
]
] | TITLE: VADIS: A Visual Analytics Pipeline for Dynamic Document Representation
and Information-Seeking
ABSTRACT: In the biomedical domain, visualizing the document embeddings of an extensive
corpus has been widely used in information-seeking tasks. However, three key
challenges with existing visualizations make it difficult for clinicians to
find information efficiently. First, the document embeddings used in these
visualizations are generated statically by pretrained language models, which
cannot adapt to the user's evolving interest. Second, existing document
visualization techniques cannot effectively display how the documents are
relevant to users' interest, making it difficult for users to identify the most
pertinent information. Third, existing embedding generation and visualization
processes suffer from a lack of interpretability, making it difficult to
understand, trust and use the result for decision-making. In this paper, we
present a novel visual analytics pipeline for user driven document
representation and iterative information seeking (VADIS). VADIS introduces a
prompt-based attention model (PAM) that generates dynamic document embedding
and document relevance adjusted to the user's query. To effectively visualize
these two pieces of information, we design a new document map that leverages a
circular grid layout to display documents based on both their relevance to the
query and the semantic similarity. Additionally, to improve the
interpretability, we introduce a corpus-level attention visualization method to
improve the user's understanding of the model focus and to enable the users to
identify potential oversight. This visualization, in turn, empowers users to
refine, update and introduce new queries, thereby facilitating a dynamic and
iterative information-seeking experience. We evaluated VADIS quantitatively and
qualitatively on a real-world dataset of biomedical research papers to
demonstrate its effectiveness.
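The relevance-to-radius idea behind the circular document map can be illustrated in a few lines: more relevant documents land closer to the center (the query), while angular position follows a semantic ordering. The actual grid assignment in VADIS is more involved; this NumPy sketch is assumption-level.

```python
import numpy as np

def circular_layout(relevance, semantic_rank):
    """Map documents to 2D: radius = 1 - relevance (in [0, 1]),
    angle spread by a precomputed semantic ordering."""
    relevance = np.asarray(relevance, dtype=float)
    radius = 1.0 - relevance                  # high relevance -> near center
    theta = 2.0 * np.pi * np.asarray(semantic_rank) / len(relevance)
    return np.stack([radius * np.cos(theta), radius * np.sin(theta)], axis=1)
```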
|
2504.05698 | Wesley Khademi | Wesley Khademi, Li Fuxin | Point-based Instance Completion with Scene Constraints | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent point-based object completion methods have demonstrated the ability to
accurately recover the missing geometry of partially observed objects. However,
these approaches are not well-suited for completing objects within a scene, as
they do not consider known scene constraints (e.g., other observed surfaces) in
their completions and further expect the partial input to be in a canonical
coordinate system, which does not hold for objects within scenes. While
instance scene completion methods have been proposed for completing objects
within a scene, they lag behind point-based object completion methods in terms
of object completion quality and still do not consider known scene constraints
during completion. To overcome these limitations, we propose a point
cloud-based instance completion model that can robustly complete objects at
arbitrary scales and pose in the scene. To enable reasoning at the scene level,
we introduce a sparse set of scene constraints represented as point clouds and
integrate them into our completion model via a cross-attention mechanism. To
evaluate the instance scene completion task on indoor scenes, we further build
a new dataset called ScanWCF, which contains labeled partial scans as well as
aligned ground truth scene completions that are watertight and collision-free.
Through several experiments, we demonstrate that our method achieves improved
fidelity to partial scans, higher completion quality, and greater plausibility
over existing state-of-the-art methods.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 05:41:49 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Khademi",
"Wesley",
""
],
[
"Fuxin",
"Li",
""
]
] | TITLE: Point-based Instance Completion with Scene Constraints
ABSTRACT: Recent point-based object completion methods have demonstrated the ability to
accurately recover the missing geometry of partially observed objects. However,
these approaches are not well-suited for completing objects within a scene, as
they do not consider known scene constraints (e.g., other observed surfaces) in
their completions and further expect the partial input to be in a canonical
coordinate system, which does not hold for objects within scenes. While
instance scene completion methods have been proposed for completing objects
within a scene, they lag behind point-based object completion methods in terms
of object completion quality and still do not consider known scene constraints
during completion. To overcome these limitations, we propose a point
cloud-based instance completion model that can robustly complete objects at
arbitrary scales and poses in the scene. To enable reasoning at the scene level,
we introduce a sparse set of scene constraints represented as point clouds and
integrate them into our completion model via a cross-attention mechanism. To
evaluate the instance scene completion task on indoor scenes, we further build
a new dataset called ScanWCF, which contains labeled partial scans as well as
aligned ground truth scene completions that are watertight and collision-free.
Through several experiments, we demonstrate that our method achieves improved
fidelity to partial scans, higher completion quality, and greater plausibility
over existing state-of-the-art methods.
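The cross-attention mechanism for injecting scene constraints can be sketched directly with PyTorch's built-in attention; the dimensions and the linear point encoder are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class SceneConstraintAttention(nn.Module):
    """Object tokens attend to a sparse set of scene-constraint points."""
    def __init__(self, d_model=256, n_heads=8):
        super().__init__()
        self.point_enc = nn.Linear(3, d_model)           # xyz -> feature
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
    def forward(self, obj_tokens, constraint_xyz):
        kv = self.point_enc(constraint_xyz)              # (B, M, d_model)
        out, _ = self.attn(obj_tokens, kv, kv)           # queries = object tokens
        return obj_tokens + out                          # residual update
```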
|
2504.05700 | Zhihao Zhao | Seth Z. Zhao, Reza Ghoddoosian, Isht Dwivedi, Nakul Agarwal, Behzad
Dariush | Pose-Aware Weakly-Supervised Action Segmentation | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Understanding human behavior is an important problem in the pursuit of visual
intelligence. A challenge in this endeavor is the extensive and costly effort
required to accurately label action segments. To address this issue, we
consider learning methods that demand minimal supervision for segmentation of
human actions in long instructional videos. Specifically, we introduce a
weakly-supervised framework that uniquely incorporates pose knowledge during
training while omitting its use during inference, thereby distilling pose
knowledge pertinent to each action component. We propose a pose-inspired
contrastive loss as part of the weakly-supervised framework, which is trained
to distinguish action boundaries more effectively. Our approach,
validated through extensive experiments on representative datasets, outperforms
previous state-of-the-art (SOTA) in segmenting long instructional videos under
both online and offline settings. Additionally, we demonstrate the framework's
adaptability to various segmentation backbones and pose extractors across
different datasets.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 05:42:55 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Zhao",
"Seth Z.",
""
],
[
"Ghoddoosian",
"Reza",
""
],
[
"Dwivedi",
"Isht",
""
],
[
"Agarwal",
"Nakul",
""
],
[
"Dariush",
"Behzad",
""
]
] | TITLE: Pose-Aware Weakly-Supervised Action Segmentation
ABSTRACT: Understanding human behavior is an important problem in the pursuit of visual
intelligence. A challenge in this endeavor is the extensive and costly effort
required to accurately label action segments. To address this issue, we
consider learning methods that demand minimal supervision for segmentation of
human actions in long instructional videos. Specifically, we introduce a
weakly-supervised framework that uniquely incorporates pose knowledge during
training while omitting its use during inference, thereby distilling pose
knowledge pertinent to each action component. We propose a pose-inspired
contrastive loss as part of the weakly-supervised framework, which is trained
to distinguish action boundaries more effectively. Our approach,
validated through extensive experiments on representative datasets, outperforms
previous state-of-the-art (SOTA) in segmenting long instructional videos under
both online and offline settings. Additionally, we demonstrate the framework's
adaptability to various segmentation backbones and pose extractors across
different datasets.
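As a stand-in for the pose-inspired contrastive loss, the sketch below applies an InfoNCE-style objective over frame features: frames sharing an action label are pulled together and frames across boundaries pushed apart. The paper's exact formulation (and its use of pose features) is not reproduced here.

```python
import torch
import torch.nn.functional as F

def boundary_contrastive_loss(frame_feats, frame_labels, temperature=0.1):
    """frame_feats: (T, d); frame_labels: (T,) integer action labels.
    Assumes each action segment contributes at least two frames."""
    z = F.normalize(frame_feats, dim=-1)
    T = z.shape[0]
    sim = (z @ z.T) / temperature
    eye = torch.eye(T, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(eye, float("-inf"))        # exclude self-pairs
    pos = (frame_labels[:, None] == frame_labels[None, :]) & ~eye
    log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
    return -log_prob[pos].mean()
```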
|
2504.05706 | Fida Mohammad Thoker | Fida Mohammad Thoker, Letian Jiang, Chen Zhao, Piyush Bagad, Hazel
Doughty, Bernard Ghanem, Cees G. M. Snoek | SEVERE++: Evaluating Benchmark Sensitivity in Generalization of Video
Representation Learning | Under Review | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Continued advances in self-supervised learning have led to significant
progress in video representation learning, offering a scalable alternative to
supervised approaches by removing the need for manual annotations. Despite
strong performance on standard action recognition benchmarks, video
self-supervised learning methods are largely evaluated under narrow protocols,
typically pretraining on Kinetics-400 and fine-tuning on similar datasets,
limiting our understanding of their generalization in real world scenarios. In
this work, we present a comprehensive evaluation of modern video
self-supervised models, focusing on generalization across four key downstream
factors: domain shift, sample efficiency, action granularity, and task
diversity. Building on our prior work analyzing benchmark sensitivity in
CNN-based contrastive learning, we extend the study to cover state-of-the-art
transformer-based video-only and video-text models. Specifically, we benchmark
12 transformer-based methods (7 video-only, 5 video-text) and compare them to
10 CNN-based methods, totaling over 1100 experiments across 8 datasets and 7
downstream tasks. Our analysis shows that, despite architectural advances,
transformer-based models remain sensitive to downstream conditions. No method
generalizes consistently across all factors: video-only transformers perform
better under domain shifts, CNNs outperform on fine-grained tasks, and
video-text models often underperform despite large-scale pretraining. We also
find that recent transformer models do not consistently outperform earlier
approaches. Our findings provide a detailed view of the strengths and
limitations of current video SSL methods and offer a unified benchmark for
evaluating generalization in video representation learning.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 06:00:28 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Thoker",
"Fida Mohammad",
""
],
[
"Jiang",
"Letian",
""
],
[
"Zhao",
"Chen",
""
],
[
"Bagad",
"Piyush",
""
],
[
"Doughty",
"Hazel",
""
],
[
"Ghanem",
"Bernard",
""
],
[
"Snoek",
"Cees G. M.",
""
]
] | TITLE: SEVERE++: Evaluating Benchmark Sensitivity in Generalization of Video
Representation Learning
ABSTRACT: Continued advances in self-supervised learning have led to significant
progress in video representation learning, offering a scalable alternative to
supervised approaches by removing the need for manual annotations. Despite
strong performance on standard action recognition benchmarks, video
self-supervised learning methods are largely evaluated under narrow protocols,
typically pretraining on Kinetics-400 and fine-tuning on similar datasets,
limiting our understanding of their generalization in real world scenarios. In
this work, we present a comprehensive evaluation of modern video
self-supervised models, focusing on generalization across four key downstream
factors: domain shift, sample efficiency, action granularity, and task
diversity. Building on our prior work analyzing benchmark sensitivity in
CNN-based contrastive learning, we extend the study to cover state-of-the-art
transformer-based video-only and video-text models. Specifically, we benchmark
12 transformer-based methods (7 video-only, 5 video-text) and compare them to
10 CNN-based methods, totaling over 1100 experiments across 8 datasets and 7
downstream tasks. Our analysis shows that, despite architectural advances,
transformer-based models remain sensitive to downstream conditions. No method
generalizes consistently across all factors: video-only transformers perform
better under domain shifts, CNNs outperform on fine-grained tasks, and
video-text models often underperform despite large-scale pretraining. We also
find that recent transformer models do not consistently outperform earlier
approaches. Our findings provide a detailed view of the strengths and
limitations of current video SSL methods and offer a unified benchmark for
evaluating generalization in video representation learning.
|
2504.05711 | Jinghua Groppe | Jinghua Groppe, Andreas Marquet, Annabel Walz, Sven Groppe | Automated Archival Descriptions with Federated Intelligence of LLMs | 15 pages | null | null | null | cs.AI cs.DL cs.IR cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Enforcing archival standards requires specialized expertise, and manually
creating metadata descriptions for archival materials is a tedious and
error-prone task. This work aims to explore the potential of agentic AI and
large language models (LLMs) in addressing the challenges of implementing a
standardized archival description process. To this end, we introduce an agentic
AI-driven system for automated generation of high-quality metadata descriptions
of archival materials. We develop a federated optimization approach that unites
the intelligence of multiple LLMs to construct optimal archival metadata. We
also suggest methods to overcome the challenges associated with using LLMs for
consistent metadata generation. To evaluate the feasibility and effectiveness
of our techniques, we conducted extensive experiments using a real-world
dataset of archival materials, which covers a variety of document types and
data formats. The evaluation results demonstrate the feasibility of our
techniques and highlight the superior performance of the federated optimization
approach compared to single-model solutions in metadata quality and
reliability.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 06:11:05 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Groppe",
"Jinghua",
""
],
[
"Marquet",
"Andreas",
""
],
[
"Walz",
"Annabel",
""
],
[
"Groppe",
"Sven",
""
]
] | TITLE: Automated Archival Descriptions with Federated Intelligence of LLMs
ABSTRACT: Enforcing archival standards requires specialized expertise, and manually
creating metadata descriptions for archival materials is a tedious and
error-prone task. This work aims to explore the potential of agentic AI and
large language models (LLMs) in addressing the challenges of implementing a
standardized archival description process. To this end, we introduce an agentic
AI-driven system for automated generation of high-quality metadata descriptions
of archival materials. We develop a federated optimization approach that unites
the intelligence of multiple LLMs to construct optimal archival metadata. We
also suggest methods to overcome the challenges associated with using LLMs for
consistent metadata generation. To evaluate the feasibility and effectiveness
of our techniques, we conducted extensive experiments using a real-world
dataset of archival materials, which covers a variety of document types and
data formats. The evaluation results demonstrate the feasibility of our
techniques and highlight the superior performance of the federated optimization
approach compared to single-model solutions in metadata quality and
reliability.
|
2504.05716 | Valdemar \v{S}v\'abensk\'y | Gen Li, Li Chen, Cheng Tang, Valdemar \v{S}v\'abensk\'y, Daisuke
Deguchi, Takayoshi Yamashita, Atsushi Shimada | Single-Agent vs. Multi-Agent LLM Strategies for Automated Student
Reflection Assessment | To be published in Proceedings of the 29th Pacific-Asia Conference on
Knowledge Discovery and Data Mining (PAKDD 2025) | null | null | null | cs.LG cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We explore the use of Large Language Models (LLMs) for automated assessment
of open-text student reflections and prediction of academic performance.
Traditional methods for evaluating reflections are time-consuming and may not
scale effectively in educational settings. In this work, we employ LLMs to
transform student reflections into quantitative scores using two assessment
strategies (single-agent and multi-agent) and two prompting techniques
(zero-shot and few-shot). Our experiments, conducted on a dataset of 5,278
reflections from 377 students over three academic terms, demonstrate that the
single-agent with few-shot strategy achieves the highest match rate with human
evaluations. Furthermore, models utilizing LLM-assessed reflection scores
outperform baselines in both at-risk student identification and grade
prediction tasks. These findings suggest that LLMs can effectively automate
reflection assessment, reduce educators' workload, and enable timely support
for students who may need additional assistance. Our work emphasizes the
potential of integrating advanced generative AI technologies into educational
practices to enhance student engagement and academic success.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 06:34:15 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Li",
"Gen",
""
],
[
"Chen",
"Li",
""
],
[
"Tang",
"Cheng",
""
],
[
"Švábenský",
"Valdemar",
""
],
[
"Deguchi",
"Daisuke",
""
],
[
"Yamashita",
"Takayoshi",
""
],
[
"Shimada",
"Atsushi",
""
]
] | TITLE: Single-Agent vs. Multi-Agent LLM Strategies for Automated Student
Reflection Assessment
ABSTRACT: We explore the use of Large Language Models (LLMs) for automated assessment
of open-text student reflections and prediction of academic performance.
Traditional methods for evaluating reflections are time-consuming and may not
scale effectively in educational settings. In this work, we employ LLMs to
transform student reflections into quantitative scores using two assessment
strategies (single-agent and multi-agent) and two prompting techniques
(zero-shot and few-shot). Our experiments, conducted on a dataset of 5,278
reflections from 377 students over three academic terms, demonstrate that the
single-agent with few-shot strategy achieves the highest match rate with human
evaluations. Furthermore, models utilizing LLM-assessed reflection scores
outperform baselines in both at-risk student identification and grade
prediction tasks. These findings suggest that LLMs can effectively automate
reflection assessment, reduce educators' workload, and enable timely support
for students who may need additional assistance. Our work emphasizes the
potential of integrating advanced generative AI technologies into educational
practices to enhance student engagement and academic success.
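The single-agent strategy is one model call per reflection; a multi-agent variant can be as simple as polling several LLM "agents" and taking a majority vote, as in this sketch (the callable interface and voting rule are hypothetical, not the paper's implementation).

```python
from collections import Counter

def multi_agent_score(llm_agents, reflection):
    """Each agent is a callable returning an integer score; majority wins."""
    votes = [agent(reflection) for agent in llm_agents]
    return Counter(votes).most_common(1)[0][0]
```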
|
2504.05728 | Tianqi Ding | Tianqi Ding and Dawei Xiang and Tianyao Sun and YiJiashum Qi and
Zunduo Zhao | AI-Driven Prognostics for State of Health Prediction in Li-ion
Batteries: A Comprehensive Analysis with Validation | 8 pages, 12 figures, Accepted by 2025 6th International Conference on
Electrical Technology and Automatic Control(ICETAC 2025) | null | null | null | cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a comprehensive review of AI-driven prognostics for State
of Health (SoH) prediction in lithium-ion batteries. We compare the
effectiveness of various AI algorithms, including FFNN, LSTM, and BiLSTM,
across multiple datasets (CALCE, NASA, UDDS) and scenarios (e.g., varying
temperatures and driving conditions). Additionally, we analyze the factors
influencing SoH fluctuations, such as temperature and charge-discharge rates,
and validate our findings through simulations. The results demonstrate that
BiLSTM achieves the highest accuracy, with an average RMSE reduction of 15%
compared to LSTM, highlighting its robustness in real-world applications.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 06:58:39 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Ding",
"Tianqi",
""
],
[
"Xiang",
"Dawei",
""
],
[
"Sun",
"Tianyao",
""
],
[
"Qi",
"YiJiashum",
""
],
[
"Zhao",
"Zunduo",
""
]
] | TITLE: AI-Driven Prognostics for State of Health Prediction in Li-ion
Batteries: A Comprehensive Analysis with Validation
ABSTRACT: This paper presents a comprehensive review of AI-driven prognostics for State
of Health (SoH) prediction in lithium-ion batteries. We compare the
effectiveness of various AI algorithms, including FFNN, LSTM, and BiLSTM,
across multiple datasets (CALCE, NASA, UDDS) and scenarios (e.g., varying
temperatures and driving conditions). Additionally, we analyze the factors
influencing SoH fluctuations, such as temperature and charge-discharge rates,
and validate our findings through simulations. The results demonstrate that
BiLSTM achieves the highest accuracy, with an average RMSE reduction of 15%
compared to LSTM, highlighting its robustness in real-world applications.
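The BiLSTM architecture that performed best is straightforward to instantiate in PyTorch; layer sizes and the per-cycle input features below are illustrative assumptions.

```python
import torch
import torch.nn as nn

class BiLSTMSoH(nn.Module):
    """Bidirectional LSTM regressor: per-cycle battery features -> SoH."""
    def __init__(self, n_features=8, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, num_layers=2,
                            batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)
    def forward(self, x):                   # x: (batch, cycles, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])        # SoH estimate from the final step
```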
|
2504.05736 | Cai Yida | Yida Cai, Kun Liang, Sanwoo Lee, Qinghan Wang, Yunfang Wu | Rank-Then-Score: Enhancing Large Language Models for Automated Essay
Scoring | 17 pages | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, large language models (LLMs) have achieved remarkable success
across a variety of tasks. However, their potential in the domain of Automated
Essay Scoring (AES) remains largely underexplored. Moreover, compared to
English data, methods for Chinese AES are not as well developed. In this paper,
we propose Rank-Then-Score (RTS), a fine-tuning framework based on large
language models to enhance their essay scoring capabilities. Specifically, we
fine-tune the ranking model (Ranker) with feature-enriched data, and then feed
the output of the ranking model, in the form of a candidate score set, with the
essay content into the scoring model (Scorer) to produce the final score.
Experimental results on two benchmark datasets, HSK and ASAP, demonstrate that
RTS consistently outperforms the direct prompting (Vanilla) method in terms of
average QWK across all LLMs and datasets, and achieves the best performance on
Chinese essay scoring using the HSK dataset.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 07:10:51 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Cai",
"Yida",
""
],
[
"Liang",
"Kun",
""
],
[
"Lee",
"Sanwoo",
""
],
[
"Wang",
"Qinghan",
""
],
[
"Wu",
"Yunfang",
""
]
] | TITLE: Rank-Then-Score: Enhancing Large Language Models for Automated Essay
Scoring
ABSTRACT: In recent years, large language models (LLMs) have achieved remarkable success
across a variety of tasks. However, their potential in the domain of Automated
Essay Scoring (AES) remains largely underexplored. Moreover, compared to
English data, methods for Chinese AES are not as well developed. In this paper,
we propose Rank-Then-Score (RTS), a fine-tuning framework based on large
language models to enhance their essay scoring capabilities. Specifically, we
fine-tune the ranking model (Ranker) with feature-enriched data, and then feed
the output of the ranking model, in the form of a candidate score set, with the
essay content into the scoring model (Scorer) to produce the final score.
Experimental results on two benchmark datasets, HSK and ASAP, demonstrate that
RTS consistently outperforms the direct prompting (Vanilla) method in terms of
average QWK across all LLMs and datasets, and achieves the best performance on
Chinese essay scoring using the HSK dataset.
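The two-stage Ranker-then-Scorer flow reduces to a short pipeline; the sketch below treats both fine-tuned LLMs as opaque callables (a hypothetical interface) and shows how the candidate score set is threaded into the Scorer's prompt.

```python
def rank_then_score(ranker, scorer, essay, features):
    """ranker(essay, features) -> candidate score list (e.g., [3, 4, 5]);
    scorer(prompt) -> final score. Both are hypothetical wrappers around
    the fine-tuned models."""
    candidate_scores = ranker(essay, features)
    prompt = (f"Essay:\n{essay}\n\n"
              f"Candidate scores: {candidate_scores}\n"
              "Choose the most appropriate final score.")
    return scorer(prompt)
```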
|
2504.05751 | Jiangsan Zhao Dr. | Jiangsan Zhao, Jakob Geipel, Krzysztof Kusnierek, Xuean Cui | InvNeRF-Seg: Fine-Tuning a Pre-Trained NeRF for 3D Object Segmentation | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Neural Radiance Fields (NeRF) have been widely adopted for reconstructing
high quality 3D point clouds from 2D RGB images. However, the segmentation of
these reconstructed 3D scenes is more essential for downstream tasks such as
object counting, size estimation, and scene understanding. While segmentation
on raw 3D point clouds using deep learning requires labor intensive and
time-consuming manual annotation, directly training NeRF on binary masks also
fails due to the absence of color and shading cues essential for geometry
learning. We propose Invariant NeRF for Segmentation (InvNeRFSeg), a two step,
zero change fine tuning strategy for 3D segmentation. We first train a standard
NeRF on RGB images and then fine tune it using 2D segmentation masks without
altering either the model architecture or loss function. This approach produces
higher quality, cleaner segmented point clouds directly from the refined
radiance field with minimal computational overhead or complexity. Field density
analysis reveals consistent semantic refinement: densities of object regions
increase while background densities are suppressed, ensuring clean and
interpretable segmentations. We demonstrate InvNeRFSeg's superior performance
over both SA3D and FruitNeRF on both synthetic fruit and real world soybean
datasets. This approach effectively extends 2D segmentation to high quality 3D
segmentation.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 07:31:01 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Zhao",
"Jiangsan",
""
],
[
"Geipel",
"Jakob",
""
],
[
"Kusnierek",
"Krzysztof",
""
],
[
"Cui",
"Xuean",
""
]
] | TITLE: InvNeRF-Seg: Fine-Tuning a Pre-Trained NeRF for 3D Object Segmentation
ABSTRACT: Neural Radiance Fields (NeRF) have been widely adopted for reconstructing
high quality 3D point clouds from 2D RGB images. However, the segmentation of
these reconstructed 3D scenes is more essential for downstream tasks such as
object counting, size estimation, and scene understanding. While segmentation
on raw 3D point clouds using deep learning requires labor intensive and
time-consuming manual annotation, directly training NeRF on binary masks also
fails due to the absence of color and shading cues essential for geometry
learning. We propose Invariant NeRF for Segmentation (InvNeRFSeg), a two-step,
zero-change fine-tuning strategy for 3D segmentation. We first train a standard
NeRF on RGB images and then fine-tune it using 2D segmentation masks without
altering either the model architecture or loss function. This approach produces
higher quality, cleaner segmented point clouds directly from the refined
radiance field with minimal computational overhead or complexity. Field density
analysis reveals consistent semantic refinement: densities of object regions
increase while background densities are suppressed, ensuring clean and
interpretable segmentations. We demonstrate InvNeRFSeg's superior performance
over both SA3D and FruitNeRF on both synthetic fruit and real world soybean
datasets. This approach effectively extends 2D segmentation to high quality 3D
segmentation.
|
2504.05756 | Marco Virgolin | Luigi Rovito, Marco Virgolin | Interpretable Non-linear Survival Analysis with Evolutionary Symbolic
Regression | null | null | 10.1145/3712256.3726446 | null | cs.LG cs.NE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Survival Regression (SuR) is a key technique for modeling time to event in
important applications such as clinical trials and semiconductor manufacturing.
Currently, SuR algorithms belong to one of three classes: non-linear black-box
-- allowing adaptability to many datasets but offering limited interpretability
(e.g., tree ensembles); linear glass-box -- being easier to interpret but
limited to modeling only linear interactions (e.g., Cox proportional hazards);
and non-linear glass-box -- allowing adaptability and interpretability, but
empirically found to have several limitations (e.g., explainable boosting
machines, survival trees). In this work, we investigate whether Symbolic
Regression (SR), i.e., the automated search of mathematical expressions from
data, can lead to non-linear glass-box survival models that are interpretable
and accurate. We propose an evolutionary, multi-objective, and multi-expression
implementation of SR adapted to SuR. Our empirical results on five real-world
datasets show that SR consistently outperforms traditional glass-box methods
for SuR in terms of accuracy per number of dimensions in the model, while
exhibiting comparable accuracy with black-box methods. Furthermore, we offer
qualitative examples to assess the interpretability potential of SR models for
SuR. Code at: https://github.com/lurovi/SurvivalMultiTree-pyNSGP.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 07:37:37 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Rovito",
"Luigi",
""
],
[
"Virgolin",
"Marco",
""
]
] | TITLE: Interpretable Non-linear Survival Analysis with Evolutionary Symbolic
Regression
ABSTRACT: Survival Regression (SuR) is a key technique for modeling time to event in
important applications such as clinical trials and semiconductor manufacturing.
Currently, SuR algorithms belong to one of three classes: non-linear black-box
-- allowing adaptability to many datasets but offering limited interpretability
(e.g., tree ensembles); linear glass-box -- being easier to interpret but
limited to modeling only linear interactions (e.g., Cox proportional hazards);
and non-linear glass-box -- allowing adaptability and interpretability, but
empirically found to have several limitations (e.g., explainable boosting
machines, survival trees). In this work, we investigate whether Symbolic
Regression (SR), i.e., the automated search of mathematical expressions from
data, can lead to non-linear glass-box survival models that are interpretable
and accurate. We propose an evolutionary, multi-objective, and multi-expression
implementation of SR adapted to SuR. Our empirical results on five real-world
datasets show that SR consistently outperforms traditional glass-box methods
for SuR in terms of accuracy per number of dimensions in the model, while
exhibiting comparable accuracy with black-box methods. Furthermore, we offer
qualitative examples to assess the interpretability potential of SR models for
SuR. Code at: https://github.com/lurovi/SurvivalMultiTree-pyNSGP.
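Fitness evaluation for a candidate symbolic risk expression typically relies on the concordance index; a plain O(n^2) implementation is short (this is the standard simplified C-index, not necessarily the paper's exact fitness function).

```python
import numpy as np

def concordance_index(time, event, risk):
    """Over comparable pairs (i had an event before j's observed time),
    count how often higher predicted risk matches the earlier event."""
    num, den = 0.0, 0
    n = len(time)
    for i in range(n):
        if not event[i]:
            continue                 # censored subjects can't anchor pairs
        for j in range(n):
            if time[j] > time[i]:
                den += 1
                num += (risk[i] > risk[j]) + 0.5 * (risk[i] == risk[j])
    return num / den if den else 0.5
```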
|
2504.05758 | Yiwei Zhang | Yujia Lou, Jie Liu, Yuan Sheng, Jiawei Wang, Yiwei Zhang, Yaokun Ren | Addressing Class Imbalance with Probabilistic Graphical Models and
Variational Inference | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study proposes a method for imbalanced data classification based on deep
probabilistic graphical models (DPGMs), addressing traditional methods'
insufficient ability to learn from minority-class samples. To address the
classification bias caused by class imbalance, we introduce
variational-inference-based probabilistic modeling, which enables the model to
adaptively adjust its representation of minority classes, combined with a
class-aware weight adjustment strategy to enhance the classifier's sensitivity
to minority classes. In addition, we employ an adversarial
learning mechanism to generate minority-class samples in the latent space so
that the model can better characterize the category boundary in the
high-dimensional feature space. The method is evaluated on the Kaggle
"Credit Card Fraud Detection" dataset and compared with a variety of advanced
imbalanced classification methods (such as GAN-based sampling, BRF,
XGBoost-Cost Sensitive, SAAD, HAN). The results show that the proposed method
achieves the best performance on AUC, precision, recall, and F1-score,
effectively improving the recognition rate of minority classes and
reducing the false alarm rate. This method can be widely used in imbalanced
classification tasks such as financial fraud detection, medical diagnosis, and
anomaly detection, providing a new solution for related research.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 07:38:30 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Lou",
"Yujia",
""
],
[
"Liu",
"Jie",
""
],
[
"Sheng",
"Yuan",
""
],
[
"Wang",
"Jiawei",
""
],
[
"Zhang",
"Yiwei",
""
],
[
"Ren",
"Yaokun",
""
]
] | TITLE: Addressing Class Imbalance with Probabilistic Graphical Models and
Variational Inference
ABSTRACT: This study proposes a method for imbalanced data classification based on deep
probabilistic graphical models (DPGMs) to solve the problem that traditional
methods have insufficient learning ability for minority class samples. To
address the classification bias caused by class imbalance, we introduce
variational inference optimization probability modeling, which enables the
model to adaptively adjust the representation ability of minority classes and
combines the class-aware weight adjustment strategy to enhance the classifier's
sensitivity to minority classes. In addition, we combine the adversarial
learning mechanism to generate minority class samples in the latent space so
that the model can better characterize the category boundary in the
high-dimensional feature space. The experiment is evaluated on the Kaggle
"Credit Card Fraud Detection" dataset and compared with a variety of advanced
imbalanced classification methods (such as GAN-based sampling, BRF,
XGBoost-Cost Sensitive, SAAD, HAN). The results show that the method in this
study achieves the best performance on the AUC, Precision, Recall, and F1-score
metrics, effectively improving the recognition rate of minority classes and
reducing the false alarm rate. This method can be widely used in imbalanced
classification tasks such as financial fraud detection, medical diagnosis, and
anomaly detection, providing a new solution for related research.
|
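The class-aware weight adjustment described above is a standard ingredient that can be sketched independently of the paper's DPGM. Below is a minimal illustration in PyTorch using inverse-frequency weights; the weighting rule and the toy data are assumptions, not the authors' exact scheme.

```python
import torch
import torch.nn.functional as F

def class_aware_weights(labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    # Inverse-frequency weights: rarer classes receive larger weights.
    counts = torch.bincount(labels, minlength=num_classes).clamp(min=1).float()
    return counts.sum() / (num_classes * counts)

# Toy imbalanced batch: class 1 is the minority.
labels = torch.tensor([0] * 95 + [1] * 5)
logits = torch.randn(100, 2)
w = class_aware_weights(labels, num_classes=2)
loss = F.cross_entropy(logits, labels, weight=w)
print(w, loss.item())  # minority class weighted ~19x more heavily
```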
2504.05764 | Jiho Gwak | Jiho Gwak and Yuchul Jung | Layer-Aware Embedding Fusion for LLMs in Text Classifications | 11 pages, 3 figures, Preprint | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Embedding fusion has emerged as an effective approach for enhancing
performance across various NLP tasks. However, systematic guidelines for
selecting optimal layers and developing effective fusion strategies for the
integration of LLMs remain underexplored. In this study, we propose a
layer-aware embedding selection method and investigate how to quantitatively
evaluate different layers to identify the most important ones for downstream
NLP tasks, showing that the critical layers vary depending on the dataset. We
also explore how combining embeddings from multiple LLMs, without requiring
model fine-tuning, can improve performance. Experiments on four English text
classification datasets (SST-2, MR, R8, and R52) demonstrate that different
layers in LLMs exhibit varying degrees of representational strength for
classification, and that combining embeddings from different models can enhance
performance if the models exhibit complementary characteristics. Additionally,
we discuss resource overhead (memory and inference time) to provide a balanced
perspective on the real-world feasibility of embedding fusion. Future work will
explore multilingual and domain-specific datasets, as well as techniques for
automating layer selection, to improve both performance and scalability.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 07:45:50 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Gwak",
"Jiho",
""
],
[
"Jung",
"Yuchul",
""
]
] | TITLE: Layer-Aware Embedding Fusion for LLMs in Text Classifications
ABSTRACT: Embedding fusion has emerged as an effective approach for enhancing
performance across various NLP tasks. However, systematic guidelines for
selecting optimal layers and developing effective fusion strategies for the
integration of LLMs remain underexplored. In this study, we propose a
layer-aware embedding selection method and investigate how to quantitatively
evaluate different layers to identify the most important ones for downstream
NLP tasks, showing that the critical layers vary depending on the dataset. We
also explore how combining embeddings from multiple LLMs, without requiring
model fine-tuning, can improve performance. Experiments on four English text
classification datasets (SST-2, MR, R8, and R52) demonstrate that different
layers in LLMs exhibit varying degrees of representational strength for
classification, and that combining embeddings from different models can enhance
performance if the models exhibit complementary characteristics. Additionally,
we discuss resource overhead (memory and inference time) to provide a balanced
perspective on the real-world feasibility of embedding fusion. Future work will
explore multilingual and domain-specific datasets, as well as techniques for
automating layer selection, to improve both performance and scalability.
|
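As a rough illustration of layer-aware embedding fusion, the sketch below mean-pools hidden states from a few transformer layers and concatenates them. The checkpoint (bert-base-uncased) and the layer indices are arbitrary assumptions; the paper selects layers quantitatively and may fuse embeddings across multiple LLMs.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased", output_hidden_states=True)

def fused_embedding(text: str, layers=(4, 8, 12)) -> torch.Tensor:
    inputs = tok(text, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**inputs).hidden_states  # tuple: embeddings + 12 layers
    # Mean-pool tokens per layer, then concatenate the selected layers.
    pooled = [hidden[i].mean(dim=1) for i in layers]
    return torch.cat(pooled, dim=-1)

print(fused_embedding("a fine example").shape)  # torch.Size([1, 2304])
```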
2504.05767 | Zhang Dong | Zhang Dong, Mingbang Wang, Songhang deng, Le Dai, Jiyuan Li, Xingzu
Liu, Ruilin Nong | Cross-Document Contextual Coreference Resolution in Knowledge Graphs | ACL 2025 Submission Version | null | null | null | cs.CL cs.MA | http://creativecommons.org/licenses/by/4.0/ | Coreference resolution across multiple documents poses a significant
challenge in natural language processing, particularly within the domain of
knowledge graphs. This study introduces an innovative method aimed at
identifying and resolving references to the same entities that appear across
differing texts, thus enhancing the coherence and integration of information.
Our method employs a dynamic linking mechanism that associates entities in the
knowledge graph with their corresponding textual mentions. By utilizing
contextual embeddings along with graph-based inference strategies, we
effectively capture the relationships and interactions among entities, thereby
improving the accuracy of coreference resolution. Rigorous evaluations on
various benchmark datasets highlight notable advancements in our approach over
traditional methodologies. The results showcase how the contextual information
derived from knowledge graphs enhances the understanding of complex
relationships across documents, leading to better entity linking and
information extraction capabilities in applications driven by knowledge. Our
technique demonstrates substantial improvements in both precision and recall,
underscoring its effectiveness in the area of cross-document coreference
resolution.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 07:47:07 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Dong",
"Zhang",
""
],
[
"Wang",
"Mingbang",
""
],
[
"deng",
"Songhang",
""
],
[
"Dai",
"Le",
""
],
[
"Li",
"Jiyuan",
""
],
[
"Liu",
"Xingzu",
""
],
[
"Nong",
"Ruilin",
""
]
] | TITLE: Cross-Document Contextual Coreference Resolution in Knowledge Graphs
ABSTRACT: Coreference resolution across multiple documents poses a significant
challenge in natural language processing, particularly within the domain of
knowledge graphs. This study introduces an innovative method aimed at
identifying and resolving references to the same entities that appear across
differing texts, thus enhancing the coherence and integration of information.
Our method employs a dynamic linking mechanism that associates entities in the
knowledge graph with their corresponding textual mentions. By utilizing
contextual embeddings along with graph-based inference strategies, we
effectively capture the relationships and interactions among entities, thereby
improving the accuracy of coreference resolution. Rigorous evaluations on
various benchmark datasets highlight notable advancements in our approach over
traditional methodologies. The results showcase how the contextual information
derived from knowledge graphs enhances the understanding of complex
relationships across documents, leading to better entity linking and
information extraction capabilities in applications driven by knowledge. Our
technique demonstrates substantial improvements in both precision and recall,
underscoring its effectiveness in the area of cross-document coreference
resolution.
|
2504.05768 | Mincheol Kim | Mincheol Kim, Soo-Yong Shin | Temporal Dynamic Embedding for Irregularly Sampled Time Series | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In several practical applications, particularly healthcare, clinical data of
each patient is individually recorded in a database at irregular intervals as
required. This produces sparse, irregularly sampled time series, which are
difficult to convert into the structured representation that neural network
models require as input. We therefore propose temporal dynamic embedding (TDE),
which enables neural network models to receive data whose set of variables
changes over time. TDE regards each time series variable as an embedding
vector evolving over time, instead of a conventional fixed structured
representation, which suffers from a critical missing-data problem. At each time
step, TDE selectively adopts and aggregates only the observed subset of
variables and represents the current status of the patient based on current
observations. Experiments were conducted on three clinical datasets:
PhysioNet 2012, MIMIC-III, and PhysioNet 2019. The TDE model performed
competitively or better than the imputation-based baseline and several recent
state-of-the-art methods with reduced training runtime.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 07:49:22 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Kim",
"Mincheol",
""
],
[
"Shin",
"Soo-Yong",
""
]
] | TITLE: Temporal Dynamic Embedding for Irregularly Sampled Time Series
ABSTRACT: In several practical applications, particularly healthcare, clinical data of
each patient is individually recorded in a database at irregular intervals as
required. This produces sparse, irregularly sampled time series, which are
difficult to convert into the structured representation that neural network
models require as input. We therefore propose temporal dynamic embedding (TDE),
which enables neural network models to receive data whose set of variables
changes over time. TDE regards each time series variable as an embedding
vector evolving over time, instead of a conventional fixed structured
representation, which suffers from a critical missing-data problem. At each time
step, TDE selectively adopts and aggregates only the observed subset of
variables and represents the current status of the patient based on current
observations. Experiments were conducted on three clinical datasets:
PhysioNet 2012, MIMIC-III, and PhysioNet 2019. The TDE model performed
competitively or better than the imputation-based baseline and several recent
state-of-the-art methods with reduced training runtime.
|
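The core TDE idea, embedding and aggregating only the variables observed at each time step, can be sketched as follows. This toy module is a deliberate simplification: the variable and value encodings and the mean aggregation stand in for whatever the paper actually uses.

```python
import torch
import torch.nn as nn

class TemporalDynamicEmbeddingSketch(nn.Module):
    """Embed only the variables observed at one time step, then aggregate."""
    def __init__(self, num_vars: int, dim: int):
        super().__init__()
        self.var_emb = nn.Embedding(num_vars, dim)  # one vector per variable
        self.val_proj = nn.Linear(1, dim)           # encode the measurement

    def forward(self, var_ids: torch.Tensor, values: torch.Tensor) -> torch.Tensor:
        # var_ids: (n_obs,) indices of variables observed at this step
        # values:  (n_obs,) their measured values
        e = self.var_emb(var_ids) + self.val_proj(values.unsqueeze(-1))
        return e.mean(dim=0)  # aggregate the observed subset only

tde = TemporalDynamicEmbeddingSketch(num_vars=20, dim=16)
state = tde(torch.tensor([3, 7]), torch.tensor([0.5, 1.2]))
print(state.shape)  # torch.Size([16]): patient state from 2 observed variables
```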
2504.05779 | Tao Lin | Tao Lin, Qingwang Wang, Qiwei Liang, Minghua Tang, Yuxuan Sun | FASR-Net: Unsupervised Shadow Removal Leveraging Inherent Frequency
Priors | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Shadow removal is challenging due to the complex interaction of geometry,
lighting, and environmental factors. Existing unsupervised methods often
overlook shadow-specific priors, leading to incomplete shadow recovery. To
address this issue, we propose a novel unsupervised Frequency Aware Shadow
Removal Network (FASR-Net), which leverages the inherent frequency
characteristics of shadow regions. Specifically, the proposed Wavelet Attention
Downsampling Module (WADM) integrates wavelet-based image decomposition and
deformable attention, effectively breaking down the image into frequency
components to enhance shadow details within specific frequency bands. We also
introduce several new loss functions for precise shadow-free image
reproduction: a frequency loss to capture image component details, a
brightness-chromaticity loss that references the chromaticity of shadow-free
regions, and an alignment loss to ensure smooth transitions between shadowed
and shadow-free regions. Experimental results on the AISTD and SRD datasets
demonstrate that our method achieves superior shadow removal performance.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 08:00:58 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Lin",
"Tao",
""
],
[
"Wang",
"Qingwang",
""
],
[
"Liang",
"Qiwei",
""
],
[
"Tang",
"Minghua",
""
],
[
"Sun",
"Yuxuan",
""
]
] | TITLE: FASR-Net: Unsupervised Shadow Removal Leveraging Inherent Frequency
Priors
ABSTRACT: Shadow removal is challenging due to the complex interaction of geometry,
lighting, and environmental factors. Existing unsupervised methods often
overlook shadow-specific priors, leading to incomplete shadow recovery. To
address this issue, we propose a novel unsupervised Frequency Aware Shadow
Removal Network (FASR-Net), which leverages the inherent frequency
characteristics of shadow regions. Specifically, the proposed Wavelet Attention
Downsampling Module (WADM) integrates wavelet-based image decomposition and
deformable attention, effectively breaking down the image into frequency
components to enhance shadow details within specific frequency bands. We also
introduce several new loss functions for precise shadow-free image
reproduction: a frequency loss to capture image component details, a
brightness-chromaticity loss that references the chromaticity of shadow-free
regions, and an alignment loss to ensure smooth transitions between shadowed
and shadow-free regions. Experimental results on the AISTD and SRD datasets
demonstrate that our method achieves superior shadow removal performance.
|
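The frequency decomposition underlying WADM can be illustrated with a plain 2D wavelet transform. The sketch below (using PyWavelets with a Haar basis, an arbitrary choice) only shows the band split and exact reconstruction; the deformable attention and shadow-specific processing are omitted.

```python
import numpy as np
import pywt  # PyWavelets

# Split an image into low- and high-frequency bands with a 2D Haar DWT.
img = np.random.rand(64, 64).astype(np.float32)
cA, (cH, cV, cD) = pywt.dwt2(img, "haar")
print(cA.shape, cH.shape)  # (32, 32) each: approximation + detail bands

# Band-specific shadow processing would operate here before inversion.
recon = pywt.idwt2((cA, (cH, cV, cD)), "haar")
print(np.allclose(recon, img, atol=1e-5))  # True: decomposition is lossless
```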
2504.05783 | Zijie Song | Zijie Song, Zhenzhen Hu, Yixiao Ma, Jia Li, Richang Hong | Video Flow as Time Series: Discovering Temporal Consistency and
Variability for VideoQA | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video Question Answering (VideoQA) is a complex video-language task that
demands a sophisticated understanding of both visual content and temporal
dynamics. Traditional Transformer-style architectures, while effective in
integrating multimodal data, often simplify temporal dynamics through
positional encoding and fail to capture non-linear interactions within video
sequences. In this paper, we introduce the Temporal Trio Transformer (T3T), a
novel architecture that models time consistency and time variability. The T3T
integrates three key components: Temporal Smoothing (TS), Temporal Difference
(TD), and Temporal Fusion (TF). The TS module employs a Brownian bridge to
capture smooth, continuous temporal transitions, while the TD module
identifies and encodes significant temporal variations and abrupt changes
within the video content. Subsequently, the TF module synthesizes these
temporal features with textual cues, facilitating a deeper contextual
understanding and response accuracy. The efficacy of the T3T is demonstrated
through extensive testing on multiple VideoQA benchmark datasets. Our results
underscore the importance of a nuanced approach to temporal modeling in
improving the accuracy and depth of video-based question answering.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 08:08:03 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Song",
"Zijie",
""
],
[
"Hu",
"Zhenzhen",
""
],
[
"Ma",
"Yixiao",
""
],
[
"Li",
"Jia",
""
],
[
"Hong",
"Richang",
""
]
] | TITLE: Video Flow as Time Series: Discovering Temporal Consistency and
Variability for VideoQA
ABSTRACT: Video Question Answering (VideoQA) is a complex video-language task that
demands a sophisticated understanding of both visual content and temporal
dynamics. Traditional Transformer-style architectures, while effective in
integrating multimodal data, often simplify temporal dynamics through
positional encoding and fail to capture non-linear interactions within video
sequences. In this paper, we introduce the Temporal Trio Transformer (T3T), a
novel architecture that models time consistency and time variability. The T3T
integrates three key components: Temporal Smoothing (TS), Temporal Difference
(TD), and Temporal Fusion (TF). The TS module employs a Brownian bridge to
capture smooth, continuous temporal transitions, while the TD module
identifies and encodes significant temporal variations and abrupt changes
within the video content. Subsequently, the TF module synthesizes these
temporal features with textual cues, facilitating a deeper contextual
understanding and response accuracy. The efficacy of the T3T is demonstrated
through extensive testing on multiple VideoQA benchmark datasets. Our results
underscore the importance of a nuanced approach to temporal modeling in
improving the accuracy and depth of video-based question answering.
|
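To make the Brownian-bridge smoothing concrete, here is a toy sampler that interpolates between two frame-feature vectors with bridge-like dynamics. The discretization and noise scale are illustrative assumptions, not the TS module's actual formulation.

```python
import numpy as np

def brownian_bridge(x0: np.ndarray, x1: np.ndarray, steps: int, sigma: float = 0.1):
    """Sample a bridge-like path between two feature vectors (toy sketch)."""
    d = x0.shape[0]
    path = [x0]
    for k in range(1, steps):
        x = path[-1]
        # Drift pulls toward the endpoint; noise keeps the path stochastic.
        drift = (x1 - x) / (steps - k + 1)
        noise = sigma * np.sqrt(1.0 / steps) * np.random.randn(d)
        path.append(x + drift + noise)
    path.append(x1)  # pin the bridge to its endpoint
    return np.stack(path)

traj = brownian_bridge(np.zeros(8), np.ones(8), steps=10)
print(traj.shape)  # (11, 8): smooth transition between endpoint features
```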
2504.05786 | Jirong Zha | Jirong Zha, Yuxuan Fan, Xiao Yang, Chen Gao, Xinlei Chen | How to Enable LLM with 3D Capacity? A Survey of Spatial Reasoning in LLM | 9 pages, 5 figures | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D spatial understanding is essential in real-world applications such as
robotics, autonomous vehicles, virtual reality, and medical imaging. Recently,
Large Language Models (LLMs), having demonstrated remarkable success across
various domains, have been leveraged to enhance 3D understanding tasks, showing
potential to surpass traditional computer vision methods. In this survey, we
present a comprehensive review of methods integrating LLMs with 3D spatial
understanding. We propose a taxonomy that categorizes existing methods into
three branches: image-based methods deriving 3D understanding from 2D visual
data, point cloud-based methods working directly with 3D representations, and
hybrid modality-based methods combining multiple data streams. We
systematically review representative methods along these categories, covering
data representations, architectural modifications, and training strategies that
bridge textual and 3D modalities. Finally, we discuss current limitations,
including dataset scarcity and computational challenges, while highlighting
promising research directions in spatial perception, multi-modal fusion, and
real-world applications.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 08:11:39 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Zha",
"Jirong",
""
],
[
"Fan",
"Yuxuan",
""
],
[
"Yang",
"Xiao",
""
],
[
"Gao",
"Chen",
""
],
[
"Chen",
"Xinlei",
""
]
] | TITLE: How to Enable LLM with 3D Capacity? A Survey of Spatial Reasoning in LLM
ABSTRACT: 3D spatial understanding is essential in real-world applications such as
robotics, autonomous vehicles, virtual reality, and medical imaging. Recently,
Large Language Models (LLMs), having demonstrated remarkable success across
various domains, have been leveraged to enhance 3D understanding tasks, showing
potential to surpass traditional computer vision methods. In this survey, we
present a comprehensive review of methods integrating LLMs with 3D spatial
understanding. We propose a taxonomy that categorizes existing methods into
three branches: image-based methods deriving 3D understanding from 2D visual
data, point cloud-based methods working directly with 3D representations, and
hybrid modality-based methods combining multiple data streams. We
systematically review representative methods along these categories, covering
data representations, architectural modifications, and training strategies that
bridge textual and 3D modalities. Finally, we discuss current limitations,
including dataset scarcity and computational challenges, while highlighting
promising research directions in spatial perception, multi-modal fusion, and
real-world applications.
|
2504.05789 | Sarosij Bose | Sarosij Bose, Hannah Dela Cruz, Arindam Dutta, Elena Kokkoni,
Konstantinos Karydis, Amit K. Roy-Chowdhury | Leveraging Synthetic Adult Datasets for Unsupervised Infant Pose
Estimation | Accepted at ABAW@CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Human pose estimation is a critical tool across a variety of healthcare
applications. Despite significant progress in pose estimation algorithms
targeting adults, such developments for infants remain limited. Existing
algorithms for infant pose estimation, despite achieving commendable
performance, depend on fully supervised approaches that require large amounts
of labeled data. These algorithms also struggle with poor generalizability
under distribution shifts. To address these challenges, we introduce SHIFT:
Leveraging SyntHetic Adult Datasets for Unsupervised InFanT Pose Estimation,
which leverages the pseudo-labeling-based Mean-Teacher framework to compensate
for the lack of labeled data and addresses distribution shifts by enforcing
consistency between the student and the teacher pseudo-labels. Additionally, to
penalize implausible predictions obtained from the mean-teacher framework, we
incorporate an infant manifold pose prior. To enhance SHIFT's self-occlusion
perception ability, we propose a novel visibility consistency module for
improved alignment of the predicted poses with the original image. Extensive
experiments on multiple benchmarks show that SHIFT significantly outperforms
existing state-of-the-art unsupervised domain adaptation (UDA) pose estimation
methods by 5% and supervised infant pose estimation methods by a margin of 16%.
The project page is available at: https://sarosijbose.github.io/SHIFT.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 08:13:38 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Bose",
"Sarosij",
""
],
[
"Cruz",
"Hannah Dela",
""
],
[
"Dutta",
"Arindam",
""
],
[
"Kokkoni",
"Elena",
""
],
[
"Karydis",
"Konstantinos",
""
],
[
"Roy-Chowdhury",
"Amit K.",
""
]
] | TITLE: Leveraging Synthetic Adult Datasets for Unsupervised Infant Pose
Estimation
ABSTRACT: Human pose estimation is a critical tool across a variety of healthcare
applications. Despite significant progress in pose estimation algorithms
targeting adults, such developments for infants remain limited. Existing
algorithms for infant pose estimation, despite achieving commendable
performance, depend on fully supervised approaches that require large amounts
of labeled data. These algorithms also struggle with poor generalizability
under distribution shifts. To address these challenges, we introduce SHIFT:
Leveraging SyntHetic Adult Datasets for Unsupervised InFanT Pose Estimation,
which leverages the pseudo-labeling-based Mean-Teacher framework to compensate
for the lack of labeled data and addresses distribution shifts by enforcing
consistency between the student and the teacher pseudo-labels. Additionally, to
penalize implausible predictions obtained from the mean-teacher framework, we
incorporate an infant manifold pose prior. To enhance SHIFT's self-occlusion
perception ability, we propose a novel visibility consistency module for
improved alignment of the predicted poses with the original image. Extensive
experiments on multiple benchmarks show that SHIFT significantly outperforms
existing state-of-the-art unsupervised domain adaptation (UDA) pose estimation
methods by 5% and supervised infant pose estimation methods by a margin of 16%.
The project page is available at: https://sarosijbose.github.io/SHIFT.
|
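SHIFT builds on the Mean-Teacher framework, whose mechanics can be sketched generically: an EMA teacher provides pseudo-labels and the student is trained for consistency with them. The linear model and loss below are placeholders; the pose-specific heads, manifold prior, and visibility module are omitted.

```python
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

student = nn.Linear(10, 4)
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)  # teacher is updated by EMA, not by gradients

def ema_update(teacher, student, momentum=0.99):
    for tp, sp in zip(teacher.parameters(), student.parameters()):
        tp.mul_(momentum).add_(sp.detach(), alpha=1 - momentum)

x = torch.randn(8, 10)
consistency = F.mse_loss(student(x), teacher(x))  # match teacher pseudo-labels
consistency.backward()
with torch.no_grad():
    ema_update(teacher, student)
print(consistency.item())
```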
2504.05805 | Seongmin Park | Seongmin Park, Mincheol Yoon, Hye-young Kim, Jongwuk Lee | Why is Normalization Necessary for Linear Recommenders? | Accepted by SIGIR 2025 | null | null | null | cs.IR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Despite their simplicity, linear autoencoder (LAE)-based models have shown
comparable or even better performance with faster inference speed than neural
recommender models. However, LAEs face two critical challenges: (i) popularity
bias, which tends to recommend popular items, and (ii) neighborhood bias, which
overly focuses on capturing local item correlations. To address these issues,
this paper first analyzes the effect of two existing normalization methods for
LAEs, i.e., random-walk and symmetric normalization. Our theoretical analysis
reveals that normalization highly affects the degree of popularity and
neighborhood biases among items. Inspired by this analysis, we propose a
versatile normalization solution, called Data-Adaptive Normalization (DAN),
which flexibly controls the popularity and neighborhood biases by adjusting
item- and user-side normalization to align with unique dataset characteristics.
Owing to its model-agnostic property, DAN can be easily applied to various
LAE-based models. Experimental results show that DAN-equipped LAEs consistently
improve existing LAE-based models across six benchmark datasets, with
significant gains of up to 128.57% and 12.36% for long-tail items and unbiased
evaluations, respectively. Our code is available at https://github.com/psm1206/DAN.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 08:37:32 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Park",
"Seongmin",
""
],
[
"Yoon",
"Mincheol",
""
],
[
"Kim",
"Hye-young",
""
],
[
"Lee",
"Jongwuk",
""
]
] | TITLE: Why is Normalization Necessary for Linear Recommenders?
ABSTRACT: Despite their simplicity, linear autoencoder (LAE)-based models have shown
comparable or even better performance with faster inference speed than neural
recommender models. However, LAEs face two critical challenges: (i) popularity
bias, which tends to recommend popular items, and (ii) neighborhood bias, which
overly focuses on capturing local item correlations. To address these issues,
this paper first analyzes the effect of two existing normalization methods for
LAEs, i.e., random-walk and symmetric normalization. Our theoretical analysis
reveals that normalization highly affects the degree of popularity and
neighborhood biases among items. Inspired by this analysis, we propose a
versatile normalization solution, called Data-Adaptive Normalization (DAN),
which flexibly controls the popularity and neighborhood biases by adjusting
item- and user-side normalization to align with unique dataset characteristics.
Owing to its model-agnostic property, DAN can be easily applied to various
LAE-based models. Experimental results show that DAN-equipped LAEs consistently
improve existing LAE-based models across six benchmark datasets, with
significant gains of up to 128.57% and 12.36% for long-tail items and unbiased
evaluations, respectively. Our code is available at https://github.com/psm1206/DAN.
|
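The two normalizations analyzed in the paper, and a DAN-style interpolation between them, can be written down directly for a toy item co-occurrence matrix. The exponent `alpha` and the Gram-matrix setup below are illustrative assumptions.

```python
import numpy as np

X = (np.random.rand(100, 12) < 0.2).astype(np.float32)  # users x items
G = X.T @ X                                              # item co-occurrence
d = np.maximum(G.sum(axis=1), 1.0)                       # item degrees

rw  = G / d[:, None]                          # random-walk:  D^-1 G
sym = G / np.sqrt(d[:, None] * d[None, :])    # symmetric:    D^-1/2 G D^-1/2

alpha = 0.5  # a data-adaptive choice in the spirit of DAN
dan = G / (d[:, None] ** alpha * d[None, :] ** (1 - alpha))
print(rw.shape, sym.shape, dan.shape)  # (12, 12) each
```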
2504.05806 | Seungyoon Woo | Seungyoon Woo, Junhyeog Yun, Gunhee Kim | Meta-Continual Learning of Neural Fields | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Neural Fields (NF) have gained prominence as a versatile framework for
complex data representation. This work unveils a new problem setting termed
\emph{Meta-Continual Learning of Neural Fields} (MCL-NF) and introduces a novel
strategy that employs a modular architecture combined with optimization-based
meta-learning. Focused on overcoming the limitations of existing methods for
continual learning of neural fields, such as catastrophic forgetting and slow
convergence, our strategy achieves high-quality reconstruction with
significantly improved learning speed. We further introduce Fisher Information
Maximization loss for neural radiance fields (FIM-NeRF), which maximizes
information gains at the sample level to enhance learning generalization, with
proved convergence guarantee and generalization bound. We perform extensive
evaluations across image, audio, video reconstruction, and view synthesis tasks
on six diverse datasets, demonstrating our method's superiority in
reconstruction quality and speed over existing MCL and CL-NF approaches.
Notably, our approach attains rapid adaptation of neural fields for city-scale
NeRF rendering with reduced parameter requirements.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 08:38:37 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Woo",
"Seungyoon",
""
],
[
"Yun",
"Junhyeog",
""
],
[
"Kim",
"Gunhee",
""
]
] | TITLE: Meta-Continual Learning of Neural Fields
ABSTRACT: Neural Fields (NF) have gained prominence as a versatile framework for
complex data representation. This work unveils a new problem setting termed
\emph{Meta-Continual Learning of Neural Fields} (MCL-NF) and introduces a novel
strategy that employs a modular architecture combined with optimization-based
meta-learning. Focused on overcoming the limitations of existing methods for
continual learning of neural fields, such as catastrophic forgetting and slow
convergence, our strategy achieves high-quality reconstruction with
significantly improved learning speed. We further introduce Fisher Information
Maximization loss for neural radiance fields (FIM-NeRF), which maximizes
information gains at the sample level to enhance learning generalization, with
a proven convergence guarantee and generalization bound. We perform extensive
evaluations across image, audio, video reconstruction, and view synthesis tasks
on six diverse datasets, demonstrating our method's superiority in
reconstruction quality and speed over existing MCL and CL-NF approaches.
Notably, our approach attains rapid adaptation of neural fields for city-scale
NeRF rendering with reduced parameter requirements.
|
2504.05808 | Pawel Pieta | Pawel Tomasz Pieta, Peter Winkel Rasumssen, Anders Bjorholm Dahl,
Anders Nymark Christensen | Fast Sphericity and Roundness approximation in 2D and 3D using Local
Thickness | Accepted at CVMI (CVPR 2025 Workshop) | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Sphericity and roundness are fundamental measures used for assessing object
uniformity in 2D and 3D images. However, using their strict definition makes
computation costly. As both 2D and 3D microscopy imaging datasets grow larger,
there is an increased demand for efficient algorithms that can quantify
multiple objects in large volumes. We propose a novel approach for extracting
sphericity and roundness based on the output of a local thickness algorithm.
For sphericity, we simplify the surface area computation by modeling objects as
spheroids/ellipses of varying lengths and widths of mean local thickness. For
roundness, we avoid a complex corner curvature determination process by
approximating it with local thickness values on the contour/surface of the
object. The resulting methods provide an accurate representation of the exact
measures while being significantly faster than their existing implementations.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 08:40:50 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Pieta",
"Pawel Tomasz",
""
],
[
"Rasumssen",
"Peter Winkel",
""
],
[
"Dahl",
"Anders Bjorholm",
""
],
[
"Christensen",
"Anders Nymark",
""
]
] | TITLE: Fast Sphericity and Roundness approximation in 2D and 3D using Local
Thickness
ABSTRACT: Sphericity and roundness are fundamental measures used for assessing object
uniformity in 2D and 3D images. However, using their strict definition makes
computation costly. As both 2D and 3D microscopy imaging datasets grow larger,
there is an increased demand for efficient algorithms that can quantify
multiple objects in large volumes. We propose a novel approach for extracting
sphericity and roundness based on the output of a local thickness algorithm.
For sphericity, we simplify the surface area computation by modeling objects as
spheroids/ellipses of varying lengths and widths of mean local thickness. For
roundness, we avoid a complex corner curvature determination process by
approximating it with local thickness values on the contour/surface of the
object. The resulting methods provide an accurate representation of the exact
measures while being significantly faster than their existing implementations.
|
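For orientation, the quantity being approximated in 2D is the isoperimetric circularity 4*pi*A / P^2, which equals 1 for a perfect disk. The sketch below computes this exact baseline with scikit-image; the paper's contribution is replacing such costly exact computations with local-thickness approximations.

```python
import numpy as np
from skimage.measure import label, regionprops

yy, xx = np.mgrid[:128, :128]
disk = ((xx - 64) ** 2 + (yy - 64) ** 2) <= 40 ** 2
ellipse = (((xx - 64) / 50.0) ** 2 + ((yy - 64) / 20.0) ** 2) <= 1.0

for name, m in [("disk", disk), ("ellipse", ellipse)]:
    props = regionprops(label(m.astype(int)))[0]
    circ = 4 * np.pi * props.area / props.perimeter ** 2
    print(name, round(circ, 3))  # disk close to 1, ellipse noticeably lower
```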
2504.05822 | Alessio Mora | Alessio Mora, Carlo Mazzocca, Rebecca Montanari, Paolo Bellavista | Federated Unlearning Made Practical: Seamless Integration via Negated
Pseudo-Gradients | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | The right to be forgotten is a fundamental principle of privacy-preserving
regulations and extends to Machine Learning (ML) paradigms such as Federated
Learning (FL). While FL enhances privacy by enabling collaborative model
training without sharing private data, trained models still retain the
influence of training data. Federated Unlearning (FU) methods recently proposed
often rely on impractical assumptions for real-world FL deployments, such as
storing client update histories or requiring access to a publicly available
dataset. To address these constraints, this paper introduces a novel method
that leverages negated Pseudo-gradients Updates for Federated Unlearning (PUF).
Our approach uses only standard client model updates, which are produced anyway
during regular FL rounds, and interprets them as pseudo-gradients. When a client
needs to be forgotten, we apply the negation of its pseudo-gradients,
appropriately scaled, to the global model. Unlike state-of-the-art mechanisms, PUF seamlessly
integrates with FL workflows, incurs no additional computational and
communication overhead beyond standard FL rounds, and supports concurrent
unlearning requests. We extensively evaluated the proposed method on two
well-known benchmark image classification datasets (CIFAR-10 and CIFAR-100) and
a real-world medical imaging dataset for segmentation (ProstateMRI), using
three different neural architectures: two residual networks and a vision
transformer. The experimental results across various settings demonstrate that
PUF achieves state-of-the-art forgetting effectiveness and recovery time,
without relying on any additional assumptions, thus underscoring its practical
applicability.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 09:05:33 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Mora",
"Alessio",
""
],
[
"Mazzocca",
"Carlo",
""
],
[
"Montanari",
"Rebecca",
""
],
[
"Bellavista",
"Paolo",
""
]
] | TITLE: Federated Unlearning Made Practical: Seamless Integration via Negated
Pseudo-Gradients
ABSTRACT: The right to be forgotten is a fundamental principle of privacy-preserving
regulations and extends to Machine Learning (ML) paradigms such as Federated
Learning (FL). While FL enhances privacy by enabling collaborative model
training without sharing private data, trained models still retain the
influence of training data. Federated Unlearning (FU) methods recently proposed
often rely on impractical assumptions for real-world FL deployments, such as
storing client update histories or requiring access to a publicly available
dataset. To address these constraints, this paper introduces a novel method
that leverages negated Pseudo-gradients Updates for Federated Unlearning (PUF).
Our approach uses only standard client model updates, which are produced anyway
during regular FL rounds, and interprets them as pseudo-gradients. When a client
needs to be forgotten, we apply the negation of its pseudo-gradients,
appropriately scaled, to the global model. Unlike state-of-the-art mechanisms, PUF seamlessly
integrates with FL workflows, incurs no additional computational and
communication overhead beyond standard FL rounds, and supports concurrent
unlearning requests. We extensively evaluated the proposed method on two
well-known benchmark image classification datasets (CIFAR-10 and CIFAR-100) and
a real-world medical imaging dataset for segmentation (ProstateMRI), using
three different neural architectures: two residual networks and a vision
transformer. The experimental results across various settings demonstrate that
PUF achieves state-of-the-art forgetting effectiveness and recovery time,
without relying on any additional assumptions, thus underscoring its practical
applicability.
|
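The PUF update itself is simple enough to sketch directly: treat the forgotten client's accumulated update as a pseudo-gradient and apply its negation, scaled, to the global weights. The single-layer state dict and the scale value below are illustrative assumptions.

```python
import torch

global_w = {"fc.weight": torch.randn(4, 4)}
client_update = {"fc.weight": 0.01 * torch.randn(4, 4)}  # client's pseudo-gradient

def unlearn(weights, update, scale=1.0):
    # w <- w + scale * (-update): remove the client's contribution.
    return {k: w + scale * (-update[k]) for k, w in weights.items()}

new_w = unlearn(global_w, client_update, scale=0.5)
print(torch.allclose(new_w["fc.weight"],
                     global_w["fc.weight"] - 0.5 * client_update["fc.weight"]))  # True
```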
2504.05824 | Zhang Dong | Zhang Dong, Songhang deng, Mingbang Wang, Le Dai, Jiyuan Li, Xingzu
Liu, Ruilin Nong | End-to-End Dialog Neural Coreference Resolution: Balancing Efficiency
and Accuracy in Large-Scale Systems | submission of acl 2025 | null | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | Large-scale coreference resolution presents a significant challenge in
natural language processing, necessitating a balance between efficiency and
accuracy. In response to this challenge, we introduce an End-to-End Neural
Coreference Resolution system tailored for large-scale applications. Our system
efficiently identifies and resolves coreference links in text, ensuring minimal
computational overhead without compromising on performance. By utilizing
advanced neural network architectures, we incorporate various contextual
embeddings and attention mechanisms, which enhance the quality of predictions
for coreference pairs. Furthermore, we apply optimization strategies to
accelerate processing speeds, making the system suitable for real-world
deployment. Extensive evaluations conducted on benchmark datasets demonstrate
that our model achieves improved accuracy compared to existing approaches,
while effectively maintaining rapid inference times. Rigorous testing confirms
the ability of our system to deliver precise coreference resolutions
efficiently, thereby establishing a benchmark for future advancements in this
field.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 09:06:52 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Dong",
"Zhang",
""
],
[
"deng",
"Songhang",
""
],
[
"Wang",
"Mingbang",
""
],
[
"Dai",
"Le",
""
],
[
"Li",
"Jiyuan",
""
],
[
"Liu",
"Xingzu",
""
],
[
"Nong",
"Ruilin",
""
]
] | TITLE: End-to-End Dialog Neural Coreference Resolution: Balancing Efficiency
and Accuracy in Large-Scale Systems
ABSTRACT: Large-scale coreference resolution presents a significant challenge in
natural language processing, necessitating a balance between efficiency and
accuracy. In response to this challenge, we introduce an End-to-End Neural
Coreference Resolution system tailored for large-scale applications. Our system
efficiently identifies and resolves coreference links in text, ensuring minimal
computational overhead without compromising on performance. By utilizing
advanced neural network architectures, we incorporate various contextual
embeddings and attention mechanisms, which enhance the quality of predictions
for coreference pairs. Furthermore, we apply optimization strategies to
accelerate processing speeds, making the system suitable for real-world
deployment. Extensive evaluations conducted on benchmark datasets demonstrate
that our model achieves improved accuracy compared to existing approaches,
while effectively maintaining rapid inference times. Rigorous testing confirms
the ability of our system to deliver precise coreference resolutions
efficiently, thereby establishing a benchmark for future advancements in this
field.
|
2504.05830 | Xiao Wang | Shiao Wang, Xiao Wang, Bo Jiang, Lin Zhu, Guoqi Li, Yaowei Wang,
Yonghong Tian, and Jin Tang | Human Activity Recognition using RGB-Event based Sensors: A Multi-modal
Heat Conduction Model and A Benchmark Dataset | Journal Extension of HARDVS (AAAI 2024) | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human Activity Recognition (HAR) has primarily relied on traditional RGB cameras
to achieve high-performance activity recognition. However, the challenging
factors in real-world scenarios, such as insufficient lighting and rapid
movements, inevitably degrade the performance of RGB cameras. To address these
challenges, biologically inspired event cameras offer a promising solution to
overcome the limitations of traditional RGB cameras. In this work, we rethink
human activity recognition by combining the RGB and event cameras. The first
contribution is the proposed large-scale multi-modal RGB-Event human activity
recognition benchmark dataset, termed HARDVS 2.0, which bridges the dataset
gaps. It contains 300 categories of everyday real-world actions with a total of
107,646 paired videos covering various challenging scenarios. Inspired by the
physics-informed heat conduction model, we propose a novel multi-modal heat
conduction operation framework for effective activity recognition, termed
MMHCO-HAR. In more detail, given the RGB frames and event streams, we first
extract the feature embeddings using a stem network. Then, multi-modal Heat
Conduction blocks are designed to fuse the dual features, the key module of
which is the multi-modal Heat Conduction Operation layer. We integrate RGB and
event embeddings through a multi-modal DCT-IDCT layer while adaptively
incorporating the thermal conductivity coefficient via FVEs into this module.
After that, we propose an adaptive fusion module based on a policy routing
strategy for high-performance classification. Comprehensive experiments
demonstrate that our method consistently performs well, validating its
effectiveness and robustness. The source code and benchmark dataset will be
released on https://github.com/Event-AHU/HARDVS/tree/HARDVSv2
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 09:14:24 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Wang",
"Shiao",
""
],
[
"Wang",
"Xiao",
""
],
[
"Jiang",
"Bo",
""
],
[
"Zhu",
"Lin",
""
],
[
"Li",
"Guoqi",
""
],
[
"Wang",
"Yaowei",
""
],
[
"Tian",
"Yonghong",
""
],
[
"Tang",
"Jin",
""
]
] | TITLE: Human Activity Recognition using RGB-Event based Sensors: A Multi-modal
Heat Conduction Model and A Benchmark Dataset
ABSTRACT: Human Activity Recognition (HAR) has primarily relied on traditional RGB cameras
to achieve high-performance activity recognition. However, the challenging
factors in real-world scenarios, such as insufficient lighting and rapid
movements, inevitably degrade the performance of RGB cameras. To address these
challenges, biologically inspired event cameras offer a promising solution to
overcome the limitations of traditional RGB cameras. In this work, we rethink
human activity recognition by combining the RGB and event cameras. The first
contribution is the proposed large-scale multi-modal RGB-Event human activity
recognition benchmark dataset, termed HARDVS 2.0, which bridges the dataset
gaps. It contains 300 categories of everyday real-world actions with a total of
107,646 paired videos covering various challenging scenarios. Inspired by the
physics-informed heat conduction model, we propose a novel multi-modal heat
conduction operation framework for effective activity recognition, termed
MMHCO-HAR. In more detail, given the RGB frames and event streams, we first
extract the feature embeddings using a stem network. Then, multi-modal Heat
Conduction blocks are designed to fuse the dual features, the key module of
which is the multi-modal Heat Conduction Operation layer. We integrate RGB and
event embeddings through a multi-modal DCT-IDCT layer while adaptively
incorporating the thermal conductivity coefficient via FVEs into this module.
After that, we propose an adaptive fusion module based on a policy routing
strategy for high-performance classification. Comprehensive experiments
demonstrate that our method consistently performs well, validating its
effectiveness and robustness. The source code and benchmark dataset will be
released on https://github.com/Event-AHU/HARDVS/tree/HARDVSv2
|
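A rough sketch of the DCT-IDCT fusion step: transform RGB and event feature maps into the frequency domain, mix them with a conductivity-like coefficient, and transform back. The coefficient `k` and the linear mixing rule are assumptions; the paper derives the coefficient adaptively via FVEs.

```python
import numpy as np
from scipy.fft import dctn, idctn

rgb = np.random.randn(16, 16)  # toy RGB feature map
evt = np.random.randn(16, 16)  # toy event feature map
k = 0.3                        # stand-in thermal conductivity coefficient

# Mix the two modalities in the DCT frequency domain, then invert.
fused = idctn(k * dctn(rgb, norm="ortho") + (1 - k) * dctn(evt, norm="ortho"),
              norm="ortho")
print(fused.shape)  # (16, 16)
```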
2504.05833 | Wenyu Wang | Wenyu Wang, Yiquan Zhou, Jihua Zhu, Hongwu Ding, Jiacheng Xu, Shihao
Li | AVENet: Disentangling Features by Approximating Average Features for
Voice Conversion | Accepted by ICME 2025 | null | null | null | cs.SD | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Voice conversion (VC) has made progress in feature disentanglement, but it is
still difficult to balance timbre and content information. This paper evaluates
the pre-trained model features commonly used in voice conversion, and proposes
an innovative method for disentangling speech feature representations.
Specifically, we first propose an ideal content feature, referred to as the
average feature, which is calculated by averaging the features within
frame-level aligned parallel speech (FAPS) data. For generating FAPS data, we
utilize a technique that involves freezing the duration predictor in a
Text-to-Speech system and manipulating speaker embedding. To fit the average
feature on traditional VC datasets, we then design the AVENet to take features
as input and generate closely matching average features. Experiments are
conducted on the performance of AVENet-extracted features within a VC system.
The experimental results demonstrate its superiority over multiple current
speech feature disentangling methods. These findings affirm the effectiveness
of our disentanglement approach.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 09:16:32 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Wang",
"Wenyu",
""
],
[
"Zhou",
"Yiquan",
""
],
[
"Zhu",
"Jihua",
""
],
[
"Ding",
"Hongwu",
""
],
[
"Xu",
"Jiacheng",
""
],
[
"Li",
"Shihao",
""
]
] | TITLE: AVENet: Disentangling Features by Approximating Average Features for
Voice Conversion
ABSTRACT: Voice conversion (VC) has made progress in feature disentanglement, but it is
still difficult to balance timbre and content information. This paper evaluates
the pre-trained model features commonly used in voice conversion, and proposes
an innovative method for disentangling speech feature representations.
Specifically, we first propose an ideal content feature, referred to as the
average feature, which is calculated by averaging the features within
frame-level aligned parallel speech (FAPS) data. For generating FAPS data, we
utilize a technique that involves freezing the duration predictor in a
Text-to-Speech system and manipulating speaker embedding. To fit the average
feature on traditional VC datasets, we then design the AVENet to take features
as input and generate closely matching average features. Experiments
evaluate the performance of AVENet-extracted features within a VC system.
The experimental results demonstrate its superiority over multiple current
speech feature disentangling methods. These findings affirm the effectiveness
of our disentanglement approach.
|
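The average-feature target is straightforward to sketch: given frame-level aligned parallel speech from several speakers, averaging per-frame features washes out speaker-specific timbre while preserving content. The shapes below are illustrative.

```python
import numpy as np

# Frame-level aligned parallel speech: same text and durations, 5 speakers.
num_speakers, frames, dim = 5, 120, 256
feats = np.random.randn(num_speakers, frames, dim).astype(np.float32)

average_feature = feats.mean(axis=0)  # (frames, dim): content-dominant target
print(average_feature.shape)
# AVENet would then be trained to map any single speaker's features
# feats[i] to this average_feature.
```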
2504.05844 | Tianyi Jiang | Tianyi Jiang, Zeyu Wang, Shanqing Yu, Qi Xuan | Adaptive Substructure-Aware Expert Model for Molecular Property
Prediction | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Molecular property prediction is essential for applications such as drug
discovery and toxicity assessment. While Graph Neural Networks (GNNs) have
shown promising results by modeling molecules as molecular graphs, their
reliance on data-driven learning limits their ability to generalize,
particularly in the presence of data imbalance and diverse molecular
substructures. Existing methods often overlook the varying contributions of
different substructures to molecular properties, treating them uniformly. To
address these challenges, we propose ASE-Mol, a novel GNN-based framework that
leverages a Mixture-of-Experts (MoE) approach for molecular property
prediction. ASE-Mol incorporates BRICS decomposition and significant
substructure awareness to dynamically identify positive and negative
substructures. By integrating a MoE architecture, it reduces the adverse impact
of negative motifs while improving adaptability to positive motifs.
Experimental results on eight benchmark datasets demonstrate that ASE-Mol
achieves state-of-the-art performance, with significant improvements in both
accuracy and interpretability.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 09:25:03 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Jiang",
"Tianyi",
""
],
[
"Wang",
"Zeyu",
""
],
[
"Yu",
"Shanqing",
""
],
[
"Xuan",
"Qi",
""
]
] | TITLE: Adaptive Substructure-Aware Expert Model for Molecular Property
Prediction
ABSTRACT: Molecular property prediction is essential for applications such as drug
discovery and toxicity assessment. While Graph Neural Networks (GNNs) have
shown promising results by modeling molecules as molecular graphs, their
reliance on data-driven learning limits their ability to generalize,
particularly in the presence of data imbalance and diverse molecular
substructures. Existing methods often overlook the varying contributions of
different substructures to molecular properties, treating them uniformly. To
address these challenges, we propose ASE-Mol, a novel GNN-based framework that
leverages a Mixture-of-Experts (MoE) approach for molecular property
prediction. ASE-Mol incorporates BRICS decomposition and significant
substructure awareness to dynamically identify positive and negative
substructures. By integrating a MoE architecture, it reduces the adverse impact
of negative motifs while improving adaptability to positive motifs.
Experimental results on eight benchmark datasets demonstrate that ASE-Mol
achieves state-of-the-art performance, with significant improvements in both
accuracy and interpretability.
|
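The substructure-exposure step relies on BRICS decomposition, which RDKit provides out of the box. The sketch below decomposes an arbitrary example molecule; routing the resulting fragments through a gated mixture of experts is indicated only in comments.

```python
from rdkit import Chem
from rdkit.Chem import BRICS

# BRICS decomposition as used by ASE-Mol to expose substructures;
# the example molecule (aspirin) is an arbitrary choice.
mol = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")
fragments = sorted(BRICS.BRICSDecompose(mol))
print(fragments)
# An MoE layer would embed each fragment and let a gating network weight
# expert outputs, down-weighting experts tied to negative motifs.
```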
2504.05846 | Steeve Marcelyn | Steeve Cuthbert Marcelyn, Yucen Gao, Yuzhe Zhang, Xiaofeng Gao, Guihai
Chen | PathGPT: Leveraging Large Language Models for Personalized Route
Generation | null | null | null | null | cs.IR cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | The proliferation of GPS-enabled devices has led to the accumulation of a
substantial corpus of historical trajectory data. By leveraging these data for
training machine learning models, researchers have devised novel data-driven
methodologies that address the personalized route recommendation (PRR) problem.
In contrast to conventional algorithms such as Dijkstra's shortest-path
algorithm, these novel algorithms possess the capacity to discern and learn
patterns within the data, thereby facilitating the generation of more
personalized paths. However, once these models have been trained, their
application is constrained to the generation of routes that align with their
training patterns. This limitation renders them less adaptable to novel
scenarios, and the deployment of multiple machine learning models might be
necessary to address new possible scenarios, which can be costly as each model
must be trained separately. Inspired by recent advances in the field of Large
Language Models (LLMs), we leveraged their natural language understanding
capabilities to develop a unified model to solve the PRR problem while being
seamlessly adaptable to new scenarios without additional training. To
accomplish this, we combined the extensive knowledge LLMs acquired during
training with further access to external hand-crafted context
information, similar to RAG (Retrieval-Augmented Generation) systems, to enhance
their ability to generate paths according to user-defined requirements.
Extensive experiments on different datasets show a considerable uplift in LLM
performance on the PRR problem.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 09:25:21 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Marcelyn",
"Steeve Cuthbert",
""
],
[
"Gao",
"Yucen",
""
],
[
"Zhang",
"Yuzhe",
""
],
[
"Gao",
"Xiaofeng",
""
],
[
"Chen",
"Guihai",
""
]
] | TITLE: PathGPT: Leveraging Large Language Models for Personalized Route
Generation
ABSTRACT: The proliferation of GPS-enabled devices has led to the accumulation of a
substantial corpus of historical trajectory data. By leveraging these data for
training machine learning models, researchers have devised novel data-driven
methodologies that address the personalized route recommendation (PRR) problem.
In contrast to conventional algorithms such as Dijkstra's shortest-path
algorithm, these novel algorithms possess the capacity to discern and learn
patterns within the data, thereby facilitating the generation of more
personalized paths. However, once these models have been trained, their
application is constrained to the generation of routes that align with their
training patterns. This limitation renders them less adaptable to novel
scenarios, and the deployment of multiple machine learning models might be
necessary to address new possible scenarios, which can be costly as each model
must be trained separately. Inspired by recent advances in the field of Large
Language Models (LLMs), we leveraged their natural language understanding
capabilities to develop a unified model to solve the PRR problem while being
seamlessly adaptable to new scenarios without additional training. To
accomplish this, we combined the extensive knowledge LLMs acquired during
training with further access to external hand-crafted context
information, similar to RAG (Retrieval-Augmented Generation) systems, to enhance
their ability to generate paths according to user-defined requirements.
Extensive experiments on different datasets show a considerable uplift in LLM
performance on the PRR problem.
|
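The RAG-style mechanism described above amounts to retrieving relevant historical routes and splicing them into the LLM prompt together with the user's requirement. The retriever stub and prompt wording below are hypothetical placeholders, not PathGPT's actual templates.

```python
from typing import List

def retrieve_similar_routes(origin: str, destination: str) -> List[str]:
    # Stand-in for a retriever over historical trajectory data.
    return ["A -> B -> D (fastest at 8am)", "A -> C -> D (avoids highway)"]

def build_prompt(origin: str, destination: str, requirement: str) -> str:
    context = "\n".join(retrieve_similar_routes(origin, destination))
    return (
        f"Historical routes:\n{context}\n\n"
        f"Generate a route from {origin} to {destination} "
        f"that satisfies: {requirement}."
    )

print(build_prompt("A", "D", "minimize highway driving"))
```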
2504.05849 | Julian Lorenz | Julian Lorenz, Katja Ludwig, Valentin Haug, Rainer Lienhart | On the Importance of Conditioning for Privacy-Preserving Data
Augmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Latent diffusion models can be used as a powerful augmentation method to
artificially extend datasets for enhanced training. To the human eye, these
augmented images look very different to the originals. Previous work has
suggested to use this data augmentation technique for data anonymization.
However, we show that latent diffusion models that are conditioned on features
like depth maps or edges to guide the diffusion process are not suitable as a
privacy preserving method. We use a contrastive learning approach to train a
model that can correctly identify people out of a pool of candidates. Moreover,
we demonstrate that anonymization using conditioned diffusion models is
susceptible to black box attacks. We attribute the success of the described
methods to the conditioning of the latent diffusion model in the anonymization
process. The diffusion model is instructed to produce similar edges for the
anonymized images. Hence, a model can learn to recognize these patterns for
identification.
| [
{
"version": "v1",
"created": "Tue, 8 Apr 2025 09:27:51 GMT"
}
] | 2025-04-09T00:00:00 | [
[
"Lorenz",
"Julian",
""
],
[
"Ludwig",
"Katja",
""
],
[
"Haug",
"Valentin",
""
],
[
"Lienhart",
"Rainer",
""
]
] | TITLE: On the Importance of Conditioning for Privacy-Preserving Data
Augmentation
ABSTRACT: Latent diffusion models can be used as a powerful augmentation method to
artificially extend datasets for enhanced training. To the human eye, these
augmented images look very different from the originals. Previous work has
suggested using this data augmentation technique for data anonymization.
However, we show that latent diffusion models that are conditioned on features
like depth maps or edges to guide the diffusion process are not suitable as a
privacy-preserving method. We use a contrastive learning approach to train a
model that can correctly identify people out of a pool of candidates. Moreover,
we demonstrate that anonymization using conditioned diffusion models is
susceptible to black box attacks. We attribute the success of the described
methods to the conditioning of the latent diffusion model in the anonymization
process. The diffusion model is instructed to produce similar edges for the
anonymized images. Hence, a model can learn to recognize these patterns for
identification.
|
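The identification attack reduces to nearest-neighbor matching in the contrastively learned embedding space. The untrained encoder below is a stand-in used only to show the matching step; a real attack would use the trained contrastive model.

```python
import torch
import torch.nn.functional as F

# Stand-in encoder; the actual attack uses a contrastively trained network.
encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 64))

query = torch.randn(1, 3, 32, 32)        # conditioned-diffusion "anonymized" image
candidates = torch.randn(10, 3, 32, 32)  # pool of known identities

q = F.normalize(encoder(query), dim=-1)
c = F.normalize(encoder(candidates), dim=-1)
match = (q @ c.T).argmax(dim=-1)         # nearest neighbor by cosine similarity
print("predicted identity:", match.item())
```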