id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
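Each row below follows this schema. As a quick orientation, here is a minimal sketch of loading and inspecting one record with the Hugging Face `datasets` library; the repository id `user/arxiv-metadata-prompts` is a placeholder, not the dataset's actual name.

```python
# Minimal sketch, assuming the dataset is hosted on the Hugging Face Hub.
# "user/arxiv-metadata-prompts" is a placeholder repository id.
from datasets import load_dataset

ds = load_dataset("user/arxiv-metadata-prompts", split="train")

record = ds[0]
print(record["id"], "-", record["title"])
print(record["categories"])           # space-separated arXiv categories, e.g. "cs.DL cs.SI"
print(record["versions"])             # list of {"version": ..., "created": ...} dicts
print(record["authors_parsed"][:2])   # sequence of [family, given, suffix] entries
print(record["prompt"][:80])          # "TITLE: ...\nABSTRACT: ..." string
```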
2504.04677 | Yiling Lin | Yiling Lin, Linzhuo Li, Lingfei Wu | The Disruption Index Measures Displacement Between a Paper and Its Most
Cited Reference | null | null | null | null | cs.DL cs.SI | http://creativecommons.org/licenses/by/4.0/ | Initially developed to capture technical innovation and later adapted to
identify scientific breakthroughs, the Disruption Index (D-index) offers the
first quantitative framework for analyzing transformative research. Despite its
promise, prior studies have struggled to clarify its theoretical foundations,
raising concerns about potential bias. Here, we show that, contrary to the
common belief that the D-index measures absolute innovation, it captures
relative innovation: a paper's ability to displace its most-cited reference. In
this way, the D-index reflects scientific progress as the replacement of older
answers with newer ones to the same fundamental question, much like light bulbs
replacing candles. We support this insight through mathematical analysis,
expert surveys, and large-scale bibliometric evidence. To facilitate
replication, validation, and broader use, we release a dataset of D-index
values for 49 million journal articles (1800-2024) based on OpenAlex.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 02:04:10 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Lin",
"Yiling",
""
],
[
"Li",
"Linzhuo",
""
],
[
"Wu",
"Lingfei",
""
]
] | TITLE: The Disruption Index Measures Displacement Between a Paper and Its Most
Cited Reference
ABSTRACT: Initially developed to capture technical innovation and later adapted to
identify scientific breakthroughs, the Disruption Index (D-index) offers the
first quantitative framework for analyzing transformative research. Despite its
promise, prior studies have struggled to clarify its theoretical foundations,
raising concerns about potential bias. Here, we show that, contrary to the
common belief that the D-index measures absolute innovation, it captures
relative innovation: a paper's ability to displace its most-cited reference. In
this way, the D-index reflects scientific progress as the replacement of older
answers with newer ones to the same fundamental question, much like light bulbs
replacing candles. We support this insight through mathematical analysis,
expert surveys, and large-scale bibliometric evidence. To facilitate
replication, validation, and broader use, we release a dataset of D-index
values for 49 million journal articles (1800-2024) based on OpenAlex.
|
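For readers unfamiliar with the measure described above, the D-index (CD index) is commonly defined in the bibliometric literature over the papers published after a focal paper as shown below; whether the released OpenAlex-based dataset applies exactly this form or a variant is not stated in the preview above.

```latex
D = \frac{N_i - N_j}{N_i + N_j + N_k}
```

Here N_i is the number of later papers citing the focal paper but none of its references, N_j the number citing both the focal paper and at least one of its references, and N_k the number citing only the references. The displacement reading corresponds to a paper drawing citations away from what it cites.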
2504.04679 | Wanzhou Liu | Wanzhou Liu, Zhexiao Xiong, Xinyu Li, Nathan Jacobs | DeclutterNeRF: Generative-Free 3D Scene Recovery for Occlusion Removal | Accepted by CVPR 2025 4th CV4Metaverse Workshop. 15 pages, 10
figures. Code and data at: https://github.com/wanzhouliu/declutter-nerf | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent novel view synthesis (NVS) techniques, including Neural Radiance
Fields (NeRF) and 3D Gaussian Splatting (3DGS), have greatly advanced 3D scene
reconstruction with high-quality rendering and realistic detail recovery.
Effectively removing occlusions while preserving scene details can further
enhance the robustness and applicability of these techniques. However, existing
approaches for object and occlusion removal predominantly rely on generative
priors, which, despite filling the resulting holes, introduce new artifacts and
blurriness. Moreover, existing benchmark datasets for evaluating occlusion
removal methods lack realistic complexity and viewpoint variations. To address
these issues, we introduce DeclutterSet, a novel dataset featuring diverse
scenes with pronounced occlusions distributed across foreground, midground, and
background, exhibiting substantial relative motion across viewpoints. We
further introduce DeclutterNeRF, an occlusion removal method free from
generative priors. DeclutterNeRF introduces joint multi-view optimization of
learnable camera parameters, occlusion annealing regularization, and employs an
explainable stochastic structural similarity loss, ensuring high-quality,
artifact-free reconstructions from incomplete images. Experiments demonstrate
that DeclutterNeRF significantly outperforms state-of-the-art methods on our
proposed DeclutterSet, establishing a strong baseline for future research.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 02:22:08 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Liu",
"Wanzhou",
""
],
[
"Xiong",
"Zhexiao",
""
],
[
"Li",
"Xinyu",
""
],
[
"Jacobs",
"Nathan",
""
]
] | TITLE: DeclutterNeRF: Generative-Free 3D Scene Recovery for Occlusion Removal
ABSTRACT: Recent novel view synthesis (NVS) techniques, including Neural Radiance
Fields (NeRF) and 3D Gaussian Splatting (3DGS), have greatly advanced 3D scene
reconstruction with high-quality rendering and realistic detail recovery.
Effectively removing occlusions while preserving scene details can further
enhance the robustness and applicability of these techniques. However, existing
approaches for object and occlusion removal predominantly rely on generative
priors, which, despite filling the resulting holes, introduce new artifacts and
blurriness. Moreover, existing benchmark datasets for evaluating occlusion
removal methods lack realistic complexity and viewpoint variations. To address
these issues, we introduce DeclutterSet, a novel dataset featuring diverse
scenes with pronounced occlusions distributed across foreground, midground, and
background, exhibiting substantial relative motion across viewpoints. We
further introduce DeclutterNeRF, an occlusion removal method free from
generative priors. DeclutterNeRF introduces joint multi-view optimization of
learnable camera parameters, occlusion annealing regularization, and employs an
explainable stochastic structural similarity loss, ensuring high-quality,
artifact-free reconstructions from incomplete images. Experiments demonstrate
that DeclutterNeRF significantly outperforms state-of-the-art methods on our
proposed DeclutterSet, establishing a strong baseline for future research.
|
2504.04687 | Yicheng Leng | Yicheng Leng, Chaowei Fang, Junye Chen, Yixiang Fang, Sheng Li,
Guanbin Li | Bridging Knowledge Gap Between Image Inpainting and Large-Area Visible
Watermark Removal | To be published in AAAI 2025 | null | null | null | cs.CV cs.AI cs.MM eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visible watermark removal, which involves watermark cleaning and background
content restoration, is pivotal for evaluating the resilience of watermarks.
Existing deep neural network (DNN)-based models still struggle with large-area
watermarks and are overly dependent on the quality of watermark mask
prediction. To overcome these challenges, we introduce a novel feature adapting
framework that leverages the representation modeling capacity of a pre-trained
image inpainting model. Our approach bridges the knowledge gap between image
inpainting and watermark removal by fusing information of the residual
background content beneath watermarks into the inpainting backbone model. We
establish a dual-branch system to capture and embed features from the residual
background content, which are merged into intermediate features of the
inpainting backbone model via gated feature fusion modules. Moreover, for
relieving the dependence on high-quality watermark masks, we introduce a new
training paradigm by utilizing coarse watermark masks to guide the inference
process. This contributes to a visible watermark removal model that is insensitive
to the quality of the watermark mask during testing. Extensive experiments on both
a large-scale synthesized dataset and a real-world dataset demonstrate that our
approach significantly outperforms existing state-of-the-art methods. The
source code is available in the supplementary materials.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 02:37:14 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Leng",
"Yicheng",
""
],
[
"Fang",
"Chaowei",
""
],
[
"Chen",
"Junye",
""
],
[
"Fang",
"Yixiang",
""
],
[
"Li",
"Sheng",
""
],
[
"Li",
"Guanbin",
""
]
] | TITLE: Bridging Knowledge Gap Between Image Inpainting and Large-Area Visible
Watermark Removal
ABSTRACT: Visible watermark removal, which involves watermark cleaning and background
content restoration, is pivotal for evaluating the resilience of watermarks.
Existing deep neural network (DNN)-based models still struggle with large-area
watermarks and are overly dependent on the quality of watermark mask
prediction. To overcome these challenges, we introduce a novel feature adapting
framework that leverages the representation modeling capacity of a pre-trained
image inpainting model. Our approach bridges the knowledge gap between image
inpainting and watermark removal by fusing information of the residual
background content beneath watermarks into the inpainting backbone model. We
establish a dual-branch system to capture and embed features from the residual
background content, which are merged into intermediate features of the
inpainting backbone model via gated feature fusion modules. Moreover, for
relieving the dependence on high-quality watermark masks, we introduce a new
training paradigm by utilizing coarse watermark masks to guide the inference
process. This contributes to a visible watermark removal model that is insensitive
to the quality of the watermark mask during testing. Extensive experiments on both
a large-scale synthesized dataset and a real-world dataset demonstrate that our
approach significantly outperforms existing state-of-the-art methods. The
source code is available in the supplementary materials.
|
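To make the gated feature fusion idea above concrete, here is a minimal PyTorch sketch of merging auxiliary-branch features into backbone features through a learned gate; the layer sizes and the exact gating form are assumptions for illustration, not the paper's architecture.

```python
import torch
import torch.nn as nn

class GatedFeatureFusion(nn.Module):
    """Merge auxiliary-branch features into backbone features via a learned gate."""
    def __init__(self, channels: int):
        super().__init__()
        # The gate predicts, per position and channel, how much branch information to inject.
        self.gate = nn.Sequential(nn.Conv2d(2 * channels, channels, kernel_size=1), nn.Sigmoid())
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, backbone_feat: torch.Tensor, branch_feat: torch.Tensor) -> torch.Tensor:
        g = self.gate(torch.cat([backbone_feat, branch_feat], dim=1))
        return backbone_feat + g * self.proj(branch_feat)

# Toy usage with 64-channel intermediate feature maps.
fusion = GatedFeatureFusion(64)
out = fusion(torch.randn(1, 64, 32, 32), torch.randn(1, 64, 32, 32))
print(out.shape)  # torch.Size([1, 64, 32, 32])
```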
2504.04699 | Martin Weyssow | Martin Weyssow, Chengran Yang, Junkai Chen, Yikun Li, Huihui Huang,
Ratnadira Widyasari, Han Wei Ang, Frank Liauw, Eng Lieh Ouh, Lwin Khin Shar,
David Lo | R2Vul: Learning to Reason about Software Vulnerabilities with
Reinforcement Learning and Structured Reasoning Distillation | null | null | null | null | cs.SE cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) have shown promising performance in software
vulnerability detection (SVD), yet their reasoning capabilities remain
unreliable. Existing approaches relying on chain-of-thought (CoT) struggle to
provide relevant and actionable security assessments. Additionally, effective
SVD requires not only generating coherent reasoning but also differentiating
between well-founded and misleading yet plausible security assessments, an
aspect overlooked in prior work. To this end, we introduce R2Vul, a novel
approach that distills structured reasoning into small LLMs using reinforcement
learning from AI feedback (RLAIF). Through RLAIF, R2Vul enables LLMs to produce
structured, security-aware reasoning that is actionable and reliable while
explicitly learning to distinguish valid assessments from misleading ones. We
evaluate R2Vul across five languages against SAST tools, CoT, instruction
tuning, and classification-based baselines. Our results show that R2Vul with
structured reasoning distillation enables a 1.5B student LLM to rival larger
models while improving generalization to out-of-distribution vulnerabilities.
Beyond model improvements, we contribute a large-scale, multilingual preference
dataset featuring structured reasoning to support future research in SVD.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 03:04:16 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Weyssow",
"Martin",
""
],
[
"Yang",
"Chengran",
""
],
[
"Chen",
"Junkai",
""
],
[
"Li",
"Yikun",
""
],
[
"Huang",
"Huihui",
""
],
[
"Widyasari",
"Ratnadira",
""
],
[
"Ang",
"Han Wei",
""
],
[
"Liauw",
"Frank",
""
],
[
"Ouh",
"Eng Lieh",
""
],
[
"Shar",
"Lwin Khin",
""
],
[
"Lo",
"David",
""
]
] | TITLE: R2Vul: Learning to Reason about Software Vulnerabilities with
Reinforcement Learning and Structured Reasoning Distillation
ABSTRACT: Large language models (LLMs) have shown promising performance in software
vulnerability detection (SVD), yet their reasoning capabilities remain
unreliable. Existing approaches relying on chain-of-thought (CoT) struggle to
provide relevant and actionable security assessments. Additionally, effective
SVD requires not only generating coherent reasoning but also differentiating
between well-founded and misleading yet plausible security assessments, an
aspect overlooked in prior work. To this end, we introduce R2Vul, a novel
approach that distills structured reasoning into small LLMs using reinforcement
learning from AI feedback (RLAIF). Through RLAIF, R2Vul enables LLMs to produce
structured, security-aware reasoning that is actionable and reliable while
explicitly learning to distinguish valid assessments from misleading ones. We
evaluate R2Vul across five languages against SAST tools, CoT, instruction
tuning, and classification-based baselines. Our results show that R2Vul with
structured reasoning distillation enables a 1.5B student LLM to rival larger
models while improving generalization to out-of-distribution vulnerabilities.
Beyond model improvements, we contribute a large-scale, multilingual preference
dataset featuring structured reasoning to support future research in SVD.
|
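The RLAIF setup described above implies preference pairs that contrast well-founded and misleading security assessments. A hypothetical record shape is sketched below; the field names and texts are illustrative assumptions, not the released dataset's schema.

```python
# Hypothetical preference example for RLAIF-style tuning on vulnerability
# reasoning; field names and wording are illustrative, not the actual schema.
example = {
    "language": "c",
    "code": "strcpy(buf, user_input);",
    "label": "vulnerable",
    "chosen": (
        "Attacker-controlled input is copied into a fixed-size buffer without a "
        "bounds check, enabling an out-of-bounds write (CWE-787); vulnerable."
    ),
    "rejected": (
        "The input is simply stored in a local buffer, which is a routine and "
        "safe operation; not vulnerable."
    ),
}
print(example["chosen"])
```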
2504.04706 | Lingyue Fu Miss | Lingyue Fu, Ting Long, Jianghao Lin, Wei Xia, Xinyi Dai, Ruiming Tang,
Yasheng Wang, Weinan Zhang, Yong Yu | AdvKT: An Adversarial Multi-Step Training Framework for Knowledge
Tracing | null | null | null | null | cs.LG cs.AI cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge Tracing (KT) monitors students' knowledge states and simulates
their responses to question sequences. Existing KT models typically follow a
single-step training paradigm, which leads to discrepancies with the multi-step
inference process required in real-world simulations, resulting in significant
error accumulation. This accumulation of error, coupled with the issue of data
sparsity, can substantially degrade the performance of recommendation models in
intelligent tutoring systems. To address these challenges, we propose a
novel Adversarial Multi-Step Training Framework for Knowledge Tracing (AdvKT),
which, for the first time, focuses on the multi-step KT task. More
specifically, AdvKT leverages an adversarial learning paradigm involving a
generator and a discriminator. The generator mimics high-reward responses,
effectively reducing error accumulation across multiple steps, while the
discriminator provides feedback to generate synthetic data. Additionally, we
design specialized data augmentation techniques to enrich the training data
with realistic variations, ensuring that the model generalizes well even in
scenarios with sparse data. Experiments conducted on four real-world datasets
demonstrate the superiority of AdvKT over existing KT models, showcasing its
ability to address both error accumulation and data sparsity issues
effectively.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 03:31:57 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Fu",
"Lingyue",
""
],
[
"Long",
"Ting",
""
],
[
"Lin",
"Jianghao",
""
],
[
"Xia",
"Wei",
""
],
[
"Dai",
"Xinyi",
""
],
[
"Tang",
"Ruiming",
""
],
[
"Wang",
"Yasheng",
""
],
[
"Zhang",
"Weinan",
""
],
[
"Yu",
"Yong",
""
]
] | TITLE: AdvKT: An Adversarial Multi-Step Training Framework for Knowledge
Tracing
ABSTRACT: Knowledge Tracing (KT) monitors students' knowledge states and simulates
their responses to question sequences. Existing KT models typically follow a
single-step training paradigm, which leads to discrepancies with the multi-step
inference process required in real-world simulations, resulting in significant
error accumulation. This accumulation of error, coupled with the issue of data
sparsity, can substantially degrade the performance of recommendation models in
intelligent tutoring systems. To address these challenges, we propose a
novel Adversarial Multi-Step Training Framework for Knowledge Tracing (AdvKT),
which, for the first time, focuses on the multi-step KT task. More
specifically, AdvKT leverages an adversarial learning paradigm involving a
generator and a discriminator. The generator mimics high-reward responses,
effectively reducing error accumulation across multiple steps, while the
discriminator provides feedback to generate synthetic data. Additionally, we
design specialized data augmentation techniques to enrich the training data
with realistic variations, ensuring that the model generalizes well even in
scenarios with sparse data. Experiments conducted on four real-world datasets
demonstrate the superiority of AdvKT over existing KT models, showcasing its
ability to address both error accumulation and data sparsity issues
effectively.
|
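A compact sketch of the generator/discriminator interplay described above: the generator rolls student responses out over multiple steps and the discriminator scores real versus generated response sequences. Network sizes, the data layout, and the exact losses are assumptions for illustration, not AdvKT's actual design.

```python
import torch
import torch.nn as nn

# Toy multi-step adversarial training step (GAN-style) for knowledge tracing.
gen = nn.GRU(input_size=8, hidden_size=32, batch_first=True)   # simulates the student state
gen_head = nn.Linear(32, 1)                                     # P(correct) per step
disc = nn.Sequential(nn.Linear(8 + 1, 32), nn.ReLU(), nn.Linear(32, 1))

opt_g = torch.optim.Adam(list(gen.parameters()) + list(gen_head.parameters()), lr=1e-3)
opt_d = torch.optim.Adam(disc.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

questions = torch.randn(16, 10, 8)                    # question features, 10 steps per sequence
real_resp = torch.randint(0, 2, (16, 10, 1)).float()  # logged student responses

# Discriminator: real response sequences vs. multi-step rollouts from the generator.
with torch.no_grad():
    fake_resp = torch.sigmoid(gen_head(gen(questions)[0]))
d_real = disc(torch.cat([questions, real_resp], dim=-1))
d_fake = disc(torch.cat([questions, fake_resp], dim=-1))
loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
opt_d.zero_grad(); loss_d.backward(); opt_d.step()

# Generator: produce rollouts the discriminator scores as realistic.
fake_resp = torch.sigmoid(gen_head(gen(questions)[0]))
loss_g = bce(disc(torch.cat([questions, fake_resp], dim=-1)), torch.ones_like(d_fake))
opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```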
2504.04708 | Minchul Kim | Minchul Kim, Dingqiang Ye, Yiyang Su, Feng Liu, Xiaoming Liu | SapiensID: Foundation for Human Recognition | To appear in CVPR2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing human recognition systems often rely on separate, specialized models
for face and body analysis, limiting their effectiveness in real-world
scenarios where pose, visibility, and context vary widely. This paper
introduces SapiensID, a unified model that bridges this gap, achieving robust
performance across diverse settings. SapiensID introduces (i) Retina Patch
(RP), a dynamic patch generation scheme that adapts to subject scale and
ensures consistent tokenization of regions of interest, (ii) a masked
recognition model (MRM) that learns from variable token length, and (iii)
Semantic Attention Head (SAH), a module that learns pose-invariant
representations by pooling features around key body parts. To facilitate
training, we introduce WebBody4M, a large-scale dataset capturing diverse poses
and scale variations. Extensive experiments demonstrate that SapiensID achieves
state-of-the-art results on various body ReID benchmarks, outperforming
specialized models in both short-term and long-term scenarios while remaining
competitive with dedicated face recognition systems. Furthermore, SapiensID
establishes a strong baseline for the newly introduced challenge of Cross
Pose-Scale ReID, demonstrating its ability to generalize to complex, real-world
conditions.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 03:38:07 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Kim",
"Minchul",
""
],
[
"Ye",
"Dingqiang",
""
],
[
"Su",
"Yiyang",
""
],
[
"Liu",
"Feng",
""
],
[
"Liu",
"Xiaoming",
""
]
] | TITLE: SapiensID: Foundation for Human Recognition
ABSTRACT: Existing human recognition systems often rely on separate, specialized models
for face and body analysis, limiting their effectiveness in real-world
scenarios where pose, visibility, and context vary widely. This paper
introduces SapiensID, a unified model that bridges this gap, achieving robust
performance across diverse settings. SapiensID introduces (i) Retina Patch
(RP), a dynamic patch generation scheme that adapts to subject scale and
ensures consistent tokenization of regions of interest, (ii) a masked
recognition model (MRM) that learns from variable token length, and (iii)
Semantic Attention Head (SAH), a module that learns pose-invariant
representations by pooling features around key body parts. To facilitate
training, we introduce WebBody4M, a large-scale dataset capturing diverse poses
and scale variations. Extensive experiments demonstrate that SapiensID achieves
state-of-the-art results on various body ReID benchmarks, outperforming
specialized models in both short-term and long-term scenarios while remaining
competitive with dedicated face recognition systems. Furthermore, SapiensID
establishes a strong baseline for the newly introduced challenge of Cross
Pose-Scale ReID, demonstrating its ability to generalize to complex, real-world
conditions.
|
2504.04722 | Adnan Khan | Adnan Khan, Alireza Choubineh, Mai A. Shaaban, Abbas Akkasi, Majid
Komeili | TactileNet: Bridging the Accessibility Gap with AI-Generated Tactile
Graphics for Individuals with Vision Impairment | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Tactile graphics are essential for providing access to visual information for
the 43 million people globally living with vision loss, as estimated by global
prevalence data. However, traditional methods for creating these tactile
graphics are labor-intensive and struggle to meet demand. We introduce
TactileNet, the first comprehensive dataset and AI-driven framework for
generating tactile graphics using text-to-image Stable Diffusion (SD) models.
By integrating Low-Rank Adaptation (LoRA) and DreamBooth, our method fine-tunes
SD models to produce high-fidelity, guideline-compliant tactile graphics while
reducing computational costs. Evaluations involving tactile experts show that
generated graphics achieve 92.86% adherence to tactile standards and 100%
alignment with natural images in posture and features. Our framework also
demonstrates scalability, generating 32,000 images (7,050 filtered for quality)
across 66 classes, with prompt editing enabling customizable outputs (e.g.,
adding/removing details). Our work empowers designers to focus on refinement,
significantly accelerating accessibility efforts. It underscores the
transformative potential of AI for social good, offering a scalable solution to
bridge the accessibility gap in education and beyond.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 04:21:31 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Khan",
"Adnan",
""
],
[
"Choubineh",
"Alireza",
""
],
[
"Shaaban",
"Mai A.",
""
],
[
"Akkasi",
"Abbas",
""
],
[
"Komeili",
"Majid",
""
]
] | TITLE: TactileNet: Bridging the Accessibility Gap with AI-Generated Tactile
Graphics for Individuals with Vision Impairment
ABSTRACT: Tactile graphics are essential for providing access to visual information for
the 43 million people globally living with vision loss, as estimated by global
prevalence data. However, traditional methods for creating these tactile
graphics are labor-intensive and struggle to meet demand. We introduce
TactileNet, the first comprehensive dataset and AI-driven framework for
generating tactile graphics using text-to-image Stable Diffusion (SD) models.
By integrating Low-Rank Adaptation (LoRA) and DreamBooth, our method fine-tunes
SD models to produce high-fidelity, guideline-compliant tactile graphics while
reducing computational costs. Evaluations involving tactile experts show that
generated graphics achieve 92.86% adherence to tactile standards and 100%
alignment with natural images in posture and features. Our framework also
demonstrates scalability, generating 32,000 images (7,050 filtered for quality)
across 66 classes, with prompt editing enabling customizable outputs (e.g.,
adding/removing details). Our work empowers designers to focus on refinement,
significantly accelerating accessibility efforts. It underscores the
transformative potential of AI for social good, offering a scalable solution to
bridge the accessibility gap in education and beyond.
|
2504.04726 | Chu Zhao | Chu Zhao, Enneng Yang, Yuting Liu, Jianzhe Zhao, Guibing Guo, Xingwei
Wang | Can LLM-Driven Hard Negative Sampling Empower Collaborative Filtering?
Findings and Potentials | 11 pages | null | null | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | Hard negative samples can accelerate model convergence and optimize decision
boundaries, which is key to improving the performance of recommender systems.
Although large language models (LLMs) possess strong semantic understanding and
generation capabilities, systematic research has not yet been conducted on how
to generate hard negative samples effectively. To fill this gap, this paper
introduces the concept of Semantic Negative Sampling and explores how to
optimize LLMs for high-quality hard negative sampling. Specifically, we design
an experimental pipeline that includes three main modules (profile generation,
semantic negative sampling, and semantic alignment) to verify the potential of
LLM-driven hard negative sampling in enhancing the accuracy of collaborative
filtering (CF). Experimental results indicate that hard negative samples
generated based on LLMs, when semantically aligned and integrated into CF, can
significantly improve CF performance, although there is still a certain gap
compared to traditional negative sampling methods. Further analysis reveals
that this gap primarily arises from two major challenges: noisy samples and
lack of behavioral constraints. To address these challenges, we propose a
framework called HNLMRec, based on fine-tuning LLMs supervised by collaborative
signals. Experimental results show that this framework outperforms traditional
negative sampling and other LLM-driven recommendation methods across multiple
datasets, providing new solutions for empowering traditional RS with LLMs.
Additionally, we validate the excellent generalization ability of the LLM-based
semantic negative sampling method on new datasets, demonstrating its potential
in alleviating issues such as data sparsity, popularity bias, and the problem
of false hard negative samples. Our implementation code is available at
https://github.com/user683/HNLMRec.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 04:39:45 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Zhao",
"Chu",
""
],
[
"Yang",
"Enneng",
""
],
[
"Liu",
"Yuting",
""
],
[
"Zhao",
"Jianzhe",
""
],
[
"Guo",
"Guibing",
""
],
[
"Wang",
"Xingwei",
""
]
] | TITLE: Can LLM-Driven Hard Negative Sampling Empower Collaborative Filtering?
Findings and Potentials
ABSTRACT: Hard negative samples can accelerate model convergence and optimize decision
boundaries, which is key to improving the performance of recommender systems.
Although large language models (LLMs) possess strong semantic understanding and
generation capabilities, systematic research has not yet been conducted on how
to generate hard negative samples effectively. To fill this gap, this paper
introduces the concept of Semantic Negative Sampling and explores how to
optimize LLMs for high-quality hard negative sampling. Specifically, we design
an experimental pipeline that includes three main modules (profile generation,
semantic negative sampling, and semantic alignment) to verify the potential of
LLM-driven hard negative sampling in enhancing the accuracy of collaborative
filtering (CF). Experimental results indicate that hard negative samples
generated based on LLMs, when semantically aligned and integrated into CF, can
significantly improve CF performance, although there is still a certain gap
compared to traditional negative sampling methods. Further analysis reveals
that this gap primarily arises from two major challenges: noisy samples and
lack of behavioral constraints. To address these challenges, we propose a
framework called HNLMRec, based on fine-tuning LLMs supervised by collaborative
signals. Experimental results show that this framework outperforms traditional
negative sampling and other LLM-driven recommendation methods across multiple
datasets, providing new solutions for empowering traditional RS with LLMs.
Additionally, we validate the excellent generalization ability of the LLM-based
semantic negative sampling method on new datasets, demonstrating its potential
in alleviating issues such as data sparsity, popularity bias, and the problem
of false hard negative samples. Our implementation code is available at
https://github.com/user683/HNLMRec.
|
2504.04732 | Zhenxing Ming | Zhenxing Ming, Julie Stephany Berrio, Mao Shan, Stewart Worrall | Inverse++: Vision-Centric 3D Semantic Occupancy Prediction Assisted with
3D Object Detection | null | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by/4.0/ | 3D semantic occupancy prediction aims to forecast detailed geometric and
semantic information of the surrounding environment for autonomous vehicles
(AVs) using onboard surround-view cameras. Existing methods primarily focus on
intricate inner structure module designs to improve model performance, such as
efficient feature sampling and aggregation processes or intermediate feature
representation formats. In this paper, we explore multitask learning by
introducing an additional 3D supervision signal by incorporating an additional
3D object detection auxiliary branch. This extra 3D supervision signal enhances
the model's overall performance by strengthening the capability of the
intermediate features to capture small dynamic objects in the scene, and these
small dynamic objects often include vulnerable road users, i.e. bicycles,
motorcycles, and pedestrians, whose detection is crucial for ensuring driving
safety in autonomous vehicles. Extensive experiments conducted on the nuScenes
datasets, including challenging rainy and nighttime scenarios, showcase that
our approach attains state-of-the-art results, achieving an IoU score of 31.73%
and a mIoU score of 20.91% and excels at detecting vulnerable road users (VRU).
The code will be made available at: https://github.com/DanielMing123/Inverse++
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 05:08:22 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Ming",
"Zhenxing",
""
],
[
"Berrio",
"Julie Stephany",
""
],
[
"Shan",
"Mao",
""
],
[
"Worrall",
"Stewart",
""
]
] | TITLE: Inverse++: Vision-Centric 3D Semantic Occupancy Prediction Assisted with
3D Object Detection
ABSTRACT: 3D semantic occupancy prediction aims to forecast detailed geometric and
semantic information of the surrounding environment for autonomous vehicles
(AVs) using onboard surround-view cameras. Existing methods primarily focus on
intricate inner structure module designs to improve model performance, such as
efficient feature sampling and aggregation processes or intermediate feature
representation formats. In this paper, we explore multitask learning by
introducing an additional 3D supervision signal by incorporating an additional
3D object detection auxiliary branch. This extra 3D supervision signal enhances
the model's overall performance by strengthening the capability of the
intermediate features to capture small dynamic objects in the scene, and these
small dynamic objects often include vulnerable road users, i.e. bicycles,
motorcycles, and pedestrians, whose detection is crucial for ensuring driving
safety in autonomous vehicles. Extensive experiments conducted on the nuScenes
datasets, including challenging rainy and nighttime scenarios, showcase that
our approach attains state-of-the-art results, achieving an IoU score of 31.73%
and a mIoU score of 20.91% and excels at detecting vulnerable road users (VRU).
The code will be made available at: https://github.com/DanielMing123/Inverse++
|
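The auxiliary-branch idea above amounts to a standard multitask objective; a plausible form is shown below, where the weighting term is an assumption since the abstract does not give the loss.

```latex
\mathcal{L}_{\text{total}} = \mathcal{L}_{\text{occupancy}} + \lambda \, \mathcal{L}_{\text{3D-det}}
```

Here \lambda balances the occupancy prediction loss against the auxiliary 3D detection loss; the extra gradient signal from the detection branch is what sharpens the intermediate features on small dynamic objects.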
2504.04736 | Anna Goldie | Anna Goldie, Azalia Mirhoseini, Hao Zhou, Irene Cai, Christopher D.
Manning | Synthetic Data Generation & Multi-Step RL for Reasoning & Tool Use | null | null | null | null | cs.AI cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Reinforcement learning has been shown to improve the performance of large
language models. However, traditional approaches like RLHF or RLAIF treat the
problem as single-step. As focus shifts toward more complex reasoning and
agentic tasks, language models must take multiple steps of text generation,
reasoning and environment interaction before generating a solution. We propose
a synthetic data generation and RL methodology targeting multi-step
optimization scenarios. This approach, called Step-Wise Reinforcement Learning
(SWiRL), iteratively generates multi-step reasoning and tool use data, and then
learns from that data. It employs a simple step-wise decomposition that breaks
each multi-step trajectory into multiple sub-trajectories corresponding to each
action by the original model. It then applies synthetic data filtering and RL
optimization on these sub-trajectories. We evaluated SWiRL on a number of
multi-step tool use, question answering, and mathematical reasoning tasks. Our
experiments show that SWiRL outperforms baseline approaches by 21.5%, 12.3%,
14.8%, 11.1%, and 15.3% in relative accuracy on GSM8K, HotPotQA, CofCA,
MuSiQue, and BeerQA, respectively. Excitingly, the approach exhibits
generalization across tasks: for example, training only on HotPotQA (text
question-answering) improves zero-shot performance on GSM8K (a math dataset) by
a relative 16.9%.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 05:20:58 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Goldie",
"Anna",
""
],
[
"Mirhoseini",
"Azalia",
""
],
[
"Zhou",
"Hao",
""
],
[
"Cai",
"Irene",
""
],
[
"Manning",
"Christopher D.",
""
]
] | TITLE: Synthetic Data Generation & Multi-Step RL for Reasoning & Tool Use
ABSTRACT: Reinforcement learning has been shown to improve the performance of large
language models. However, traditional approaches like RLHF or RLAIF treat the
problem as single-step. As focus shifts toward more complex reasoning and
agentic tasks, language models must take multiple steps of text generation,
reasoning and environment interaction before generating a solution. We propose
a synthetic data generation and RL methodology targeting multi-step
optimization scenarios. This approach, called Step-Wise Reinforcement Learning
(SWiRL), iteratively generates multi-step reasoning and tool use data, and then
learns from that data. It employs a simple step-wise decomposition that breaks
each multi-step trajectory into multiple sub-trajectories corresponding to each
action by the original model. It then applies synthetic data filtering and RL
optimization on these sub-trajectories. We evaluated SWiRL on a number of
multi-step tool use, question answering, and mathematical reasoning tasks. Our
experiments show that SWiRL outperforms baseline approaches by 21.5%, 12.3%,
14.8%, 11.1%, and 15.3% in relative accuracy on GSM8K, HotPotQA, CofCA,
MuSiQue, and BeerQA, respectively. Excitingly, the approach exhibits
generalization across tasks: for example, training only on HotPotQA (text
question-answering) improves zero-shot performance on GSM8K (a math dataset) by
a relative 16.9%.
|
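The step-wise decomposition described above can be pictured with a small helper that splits a multi-step trajectory into one sub-trajectory per model action; the dictionary layout is an illustrative assumption, not SWiRL's actual data format.

```python
# Split a multi-step trajectory into per-action sub-trajectories.
def decompose(trajectory):
    """trajectory: list of {"state": ..., "action": ...} steps taken by the model."""
    subs = []
    for t, step in enumerate(trajectory):
        subs.append({
            "context": trajectory[:t],     # everything generated/observed before step t
            "state": step["state"],        # e.g. question plus tool results so far
            "action": step["action"],      # the model output to filter and optimize
        })
    return subs

steps = [
    {"state": "question", "action": "search('capital of France')"},
    {"state": "question + search results", "action": "final answer: Paris"},
]
print(len(decompose(steps)))  # 2 sub-trajectories, one per action
```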
2504.04737 | Shubham Kumar Nigam | Shubham Kumar Nigam, Balaramamahanthi Deepak Patnaik, Shivam Mishra,
Noel Shallum, Kripabandhu Ghosh and Arnab Bhattacharya | TathyaNyaya and FactLegalLlama: Advancing Factual Judgment Prediction
and Explanation in the Indian Legal Context | null | null | null | null | cs.CL cs.AI cs.IR cs.LG | http://creativecommons.org/licenses/by/4.0/ | In the landscape of Fact-based Judgment Prediction and Explanation (FJPE),
reliance on factual data is essential for developing robust and realistic
AI-driven decision-making tools. This paper introduces TathyaNyaya, the largest
annotated dataset for FJPE tailored to the Indian legal context, encompassing
judgments from the Supreme Court of India and various High Courts. Derived from
the Hindi terms "Tathya" (fact) and "Nyaya" (justice), the TathyaNyaya dataset
is uniquely designed to focus on factual statements rather than complete legal
texts, reflecting real-world judicial processes where factual data drives
outcomes. Complementing this dataset, we present FactLegalLlama, an
instruction-tuned variant of the LLaMa-3-8B Large Language Model (LLM),
optimized for generating high-quality explanations in FJPE tasks. Finetuned on
the factual data in TathyaNyaya, FactLegalLlama integrates predictive accuracy
with coherent, contextually relevant explanations, addressing the critical need
for transparency and interpretability in AI-assisted legal systems. Our
methodology combines transformers for binary judgment prediction with
FactLegalLlama for explanation generation, creating a robust framework for
advancing FJPE in the Indian legal domain. TathyaNyaya not only surpasses
existing datasets in scale and diversity but also establishes a benchmark for
building explainable AI systems in legal analysis. The findings underscore the
importance of factual precision and domain-specific tuning in enhancing
predictive performance and interpretability, positioning TathyaNyaya and
FactLegalLlama as foundational resources for AI-assisted legal decision-making.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 05:27:32 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Nigam",
"Shubham Kumar",
""
],
[
"Patnaik",
"Balaramamahanthi Deepak",
""
],
[
"Mishra",
"Shivam",
""
],
[
"Shallum",
"Noel",
""
],
[
"Ghosh",
"Kripabandhu",
""
],
[
"Bhattacharya",
"Arnab",
""
]
] | TITLE: TathyaNyaya and FactLegalLlama: Advancing Factual Judgment Prediction
and Explanation in the Indian Legal Context
ABSTRACT: In the landscape of Fact-based Judgment Prediction and Explanation (FJPE),
reliance on factual data is essential for developing robust and realistic
AI-driven decision-making tools. This paper introduces TathyaNyaya, the largest
annotated dataset for FJPE tailored to the Indian legal context, encompassing
judgments from the Supreme Court of India and various High Courts. Derived from
the Hindi terms "Tathya" (fact) and "Nyaya" (justice), the TathyaNyaya dataset
is uniquely designed to focus on factual statements rather than complete legal
texts, reflecting real-world judicial processes where factual data drives
outcomes. Complementing this dataset, we present FactLegalLlama, an
instruction-tuned variant of the LLaMa-3-8B Large Language Model (LLM),
optimized for generating high-quality explanations in FJPE tasks. Finetuned on
the factual data in TathyaNyaya, FactLegalLlama integrates predictive accuracy
with coherent, contextually relevant explanations, addressing the critical need
for transparency and interpretability in AI-assisted legal systems. Our
methodology combines transformers for binary judgment prediction with
FactLegalLlama for explanation generation, creating a robust framework for
advancing FJPE in the Indian legal domain. TathyaNyaya not only surpasses
existing datasets in scale and diversity but also establishes a benchmark for
building explainable AI systems in legal analysis. The findings underscore the
importance of factual precision and domain-specific tuning in enhancing
predictive performance and interpretability, positioning TathyaNyaya and
FactLegalLlama as foundational resources for AI-assisted legal decision-making.
|
2504.04739 | Minwei Zhao | Minwei Zhao, Sanja Scepanovic, Stephen Law, Daniele Quercia, Ivica
Obadic | MedGNN: Capturing the Links Between Urban Characteristics and Medical
Prescriptions | 12 pages' main content. This is a preprint. Submitted to KDD 2025 | null | null | null | cs.LG cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding how urban socio-demographic and environmental factors relate
to health is essential for public health and urban planning. However,
traditional statistical methods struggle with nonlinear effects, while machine
learning models often fail to capture geographical (nearby areas being more
similar) and topological (unequal connectivity between places) effects in an
interpretable way. To address this, we propose MedGNN, a spatio-topologically
explicit framework that constructs a 2-hop spatial graph, integrating
positional and locational node embeddings with urban characteristics in a graph
neural network. Applied to MEDSAT, a comprehensive dataset covering over 150
environmental and socio-demographic factors and six prescription outcomes
(depression, anxiety, diabetes, hypertension, asthma, and opioids) across 4,835
Greater London neighborhoods, MedGNN improved predictions by over 25% on
average compared to baseline methods. Using depression prescriptions as a case
study, we analyzed graph embeddings via geographical principal component
analysis, identifying findings that: align with prior research (e.g., higher
antidepressant prescriptions among older and White populations), contribute to
ongoing debates (e.g., greenery linked to higher and NO2 to lower
prescriptions), and warrant further study (e.g., canopy evaporation correlated
with fewer prescriptions). These results demonstrate MedGNN's potential, and
more broadly, of carefully applied machine learning, to advance
transdisciplinary public health research.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 05:35:16 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Zhao",
"Minwei",
""
],
[
"Scepanovic",
"Sanja",
""
],
[
"Law",
"Stephen",
""
],
[
"Quercia",
"Daniele",
""
],
[
"Obadic",
"Ivica",
""
]
] | TITLE: MedGNN: Capturing the Links Between Urban Characteristics and Medical
Prescriptions
ABSTRACT: Understanding how urban socio-demographic and environmental factors relate
to health is essential for public health and urban planning. However,
traditional statistical methods struggle with nonlinear effects, while machine
learning models often fail to capture geographical (nearby areas being more
similar) and topological (unequal connectivity between places) effects in an
interpretable way. To address this, we propose MedGNN, a spatio-topologically
explicit framework that constructs a 2-hop spatial graph, integrating
positional and locational node embeddings with urban characteristics in a graph
neural network. Applied to MEDSAT, a comprehensive dataset covering over 150
environmental and socio-demographic factors and six prescription outcomes
(depression, anxiety, diabetes, hypertension, asthma, and opioids) across 4,835
Greater London neighborhoods, MedGNN improved predictions by over 25% on
average compared to baseline methods. Using depression prescriptions as a case
study, we analyzed graph embeddings via geographical principal component
analysis, identifying findings that: align with prior research (e.g., higher
antidepressant prescriptions among older and White populations), contribute to
ongoing debates (e.g., greenery linked to higher and NO2 to lower
prescriptions), and warrant further study (e.g., canopy evaporation correlated
with fewer prescriptions). These results demonstrate MedGNN's potential, and
more broadly, of carefully applied machine learning, to advance
transdisciplinary public health research.
|
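A toy version of the 2-hop spatial graph construction mentioned above is sketched below with networkx; the neighborhood adjacency is made up, and the real model additionally attaches urban features and positional embeddings to each node.

```python
import networkx as nx

# Made-up adjacency between neighborhoods (shared borders).
adjacency = {"A": ["B"], "B": ["A", "C"], "C": ["B", "D"], "D": ["C"]}

G = nx.Graph()
for u, nbrs in adjacency.items():
    G.add_edges_from((u, v) for v in nbrs)

# Add 2-hop edges: connect each area to its neighbors' neighbors.
two_hop = nx.Graph(G)
for u in G:
    for v in G[u]:
        for w in G[v]:
            if w != u:
                two_hop.add_edge(u, w)

print(sorted(two_hop.edges()))  # includes (A, C) and (B, D) in addition to the border edges
```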
2504.04740 | Samarth Mishra | Samarth Mishra, Kate Saenko and Venkatesh Saligrama | Enhancing Compositional Reasoning in Vision-Language Models with
Synthetic Preference Data | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Compositionality, or correctly recognizing scenes as compositions of atomic
visual concepts, remains difficult for multimodal large language models
(MLLMs). Even state-of-the-art MLLMs such as GPT-4o can make mistakes in
distinguishing compositions like "dog chasing cat" vs "cat chasing dog". While
on Winoground, a benchmark for measuring such reasoning, MLLMs have made
significant progress, they are still far from a human's performance. We show
that compositional reasoning in these models can be improved by elucidating
such concepts via data, where a model is trained to prefer the correct caption
for an image over a close but incorrect one. We introduce SCRAMBLe: Synthetic
Compositional Reasoning Augmentation of MLLMs with Binary preference Learning,
an approach for preference tuning open-weight MLLMs on synthetic preference
data generated in a fully automated manner from existing image-caption data.
SCRAMBLe holistically improves these MLLMs' compositional reasoning
capabilities which we can see through significant improvements across multiple
vision language compositionality benchmarks, as well as smaller but significant
improvements on general question answering tasks. As a sneak peek, a
SCRAMBLe-tuned Molmo-7B model improves on Winoground from 49.5% to 54.8% (best reported
to date), while improving by ~1% on more general visual question answering
tasks. Code for SCRAMBLe along with tuned models and our synthetic training
dataset is available at https://github.com/samarth4149/SCRAMBLe.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 05:35:34 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Mishra",
"Samarth",
""
],
[
"Saenko",
"Kate",
""
],
[
"Saligrama",
"Venkatesh",
""
]
] | TITLE: Enhancing Compositional Reasoning in Vision-Language Models with
Synthetic Preference Data
ABSTRACT: Compositionality, or correctly recognizing scenes as compositions of atomic
visual concepts, remains difficult for multimodal large language models
(MLLMs). Even state-of-the-art MLLMs such as GPT-4o can make mistakes in
distinguishing compositions like "dog chasing cat" vs "cat chasing dog". While
on Winoground, a benchmark for measuring such reasoning, MLLMs have made
significant progress, they are still far from a human's performance. We show
that compositional reasoning in these models can be improved by elucidating
such concepts via data, where a model is trained to prefer the correct caption
for an image over a close but incorrect one. We introduce SCRAMBLe: Synthetic
Compositional Reasoning Augmentation of MLLMs with Binary preference Learning,
an approach for preference tuning open-weight MLLMs on synthetic preference
data generated in a fully automated manner from existing image-caption data.
SCRAMBLe holistically improves these MLLMs' compositional reasoning
capabilities which we can see through significant improvements across multiple
vision language compositionality benchmarks, as well as smaller but significant
improvements on general question answering tasks. As a sneak peek, a
SCRAMBLe-tuned Molmo-7B model improves on Winoground from 49.5% to 54.8% (best reported
to date), while improving by ~1% on more general visual question answering
tasks. Code for SCRAMBLe along with tuned models and our synthetic training
dataset is available at https://github.com/samarth4149/SCRAMBLe.
|
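A toy illustration of the kind of binary preference pair discussed above (correct caption versus a close but wrong one) is shown below; the real SCRAMBLe pipeline generates negatives automatically with a model rather than with this hand-written swap rule.

```python
# Build a chosen/rejected caption pair by swapping subject and object.
def swap_subject_object(caption: str, subject: str, obj: str) -> str:
    return (caption.replace(subject, "<tmp>")
                   .replace(obj, subject)
                   .replace("<tmp>", obj))

positive = "a dog chasing a cat"
negative = swap_subject_object(positive, "dog", "cat")   # "a cat chasing a dog"

preference_example = {"image": "0001.jpg", "chosen": positive, "rejected": negative}
print(preference_example)
```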
2504.04744 | He Zhu | He Zhu, Quyu Kong, Kechun Xu, Xunlong Xia, Bing Deng, Jieping Ye, Rong
Xiong, Yue Wang | Grounding 3D Object Affordance with Language Instructions, Visual
Observations and Interactions | CVPR 2025 | null | null | null | cs.CV cs.AI cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Grounding 3D object affordance is a task that locates objects in 3D space
where they can be manipulated, which links perception and action for embodied
intelligence. For example, for an intelligent robot, it is necessary to
accurately ground the affordance of an object and grasp it according to human
instructions. In this paper, we introduce a novel task that grounds 3D object
affordance based on language instructions, visual observations and
interactions, which is inspired by cognitive science. We collect an Affordance
Grounding dataset with Points, Images and Language instructions (AGPIL) to
support the proposed task. In the 3D physical world, due to observation
orientation, object rotation, or spatial occlusion, we can only get a partial
observation of the object. So this dataset includes affordance estimations of
objects from full-view, partial-view, and rotation-view perspectives. To
accomplish this task, we propose LMAffordance3D, the first multi-modal,
language-guided 3D affordance grounding network, which applies a
vision-language model to fuse 2D and 3D spatial features with semantic
features. Comprehensive experiments on AGPIL demonstrate the effectiveness and
superiority of our method on this task, even in unseen experimental settings.
Our project is available at https://sites.google.com/view/lmaffordance3d.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 05:38:23 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Zhu",
"He",
""
],
[
"Kong",
"Quyu",
""
],
[
"Xu",
"Kechun",
""
],
[
"Xia",
"Xunlong",
""
],
[
"Deng",
"Bing",
""
],
[
"Ye",
"Jieping",
""
],
[
"Xiong",
"Rong",
""
],
[
"Wang",
"Yue",
""
]
] | TITLE: Grounding 3D Object Affordance with Language Instructions, Visual
Observations and Interactions
ABSTRACT: Grounding 3D object affordance is a task that locates objects in 3D space
where they can be manipulated, which links perception and action for embodied
intelligence. For example, for an intelligent robot, it is necessary to
accurately ground the affordance of an object and grasp it according to human
instructions. In this paper, we introduce a novel task that grounds 3D object
affordance based on language instructions, visual observations and
interactions, which is inspired by cognitive science. We collect an Affordance
Grounding dataset with Points, Images and Language instructions (AGPIL) to
support the proposed task. In the 3D physical world, due to observation
orientation, object rotation, or spatial occlusion, we can only get a partial
observation of the object. So this dataset includes affordance estimations of
objects from full-view, partial-view, and rotation-view perspectives. To
accomplish this task, we propose LMAffordance3D, the first multi-modal,
language-guided 3D affordance grounding network, which applies a
vision-language model to fuse 2D and 3D spatial features with semantic
features. Comprehensive experiments on AGPIL demonstrate the effectiveness and
superiority of our method on this task, even in unseen experimental settings.
Our project is available at https://sites.google.com/view/lmaffordance3d.
|
2504.04745 | Ankush Raut | Ankush Raut, Xiaofeng Zhu, Maria Leonor Pacheco | Can LLMs Interpret and Leverage Structured Linguistic Representations? A
Case Study with AMRs | 13 pages, 23 figures. Submitted to XLLM @ ACL 2025 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | This paper evaluates the ability of Large Language Models (LLMs) to leverage
contextual information in the form of structured linguistic representations.
Specifically, we examine the impact of encoding both short and long contexts
using Abstract Meaning Representation (AMR) structures across a diverse set of
language tasks. We perform our analysis using 8-bit quantized and
instruction-tuned versions of Llama 3.1 (8B), Phi-3, and Mistral 7B. Our
results indicate that, for tasks involving short contexts, augmenting the
prompt with the AMR of the original language context often degrades the
performance of the underlying LLM. However, for tasks that involve long
contexts, such as dialogue summarization in the SAMSum dataset, this
enhancement improves LLM performance, for example, by increasing the zero-shot
cosine similarity score of Llama 3.1 from 66.2% to 76%. This improvement is
more evident in the newer and larger LLMs, but does not extend to the older or
smaller ones. In addition, we observe that LLMs can effectively reconstruct the
original text from a linearized AMR, achieving a cosine similarity of 81.3% in
the best-case scenario.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 05:38:40 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Raut",
"Ankush",
""
],
[
"Zhu",
"Xiaofeng",
""
],
[
"Pacheco",
"Maria Leonor",
""
]
] | TITLE: Can LLMs Interpret and Leverage Structured Linguistic Representations? A
Case Study with AMRs
ABSTRACT: This paper evaluates the ability of Large Language Models (LLMs) to leverage
contextual information in the form of structured linguistic representations.
Specifically, we examine the impact of encoding both short and long contexts
using Abstract Meaning Representation (AMR) structures across a diverse set of
language tasks. We perform our analysis using 8-bit quantized and
instruction-tuned versions of Llama 3.1 (8B), Phi-3, and Mistral 7B. Our
results indicate that, for tasks involving short contexts, augmenting the
prompt with the AMR of the original language context often degrades the
performance of the underlying LLM. However, for tasks that involve long
contexts, such as dialogue summarization in the SAMSum dataset, this
enhancement improves LLM performance, for example, by increasing the zero-shot
cosine similarity score of Llama 3.1 from 66.2% to 76%. This improvement is
more evident in the newer and larger LLMs, but does not extend to the older or
smaller ones. In addition, we observe that LLMs can effectively reconstruct the
original text from a linearized AMR, achieving a cosine similarity of 81.3% in
the best-case scenario.
|
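The prompt augmentation evaluated above can be pictured as prepending the AMR alongside the raw context; the AMR below is the standard parse of a short textbook sentence, and the prompt wording is an illustrative assumption rather than the paper's template.

```python
# Augment a task prompt with the AMR of its context.
context = "The boy wants to go."
amr = "(w / want-01 :ARG0 (b / boy) :ARG1 (g / go-02 :ARG0 b))"

prompt = (
    "Context:\n" + context + "\n\n"
    "AMR of the context:\n" + amr + "\n\n"
    "Task: Summarize the context in one sentence."
)
print(prompt)
```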
2504.04747 | Byung Cheol Song | Yoojin Jung and Byung Cheol Song | Two is Better than One: Efficient Ensemble Defense for Robust and
Compact Models | Accepted to CVPR2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Deep learning-based computer vision systems adopt complex and large
architectures to improve performance, yet they face challenges in deployment on
resource-constrained mobile and edge devices. To address this issue, model
compression techniques such as pruning, quantization, and matrix factorization
have been proposed; however, these compressed models are often highly
vulnerable to adversarial attacks. We introduce the Efficient Ensemble
Defense (EED) technique, which diversifies the compression of a single base
model based on different pruning importance scores and enhances ensemble
diversity to achieve high adversarial robustness and resource efficiency. EED
dynamically determines the number of necessary sub-models during the inference
stage, minimizing unnecessary computations while maintaining high robustness.
On the CIFAR-10 and SVHN datasets, EED demonstrated state-of-the-art robustness
performance compared to existing adversarial pruning techniques, along with an
inference speed improvement of up to 1.86 times. This proves that EED is a
powerful defense solution in resource-constrained environments.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 05:41:35 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Jung",
"Yoojin",
""
],
[
"Song",
"Byung Cheol",
""
]
] | TITLE: Two is Better than One: Efficient Ensemble Defense for Robust and
Compact Models
ABSTRACT: Deep learning-based computer vision systems adopt complex and large
architectures to improve performance, yet they face challenges in deployment on
resource-constrained mobile and edge devices. To address this issue, model
compression techniques such as pruning, quantization, and matrix factorization
have been proposed; however, these compressed models are often highly
vulnerable to adversarial attacks. We introduce the Efficient Ensemble
Defense (EED) technique, which diversifies the compression of a single base
model based on different pruning importance scores and enhances ensemble
diversity to achieve high adversarial robustness and resource efficiency. EED
dynamically determines the number of necessary sub-models during the inference
stage, minimizing unnecessary computations while maintaining high robustness.
On the CIFAR-10 and SVHN datasets, EED demonstrated state-of-the-art robustness
performance compared to existing adversarial pruning techniques, along with an
inference speed improvement of up to 1.86 times. This proves that EED is a
powerful defense solution in resource-constrained environments.
|
2504.04752 | Dominik Kowald PhD | Dominik Kowald | Investigating Popularity Bias Amplification in Recommender Systems
Employed in the Entertainment Domain | Under review at EWAF'25, summarizes fairness and popularity bias
research presented in Dr. Kowald's habilitation | null | null | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | Recommender systems have become an integral part of our daily online
experience by analyzing past user behavior to suggest relevant content in
entertainment domains such as music, movies, and books. Today, they are among
the most widely used applications of AI and machine learning. Consequently,
regulations and guidelines for trustworthy AI, such as the European AI Act,
which addresses issues like bias and fairness, are highly relevant to the
design, development, and evaluation of recommender systems. One particularly
important type of bias in this context is popularity bias, which results in the
unfair underrepresentation of less popular content in recommendation lists.
This work summarizes our research on investigating the amplification of
popularity bias in recommender systems within the entertainment sector.
Analyzing datasets from three entertainment domains, music, movies, and anime,
we demonstrate that an item's recommendation frequency is positively correlated
with its popularity. As a result, user groups with little interest in popular
content receive less accurate recommendations compared to those who prefer
widely popular items. Furthermore, we aim to better understand the connection
between recommendation accuracy, calibration quality of algorithms, and
popularity bias amplification.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 05:58:01 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Kowald",
"Dominik",
""
]
] | TITLE: Investigating Popularity Bias Amplification in Recommender Systems
Employed in the Entertainment Domain
ABSTRACT: Recommender systems have become an integral part of our daily online
experience by analyzing past user behavior to suggest relevant content in
entertainment domains such as music, movies, and books. Today, they are among
the most widely used applications of AI and machine learning. Consequently,
regulations and guidelines for trustworthy AI, such as the European AI Act,
which addresses issues like bias and fairness, are highly relevant to the
design, development, and evaluation of recommender systems. One particularly
important type of bias in this context is popularity bias, which results in the
unfair underrepresentation of less popular content in recommendation lists.
This work summarizes our research on investigating the amplification of
popularity bias in recommender systems within the entertainment sector.
Analyzing datasets from three entertainment domains, music, movies, and anime,
we demonstrate that an item's recommendation frequency is positively correlated
with its popularity. As a result, user groups with little interest in popular
content receive less accurate recommendations compared to those who prefer
widely popular items. Furthermore, we aim to better understand the connection
between recommendation accuracy, calibration quality of algorithms, and
popularity bias amplification.
|
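The abstract above reports a positive correlation between an item's recommendation frequency and its popularity. A simple way to check this on an interaction log is sketched below; the pair-list inputs are hypothetical placeholders, not the paper's data format.

```python
# Sketch: rank correlation between item popularity in the training data and how
# often each item appears in the generated recommendation lists.
from collections import Counter
from scipy.stats import spearmanr


def popularity_bias_correlation(train_interactions, recommendations):
    """Both arguments are iterables of (user_id, item_id) pairs."""
    popularity = Counter(item for _, item in train_interactions)
    rec_frequency = Counter(item for _, item in recommendations)
    items = sorted(popularity)
    pop = [popularity[i] for i in items]
    freq = [rec_frequency.get(i, 0) for i in items]
    rho, p_value = spearmanr(pop, freq)
    return rho, p_value  # rho > 0 indicates popularity bias amplification
```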
2504.04765 | Huijie Li | Huijie Li, Yide Yu, Si Shi, Anmin Hu, Jian Huo, Wei Lin, Chaoran Wu,
Wuman Luo | Multi-Agent Deep Reinforcement Learning for Multiple Anesthetics
Collaborative Control | null | null | null | null | eess.SY cs.SY | http://creativecommons.org/licenses/by/4.0/ | Automated control of personalized multiple anesthetics in clinical Total
Intravenous Anesthesia (TIVA) is crucial yet challenging. Current systems,
including target-controlled infusion (TCI) and closed-loop systems, either rely
on relatively static pharmacokinetic/pharmacodynamic (PK/PD) models or focus on
single anesthetic control, limiting personalization and collaborative control.
To address these issues, we propose a novel framework, Value Decomposition
Multi-Agent Deep Reinforcement Learning (VD-MADRL). VD-MADRL optimizes the
collaboration between two anesthetics, propofol (Agent I) and remifentanil (Agent II), and uses a Markov Game (MG) to identify optimal actions among
heterogeneous agents. We employ various value function decomposition methods to
resolve the credit allocation problem and enhance collaborative control. We
also introduce a multivariate environment model based on random forest (RF) for
anesthesia state simulation. Additionally, a data resampling and alignment
technique ensures synchronized trajectory data. Our experiments on general and
thoracic surgery datasets show that VD-MADRL performs better than human
experience. It improves dose precision and keeps anesthesia states stable,
providing great clinical value.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 06:36:24 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Li",
"Huijie",
""
],
[
"Yu",
"Yide",
""
],
[
"Shi",
"Si",
""
],
[
"Hu",
"Anmin",
""
],
[
"Huo",
"Jian",
""
],
[
"Lin",
"Wei",
""
],
[
"Wu",
"Chaoran",
""
],
[
"Luo",
"Wuman",
""
]
] | TITLE: Multi-Agent Deep Reinforcement Learning for Multiple Anesthetics
Collaborative Control
ABSTRACT: Automated control of personalized multiple anesthetics in clinical Total
Intravenous Anesthesia (TIVA) is crucial yet challenging. Current systems,
including target-controlled infusion (TCI) and closed-loop systems, either rely
on relatively static pharmacokinetic/pharmacodynamic (PK/PD) models or focus on
single anesthetic control, limiting personalization and collaborative control.
To address these issues, we propose a novel framework, Value Decomposition
Multi-Agent Deep Reinforcement Learning (VD-MADRL). VD-MADRL optimizes the
collaboration between two anesthetics, propofol (Agent I) and remifentanil (Agent II), and uses a Markov Game (MG) to identify optimal actions among
heterogeneous agents. We employ various value function decomposition methods to
resolve the credit allocation problem and enhance collaborative control. We
also introduce a multivariate environment model based on random forest (RF) for
anesthesia state simulation. Additionally, a data resampling and alignment
technique ensures synchronized trajectory data. Our experiments on general and
thoracic surgery datasets show that VD-MADRL performs better than human
experience. It improves dose precision and keeps anesthesia states stable,
providing great clinical value.
|
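VD-MADRL relies on value function decomposition to share credit between the propofol and remifentanil agents. The sketch below shows the simplest additive decomposition (VDN-style), purely as an illustration of the idea; network sizes and the choice of mixer are assumptions, not the paper's architecture.

```python
# Additive value decomposition: each agent has its own Q-network, and the joint
# value used for training is the sum of the chosen per-agent action values.
import torch
import torch.nn as nn


class AgentQNet(nn.Module):
    def __init__(self, obs_dim, n_actions, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim, hidden), nn.ReLU(), nn.Linear(hidden, n_actions)
        )

    def forward(self, obs):          # obs: (batch, obs_dim)
        return self.net(obs)         # per-action values: (batch, n_actions)


class VDNMixer(nn.Module):
    """Joint value Q_tot is the sum of per-agent chosen-action values."""

    def forward(self, chosen_q_values):    # (batch, n_agents)
        return chosen_q_values.sum(dim=1, keepdim=True)   # (batch, 1)
```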
2504.04780 | Daochang Wang | Chenxi Zhao and Daochang Wang and Siqian Zhang and Gangyao Kuang | Bottom-Up Scattering Information Perception Network for SAR target
recognition | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Deep learning-based methods for synthetic aperture radar (SAR) image target recognition have been widely studied in recent years. The existing deep methods
are insufficient to perceive and mine the scattering information of SAR images,
resulting in performance bottlenecks and poor robustness of the algorithms. To
this end, this paper proposes a novel bottom-up scattering information
perception network for more interpretable target recognition by constructing
the proprietary interpretation network for SAR images. Firstly, the localized
scattering perceptron is proposed to replace the backbone feature extractor
based on CNN networks to deeply mine the underlying scattering information of
the target. Then, an unsupervised scattering part feature extraction model is
proposed to robustly characterize the target scattering part information and
provide fine-grained target representation. Finally, by aggregating the
knowledge of target parts to form the complete target description, the
interpretability and discriminative ability of the model are improved. We
perform experiments on the FAST-Vehicle dataset and the SAR-ACD dataset to
validate the performance of the proposed method.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 07:15:08 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Zhao",
"Chenxi",
""
],
[
"Wang",
"Daochang",
""
],
[
"Zhang",
"Siqian",
""
],
[
"Kuang",
"Gangyao",
""
]
] | TITLE: Bottom-Up Scattering Information Perception Network for SAR target
recognition
ABSTRACT: Deep learning-based methods for synthetic aperture radar (SAR) image target recognition have been widely studied in recent years. The existing deep methods
are insufficient to perceive and mine the scattering information of SAR images,
resulting in performance bottlenecks and poor robustness of the algorithms. To
this end, this paper proposes a novel bottom-up scattering information
perception network for more interpretable target recognition by constructing
the proprietary interpretation network for SAR images. Firstly, the localized
scattering perceptron is proposed to replace the backbone feature extractor
based on CNN networks to deeply mine the underlying scattering information of
the target. Then, an unsupervised scattering part feature extraction model is
proposed to robustly characterize the target scattering part information and
provide fine-grained target representation. Finally, by aggregating the
knowledge of target parts to form the complete target description, the
interpretability and discriminative ability of the model are improved. We
perform experiments on the FAST-Vehicle dataset and the SAR-ACD dataset to
validate the performance of the proposed method.
|
2504.04781 | Chaoyi Wang | Chaoyi Wang, Baoqing Li, Xinhan Di | OCC-MLLM-CoT-Alpha: Towards Multi-stage Occlusion Recognition Based on
Large Language Models via 3D-Aware Supervision and Chain-of-Thoughts Guidance | This work has been accepted to the Multimodal Algorithmic Reasoning
(MAR) Workshop at CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Comprehending occluded objects is not well studied in existing large-scale visual-language multi-modal models. Current state-of-the-art multi-modal large models struggle to provide satisfactory results in understanding occluded
objects through universal visual encoders and supervised learning strategies.
Therefore, we propose OCC-MLLM-CoT-Alpha, a multi-modal large vision language
framework that integrates 3D-aware supervision and Chain-of-Thoughts guidance.
Particularly, (1) we build a multi-modal large vision-language model framework
which consists of a large multi-modal vision-language model and a 3D
reconstruction expert model. (2) the corresponding multi-modal
Chain-of-Thoughts is learned through a combination of supervised and
reinforcement training strategies, allowing the multi-modal vision-language
model to enhance the recognition ability with learned multi-modal
chain-of-thoughts guidance. (3) A large-scale multi-modal chain-of-thoughts
reasoning dataset, consisting of $110k$ samples of occluded objects held in
hand, is built. In the evaluation, the proposed methods demonstrate decision
score improvement of 15.75%,15.30%,16.98%,14.62%, and 4.42%,3.63%,6.94%,10.70%
for two settings of a variety of state-of-the-art models.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 07:15:26 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wang",
"Chaoyi",
""
],
[
"Li",
"Baoqing",
""
],
[
"Di",
"Xinhan",
""
]
] | TITLE: OCC-MLLM-CoT-Alpha: Towards Multi-stage Occlusion Recognition Based on
Large Language Models via 3D-Aware Supervision and Chain-of-Thoughts Guidance
ABSTRACT: Comprehending occluded objects is not well studied in existing large-scale visual-language multi-modal models. Current state-of-the-art multi-modal large models struggle to provide satisfactory results in understanding occluded
objects through universal visual encoders and supervised learning strategies.
Therefore, we propose OCC-MLLM-CoT-Alpha, a multi-modal large vision language
framework that integrates 3D-aware supervision and Chain-of-Thoughts guidance.
Particularly, (1) we build a multi-modal large vision-language model framework
which consists of a large multi-modal vision-language model and a 3D
reconstruction expert model. (2) the corresponding multi-modal
Chain-of-Thoughts is learned through a combination of supervised and
reinforcement training strategies, allowing the multi-modal vision-language
model to enhance the recognition ability with learned multi-modal
chain-of-thoughts guidance. (3) A large-scale multi-modal chain-of-thoughts
reasoning dataset, consisting of $110k$ samples of occluded objects held in
hand, is built. In the evaluation, the proposed methods demonstrate decision
score improvement of 15.75%,15.30%,16.98%,14.62%, and 4.42%,3.63%,6.94%,10.70%
for two settings of a variety of state-of-the-art models.
|
2504.04783 | Tianyang Wu | Tianyang Wu, Lipeng Wan, Yuhang Wang, Qiang Wan, Xuguang Lan | Playing Non-Embedded Card-Based Games with Reinforcement Learning | Match videos: https://www.bilibili.com/video/BV1xn4y1R7GQ, All code:
https://github.com/wty-yy/katacr, Detection dataset:
https://github.com/wty-yy/Clash-Royale-Detection-Dataset, Expert dataset:
https://github.com/wty-yy/Clash-Royale-Replay-Dataset | Intelligent Robotics and Applications. ICIRA 2024. Lecture Notes
in Computer Science, vol 15206. Springer, Singapore (2025) | 10.1007/978-981-96-0792-1_20 | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Significant progress has been made in AI for games, including board games,
MOBA, and RTS games. However, complex agents are typically developed in an
embedded manner, directly accessing game state information, unlike human
players who rely on noisy visual data, leading to unfair competition.
Developing complex non-embedded agents remains challenging, especially in
card-based RTS games with complex features and large state spaces. We propose a
non-embedded offline reinforcement learning training strategy using visual
inputs to achieve real-time autonomous gameplay in the RTS game Clash Royale.
Due to the lack of an object detection dataset for this game, we designed an
efficient generative object detection dataset for training. We extract features
using state-of-the-art object detection and optical character recognition
models. Our method enables real-time image acquisition, perception feature
fusion, decision-making, and control on mobile devices, successfully defeating
built-in AI opponents. All code is open-sourced at
https://github.com/wty-yy/katacr.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 07:26:02 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wu",
"Tianyang",
""
],
[
"Wan",
"Lipeng",
""
],
[
"Wang",
"Yuhang",
""
],
[
"Wan",
"Qiang",
""
],
[
"Lan",
"Xuguang",
""
]
] | TITLE: Playing Non-Embedded Card-Based Games with Reinforcement Learning
ABSTRACT: Significant progress has been made in AI for games, including board games,
MOBA, and RTS games. However, complex agents are typically developed in an
embedded manner, directly accessing game state information, unlike human
players who rely on noisy visual data, leading to unfair competition.
Developing complex non-embedded agents remains challenging, especially in
card-based RTS games with complex features and large state spaces. We propose a
non-embedded offline reinforcement learning training strategy using visual
inputs to achieve real-time autonomous gameplay in the RTS game Clash Royale.
Due to the lack of an object detection dataset for this game, we designed an
efficient generative object detection dataset for training. We extract features
using state-of-the-art object detection and optical character recognition
models. Our method enables real-time image acquisition, perception feature
fusion, decision-making, and control on mobile devices, successfully defeating
built-in AI opponents. All code is open-sourced at
https://github.com/wty-yy/katacr.
|
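The abstract above fuses object-detection and OCR outputs into perception features for the decision policy. The hypothetical helper below illustrates one way such a fusion could look, with a placeholder arena grid and elixir reading; the project's real interfaces live in the linked repository.

```python
# Hypothetical perception fusion: rasterize detections onto a coarse arena grid
# and append a normalized elixir value read via OCR.
import numpy as np

GRID_W, GRID_H, N_CLASSES = 18, 32, 8   # assumed arena discretization


def fuse_features(detections, elixir_text):
    """detections: list of (class_id, x, y) with x, y in [0, 1]; elixir_text: OCR string."""
    grid = np.zeros((N_CLASSES, GRID_H, GRID_W), dtype=np.float32)
    for class_id, x, y in detections:
        gx = min(int(x * GRID_W), GRID_W - 1)
        gy = min(int(y * GRID_H), GRID_H - 1)
        grid[class_id, gy, gx] = 1.0
    elixir = float(elixir_text) if elixir_text.strip().isdigit() else 0.0
    return np.concatenate([grid.ravel(), [elixir / 10.0]])
```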
2504.04784 | Hui Liu | Hui Liu, Bin Zou, Suiyun Zhang, Kecheng Chen, Rui Liu, Haoliang Li | Disentangling Instruction Influence in Diffusion Transformers for
Parallel Multi-Instruction-Guided Image Editing | 14 pages, 8 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Instruction-guided image editing enables users to specify modifications using
natural language, offering more flexibility and control. Among existing
frameworks, Diffusion Transformers (DiTs) outperform U-Net-based diffusion
models in scalability and performance. However, while real-world scenarios
often require concurrent execution of multiple instructions, step-by-step
editing suffers from accumulated errors and degraded quality, and integrating
multiple instructions with a single prompt usually results in incomplete edits
due to instruction conflicts. We propose Instruction Influence Disentanglement
(IID), a novel framework enabling parallel execution of multiple instructions
in a single denoising process, designed for DiT-based models. By analyzing
self-attention mechanisms in DiTs, we identify distinctive attention patterns
in multi-instruction settings and derive instruction-specific attention masks
to disentangle each instruction's influence. These masks guide the editing
process to ensure localized modifications while preserving consistency in
non-edited regions. Extensive experiments on open-source and custom datasets
demonstrate that IID reduces diffusion steps while improving fidelity and
instruction completion compared to existing baselines. The codes will be
publicly released upon the acceptance of the paper.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 07:26:25 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Liu",
"Hui",
""
],
[
"Zou",
"Bin",
""
],
[
"Zhang",
"Suiyun",
""
],
[
"Chen",
"Kecheng",
""
],
[
"Liu",
"Rui",
""
],
[
"Li",
"Haoliang",
""
]
] | TITLE: Disentangling Instruction Influence in Diffusion Transformers for
Parallel Multi-Instruction-Guided Image Editing
ABSTRACT: Instruction-guided image editing enables users to specify modifications using
natural language, offering more flexibility and control. Among existing
frameworks, Diffusion Transformers (DiTs) outperform U-Net-based diffusion
models in scalability and performance. However, while real-world scenarios
often require concurrent execution of multiple instructions, step-by-step
editing suffers from accumulated errors and degraded quality, and integrating
multiple instructions with a single prompt usually results in incomplete edits
due to instruction conflicts. We propose Instruction Influence Disentanglement
(IID), a novel framework enabling parallel execution of multiple instructions
in a single denoising process, designed for DiT-based models. By analyzing
self-attention mechanisms in DiTs, we identify distinctive attention patterns
in multi-instruction settings and derive instruction-specific attention masks
to disentangle each instruction's influence. These masks guide the editing
process to ensure localized modifications while preserving consistency in
non-edited regions. Extensive experiments on open-source and custom datasets
demonstrate that IID reduces diffusion steps while improving fidelity and
instruction completion compared to existing baselines. The codes will be
publicly released upon the acceptance of the paper.
|
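IID steers editing with instruction-specific attention masks. The generic snippet below only illustrates how a binary mask can restrict attention inside a transformer block; it is not the IID derivation of those masks.

```python
# Masked self-attention: positions where mask == 0 receive zero attention weight.
# The mask must allow at least one position per query row (e.g., the token itself).
import torch


def masked_attention(q, k, v, mask):
    """q, k, v: (batch, tokens, dim); mask: (tokens, tokens) of 0/1 entries."""
    scale = q.shape[-1] ** 0.5
    scores = q @ k.transpose(-2, -1) / scale
    scores = scores.masked_fill(mask == 0, float("-inf"))
    return torch.softmax(scores, dim=-1) @ v
```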
2504.04789 | Zhuoning Xu | Zhuoning Xu, Jian Xu, Mingqing Zhang, Peijie Wang, Chao Deng,
Cheng-Lin Liu | Multimodal Agricultural Agent Architecture (MA3): A New Paradigm for
Intelligent Agricultural Decision-Making | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As a strategic pillar industry for human survival and development, modern
agriculture faces dual challenges: optimizing production efficiency and
achieving sustainable development. Against the backdrop of intensified climate
change leading to frequent extreme weather events, the uncertainty risks in
agricultural production systems are increasing exponentially. To address these
challenges, this study proposes an innovative \textbf{M}ultimodal
\textbf{A}gricultural \textbf{A}gent \textbf{A}rchitecture (\textbf{MA3}),
which leverages cross-modal information fusion and task collaboration
mechanisms to achieve intelligent agricultural decision-making. This study
constructs a multimodal agricultural agent dataset encompassing five major
tasks: classification, detection, Visual Question Answering (VQA), tool
selection, and agent evaluation. We propose a unified backbone for sugarcane
disease classification and detection tools, as well as a sugarcane disease
expert model. By integrating an innovative tool selection module, we develop a
multimodal agricultural agent capable of effectively performing tasks in
classification, detection, and VQA. Furthermore, we introduce a
multi-dimensional quantitative evaluation framework and conduct a comprehensive
assessment of the entire architecture over our evaluation dataset, thereby
verifying the practicality and robustness of MA3 in agricultural scenarios.
This study provides new insights and methodologies for the development of
agricultural agents, holding significant theoretical and practical
implications. Our source code and dataset will be made publicly available upon
acceptance.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 07:32:41 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Xu",
"Zhuoning",
""
],
[
"Xu",
"Jian",
""
],
[
"Zhang",
"Mingqing",
""
],
[
"Wang",
"Peijie",
""
],
[
"Deng",
"Chao",
""
],
[
"Liu",
"Cheng-Lin",
""
]
] | TITLE: Multimodal Agricultural Agent Architecture (MA3): A New Paradigm for
Intelligent Agricultural Decision-Making
ABSTRACT: As a strategic pillar industry for human survival and development, modern
agriculture faces dual challenges: optimizing production efficiency and
achieving sustainable development. Against the backdrop of intensified climate
change leading to frequent extreme weather events, the uncertainty risks in
agricultural production systems are increasing exponentially. To address these
challenges, this study proposes an innovative \textbf{M}ultimodal
\textbf{A}gricultural \textbf{A}gent \textbf{A}rchitecture (\textbf{MA3}),
which leverages cross-modal information fusion and task collaboration
mechanisms to achieve intelligent agricultural decision-making. This study
constructs a multimodal agricultural agent dataset encompassing five major
tasks: classification, detection, Visual Question Answering (VQA), tool
selection, and agent evaluation. We propose a unified backbone for sugarcane
disease classification and detection tools, as well as a sugarcane disease
expert model. By integrating an innovative tool selection module, we develop a
multimodal agricultural agent capable of effectively performing tasks in
classification, detection, and VQA. Furthermore, we introduce a
multi-dimensional quantitative evaluation framework and conduct a comprehensive
assessment of the entire architecture over our evaluation dataset, thereby
verifying the practicality and robustness of MA3 in agricultural scenarios.
This study provides new insights and methodologies for the development of
agricultural agents, holding significant theoretical and practical
implications. Our source code and dataset will be made publicly available upon
acceptance.
|
2504.04801 | Jinhong Wang | Jinhong Wang, Shuo Tong, Jian liu, Dongqi Tang, Weiqiang Wang, Wentong
Li, Hongxia Xu, Danny Chen, Jintai Chen, Jian Wu | OrderChain: A General Prompting Paradigm to Improve Ordinal
Understanding Ability of MLLM | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Despite the remarkable progress of multimodal large language models (MLLMs),
they continue to face challenges in achieving competitive performance on
ordinal regression (OR; a.k.a. ordinal classification). To address this issue,
this paper presents OrderChain, a novel and general prompting paradigm that
improves the ordinal understanding ability of MLLMs by specificity and
commonality modeling. Specifically, our OrderChain consists of a set of
task-aware prompts to facilitate the specificity modeling of diverse OR tasks
and a new range optimization Chain-of-Thought (RO-CoT), which learns a
commonality way of thinking about OR tasks by uniformly decomposing them into
multiple small-range optimization subtasks. Further, we propose a category
recursive division (CRD) method to generate instruction candidate category
prompts to support RO-CoT automatic optimization. Comprehensive experiments
show that a Large Language and Vision Assistant (LLaVA) model with our
OrderChain improves baseline LLaVA significantly on diverse OR datasets, e.g.,
from 47.5% to 93.2% accuracy on the Adience dataset for age estimation, and
from 30.0% to 85.7% accuracy on the Diabetic Retinopathy dataset. Notably,
LLaVA with our OrderChain also remarkably outperforms state-of-the-art methods
by 27% on accuracy and 0.24 on MAE on the Adience dataset. To our best
knowledge, our OrderChain is the first work that augments MLLMs for OR tasks,
and the effectiveness is witnessed across a spectrum of OR datasets.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 07:53:44 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wang",
"Jinhong",
""
],
[
"Tong",
"Shuo",
""
],
[
"liu",
"Jian",
""
],
[
"Tang",
"Dongqi",
""
],
[
"Wang",
"Weiqiang",
""
],
[
"Li",
"Wentong",
""
],
[
"Xu",
"Hongxia",
""
],
[
"Chen",
"Danny",
""
],
[
"Chen",
"Jintai",
""
],
[
"Wu",
"Jian",
""
]
] | TITLE: OrderChain: A General Prompting Paradigm to Improve Ordinal
Understanding Ability of MLLM
ABSTRACT: Despite the remarkable progress of multimodal large language models (MLLMs),
they continue to face challenges in achieving competitive performance on
ordinal regression (OR; a.k.a. ordinal classification). To address this issue,
this paper presents OrderChain, a novel and general prompting paradigm that
improves the ordinal understanding ability of MLLMs by specificity and
commonality modeling. Specifically, our OrderChain consists of a set of
task-aware prompts to facilitate the specificity modeling of diverse OR tasks
and a new range optimization Chain-of-Thought (RO-CoT), which learns a
commonality way of thinking about OR tasks by uniformly decomposing them into
multiple small-range optimization subtasks. Further, we propose a category
recursive division (CRD) method to generate instruction candidate category
prompts to support RO-CoT automatic optimization. Comprehensive experiments
show that a Large Language and Vision Assistant (LLaVA) model with our
OrderChain improves baseline LLaVA significantly on diverse OR datasets, e.g.,
from 47.5% to 93.2% accuracy on the Adience dataset for age estimation, and
from 30.0% to 85.7% accuracy on the Diabetic Retinopathy dataset. Notably,
LLaVA with our OrderChain also remarkably outperforms state-of-the-art methods
by 27% on accuracy and 0.24 on MAE on the Adience dataset. To our best
knowledge, our OrderChain is the first work that augments MLLMs for OR tasks,
and the effectiveness is witnessed across a spectrum of OR datasets.
|
2504.04803 | Piotr Przymus | Piotr Przymus, Miko{\l}aj Fejzer, Jakub Nar\k{e}bski, Krzysztof
Rykaczewski and Krzysztof Stencel | Out of Sight, Still at Risk: The Lifecycle of Transitive Vulnerabilities
in Maven | null | null | null | null | cs.SE cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The modern software development landscape heavily relies on transitive
dependencies. They enable seamless integration of third-party libraries.
However, they also introduce security challenges. Transitive vulnerabilities
that arise from indirect dependencies expose projects to risks associated with
Common Vulnerabilities and Exposures (CVEs). It happens even when direct
dependencies remain secure. This paper examines the lifecycle of transitive
vulnerabilities in the Maven ecosystem. We employ survival analysis to measure
the time projects remain exposed after a CVE is introduced. Using a large
dataset of Maven projects, we identify factors that influence the resolution of
these vulnerabilities. Our findings offer practical advice on improving
dependency management.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 07:54:15 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Przymus",
"Piotr",
""
],
[
"Fejzer",
"Mikołaj",
""
],
[
"Narębski",
"Jakub",
""
],
[
"Rykaczewski",
"Krzysztof",
""
],
[
"Stencel",
"Krzysztof",
""
]
] | TITLE: Out of Sight, Still at Risk: The Lifecycle of Transitive Vulnerabilities
in Maven
ABSTRACT: The modern software development landscape heavily relies on transitive
dependencies. They enable seamless integration of third-party libraries.
However, they also introduce security challenges. Transitive vulnerabilities
that arise from indirect dependencies expose projects to risks associated with
Common Vulnerabilities and Exposures (CVEs). It happens even when direct
dependencies remain secure. This paper examines the lifecycle of transitive
vulnerabilities in the Maven ecosystem. We employ survival analysis to measure
the time projects remain exposed after a CVE is introduced. Using a large
dataset of Maven projects, we identify factors that influence the resolution of
these vulnerabilities. Our findings offer practical advice on improving
dependency management.
|
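The Maven study above uses survival analysis to measure how long projects stay exposed to transitive CVEs. A minimal version of that analysis, assuming the lifelines library and toy numbers in place of the real dataset, could look like this:

```python
# Kaplan-Meier estimate of time-to-resolution for transitive vulnerabilities;
# unresolved cases at the observation cutoff are treated as censored.
from lifelines import KaplanMeierFitter

durations = [30, 120, 45, 400, 10, 250]   # days from CVE introduction (toy values)
resolved = [1, 1, 1, 0, 1, 0]             # 1 = fixed, 0 = still exposed (censored)

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=resolved, label="transitive CVE exposure")
print(kmf.median_survival_time_)   # median time-to-resolution estimate
print(kmf.survival_function_)      # probability of still being exposed over time
```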
2504.04804 | Yuanpei Liu | Yuanpei Liu, Kai Han | DebGCD: Debiased Learning with Distribution Guidance for Generalized
Category Discovery | Accepted as a conference paper at ICLR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In this paper, we tackle the problem of Generalized Category Discovery (GCD).
Given a dataset containing both labelled and unlabelled images, the objective
is to categorize all images in the unlabelled subset, irrespective of whether
they are from known or unknown classes. In GCD, an inherent label bias exists
between known and unknown classes due to the lack of ground-truth labels for
the latter. State-of-the-art methods in GCD leverage parametric classifiers
trained through self-distillation with soft labels, leaving the bias issue
unattended. Besides, they treat all unlabelled samples uniformly, neglecting
variations in certainty levels and resulting in suboptimal learning. Moreover,
the explicit identification of semantic distribution shifts between known and
unknown classes, a vital aspect for effective GCD, has been neglected. To
address these challenges, we introduce DebGCD, a \underline{Deb}iased learning
with distribution guidance framework for \underline{GCD}. Initially, DebGCD
co-trains an auxiliary debiased classifier in the same feature space as the GCD
classifier, progressively enhancing the GCD features. Moreover, we introduce a
semantic distribution detector in a separate feature space to implicitly boost
the learning efficacy of GCD. Additionally, we employ a curriculum learning
strategy based on semantic distribution certainty to steer the debiased
learning at an optimized pace. Thorough evaluations on GCD benchmarks
demonstrate the consistent state-of-the-art performance of our framework,
highlighting its superiority. Project page: https://visual-ai.github.io/debgcd/
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 07:56:01 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Liu",
"Yuanpei",
""
],
[
"Han",
"Kai",
""
]
] | TITLE: DebGCD: Debiased Learning with Distribution Guidance for Generalized
Category Discovery
ABSTRACT: In this paper, we tackle the problem of Generalized Category Discovery (GCD).
Given a dataset containing both labelled and unlabelled images, the objective
is to categorize all images in the unlabelled subset, irrespective of whether
they are from known or unknown classes. In GCD, an inherent label bias exists
between known and unknown classes due to the lack of ground-truth labels for
the latter. State-of-the-art methods in GCD leverage parametric classifiers
trained through self-distillation with soft labels, leaving the bias issue
unattended. Besides, they treat all unlabelled samples uniformly, neglecting
variations in certainty levels and resulting in suboptimal learning. Moreover,
the explicit identification of semantic distribution shifts between known and
unknown classes, a vital aspect for effective GCD, has been neglected. To
address these challenges, we introduce DebGCD, a \underline{Deb}iased learning
with distribution guidance framework for \underline{GCD}. Initially, DebGCD
co-trains an auxiliary debiased classifier in the same feature space as the GCD
classifier, progressively enhancing the GCD features. Moreover, we introduce a
semantic distribution detector in a separate feature space to implicitly boost
the learning efficacy of GCD. Additionally, we employ a curriculum learning
strategy based on semantic distribution certainty to steer the debiased
learning at an optimized pace. Thorough evaluations on GCD benchmarks
demonstrate the consistent state-of-the-art performance of our framework,
highlighting its superiority. Project page: https://visual-ai.github.io/debgcd/
|
2504.04810 | Piotr Przymus | Piotr Przymus, Miko{\l}aj Fejzer, Jakub Nar\k{e}bski, Rados{\l}aw
Wo\'zniak, {\L}ukasz Halada, Aleksander Kazecki, Mykhailo Molchanov and
Krzysztof Stencel | HaPy-Bug -- Human Annotated Python Bug Resolution Dataset | null | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present HaPy-Bug, a curated dataset of 793 Python source code commits
associated with bug fixes, with each line of code annotated by three domain
experts. The annotations offer insights into the purpose of modified files,
changes at the line level, and reviewers' confidence levels. We analyze
HaPy-Bug to examine the distribution of file purposes, types of modifications,
and tangled changes. Additionally, we explore its potential applications in bug
tracking, the analysis of bug-fixing practices, and the development of
repository analysis tools. HaPy-Bug serves as a valuable resource for advancing
research in software maintenance and security.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 08:04:56 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Przymus",
"Piotr",
""
],
[
"Fejzer",
"Mikołaj",
""
],
[
"Narębski",
"Jakub",
""
],
[
"Woźniak",
"Radosław",
""
],
[
"Halada",
"Łukasz",
""
],
[
"Kazecki",
"Aleksander",
""
],
[
"Molchanov",
"Mykhailo",
""
],
[
"Stencel",
"Krzysztof",
""
]
] | TITLE: HaPy-Bug -- Human Annotated Python Bug Resolution Dataset
ABSTRACT: We present HaPy-Bug, a curated dataset of 793 Python source code commits
associated with bug fixes, with each line of code annotated by three domain
experts. The annotations offer insights into the purpose of modified files,
changes at the line level, and reviewers' confidence levels. We analyze
HaPy-Bug to examine the distribution of file purposes, types of modifications,
and tangled changes. Additionally, we explore its potential applications in bug
tracking, the analysis of bug-fixing practices, and the development of
repository analysis tools. HaPy-Bug serves as a valuable resource for advancing
research in software maintenance and security.
|
2504.04814 | Nataliia Molchanova | Nataliia Molchanova, Pedro M. Gordaliza, Alessandro Cagol, Mario
Ocampo--Pineda, Po--Jui Lu, Matthias Weigel, Xinjie Chen, Erin S. Beck, Haris
Tsagkas, Daniel Reich, Anna St\"olting, Pietro Maggi, Delphine Ribes, Adrien
Depeursinge, Cristina Granziera, Henning M\"uller, Meritxell Bach Cuadra | Explainability of AI Uncertainty: Application to Multiple Sclerosis
Lesion Segmentation on MRI | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Trustworthy artificial intelligence (AI) is essential in healthcare,
particularly for high-stakes tasks like medical image segmentation. Explainable
AI and uncertainty quantification significantly enhance AI reliability by
addressing key attributes such as robustness, usability, and explainability.
Despite extensive technical advances in uncertainty quantification for medical
imaging, understanding the clinical informativeness and interpretability of
uncertainty remains limited. This study introduces a novel framework to explain
the potential sources of predictive uncertainty, specifically in cortical
lesion segmentation in multiple sclerosis using deep ensembles. The proposed
analysis shifts the focus from the uncertainty-error relationship towards
relevant medical and engineering factors. Our findings reveal that
instance-wise uncertainty is strongly related to lesion size, shape, and
cortical involvement. Expert rater feedback confirms that similar factors
impede annotator confidence. Evaluations conducted on two datasets (206
patients, almost 2000 lesions) under both in-domain and distribution-shift
conditions highlight the utility of the framework in different scenarios.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 08:09:27 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Molchanova",
"Nataliia",
""
],
[
"Gordaliza",
"Pedro M.",
""
],
[
"Cagol",
"Alessandro",
""
],
[
"Ocampo--Pineda",
"Mario",
""
],
[
"Lu",
"Po--Jui",
""
],
[
"Weigel",
"Matthias",
""
],
[
"Chen",
"Xinjie",
""
],
[
"Beck",
"Erin S.",
""
],
[
"Tsagkas",
"Haris",
""
],
[
"Reich",
"Daniel",
""
],
[
"Stölting",
"Anna",
""
],
[
"Maggi",
"Pietro",
""
],
[
"Ribes",
"Delphine",
""
],
[
"Depeursinge",
"Adrien",
""
],
[
"Granziera",
"Cristina",
""
],
[
"Müller",
"Henning",
""
],
[
"Cuadra",
"Meritxell Bach",
""
]
] | TITLE: Explainability of AI Uncertainty: Application to Multiple Sclerosis
Lesion Segmentation on MRI
ABSTRACT: Trustworthy artificial intelligence (AI) is essential in healthcare,
particularly for high-stakes tasks like medical image segmentation. Explainable
AI and uncertainty quantification significantly enhance AI reliability by
addressing key attributes such as robustness, usability, and explainability.
Despite extensive technical advances in uncertainty quantification for medical
imaging, understanding the clinical informativeness and interpretability of
uncertainty remains limited. This study introduces a novel framework to explain
the potential sources of predictive uncertainty, specifically in cortical
lesion segmentation in multiple sclerosis using deep ensembles. The proposed
analysis shifts the focus from the uncertainty-error relationship towards
relevant medical and engineering factors. Our findings reveal that
instance-wise uncertainty is strongly related to lesion size, shape, and
cortical involvement. Expert rater feedback confirms that similar factors
impede annotator confidence. Evaluations conducted on two datasets (206
patients, almost 2000 lesions) under both in-domain and distribution-shift
conditions highlight the utility of the framework in different scenarios.
|
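The study above derives instance-wise uncertainty from deep ensembles over lesion segmentations. A bare-bones version of that computation is sketched below with NumPy; the aggregation choice (mean voxel variance per lesion) is an assumption for illustration.

```python
# Voxel-wise uncertainty as the variance of ensemble member probabilities,
# aggregated over a lesion mask to obtain an instance-wise score.
import numpy as np


def ensemble_uncertainty(member_probs):
    """member_probs: (n_members, ...) foreground probabilities from the ensemble."""
    mean_prob = member_probs.mean(axis=0)
    voxel_uncertainty = member_probs.var(axis=0)   # disagreement between members
    return mean_prob, voxel_uncertainty


def lesion_uncertainty(voxel_uncertainty, lesion_mask):
    """Average voxel uncertainty over one lesion (boolean mask of the same shape)."""
    return float(voxel_uncertainty[lesion_mask].mean())
```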
2504.04829 | Wenzhong Yan | Wenzhong Yan, Feng Yin, Jun Gao, Ao Wang, Yang Tian, Ruizhi Chen | Attentional Graph Meta-Learning for Indoor Localization Using Extremely
Sparse Fingerprints | null | null | null | null | cs.LG eess.SP stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fingerprint-based indoor localization is often labor-intensive due to the
need for dense grids and repeated measurements across time and space.
Maintaining high localization accuracy with extremely sparse fingerprints
remains a persistent challenge. Existing benchmark methods primarily rely on
the measured fingerprints, while neglecting valuable spatial and environmental
characteristics. In this paper, we propose a systematic integration of an
Attentional Graph Neural Network (AGNN) model, capable of learning spatial
adjacency relationships and aggregating information from neighboring
fingerprints, and a meta-learning framework that utilizes datasets with similar
environmental characteristics to enhance model training. To minimize the labor
required for fingerprint collection, we introduce two novel data augmentation
strategies: 1) unlabeled fingerprint augmentation using moving platforms, which
enables the semi-supervised AGNN model to incorporate information from
unlabeled fingerprints, and 2) synthetic labeled fingerprint augmentation
through environmental digital twins, which enhances the meta-learning framework
through a practical distribution alignment that effectively minimizes the feature discrepancy between synthetic and real-world fingerprints. By
integrating these novel modules, we propose the Attentional Graph Meta-Learning
(AGML) model. This novel model combines the strengths of the AGNN model and the
meta-learning framework to address the challenges posed by extremely sparse
fingerprints. To validate our approach, we collected multiple datasets from
both consumer-grade WiFi devices and professional equipment across diverse
environments. Extensive experiments conducted on both synthetic and real-world
datasets demonstrate that the AGML model-based localization method consistently
outperforms all baseline methods using sparse fingerprints across all evaluated
metrics.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 08:37:18 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Yan",
"Wenzhong",
""
],
[
"Yin",
"Feng",
""
],
[
"Gao",
"Jun",
""
],
[
"Wang",
"Ao",
""
],
[
"Tian",
"Yang",
""
],
[
"Chen",
"Ruizhi",
""
]
] | TITLE: Attentional Graph Meta-Learning for Indoor Localization Using Extremely
Sparse Fingerprints
ABSTRACT: Fingerprint-based indoor localization is often labor-intensive due to the
need for dense grids and repeated measurements across time and space.
Maintaining high localization accuracy with extremely sparse fingerprints
remains a persistent challenge. Existing benchmark methods primarily rely on
the measured fingerprints, while neglecting valuable spatial and environmental
characteristics. In this paper, we propose a systematic integration of an
Attentional Graph Neural Network (AGNN) model, capable of learning spatial
adjacency relationships and aggregating information from neighboring
fingerprints, and a meta-learning framework that utilizes datasets with similar
environmental characteristics to enhance model training. To minimize the labor
required for fingerprint collection, we introduce two novel data augmentation
strategies: 1) unlabeled fingerprint augmentation using moving platforms, which
enables the semi-supervised AGNN model to incorporate information from
unlabeled fingerprints, and 2) synthetic labeled fingerprint augmentation
through environmental digital twins, which enhances the meta-learning framework
through a practical distribution alignment that effectively minimizes the feature discrepancy between synthetic and real-world fingerprints. By
integrating these novel modules, we propose the Attentional Graph Meta-Learning
(AGML) model. This novel model combines the strengths of the AGNN model and the
meta-learning framework to address the challenges posed by extremely sparse
fingerprints. To validate our approach, we collected multiple datasets from
both consumer-grade WiFi devices and professional equipment across diverse
environments. Extensive experiments conducted on both synthetic and real-world
datasets demonstrate that the AGML model-based localization method consistently
outperforms all baseline methods using sparse fingerprints across all evaluated
metrics.
|
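The AGML model above aggregates information from neighboring fingerprints with graph attention. The toy function below shows a single attention-weighted aggregation step over a fingerprint graph; it is a generic GAT-style illustration, not the paper's AGNN.

```python
# One graph-attention aggregation step: score each neighbor pair, softmax over
# neighbors (self-loops added so every node has at least one neighbor), and mix.
import torch
import torch.nn.functional as F


def attention_aggregate(node_feats, adjacency, w, a):
    """node_feats: (N, F); adjacency: (N, N) 0/1 tensor; w: (F, H); a: (2H,)."""
    h = node_feats @ w                     # projected fingerprint features
    n = h.shape[0]
    adj = adjacency.clone()
    adj.fill_diagonal_(1)                  # self-loops
    pair = torch.cat([h.unsqueeze(1).expand(n, n, -1),
                      h.unsqueeze(0).expand(n, n, -1)], dim=-1)
    scores = F.leaky_relu(pair @ a)        # (N, N) attention logits
    scores = scores.masked_fill(adj == 0, float("-inf"))
    alpha = torch.softmax(scores, dim=-1)  # attention over neighbors
    return alpha @ h                       # aggregated node features
```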
2504.04831 | Niladri Shekhar Dutt | Sanjeev Muralikrishnan, Niladri Shekhar Dutt, Niloy J. Mitra | SMF: Template-free and Rig-free Animation Transfer using Kinetic Codes | null | null | null | null | cs.GR cs.CV | http://creativecommons.org/licenses/by/4.0/ | Animation retargeting involves applying a sparse motion description (e.g.,
2D/3D keypoint sequences) to a given character mesh to produce a semantically
plausible and temporally coherent full-body motion. Existing approaches come
with a mix of restrictions - they require annotated training data, assume
access to template-based shape priors or artist-designed deformation rigs,
suffer from limited generalization to unseen motion and/or shapes, or exhibit
motion jitter. We propose Self-supervised Motion Fields (SMF) as a
self-supervised framework that can be robustly trained with sparse motion
representations, without requiring dataset specific annotations, templates, or
rigs. At the heart of our method are Kinetic Codes, a novel autoencoder-based
sparse motion encoding, that exposes a semantically rich latent space
simplifying large-scale training. Our architecture comprises dedicated spatial
and temporal gradient predictors, which are trained end-to-end. The resultant
network, regularized by the Kinetic Codes's latent space, has good
generalization across shapes and motions. We evaluated our method on unseen
motion sampled from AMASS, D4D, Mixamo, and raw monocular video for animation
transfer on various characters with varying shapes and topology. We report a
new SoTA on the AMASS dataset in the context of generalization to unseen
motion. Project webpage at https://motionfields.github.io/
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 08:42:52 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Muralikrishnan",
"Sanjeev",
""
],
[
"Dutt",
"Niladri Shekhar",
""
],
[
"Mitra",
"Niloy J.",
""
]
] | TITLE: SMF: Template-free and Rig-free Animation Transfer using Kinetic Codes
ABSTRACT: Animation retargeting involves applying a sparse motion description (e.g.,
2D/3D keypoint sequences) to a given character mesh to produce a semantically
plausible and temporally coherent full-body motion. Existing approaches come
with a mix of restrictions - they require annotated training data, assume
access to template-based shape priors or artist-designed deformation rigs,
suffer from limited generalization to unseen motion and/or shapes, or exhibit
motion jitter. We propose Self-supervised Motion Fields (SMF) as a
self-supervised framework that can be robustly trained with sparse motion
representations, without requiring dataset specific annotations, templates, or
rigs. At the heart of our method are Kinetic Codes, a novel autoencoder-based
sparse motion encoding, that exposes a semantically rich latent space
simplifying large-scale training. Our architecture comprises dedicated spatial
and temporal gradient predictors, which are trained end-to-end. The resultant
network, regularized by the Kinetic Codes's latent space, has good
generalization across shapes and motions. We evaluated our method on unseen
motion sampled from AMASS, D4D, Mixamo, and raw monocular video for animation
transfer on various characters with varying shapes and topology. We report a
new SoTA on the AMASS dataset in the context of generalization to unseen
motion. Project webpage at https://motionfields.github.io/
|
2504.04835 | Shanshan Wang | Shanshan Wang, Haixiang Xu, Hui Feng, Xiaoqian Wang, Pei Song, Sijie
Liu, Jianhua He | Inland Waterway Object Detection in Multi-environment: Dataset and
Approach | 37 pages,11 figures,5 tables | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The success of deep learning in intelligent ship visual perception relies
heavily on rich image data. However, dedicated datasets for inland waterway
vessels remain scarce, limiting the adaptability of visual perception systems
in complex environments. Inland waterways, characterized by narrow channels,
variable weather, and urban interference, pose significant challenges to object
detection systems based on existing datasets. To address these issues, this
paper introduces the Multi-environment Inland Waterway Vessel Dataset (MEIWVD),
comprising 32,478 high-quality images from diverse scenarios, including sunny,
rainy, foggy, and artificial lighting conditions. MEIWVD covers common vessel
types in the Yangtze River Basin, emphasizing diversity, sample independence,
environmental complexity, and multi-scale characteristics, making it a robust
benchmark for vessel detection. Leveraging MEIWVD, this paper proposes a
scene-guided image enhancement module to improve water surface images based on
environmental conditions adaptively. Additionally, a parameter-limited dilated
convolution enhances the representation of vessel features, while a multi-scale
dilated residual fusion method integrates multi-scale features for better
detection. Experiments show that MEIWVD provides a more rigorous benchmark for
object detection algorithms, and the proposed methods significantly improve
detector performance, especially in complex multi-environment scenarios.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 08:45:00 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wang",
"Shanshan",
""
],
[
"Xu",
"Haixiang",
""
],
[
"Feng",
"Hui",
""
],
[
"Wang",
"Xiaoqian",
""
],
[
"Song",
"Pei",
""
],
[
"Liu",
"Sijie",
""
],
[
"He",
"Jianhua",
""
]
] | TITLE: Inland Waterway Object Detection in Multi-environment: Dataset and
Approach
ABSTRACT: The success of deep learning in intelligent ship visual perception relies
heavily on rich image data. However, dedicated datasets for inland waterway
vessels remain scarce, limiting the adaptability of visual perception systems
in complex environments. Inland waterways, characterized by narrow channels,
variable weather, and urban interference, pose significant challenges to object
detection systems based on existing datasets. To address these issues, this
paper introduces the Multi-environment Inland Waterway Vessel Dataset (MEIWVD),
comprising 32,478 high-quality images from diverse scenarios, including sunny,
rainy, foggy, and artificial lighting conditions. MEIWVD covers common vessel
types in the Yangtze River Basin, emphasizing diversity, sample independence,
environmental complexity, and multi-scale characteristics, making it a robust
benchmark for vessel detection. Leveraging MEIWVD, this paper proposes a
scene-guided image enhancement module to improve water surface images based on
environmental conditions adaptively. Additionally, a parameter-limited dilated
convolution enhances the representation of vessel features, while a multi-scale
dilated residual fusion method integrates multi-scale features for better
detection. Experiments show that MEIWVD provides a more rigorous benchmark for
object detection algorithms, and the proposed methods significantly improve
detector performance, especially in complex multi-environment scenarios.
|
2504.04841 | Sebastian Schmidt | Sebastian Schmidt and Julius K\"orner and Dominik Fuchsgruber and
Stefano Gasperini and Federico Tombari and Stephan G\"unnemann | Prior2Former -- Evidential Modeling of Mask Transformers for
Assumption-Free Open-World Panoptic Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In panoptic segmentation, individual instances must be separated within
semantic classes. As state-of-the-art methods rely on a pre-defined set of
classes, they struggle with novel categories and out-of-distribution (OOD)
data. This is particularly problematic in safety-critical applications, such as
autonomous driving, where reliability in unseen scenarios is essential. We
address the gap between outstanding benchmark performance and reliability by
proposing Prior2Former (P2F), the first approach for segmentation vision
transformers rooted in evidential learning. P2F extends the mask vision
transformer architecture by incorporating a Beta prior for computing model
uncertainty in pixel-wise binary mask assignments. This design enables
high-quality uncertainty estimation that effectively detects novel and OOD
objects enabling state-of-the-art anomaly instance segmentation and open-world
panoptic segmentation. Unlike most segmentation models addressing unknown
classes, P2F operates without access to OOD data samples or contrastive
training on void (i.e., unlabeled) classes, making it highly applicable in
real-world scenarios where such prior information is unavailable. Additionally,
P2F can be flexibly applied to anomaly instance and panoptic segmentation.
Through comprehensive experiments on the Cityscapes, COCO, SegmentMeIfYouCan,
and OoDIS datasets, we demonstrate the state-of-the-art performance of P2F. It
achieves the highest ranking in the OoDIS anomaly instance benchmark among
methods not using OOD data in any way.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 08:53:14 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Schmidt",
"Sebastian",
""
],
[
"Körner",
"Julius",
""
],
[
"Fuchsgruber",
"Dominik",
""
],
[
"Gasperini",
"Stefano",
""
],
[
"Tombari",
"Federico",
""
],
[
"Günnemann",
"Stephan",
""
]
] | TITLE: Prior2Former -- Evidential Modeling of Mask Transformers for
Assumption-Free Open-World Panoptic Segmentation
ABSTRACT: In panoptic segmentation, individual instances must be separated within
semantic classes. As state-of-the-art methods rely on a pre-defined set of
classes, they struggle with novel categories and out-of-distribution (OOD)
data. This is particularly problematic in safety-critical applications, such as
autonomous driving, where reliability in unseen scenarios is essential. We
address the gap between outstanding benchmark performance and reliability by
proposing Prior2Former (P2F), the first approach for segmentation vision
transformers rooted in evidential learning. P2F extends the mask vision
transformer architecture by incorporating a Beta prior for computing model
uncertainty in pixel-wise binary mask assignments. This design enables
high-quality uncertainty estimation that effectively detects novel and OOD
objects enabling state-of-the-art anomaly instance segmentation and open-world
panoptic segmentation. Unlike most segmentation models addressing unknown
classes, P2F operates without access to OOD data samples or contrastive
training on void (i.e., unlabeled) classes, making it highly applicable in
real-world scenarios where such prior information is unavailable. Additionally,
P2F can be flexibly applied to anomaly instance and panoptic segmentation.
Through comprehensive experiments on the Cityscapes, COCO, SegmentMeIfYouCan,
and OoDIS datasets, we demonstrate the state-of-the-art performance of P2F. It
achieves the highest ranking in the OoDIS anomaly instance benchmark among
methods not using OOD data in any way.
|
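P2F places a Beta prior on pixel-wise binary mask assignments and reads uncertainty from it. The snippet below only restates the standard Beta mean and variance for such a parameterization; how P2F produces the (alpha, beta) evidence is not reproduced here.

```python
# Beta-distributed mask assignment: the mean gives the mask probability and the
# Beta variance serves as a per-pixel uncertainty estimate.
import torch


def beta_mask_and_uncertainty(alpha, beta):
    """alpha, beta: positive evidence tensors of shape (H, W)."""
    total = alpha + beta
    mean = alpha / total                                        # expected mask probability
    variance = (alpha * beta) / (total.pow(2) * (total + 1.0))  # predictive uncertainty
    return mean, variance
```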
2504.04844 | Zhicong Sun | Zhicong Sun, Jacqueline Lo and Jinxing Hu | Embracing Dynamics: Dynamics-aware 4D Gaussian Splatting SLAM | This paper is currently under reviewed for IROS 2025 | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Simultaneous localization and mapping (SLAM) technology now has
photorealistic mapping capabilities thanks to the real-time high-fidelity
rendering capability of 3D Gaussian splatting (3DGS). However, due to the
static representation of scenes, current 3DGS-based SLAM encounters issues with
pose drift and failure to reconstruct accurate maps in dynamic environments. To
address this problem, we present D4DGS-SLAM, the first SLAM method based on
4DGS map representation for dynamic environments. By incorporating the temporal
dimension into scene representation, D4DGS-SLAM enables high-quality
reconstruction of dynamic scenes. Utilizing the dynamics-aware InfoModule, we
can obtain the dynamics, visibility, and reliability of scene points, and
filter stable static points for tracking accordingly. When optimizing Gaussian
points, we apply different isotropic regularization terms to Gaussians with
varying dynamic characteristics. Experimental results on real-world dynamic
scene datasets demonstrate that our method outperforms state-of-the-art
approaches in both camera pose tracking and map quality.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 08:56:35 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Sun",
"Zhicong",
""
],
[
"Lo",
"Jacqueline",
""
],
[
"Hu",
"Jinxing",
""
]
] | TITLE: Embracing Dynamics: Dynamics-aware 4D Gaussian Splatting SLAM
ABSTRACT: Simultaneous localization and mapping (SLAM) technology now has
photorealistic mapping capabilities thanks to the real-time high-fidelity
rendering capability of 3D Gaussian splatting (3DGS). However, due to the
static representation of scenes, current 3DGS-based SLAM encounters issues with
pose drift and failure to reconstruct accurate maps in dynamic environments. To
address this problem, we present D4DGS-SLAM, the first SLAM method based on
4DGS map representation for dynamic environments. By incorporating the temporal
dimension into scene representation, D4DGS-SLAM enables high-quality
reconstruction of dynamic scenes. Utilizing the dynamics-aware InfoModule, we
can obtain the dynamics, visibility, and reliability of scene points, and
filter stable static points for tracking accordingly. When optimizing Gaussian
points, we apply different isotropic regularization terms to Gaussians with
varying dynamic characteristics. Experimental results on real-world dynamic
scene datasets demonstrate that our method outperforms state-of-the-art
approaches in both camera pose tracking and map quality.
|
2504.04857 | Isha Sharma | Isha Sharma, Dieter Schmalstieg | 3D Gaussian Particle Approximation of VDB Datasets: A Study for
Scientific Visualization | null | null | null | null | cs.GR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The complexity and scale of Volumetric and Simulation datasets for Scientific
Visualization (SciVis) continue to grow, and the approaches and advantages of memory-efficient data formats and storage techniques for such datasets vary. The OpenVDB library, with its VDB data format, excels in memory efficiency through its
hierarchical and dynamic tree structure, with active and inactive sub-trees for
data storage. It is heavily used in current production renderers for both
animation and rendering stages in VFX pipelines and photorealistic rendering of
volumes and fluids. However, it still remains to be fully leveraged in SciVis
where domains dealing with sparse scalar fields like porous media, time varying
volumes such as tornado and weather simulation or high resolution simulation of
Computational Fluid Dynamics present an ample number of large, challenging datasets. The goal of this paper is not only to explore the use of OpenVDB in SciVis but also to explore a level-of-detail (LOD) technique using 3D Gaussian particles approximating voxel regions. For rendering, we utilize the NVIDIA OptiX library for ray marching through the Gaussian particles. Data modeling using 3D Gaussians
has been very popular lately due to success in stereoscopic image to 3D scene
conversion using Gaussian Splatting, and Gaussian approximation and mixture models are not entirely new in SciVis either. Our work explores the integration
with rendering software libraries like OpenVDB and OptiX to take advantage of
their built-in memory compaction and hardware acceleration features, while also
leveraging the performance capabilities of modern GPUs. Thus, we present a
SciVis rendering approach that uses 3D Gaussians at varying LOD in a lossy
scheme derived from VDB datasets, rather than focusing on photorealistic volume
rendering.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 09:14:15 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Sharma",
"Isha",
""
],
[
"Schmalstieg",
"Dieter",
""
]
] | TITLE: 3D Gaussian Particle Approximation of VDB Datasets: A Study for
Scientific Visualization
ABSTRACT: The complexity and scale of Volumetric and Simulation datasets for Scientific
Visualization (SciVis) continue to grow, and the approaches to, and advantages
of, memory-efficient data formats and storage techniques for such datasets
vary. The OpenVDB library and its VDB data format excel in memory efficiency
through a hierarchical and dynamic tree structure, with active and inactive
sub-trees for data storage. OpenVDB is heavily used in current production
renderers, both for the animation and rendering stages of VFX pipelines and for
photorealistic rendering of volumes and fluids. However, it remains to be fully
leveraged in SciVis, where domains dealing with sparse scalar fields such as
porous media, time-varying volumes such as tornado and weather simulations, and
high-resolution Computational Fluid Dynamics simulations provide an ample
number of large, challenging datasets. The goal of this paper is not only to
explore the use of OpenVDB in SciVis but also to explore a level-of-detail
(LOD) technique that uses 3D Gaussian particles to approximate voxel regions.
For rendering, we use the NVIDIA OptiX library to ray march through the
Gaussian particles. Data modeling with 3D Gaussians has recently become popular
due to the success of Gaussian Splatting in stereoscopic image-to-3D-scene
conversion, and Gaussian approximation and mixture models are not entirely new
to SciVis either. Our work explores integration with rendering libraries such
as OpenVDB and OptiX to take advantage of their built-in memory compaction and
hardware acceleration features, while also leveraging the performance
capabilities of modern GPUs. Thus, we present a SciVis rendering approach that
uses 3D Gaussians at varying LOD in a lossy scheme derived from VDB datasets,
rather than focusing on photorealistic volume rendering.
|
2504.04861 | Hongtao Wang | Hongtao Wang, Renchi Yang, Hewen Wang, Haoran Zheng and Jianliang Xu | SAFT: Structure-aware Transformers for Textual Interaction
Classification | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Textual interaction networks (TINs) are an omnipresent data structure used to
model the interplay between users and items on e-commerce websites, social
networks, etc., where each interaction is associated with a text description.
Classifying such textual interactions (TIC) finds extensive use in detecting
spam reviews in e-commerce, fraudulent transactions in finance, and so on.
Existing TIC solutions either (i) fail to capture the rich text semantics due
to the use of context-free text embeddings, and/or (ii) disregard the bipartite
structure and node heterogeneity of TINs, leading to compromised TIC
performance. In this work, we propose SAFT, a new architecture that integrates
language- and graph-based modules for the effective fusion of textual and
structural semantics in the representation learning of interactions. In
particular, line graph attention (LGA)/gated attention units (GAUs) and
pretrained language models (PLMs) are capitalized on to model the
interaction-level and token-level signals, which are further coupled via the
proxy token in an iterative and contextualized fashion. Additionally, an
efficient and theoretically-grounded approach is developed to encode the local
and global topology information pertaining to interactions into structural
embeddings. The resulting embeddings not only inject the structural features
underlying TINs into the textual interaction encoding but also facilitate the
design of graph sampling strategies. Extensive empirical evaluations on
multiple real TIN datasets demonstrate the superiority of SAFT over the
state-of-the-art baselines in TIC accuracy.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 09:19:12 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wang",
"Hongtao",
""
],
[
"Yang",
"Renchi",
""
],
[
"Wang",
"Hewen",
""
],
[
"Zheng",
"Haoran",
""
],
[
"Xu",
"Jianliang",
""
]
] | TITLE: SAFT: Structure-aware Transformers for Textual Interaction
Classification
ABSTRACT: Textual interaction networks (TINs) are an omnipresent data structure used to
model the interplay between users and items on e-commerce websites, social
networks, etc., where each interaction is associated with a text description.
Classifying such textual interactions (TIC) finds extensive use in detecting
spam reviews in e-commerce, fraudulent transactions in finance, and so on.
Existing TIC solutions either (i) fail to capture the rich text semantics due
to the use of context-free text embeddings, and/or (ii) disregard the bipartite
structure and node heterogeneity of TINs, leading to compromised TIC
performance. In this work, we propose SAFT, a new architecture that integrates
language- and graph-based modules for the effective fusion of textual and
structural semantics in the representation learning of interactions. In
particular, line graph attention (LGA)/gated attention units (GAUs) and
pretrained language models (PLMs) are capitalized on to model the
interaction-level and token-level signals, which are further coupled via the
proxy token in an iterative and contextualized fashion. Additionally, an
efficient and theoretically-grounded approach is developed to encode the local
and global topology information pertaining to interactions into structural
embeddings. The resulting embeddings not only inject the structural features
underlying TINs into the textual interaction encoding but also facilitate the
design of graph sampling strategies. Extensive empirical evaluations on
multiple real TIN datasets demonstrate the superiority of SAFT over the
state-of-the-art baselines in TIC accuracy.
|
2504.04862 | HongKuo Niu | Yunxiang Liu, Hongkuo Niu, Jianlin Zhu | GAMDTP: Dynamic Trajectory Prediction with Graph Attention Mamba Network | null | null | null | null | cs.AI cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate motion prediction of traffic agents is crucial for the safety and
stability of autonomous driving systems. In this paper, we introduce GAMDTP, a
novel graph attention-based network tailored for dynamic trajectory prediction.
Specifically, in each graph convolution layer, we fuse the outputs of
self-attention and mamba-ssm through a gate mechanism, leveraging the strengths
of both to extract features more efficiently and accurately. GAMDTP encodes the
high-definition map (HD map) data and the agents' historical trajectory
coordinates and decodes the network's output to generate the final prediction
results. Additionally, recent approaches predominantly focus on dynamically
fusing historical forecast results and rely on two-stage frameworks comprising
proposal and refinement stages. To further enhance the performance of such
two-stage frameworks, we also design a scoring mechanism to evaluate prediction
quality during the proposal and refinement processes. Experiments on the
Argoverse dataset demonstrate that GAMDTP achieves state-of-the-art
performance, with superior accuracy in dynamic trajectory prediction.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 09:19:20 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Liu",
"Yunxiang",
""
],
[
"Niu",
"Hongkuo",
""
],
[
"Zhu",
"Jianlin",
""
]
] | TITLE: GAMDTP: Dynamic Trajectory Prediction with Graph Attention Mamba Network
ABSTRACT: Accurate motion prediction of traffic agents is crucial for the safety and
stability of autonomous driving systems. In this paper, we introduce GAMDTP, a
novel graph attention-based network tailored for dynamic trajectory prediction.
Specifically, in each graph convolution layer, we fuse the outputs of
self-attention and mamba-ssm through a gate mechanism, leveraging the strengths
of both to extract features more efficiently and accurately. GAMDTP encodes the
high-definition map (HD map) data and the agents' historical trajectory
coordinates and decodes the network's output to generate the final prediction
results. Additionally, recent approaches predominantly focus on dynamically
fusing historical forecast results and rely on two-stage frameworks comprising
proposal and refinement stages. To further enhance the performance of such
two-stage frameworks, we also design a scoring mechanism to evaluate prediction
quality during the proposal and refinement processes. Experiments on the
Argoverse dataset demonstrate that GAMDTP achieves state-of-the-art
performance, with superior accuracy in dynamic trajectory prediction.
|
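The GAMDTP abstract above describes fusing the self-attention output with the mamba-ssm output through a gate in each graph convolution layer. Below is a minimal NumPy sketch of one such gated fusion step; the feature shapes, the sigmoid gate parameterization, and the random projection weights are assumptions for illustration, not the paper's architecture.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gated_fusion(attn_feat, ssm_feat, W_g, b_g):
    """Fuse two feature streams with a learned elementwise gate.

    attn_feat, ssm_feat: (num_nodes, d) outputs of the attention and SSM branches.
    W_g: (2*d, d), b_g: (d,) gate parameters (randomly initialized in this sketch).
    """
    gate_in = np.concatenate([attn_feat, ssm_feat], axis=-1)
    g = sigmoid(gate_in @ W_g + b_g)          # per-feature mixing weights in (0, 1)
    return g * attn_feat + (1.0 - g) * ssm_feat

rng = np.random.default_rng(0)
n, d = 16, 32                                 # toy sizes, assumed for the example
attn_feat = rng.normal(size=(n, d))
ssm_feat = rng.normal(size=(n, d))
W_g = rng.normal(scale=0.1, size=(2 * d, d))
b_g = np.zeros(d)
fused = gated_fusion(attn_feat, ssm_feat, W_g, b_g)
print(fused.shape)  # (16, 32)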
2504.04869 | Gang Wu | Gang Wu and Junjun Jiang and Kui Jiang and Xianming Liu | Content-Aware Transformer for All-in-one Image Restoration | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image restoration has witnessed significant advancements with the development
of deep learning models. Although Transformer architectures have progressed
considerably in recent years, challenges remain, particularly the limited
receptive field in window-based self-attention. In this work, we propose
DSwinIR, a Deformable Sliding window Transformer for Image Restoration. DSwinIR
introduces a novel deformable sliding window self-attention that adaptively
adjusts receptive fields based on image content, enabling the attention
mechanism to focus on important regions and enhance feature extraction aligned
with salient features. Additionally, we introduce a central ensemble pattern to
reduce the inclusion of irrelevant content within attention windows. In this
way, the proposed DSwinIR model integrates the deformable sliding window
Transformer and central ensemble pattern to amplify the strengths of both CNNs
and Transformers while mitigating their limitations. Extensive experiments on
various image restoration tasks demonstrate that DSwinIR achieves
state-of-the-art performance. For example, in image deraining, compared to
DRSformer on the SPA dataset, DSwinIR achieves a 0.66 dB PSNR improvement. In
all-in-one image restoration, compared to PromptIR, DSwinIR achieves over a
0.66 dB and 1.04 dB improvement on three-task and five-task settings,
respectively. Pretrained models and code are available at our project
https://github.com/Aitical/DSwinIR.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 09:24:41 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wu",
"Gang",
""
],
[
"Jiang",
"Junjun",
""
],
[
"Jiang",
"Kui",
""
],
[
"Liu",
"Xianming",
""
]
] | TITLE: Content-Aware Transformer for All-in-one Image Restoration
ABSTRACT: Image restoration has witnessed significant advancements with the development
of deep learning models. Although Transformer architectures have progressed
considerably in recent years, challenges remain, particularly the limited
receptive field in window-based self-attention. In this work, we propose
DSwinIR, a Deformable Sliding window Transformer for Image Restoration. DSwinIR
introduces a novel deformable sliding window self-attention that adaptively
adjusts receptive fields based on image content, enabling the attention
mechanism to focus on important regions and enhance feature extraction aligned
with salient features. Additionally, we introduce a central ensemble pattern to
reduce the inclusion of irrelevant content within attention windows. In this
way, the proposed DSwinIR model integrates the deformable sliding window
Transformer and central ensemble pattern to amplify the strengths of both CNNs
and Transformers while mitigating their limitations. Extensive experiments on
various image restoration tasks demonstrate that DSwinIR achieves
state-of-the-art performance. For example, in image deraining, compared to
DRSformer on the SPA dataset, DSwinIR achieves a 0.66 dB PSNR improvement. In
all-in-one image restoration, compared to PromptIR, DSwinIR achieves over a
0.66 dB and 1.04 dB improvement on three-task and five-task settings,
respectively. Pretrained models and code are available at our project
https://github.com/Aitical/DSwinIR.
|
2504.04877 | Viktor Beck | Viktor Beck, Max Landauer, Markus Wurzenberger, Florian Skopik,
Andreas Rauber | SoK: LLM-based Log Parsing | 34 pages, 11 figures | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Log data, generated by software systems, provides crucial insights for tasks
like monitoring, root cause analysis, and anomaly detection. Due to the vast
volume of logs, automated log parsing is essential to transform semi-structured
log messages into structured representations. Traditional log parsing
techniques often require manual configurations, such as defining log formats or
labeling data, which limits scalability and usability. Recent advances in large
language models (LLMs) have introduced the new research field of LLM-based log
parsing, offering potential improvements in automation and adaptability.
Despite promising results, there is no structured overview of these approaches
since this is a relatively new research field with the earliest advances
published in late 2023. This paper systematically reviews 29 LLM-based log
parsing methods, comparing their capabilities, limitations, and reliance on
manual effort. We analyze the learning and prompt-engineering paradigms
employed, efficiency- and effectiveness-enhancing techniques, and the role of
LLMs in the parsing process. We aggregate the results of the survey in a large
table comprising the characterizing features of LLM-based log parsing
approaches and derive the general process of LLM-based log parsing,
incorporating all reviewed approaches in a single flow chart. Additionally, we
benchmark seven open-source LLM-based log parsers on public datasets and
critically assess their reproducibility. Our findings summarize the advances of
this new research field and provide insights for researchers and practitioners
seeking efficient and user-friendly log parsing solutions, with all code and
results made publicly available for transparency.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 09:41:04 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Beck",
"Viktor",
""
],
[
"Landauer",
"Max",
""
],
[
"Wurzenberger",
"Markus",
""
],
[
"Skopik",
"Florian",
""
],
[
"Rauber",
"Andreas",
""
]
] | TITLE: SoK: LLM-based Log Parsing
ABSTRACT: Log data, generated by software systems, provides crucial insights for tasks
like monitoring, root cause analysis, and anomaly detection. Due to the vast
volume of logs, automated log parsing is essential to transform semi-structured
log messages into structured representations. Traditional log parsing
techniques often require manual configurations, such as defining log formats or
labeling data, which limits scalability and usability. Recent advances in large
language models (LLMs) have introduced the new research field of LLM-based log
parsing, offering potential improvements in automation and adaptability.
Despite promising results, there is no structured overview of these approaches
since this is a relatively new research field with the earliest advances
published in late 2023. This paper systematically reviews 29 LLM-based log
parsing methods, comparing their capabilities, limitations, and reliance on
manual effort. We analyze the learning and prompt-engineering paradigms
employed, efficiency- and effectiveness-enhancing techniques, and the role of
LLMs in the parsing process. We aggregate the results of the survey in a large
table comprising the characterizing features of LLM-based log parsing
approaches and derive the general process of LLM-based log parsing,
incorporating all reviewed approaches in a single flow chart. Additionally, we
benchmark seven open-source LLM-based log parsers on public datasets and
critically assess their reproducibility. Our findings summarize the advances of
this new research field and provide insights for researchers and practitioners
seeking efficient and user-friendly log parsing solutions, with all code and
results made publicly available for transparency.
|
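As a companion to the survey abstract above, here is a minimal sketch of the generic LLM-based log parsing loop: prompt a model with a raw log message and ask for a template with the variable parts masked. The call_llm function is a hypothetical placeholder for whatever model endpoint is used, and the prompt wording and the <*> placeholder convention are common in the log parsing literature but are assumptions here, not a specific parser from the survey.

def call_llm(prompt: str) -> str:
    """Hypothetical placeholder for an LLM call; replace with a real client."""
    # For the sketch, pretend the model masked the dynamic fields.
    return "Connection from <*> closed after <*> seconds"

def parse_log(message: str) -> str:
    """Ask the model to abstract a raw log message into a template."""
    prompt = (
        "Abstract the following log message into a template, replacing "
        "variable parts (IDs, IPs, numbers, paths) with <*>.\n"
        f"Log message: {message}\n"
        "Template:"
    )
    return call_llm(prompt).strip()

template = parse_log("Connection from 10.0.0.7 closed after 32 seconds")
print(template)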
2504.04891 | Longdi Xian | Longdi Xian and Jianzhang Ni and Mingzhu Wang | Leveraging Large Language Models for Cost-Effective, Multilingual
Depression Detection and Severity Assessment | null | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Depression is a prevalent mental health disorder that is difficult to detect
early due to subjective symptom assessments. Recent advancements in large
language models have offered efficient and cost-effective approaches for this
objective. In this study, we evaluated the performance of four LLMs in
depression detection using clinical interview data. We selected the best
performing model and further tested it in the severity evaluation scenario and
knowledge enhanced scenario. The robustness was evaluated in complex diagnostic
scenarios using a dataset comprising 51074 statements from six different mental
disorders. We found that DeepSeek V3 is the most reliable and cost-effective
model for depression detection, performing well in both zero-shot and few-shot
scenarios, with zero-shot being the most efficient choice. The evaluation of
severity showed low agreement with the human evaluator, particularly for mild
depression. The model maintains consistently high AUCs for detecting depression
in complex diagnostic scenarios. These findings highlight DeepSeek V3's strong
potential for text-based depression detection in real-world clinical
applications. However, they also underscore the need for further refinement in
severity assessment and the mitigation of potential biases to enhance clinical
reliability.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 09:58:19 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Xian",
"Longdi",
""
],
[
"Ni",
"Jianzhang",
""
],
[
"Wang",
"Mingzhu",
""
]
] | TITLE: Leveraging Large Language Models for Cost-Effective, Multilingual
Depression Detection and Severity Assessment
ABSTRACT: Depression is a prevalent mental health disorder that is difficult to detect
early due to subjective symptom assessments. Recent advancements in large
language models have offered efficient and cost-effective approaches for this
objective. In this study, we evaluated the performance of four LLMs in
depression detection using clinical interview data. We selected the best
performing model and further tested it in the severity evaluation scenario and
knowledge enhanced scenario. The robustness was evaluated in complex diagnostic
scenarios using a dataset comprising 51074 statements from six different mental
disorders. We found that DeepSeek V3 is the most reliable and cost-effective
model for depression detection, performing well in both zero-shot and few-shot
scenarios, with zero-shot being the most efficient choice. The evaluation of
severity showed low agreement with the human evaluator, particularly for mild
depression. The model maintains consistently high AUCs for detecting depression
in complex diagnostic scenarios. These findings highlight DeepSeek V3's strong
potential for text-based depression detection in real-world clinical
applications. However, they also underscore the need for further refinement in
severity assessment and the mitigation of potential biases to enhance clinical
reliability.
|
2504.04893 | Justus Westerhoff | Justus Westerhoff, Erblina Purellku, Jakob Hackstein, Leo Pinetzki,
Lorenz Hufe | SCAM: A Real-World Typographic Robustness Evaluation for Multimodal
Foundation Models | Submitted to CVPR 2025 Workshop EVAL-FoMo-2 | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Typographic attacks exploit the interplay between text and visual content in
multimodal foundation models, causing misclassifications when misleading text
is embedded within images. However, existing datasets are limited in size and
diversity, making it difficult to study such vulnerabilities. In this paper, we
introduce SCAM, the largest and most diverse dataset of real-world typographic
attack images to date, containing 1,162 images across hundreds of object
categories and attack words. Through extensive benchmarking of Vision-Language
Models (VLMs) on SCAM, we demonstrate that typographic attacks significantly
degrade performance, and identify that training data and model architecture
influence the susceptibility to these attacks. Our findings reveal that
typographic attacks persist in state-of-the-art Large Vision-Language Models
(LVLMs) due to the choice of their vision encoder, though larger Large Language
Models (LLMs) backbones help mitigate their vulnerability. Additionally, we
demonstrate that synthetic attacks closely resemble real-world (handwritten)
attacks, validating their use in research. Our work provides a comprehensive
resource and empirical insights to facilitate future research toward robust and
trustworthy multimodal AI systems. We publicly release the datasets introduced
in this paper under https://huggingface.co/datasets/BLISS-e-V/SCAM, along with
the code for evaluations at https://github.com/Bliss-e-V/SCAM.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 10:01:38 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Westerhoff",
"Justus",
""
],
[
"Purellku",
"Erblina",
""
],
[
"Hackstein",
"Jakob",
""
],
[
"Pinetzki",
"Leo",
""
],
[
"Hufe",
"Lorenz",
""
]
] | TITLE: SCAM: A Real-World Typographic Robustness Evaluation for Multimodal
Foundation Models
ABSTRACT: Typographic attacks exploit the interplay between text and visual content in
multimodal foundation models, causing misclassifications when misleading text
is embedded within images. However, existing datasets are limited in size and
diversity, making it difficult to study such vulnerabilities. In this paper, we
introduce SCAM, the largest and most diverse dataset of real-world typographic
attack images to date, containing 1,162 images across hundreds of object
categories and attack words. Through extensive benchmarking of Vision-Language
Models (VLMs) on SCAM, we demonstrate that typographic attacks significantly
degrade performance, and identify that training data and model architecture
influence the susceptibility to these attacks. Our findings reveal that
typographic attacks persist in state-of-the-art Large Vision-Language Models
(LVLMs) due to the choice of their vision encoder, though larger Large Language
Models (LLMs) backbones help mitigate their vulnerability. Additionally, we
demonstrate that synthetic attacks closely resemble real-world (handwritten)
attacks, validating their use in research. Our work provides a comprehensive
resource and empirical insights to facilitate future research toward robust and
trustworthy multimodal AI systems. We publicly release the datasets introduced
in this paper under https://huggingface.co/datasets/BLISS-e-V/SCAM, along with
the code for evaluations at https://github.com/Bliss-e-V/SCAM.
|
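Since the abstract above points to the dataset release at https://huggingface.co/datasets/BLISS-e-V/SCAM, a short sketch of loading it with the Hugging Face datasets library follows. The available splits and column names are not stated in the abstract, so the snippet inspects whatever the dataset actually exposes rather than assuming field names.

# Requires: pip install datasets
from datasets import load_dataset

# Load the typographic-attack dataset released with the paper.
ds = load_dataset("BLISS-e-V/SCAM")

# Inspect available splits and columns instead of assuming their names.
for split_name, split in ds.items():
    print(split_name, len(split), split.column_names)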
2504.04915 | Ran Xu | Ran Xu, Wenqi Shi, Yuchen Zhuang, Yue Yu, Joyce C. Ho, Haoyu Wang,
Carl Yang | Collab-RAG: Boosting Retrieval-Augmented Generation for Complex Question
Answering via White-Box and Black-Box LLM Collaboration | Work in progress. Code: https://github.com/ritaranx/Collab-RAG/ | null | null | null | cs.CL cs.AI cs.IR cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Retrieval-Augmented Generation (RAG) systems often struggle to handle
multi-hop question-answering tasks accurately due to irrelevant context
retrieval and limited complex reasoning capabilities. We introduce Collab-RAG,
a collaborative training framework that leverages mutual enhancement between a
white-box small language model (SLM) and a black-box large language model (LLM)
for RAG. Specifically, the SLM decomposes complex queries into simpler
sub-questions, thus enhancing the accuracy of the retrieval and facilitating
more effective reasoning by the black-box LLM. Concurrently, the black-box LLM
provides feedback signals to improve the SLM's decomposition capability. We
observe that Collab-RAG relies solely on supervision from an affordable
black-box LLM without additional distillation from frontier LLMs, yet
demonstrates strong generalization across multiple black-box LLMs. Experimental
evaluations across five multi-hop QA datasets demonstrate that Collab-RAG
substantially outperforms existing black-box-only and SLM fine-tuning baselines
by 1.8%-14.2% on average. In particular, our fine-tuned 3B SLM surpasses a
frozen 32B LLM in question decomposition, highlighting the efficiency of
Collab-RAG in improving reasoning and retrieval for complex questions. The code
of Collab-RAG is available on https://github.com/ritaranx/Collab-RAG/.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 10:52:22 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Xu",
"Ran",
""
],
[
"Shi",
"Wenqi",
""
],
[
"Zhuang",
"Yuchen",
""
],
[
"Yu",
"Yue",
""
],
[
"Ho",
"Joyce C.",
""
],
[
"Wang",
"Haoyu",
""
],
[
"Yang",
"Carl",
""
]
] | TITLE: Collab-RAG: Boosting Retrieval-Augmented Generation for Complex Question
Answering via White-Box and Black-Box LLM Collaboration
ABSTRACT: Retrieval-Augmented Generation (RAG) systems often struggle to handle
multi-hop question-answering tasks accurately due to irrelevant context
retrieval and limited complex reasoning capabilities. We introduce Collab-RAG,
a collaborative training framework that leverages mutual enhancement between a
white-box small language model (SLM) and a black-box large language model (LLM)
for RAG. Specifically, the SLM decomposes complex queries into simpler
sub-questions, thus enhancing the accuracy of the retrieval and facilitating
more effective reasoning by the black-box LLM. Concurrently, the black-box LLM
provides feedback signals to improve the SLM's decomposition capability. We
observe that Collab-RAG relies solely on supervision from an affordable
black-box LLM without additional distillation from frontier LLMs, yet
demonstrates strong generalization across multiple black-box LLMs. Experimental
evaluations across five multi-hop QA datasets demonstrate that Collab-RAG
substantially outperforms existing black-box-only and SLM fine-tuning baselines
by 1.8%-14.2% on average. In particular, our fine-tuned 3B SLM surpasses a
frozen 32B LLM in question decomposition, highlighting the efficiency of
Collab-RAG in improving reasoning and retrieval for complex questions. The code
of Collab-RAG is available on https://github.com/ritaranx/Collab-RAG/.
|
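A minimal sketch of the decompose-retrieve-answer flow that the Collab-RAG abstract describes: a small model splits a multi-hop question into sub-questions, a retriever fetches passages per sub-question, and a black-box LLM answers over the collected context. Both model calls and the retriever are hypothetical placeholders, and the paper's training loop (feedback from the black-box LLM to the SLM) is not shown.

from typing import List

def slm_decompose(question: str) -> List[str]:
    """Hypothetical small-model call that splits a question into sub-questions."""
    return [f"Sub-question about: {question} (part {i + 1})" for i in range(2)]

def retrieve(query: str, k: int = 3) -> List[str]:
    """Hypothetical retriever; a real system would query a vector index."""
    return [f"passage {i} for '{query}'" for i in range(k)]

def llm_answer(question: str, context: List[str]) -> str:
    """Hypothetical black-box LLM call that answers given retrieved context."""
    return f"Answer to '{question}' using {len(context)} passages."

def collab_rag_style_answer(question: str) -> str:
    sub_questions = slm_decompose(question)
    context = []
    for sq in sub_questions:
        context.extend(retrieve(sq))
    return llm_answer(question, context)

print(collab_rag_style_answer("Who directed the film that won Best Picture in 1995?"))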
2504.04935 | Peng Liu | Peng Liu, Heng-Chao Li, Sen Lei, Nanqing Liu, Bin Feng, and Xiao Wu | RCCFormer: A Robust Crowd Counting Network Based on Transformer | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Crowd counting, which is a key computer vision task, has emerged as a
fundamental technology in crowd analysis and public safety management. However,
challenges such as scale variations and complex backgrounds significantly
impact the accuracy of crowd counting. To mitigate these issues, this paper
proposes a robust Transformer-based crowd counting network, termed RCCFormer,
specifically designed for background suppression and scale awareness. The
proposed method incorporates a Multi-level Feature Fusion Module (MFFM), which
meticulously integrates features extracted at diverse stages of the backbone
architecture. It establishes a strong baseline capable of capturing intricate
and comprehensive feature representations, surpassing traditional baselines.
Furthermore, the introduced Detail-Embedded Attention Block (DEAB) captures
contextual information and local details through global self-attention and
local attention, combined in a learnable manner for efficient fusion. This
enhances the model's ability to focus on foreground regions while effectively
mitigating background noise interference. Additionally, we develop an Adaptive
Scale-Aware Module (ASAM), with our novel Input-dependent Deformable
Convolution (IDConv) as its fundamental building block. This module dynamically
adapts to changes in head target shapes and scales, significantly improving the
network's capability to accommodate large-scale variations. The effectiveness
of the proposed method is validated on the ShanghaiTech Part_A and Part_B,
NWPU-Crowd, and QNRF datasets. The results demonstrate that our RCCFormer
achieves excellent performance across all four datasets, showcasing
state-of-the-art outcomes.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 11:19:05 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Liu",
"Peng",
""
],
[
"Li",
"Heng-Chao",
""
],
[
"Lei",
"Sen",
""
],
[
"Liu",
"Nanqing",
""
],
[
"Feng",
"Bin",
""
],
[
"Wu",
"Xiao",
""
]
] | TITLE: RCCFormer: A Robust Crowd Counting Network Based on Transformer
ABSTRACT: Crowd counting, which is a key computer vision task, has emerged as a
fundamental technology in crowd analysis and public safety management. However,
challenges such as scale variations and complex backgrounds significantly
impact the accuracy of crowd counting. To mitigate these issues, this paper
proposes a robust Transformer-based crowd counting network, termed RCCFormer,
specifically designed for background suppression and scale awareness. The
proposed method incorporates a Multi-level Feature Fusion Module (MFFM), which
meticulously integrates features extracted at diverse stages of the backbone
architecture. It establishes a strong baseline capable of capturing intricate
and comprehensive feature representations, surpassing traditional baselines.
Furthermore, the introduced Detail-Embedded Attention Block (DEAB) captures
contextual information and local details through global self-attention and
local attention, combined in a learnable manner for efficient fusion. This
enhances the model's ability to focus on foreground regions while effectively
mitigating background noise interference. Additionally, we develop an Adaptive
Scale-Aware Module (ASAM), with our novel Input-dependent Deformable
Convolution (IDConv) as its fundamental building block. This module dynamically
adapts to changes in head target shapes and scales, significantly improving the
network's capability to accommodate large-scale variations. The effectiveness
of the proposed method is validated on the ShanghaiTech Part_A and Part_B,
NWPU-Crowd, and QNRF datasets. The results demonstrate that our RCCFormer
achieves excellent performance across all four datasets, showcasing
state-of-the-art outcomes.
|
2504.04945 | Rean Clive Fernandes | Rean Fernandes, Andr\'e Biedenkapp, Frank Hutter, Noor Awad | A Llama walks into the 'Bar': Efficient Supervised Fine-Tuning for Legal
Reasoning in the Multi-state Bar Exam | COLM 2025 preprint, 9 pages, 3 figures, 16 appendix pages | null | null | null | cs.LG cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Legal reasoning tasks present unique challenges for large language models
(LLMs) due to the complexity of domain-specific knowledge and reasoning
processes. This paper investigates how effectively smaller language models
(Llama 2 7B and Llama 3 8B) can be fine-tuned with a limited dataset of 1,514
Multi-state Bar Examination (MBE) questions to improve legal question answering
accuracy. We evaluate these models on the 2022 MBE questions licensed from JD
Advising, the same dataset used in the 'GPT-4 passes the Bar exam' study. Our
methodology involves collecting approximately 200 questions per legal domain
across 7 domains. We distill the dataset using Llama 3 (70B) to transform
explanations into a structured IRAC (Issue, Rule, Application, Conclusion)
format as a guided reasoning process to see if it results in better performance
over the non-distilled dataset. We compare the non-fine-tuned models against
their supervised fine-tuned (SFT) counterparts, trained for different sample
sizes per domain, to study the effect on accuracy and prompt adherence. We also
analyse option selection biases and their mitigation following SFT. In
addition, we consolidate the performance across multiple variables: prompt type
(few-shot vs zero-shot), answer ordering (chosen-option first vs
generated-explanation first), response format (Numbered list vs Markdown vs
JSON), and different decoding temperatures. Our findings show that
domain-specific SFT helps some model configurations achieve close to human
baseline performance, despite limited computational resources and a relatively
small dataset. We release both the gathered SFT dataset and the family of
Supervised Fine-tuned (SFT) adapters optimised for MBE performance. This
establishes a practical lower bound on resources needed towards achieving
effective legal question answering in smaller LLMs.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 11:31:22 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Fernandes",
"Rean",
""
],
[
"Biedenkapp",
"André",
""
],
[
"Hutter",
"Frank",
""
],
[
"Awad",
"Noor",
""
]
] | TITLE: A Llama walks into the 'Bar': Efficient Supervised Fine-Tuning for Legal
Reasoning in the Multi-state Bar Exam
ABSTRACT: Legal reasoning tasks present unique challenges for large language models
(LLMs) due to the complexity of domain-specific knowledge and reasoning
processes. This paper investigates how effectively smaller language models
(Llama 2 7B and Llama 3 8B) can be fine-tuned with a limited dataset of 1,514
Multi-state Bar Examination (MBE) questions to improve legal question answering
accuracy. We evaluate these models on the 2022 MBE questions licensed from JD
Advising, the same dataset used in the 'GPT-4 passes the Bar exam' study. Our
methodology involves collecting approximately 200 questions per legal domain
across 7 domains. We distill the dataset using Llama 3 (70B) to transform
explanations into a structured IRAC (Issue, Rule, Application, Conclusion)
format as a guided reasoning process to see if it results in better performance
over the non-distilled dataset. We compare the non-fine-tuned models against
their supervised fine-tuned (SFT) counterparts, trained for different sample
sizes per domain, to study the effect on accuracy and prompt adherence. We also
analyse option selection biases and their mitigation following SFT. In
addition, we consolidate the performance across multiple variables: prompt type
(few-shot vs zero-shot), answer ordering (chosen-option first vs
generated-explanation first), response format (Numbered list vs Markdown vs
JSON), and different decoding temperatures. Our findings show that
domain-specific SFT helps some model configurations achieve close to human
baseline performance, despite limited computational resources and a relatively
small dataset. We release both the gathered SFT dataset and the family of
Supervised Fine-tuned (SFT) adapters optimised for MBE performance. This
establishes a practical lower bound on resources needed towards achieving
effective legal question answering in smaller LLMs.
|
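The abstract above mentions distilling explanations into a structured IRAC (Issue, Rule, Application, Conclusion) format as a guided reasoning process. A small sketch of building such a prompt for an MBE-style multiple-choice question is shown below; the exact wording, the section ordering, and the example question are illustrative assumptions, not the released prompt or data.

def build_irac_prompt(question: str, options: dict) -> str:
    """Format a multiple-choice legal question with IRAC-guided instructions."""
    option_lines = "\n".join(f"({k}) {v}" for k, v in sorted(options.items()))
    return (
        "Answer the bar exam question using IRAC reasoning.\n"
        f"Question: {question}\n"
        f"Options:\n{option_lines}\n\n"
        "Issue:\nRule:\nApplication:\nConclusion:\nChosen option:"
    )

# Hypothetical example question, not taken from the licensed MBE dataset.
prompt = build_irac_prompt(
    "A landlord leases premises 'for as long as the tenant wishes'. What estate is created?",
    {"A": "Fee simple", "B": "Tenancy at will", "C": "Term of years", "D": "Periodic tenancy"},
)
print(prompt)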
2504.04949 | Linwei Zhai | Linwei Zhai, Han Ding, Cui Zhao, fei wang, Ge Wang, Wang Zhi, Wei Xi | One Quantizer is Enough: Toward a Lightweight Audio Codec | null | null | null | null | cs.SD cs.AI | http://creativecommons.org/licenses/by-sa/4.0/ | Neural audio codecs have recently gained traction for their ability to
compress high-fidelity audio and generate discrete tokens that can be utilized
in downstream generative modeling tasks. However, leading approaches often rely
on resource-intensive models and multi-quantizer architectures, resulting in
considerable computational overhead and constrained real-world applicability.
In this paper, we present SQCodec, a lightweight neural audio codec that
leverages a single quantizer to address these limitations. SQCodec explores
streamlined convolutional networks and local Transformer modules, alongside
TConv, a novel mechanism designed to capture acoustic variations across
multiple temporal scales, thereby enhancing reconstruction fidelity while
reducing model complexity. Extensive experiments across diverse datasets show
that SQCodec achieves audio quality comparable to multi-quantizer baselines,
while its single-quantizer design offers enhanced adaptability and its
lightweight architecture reduces resource consumption by an order of magnitude.
The source code is publicly available at https://github.com/zhai-lw/SQCodec.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 11:34:39 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Zhai",
"Linwei",
""
],
[
"Ding",
"Han",
""
],
[
"Zhao",
"Cui",
""
],
[
"wang",
"fei",
""
],
[
"Wang",
"Ge",
""
],
[
"Zhi",
"Wang",
""
],
[
"Xi",
"Wei",
""
]
] | TITLE: One Quantizer is Enough: Toward a Lightweight Audio Codec
ABSTRACT: Neural audio codecs have recently gained traction for their ability to
compress high-fidelity audio and generate discrete tokens that can be utilized
in downstream generative modeling tasks. However, leading approaches often rely
on resource-intensive models and multi-quantizer architectures, resulting in
considerable computational overhead and constrained real-world applicability.
In this paper, we present SQCodec, a lightweight neural audio codec that
leverages a single quantizer to address these limitations. SQCodec explores
streamlined convolutional networks and local Transformer modules, alongside
TConv, a novel mechanism designed to capture acoustic variations across
multiple temporal scales, thereby enhancing reconstruction fidelity while
reducing model complexity. Extensive experiments across diverse datasets show
that SQCodec achieves audio quality comparable to multi-quantizer baselines,
while its single-quantizer design offers enhanced adaptability and its
lightweight architecture reduces resource consumption by an order of magnitude.
The source code is publicly available at https://github.com/zhai-lw/SQCodec.
|
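A minimal NumPy sketch of the single-quantizer step at the core of a codec like the one described above: each latent frame is replaced by its nearest codebook vector and represented by a single index. The codebook size, latent dimension, and plain nearest-neighbour rule are assumed for illustration; the paper's actual quantizer, training losses, and TConv module are not reproduced.

import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(256, 64))   # 256 codewords, 64-dim latents (assumed sizes)
latents = rng.normal(size=(100, 64))    # 100 encoder frames to quantize

def quantize(latents, codebook):
    """Map each latent frame to the index of its nearest codebook vector."""
    # Pairwise squared distances between frames and codewords.
    d2 = ((latents[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    indices = d2.argmin(axis=1)          # one discrete token per frame
    return indices, codebook[indices]    # tokens and their reconstructions

tokens, quantized = quantize(latents, codebook)
print(tokens[:10], quantized.shape)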
2504.04950 | Xiaochen Zuo | Wenyuan Xu, Xiaochen Zuo, Chao Xin, Yu Yue, Lin Yan, Yonghui Wu | A Unified Pairwise Framework for RLHF: Bridging Generative Reward
Modeling and Policy Optimization | 11 pages, 2 figures | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement Learning from Human Feedback (RLHF) has emerged as an important
paradigm for aligning large language models (LLMs) with human preferences
during post-training. This framework typically involves two stages: first,
training a reward model on human preference data, followed by optimizing the
language model using reinforcement learning algorithms. However, current RLHF
approaches may be constrained by two limitations. First, existing RLHF frameworks
often rely on Bradley-Terry models to assign scalar rewards based on pairwise
comparisons of individual responses. However, this approach imposes significant
challenges on the reward model (RM), as the inherent variability in prompt-response
pairs across different contexts demands robust calibration capabilities from
the RM. Second, reward models are typically initialized from generative
foundation models, such as pre-trained or supervised fine-tuned models, despite
the fact that reward models perform discriminative tasks, creating a mismatch.
This paper introduces Pairwise-RL, an RLHF framework that addresses these
challenges through a combination of generative reward modeling and a pairwise
proximal policy optimization (PPO) algorithm. Pairwise-RL unifies reward model
training and its application during reinforcement learning within a consistent
pairwise paradigm, leveraging generative modeling techniques to enhance reward
model performance and score calibration. Experimental evaluations demonstrate
that Pairwise-RL outperforms traditional RLHF frameworks across both internal
evaluation datasets and standard public benchmarks, underscoring its
effectiveness in improving alignment and model behavior.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 11:34:48 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Xu",
"Wenyuan",
""
],
[
"Zuo",
"Xiaochen",
""
],
[
"Xin",
"Chao",
""
],
[
"Yue",
"Yu",
""
],
[
"Yan",
"Lin",
""
],
[
"Wu",
"Yonghui",
""
]
] | TITLE: A Unified Pairwise Framework for RLHF: Bridging Generative Reward
Modeling and Policy Optimization
ABSTRACT: Reinforcement Learning from Human Feedback (RLHF) has emerged as an important
paradigm for aligning large language models (LLMs) with human preferences
during post-training. This framework typically involves two stages: first,
training a reward model on human preference data, followed by optimizing the
language model using reinforcement learning algorithms. However, current RLHF
approaches may be constrained by two limitations. First, existing RLHF frameworks
often rely on Bradley-Terry models to assign scalar rewards based on pairwise
comparisons of individual responses. However, this approach imposes significant
challenges on the reward model (RM), as the inherent variability in prompt-response
pairs across different contexts demands robust calibration capabilities from
the RM. Second, reward models are typically initialized from generative
foundation models, such as pre-trained or supervised fine-tuned models, despite
the fact that reward models perform discriminative tasks, creating a mismatch.
This paper introduces Pairwise-RL, an RLHF framework that addresses these
challenges through a combination of generative reward modeling and a pairwise
proximal policy optimization (PPO) algorithm. Pairwise-RL unifies reward model
training and its application during reinforcement learning within a consistent
pairwise paradigm, leveraging generative modeling techniques to enhance reward
model performance and score calibration. Experimental evaluations demonstrate
that Pairwise-RL outperforms traditional RLHF frameworks across both internal
evaluation datasets and standard public benchmarks, underscoring its
effectiveness in improving alignment and model behavior.
|
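For reference alongside the abstract above, the conventional Bradley-Terry reward-modeling objective it contrasts with can be written as a pairwise logistic loss on scalar rewards. The tiny NumPy sketch below computes that loss for toy scores; it illustrates the baseline formulation the paper moves away from, not Pairwise-RL's generative pairwise reward model.

import numpy as np

def bradley_terry_loss(r_chosen, r_rejected):
    """Pairwise logistic loss: -log sigmoid(r_chosen - r_rejected), averaged."""
    margin = np.asarray(r_chosen) - np.asarray(r_rejected)
    return float(np.mean(np.log1p(np.exp(-margin))))

# Toy scalar rewards for three preference pairs (chosen vs. rejected responses).
r_chosen = [1.2, 0.3, 2.0]
r_rejected = [0.4, 0.5, 1.1]
print(bradley_terry_loss(r_chosen, r_rejected))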
2504.04953 | Jos\'e Pombal | Jos\'e Pombal, Dongkeun Yoon, Patrick Fernandes, Ian Wu, Seungone Kim,
Ricardo Rei, Graham Neubig, Andr\'e F. T. Martins | M-Prometheus: A Suite of Open Multilingual LLM Judges | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | The use of language models for automatically evaluating long-form text
(LLM-as-a-judge) is becoming increasingly common, yet most LLM judges are
optimized exclusively for English, with strategies for enhancing their
multilingual evaluation capabilities remaining largely unexplored in the
current literature. This has created a disparity in the quality of automatic
evaluation methods for non-English languages, ultimately hindering the
development of models with better multilingual capabilities. To bridge this
gap, we introduce M-Prometheus, a suite of open-weight LLM judges ranging from
3B to 14B parameters that can provide both direct assessment and pairwise
comparison feedback on multilingual outputs. M-Prometheus models outperform
state-of-the-art open LLM judges on multilingual reward benchmarks spanning
more than 20 languages, as well as on literary machine translation (MT)
evaluation covering 4 language pairs. Furthermore, M-Prometheus models can be
leveraged at decoding time to significantly improve generated outputs across
all 3 tested languages, showcasing their utility for the development of better
multilingual models. Lastly, through extensive ablations, we identify the key
factors for obtaining an effective multilingual judge, including backbone model
selection and training on natively multilingual feedback data instead of
translated data. We release our models, training dataset, and code.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 11:37:26 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Pombal",
"José",
""
],
[
"Yoon",
"Dongkeun",
""
],
[
"Fernandes",
"Patrick",
""
],
[
"Wu",
"Ian",
""
],
[
"Kim",
"Seungone",
""
],
[
"Rei",
"Ricardo",
""
],
[
"Neubig",
"Graham",
""
],
[
"Martins",
"André F. T.",
""
]
] | TITLE: M-Prometheus: A Suite of Open Multilingual LLM Judges
ABSTRACT: The use of language models for automatically evaluating long-form text
(LLM-as-a-judge) is becoming increasingly common, yet most LLM judges are
optimized exclusively for English, with strategies for enhancing their
multilingual evaluation capabilities remaining largely unexplored in the
current literature. This has created a disparity in the quality of automatic
evaluation methods for non-English languages, ultimately hindering the
development of models with better multilingual capabilities. To bridge this
gap, we introduce M-Prometheus, a suite of open-weight LLM judges ranging from
3B to 14B parameters that can provide both direct assessment and pairwise
comparison feedback on multilingual outputs. M-Prometheus models outperform
state-of-the-art open LLM judges on multilingual reward benchmarks spanning
more than 20 languages, as well as on literary machine translation (MT)
evaluation covering 4 language pairs. Furthermore, M-Prometheus models can be
leveraged at decoding time to significantly improve generated outputs across
all 3 tested languages, showcasing their utility for the development of better
multilingual models. Lastly, through extensive ablations, we identify the key
factors for obtaining an effective multilingual judge, including backbone model
selection and training on natively multilingual feedback data instead of
translated data. We release our models, training dataset, and code.
|
2504.04954 | Aditya Shahane Mr | Aditya Hemant Shahane, Prathosh A.P, Sandeep Kumar | GOTHAM: Graph Class Incremental Learning Framework under Weak
Supervision | null | null | null | null | cs.AI | http://creativecommons.org/publicdomain/zero/1.0/ | Graphs are growing rapidly, along with the number of distinct label
categories associated with them. Applications like e-commerce, healthcare,
recommendation systems, and various social media platforms are rapidly moving
towards graph representation of data due to their ability to capture both
structural and attribute information. One crucial task in graph analysis is
node classification, where unlabeled nodes are categorized into predefined
classes. In practice, novel classes appear incrementally, sometimes with just a
few labels (seen classes) or even without any labels (unseen classes), either
because they are new or haven't been explored much. Traditional methods assume
abundant labeled data for training, which isn't always feasible. We investigate
a broader objective: \emph{Graph Class Incremental Learning under Weak
Supervision (GCL)}, addressing this challenge by meta-training on base classes
with limited labeled instances. During the incremental streams, novel classes
can have few-shot or zero-shot representation. Our proposed framework GOTHAM
efficiently accommodates these unlabeled nodes by finding the closest prototype
representation, serving as class representatives in the attribute space. For
Text-Attributed Graphs (TAGs), our framework additionally incorporates semantic
information to enhance the representation. By employing teacher-student
knowledge distillation to mitigate forgetting, GOTHAM achieves promising
results across various tasks. Experiments on datasets such as Cora-ML, Amazon,
and OGBN-Arxiv showcase the effectiveness of our approach in handling evolving
graph data under limited supervision. The repository is available here:
\href{https://github.com/adityashahane10/GOTHAM--Graph-based-Class-Incremental-Learning-Framework-under-Weak-Supervision}{\small
\textcolor{blue}{Code}}
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 11:39:13 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Shahane",
"Aditya Hemant",
""
],
[
"P",
"Prathosh A.",
""
],
[
"Kumar",
"Sandeep",
""
]
] | TITLE: GOTHAM: Graph Class Incremental Learning Framework under Weak
Supervision
ABSTRACT: Graphs are growing rapidly, along with the number of distinct label
categories associated with them. Applications like e-commerce, healthcare,
recommendation systems, and various social media platforms are rapidly moving
towards graph representation of data due to their ability to capture both
structural and attribute information. One crucial task in graph analysis is
node classification, where unlabeled nodes are categorized into predefined
classes. In practice, novel classes appear incrementally, sometimes with just a
few labels (seen classes) or even without any labels (unseen classes), either
because they are new or haven't been explored much. Traditional methods assume
abundant labeled data for training, which isn't always feasible. We investigate
a broader objective: \emph{Graph Class Incremental Learning under Weak
Supervision (GCL)}, addressing this challenge by meta-training on base classes
with limited labeled instances. During the incremental streams, novel classes
can have few-shot or zero-shot representation. Our proposed framework GOTHAM
efficiently accommodates these unlabeled nodes by finding the closest prototype
representation, serving as class representatives in the attribute space. For
Text-Attributed Graphs (TAGs), our framework additionally incorporates semantic
information to enhance the representation. By employing teacher-student
knowledge distillation to mitigate forgetting, GOTHAM achieves promising
results across various tasks. Experiments on datasets such as Cora-ML, Amazon,
and OGBN-Arxiv showcase the effectiveness of our approach in handling evolving
graph data under limited supervision. The repository is available here:
\href{https://github.com/adityashahane10/GOTHAM--Graph-based-Class-Incremental-Learning-Framework-under-Weak-Supervision}{\small
\textcolor{blue}{Code}}
|
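A minimal sketch of the nearest-prototype idea in the GOTHAM abstract above: class prototypes are the mean embeddings of the few labeled nodes per class, and unlabeled nodes are assigned to the closest prototype. The embedding sizes, toy data, and Euclidean metric are assumptions for illustration; the paper's meta-training, semantic enhancement for TAGs, and knowledge distillation are not shown.

import numpy as np

rng = np.random.default_rng(0)
emb_dim = 16
# Few labeled node embeddings for 3 classes (toy data, assumed for the sketch).
labeled = {c: rng.normal(loc=c, size=(5, emb_dim)) for c in range(3)}
unlabeled = rng.normal(loc=1.0, size=(10, emb_dim))

# Class prototypes: mean embedding of each class's labeled nodes.
classes = list(labeled.keys())
prototypes = np.stack([labeled[c].mean(axis=0) for c in classes])

# Assign each unlabeled node to the nearest prototype (Euclidean distance).
d2 = ((unlabeled[:, None, :] - prototypes[None, :, :]) ** 2).sum(-1)
assignments = [classes[i] for i in d2.argmin(axis=1)]
print(assignments)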
2504.04963 | Yuzhe Zhang | Yuzhe Zhang, Min Cen, Hong Zhang | Constraint Multi-class Positive and Unlabeled Learning for Distantly
Supervised Named Entity Recognition | 28 pages, 3 figures. First submitted in Oct. 2023 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Distantly supervised named entity recognition (DS-NER) has been proposed to
exploit the automatically labeled training data by external knowledge bases
instead of human annotations. However, it tends to suffer from a high false
negative rate due to the inherent incompleteness. To address this issue, we
present a novel approach called \textbf{C}onstraint \textbf{M}ulti-class
\textbf{P}ositive and \textbf{U}nlabeled Learning (CMPU), which introduces a
constraint factor on the risk estimator of multiple positive classes. It
suggests that the constraint non-negative risk estimator is more robust against
overfitting than previous PU learning methods with limited positive data. Solid
theoretical analysis on CMPU is provided to prove the validity of our approach.
Extensive experiments on two benchmark datasets that were labeled using diverse
external knowledge sources serve to demonstrate the superior performance of
CMPU in comparison to existing DS-NER methods.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 11:51:41 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Zhang",
"Yuzhe",
""
],
[
"Cen",
"Min",
""
],
[
"Zhang",
"Hong",
""
]
] | TITLE: Constraint Multi-class Positive and Unlabeled Learning for Distantly
Supervised Named Entity Recognition
ABSTRACT: Distantly supervised named entity recognition (DS-NER) has been proposed to
exploit the automatically labeled training data by external knowledge bases
instead of human annotations. However, it tends to suffer from a high false
negative rate due to the inherent incompleteness. To address this issue, we
present a novel approach called \textbf{C}onstraint \textbf{M}ulti-class
\textbf{P}ositive and \textbf{U}nlabeled Learning (CMPU), which introduces a
constraint factor on the risk estimator of multiple positive classes. It
suggests that the constraint non-negative risk estimator is more robust against
overfitting than previous PU learning methods with limited positive data. Solid
theoretical analysis on CMPU is provided to prove the validity of our approach.
Extensive experiments on two benchmark datasets that were labeled using diverse
external knowledge sources serve to demonstrate the superior performance of
CMPU in comparison to existing DS-NER methods.
|
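For context on the abstract above, the standard non-negative PU risk estimator, on which constraint-based variants build, clamps the estimated negative-class risk at zero. The binary-case NumPy sketch below shows that clamping; it is background for the idea rather than the paper's multi-class CMPU estimator, and the class prior and surrogate loss are toy assumptions.

import numpy as np

def sigmoid_loss(scores, labels):
    """Smooth surrogate loss l(y * score) = sigmoid(-y * score), averaged."""
    return float(np.mean(1.0 / (1.0 + np.exp(np.asarray(labels) * np.asarray(scores)))))

def nn_pu_risk(scores_pos, scores_unl, prior):
    """Non-negative PU risk (binary case): clamp the negative-risk term at zero."""
    risk_pos = prior * sigmoid_loss(scores_pos, np.ones_like(scores_pos))
    risk_neg = (sigmoid_loss(scores_unl, -np.ones_like(scores_unl))
                - prior * sigmoid_loss(scores_pos, -np.ones_like(scores_pos)))
    return risk_pos + max(0.0, risk_neg)

rng = np.random.default_rng(0)
scores_pos = rng.normal(loc=1.0, size=50)    # classifier scores on labeled positives
scores_unl = rng.normal(loc=0.0, size=200)   # classifier scores on unlabeled data
print(nn_pu_risk(scores_pos, scores_unl, prior=0.3))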
2504.04974 | Ming Li | Ming Li, Ruiyi Zhang, Jian Chen, Jiuxiang Gu, Yufan Zhou, Franck
Dernoncourt, Wanrong Zhu, Tianyi Zhou, Tong Sun | Towards Visual Text Grounding of Multimodal Large Language Model | null | null | null | null | cs.CV cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite the existing evolution of Multimodal Large Language Models (MLLMs), a
non-negligible limitation remains in their struggle with visual text
grounding, especially in text-rich images of documents. Document images, such
as scanned forms and infographics, highlight critical challenges due to their
complex layouts and textual content. However, current benchmarks do not fully
address these challenges, as they mostly focus on visual grounding on natural
images, rather than text-rich document images. Thus, to bridge this gap, we
introduce TRIG, a novel task with a newly designed instruction dataset for
benchmarking and improving the Text-Rich Image Grounding capabilities of MLLMs
in document question-answering. Specifically, we propose an OCR-LLM-human
interaction pipeline to create 800 manually annotated question-answer pairs as
a benchmark and a large-scale training set of 90K synthetic data based on four
diverse datasets. A comprehensive evaluation of various MLLMs on our proposed
benchmark exposes substantial limitations in their grounding capability on
text-rich images. In addition, we propose two simple and effective TRIG methods
based on general instruction tuning and plug-and-play efficient embedding,
respectively. By finetuning MLLMs on our synthetic dataset, they promisingly
improve spatial reasoning and grounding capabilities.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 12:01:59 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Li",
"Ming",
""
],
[
"Zhang",
"Ruiyi",
""
],
[
"Chen",
"Jian",
""
],
[
"Gu",
"Jiuxiang",
""
],
[
"Zhou",
"Yufan",
""
],
[
"Dernoncourt",
"Franck",
""
],
[
"Zhu",
"Wanrong",
""
],
[
"Zhou",
"Tianyi",
""
],
[
"Sun",
"Tong",
""
]
] | TITLE: Towards Visual Text Grounding of Multimodal Large Language Model
ABSTRACT: Despite the existing evolution of Multimodal Large Language Models (MLLMs), a
non-negligible limitation remains in their struggle with visual text
grounding, especially in text-rich images of documents. Document images, such
as scanned forms and infographics, highlight critical challenges due to their
complex layouts and textual content. However, current benchmarks do not fully
address these challenges, as they mostly focus on visual grounding on natural
images, rather than text-rich document images. Thus, to bridge this gap, we
introduce TRIG, a novel task with a newly designed instruction dataset for
benchmarking and improving the Text-Rich Image Grounding capabilities of MLLMs
in document question-answering. Specifically, we propose an OCR-LLM-human
interaction pipeline to create 800 manually annotated question-answer pairs as
a benchmark and a large-scale training set of 90K synthetic data based on four
diverse datasets. A comprehensive evaluation of various MLLMs on our proposed
benchmark exposes substantial limitations in their grounding capability on
text-rich images. In addition, we propose two simple and effective TRIG methods
based on general instruction tuning and plug-and-play efficient embedding,
respectively. By finetuning MLLMs on our synthetic dataset, they promisingly
improve spatial reasoning and grounding capabilities.
|
2504.04988 | Congcong Wen | Congcong Wen, Yiting Lin, Xiaokang Qu, Nan Li, Yong Liao, Hui Lin,
Xiang Li | RS-RAG: Bridging Remote Sensing Imagery and Comprehensive Knowledge with
a Multi-Modal Dataset and Retrieval-Augmented Generation Model | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent progress in VLMs has demonstrated impressive capabilities across a
variety of tasks in the natural image domain. Motivated by these advancements,
the remote sensing community has begun to adopt VLMs for remote sensing
vision-language tasks, including scene understanding, image captioning, and
visual question answering. However, existing remote sensing VLMs typically rely
on closed-set scene understanding and focus on generic scene descriptions, yet
lack the ability to incorporate external knowledge. This limitation hinders
their capacity for semantic reasoning over complex or context-dependent queries
that involve domain-specific or world knowledge. To address these challenges,
we first introduced a multimodal Remote Sensing World Knowledge (RSWK) dataset,
which comprises high-resolution satellite imagery and detailed textual
descriptions for 14,141 well-known landmarks from 175 countries, integrating
both remote sensing domain knowledge and broader world knowledge. Building upon
this dataset, we proposed a novel Remote Sensing Retrieval-Augmented Generation
(RS-RAG) framework, which consists of two key components. The Multi-Modal
Knowledge Vector Database Construction module encodes remote sensing imagery
and associated textual knowledge into a unified vector space. The Knowledge
Retrieval and Response Generation module retrieves and re-ranks relevant
knowledge based on image and/or text queries, and incorporates the retrieved
content into a knowledge-augmented prompt to guide the VLM in producing
contextually grounded responses. We validated the effectiveness of our approach
on three representative vision-language tasks, including image captioning,
image classification, and visual question answering, where RS-RAG significantly
outperformed state-of-the-art baselines.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 12:13:43 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wen",
"Congcong",
""
],
[
"Lin",
"Yiting",
""
],
[
"Qu",
"Xiaokang",
""
],
[
"Li",
"Nan",
""
],
[
"Liao",
"Yong",
""
],
[
"Lin",
"Hui",
""
],
[
"Li",
"Xiang",
""
]
] | TITLE: RS-RAG: Bridging Remote Sensing Imagery and Comprehensive Knowledge with
a Multi-Modal Dataset and Retrieval-Augmented Generation Model
ABSTRACT: Recent progress in vision-language models (VLMs) has demonstrated impressive capabilities across a
variety of tasks in the natural image domain. Motivated by these advancements,
the remote sensing community has begun to adopt VLMs for remote sensing
vision-language tasks, including scene understanding, image captioning, and
visual question answering. However, existing remote sensing VLMs typically rely
on closed-set scene understanding and focus on generic scene descriptions, yet
lack the ability to incorporate external knowledge. This limitation hinders
their capacity for semantic reasoning over complex or context-dependent queries
that involve domain-specific or world knowledge. To address these challenges,
we first introduced a multimodal Remote Sensing World Knowledge (RSWK) dataset,
which comprises high-resolution satellite imagery and detailed textual
descriptions for 14,141 well-known landmarks from 175 countries, integrating
both remote sensing domain knowledge and broader world knowledge. Building upon
this dataset, we proposed a novel Remote Sensing Retrieval-Augmented Generation
(RS-RAG) framework, which consists of two key components. The Multi-Modal
Knowledge Vector Database Construction module encodes remote sensing imagery
and associated textual knowledge into a unified vector space. The Knowledge
Retrieval and Response Generation module retrieves and re-ranks relevant
knowledge based on image and/or text queries, and incorporates the retrieved
content into a knowledge-augmented prompt to guide the VLM in producing
contextually grounded responses. We validated the effectiveness of our approach
on three representative vision-language tasks, including image captioning,
image classification, and visual question answering, where RS-RAG significantly
outperformed state-of-the-art baselines.
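A minimal sketch of the retrieval-and-prompting idea summarized above, assuming
precomputed embeddings in a shared vector space; the toy store, entry texts, and
dimensions below are illustrative placeholders, not the RS-RAG implementation.

import numpy as np

def cosine_top_k(query_vec, kb_vecs, k=3):
    # Score every knowledge-base entry against the query embedding by cosine similarity.
    q = query_vec / np.linalg.norm(query_vec)
    kb = kb_vecs / np.linalg.norm(kb_vecs, axis=1, keepdims=True)
    scores = kb @ q
    top = np.argsort(-scores)[:k]
    return top, scores[top]

# Toy unified vector space: 4 landmark entries with 8-dim embeddings (placeholders).
rng = np.random.default_rng(0)
kb_embeddings = rng.normal(size=(4, 8))
kb_texts = ["Landmark A facts", "Landmark B facts", "Landmark C facts", "Landmark D facts"]

query_embedding = rng.normal(size=8)   # would come from the image/text encoder
idx, sims = cosine_top_k(query_embedding, kb_embeddings, k=2)

# Knowledge-augmented prompt handed to the VLM.
prompt = "Context:\n" + "\n".join(kb_texts[i] for i in idx) + "\nQuestion: describe this scene."
print(prompt)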
|
2504.04997 | Yichen Chen | Yichen Kelly Chen, S\"oren Dittmer, Kinga Bernatowicz, Josep
Ar\'us-Pous, Kamen Bliznashki, John Aston, James H.F. Rudd, Carola-Bibiane
Sch\"onlieb, James Jones, Michael Roberts | SurvSurf: a partially monotonic neural network for first-hitting time
prediction of intermittently observed discrete and continuous sequential
events | 41 pages, 18 figures (including supplemental information). Submitted
to RSS: Data Science and Artificial Intelligence | null | null | null | stat.ML cs.AI cs.LG math.ST stat.AP stat.TH | http://creativecommons.org/licenses/by/4.0/ | We propose a neural-network based survival model (SurvSurf) specifically
designed for direct and simultaneous probabilistic prediction of the first
hitting time of sequential events from baseline. Unlike existing models,
SurvSurf is theoretically guaranteed to never violate the monotonic
relationship between the cumulative incidence functions of sequential events,
while allowing nonlinear influence from predictors. It also incorporates
implicit truths for unobserved intermediate events in model fitting, and
supports both discrete and continuous time and events. We also identified a
variant of the Integrated Brier Score (IBS) that showed robust correlation with
the mean squared error (MSE) between the true and predicted probabilities by
accounting for implied truths about the missing intermediate events. We
demonstrated the superiority of SurvSurf compared to modern and traditional
predictive survival models in two simulated datasets and two real-world
datasets, using MSE, the more robust IBS and by measuring the extent of
monotonicity violation.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 12:24:59 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Chen",
"Yichen Kelly",
""
],
[
"Dittmer",
"Sören",
""
],
[
"Bernatowicz",
"Kinga",
""
],
[
"Arús-Pous",
"Josep",
""
],
[
"Bliznashki",
"Kamen",
""
],
[
"Aston",
"John",
""
],
[
"Rudd",
"James H. F.",
""
],
[
"Schönlieb",
"Carola-Bibiane",
""
],
[
"Jones",
"James",
""
],
[
"Roberts",
"Michael",
""
]
] | TITLE: SurvSurf: a partially monotonic neural network for first-hitting time
prediction of intermittently observed discrete and continuous sequential
events
ABSTRACT: We propose a neural-network based survival model (SurvSurf) specifically
designed for direct and simultaneous probabilistic prediction of the first
hitting time of sequential events from baseline. Unlike existing models,
SurvSurf is theoretically guaranteed to never violate the monotonic
relationship between the cumulative incidence functions of sequential events,
while allowing nonlinear influence from predictors. It also incorporates
implicit truths for unobserved intermediate events in model fitting, and
supports both discrete and continuous time and events. We also identified a
variant of the Integrated Brier Score (IBS) that showed robust correlation with
the mean squared error (MSE) between the true and predicted probabilities by
accounting for implied truths about the missing intermediate events. We
demonstrated the superiority of SurvSurf compared to modern and traditional
predictive survival models in two simulated datasets and two real-world
datasets, using MSE, the more robust IBS and by measuring the extent of
monotonicity violation.
|
2504.05006 | Zongwei Li | Jiuyang Bu, Wenkai Li, Zongwei Li, Zeng Zhang, Xiaoqi Li | Enhancing Smart Contract Vulnerability Detection in DApps Leveraging
Fine-Tuned LLM | null | null | null | null | cs.CR | http://creativecommons.org/licenses/by/4.0/ | Decentralized applications (DApps) face significant security risks due to
vulnerabilities in smart contracts, with traditional detection methods
struggling to address emerging and machine-unauditable flaws. This paper
proposes a novel approach leveraging fine-tuned Large Language Models (LLMs) to
enhance smart contract vulnerability detection. We introduce a comprehensive
dataset of 215 real-world DApp projects (4,998 contracts), including
hard-to-detect logical errors like token price manipulation, addressing the
limitations of existing simplified benchmarks. By fine-tuning LLMs (Llama3-8B
and Qwen2-7B) with Full-Parameter Fine-Tuning (FFT) and Low-Rank Adaptation
(LoRA), our method achieves superior performance, attaining an F1-score of 0.83
with FFT and data augmentation via Random Over Sampling (ROS). Comparative
experiments demonstrate significant improvements over prompt-based LLMs and
state-of-the-art tools. Notably, the approach excels in detecting
non-machine-auditable vulnerabilities, achieving 0.97 precision and 0.68 recall
for price manipulation flaws. The results underscore the effectiveness of
domain-specific LLM fine-tuning and data augmentation in addressing real-world
DApp security challenges, offering a robust solution for blockchain ecosystem
protection.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 12:32:14 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Bu",
"Jiuyang",
""
],
[
"Li",
"Wenkai",
""
],
[
"Li",
"Zongwei",
""
],
[
"Zhang",
"Zeng",
""
],
[
"Li",
"Xiaoqi",
""
]
] | TITLE: Enhancing Smart Contract Vulnerability Detection in DApps Leveraging
Fine-Tuned LLM
ABSTRACT: Decentralized applications (DApps) face significant security risks due to
vulnerabilities in smart contracts, with traditional detection methods
struggling to address emerging and machine-unauditable flaws. This paper
proposes a novel approach leveraging fine-tuned Large Language Models (LLMs) to
enhance smart contract vulnerability detection. We introduce a comprehensive
dataset of 215 real-world DApp projects (4,998 contracts), including
hard-to-detect logical errors like token price manipulation, addressing the
limitations of existing simplified benchmarks. By fine-tuning LLMs (Llama3-8B
and Qwen2-7B) with Full-Parameter Fine-Tuning (FFT) and Low-Rank Adaptation
(LoRA), our method achieves superior performance, attaining an F1-score of 0.83
with FFT and data augmentation via Random Over Sampling (ROS). Comparative
experiments demonstrate significant improvements over prompt-based LLMs and
state-of-the-art tools. Notably, the approach excels in detecting
non-machine-auditable vulnerabilities, achieving 0.97 precision and 0.68 recall
for price manipulation flaws. The results underscore the effectiveness of
domain-specific LLM fine-tuning and data augmentation in addressing real-world
DApp security challenges, offering a robust solution for blockchain ecosystem
protection.
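As a rough illustration of the Random Over Sampling step mentioned above (the
LoRA/FFT fine-tuning itself is not shown), a plain-Python sketch that balances a
toy vulnerability-labelled corpus; the snippets and labels are invented.

import random

# Toy labelled corpus: (source snippet, vulnerability label); imbalanced on purpose.
corpus = [("transfer(a, b)", "safe")] * 8 + [("setPrice(oracle.read())", "price_manipulation")] * 2

def random_over_sample(samples, seed=0):
    # Duplicate minority-class samples until every class matches the majority count.
    rng = random.Random(seed)
    by_label = {}
    for snippet, label in samples:
        by_label.setdefault(label, []).append((snippet, label))
    target = max(len(v) for v in by_label.values())
    balanced = []
    for label, items in by_label.items():
        balanced.extend(items)
        balanced.extend(rng.choices(items, k=target - len(items)))
    rng.shuffle(balanced)
    return balanced

balanced = random_over_sample(corpus)
print({label: sum(1 for _, l in balanced if l == label) for label in ("safe", "price_manipulation")})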
|
2504.05009 | Huw Cheston | Huw Cheston, Reuben Bance, Peter M. C. Harrison | Deconstructing Jazz Piano Style Using Machine Learning | Paper: 40 pages, 11 figures, 1 table. Supplementary material: 33
pages, 48 figures, 6 tables | null | null | null | cs.SD cs.IR cs.LG eess.AS | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Artistic style has been studied for centuries, and recent advances in machine
learning create new possibilities for understanding it computationally.
However, ensuring that machine-learning models produce insights aligned with
the interests of practitioners and critics remains a significant challenge.
Here, we focus on musical style, which benefits from a rich theoretical and
mathematical analysis tradition. We train a variety of supervised-learning
models to identify 20 iconic jazz musicians across a carefully curated dataset
of 84 hours of recordings, and interpret their decision-making processes. Our
models include a novel multi-input architecture that enables four musical
domains (melody, harmony, rhythm, and dynamics) to be analysed separately.
These models enable us to address fundamental questions in music theory and
also advance the state-of-the-art in music performer identification (94%
accuracy across 20 classes). We release open-source implementations of our
models and an accompanying web application for exploring musical styles.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 12:37:39 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Cheston",
"Huw",
""
],
[
"Bance",
"Reuben",
""
],
[
"Harrison",
"Peter M. C.",
""
]
] | TITLE: Deconstructing Jazz Piano Style Using Machine Learning
ABSTRACT: Artistic style has been studied for centuries, and recent advances in machine
learning create new possibilities for understanding it computationally.
However, ensuring that machine-learning models produce insights aligned with
the interests of practitioners and critics remains a significant challenge.
Here, we focus on musical style, which benefits from a rich theoretical and
mathematical analysis tradition. We train a variety of supervised-learning
models to identify 20 iconic jazz musicians across a carefully curated dataset
of 84 hours of recordings, and interpret their decision-making processes. Our
models include a novel multi-input architecture that enables four musical
domains (melody, harmony, rhythm, and dynamics) to be analysed separately.
These models enable us to address fundamental questions in music theory and
also advance the state-of-the-art in music performer identification (94%
accuracy across 20 classes). We release open-source implementations of our
models and an accompanying web application for exploring musical styles.
|
2504.05024 | Antonia Holzapfel | Antonia Holzapfel, Andres Felipe Posada-Moreno, Sebastian Trimpe | Concept Extraction for Time Series with ECLAD-ts | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Convolutional neural networks (CNNs) for time series classification (TSC) are
being increasingly used in applications ranging from quality prediction to
medical diagnosis. The black box nature of these models makes understanding
their prediction process difficult. This issue is crucial because CNNs are
prone to learning shortcuts and biases, compromising their robustness and
alignment with human expectations. To assess whether such mechanisms are being
used and the associated risk, it is essential to provide model explanations
that reflect the inner workings of the model. Concept Extraction (CE) methods
offer such explanations, but have mostly been developed for the image domain so
far, leaving a gap in the time series domain. In this work, we present a CE and
localization method tailored to the time series domain, based on the ideas of
CE methods for images. We propose the novel method ECLAD-ts, which provides
post-hoc global explanations based on how the models encode subsets of the
input at different levels of abstraction. For this, concepts are produced by
clustering timestep-wise aggregations of CNN activation maps, and their
importance is computed based on their impact on the prediction process. We
evaluate our method on synthetic and natural datasets. Furthermore, we assess
the advantages and limitations of CE in time series through empirical results.
Our results show that ECLAD-ts effectively explains models by leveraging their
internal representations, providing useful insights about their prediction
process.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 12:49:20 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Holzapfel",
"Antonia",
""
],
[
"Posada-Moreno",
"Andres Felipe",
""
],
[
"Trimpe",
"Sebastian",
""
]
] | TITLE: Concept Extraction for Time Series with ECLAD-ts
ABSTRACT: Convolutional neural networks (CNNs) for time series classification (TSC) are
being increasingly used in applications ranging from quality prediction to
medical diagnosis. The black box nature of these models makes understanding
their prediction process difficult. This issue is crucial because CNNs are
prone to learning shortcuts and biases, compromising their robustness and
alignment with human expectations. To assess whether such mechanisms are being
used and the associated risk, it is essential to provide model explanations
that reflect the inner workings of the model. Concept Extraction (CE) methods
offer such explanations, but have mostly been developed for the image domain so
far, leaving a gap in the time series domain. In this work, we present a CE and
localization method tailored to the time series domain, based on the ideas of
CE methods for images. We propose the novel method ECLAD-ts, which provides
post-hoc global explanations based on how the models encode subsets of the
input at different levels of abstraction. For this, concepts are produced by
clustering timestep-wise aggregations of CNN activation maps, and their
importance is computed based on their impact on the prediction process. We
evaluate our method on synthetic and natural datasets. Furthermore, we assess
the advantages and limitations of CE in time series through empirical results.
Our results show that ECLAD-ts effectively explains models by leveraging their
internal representations, providing useful insights about their prediction
process.
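A hedged sketch of the clustering step described above: timestep-wise
activations from a hypothetical convolutional layer are pooled into
channel-space samples and clustered with k-means, each cluster standing in for
a candidate concept; shapes and cluster count are assumptions, not ECLAD-ts
defaults.

import numpy as np
from sklearn.cluster import KMeans

# Pretend activations: (n_series, timesteps, channels) from some conv layer of a TSC model.
rng = np.random.default_rng(0)
activations = rng.normal(size=(5, 100, 16))

# Timestep-wise aggregation: every timestep of every series becomes one sample in channel space.
samples = activations.reshape(-1, activations.shape[-1])        # (5*100, 16)

# Cluster the aggregated activations; each cluster plays the role of a candidate concept.
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(samples)
concept_per_timestep = labels.reshape(activations.shape[:2])    # (5, 100) concept id per timestep
print(concept_per_timestep[0, :10])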
|
2504.05029 | Xuan Zhang | Xuan Zhang, Xiang Deng, Hongxing Yuan, Chunyu Wei, Yushun Fan | Graph-based Diffusion Model for Collaborative Filtering | null | null | null | null | cs.SI cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Recently, diffusion-based recommendation methods have achieved impressive
results. However, existing approaches predominantly treat each user's
historical interactions as independent training samples, overlooking the
potential of higher-order collaborative signals between users and items. Such
signals, which encapsulate richer and more nuanced relationships, can be
naturally captured using graph-based data structures. To address this
limitation, we extend diffusion-based recommendation methods to the graph
domain by directly modeling user-item bipartite graphs with diffusion models.
This enables better modeling of the higher-order connectivity inherent in
complex interaction dynamics. However, this extension introduces two primary
challenges: (1) Noise Heterogeneity, where interactions are influenced by
various forms of continuous and discrete noise, and (2) Relation Explosion,
referring to the high computational costs of processing large-scale graphs. To
tackle these challenges, we propose a Graph-based Diffusion Model for
Collaborative Filtering (GDMCF). To address noise heterogeneity, we introduce a
multi-level noise corruption mechanism that integrates both continuous and
discrete noise, effectively simulating real-world interaction complexities. To
mitigate relation explosion, we design a user-active guided diffusion process
that selectively focuses on the most meaningful edges and active users,
reducing inference costs while preserving the graph's topological integrity.
Extensive experiments on three benchmark datasets demonstrate that GDMCF
consistently outperforms state-of-the-art methods, highlighting its
effectiveness in capturing higher-order collaborative signals and improving
recommendation performance.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 12:51:18 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Zhang",
"Xuan",
""
],
[
"Deng",
"Xiang",
""
],
[
"Yuan",
"Hongxing",
""
],
[
"Wei",
"Chunyu",
""
],
[
"Fan",
"Yushun",
""
]
] | TITLE: Graph-based Diffusion Model for Collaborative Filtering
ABSTRACT: Recently, diffusion-based recommendation methods have achieved impressive
results. However, existing approaches predominantly treat each user's
historical interactions as independent training samples, overlooking the
potential of higher-order collaborative signals between users and items. Such
signals, which encapsulate richer and more nuanced relationships, can be
naturally captured using graph-based data structures. To address this
limitation, we extend diffusion-based recommendation methods to the graph
domain by directly modeling user-item bipartite graphs with diffusion models.
This enables better modeling of the higher-order connectivity inherent in
complex interaction dynamics. However, this extension introduces two primary
challenges: (1) Noise Heterogeneity, where interactions are influenced by
various forms of continuous and discrete noise, and (2) Relation Explosion,
referring to the high computational costs of processing large-scale graphs. To
tackle these challenges, we propose a Graph-based Diffusion Model for
Collaborative Filtering (GDMCF). To address noise heterogeneity, we introduce a
multi-level noise corruption mechanism that integrates both continuous and
discrete noise, effectively simulating real-world interaction complexities. To
mitigate relation explosion, we design a user-active guided diffusion process
that selectively focuses on the most meaningful edges and active users,
reducing inference costs while preserving the graph's topological integrity.
Extensive experiments on three benchmark datasets demonstrate that GDMCF
consistently outperforms state-of-the-art methods, highlighting its
effectiveness in capturing higher-order collaborative signals and improving
recommendation performance.
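A loose illustration, not the paper's exact mechanism, of corrupting a toy
user-item interaction matrix with both continuous (Gaussian) and discrete
(edge-flip) noise; the noise schedule and rates are invented for the example.

import numpy as np

rng = np.random.default_rng(0)
interactions = (rng.uniform(size=(6, 10)) < 0.3).astype(float)   # toy binary user-item matrix

def corrupt(x, t, beta=0.05, flip_p=0.02, rng=rng):
    # Continuous noise: Gaussian perturbation whose variance grows with the step index t.
    continuous = x + rng.normal(scale=np.sqrt(beta * t), size=x.shape)
    # Discrete noise: flip a small, t-dependent fraction of observed/unobserved edges.
    mask = rng.uniform(size=x.shape) < flip_p * t
    discrete = np.where(mask, 1.0 - x, x)
    return continuous, discrete

cont_t3, disc_t3 = corrupt(interactions, t=3)
print(int(interactions.sum()), int(disc_t3.sum()))   # edge count before vs. after discrete flips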
|
2504.05030 | Fethiye Irmak Do\u{g}an | Wang Tang, Fethiye Irmak Dogan, Linbo Qing, Hatice Gunes | AsyReC: A Multimodal Graph-based Framework for Spatio-Temporal
Asymmetric Dyadic Relationship Classification | null | null | null | null | cs.CV cs.MM | http://creativecommons.org/licenses/by/4.0/ | Dyadic social relationships, which refer to relationships between two
individuals who know each other through repeated interactions (or not), are
shaped by shared spatial and temporal experiences. Current computational
methods for modeling these relationships face three major challenges: (1) the
failure to model asymmetric relationships, e.g., one individual may perceive
the other as a friend while the other perceives them as an acquaintance, (2)
the disruption of continuous interactions by discrete frame sampling, which
segments the temporal continuity of interaction in real-world scenarios, and
(3) the neglect of periodic behavioral cues, such as rhythmic
vocalizations or recurrent gestures, which are crucial for inferring the
evolution of dyadic relationships. To address these challenges, we propose
AsyReC, a multimodal graph-based framework for asymmetric dyadic relationship
classification, with three core innovations: (i) a triplet graph neural network
with node-edge dual attention that dynamically weights multimodal cues to
capture interaction asymmetries (addressing challenge 1); (ii) a clip-level
relationship learning architecture that preserves temporal continuity, enabling
fine-grained modeling of real-world interaction dynamics (addressing challenge
2); and (iii) a periodic temporal encoder that projects time indices onto
sine/cosine waveforms to model recurrent behavioral patterns (addressing
challenge 3). Extensive experiments on two public datasets demonstrate
state-of-the-art performance, while ablation studies validate the critical role
of asymmetric interaction modeling and periodic temporal encoding in improving
the robustness of dyadic relationship classification in real-world scenarios.
Our code is publicly available at: https://github.com/tw-repository/AsyReC.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 12:52:23 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Tang",
"Wang",
""
],
[
"Dogan",
"Fethiye Irmak",
""
],
[
"Qing",
"Linbo",
""
],
[
"Gunes",
"Hatice",
""
]
] | TITLE: AsyReC: A Multimodal Graph-based Framework for Spatio-Temporal
Asymmetric Dyadic Relationship Classification
ABSTRACT: Dyadic social relationships, which refer to relationships between two
individuals who know each other through repeated interactions (or not), are
shaped by shared spatial and temporal experiences. Current computational
methods for modeling these relationships face three major challenges: (1) the
failure to model asymmetric relationships, e.g., one individual may perceive
the other as a friend while the other perceives them as an acquaintance, (2)
the disruption of continuous interactions by discrete frame sampling, which
segments the temporal continuity of interaction in real-world scenarios, and
(3) the neglect of periodic behavioral cues, such as rhythmic
vocalizations or recurrent gestures, which are crucial for inferring the
evolution of dyadic relationships. To address these challenges, we propose
AsyReC, a multimodal graph-based framework for asymmetric dyadic relationship
classification, with three core innovations: (i) a triplet graph neural network
with node-edge dual attention that dynamically weights multimodal cues to
capture interaction asymmetries (addressing challenge 1); (ii) a clip-level
relationship learning architecture that preserves temporal continuity, enabling
fine-grained modeling of real-world interaction dynamics (addressing challenge
2); and (iii) a periodic temporal encoder that projects time indices onto
sine/cosine waveforms to model recurrent behavioral patterns (addressing
challenge 3). Extensive experiments on two public datasets demonstrate
state-of-the-art performance, while ablation studies validate the critical role
of asymmetric interaction modeling and periodic temporal encoding in improving
the robustness of dyadic relationship classification in real-world scenarios.
Our code is publicly available at: https://github.com/tw-repository/AsyReC.
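A small sketch of a periodic temporal encoder in the spirit described above:
integer time indices are projected onto sine/cosine waveforms at geometrically
spaced frequencies; the dimensionality and period constant are assumptions, not
the AsyReC settings.

import numpy as np

def periodic_time_encoding(t_indices, dim=8, max_period=10000.0):
    # Project time indices onto concatenated sine/cosine waveforms of decreasing frequency.
    t = np.asarray(t_indices, dtype=float)[:, None]                    # (T, 1)
    freqs = np.exp(-np.log(max_period) * np.arange(dim // 2) / (dim // 2))
    angles = t * freqs[None, :]                                        # (T, dim/2)
    return np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)   # (T, dim)

enc = periodic_time_encoding(np.arange(16), dim=8)
print(enc.shape)   # (16, 8)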
|
2504.05033 | Jay Kamat | Jay Kamat, J\'ulia Borr\`as, Carme Torras | CloSE: A Compact Shape- and Orientation-Agnostic Cloth State
Representation | null | null | null | null | cs.RO cs.CV | http://creativecommons.org/licenses/by/4.0/ | Cloth manipulation is a difficult problem mainly because of the non-rigid
nature of cloth, which makes a good representation of deformation essential. We
present a new representation for the deformation-state of clothes. First, we
propose the dGLI disk representation, based on topological indices computed for
segments on the edges of the cloth mesh border that are arranged on a circular
grid. The heat-map of the dGLI disk uncovers patterns that correspond to
features of the cloth state that are consistent for different shapes, sizes of
positions of the cloth, like the corners and the fold locations. We then
abstract these important features from the dGLI disk onto a circle, calling it
the Cloth StatE representation (CloSE). This representation is compact,
continuous, and general for different shapes. Finally, we show the strengths of
this representation in two relevant applications: semantic labeling and high-
and low-level planning. The code, the dataset and the video can be accessed
from : https://jaykamat99.github.io/close-representation
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 12:54:58 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Kamat",
"Jay",
""
],
[
"Borràs",
"Júlia",
""
],
[
"Torras",
"Carme",
""
]
] | TITLE: CloSE: A Compact Shape- and Orientation-Agnostic Cloth State
Representation
ABSTRACT: Cloth manipulation is a difficult problem mainly because of the non-rigid
nature of cloth, which makes a good representation of deformation essential. We
present a new representation for the deformation-state of clothes. First, we
propose the dGLI disk representation, based on topological indices computed for
segments on the edges of the cloth mesh border that are arranged on a circular
grid. The heat-map of the dGLI disk uncovers patterns that correspond to
features of the cloth state that are consistent for different shapes, sizes of
positions of the cloth, like the corners and the fold locations. We then
abstract these important features from the dGLI disk onto a circle, calling it
the Cloth StatE representation (CloSE). This representation is compact,
continuous, and general for different shapes. Finally, we show the strengths of
this representation in two relevant applications: semantic labeling and high-
and low-level planning. The code, the dataset and the video can be accessed
from : https://jaykamat99.github.io/close-representation
|
2504.05040 | Haiwan Wei | Haiwan Wei, Yitian Yuan, Xiaohan Lan, Wei Ke, Lin Ma | InstructionBench: An Instructional Video Understanding Benchmark | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite progress in video large language models (Video-LLMs), research on
instructional video understanding, crucial for enhancing access to
instructional content, remains insufficient. To address this, we introduce
InstructionBench, an Instructional video understanding Benchmark, which
challenges models' advanced temporal reasoning within instructional videos
characterized by their strict step-by-step flow. Employing GPT-4, we formulate
Q\&A pairs in open-ended and multiple-choice formats to assess both
Coarse-Grained event-level and Fine-Grained object-level reasoning. Our
filtering strategies exclude questions answerable purely by common-sense
knowledge, focusing on visual perception and analysis when evaluating Video-LLM
models. The benchmark finally contains 5k questions across over 700 videos. We
evaluate the latest Video-LLMs on our InstructionBench, finding that
closed-source models outperform open-source ones. However, even the best model,
GPT-4o, achieves only 53.42\% accuracy, indicating significant gaps in temporal
reasoning. To advance the field, we also develop a comprehensive instructional
video dataset with over 19k Q\&A pairs from nearly 2.5k videos, using an
automated data generation framework, thereby enriching the community's research
resources.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 13:05:09 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wei",
"Haiwan",
""
],
[
"Yuan",
"Yitian",
""
],
[
"Lan",
"Xiaohan",
""
],
[
"Ke",
"Wei",
""
],
[
"Ma",
"Lin",
""
]
] | TITLE: InstructionBench: An Instructional Video Understanding Benchmark
ABSTRACT: Despite progress in video large language models (Video-LLMs), research on
instructional video understanding, crucial for enhancing access to
instructional content, remains insufficient. To address this, we introduce
InstructionBench, an Instructional video understanding Benchmark, which
challenges models' advanced temporal reasoning within instructional videos
characterized by their strict step-by-step flow. Employing GPT-4, we formulate
Q\&A pairs in open-ended and multiple-choice formats to assess both
Coarse-Grained event-level and Fine-Grained object-level reasoning. Our
filtering strategies exclude questions answerable purely by common-sense
knowledge, focusing on visual perception and analysis when evaluating Video-LLM
models. The benchmark finally contains 5k questions across over 700 videos. We
evaluate the latest Video-LLMs on our InstructionBench, finding that
closed-source models outperform open-source ones. However, even the best model,
GPT-4o, achieves only 53.42\% accuracy, indicating significant gaps in temporal
reasoning. To advance the field, we also develop a comprehensive instructional
video dataset with over 19k Q\&A pairs from nearly 2.5k videos, using an
automated data generation framework, thereby enriching the community's research
resources.
|
2504.05046 | Shenghao Ren | Shenghao Ren, Yi Lu, Jiayi Huang, Jiayi Zhao, He Zhang, Tao Yu, Qiu
Shen, Xun Cao | MotionPRO: Exploring the Role of Pressure in Human MoCap and Beyond | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing human Motion Capture (MoCap) methods mostly focus on the visual
similarity while neglecting the physical plausibility. As a result, downstream
tasks such as driving virtual humans in 3D scenes or humanoid robots in the real
world suffer from issues such as timing drift and jitter, spatial problems like
sliding and penetration, and poor global trajectory accuracy. In this paper, we
revisit human MoCap from the perspective of interaction between human body and
physical world by exploring the role of pressure. Firstly, we construct a
large-scale human Motion capture dataset with Pressure, RGB and Optical sensors
(named MotionPRO), which comprises 70 volunteers performing 400 types of
motion, encompassing a total of 12.4M pose frames. Secondly, we examine both
the necessity and effectiveness of the pressure signal through two challenging
tasks: (1) pose and trajectory estimation based solely on pressure: We propose
a network that incorporates a small kernel decoder and a long-short-term
attention module, and prove that pressure can provide accurate global
trajectory and plausible lower body pose. (2) pose and trajectory estimation by
fusing pressure and RGB: We impose constraints on orthographic similarity along
the camera axis and whole-body contact along the vertical axis to enhance the
cross-attention strategy to fuse pressure and RGB feature maps. Experiments
demonstrate that fusing pressure with RGB features not only significantly
improves performance in terms of objective metrics, but also plausibly drives
virtual humans (SMPL) in 3D scene. Furthermore, we demonstrate that
incorporating physical perception enables humanoid robots to perform more
precise and stable actions, which is highly beneficial for the development of
embodied artificial intelligence. Project page is available at:
https://nju-cite-mocaphumanoid.github.io/MotionPRO/
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 13:17:24 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Ren",
"Shenghao",
""
],
[
"Lu",
"Yi",
""
],
[
"Huang",
"Jiayi",
""
],
[
"Zhao",
"Jiayi",
""
],
[
"Zhang",
"He",
""
],
[
"Yu",
"Tao",
""
],
[
"Shen",
"Qiu",
""
],
[
"Cao",
"Xun",
""
]
] | TITLE: MotionPRO: Exploring the Role of Pressure in Human MoCap and Beyond
ABSTRACT: Existing human Motion Capture (MoCap) methods mostly focus on the visual
similarity while neglecting the physical plausibility. As a result, downstream
tasks such as driving virtual humans in 3D scenes or humanoid robots in the real
world suffer from issues such as timing drift and jitter, spatial problems like
sliding and penetration, and poor global trajectory accuracy. In this paper, we
revisit human MoCap from the perspective of interaction between human body and
physical world by exploring the role of pressure. Firstly, we construct a
large-scale human Motion capture dataset with Pressure, RGB and Optical sensors
(named MotionPRO), which comprises 70 volunteers performing 400 types of
motion, encompassing a total of 12.4M pose frames. Secondly, we examine both
the necessity and effectiveness of the pressure signal through two challenging
tasks: (1) pose and trajectory estimation based solely on pressure: We propose
a network that incorporates a small kernel decoder and a long-short-term
attention module, and prove that pressure can provide accurate global
trajectory and plausible lower body pose. (2) pose and trajectory estimation by
fusing pressure and RGB: We impose constraints on orthographic similarity along
the camera axis and whole-body contact along the vertical axis to enhance the
cross-attention strategy to fuse pressure and RGB feature maps. Experiments
demonstrate that fusing pressure with RGB features not only significantly
improves performance in terms of objective metrics, but also plausibly drives
virtual humans (SMPL) in 3D scenes. Furthermore, we demonstrate that
incorporating physical perception enables humanoid robots to perform more
precise and stable actions, which is highly beneficial for the development of
embodied artificial intelligence. Project page is available at:
https://nju-cite-mocaphumanoid.github.io/MotionPRO/
|
2504.05049 | Shuai Chen | Shuai Chen, Fanman Meng, Haoran Wei, Chenhao Wu, Qingbo Wu, Linfeng
Xu, Hongliang Li | CMaP-SAM: Contraction Mapping Prior for SAM-driven Few-shot Segmentation | 7 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Few-shot segmentation (FSS) aims to segment new classes using few annotated
images. While recent FSS methods have shown considerable improvements by
leveraging Segment Anything Model (SAM), they face two critical limitations:
insufficient utilization of structural correlations in query images, and
significant information loss when converting continuous position priors to
discrete point prompts. To address these challenges, we propose CMaP-SAM, a
novel framework that introduces contraction mapping theory to optimize position
priors for SAM-driven few-shot segmentation. CMaP-SAM consists of three key
components: (1) a contraction mapping module that formulates position prior
optimization as a Banach contraction mapping with convergence guarantees. This
module iteratively refines position priors through pixel-wise structural
similarity, generating a converged prior that preserves both semantic guidance
from reference images and structural correlations in query images; (2) an
adaptive distribution alignment module bridging continuous priors with SAM's
binary mask prompt encoder; and (3) a foreground-background decoupled
refinement architecture producing accurate final segmentation masks. Extensive
experiments demonstrate CMaP-SAM's effectiveness, achieving state-of-the-art
performance with 71.1 mIoU on PASCAL-$5^i$ and 56.1 on COCO-$20^i$ datasets.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 13:19:16 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Chen",
"Shuai",
""
],
[
"Meng",
"Fanman",
""
],
[
"Wei",
"Haoran",
""
],
[
"Wu",
"Chenhao",
""
],
[
"Wu",
"Qingbo",
""
],
[
"Xu",
"Linfeng",
""
],
[
"Li",
"Hongliang",
""
]
] | TITLE: CMaP-SAM: Contraction Mapping Prior for SAM-driven Few-shot Segmentation
ABSTRACT: Few-shot segmentation (FSS) aims to segment new classes using few annotated
images. While recent FSS methods have shown considerable improvements by
leveraging Segment Anything Model (SAM), they face two critical limitations:
insufficient utilization of structural correlations in query images, and
significant information loss when converting continuous position priors to
discrete point prompts. To address these challenges, we propose CMaP-SAM, a
novel framework that introduces contraction mapping theory to optimize position
priors for SAM-driven few-shot segmentation. CMaP-SAM consists of three key
components: (1) a contraction mapping module that formulates position prior
optimization as a Banach contraction mapping with convergence guarantees. This
module iteratively refines position priors through pixel-wise structural
similarity, generating a converged prior that preserves both semantic guidance
from reference images and structural correlations in query images; (2) an
adaptive distribution alignment module bridging continuous priors with SAM's
binary mask prompt encoder; and (3) a foreground-background decoupled
refinement architecture producing accurate final segmentation masks. Extensive
experiments demonstrate CMaP-SAM's effectiveness, achieving state-of-the-art
performance with 71.1 mIoU on PASCAL-$5^i$ and 56.1 on COCO-$20^i$ datasets.
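An illustrative fixed-point iteration in the spirit of the contraction-mapping
module described above, not the CMaP-SAM implementation: a position prior is
repeatedly propagated through a row-stochastic pixel-similarity operator and
mixed back with the initial prior, so the update contracts with factor
(1 - lam) and converges.

import numpy as np

def refine_prior(similarity, prior, lam=0.3, iters=50):
    # Row-normalise the pixel-wise similarity so the update is an averaging operator.
    S = similarity / similarity.sum(axis=1, keepdims=True)
    P = prior.copy()
    for _ in range(iters):
        # Contraction step: structure-propagated prior mixed with the original prior.
        P = (1.0 - lam) * (S @ P) + lam * prior
    return P

rng = np.random.default_rng(0)
n = 64                                         # flattened toy "image" of 64 pixels
similarity = np.abs(rng.normal(size=(n, n)))   # stand-in for pixel-wise structural similarity
prior = rng.uniform(size=n)                    # stand-in for the reference-derived position prior
print(refine_prior(similarity, prior)[:5])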
|
2504.05059 | Weizi Li | Chandra Raskoti, Iftekharul Islam, Xuan Wang, and Weizi Li | MIAT: Maneuver-Intention-Aware Transformer for Spatio-Temporal
Trajectory Prediction | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Accurate vehicle trajectory prediction is critical for safe and efficient
autonomous driving, especially in mixed traffic environments with both
human-driven and autonomous vehicles. However, uncertainties introduced by
inherent driving behaviors -- such as acceleration, deceleration, and left and
right maneuvers -- pose significant challenges for reliable trajectory
prediction. We introduce a Maneuver-Intention-Aware Transformer (MIAT)
architecture, which integrates a maneuver intention awareness mechanism with
spatiotemporal interaction modeling to enhance long-horizon trajectory
predictions. We systematically investigate the impact of varying awareness of
maneuver intention on both short- and long-horizon trajectory predictions.
Evaluated on the real-world NGSIM dataset and benchmarked against various
transformer- and LSTM-based methods, our approach achieves an improvement of up
to 4.7% in short-horizon predictions and 1.6% in long-horizon predictions
compared to other intention-aware benchmark methods. Moreover, by leveraging an
intention awareness control mechanism, MIAT realizes an 11.1% performance boost
in long-horizon predictions, with a modest drop in short-horizon performance.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 13:30:00 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Raskoti",
"Chandra",
""
],
[
"Islam",
"Iftekharul",
""
],
[
"Wang",
"Xuan",
""
],
[
"Li",
"Weizi",
""
]
] | TITLE: MIAT: Maneuver-Intention-Aware Transformer for Spatio-Temporal
Trajectory Prediction
ABSTRACT: Accurate vehicle trajectory prediction is critical for safe and efficient
autonomous driving, especially in mixed traffic environments with both
human-driven and autonomous vehicles. However, uncertainties introduced by
inherent driving behaviors -- such as acceleration, deceleration, and left and
right maneuvers -- pose significant challenges for reliable trajectory
prediction. We introduce a Maneuver-Intention-Aware Transformer (MIAT)
architecture, which integrates a maneuver intention awareness mechanism with
spatiotemporal interaction modeling to enhance long-horizon trajectory
predictions. We systematically investigate the impact of varying awareness of
maneuver intention on both short- and long-horizon trajectory predictions.
Evaluated on the real-world NGSIM dataset and benchmarked against various
transformer- and LSTM-based methods, our approach achieves an improvement of up
to 4.7% in short-horizon predictions and 1.6% in long-horizon predictions
compared to other intention-aware benchmark methods. Moreover, by leveraging an
intention awareness control mechanism, MIAT realizes an 11.1% performance boost
in long-horizon predictions, with a modest drop in short-horizon performance.
|
2504.05060 | Weidong Su | Yong-Ying Zeng, Zi-Ju Liao, Jun-Yi Li, Wei-Dong Su | Universal scaling laws of boundary-driven turbulence | null | null | null | null | physics.flu-dyn | http://creativecommons.org/licenses/by/4.0/ | Turbulence is a fundamental flow phenomenon, typically anisotropic at large
scales and approximately isotropic at small scales. The classical Kolmogorov
scaling laws (2/3, -5/3 and 4/5) have been well-established for turbulence
without small-scale body forcing, describing second-order velocity structure
functions, energy spectra, and third-order velocity structure functions in an
intermediate small-scale range. However, their validity boundary remains
unclear. Here, we identify new 1 and -2 scaling laws (replacing 2/3 and -5/3
laws) alongside the unchanged 4/5 law in the core region of boundary-driven
turbulence, where energy is injected solely through viscous friction at moving
boundaries. Local isotropy is recovered after high-pass filtering. Notably,
odd-order velocity structure functions with and without absolute value exhibit
distinct scaling exponents. A characteristic speed in the inertial range,
derived from the constant ratio of third- to second-order structure functions,
quantifies the time-averaged projectile speed at the bulk interface. Based on
energy dissipation rate and the characteristic speed, a phenomenological
framework for structure functions is developed together with a model for
probability distributions of velocity increment at distinct small-scales. The
universal scaling laws formulated can produce the full set of scaling exponents
for low- and high-order velocity structure functions, including the odd orders
both with and without absolute value, which are validated by direct
numerical simulations and experimental datasets.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 13:33:23 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Zeng",
"Yong-Ying",
""
],
[
"Liao",
"Zi-Ju",
""
],
[
"Li",
"Jun-Yi",
""
],
[
"Su",
"Wei-Dong",
""
]
] | TITLE: Universal scaling laws of boundary-driven turbulence
ABSTRACT: Turbulence is a fundamental flow phenomenon, typically anisotropic at large
scales and approximately isotropic at small scales. The classical Kolmogorov
scaling laws (2/3, -5/3 and 4/5) have been well-established for turbulence
without small-scale body forcing, describing second-order velocity structure
functions, energy spectra, and third-order velocity structure functions in an
intermediate small-scale range. However, their validity boundary remains
unclear. Here, we identify new 1 and -2 scaling laws (replacing 2/3 and -5/3
laws) alongside the unchanged 4/5 law in the core region of boundary-driven
turbulence, where energy is injected solely through viscous friction at moving
boundaries. Local isotropy is recovered after high-pass filtering. Notably,
odd-order velocity structure functions with and without absolute value exhibit
distinct scaling exponents. A characteristic speed in the inertial range,
derived from the constant ratio of third- to second-order structure functions,
quantifies the time-averaged projectile speed at the bulk interface. Based on
energy dissipation rate and the characteristic speed, a phenomenological
framework for structure functions is developed together with a model for
probability distributions of velocity increment at distinct small-scales. The
universal scaling laws formulated can produce the full set of scaling exponents
for low- and high-order velocity structure functions, including the odd orders
both with and without absolute value, which are validated by direct
numerical simulations and experimental datasets.
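A minimal numerical sketch of how second- and third-order velocity structure
functions can be estimated from a 1-D velocity record; the synthetic signal
below is only a stand-in, not turbulence data, so its fitted slope is not
expected to reproduce the reported laws.

import numpy as np

def structure_functions(u, max_sep=64):
    # Second-order (squared) and signed third-order velocity increments over separations r.
    seps = np.arange(1, max_sep)
    s2 = np.array([np.mean((u[r:] - u[:-r]) ** 2) for r in seps])
    s3 = np.array([np.mean((u[r:] - u[:-r]) ** 3) for r in seps])
    return seps, s2, s3

rng = np.random.default_rng(0)
u = np.cumsum(rng.normal(size=4096))   # a rough, correlated synthetic signal
r, s2, s3 = structure_functions(u)
# A log-log fit of S2(r) gives an empirical scaling exponent to compare with the 2/3 (or 1) law.
print(np.polyfit(np.log(r), np.log(s2), 1)[0])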
|
2504.05062 | Chenfeng Xu | Chenfeng Xu | LDGNet: A Lightweight Difference Guiding Network for Remote Sensing
Change Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid advancement of deep learning, the field of change detection
(CD) in remote sensing imagery has achieved remarkable progress. Existing
change detection methods primarily focus on achieving higher accuracy with
increased computational costs and parameter sizes, leaving development of
lightweight methods for rapid real-world processing an underexplored challenge.
To address this challenge, we propose a Lightweight Difference Guiding Network
(LDGNet), leveraging the absolute difference image to guide optical remote sensing
change detection. First, to enhance the feature representation capability of
the lightweight backbone network, we propose the Difference Guiding Module
(DGM), which leverages multi-scale features extracted from the absolute
difference image to progressively influence the original image encoder at each
layer, thereby reinforcing feature extraction. Second, we propose the
Difference-Aware Dynamic Fusion (DADF) module with Visual State Space Model
(VSSM) for lightweight long-range dependency modeling. The module first uses
feature absolute differences to guide VSSM's global contextual modeling of
change regions, then employs difference attention to dynamically fuse these
long-range features with feature differences, enhancing change semantics while
suppressing noise and background. Extensive experiments on multiple datasets
demonstrate that our method achieves comparable or superior performance to
current state-of-the-art (SOTA) methods requiring several times more
computation, while maintaining only 3.43M parameters and 1.12G FLOPs.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 13:33:54 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Xu",
"Chenfeng",
""
]
] | TITLE: LDGNet: A Lightweight Difference Guiding Network for Remote Sensing
Change Detection
ABSTRACT: With the rapid advancement of deep learning, the field of change detection
(CD) in remote sensing imagery has achieved remarkable progress. Existing
change detection methods primarily focus on achieving higher accuracy with
increased computational costs and parameter sizes, leaving development of
lightweight methods for rapid real-world processing an underexplored challenge.
To address this challenge, we propose a Lightweight Difference Guiding Network
(LDGNet), leveraging the absolute difference image to guide optical remote sensing
change detection. First, to enhance the feature representation capability of
the lightweight backbone network, we propose the Difference Guiding Module
(DGM), which leverages multi-scale features extracted from the absolute
difference image to progressively influence the original image encoder at each
layer, thereby reinforcing feature extraction. Second, we propose the
Difference-Aware Dynamic Fusion (DADF) module with Visual State Space Model
(VSSM) for lightweight long-range dependency modeling. The module first uses
feature absolute differences to guide VSSM's global contextual modeling of
change regions, then employs difference attention to dynamically fuse these
long-range features with feature differences, enhancing change semantics while
suppressing noise and background. Extensive experiments on multiple datasets
demonstrate that our method achieves comparable or superior performance to
current state-of-the-art (SOTA) methods requiring several times more
computation, while maintaining only 3.43M parameters and 1.12G FLOPs.
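A trivial sketch of the guidance signal named above, the absolute difference
image of two co-registered acquisitions; the toy arrays are placeholders for
real bi-temporal remote sensing imagery.

import numpy as np

def absolute_difference_image(img_t1, img_t2):
    # Per-pixel absolute difference of two co-registered images, used here as guidance.
    return np.abs(img_t1.astype(np.float32) - img_t2.astype(np.float32))

rng = np.random.default_rng(0)
before = rng.integers(0, 256, size=(4, 4, 3), dtype=np.uint8)
after = rng.integers(0, 256, size=(4, 4, 3), dtype=np.uint8)
diff = absolute_difference_image(before, after)
print(diff.mean())   # coarse proxy for how much the scene changed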
|
2504.05081 | Tianshi Zheng | Tianshi Zheng, Yixiang Chen, Chengxi Li, Chunyang Li, Qing Zong,
Haochen Shi, Baixuan Xu, Yangqiu Song, Ginny Y. Wong, Simon See | The Curse of CoT: On the Limitations of Chain-of-Thought in In-Context
Learning | 30 pages, 12 tables, 6 figures | null | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | Chain-of-Thought (CoT) prompting has been widely recognized for its ability
to enhance reasoning capabilities in large language models (LLMs) through the
generation of explicit explanatory rationales. However, our study reveals a
surprising contradiction to this prevailing perspective. Through extensive
experiments involving 16 state-of-the-art LLMs and nine diverse pattern-based
in-context learning (ICL) datasets, we demonstrate that CoT and its reasoning
variants consistently underperform direct answering across varying model scales
and benchmark complexities. To systematically investigate this unexpected
phenomenon, we designed extensive experiments to validate several hypothetical
explanations. Our analysis uncovers a fundamental explicit-implicit duality
driving CoT's performance in pattern-based ICL: while explicit reasoning
falters due to LLMs' struggles to infer underlying patterns from
demonstrations, implicit reasoning, disrupted by the increased contextual
distance of CoT rationales, often compensates, delivering correct answers
despite flawed rationales. This duality explains CoT's relative
underperformance, as noise from weak explicit inference undermines the process,
even as implicit mechanisms partially salvage outcomes. Notably, even long-CoT
reasoning models, which excel in abstract and symbolic reasoning, fail to fully
overcome these limitations despite higher computational costs. Our findings
challenge existing assumptions regarding the universal efficacy of CoT,
yielding novel insights into its limitations and guiding future research toward
more nuanced and effective reasoning methodologies for LLMs.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 13:51:06 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Zheng",
"Tianshi",
""
],
[
"Chen",
"Yixiang",
""
],
[
"Li",
"Chengxi",
""
],
[
"Li",
"Chunyang",
""
],
[
"Zong",
"Qing",
""
],
[
"Shi",
"Haochen",
""
],
[
"Xu",
"Baixuan",
""
],
[
"Song",
"Yangqiu",
""
],
[
"Wong",
"Ginny Y.",
""
],
[
"See",
"Simon",
""
]
] | TITLE: The Curse of CoT: On the Limitations of Chain-of-Thought in In-Context
Learning
ABSTRACT: Chain-of-Thought (CoT) prompting has been widely recognized for its ability
to enhance reasoning capabilities in large language models (LLMs) through the
generation of explicit explanatory rationales. However, our study reveals a
surprising contradiction to this prevailing perspective. Through extensive
experiments involving 16 state-of-the-art LLMs and nine diverse pattern-based
in-context learning (ICL) datasets, we demonstrate that CoT and its reasoning
variants consistently underperform direct answering across varying model scales
and benchmark complexities. To systematically investigate this unexpected
phenomenon, we designed extensive experiments to validate several hypothetical
explanations. Our analysis uncovers a fundamental explicit-implicit duality
driving CoT's performance in pattern-based ICL: while explicit reasoning
falters due to LLMs' struggles to infer underlying patterns from
demonstrations, implicit reasoning, disrupted by the increased contextual
distance of CoT rationales, often compensates, delivering correct answers
despite flawed rationales. This duality explains CoT's relative
underperformance, as noise from weak explicit inference undermines the process,
even as implicit mechanisms partially salvage outcomes. Notably, even long-CoT
reasoning models, which excel in abstract and symbolic reasoning, fail to fully
overcome these limitations despite higher computational costs. Our findings
challenge existing assumptions regarding the universal efficacy of CoT,
yielding novel insights into its limitations and guiding future research toward
more nuanced and effective reasoning methodologies for LLMs.
|
2504.05104 | Markus Leippold | Saeid Ario Vaghefi, Aymane Hachcham, Veronica Grasso, Jiska Manicus,
Nakiete Msemo, Chiara Colesanti Senni, Markus Leippold | AI for Climate Finance: Agentic Retrieval and Multi-Step Reasoning for
Early Warning System Investments | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Tracking financial investments in climate adaptation is a complex and
expertise-intensive task, particularly for Early Warning Systems (EWS), which
lack standardized financial reporting across multilateral development banks
(MDBs) and funds. To address this challenge, we introduce an LLM-based agentic
AI system that integrates contextual retrieval, fine-tuning, and multi-step
reasoning to extract relevant financial data, classify investments, and ensure
compliance with funding guidelines. Our study focuses on a real-world
application: tracking EWS investments in the Climate Risk and Early Warning
Systems (CREWS) Fund. We analyze 25 MDB project documents and evaluate multiple
AI-driven classification methods, including zero-shot and few-shot learning,
fine-tuned transformer-based classifiers, chain-of-thought (CoT) prompting, and
an agent-based retrieval-augmented generation (RAG) approach. Our results show
that the agent-based RAG approach significantly outperforms other methods,
achieving 87\% accuracy, 89\% precision, and 83\% recall. Additionally, we
contribute a benchmark dataset and expert-annotated corpus, providing a
valuable resource for future research in AI-driven financial tracking and
climate finance transparency.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 14:11:11 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Vaghefi",
"Saeid Ario",
""
],
[
"Hachcham",
"Aymane",
""
],
[
"Grasso",
"Veronica",
""
],
[
"Manicus",
"Jiska",
""
],
[
"Msemo",
"Nakiete",
""
],
[
"Senni",
"Chiara Colesanti",
""
],
[
"Leippold",
"Markus",
""
]
] | TITLE: AI for Climate Finance: Agentic Retrieval and Multi-Step Reasoning for
Early Warning System Investments
ABSTRACT: Tracking financial investments in climate adaptation is a complex and
expertise-intensive task, particularly for Early Warning Systems (EWS), which
lack standardized financial reporting across multilateral development banks
(MDBs) and funds. To address this challenge, we introduce an LLM-based agentic
AI system that integrates contextual retrieval, fine-tuning, and multi-step
reasoning to extract relevant financial data, classify investments, and ensure
compliance with funding guidelines. Our study focuses on a real-world
application: tracking EWS investments in the Climate Risk and Early Warning
Systems (CREWS) Fund. We analyze 25 MDB project documents and evaluate multiple
AI-driven classification methods, including zero-shot and few-shot learning,
fine-tuned transformer-based classifiers, chain-of-thought (CoT) prompting, and
an agent-based retrieval-augmented generation (RAG) approach. Our results show
that the agent-based RAG approach significantly outperforms other methods,
achieving 87\% accuracy, 89\% precision, and 83\% recall. Additionally, we
contribute a benchmark dataset and expert-annotated corpus, providing a
valuable resource for future research in AI-driven financial tracking and
climate finance transparency.
|
2504.05107 | Baosheng Li | Baosheng Li, Weifeng Gao, Zehui Xiong, Jin Xie, Binquan Guo and Miao
Du | Decentralized Semantic Federated Learning for Real-Time Public Safety
Tasks: Challenges, Methods, and Directions | null | null | null | null | cs.DC | http://creativecommons.org/licenses/by/4.0/ | Public safety tasks rely on the collaborative functioning of multiple edge
devices (MEDs) and base stations (BSs) in different regions, consuming
significant communication energy and computational resources to execute
critical operations like fire monitoring and rescue missions. Traditional
federated edge computing (EC) methods require frequent central communication,
consuming substantial energy and struggling with resource heterogeneity across
devices, networks, and data. To this end, this paper introduces a decentralized
semantic federated learning (DSFL) framework tailored for large-scale wireless
communication systems and heterogeneous MEDs. The framework incorporates a
hierarchical semantic communication (SC) scheme to extend EC coverage and
reduce communication overhead. Specifically, the lower layer optimizes intra-BS
communication through task-specific encoding and selective transmission under
constrained networks, while the upper layer ensures robust inter-BS
communication via semantic aggregation and distributed consensus across
different regions. To further balance communication costs and semantic
accuracy, an energy-efficient aggregation scheme is developed for both intra-BS
and inter-BS communication. The effectiveness of the DSFL framework is
demonstrated through a case study using the BoWFire dataset, showcasing its
potential in real-time fire detection scenarios. Finally, we outline open
issues for edge intelligence and SC in public safety tasks.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 14:13:50 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Li",
"Baosheng",
""
],
[
"Gao",
"Weifeng",
""
],
[
"Xiong",
"Zehui",
""
],
[
"Xie",
"Jin",
""
],
[
"Guo",
"Binquan",
""
],
[
"Du",
"Miao",
""
]
] | TITLE: Decentralized Semantic Federated Learning for Real-Time Public Safety
Tasks: Challenges, Methods, and Directions
ABSTRACT: Public safety tasks rely on the collaborative functioning of multiple edge
devices (MEDs) and base stations (BSs) in different regions, consuming
significant communication energy and computational resources to execute
critical operations like fire monitoring and rescue missions. Traditional
federated edge computing (EC) methods require frequent central communication,
consuming substantial energy and struggling with resource heterogeneity across
devices, networks, and data. To this end, this paper introduces a decentralized
semantic federated learning (DSFL) framework tailored for large-scale wireless
communication systems and heterogeneous MEDs. The framework incorporates a
hierarchical semantic communication (SC) scheme to extend EC coverage and
reduce communication overhead. Specifically, the lower layer optimizes intra-BS
communication through task-specific encoding and selective transmission under
constrained networks, while the upper layer ensures robust inter-BS
communication via semantic aggregation and distributed consensus across
different regions. To further balance communication costs and semantic
accuracy, an energy-efficient aggregation scheme is developed for both intra-BS
and inter-BS communication. The effectiveness of the DSFL framework is
demonstrated through a case study using the BoWFire dataset, showcasing its
potential in real-time fire detection scenarios. Finally, we outline open
issues for edge intelligence and SC in public safety tasks.
|
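The inter-BS "distributed consensus" step described in the DSFL record above can be pictured with a minimal decentralized averaging sketch. This is an illustrative assumption only: uniform neighbour weights and plain parameter averaging stand in for the paper's semantic aggregation and energy-aware weighting.

    # Minimal sketch, assuming uniform neighbour weights and plain parameter averaging;
    # the paper's semantic aggregation and energy-efficient weighting differ.
    import numpy as np

    def consensus_round(params, neighbours):
        # params: {bs_id: parameter vector}; neighbours: {bs_id: list of neighbour ids}
        return {
            bs: np.mean([params[bs]] + [params[n] for n in neighbours[bs]], axis=0)
            for bs in params
        }

    params = {0: np.array([1.0, 0.0]), 1: np.array([0.0, 1.0]), 2: np.array([0.5, 0.5])}
    neighbours = {0: [1], 1: [0, 2], 2: [1]}
    for _ in range(5):
        params = consensus_round(params, neighbours)
    print({k: np.round(v, 3) for k, v in params.items()})  # values converge toward a common model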
2504.05112 | Lv Dakang | Ronghui Zhang, Dakang Lyu, Tengfei Li, Yunfan Wu, Ujjal Manandhar,
Benfei Wang, Junzhou Chen, Bolin Gao, Danwei Wang, and Yiqiu Tan | ABCDWaveNet: Advancing Robust Road Ponding Detection in Fog through
Dynamic Frequency-Spatial Synergy | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Road ponding presents a significant threat to vehicle safety, particularly in
adverse fog conditions, where reliable detection remains a persistent challenge
for Advanced Driver Assistance Systems (ADAS). To address this, we propose
ABCDWaveNet, a novel deep learning framework leveraging Dynamic
Frequency-Spatial Synergy for robust ponding detection in fog. The core of
ABCDWaveNet achieves this synergy by integrating dynamic convolution for
adaptive feature extraction across varying visibilities with a wavelet-based
module for synergistic frequency-spatial feature enhancement, significantly
improving robustness against fog interference. Building on this foundation,
ABCDWaveNet captures multi-scale structural and contextual information,
subsequently employing an Adaptive Attention Coupling Gate (AACG) to adaptively
fuse global and local features for enhanced accuracy. To facilitate realistic
evaluations under combined adverse conditions, we introduce the Foggy Low-Light
Puddle dataset. Extensive experiments demonstrate that ABCDWaveNet establishes
new state-of-the-art performance, achieving significant Intersection over Union
(IoU) gains of 3.51%, 1.75%, and 1.03% on the Foggy-Puddle, Puddle-1000, and
our Foggy Low-Light Puddle datasets, respectively. Furthermore, its processing
speed of 25.48 FPS on an NVIDIA Jetson AGX Orin confirms its suitability for
ADAS deployment. These findings underscore the effectiveness of the proposed
Dynamic Frequency-Spatial Synergy within ABCDWaveNet, offering valuable
insights for developing proactive road safety solutions capable of operating
reliably in challenging weather conditions.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 14:15:48 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Zhang",
"Ronghui",
""
],
[
"Lyu",
"Dakang",
""
],
[
"Li",
"Tengfei",
""
],
[
"Wu",
"Yunfan",
""
],
[
"Manandhar",
"Ujjal",
""
],
[
"Wang",
"Benfei",
""
],
[
"Chen",
"Junzhou",
""
],
[
"Gao",
"Bolin",
""
],
[
"Wang",
"Danwei",
""
],
[
"Tan",
"Yiqiu",
""
]
] | TITLE: ABCDWaveNet: Advancing Robust Road Ponding Detection in Fog through
Dynamic Frequency-Spatial Synergy
ABSTRACT: Road ponding presents a significant threat to vehicle safety, particularly in
adverse fog conditions, where reliable detection remains a persistent challenge
for Advanced Driver Assistance Systems (ADAS). To address this, we propose
ABCDWaveNet, a novel deep learning framework leveraging Dynamic
Frequency-Spatial Synergy for robust ponding detection in fog. The core of
ABCDWaveNet achieves this synergy by integrating dynamic convolution for
adaptive feature extraction across varying visibilities with a wavelet-based
module for synergistic frequency-spatial feature enhancement, significantly
improving robustness against fog interference. Building on this foundation,
ABCDWaveNet captures multi-scale structural and contextual information,
subsequently employing an Adaptive Attention Coupling Gate (AACG) to adaptively
fuse global and local features for enhanced accuracy. To facilitate realistic
evaluations under combined adverse conditions, we introduce the Foggy Low-Light
Puddle dataset. Extensive experiments demonstrate that ABCDWaveNet establishes
new state-of-the-art performance, achieving significant Intersection over Union
(IoU) gains of 3.51%, 1.75%, and 1.03% on the Foggy-Puddle, Puddle-1000, and
our Foggy Low-Light Puddle datasets, respectively. Furthermore, its processing
speed of 25.48 FPS on an NVIDIA Jetson AGX Orin confirms its suitability for
ADAS deployment. These findings underscore the effectiveness of the proposed
Dynamic Frequency-Spatial Synergy within ABCDWaveNet, offering valuable
insights for developing proactive road safety solutions capable of operating
reliably in challenging weather conditions.
|
2504.05125 | Suhang Gu | Suhang Gu, Ye Wang, Yongxin Chou, Jinliang Cong, Mingli Lu, Zhuqing
Jiao | Interpretable Style Takagi-Sugeno-Kang Fuzzy Clustering | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Clustering is an efficient and essential technique for exploring latent
knowledge of data. However, limited attention has been given to the
interpretability of the clusters detected by most clustering algorithms. In
addition, due to the homogeneity of data, different groups of data have their
own homogeneous styles. In this paper, the above two aspects are considered,
and an interpretable style Takagi-Sugeno-Kang (TSK) fuzzy clustering
(IS-TSK-FC) algorithm is proposed. The clustering behavior of IS-TSK-FC is
fully guided by the TSK fuzzy inference on fuzzy rules. In particular, samples
are grouped into clusters represented by the corresponding consequent vectors
of all fuzzy rules learned in an unsupervised manner. This can explain how the
clusters are generated in detail, thus making the underlying decision-making
process of the IS-TSK-FC interpretable. Moreover, a series of style matrices
are introduced to facilitate the consequents of fuzzy rules in IS-TSK-FC by
capturing the styles of clusters as well as the nuances between different
styles. Consequently, all the fuzzy rules in IS-TSK-FC have powerful data
representation capability. After determining the antecedents of all the fuzzy
rules, the optimization problem of IS-TSK-FC can be iteratively solved in an
alternating manner. The effectiveness of IS-TSK-FC as an interpretable
clustering tool is validated through extensive experiments on benchmark
datasets with unknown implicit/explicit styles. In particular, the superior
clustering performance of IS-TSK-FC is demonstrated on case studies where
different groups of data present explicit styles. The source code of IS-TSK-FC
can be downloaded from https://github.com/gusuhang10/IS-TSK-FC.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 14:28:56 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Gu",
"Suhang",
""
],
[
"Wang",
"Ye",
""
],
[
"Chou",
"Yongxin",
""
],
[
"Cong",
"Jinliang",
""
],
[
"Lu",
"Mingli",
""
],
[
"Jiao",
"Zhuqing",
""
]
] | TITLE: Interpretable Style Takagi-Sugeno-Kang Fuzzy Clustering
ABSTRACT: Clustering is an efficient and essential technique for exploring latent
knowledge of data. However, limited attention has been given to the
interpretability of the clusters detected by most clustering algorithms. In
addition, due to the homogeneity of data, different groups of data have their
own homogeneous styles. In this paper, the above two aspects are considered,
and an interpretable style Takagi-Sugeno-Kang (TSK) fuzzy clustering
(IS-TSK-FC) algorithm is proposed. The clustering behavior of IS-TSK-FC is
fully guided by the TSK fuzzy inference on fuzzy rules. In particular, samples
are grouped into clusters represented by the corresponding consequent vectors
of all fuzzy rules learned in an unsupervised manner. This can explain how the
clusters are generated in detail, thus making the underlying decision-making
process of the IS-TSK-FC interpretable. Moreover, a series of style matrices
are introduced to facilitate the consequents of fuzzy rules in IS-TSK-FC by
capturing the styles of clusters as well as the nuances between different
styles. Consequently, all the fuzzy rules in IS-TSK-FC have powerful data
representation capability. After determining the antecedents of all the fuzzy
rules, the optimization problem of IS-TSK-FC can be iteratively solved in an
alternating manner. The effectiveness of IS-TSK-FC as an interpretable
clustering tool is validated through extensive experiments on benchmark
datasets with unknown implicit/explicit styles. In particular, the superior
clustering performance of IS-TSK-FC is demonstrated on case studies where
different groups of data present explicit styles. The source code of IS-TSK-FC
can be downloaded from https://github.com/gusuhang10/IS-TSK-FC.
|
2504.05140 | Shuai Han | Shuai Han, Lukas Stelz, Thomas R. Sokolowski, Kai Zhou, Horst
St\"ocker | Unifying Physics- and Data-Driven Modeling via Novel Causal
Spatiotemporal Graph Neural Network for Interpretable Epidemic Forecasting | 32 pages, 12 figures. Submitted to Expert Systems with Applications
and currently under review. This version includes minor revisions. The work
proposes a physics-informed deep learning framework integrating a novel
epidemic model with causal spatiotemporal graph neural networks for
interpretable forecasting | null | null | null | cs.LG physics.soc-ph q-bio.QM stat.ML | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Accurate epidemic forecasting is crucial for effective disease control and
prevention. Traditional compartmental models often struggle to estimate
temporally and spatially varying epidemiological parameters, while deep
learning models typically overlook disease transmission dynamics and lack
interpretability in the epidemiological context. To address these limitations,
we propose a novel Causal Spatiotemporal Graph Neural Network (CSTGNN), a
hybrid framework that integrates a Spatio-Contact SIR model with Graph Neural
Networks (GNNs) to capture the spatiotemporal propagation of epidemics.
Inter-regional human mobility exhibits continuous and smooth spatiotemporal
patterns, leading to adjacent graph structures that share underlying mobility
dynamics. To model these dynamics, we employ an adaptive static connectivity
graph to represent the stable components of human mobility and utilize a
temporal dynamics model to capture fluctuations within these patterns. By
integrating the adaptive static connectivity graph with the temporal dynamics
graph, we construct a dynamic graph that encapsulates the comprehensive
properties of human mobility networks. Additionally, to capture temporal trends
and variations in infectious disease spread, we introduce a temporal
decomposition model to handle temporal dependence. This model is then
integrated with a dynamic graph convolutional network for epidemic forecasting.
We validate our model using real-world datasets at the provincial level in
China and the state level in Germany. Extensive studies demonstrate that our
method effectively models the spatiotemporal dynamics of infectious diseases,
providing a valuable tool for forecasting and intervention strategies.
Furthermore, analysis of the learned parameters offers insights into disease
transmission mechanisms, enhancing the interpretability and practical
applicability of our model.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 14:46:11 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Han",
"Shuai",
""
],
[
"Stelz",
"Lukas",
""
],
[
"Sokolowski",
"Thomas R.",
""
],
[
"Zhou",
"Kai",
""
],
[
"Stöcker",
"Horst",
""
]
] | TITLE: Unifying Physics- and Data-Driven Modeling via Novel Causal
Spatiotemporal Graph Neural Network for Interpretable Epidemic Forecasting
ABSTRACT: Accurate epidemic forecasting is crucial for effective disease control and
prevention. Traditional compartmental models often struggle to estimate
temporally and spatially varying epidemiological parameters, while deep
learning models typically overlook disease transmission dynamics and lack
interpretability in the epidemiological context. To address these limitations,
we propose a novel Causal Spatiotemporal Graph Neural Network (CSTGNN), a
hybrid framework that integrates a Spatio-Contact SIR model with Graph Neural
Networks (GNNs) to capture the spatiotemporal propagation of epidemics.
Inter-regional human mobility exhibits continuous and smooth spatiotemporal
patterns, leading to adjacent graph structures that share underlying mobility
dynamics. To model these dynamics, we employ an adaptive static connectivity
graph to represent the stable components of human mobility and utilize a
temporal dynamics model to capture fluctuations within these patterns. By
integrating the adaptive static connectivity graph with the temporal dynamics
graph, we construct a dynamic graph that encapsulates the comprehensive
properties of human mobility networks. Additionally, to capture temporal trends
and variations in infectious disease spread, we introduce a temporal
decomposition model to handle temporal dependence. This model is then
integrated with a dynamic graph convolutional network for epidemic forecasting.
We validate our model using real-world datasets at the provincial level in
China and the state level in Germany. Extensive studies demonstrate that our
method effectively models the spatiotemporal dynamics of infectious diseases,
providing a valuable tool for forecasting and intervention strategies.
Furthermore, analysis of the learned parameters offers insights into disease
transmission mechanisms, enhancing the interpretability and practical
applicability of our model.
|
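The Spatio-Contact SIR component referenced in the CSTGNN record above builds on a standard networked SIR update. The sketch below shows only that textbook step; the two-region mobility matrix and the fixed beta/gamma values are illustrative assumptions, not the learned, time-varying parameters of the paper.

    # Minimal networked SIR step, assuming a fixed mobility matrix and constant rates.
    import numpy as np

    def sir_step(S, I, R, mobility, beta=0.3, gamma=0.1):
        # Force of infection in each region mixes local and imported infections.
        foi = beta * (mobility @ I)
        new_inf = foi * S
        new_rec = gamma * I
        return S - new_inf, I + new_inf - new_rec, R + new_rec

    mobility = np.array([[0.9, 0.1], [0.2, 0.8]])   # hypothetical 2-region contact matrix
    S, I, R = np.array([0.99, 1.0]), np.array([0.01, 0.0]), np.zeros(2)
    for _ in range(3):
        S, I, R = sir_step(S, I, R, mobility)
    print(np.round(I, 4))                           # infected fraction per region after 3 steps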
2504.05148 | Yasuhiro Yao | Yasuhiro Yao, Ryoichi Ishikawa, Takeshi Oishi | Stereo-LiDAR Fusion by Semi-Global Matching With Discrete
Disparity-Matching Cost and Semidensification | 8 pages, 8 figures, 7 tables | in IEEE Robotics and Automation Letters, vol. 10, no. 5, pp.
4548-4555, May 2025 | 10.1109/LRA.2025.3552236 | null | cs.CV cs.RO | http://creativecommons.org/licenses/by/4.0/ | We present a real-time, non-learning depth estimation method that fuses Light
Detection and Ranging (LiDAR) data with stereo camera input. Our approach
comprises three key techniques: Semi-Global Matching (SGM) stereo with Discrete
Disparity-matching Cost (DDC), semidensification of LiDAR disparity, and a
consistency check that combines stereo images and LiDAR data. Each of these
components is designed for parallelization on a GPU to realize real-time
performance. When it was evaluated on the KITTI dataset, the proposed method
achieved an error rate of 2.79\%, outperforming the previous state-of-the-art
real-time stereo-LiDAR fusion method, which had an error rate of 3.05\%.
Furthermore, we tested the proposed method in various scenarios, including
different LiDAR point densities, varying weather conditions, and indoor
environments, to demonstrate its high adaptability. We believe that the
real-time and non-learning nature of our method makes it highly practical for
applications in robotics and automation.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 14:54:08 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Yao",
"Yasuhiro",
""
],
[
"Ishikawa",
"Ryoichi",
""
],
[
"Oishi",
"Takeshi",
""
]
] | TITLE: Stereo-LiDAR Fusion by Semi-Global Matching With Discrete
Disparity-Matching Cost and Semidensification
ABSTRACT: We present a real-time, non-learning depth estimation method that fuses Light
Detection and Ranging (LiDAR) data with stereo camera input. Our approach
comprises three key techniques: Semi-Global Matching (SGM) stereo with Discrete
Disparity-matching Cost (DDC), semidensification of LiDAR disparity, and a
consistency check that combines stereo images and LiDAR data. Each of these
components is designed for parallelization on a GPU to realize real-time
performance. When it was evaluated on the KITTI dataset, the proposed method
achieved an error rate of 2.79\%, outperforming the previous state-of-the-art
real-time stereo-LiDAR fusion method, which had an error rate of 3.05\%.
Furthermore, we tested the proposed method in various scenarios, including
different LiDAR point densities, varying weather conditions, and indoor
environments, to demonstrate its high adaptability. We believe that the
real-time and non-learning nature of our method makes it highly practical for
applications in robotics and automation.
|
2504.05158 | Yinfeng Yu | Xuechun Shao, Yinfeng Yu, Liejun Wang | Leveraging Label Potential for Enhanced Multimodal Emotion Recognition | Main paper (8 pages). Accepted for publication by IJCNN 2025 | null | null | null | cs.SD cs.AI eess.AS | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Multimodal emotion recognition (MER) seeks to integrate various modalities to
predict emotional states accurately. However, most current research focuses
solely on the fusion of audio and text features, overlooking the valuable
information in emotion labels. This oversight could potentially hinder the
performance of existing methods, as emotion labels harbor rich, insightful
information that could significantly aid MER. We introduce a novel model called
Label Signal-Guided Multimodal Emotion Recognition (LSGMER) to overcome this
limitation. This model aims to fully harness the power of emotion label
information to boost the classification accuracy and stability of MER.
Specifically, LSGMER employs a Label Signal Enhancement module that optimizes
the representation of modality features by interacting with audio and text
features through label embeddings, enabling it to capture the nuances of
emotions precisely. Furthermore, we propose a Joint Objective Optimization (JOO)
approach to enhance classification accuracy by introducing the
Attribution-Prediction Consistency Constraint (APC), which strengthens the
alignment between fused features and emotion categories. Extensive experiments
conducted on the IEMOCAP and MELD datasets have demonstrated the effectiveness
of our proposed LSGMER model.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 15:00:34 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Shao",
"Xuechun",
""
],
[
"Yu",
"Yinfeng",
""
],
[
"Wang",
"Liejun",
""
]
] | TITLE: Leveraging Label Potential for Enhanced Multimodal Emotion Recognition
ABSTRACT: Multimodal emotion recognition (MER) seeks to integrate various modalities to
predict emotional states accurately. However, most current research focuses
solely on the fusion of audio and text features, overlooking the valuable
information in emotion labels. This oversight could potentially hinder the
performance of existing methods, as emotion labels harbor rich, insightful
information that could significantly aid MER. We introduce a novel model called
Label Signal-Guided Multimodal Emotion Recognition (LSGMER) to overcome this
limitation. This model aims to fully harness the power of emotion label
information to boost the classification accuracy and stability of MER.
Specifically, LSGMER employs a Label Signal Enhancement module that optimizes
the representation of modality features by interacting with audio and text
features through label embeddings, enabling it to capture the nuances of
emotions precisely. Furthermore, we propose a Joint Objective Optimization (JOO)
approach to enhance classification accuracy by introducing the
Attribution-Prediction Consistency Constraint (APC), which strengthens the
alignment between fused features and emotion categories. Extensive experiments
conducted on the IEMOCAP and MELD datasets have demonstrated the effectiveness
of our proposed LSGMER model.
|
2504.05170 | Bonan Ding | Bonan Ding, Jin Xie, Jing Nie, Jiale Cao | SSLFusion: Scale & Space Aligned Latent Fusion Model for Multimodal 3D
Object Detection | Accepted by AAAI 2025 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal 3D object detection based on deep neural networks has indeed made
significant progress. However, it still faces challenges due to the
misalignment of scale and spatial information between features extracted from
2D images and those derived from 3D point clouds. Existing methods usually
aggregate multimodal features at a single stage. However, leveraging
multi-stage cross-modal features is crucial for detecting objects of various
scales. Therefore, these methods often struggle to integrate features across
different scales and modalities effectively, thereby restricting the accuracy
of detection. Additionally, the time-consuming Query-Key-Value-based
(QKV-based) cross-attention operations often utilized in existing methods aid
in reasoning the location and existence of objects by capturing non-local
contexts. However, this approach tends to increase computational complexity. To
address these challenges, we present SSLFusion, a novel Scale & Space Aligned
Latent Fusion Model, consisting of a scale-aligned fusion strategy (SAF), a
3D-to-2D space alignment module (SAM), and a latent cross-modal fusion module
(LFM). SAF mitigates scale misalignment between modalities by aggregating
features from both images and point clouds across multiple levels. SAM is
designed to reduce the inter-modal gap between features from images and point
clouds by incorporating 3D coordinate information into 2D image features.
Additionally, LFM captures cross-modal non-local contexts in the latent space
without utilizing the QKV-based attention operations, thus mitigating
computational complexity. Experiments on the KITTI and DENSE datasets
demonstrate that our SSLFusion outperforms state-of-the-art methods. Our
approach obtains an absolute gain of 2.15% in 3D AP, compared with the
state-of-the-art method GraphAlign on the moderate level of the KITTI test set.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 15:15:06 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Ding",
"Bonan",
""
],
[
"Xie",
"Jin",
""
],
[
"Nie",
"Jing",
""
],
[
"Cao",
"Jiale",
""
]
] | TITLE: SSLFusion: Scale & Space Aligned Latent Fusion Model for Multimodal 3D
Object Detection
ABSTRACT: Multimodal 3D object detection based on deep neural networks has indeed made
significant progress. However, it still faces challenges due to the
misalignment of scale and spatial information between features extracted from
2D images and those derived from 3D point clouds. Existing methods usually
aggregate multimodal features at a single stage. However, leveraging
multi-stage cross-modal features is crucial for detecting objects of various
scales. Therefore, these methods often struggle to integrate features across
different scales and modalities effectively, thereby restricting the accuracy
of detection. Additionally, the time-consuming Query-Key-Value-based
(QKV-based) cross-attention operations often utilized in existing methods aid
in reasoning the location and existence of objects by capturing non-local
contexts. However, this approach tends to increase computational complexity. To
address these challenges, we present SSLFusion, a novel Scale & Space Aligned
Latent Fusion Model, consisting of a scale-aligned fusion strategy (SAF), a
3D-to-2D space alignment module (SAM), and a latent cross-modal fusion module
(LFM). SAF mitigates scale misalignment between modalities by aggregating
features from both images and point clouds across multiple levels. SAM is
designed to reduce the inter-modal gap between features from images and point
clouds by incorporating 3D coordinate information into 2D image features.
Additionally, LFM captures cross-modal non-local contexts in the latent space
without utilizing the QKV-based attention operations, thus mitigating
computational complexity. Experiments on the KITTI and DENSE datasets
demonstrate that our SSLFusion outperforms state-of-the-art methods. Our
approach obtains an absolute gain of 2.15% in 3D AP, compared with the
state-of-the-art method GraphAlign on the moderate level of the KITTI test set.
|
2504.05172 | Guangqiang Li | Guangqiang Li, M. Amine Atoui and Xiangshun Li | Attention-Based Multi-Scale Temporal Fusion Network for Uncertain-Mode
Fault Diagnosis in Multimode Processes | 31 pages,11 figures | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fault diagnosis in multimode processes plays a critical role in ensuring the
safe operation of industrial systems across multiple modes. It faces a great
challenge yet to be addressed - that is, the significant distributional
differences among monitoring data from multiple modes make it difficult for the
models to extract shared feature representations related to system health
conditions. In response to this problem, this paper introduces a novel method
called attention-based multi-scale temporal fusion network. The multi-scale
depthwise convolution and gated recurrent unit are employed to extract
multi-scale contextual local features and long-short-term features. A temporal
attention mechanism is designed to focus on critical time points with higher
cross-mode shared information, thereby enhancing the accuracy of fault
diagnosis. The proposed model is applied to Tennessee Eastman process dataset
and three-phase flow facility dataset. The experiments demonstrate that the
proposed model achieves superior diagnostic performance and maintains a small
model size.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 15:16:22 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Li",
"Guangqiang",
""
],
[
"Atoui",
"M. Amine",
""
],
[
"Li",
"Xiangshun",
""
]
] | TITLE: Attention-Based Multi-Scale Temporal Fusion Network for Uncertain-Mode
Fault Diagnosis in Multimode Processes
ABSTRACT: Fault diagnosis in multimode processes plays a critical role in ensuring the
safe operation of industrial systems across multiple modes. It faces a great
challenge yet to be addressed - that is, the significant distributional
differences among monitoring data from multiple modes make it difficult for the
models to extract shared feature representations related to system health
conditions. In response to this problem, this paper introduces a novel method
called attention-based multi-scale temporal fusion network. The multi-scale
depthwise convolution and gated recurrent unit are employed to extract
multi-scale contextual local features and long-short-term features. A temporal
attention mechanism is designed to focus on critical time points with higher
cross-mode shared information, thereby enhancing the accuracy of fault
diagnosis. The proposed model is applied to Tennessee Eastman process dataset
and three-phase flow facility dataset. The experiments demonstrate that the
proposed model achieves superior diagnostic performance and maintains a small
model size.
|
2504.05174 | Veronica Sanz | Veronica Sanz | Learning symmetries in datasets | 17 pages, 9 figures | null | null | null | cs.LG hep-ph | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We investigate how symmetries present in datasets affect the structure of the
latent space learned by Variational Autoencoders (VAEs). By training VAEs on
data originating from simple mechanical systems and particle collisions, we
analyze the organization of the latent space through a relevance measure that
identifies the most meaningful latent directions. We show that when symmetries
or approximate symmetries are present, the VAE self-organizes its latent space,
effectively compressing the data along a reduced number of latent variables.
This behavior captures the intrinsic dimensionality determined by the symmetry
constraints and reveals hidden relations among the features. Furthermore, we
provide a theoretical analysis of a simple toy model, demonstrating how, under
idealized conditions, the latent space aligns with the symmetry directions of
the data manifold. We illustrate these findings with examples ranging from
two-dimensional datasets with $O(2)$ symmetry to realistic datasets from
electron-positron and proton-proton collisions. Our results highlight the
potential of unsupervised generative models to expose underlying structures in
data and offer a novel approach to symmetry discovery without explicit
supervision.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 15:17:41 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Sanz",
"Veronica",
""
]
] | TITLE: Learning symmetries in datasets
ABSTRACT: We investigate how symmetries present in datasets affect the structure of the
latent space learned by Variational Autoencoders (VAEs). By training VAEs on
data originating from simple mechanical systems and particle collisions, we
analyze the organization of the latent space through a relevance measure that
identifies the most meaningful latent directions. We show that when symmetries
or approximate symmetries are present, the VAE self-organizes its latent space,
effectively compressing the data along a reduced number of latent variables.
This behavior captures the intrinsic dimensionality determined by the symmetry
constraints and reveals hidden relations among the features. Furthermore, we
provide a theoretical analysis of a simple toy model, demonstrating how, under
idealized conditions, the latent space aligns with the symmetry directions of
the data manifold. We illustrate these findings with examples ranging from
two-dimensional datasets with $O(2)$ symmetry to realistic datasets from
electron-positron and proton-proton collisions. Our results highlight the
potential of unsupervised generative models to expose underlying structures in
data and offer a novel approach to symmetry discovery without explicit
supervision.
|
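A common way to score latent "relevance" in a trained VAE, in the spirit of the record above, is the per-dimension KL divergence of the encoder posterior from the unit-Gaussian prior: dimensions with near-zero KL carry no information. The sketch below uses that proxy with synthetic encoder outputs; the paper's actual relevance measure may be defined differently.

    # Hedged sketch: per-dimension KL from N(0,1) as a latent relevance score.
    import numpy as np

    def latent_relevance(mu, logvar):
        # mu, logvar: (num_samples, latent_dim) encoder outputs over a dataset.
        kl = 0.5 * (mu**2 + np.exp(logvar) - logvar - 1.0)
        return kl.mean(axis=0)          # average KL per latent dimension

    rng = np.random.default_rng(0)
    mu = np.concatenate([rng.normal(0, 2, (1000, 2)),        # two informative directions
                         rng.normal(0, 0.05, (1000, 6))], axis=1)  # six collapsed ones
    logvar = np.zeros((1000, 8))
    print(np.round(latent_relevance(mu, logvar), 3))          # large scores mark meaningful directions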
2504.05180 | Wei Li | Wei Li, Yang Zou, Christopher Ellis, Ruben Purdy, Shawn Blanton,
Jos\'e M. F. Moura | BRIDGES: Bridging Graph Modality and Large Language Models within EDA
Tasks | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | While many EDA tasks already involve graph-based data, existing LLMs in EDA
primarily either represent graphs as sequential text, or simply ignore
graph-structured data that might be beneficial, such as dataflow graphs of RTL
code. Recent studies have found that LLM performance suffers when graphs are
represented as sequential text, and using additional graph information
significantly boosts performance. To address these challenges, we introduce
BRIDGES, a framework designed to incorporate graph modality into LLMs for EDA
tasks. BRIDGES integrates an automated data generation workflow, a solution
that combines graph modality with LLM, and a comprehensive evaluation suite.
First, we establish an LLM-driven workflow to generate RTL and netlist-level
data, converting them into dataflow and netlist graphs with function
descriptions. This workflow yields a large-scale dataset comprising over
500,000 graph instances and more than 1.5 billion tokens. Second, we propose a
lightweight cross-modal projector that encodes graph representations into
text-compatible prompts, enabling LLMs to effectively utilize graph data
without architectural modifications. Experimental results demonstrate 2x to 10x
improvements across multiple tasks compared to text-only baselines, including
accuracy in design retrieval, type prediction and perplexity in function
description, with negligible computational overhead (<1% model weights increase
and <30% additional runtime overhead). Even without additional LLM finetuning,
our results outperform text-only baselines by a large margin. We plan to release BRIDGES,
including the dataset, models, and training flow.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 15:27:32 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Li",
"Wei",
""
],
[
"Zou",
"Yang",
""
],
[
"Ellis",
"Christopher",
""
],
[
"Purdy",
"Ruben",
""
],
[
"Blanton",
"Shawn",
""
],
[
"Moura",
"José M. F.",
""
]
] | TITLE: BRIDGES: Bridging Graph Modality and Large Language Models within EDA
Tasks
ABSTRACT: While many EDA tasks already involve graph-based data, existing LLMs in EDA
primarily either represent graphs as sequential text, or simply ignore
graph-structured data that might be beneficial, such as dataflow graphs of RTL
code. Recent studies have found that LLM performance suffers when graphs are
represented as sequential text, and using additional graph information
significantly boosts performance. To address these challenges, we introduce
BRIDGES, a framework designed to incorporate graph modality into LLMs for EDA
tasks. BRIDGES integrates an automated data generation workflow, a solution
that combines graph modality with LLM, and a comprehensive evaluation suite.
First, we establish an LLM-driven workflow to generate RTL and netlist-level
data, converting them into dataflow and netlist graphs with function
descriptions. This workflow yields a large-scale dataset comprising over
500,000 graph instances and more than 1.5 billion tokens. Second, we propose a
lightweight cross-modal projector that encodes graph representations into
text-compatible prompts, enabling LLMs to effectively utilize graph data
without architectural modifications. Experimental results demonstrate 2x to 10x
improvements across multiple tasks compared to text-only baselines, including
accuracy in design retrieval, type prediction and perplexity in function
description, with negligible computational overhead (<1% model weights increase
and <30% additional runtime overhead). Even without additional LLM finetuning,
our results outperform text-only baselines by a large margin. We plan to release BRIDGES,
including the dataset, models, and training flow.
|
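A lightweight cross-modal projector of the kind the BRIDGES record describes can be as simple as a small MLP that maps a graph embedding to a few soft-prompt vectors in the LLM's token-embedding space. The dimensions, prompt count, and random weights below are purely illustrative assumptions, not the released architecture.

    # Hedged sketch of a graph-to-prompt projector; all sizes are hypothetical.
    import numpy as np

    rng = np.random.default_rng(0)
    graph_dim, llm_dim, k = 256, 4096, 8            # assumed dimensions, not from the paper

    W1 = rng.normal(0, 0.02, (graph_dim, 1024))
    W2 = rng.normal(0, 0.02, (1024, k * llm_dim))

    def project(graph_emb):
        h = np.maximum(graph_emb @ W1, 0.0)          # ReLU hidden layer
        return (h @ W2).reshape(k, llm_dim)          # k prompt vectors in LLM embedding space

    prompts = project(rng.normal(size=(graph_dim,)))
    print(prompts.shape)                             # (8, 4096): could be prepended to text embeddings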
2504.05181 | Kidist Amde Mekonnen Miss | Kidist Amde Mekonnen, Yubao Tang, Maarten de Rijke | Lightweight and Direct Document Relevance Optimization for Generative
Information Retrieval | 13 pages, 5 figures. Submitted to SIGIR 2025. Proposes DDRO, a
lightweight and reinforcement-free document relevance optimization method for
generative retrieval. Code and pretrained models available at:
https://github.com/kidist-amde/DDRO-Direct-Document-Relevance-Optimization | null | null | null | cs.IR cs.AI cs.DL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Generative information retrieval (GenIR) is a promising neural retrieval
paradigm that formulates document retrieval as a document identifier (docid)
generation task, allowing for end-to-end optimization toward a unified global
retrieval objective. However, existing GenIR models suffer from token-level
misalignment, where models trained to predict the next token often fail to
capture document-level relevance effectively. While reinforcement
learning-based methods, such as reinforcement learning from relevance feedback
(RLRF), aim to address this misalignment through reward modeling, they
introduce significant complexity, requiring the optimization of an auxiliary
reward function followed by reinforcement fine-tuning, which is computationally
expensive and often unstable. To address these challenges, we propose direct
document relevance optimization (DDRO), which aligns token-level docid
generation with document-level relevance estimation through direct optimization
via pairwise ranking, eliminating the need for explicit reward modeling and
reinforcement learning. Experimental results on benchmark datasets, including
MS MARCO document and Natural Questions, show that DDRO outperforms
reinforcement learning-based methods, achieving a 7.4% improvement in MRR@10
for MS MARCO and a 19.9% improvement for Natural Questions. These findings
highlight DDRO's potential to enhance retrieval effectiveness with a simplified
optimization approach. By framing alignment as a direct optimization problem,
DDRO simplifies the ranking optimization pipeline of GenIR models while
offering a viable alternative to reinforcement learning-based methods.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 15:27:37 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Mekonnen",
"Kidist Amde",
""
],
[
"Tang",
"Yubao",
""
],
[
"de Rijke",
"Maarten",
""
]
] | TITLE: Lightweight and Direct Document Relevance Optimization for Generative
Information Retrieval
ABSTRACT: Generative information retrieval (GenIR) is a promising neural retrieval
paradigm that formulates document retrieval as a document identifier (docid)
generation task, allowing for end-to-end optimization toward a unified global
retrieval objective. However, existing GenIR models suffer from token-level
misalignment, where models trained to predict the next token often fail to
capture document-level relevance effectively. While reinforcement
learning-based methods, such as reinforcement learning from relevance feedback
(RLRF), aim to address this misalignment through reward modeling, they
introduce significant complexity, requiring the optimization of an auxiliary
reward function followed by reinforcement fine-tuning, which is computationally
expensive and often unstable. To address these challenges, we propose direct
document relevance optimization (DDRO), which aligns token-level docid
generation with document-level relevance estimation through direct optimization
via pairwise ranking, eliminating the need for explicit reward modeling and
reinforcement learning. Experimental results on benchmark datasets, including
MS MARCO document and Natural Questions, show that DDRO outperforms
reinforcement learning-based methods, achieving a 7.4% improvement in MRR@10
for MS MARCO and a 19.9% improvement for Natural Questions. These findings
highlight DDRO's potential to enhance retrieval effectiveness with a simplified
optimization approach. By framing alignment as a direct optimization problem,
DDRO simplifies the ranking optimization pipeline of GenIR models while
offering a viable alternative to reinforcement learning-based methods.
|
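Pairwise docid ranking over sequence likelihoods, as described in the DDRO record above, can be sketched as follows, assuming the token log-probabilities of a relevant and a sampled non-relevant docid are already available. The logistic link and the toy token probabilities are illustrative; the exact objective DDRO optimizes may differ.

    # Minimal sketch of a pairwise docid ranking loss over sequence log-likelihoods.
    import numpy as np

    def sequence_loglik(token_logprobs):
        # Log-likelihood of a docid = sum of its token log-probabilities.
        return float(np.sum(token_logprobs))

    def pairwise_ranking_loss(pos_token_logprobs, neg_token_logprobs):
        # Logistic pairwise loss: pushes the relevant docid's likelihood above the negative's.
        margin = sequence_loglik(pos_token_logprobs) - sequence_loglik(neg_token_logprobs)
        return float(np.log1p(np.exp(-margin)))

    pos = np.log([0.9, 0.8, 0.7])   # hypothetical token probabilities of the relevant docid
    neg = np.log([0.6, 0.5, 0.4])   # hypothetical token probabilities of a negative docid
    print(pairwise_ranking_loss(pos, neg))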
2504.05183 | Rachel de Jong | Samuel Bonello, Rachel G. de Jong, Thomas H. W. B\"ack and Frank W.
Takes | Utility-aware Social Network Anonymization using Genetic Algorithms | null | null | null | null | cs.SI | http://creativecommons.org/licenses/by/4.0/ | Social networks may contain privacy-sensitive information about individuals.
The objective of the network anonymization problem is to alter a given social
network dataset such that the number of anonymous nodes in the social graph is
maximized. Here, a node is anonymous if it does not have a unique surrounding
network structure. At the same time, the aim is to ensure data utility, i.e.,
preserve topological network properties and retain good performance on
downstream network analysis tasks. We propose two versions of a genetic
algorithm tailored to this problem: one generic GA and a uniqueness-aware GA
(UGA). The latter aims to target edges more effectively during mutation by
avoiding edges connected to already anonymous nodes. After hyperparameter
tuning, we compare the two GAs against two existing baseline algorithms on
several real-world network datasets. Results show that the proposed genetic
algorithms manage to anonymize on average 14 times more nodes than the best
baseline algorithm. Additionally, data utility experiments demonstrate how the
UGA requires fewer edge deletions, and how our GAs and the baselines retain
performance on downstream tasks equally well. Overall, our results suggest that
genetic algorithms are a promising approach for finding solutions to the
network anonymization problem.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 15:29:28 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Bonello",
"Samuel",
""
],
[
"de Jong",
"Rachel G.",
""
],
[
"Bäck",
"Thomas H. W.",
""
],
[
"Takes",
"Frank W.",
""
]
] | TITLE: Utility-aware Social Network Anonymization using Genetic Algorithms
ABSTRACT: Social networks may contain privacy-sensitive information about individuals.
The objective of the network anonymization problem is to alter a given social
network dataset such that the number of anonymous nodes in the social graph is
maximized. Here, a node is anonymous if it does not have a unique surrounding
network structure. At the same time, the aim is to ensure data utility, i.e.,
preserve topological network properties and retain good performance on
downstream network analysis tasks. We propose two versions of a genetic
algorithm tailored to this problem: one generic GA and a uniqueness-aware GA
(UGA). The latter aims to target edges more effectively during mutation by
avoiding edges connected to already anonymous nodes. After hyperparameter
tuning, we compare the two GAs against two existing baseline algorithms on
several real-world network datasets. Results show that the proposed genetic
algorithms manage to anonymize on average 14 times more nodes than the best
baseline algorithm. Additionally, data utility experiments demonstrate how the
UGA requires fewer edge deletions, and how our GAs and the baselines retain
performance on downstream tasks equally well. Overall, our results suggest that
genetic algorithms are a promising approach for finding solutions to the
network anonymization problem.
|
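A fitness function for the anonymization GA above needs a computable notion of node anonymity. The sketch below uses a simple degree-based neighbourhood signature as a stand-in for the paper's "surrounding network structure" and counts nodes whose signature is shared with at least one other node; the actual uniqueness measure and utility penalty differ.

    # Illustrative anonymity count using a degree-based neighbourhood signature.
    from collections import Counter

    def neighbourhood_signature(adj, v):
        # Signature of v: its degree plus the sorted degrees of its neighbours.
        return (len(adj[v]), tuple(sorted(len(adj[u]) for u in adj[v])))

    def anonymous_node_count(adj):
        # A node is (approximately) anonymous if its signature is shared by >= 2 nodes.
        counts = Counter(neighbourhood_signature(adj, v) for v in adj)
        return sum(1 for v in adj if counts[neighbourhood_signature(adj, v)] >= 2)

    # GA candidates would be sets of edge deletions; fitness is the anonymous-node
    # count of the edited graph, optionally penalised by utility loss.
    adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
    print(anonymous_node_count(adj))   # nodes 0 and 1 share a signature -> 2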
2504.05184 | Rayan Mahjoub | Rayan Merghani Ahmed, Adnan Iltaf, Bin Li, Shoujun Zhou | MSA-UNet3+: Multi-Scale Attention UNet3+ with New Supervised
Prototypical Contrastive Loss for Coronary DSA Image Segmentation | Work in progress | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The accurate segmentation of coronary Digital Subtraction Angiography (DSA)
images is essential for diagnosing and treating coronary artery diseases.
Despite advances in deep learning-based segmentation, challenges such as low
contrast, noise, overlapping structures, high intra-class variance, and class
imbalance limit precise vessel delineation. To overcome these limitations, we
propose the MSA-UNet3+: a Multi-Scale Attention enhanced UNet3+ architecture
for coronary DSA image segmentation. The framework combined Multi-Scale Dilated
Bottleneck (MSD-Bottleneck) with Contextual Attention Fusion Module (CAFM),
which not only enhances multi-scale feature extraction but also preserve
fine-grained details, and improve contextual understanding. Furthermore, we
propose a new Supervised Prototypical Contrastive Loss (SPCL), which combines
supervised and prototypical contrastive learning to minimize class imbalance
and high intra-class variance by focusing on hard-to-classified background
samples. Experiments carried out on a private coronary DSA dataset demonstrate
that MSA-UNet3+ outperforms state-of-the-art methods, achieving a Dice
coefficient of 87.73%, an F1-score of 87.78%, and significantly reduced Average
Surface Distance (ASD) and Average Contour Distance (ACD). The developed
framework provides clinicians with precise vessel segmentation, enabling
accurate identification of coronary stenosis and supporting informed diagnostic
and therapeutic decisions. The code will be released at the following GitHub
profile link https://github.com/rayanmerghani/MSA-UNet3plus.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 15:35:30 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Ahmed",
"Rayan Merghani",
""
],
[
"Iltaf",
"Adnan",
""
],
[
"Li",
"Bin",
""
],
[
"Zhou",
"Shoujun",
""
]
] | TITLE: MSA-UNet3+: Multi-Scale Attention UNet3+ with New Supervised
Prototypical Contrastive Loss for Coronary DSA Image Segmentation
ABSTRACT: The accurate segmentation of coronary Digital Subtraction Angiography (DSA)
images is essential for diagnosing and treating coronary artery diseases.
Despite advances in deep learning-based segmentation, challenges such as low
contrast, noise, overlapping structures, high intra-class variance, and class
imbalance limit precise vessel delineation. To overcome these limitations, we
propose the MSA-UNet3+: a Multi-Scale Attention enhanced UNet3+ architecture
for coronary DSA image segmentation. The framework combines a Multi-Scale Dilated
Bottleneck (MSD-Bottleneck) with a Contextual Attention Fusion Module (CAFM),
which not only enhances multi-scale feature extraction but also preserves
fine-grained details and improves contextual understanding. Furthermore, we
propose a new Supervised Prototypical Contrastive Loss (SPCL), which combines
supervised and prototypical contrastive learning to minimize class imbalance
and high intra-class variance by focusing on hard-to-classify background
samples. Experiments carried out on a private coronary DSA dataset demonstrate
that MSA-UNet3+ outperforms state-of-the-art methods, achieving a Dice
coefficient of 87.73%, an F1-score of 87.78%, and significantly reduced Average
Surface Distance (ASD) and Average Contour Distance (ACD). The developed
framework provides clinicians with precise vessel segmentation, enabling
accurate identification of coronary stenosis and supporting informed diagnostic
and therapeutic decisions. The code will be released at the following GitHub
profile link https://github.com/rayanmerghani/MSA-UNet3plus.
|
2504.05187 | Yu Min Park | Yu Min Park, Yan Kyaw Tun, Walid Saad, and Choong Seon Hong | Resource-Efficient Beam Prediction in mmWave Communications with
Multimodal Realistic Simulation Framework | 12 pages, 8 figures, Submitted to IEEE Transactions on Communications
on Apr. 07, 2025 | null | null | null | cs.NI cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Beamforming is a key technology in millimeter-wave (mmWave) communications
that improves signal transmission by optimizing directionality and intensity.
However, conventional channel estimation methods, such as pilot signals or beam
sweeping, often fail to adapt to rapidly changing communication environments.
To address this limitation, multimodal sensing-aided beam prediction has gained
significant attention, using various sensing data from devices such as LiDAR,
radar, GPS, and RGB images to predict user locations or network conditions.
Despite its promising potential, the adoption of multimodal sensing-aided beam
prediction is hindered by high computational complexity, high costs, and
limited datasets. Thus, in this paper, a resource-efficient learning approach
is proposed to transfer knowledge from a multimodal network to a monomodal
(radar-only) network based on cross-modal relational knowledge distillation
(CRKD), while reducing computational overhead and preserving predictive
accuracy. To enable multimodal learning with realistic data, a novel multimodal
simulation framework is developed that integrates sensor data generated from
the autonomous driving simulator CARLA with MATLAB-based mmWave channel
modeling, reflecting real-world conditions. The proposed CRKD achieves its
objective by distilling relational information across different feature spaces,
which enhances beam prediction performance without relying on expensive sensor
data. Simulation results demonstrate that CRKD efficiently distills multimodal
knowledge, allowing a radar-only model to achieve $94.62\%$ of the teacher
performance. In particular, this is achieved with just $10\%$ of the teacher
network's parameters, thereby significantly reducing computational complexity
and dependence on multimodal sensor data.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 15:38:25 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Park",
"Yu Min",
""
],
[
"Tun",
"Yan Kyaw",
""
],
[
"Saad",
"Walid",
""
],
[
"Hong",
"Choong Seon",
""
]
] | TITLE: Resource-Efficient Beam Prediction in mmWave Communications with
Multimodal Realistic Simulation Framework
ABSTRACT: Beamforming is a key technology in millimeter-wave (mmWave) communications
that improves signal transmission by optimizing directionality and intensity.
However, conventional channel estimation methods, such as pilot signals or beam
sweeping, often fail to adapt to rapidly changing communication environments.
To address this limitation, multimodal sensing-aided beam prediction has gained
significant attention, using various sensing data from devices such as LiDAR,
radar, GPS, and RGB images to predict user locations or network conditions.
Despite its promising potential, the adoption of multimodal sensing-aided beam
prediction is hindered by high computational complexity, high costs, and
limited datasets. Thus, in this paper, a resource-efficient learning approach
is proposed to transfer knowledge from a multimodal network to a monomodal
(radar-only) network based on cross-modal relational knowledge distillation
(CRKD), while reducing computational overhead and preserving predictive
accuracy. To enable multimodal learning with realistic data, a novel multimodal
simulation framework is developed that integrates sensor data generated from
the autonomous driving simulator CARLA with MATLAB-based mmWave channel
modeling, reflecting real-world conditions. The proposed CRKD achieves its
objective by distilling relational information across different feature spaces,
which enhances beam prediction performance without relying on expensive sensor
data. Simulation results demonstrate that CRKD efficiently distills multimodal
knowledge, allowing a radar-only model to achieve $94.62\%$ of the teacher
performance. In particular, this is achieved with just $10\%$ of the teacher
network's parameters, thereby significantly reducing computational complexity
and dependence on multimodal sensor data.
|
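Relational knowledge distillation of the kind referenced in the CRKD record matches pairwise structure between teacher and student embeddings rather than the embeddings themselves. The distance-matrix loss below is one standard instantiation; the random features, dimensions, and normalisation are assumptions for illustration, not the paper's exact loss.

    # Hedged sketch of a relational distillation term between multimodal teacher
    # and radar-only student embeddings (distance structure matching).
    import numpy as np

    def pairwise_distances(x):
        # x: (batch, dim) embeddings -> (batch, batch) Euclidean distance matrix.
        diff = x[:, None, :] - x[None, :, :]
        return np.sqrt((diff ** 2).sum(-1))

    def relational_kd_loss(teacher_emb, student_emb, eps=1e-8):
        t = pairwise_distances(teacher_emb)
        s = pairwise_distances(student_emb)
        # Normalise by the mean distance so the two embedding scales are comparable.
        t = t / (t.mean() + eps)
        s = s / (s.mean() + eps)
        return float(((t - s) ** 2).mean())

    rng = np.random.default_rng(0)
    teacher = rng.normal(size=(8, 64))   # multimodal teacher features, hypothetical dims
    student = rng.normal(size=(8, 32))   # radar-only student features, hypothetical dims
    print(relational_kd_loss(teacher, student))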
2504.05201 | Tejas Sudharshan Mathai | Jared Frazier, Tejas Sudharshan Mathai, Jianfei Liu, Angshuman Paul,
and Ronald M. Summers | 3D Universal Lesion Detection and Tagging in CT with Self-Training | Published at SPIE Medical Imaging 2023 | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Radiologists routinely perform the tedious task of lesion localization,
classification, and size measurement in computed tomography (CT) studies.
Universal lesion detection and tagging (ULDT) can simultaneously help alleviate
the cumbersome nature of lesion measurement and enable tumor burden assessment.
Previous ULDT approaches utilize the publicly available DeepLesion dataset;
however, it does not provide the full volumetric (3D) extent of lesions and also
displays a severe class imbalance. In this work, we propose a self-training
pipeline to detect 3D lesions and tag them according to the body part they
occur in. We used a significantly limited 30\% subset of DeepLesion to train a
VFNet model for 2D lesion detection and tagging. Next, the 2D lesion context
was expanded into 3D, and the mined 3D lesion proposals were integrated back
into the baseline training data in order to retrain the model over multiple
rounds. Through the self-training procedure, our VFNet model learned from its
own predictions, detected lesions in 3D, and tagged them. Our results indicated
that our VFNet model achieved an average sensitivity of 46.9\% at [0.125:8]
false positives (FP) with a limited 30\% data subset in comparison to the
46.8\% of an existing approach that used the entire DeepLesion dataset. To our
knowledge, we are the first to jointly detect lesions in 3D and tag them
according to the body part label.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 15:50:27 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Frazier",
"Jared",
""
],
[
"Mathai",
"Tejas Sudharshan",
""
],
[
"Liu",
"Jianfei",
""
],
[
"Paul",
"Angshuman",
""
],
[
"Summers",
"Ronald M.",
""
]
] | TITLE: 3D Universal Lesion Detection and Tagging in CT with Self-Training
ABSTRACT: Radiologists routinely perform the tedious task of lesion localization,
classification, and size measurement in computed tomography (CT) studies.
Universal lesion detection and tagging (ULDT) can simultaneously help alleviate
the cumbersome nature of lesion measurement and enable tumor burden assessment.
Previous ULDT approaches utilize the publicly available DeepLesion dataset;
however, it does not provide the full volumetric (3D) extent of lesions and also
displays a severe class imbalance. In this work, we propose a self-training
pipeline to detect 3D lesions and tag them according to the body part they
occur in. We used a significantly limited 30\% subset of DeepLesion to train a
VFNet model for 2D lesion detection and tagging. Next, the 2D lesion context
was expanded into 3D, and the mined 3D lesion proposals were integrated back
into the baseline training data in order to retrain the model over multiple
rounds. Through the self-training procedure, our VFNet model learned from its
own predictions, detected lesions in 3D, and tagged them. Our results indicated
that our VFNet model achieved an average sensitivity of 46.9\% at [0.125:8]
false positives (FP) with a limited 30\% data subset in comparison to the
46.8\% of an existing approach that used the entire DeepLesion dataset. To our
knowledge, we are the first to jointly detect lesions in 3D and tag them
according to the body part label.
|
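The multi-round self-training loop in the record above reads naturally as the sketch below. The detector is abstracted to two callables and the confidence threshold is a placeholder, not the authors' actual VFNet setup or selection policy.

    # Hedged sketch of multi-round self-training with mined high-confidence proposals.
    def self_train(train_fn, predict_fn, labeled, unlabeled, rounds=3, thr=0.7):
        train_set = list(labeled)
        model = train_fn(train_set)                  # initial detector/tagger
        for _ in range(rounds):
            mined = [p for study in unlabeled
                       for p in predict_fn(model, study) if p["score"] >= thr]
            train_set.extend(mined)                  # fold mined proposals back in
            model = train_fn(train_set)              # retrain on the enlarged set
        return model

    # Toy usage: "model" is just the training-set size, predictions are dummy proposals.
    model = self_train(
        train_fn=lambda data: len(data),
        predict_fn=lambda m, s: [{"score": 0.9, "study": s}],
        labeled=[{"score": 1.0, "study": "seed"}],
        unlabeled=["s1", "s2"],
    )
    print(model)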
2504.05202 | Pasin Manurangsi | Charlie Harrison, Pasin Manurangsi | Infinitely Divisible Noise for Differential Privacy: Nearly Optimal
Error in the High $\varepsilon$ Regime | To appear in FORC 2025 | null | null | null | cs.CR cs.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Differential privacy (DP) can be achieved in a distributed manner, where
multiple parties add independent noise such that their sum protects the overall
dataset with DP. A common technique here is for each party to sample their
noise from the decomposition of an infinitely divisible distribution. We
analyze two mechanisms in this setting: 1) the generalized discrete Laplace
(GDL) mechanism, whose distribution (which is closed under summation) follows
from differences of i.i.d. negative binomial shares, and 2) the multi-scale
discrete Laplace (MSDLap) mechanism, a novel mechanism following the sum of
multiple i.i.d. discrete Laplace shares at different scales.
For $\varepsilon \geq 1$, our mechanisms can be parameterized to have
$O\left(\Delta^3 e^{-\varepsilon}\right)$ and $O\left(\min\left(\Delta^3
e^{-\varepsilon}, \Delta^2 e^{-2\varepsilon/3}\right)\right)$ MSE,
respectively, where $\Delta$ denotes the sensitivity; the latter bound matches
known optimality results. We also show a transformation from the discrete
setting to the continuous setting, which allows us to transform both mechanisms
to the continuous setting and thereby achieve the optimal $O\left(\Delta^2
e^{-2\varepsilon / 3}\right)$ MSE. To our knowledge, these are the first
infinitely divisible additive noise mechanisms that achieve order-optimal MSE
under pure DP, so our work shows formally there is no separation in utility
when query-independent noise adding mechanisms are restricted to infinitely
divisible noise. For the continuous setting, our result improves upon the Arete
mechanism from [Pagh and Stausholm, ALT 2022] which gives an MSE of
$O\left(\Delta^2 e^{-\varepsilon/4}\right)$. Furthermore, we give an exact
sampler tuned to efficiently implement the MSDLap mechanism, and we apply our
results to improve a state-of-the-art multi-message shuffle DP protocol in the
high $\varepsilon$ regime.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 15:50:46 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Harrison",
"Charlie",
""
],
[
"Manurangsi",
"Pasin",
""
]
] | TITLE: Infinitely Divisible Noise for Differential Privacy: Nearly Optimal
Error in the High $\varepsilon$ Regime
ABSTRACT: Differential privacy (DP) can be achieved in a distributed manner, where
multiple parties add independent noise such that their sum protects the overall
dataset with DP. A common technique here is for each party to sample their
noise from the decomposition of an infinitely divisible distribution. We
analyze two mechanisms in this setting: 1) the generalized discrete Laplace
(GDL) mechanism, whose distribution (which is closed under summation) follows
from differences of i.i.d. negative binomial shares, and 2) the multi-scale
discrete Laplace (MSDLap) mechanism, a novel mechanism following the sum of
multiple i.i.d. discrete Laplace shares at different scales.
For $\varepsilon \geq 1$, our mechanisms can be parameterized to have
$O\left(\Delta^3 e^{-\varepsilon}\right)$ and $O\left(\min\left(\Delta^3
e^{-\varepsilon}, \Delta^2 e^{-2\varepsilon/3}\right)\right)$ MSE,
respectively, where $\Delta$ denotes the sensitivity; the latter bound matches
known optimality results. We also show a transformation from the discrete
setting to the continuous setting, which allows us to transform both mechanisms
to the continuous setting and thereby achieve the optimal $O\left(\Delta^2
e^{-2\varepsilon / 3}\right)$ MSE. To our knowledge, these are the first
infinitely divisible additive noise mechanisms that achieve order-optimal MSE
under pure DP, so our work shows formally there is no separation in utility
when query-independent noise adding mechanisms are restricted to infinitely
divisible noise. For the continuous setting, our result improves upon the Arete
mechanism from [Pagh and Stausholm, ALT 2022] which gives an MSE of
$O\left(\Delta^2 e^{-\varepsilon/4}\right)$. Furthermore, we give an exact
sampler tuned to efficiently implement the MSDLap mechanism, and we apply our
results to improve a state-of-the-art multi-message shuffle DP protocol in the
high $\varepsilon$ regime.
|
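As a rough, non-authoritative sketch of the distributed-noise idea in the record above, the snippet below has each party draw discrete Laplace shares at a few scales (sampled as differences of geometric variables) and sums them onto a sensitivity-1 count. The scales, party count, and parameterization are illustrative assumptions, not the paper's tuned MSDLap settings or its exact sampler.

```python
import numpy as np

def discrete_laplace(t, size, rng):
    """Sample DLap(t), pmf proportional to exp(-|k|/t), as the
    difference of two i.i.d. geometric variables."""
    p = 1.0 - np.exp(-1.0 / t)
    return rng.geometric(p, size) - rng.geometric(p, size)

def distributed_noise(n_parties, scales, rng):
    """Each party contributes one discrete Laplace share per scale;
    the aggregate noise is the sum over all parties and scales."""
    shares = [discrete_laplace(t, n_parties, rng) for t in scales]
    return int(np.sum(shares))

rng = np.random.default_rng(0)
true_count = 100                       # sensitivity-1 counting query
scales = [0.5, 2.0]                    # illustrative scales, not the paper's tuning
noisy_count = true_count + distributed_noise(n_parties=10, scales=scales, rng=rng)
print(noisy_count)
```

In the actual mechanisms, each party would sample from the decomposition of the target distribution (e.g., differences of negative binomial shares for GDL), so that the aggregate noise follows the intended closed-form law.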
2504.05207 | Tejas Sudharshan Mathai | Alexander Shieh, Tejas Sudharshan Mathai, Jianfei Liu, Angshuman Paul,
and Ronald M. Summers | Correcting Class Imbalances with Self-Training for Improved Universal
Lesion Detection and Tagging | Published at SPIE Medical Imaging 2023 | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Universal lesion detection and tagging (ULDT) in CT studies is critical for
tumor burden assessment and tracking the progression of lesion status
(growth/shrinkage) over time. However, a lack of fully annotated data hinders
the development of effective ULDT approaches. Prior work used the DeepLesion
dataset (4,427 patients, 10,594 studies, 32,120 CT slices, 32,735 lesions, 8
body part labels) for algorithmic development, but this dataset is not
completely annotated and contains class imbalances. To address these issues, in
this work, we developed a self-training pipeline for ULDT. A VFNet model was
trained on a limited 11.5\% subset of DeepLesion (bounding boxes + tags) to
detect and classify lesions in CT studies. Then, it identified and incorporated
novel lesion candidates from a larger unseen data subset into its training set,
and self-trained over multiple rounds. Multiple self-training
experiments were conducted with different threshold policies to select
predicted lesions with higher quality and address the class imbalances. We
discovered that direct self-training improved the sensitivities of
over-represented lesion classes at the expense of under-represented classes.
However, upsampling the lesions mined during self-training along with a
variable threshold policy yielded a 6.5\% increase in sensitivity at 4 FP in
contrast to self-training without class balancing (72\% vs 78.5\%) and an 11.7\%
increase compared to the same self-training policy without upsampling (66.8\%
vs 78.5\%). Furthermore, we show that our results either improved or maintained
the sensitivity at 4 FP for all 8 lesion classes.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 15:57:03 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Shieh",
"Alexander",
""
],
[
"Mathai",
"Tejas Sudharshan",
""
],
[
"Liu",
"Jianfei",
""
],
[
"Paul",
"Angshuman",
""
],
[
"Summers",
"Ronald M.",
""
]
] | TITLE: Correcting Class Imbalances with Self-Training for Improved Universal
Lesion Detection and Tagging
ABSTRACT: Universal lesion detection and tagging (ULDT) in CT studies is critical for
tumor burden assessment and tracking the progression of lesion status
(growth/shrinkage) over time. However, a lack of fully annotated data hinders
the development of effective ULDT approaches. Prior work used the DeepLesion
dataset (4,427 patients, 10,594 studies, 32,120 CT slices, 32,735 lesions, 8
body part labels) for algorithmic development, but this dataset is not
completely annotated and contains class imbalances. To address these issues, in
this work, we developed a self-training pipeline for ULDT. A VFNet model was
trained on a limited 11.5\% subset of DeepLesion (bounding boxes + tags) to
detect and classify lesions in CT studies. Then, it identified and incorporated
novel lesion candidates from a larger unseen data subset into its training set,
and self-trained over multiple rounds. Multiple self-training
experiments were conducted with different threshold policies to select
predicted lesions with higher quality and address the class imbalances. We
discovered that direct self-training improved the sensitivities of
over-represented lesion classes at the expense of under-represented classes.
However, upsampling the lesions mined during self-training along with a
variable threshold policy yielded a 6.5\% increase in sensitivity at 4 FP in
contrast to self-training without class balancing (72\% vs 78.5\%) and an 11.7\%
increase compared to the same self-training policy without upsampling (66.8\%
vs 78.5\%). Furthermore, we show that our results either improved or maintained
the sensitivity at 4 FP for all 8 lesion classes.
|
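To make the thresholding-plus-upsampling step in the record above concrete, here is a minimal sketch that filters mined detections with a per-class variable threshold and then upsamples under-represented classes. The threshold schedule, class names, and confidences are invented for illustration and do not reproduce the paper's exact policy.

```python
import random
from collections import Counter

def select_and_balance(mined, base_thresh=0.9, seed=0):
    """Filter mined detections with a per-class variable threshold, then
    upsample under-represented classes to the majority-class count.
    `mined` is a list of (class_label, confidence) pairs."""
    rng = random.Random(seed)
    counts = Counter(label for label, _ in mined)
    majority = max(counts.values())
    kept = []
    for label, conf in mined:
        # lower the acceptance bar for rarer classes (illustrative schedule)
        thresh = base_thresh - 0.1 * (1 - counts[label] / majority)
        if conf >= thresh:
            kept.append((label, conf))
    kept_by_class = {}
    for item in kept:
        kept_by_class.setdefault(item[0], []).append(item)
    balanced = list(kept)
    for label, items in kept_by_class.items():
        while sum(1 for b in balanced if b[0] == label) < majority:
            balanced.append(rng.choice(items))
    return balanced

mined = [("lung", 0.95), ("lung", 0.92), ("lung", 0.91), ("kidney", 0.88)]
print(select_and_balance(mined))   # the rarer "kidney" detections get duplicated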
2504.05210 | Joshua Hatherley | Joshua Hatherley | A moving target in AI-assisted decision-making: Dataset shift, model
updating, and the problem of update opacity | null | Ethics and Information Technology 27(2): 20 (2025) | 10.1007/s10676-025-09829-2 | null | cs.CY cs.AI cs.HC cs.LG | http://creativecommons.org/licenses/by/4.0/ | Machine learning (ML) systems are vulnerable to performance decline over time
due to dataset shift. To address this problem, experts often suggest that ML
systems should be regularly updated to ensure ongoing performance stability.
Some scholarly literature has begun to address the epistemic and ethical
challenges associated with different updating methodologies. Thus far, however,
little attention has been paid to the impact of model updating on the
ML-assisted decision-making process itself, particularly in the AI ethics and
AI epistemology literatures. This article aims to address this gap in the
literature. It argues that model updating introduces a new sub-type of opacity
into ML-assisted decision-making -- update opacity -- that occurs when users
cannot understand how or why an update has changed the reasoning or behaviour
of an ML system. This type of opacity presents a variety of distinctive
epistemic and safety concerns that available solutions to the black box problem
in ML are largely ill-equipped to address. A variety of alternative strategies
may be developed or pursued to address the problem of update opacity more
directly, including bi-factual explanations, dynamic model reporting, and
update compatibility. However, each of these strategies presents its own risks
or carries significant limitations. Further research will be needed to address
the epistemic and safety concerns associated with model updating and update
opacity going forward.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 15:58:23 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Hatherley",
"Joshua",
""
]
] | TITLE: A moving target in AI-assisted decision-making: Dataset shift, model
updating, and the problem of update opacity
ABSTRACT: Machine learning (ML) systems are vulnerable to performance decline over time
due to dataset shift. To address this problem, experts often suggest that ML
systems should be regularly updated to ensure ongoing performance stability.
Some scholarly literature has begun to address the epistemic and ethical
challenges associated with different updating methodologies. Thus far, however,
little attention has been paid to the impact of model updating on the
ML-assisted decision-making process itself, particularly in the AI ethics and
AI epistemology literatures. This article aims to address this gap in the
literature. It argues that model updating introduces a new sub-type of opacity
into ML-assisted decision-making -- update opacity -- that occurs when users
cannot understand how or why an update has changed the reasoning or behaviour
of an ML system. This type of opacity presents a variety of distinctive
epistemic and safety concerns that available solutions to the black box problem
in ML are largely ill-equipped to address. A variety of alternative strategies
may be developed or pursued to address the problem of update opacity more
directly, including bi-factual explanations, dynamic model reporting, and
update compatibility. However, each of these strategies presents its own risks
or carries significant limitations. Further research will be needed to address
the epistemic and safety concerns associated with model updating and update
opacity going forward.
|
2504.05214 | Sefika Efeoglu | Sefika Efeoglu, Adrian Paschke, Sonja Schimmler | Post-Training Language Models for Continual Relation Extraction | 17 pages | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Real-world data, such as news articles, social media posts, and chatbot
conversations, is inherently dynamic and non-stationary, presenting significant
challenges for constructing real-time structured representations through
knowledge graphs (KGs). Relation Extraction (RE), a fundamental component of KG
creation, often struggles to adapt to evolving data when traditional models
rely on static, outdated datasets. Continual Relation Extraction (CRE) methods
tackle this issue by incrementally learning new relations while preserving
previously acquired knowledge. This study investigates the application of
pre-trained language models (PLMs), specifically large language models (LLMs),
to CRE, with a focus on leveraging memory replay to address catastrophic
forgetting. We evaluate decoder-only models (eg, Mistral-7B and Llama2-7B) and
encoder-decoder models (eg, Flan-T5 Base) on the TACRED and FewRel datasets.
Task-incremental fine-tuning of LLMs demonstrates superior performance over
earlier approaches using encoder-only models like BERT on TACRED, excelling in
seen-task accuracy and overall performance (measured by whole and average
accuracy), particularly with the Mistral and Flan-T5 models. Results on FewRel
are similarly promising, achieving second place in whole and average accuracy
metrics. This work underscores critical factors in knowledge transfer, language
model architecture, and KG completeness, advancing CRE with LLMs and memory
replay for dynamic, real-time relation extraction.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 16:01:22 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Efeoglu",
"Sefika",
""
],
[
"Paschke",
"Adrian",
""
],
[
"Schimmler",
"Sonja",
""
]
] | TITLE: Post-Training Language Models for Continual Relation Extraction
ABSTRACT: Real-world data, such as news articles, social media posts, and chatbot
conversations, is inherently dynamic and non-stationary, presenting significant
challenges for constructing real-time structured representations through
knowledge graphs (KGs). Relation Extraction (RE), a fundamental component of KG
creation, often struggles to adapt to evolving data when traditional models
rely on static, outdated datasets. Continual Relation Extraction (CRE) methods
tackle this issue by incrementally learning new relations while preserving
previously acquired knowledge. This study investigates the application of
pre-trained language models (PLMs), specifically large language models (LLMs),
to CRE, with a focus on leveraging memory replay to address catastrophic
forgetting. We evaluate decoder-only models (e.g., Mistral-7B and Llama2-7B) and
encoder-decoder models (e.g., Flan-T5 Base) on the TACRED and FewRel datasets.
Task-incremental fine-tuning of LLMs demonstrates superior performance over
earlier approaches using encoder-only models like BERT on TACRED, excelling in
seen-task accuracy and overall performance (measured by whole and average
accuracy), particularly with the Mistral and Flan-T5 models. Results on FewRel
are similarly promising, achieving second place in whole and average accuracy
metrics. This work underscores critical factors in knowledge transfer, language
model architecture, and KG completeness, advancing CRE with LLMs and memory
replay for dynamic, real-time relation extraction.
|
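The memory-replay component described in the record above can be pictured as a small exemplar buffer mixed into each new task's fine-tuning data. The sketch below shows only that buffer logic; the per-relation budget, replay ratio, and example records are assumptions, and the actual LLM fine-tuning step is omitted.

```python
import random

class ReplayMemory:
    """Keep a few exemplars per relation seen so far and mix them
    into each new task's fine-tuning data (illustrative sketch)."""
    def __init__(self, per_relation=5, seed=0):
        self.per_relation = per_relation
        self.store = {}              # relation -> list of stored examples
        self.rng = random.Random(seed)

    def add_task(self, task_examples):
        for ex in task_examples:     # ex = {"text": ..., "relation": ...}
            bucket = self.store.setdefault(ex["relation"], [])
            if len(bucket) < self.per_relation:
                bucket.append(ex)

    def training_mix(self, new_task_examples, replay_ratio=0.3):
        replay = [ex for bucket in self.store.values() for ex in bucket]
        k = min(len(replay), int(replay_ratio * len(new_task_examples)))
        return new_task_examples + self.rng.sample(replay, k)

memory = ReplayMemory()
memory.add_task([{"text": "A founded B", "relation": "founded_by"}])
task2 = [{"text": "C works for D", "relation": "employee_of"}] * 10
print(len(memory.training_mix(task2)))   # new examples plus replayed exemplars
```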
2504.05219 | Abdurrahim Yilmaz | Abdurrahim Yilmaz, Serra Atilla Aydin, Deniz Temur, Furkan Yuceyalcin,
Berkin Deniz Kahya, Rahmetullah Varol, Ozay Gokoz, Gulsum Gencoglan, Huseyin
Uvet, Gonca Elcin | An ensemble deep learning approach to detect tumors on Mohs micrographic
surgery slides | 14 pages, 2 figures | null | null | null | cs.CV eess.IV | http://creativecommons.org/licenses/by/4.0/ | Mohs micrographic surgery (MMS) is the gold standard technique for removing
high-risk nonmelanoma skin cancer; however, intraoperative histopathological
examination demands significant time, effort, and professional expertise. The
objective of this study is to develop a deep learning model to detect basal
cell carcinoma (BCC) and artifacts on Mohs slides. A total of 731 Mohs slides
from 51 patients with BCCs were used in this study, with 91 containing tumor
and 640 without tumor, which were defined as non-tumor. The dataset was employed
to train U-Net based models that segment tumor and non-tumor regions on the
slides. The segmented patches were classified as tumor, or non-tumor to produce
predictions for whole slide images (WSIs). For the segmentation phase, the deep
learning model's success was measured using Dice scores of 0.70 and 0.67 and
area under the curve (AUC) scores of 0.98 and 0.96 for tumor and
non-tumor, respectively. For tumor classification, an AUC of 0.98 for
patch-based detection and an AUC of 0.91 for slide-based detection were obtained
on the test dataset. We present an AI system that can detect tumors and
non-tumors in Mohs slides with high success. Deep learning can aid Mohs
surgeons and dermatopathologists in making more accurate decisions.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 16:05:42 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Yilmaz",
"Abdurrahim",
""
],
[
"Aydin",
"Serra Atilla",
""
],
[
"Temur",
"Deniz",
""
],
[
"Yuceyalcin",
"Furkan",
""
],
[
"Kahya",
"Berkin Deniz",
""
],
[
"Varol",
"Rahmetullah",
""
],
[
"Gokoz",
"Ozay",
""
],
[
"Gencoglan",
"Gulsum",
""
],
[
"Uvet",
"Huseyin",
""
],
[
"Elcin",
"Gonca",
""
]
] | TITLE: An ensemble deep learning approach to detect tumors on Mohs micrographic
surgery slides
ABSTRACT: Mohs micrographic surgery (MMS) is the gold standard technique for removing
high-risk nonmelanoma skin cancer; however, intraoperative histopathological
examination demands significant time, effort, and professional expertise. The
objective of this study is to develop a deep learning model to detect basal
cell carcinoma (BCC) and artifacts on Mohs slides. A total of 731 Mohs slides
from 51 patients with BCCs were used in this study, with 91 containing tumor
and 640 without tumor, which were defined as non-tumor. The dataset was employed
to train U-Net based models that segment tumor and non-tumor regions on the
slides. The segmented patches were classified as tumor, or non-tumor to produce
predictions for whole slide images (WSIs). For the segmentation phase, the deep
learning model's success was measured using Dice scores of 0.70 and 0.67 and
area under the curve (AUC) scores of 0.98 and 0.96 for tumor and
non-tumor, respectively. For tumor classification, an AUC of 0.98 for
patch-based detection and an AUC of 0.91 for slide-based detection were obtained
on the test dataset. We present an AI system that can detect tumors and
non-tumors in Mohs slides with high success. Deep learning can aid Mohs
surgeons and dermatopathologists in making more accurate decisions.
|
2504.05224 | Zeqin Yu | Zeqin Yu, Jiangqun Ni, Jian Zhang, Haoyi Deng, Yuzhen Lin | Reinforced Multi-teacher Knowledge Distillation for Efficient General
Image Forgery Detection and Localization | Published to AAAI2025 (Oral) | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Image forgery detection and localization (IFDL) is of vital importance as
forged images can spread misinformation that poses potential threats to our
daily lives. However, previous methods still struggled to effectively handle
forged images processed with diverse forgery operations in real-world
scenarios. In this paper, we propose a novel Reinforced Multi-teacher Knowledge
Distillation (Re-MTKD) framework for the IFDL task, structured around an
encoder-decoder \textbf{C}onvNeXt-\textbf{U}perNet along with an
\textbf{E}dge-Aware Module, named Cue-Net. First, three Cue-Net models are
separately trained for the three main types of image forgeries, i.e.,
copy-move, splicing, and inpainting, which then serve as the multi-teacher
models to train the target student model with Cue-Net through self-knowledge
distillation. A Reinforced Dynamic Teacher Selection (Re-DTS) strategy is
developed to dynamically assign weights to the involved teacher models, which
facilitates specific knowledge transfer and enables the student model to
effectively learn both the common and specific natures of diverse tampering
traces. Extensive experiments demonstrate that, compared with other
state-of-the-art methods, the proposed method achieves superior performance on
several recently emerged datasets comprised of various kinds of image
forgeries.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 16:12:05 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Yu",
"Zeqin",
""
],
[
"Ni",
"Jiangqun",
""
],
[
"Zhang",
"Jian",
""
],
[
"Deng",
"Haoyi",
""
],
[
"Lin",
"Yuzhen",
""
]
] | TITLE: Reinforced Multi-teacher Knowledge Distillation for Efficient General
Image Forgery Detection and Localization
ABSTRACT: Image forgery detection and localization (IFDL) is of vital importance as
forged images can spread misinformation that poses potential threats to our
daily lives. However, previous methods still struggled to effectively handle
forged images processed with diverse forgery operations in real-world
scenarios. In this paper, we propose a novel Reinforced Multi-teacher Knowledge
Distillation (Re-MTKD) framework for the IFDL task, structured around an
encoder-decoder \textbf{C}onvNeXt-\textbf{U}perNet along with an
\textbf{E}dge-Aware Module, named Cue-Net. First, three Cue-Net models are
separately trained for the three main types of image forgeries, i.e.,
copy-move, splicing, and inpainting, which then serve as the multi-teacher
models to train the target student model with Cue-Net through self-knowledge
distillation. A Reinforced Dynamic Teacher Selection (Re-DTS) strategy is
developed to dynamically assign weights to the involved teacher models, which
facilitates specific knowledge transfer and enables the student model to
effectively learn both the common and specific natures of diverse tampering
traces. Extensive experiments demonstrate that, compared with other
state-of-the-art methods, the proposed method achieves superior performance on
several recently emerged datasets comprised of various kinds of image
forgeries.
|
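A hedged sketch of the distillation objective implied in the record above: the student matches the softened outputs of several teachers under per-teacher weights. Here the weights are fixed constants for illustration, whereas the paper's Re-DTS strategy assigns them dynamically with a reinforcement policy.

```python
import torch
import torch.nn.functional as F

def multi_teacher_kd_loss(student_logits, teacher_logits_list, weights, T=2.0):
    """Weighted multi-teacher distillation: KL divergence between the
    student's and each teacher's temperature-softened outputs."""
    s = F.log_softmax(student_logits / T, dim=-1)
    loss = 0.0
    for w, t_logits in zip(weights, teacher_logits_list):
        t = F.softmax(t_logits / T, dim=-1)
        loss = loss + w * F.kl_div(s, t, reduction="batchmean") * T * T
    return loss

student = torch.randn(4, 2, requires_grad=True)    # batch of 4 patches, 2 classes
teachers = [torch.randn(4, 2) for _ in range(3)]   # copy-move / splicing / inpainting teachers
weights = torch.tensor([0.5, 0.3, 0.2])            # fixed here; Re-DTS would set these dynamically
print(multi_teacher_kd_loss(student, teachers, weights))
```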
2504.05227 | Julio Silva-Rodr\'iguez | Julio Silva-Rodr\'iguez, Jose Dolz and Ismail Ben Ayed | A Reality Check of Vision-Language Pre-training in Radiology: Have We
Progressed Using Text? | IPMI 2025. Code and weights: https://github.com/jusiro/DLILP | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Vision-language pre-training has recently gained popularity as it allows
learning rich feature representations using large-scale data sources. This
paradigm has quickly made its way into the medical image analysis community. In
particular, there is an impressive amount of recent literature developing
vision-language models for radiology. However, the available medical datasets
with image-text supervision are scarce, and medical concepts are fine-grained,
involving expert knowledge that existing vision-language models struggle to
encode. In this paper, we propose to take a prudent step back from the
literature and revisit supervised, unimodal pre-training, using fine-grained
labels instead. We conduct an extensive comparison demonstrating that unimodal
pre-training is highly competitive and better suited to integrating
heterogeneous data sources. Our results also question the potential of recent
vision-language models for open-vocabulary generalization, which have been
evaluated using optimistic experimental settings. Finally, we study novel
alternatives to better integrate fine-grained labels and noisy text
supervision.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 16:13:26 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Silva-Rodríguez",
"Julio",
""
],
[
"Dolz",
"Jose",
""
],
[
"Ayed",
"Ismail Ben",
""
]
] | TITLE: A Reality Check of Vision-Language Pre-training in Radiology: Have We
Progressed Using Text?
ABSTRACT: Vision-language pre-training has recently gained popularity as it allows
learning rich feature representations using large-scale data sources. This
paradigm has quickly made its way into the medical image analysis community. In
particular, there is an impressive amount of recent literature developing
vision-language models for radiology. However, the available medical datasets
with image-text supervision are scarce, and medical concepts are fine-grained,
involving expert knowledge that existing vision-language models struggle to
encode. In this paper, we propose to take a prudent step back from the
literature and revisit supervised, unimodal pre-training, using fine-grained
labels instead. We conduct an extensive comparison demonstrating that unimodal
pre-training is highly competitive and better suited to integrating
heterogeneous data sources. Our results also question the potential of recent
vision-language models for open-vocabulary generalization, which have been
evaluated using optimistic experimental settings. Finally, we study novel
alternatives to better integrate fine-grained labels and noisy text
supervision.
|
2504.05229 | Islam Eldifrawi Mr. | Islam Eldifrawi, Shengrui Wang, Amine Trabelsi | FinGrAct: A Framework for FINe-GRained Evaluation of ACTionability in
Explainable Automatic Fact-Checking | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | The field of explainable Automatic Fact-Checking (AFC) aims to enhance the
transparency and trustworthiness of automated fact-verification systems by
providing clear and comprehensible explanations. However, the effectiveness of
these explanations depends on their actionability -- their ability to empower
users to make informed decisions and mitigate misinformation. Despite
actionability being a critical property of high-quality explanations, no prior
research has proposed a dedicated method to evaluate it. This paper introduces
FinGrAct, a fine-grained evaluation framework that can access the web, and it
is designed to assess actionability in AFC explanations through well-defined
criteria and an evaluation dataset. FinGrAct surpasses state-of-the-art (SOTA)
evaluators, achieving the highest Pearson and Kendall correlation with human
judgments while demonstrating the lowest ego-centric bias, making it a more
robust evaluation approach for actionability evaluation in AFC.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 16:14:27 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Eldifrawi",
"Islam",
""
],
[
"Wang",
"Shengrui",
""
],
[
"Trabelsi",
"Amine",
""
]
] | TITLE: FinGrAct: A Framework for FINe-GRained Evaluation of ACTionability in
Explainable Automatic Fact-Checking
ABSTRACT: The field of explainable Automatic Fact-Checking (AFC) aims to enhance the
transparency and trustworthiness of automated fact-verification systems by
providing clear and comprehensible explanations. However, the effectiveness of
these explanations depends on their actionability -- their ability to empower
users to make informed decisions and mitigate misinformation. Despite
actionability being a critical property of high-quality explanations, no prior
research has proposed a dedicated method to evaluate it. This paper introduces
FinGrAct, a fine-grained evaluation framework that can access the web, and it
is designed to assess actionability in AFC explanations through well-defined
criteria and an evaluation dataset. FinGrAct surpasses state-of-the-art (SOTA)
evaluators, achieving the highest Pearson and Kendall correlation with human
judgments while demonstrating the lowest ego-centric bias, making it a more
robust evaluation approach for actionability evaluation in AFC.
|
2504.05238 | Guibo Luo | Zhekai Zhou, Guibo Luo, Mingzhi Chen, Zhenyu Weng, and Yuesheng Zhu | Federated Learning for Medical Image Classification: A Comprehensive
Benchmark | null | null | null | null | cs.CV cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The federated learning paradigm is well-suited for the field of medical image
analysis, as it can effectively cope with machine learning on isolated
multicenter data while protecting the privacy of participating parties.
However, current research on optimization algorithms in federated learning
often focuses on limited datasets and scenarios, primarily centered around
natural images, with insufficient comparative experiments in medical contexts.
In this work, we conduct a comprehensive evaluation of several state-of-the-art
federated learning algorithms in the context of medical imaging. We conduct a
fair comparison of classification models trained using various federated
learning algorithms across multiple medical imaging datasets. Additionally, we
evaluate system performance metrics, such as communication cost and
computational efficiency, while considering different federated learning
architectures. Our findings show that medical imaging datasets pose substantial
challenges for current federated learning optimization algorithms. No single
algorithm consistently delivers optimal performance across all medical
federated learning scenarios, and many optimization algorithms may underperform
when applied to these datasets. Our experiments provide a benchmark and
guidance for future research and application of federated learning in medical
imaging contexts. Furthermore, we propose an efficient and robust method that
combines generative techniques using denoising diffusion probabilistic models
with label smoothing to augment datasets, widely enhancing the performance of
federated learning on classification tasks across various medical imaging
datasets. Our code will be released on GitHub, offering a reliable and
comprehensive benchmark for future federated learning studies in medical
imaging.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 16:22:18 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Zhou",
"Zhekai",
""
],
[
"Luo",
"Guibo",
""
],
[
"Chen",
"Mingzhi",
""
],
[
"Weng",
"Zhenyu",
""
],
[
"Zhu",
"Yuesheng",
""
]
] | TITLE: Federated Learning for Medical Image Classification: A Comprehensive
Benchmark
ABSTRACT: The federated learning paradigm is well-suited for the field of medical image
analysis, as it can effectively cope with machine learning on isolated
multicenter data while protecting the privacy of participating parties.
However, current research on optimization algorithms in federated learning
often focuses on limited datasets and scenarios, primarily centered around
natural images, with insufficient comparative experiments in medical contexts.
In this work, we conduct a comprehensive evaluation of several state-of-the-art
federated learning algorithms in the context of medical imaging. We conduct a
fair comparison of classification models trained using various federated
learning algorithms across multiple medical imaging datasets. Additionally, we
evaluate system performance metrics, such as communication cost and
computational efficiency, while considering different federated learning
architectures. Our findings show that medical imaging datasets pose substantial
challenges for current federated learning optimization algorithms. No single
algorithm consistently delivers optimal performance across all medical
federated learning scenarios, and many optimization algorithms may underperform
when applied to these datasets. Our experiments provide a benchmark and
guidance for future research and application of federated learning in medical
imaging contexts. Furthermore, we propose an efficient and robust method that
combines generative techniques using denoising diffusion probabilistic models
with label smoothing to augment datasets, widely enhancing the performance of
federated learning on classification tasks across various medical imaging
datasets. Our code will be released on GitHub, offering a reliable and
comprehensive benchmark for future federated learning studies in medical
imaging.
|
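Of the augmentation recipe mentioned in the record above, only the label-smoothing half is easy to show compactly; the diffusion-based image synthesis is omitted. The epsilon value below is an illustrative default, not necessarily the benchmark's setting.

```python
import numpy as np

def smooth_labels(labels, n_classes, eps=0.1):
    """Label smoothing: keep 1 - eps on the true class and spread
    eps uniformly over all classes."""
    one_hot = np.eye(n_classes)[labels]
    return one_hot * (1.0 - eps) + eps / n_classes

print(smooth_labels(np.array([0, 2]), n_classes=3))
```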
2504.05245 | Afsaneh Mahanipour | Afsaneh Mahanipour, Hana Khamfroush | Embedded Federated Feature Selection with Dynamic Sparse Training:
Balancing Accuracy-Cost Tradeoffs | This paper has been accepted for presentation at IJCNN 2025 | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Federated Learning (FL) enables multiple resource-constrained edge devices
with varying levels of heterogeneity to collaboratively train a global model.
However, devices with limited capacity can create bottlenecks and slow down
model convergence. One effective approach to addressing this issue is to use an
efficient feature selection method, which reduces overall resource demands by
minimizing communication and computation costs, thereby mitigating the impact
of struggling nodes. Existing federated feature selection (FFS) methods are
either considered as a separate step from FL or rely on a third party. These
approaches increase computation and communication overhead, making them
impractical for real-world high-dimensional datasets. To address this, we
present \textit{Dynamic Sparse Federated Feature Selection} (DSFFS), the first
innovative embedded FFS that is efficient in both communication and
computation. In the proposed method, feature selection occurs simultaneously
with model training. During training, input-layer neurons, their connections,
and hidden-layer connections are dynamically pruned and regrown, eliminating
uninformative features. This process enhances computational efficiency on
devices, improves network communication efficiency, and boosts global model
performance. Several experiments are conducted on nine real-world datasets of
varying dimensionality from diverse domains, including biology, image, speech,
and text. The results under a realistic non-iid data distribution setting show
that our approach achieves a better trade-off between accuracy, computation,
and communication costs by selecting more informative features compared to
other state-of-the-art FFS methods.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 16:33:05 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Mahanipour",
"Afsaneh",
""
],
[
"Khamfroush",
"Hana",
""
]
] | TITLE: Embedded Federated Feature Selection with Dynamic Sparse Training:
Balancing Accuracy-Cost Tradeoffs
ABSTRACT: Federated Learning (FL) enables multiple resource-constrained edge devices
with varying levels of heterogeneity to collaboratively train a global model.
However, devices with limited capacity can create bottlenecks and slow down
model convergence. One effective approach to addressing this issue is to use an
efficient feature selection method, which reduces overall resource demands by
minimizing communication and computation costs, thereby mitigating the impact
of struggling nodes. Existing federated feature selection (FFS) methods are
either considered as a separate step from FL or rely on a third party. These
approaches increase computation and communication overhead, making them
impractical for real-world high-dimensional datasets. To address this, we
present \textit{Dynamic Sparse Federated Feature Selection} (DSFFS), the first
innovative embedded FFS that is efficient in both communication and
computation. In the proposed method, feature selection occurs simultaneously
with model training. During training, input-layer neurons, their connections,
and hidden-layer connections are dynamically pruned and regrown, eliminating
uninformative features. This process enhances computational efficiency on
devices, improves network communication efficiency, and boosts global model
performance. Several experiments are conducted on nine real-world datasets of
varying dimensionality from diverse domains, including biology, image, speech,
and text. The results under a realistic non-iid data distribution setting show
that our approach achieves a better trade-off between accuracy, computation,
and communication costs by selecting more informative features compared to
other state-of-the-art FFS methods.
|
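The prune-and-regrow dynamic described in the record above can be sketched on a single input layer: drop the smallest-magnitude connections, regrow a few pruned ones, and treat features whose connections survive as the selected ones. The magnitude criterion, random regrowth, and sparsity levels here are illustrative assumptions, not the DSFFS schedule.

```python
import numpy as np

def prune_and_regrow(weights, sparsity=0.5, regrow_frac=0.1, rng=None):
    """One dynamic-sparse-training step on an input layer: drop the
    smallest-magnitude connections, then regrow a random fraction of
    the pruned ones (illustrative, not the DSFFS schedule)."""
    rng = rng or np.random.default_rng(0)
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    threshold = np.partition(flat, k)[k]
    mask = np.abs(weights) >= threshold
    pruned_idx = np.flatnonzero(~mask.ravel())
    regrow = rng.choice(pruned_idx, size=int(regrow_frac * pruned_idx.size),
                        replace=False)
    mask.ravel()[regrow] = True
    return weights * mask, mask

w = np.random.default_rng(1).normal(size=(4, 8))   # (n_features, n_hidden) input-layer weights
w_sparse, mask = prune_and_regrow(w)
feature_scores = np.abs(w_sparse).sum(axis=1)      # features with surviving connections are kept
print(feature_scores)
```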
2504.05249 | Olaf Wysocki | Wenzhao Tang, Weihang Li, Xiucheng Liang, Olaf Wysocki, Filip
Biljecki, Christoph Holst, Boris Jutzi | Texture2LoD3: Enabling LoD3 Building Reconstruction With Panoramic
Images | Accepted for CVPRW '25 | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Despite recent advancements in surface reconstruction, Level of Detail (LoD)
3 building reconstruction remains an unresolved challenge. The main issue
pertains to the object-oriented modelling paradigm, which requires
georeferencing, watertight geometry, facade semantics, and low-poly
representation -- contrasting with unstructured mesh-oriented models. In
Texture2LoD3, we introduce a novel method leveraging the ubiquity of 3D
building model priors and panoramic street-level images, enabling the
reconstruction of LoD3 building models. We observe that prior low-detail
building models can serve as valid planar targets for ortho-rectifying
street-level panoramic images. Moreover, deploying segmentation on accurately
textured low-level building surfaces supports maintaining essential
georeferencing, watertight geometry, and low-poly representation for LoD3
reconstruction. In the absence of LoD3 validation data, we additionally
introduce the ReLoD3 dataset, on which we experimentally demonstrate that our
method leads to improved facade segmentation accuracy by 11% and can replace
costly manual projections. We believe that Texture2LoD3 can scale the adoption
of LoD3 models, opening applications in estimating building solar potential or
enhancing autonomous driving simulations. The project website, code, and data
are available here: https://wenzhaotang.github.io/Texture2LoD3/.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 16:40:16 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Tang",
"Wenzhao",
""
],
[
"Li",
"Weihang",
""
],
[
"Liang",
"Xiucheng",
""
],
[
"Wysocki",
"Olaf",
""
],
[
"Biljecki",
"Filip",
""
],
[
"Holst",
"Christoph",
""
],
[
"Jutzi",
"Boris",
""
]
] | TITLE: Texture2LoD3: Enabling LoD3 Building Reconstruction With Panoramic
Images
ABSTRACT: Despite recent advancements in surface reconstruction, Level of Detail (LoD)
3 building reconstruction remains an unresolved challenge. The main issue
pertains to the object-oriented modelling paradigm, which requires
georeferencing, watertight geometry, facade semantics, and low-poly
representation -- contrasting with unstructured mesh-oriented models. In
Texture2LoD3, we introduce a novel method leveraging the ubiquity of 3D
building model priors and panoramic street-level images, enabling the
reconstruction of LoD3 building models. We observe that prior low-detail
building models can serve as valid planar targets for ortho-rectifying
street-level panoramic images. Moreover, deploying segmentation on accurately
textured low-level building surfaces supports maintaining essential
georeferencing, watertight geometry, and low-poly representation for LoD3
reconstruction. In the absence of LoD3 validation data, we additionally
introduce the ReLoD3 dataset, on which we experimentally demonstrate that our
method leads to improved facade segmentation accuracy by 11% and can replace
costly manual projections. We believe that Texture2LoD3 can scale the adoption
of LoD3 models, opening applications in estimating building solar potential or
enhancing autonomous driving simulations. The project website, code, and data
are available here: https://wenzhaotang.github.io/Texture2LoD3/.
|
2504.05253 | Ben Lonnqvist | Ben Lonnqvist, Elsa Scialom, Abdulkadir Gokce, Zehra Merchant, Michael
H. Herzog, Martin Schrimpf | Contour Integration Underlies Human-Like Vision | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Despite the tremendous success of deep learning in computer vision, models
still fall behind humans in generalizing to new input distributions. Existing
benchmarks do not investigate the specific failure points of models by
analyzing performance under many controlled conditions. Our study
systematically dissects where and why models struggle with contour integration
-- a hallmark of human vision -- by designing an experiment that tests object
recognition under various levels of object fragmentation. Humans (n=50) perform
at high accuracy, even with few object contours present. This is in contrast to
models which exhibit substantially lower sensitivity to increasing object
contours, with most of the over 1,000 models we tested barely performing above
chance. Only at very large scales ($\sim5B$ training dataset size) do models
begin to approach human performance. Importantly, humans exhibit an integration
bias -- a preference towards recognizing objects made up of directional
fragments over directionless fragments. We find that not only do models that
share this property perform better at our task, but that this bias also
increases with model training dataset size, and training models to exhibit
contour integration leads to high shape bias. Taken together, our results
suggest that contour integration is a hallmark of object vision that underlies
object recognition performance, and may be a mechanism learned from data at
scale.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 16:45:06 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Lonnqvist",
"Ben",
""
],
[
"Scialom",
"Elsa",
""
],
[
"Gokce",
"Abdulkadir",
""
],
[
"Merchant",
"Zehra",
""
],
[
"Herzog",
"Michael H.",
""
],
[
"Schrimpf",
"Martin",
""
]
] | TITLE: Contour Integration Underlies Human-Like Vision
ABSTRACT: Despite the tremendous success of deep learning in computer vision, models
still fall behind humans in generalizing to new input distributions. Existing
benchmarks do not investigate the specific failure points of models by
analyzing performance under many controlled conditions. Our study
systematically dissects where and why models struggle with contour integration
-- a hallmark of human vision -- by designing an experiment that tests object
recognition under various levels of object fragmentation. Humans (n=50) perform
at high accuracy, even with few object contours present. This is in contrast to
models which exhibit substantially lower sensitivity to increasing object
contours, with most of the over 1,000 models we tested barely performing above
chance. Only at very large scales ($\sim5B$ training dataset size) do models
begin to approach human performance. Importantly, humans exhibit an integration
bias -- a preference towards recognizing objects made up of directional
fragments over directionless fragments. We find that not only do models that
share this property perform better at our task, but that this bias also
increases with model training dataset size, and training models to exhibit
contour integration leads to high shape bias. Taken together, our results
suggest that contour integration is a hallmark of object vision that underlies
object recognition performance, and may be a mechanism learned from data at
scale.
|
2504.05254 | Sara Pohland | Sara Pohland and Claire Tomlin | Explaining Low Perception Model Competency with High-Competency
Counterfactuals | null | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | There exist many methods to explain how an image classification model
generates its decision, but very little work has explored methods to explain
why a classifier might lack confidence in its prediction. As there are various
reasons the classifier might lose confidence, it would be valuable for this
model to not only indicate its level of uncertainty but also explain why it is
uncertain. Counterfactual images have been used to visualize changes that could
be made to an image to generate a different classification decision. In this
work, we explore the use of counterfactuals to offer an explanation for low
model competency--a generalized form of predictive uncertainty that measures
confidence. Toward this end, we develop five novel methods to generate
high-competency counterfactual images, namely Image Gradient Descent (IGD),
Feature Gradient Descent (FGD), Autoencoder Reconstruction (Reco), Latent
Gradient Descent (LGD), and Latent Nearest Neighbors (LNN). We evaluate these
methods across two unique datasets containing images with six known causes for
low model competency and find Reco, LGD, and LNN to be the most promising
methods for counterfactual generation. We further evaluate how these three
methods can be utilized by pre-trained Multimodal Large Language Models (MLLMs)
to generate language explanations for low model competency. We find that the
inclusion of a counterfactual image in the language model query greatly
increases the ability of the model to generate an accurate explanation for the
cause of low model competency, thus demonstrating the utility of counterfactual
images in explaining low perception model competency.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 16:46:52 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Pohland",
"Sara",
""
],
[
"Tomlin",
"Claire",
""
]
] | TITLE: Explaining Low Perception Model Competency with High-Competency
Counterfactuals
ABSTRACT: There exist many methods to explain how an image classification model
generates its decision, but very little work has explored methods to explain
why a classifier might lack confidence in its prediction. As there are various
reasons the classifier might lose confidence, it would be valuable for this
model to not only indicate its level of uncertainty but also explain why it is
uncertain. Counterfactual images have been used to visualize changes that could
be made to an image to generate a different classification decision. In this
work, we explore the use of counterfactuals to offer an explanation for low
model competency--a generalized form of predictive uncertainty that measures
confidence. Toward this end, we develop five novel methods to generate
high-competency counterfactual images, namely Image Gradient Descent (IGD),
Feature Gradient Descent (FGD), Autoencoder Reconstruction (Reco), Latent
Gradient Descent (LGD), and Latent Nearest Neighbors (LNN). We evaluate these
methods across two unique datasets containing images with six known causes for
low model competency and find Reco, LGD, and LNN to be the most promising
methods for counterfactual generation. We further evaluate how these three
methods can be utilized by pre-trained Multimodal Large Language Models (MLLMs)
to generate language explanations for low model competency. We find that the
inclusion of a counterfactual image in the language model query greatly
increases the ability of the model to generate an accurate explanation for the
cause of low model competency, thus demonstrating the utility of counterfactual
images in explaining low perception model competency.
|
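As a toy rendering of the Latent Gradient Descent idea in the record above, the sketch below nudges a latent code until a competency score improves and decodes the result as a counterfactual. The decoder and competency estimator are stand-in modules; the paper's trained models and loss terms are not reproduced.

```python
import torch

def latent_gradient_descent(decoder, competency, z_init, steps=200, lr=0.05):
    """LGD-style sketch: adjust a latent code so that the decoded image is
    judged competent, yielding a high-competency counterfactual."""
    z = z_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = -competency(decoder(z))       # maximize the competency score
        loss.backward()
        opt.step()
    return decoder(z).detach(), z.detach()

# Toy stand-ins: a linear "decoder" and a competency score peaked at the zero image.
decoder = torch.nn.Linear(8, 16)
competency = lambda img: -img.pow(2).mean()
cf_image, cf_latent = latent_gradient_descent(decoder, competency, torch.randn(8))
print(cf_image.shape)
```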
2504.05265 | German Barquero | German Barquero, Nadine Bertsch, Manojkumar Marramreddy, Carlos
Chac\'on, Filippo Arcadu, Ferran Rigual, Nicky Sijia He, Cristina Palmero,
Sergio Escalera, Yuting Ye, Robin Kips | From Sparse Signal to Smooth Motion: Real-Time Motion Generation with
Rolling Prediction Models | Published in CVPR'25. Webpage: https://barquerogerman.github.io/RPM/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In extended reality (XR), generating full-body motion of the users is
important to understand their actions, drive their virtual avatars for social
interaction, and convey a realistic sense of presence. While prior works
focused on spatially sparse and always-on input signals from motion
controllers, many XR applications opt for vision-based hand tracking for
reduced user friction and better immersion. Compared to controllers, hand
tracking signals are less accurate and can even be missing for an extended
period of time. To handle such unreliable inputs, we present Rolling Prediction
Model (RPM), an online and real-time approach that generates smooth full-body
motion from temporally and spatially sparse input signals. Our model generates
1) accurate motion that matches the inputs (i.e., tracking mode) and 2)
plausible motion when inputs are missing (i.e., synthesis mode). More
importantly, RPM generates seamless transitions from tracking to synthesis, and
vice versa. To demonstrate the practical importance of handling noisy and
missing inputs, we present GORP, the first dataset of realistic sparse inputs
from a commercial virtual reality (VR) headset with paired high quality body
motion ground truth. GORP provides >14 hours of VR gameplay data from 28 people
using motion controllers (spatially sparse) and hand tracking (spatially and
temporally sparse). We benchmark RPM against the state of the art on both
synthetic data and GORP to highlight how we can bridge the gap for real-world
applications with a realistic dataset and by handling unreliable input signals.
Our code, pretrained models, and GORP dataset are available in the project
webpage.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 17:00:34 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Barquero",
"German",
""
],
[
"Bertsch",
"Nadine",
""
],
[
"Marramreddy",
"Manojkumar",
""
],
[
"Chacón",
"Carlos",
""
],
[
"Arcadu",
"Filippo",
""
],
[
"Rigual",
"Ferran",
""
],
[
"He",
"Nicky Sijia",
""
],
[
"Palmero",
"Cristina",
""
],
[
"Escalera",
"Sergio",
""
],
[
"Ye",
"Yuting",
""
],
[
"Kips",
"Robin",
""
]
] | TITLE: From Sparse Signal to Smooth Motion: Real-Time Motion Generation with
Rolling Prediction Models
ABSTRACT: In extended reality (XR), generating full-body motion of the users is
important to understand their actions, drive their virtual avatars for social
interaction, and convey a realistic sense of presence. While prior works
focused on spatially sparse and always-on input signals from motion
controllers, many XR applications opt for vision-based hand tracking for
reduced user friction and better immersion. Compared to controllers, hand
tracking signals are less accurate and can even be missing for an extended
period of time. To handle such unreliable inputs, we present Rolling Prediction
Model (RPM), an online and real-time approach that generates smooth full-body
motion from temporally and spatially sparse input signals. Our model generates
1) accurate motion that matches the inputs (i.e., tracking mode) and 2)
plausible motion when inputs are missing (i.e., synthesis mode). More
importantly, RPM generates seamless transitions from tracking to synthesis, and
vice versa. To demonstrate the practical importance of handling noisy and
missing inputs, we present GORP, the first dataset of realistic sparse inputs
from a commercial virtual reality (VR) headset with paired high quality body
motion ground truth. GORP provides >14 hours of VR gameplay data from 28 people
using motion controllers (spatially sparse) and hand tracking (spatially and
temporally sparse). We benchmark RPM against the state of the art on both
synthetic data and GORP to highlight how we can bridge the gap for real-world
applications with a realistic dataset and by handling unreliable input signals.
Our code, pretrained models, and GORP dataset are available in the project
webpage.
|
2504.05276 | Yucheng Chu | Yucheng Chu, Peng He, Hang Li, Haoyu Han, Kaiqi Yang, Yu Xue, Tingting
Li, Joseph Krajcik and Jiliang Tang | Enhancing LLM-Based Short Answer Grading with Retrieval-Augmented
Generation | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Short answer assessment is a vital component of science education, allowing
evaluation of students' complex three-dimensional understanding. Large language
models (LLMs) that possess human-like ability in linguistic tasks are
increasingly popular in assisting human graders to reduce their workload.
However, LLMs' limitations in domain knowledge restrict their understanding of
task-specific requirements and hinder their ability to achieve satisfactory
performance. Retrieval-augmented generation (RAG) emerges as a promising
solution by enabling LLMs to access relevant domain-specific knowledge during
assessment. In this work, we propose an adaptive RAG framework for automated
grading that dynamically retrieves and incorporates domain-specific knowledge
based on the question and student answer context. Our approach combines
semantic search and curated educational sources to retrieve valuable reference
materials. Experimental results on a science education dataset demonstrate that
our system achieves an improvement in grading accuracy compared to baseline LLM
approaches. The findings suggest that RAG-enhanced grading systems can serve as
reliable support with efficient performance gains.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 17:17:41 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Chu",
"Yucheng",
""
],
[
"He",
"Peng",
""
],
[
"Li",
"Hang",
""
],
[
"Han",
"Haoyu",
""
],
[
"Yang",
"Kaiqi",
""
],
[
"Xue",
"Yu",
""
],
[
"Li",
"Tingting",
""
],
[
"Krajcik",
"Joseph",
""
],
[
"Tang",
"Jiliang",
""
]
] | TITLE: Enhancing LLM-Based Short Answer Grading with Retrieval-Augmented
Generation
ABSTRACT: Short answer assessment is a vital component of science education, allowing
evaluation of students' complex three-dimensional understanding. Large language
models (LLMs) that possess human-like ability in linguistic tasks are
increasingly popular in assisting human graders to reduce their workload.
However, LLMs' limitations in domain knowledge restrict their understanding of
task-specific requirements and hinder their ability to achieve satisfactory
performance. Retrieval-augmented generation (RAG) emerges as a promising
solution by enabling LLMs to access relevant domain-specific knowledge during
assessment. In this work, we propose an adaptive RAG framework for automated
grading that dynamically retrieves and incorporates domain-specific knowledge
based on the question and student answer context. Our approach combines
semantic search and curated educational sources to retrieve valuable reference
materials. Experimental results on a science education dataset demonstrate that
our system achieves an improvement in grading accuracy compared to baseline LLM
approaches. The findings suggest that RAG-enhanced grading systems can serve as
reliable support with efficient performance gains.
|
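The retrieval step described in the record above can be illustrated with a toy semantic search: embed the question and answer, score curated reference snippets by similarity, and prepend the top hits to the grading prompt. The hashing embedding and prompt wording below are placeholders, not the framework's actual encoder or template.

```python
import numpy as np

def embed(text, dim=64):
    """Toy hashing bag-of-words embedding (stand-in for a real encoder)."""
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    return v / (np.linalg.norm(v) + 1e-9)

def retrieve(query, corpus, k=2):
    q = embed(query)
    return sorted(corpus, key=lambda doc: -float(embed(doc) @ q))[:k]

def build_grading_prompt(question, answer, corpus):
    refs = retrieve(question + " " + answer, corpus)
    context = "\n".join(f"- {r}" for r in refs)
    return (f"Reference material:\n{context}\n\n"
            f"Question: {question}\nStudent answer: {answer}\n"
            f"Grade the answer against the rubric and references.")

corpus = ["Photosynthesis converts light energy into chemical energy.",
          "Mitochondria produce ATP through cellular respiration."]
print(build_grading_prompt("What does photosynthesis do?",
                           "It turns sunlight into chemical energy.", corpus))
```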
2504.05288 | Dongping Chen | Mingyang Fu, Yuyang Peng, Benlin Liu, Yao Wan, Dongping Chen | LiveVQA: Live Visual Knowledge Seeking | Work in progress | null | null | null | cs.CV cs.CL | http://creativecommons.org/licenses/by/4.0/ | We introduce LiveVQA, an automatically collected dataset of the latest visual
knowledge from the Internet with synthesized VQA problems. LiveVQA consists of
3,602 single- and multi-hop visual questions from 6 news websites across 14
news categories, featuring high-quality image-text coherence and authentic
information. Our evaluation across 15 MLLMs (e.g., GPT-4o, Gemma-3, and
Qwen-2.5-VL family) demonstrates that stronger models perform better overall,
with advanced visual reasoning capabilities proving crucial for complex
multi-hop questions. Despite excellent performance on textual problems, models
with tools like search engines still show significant gaps when addressing
visual questions requiring the latest visual knowledge, highlighting important
areas for future research.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 17:39:31 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Fu",
"Mingyang",
""
],
[
"Peng",
"Yuyang",
""
],
[
"Liu",
"Benlin",
""
],
[
"Wan",
"Yao",
""
],
[
"Chen",
"Dongping",
""
]
] | TITLE: LiveVQA: Live Visual Knowledge Seeking
ABSTRACT: We introduce LiveVQA, an automatically collected dataset of the latest visual
knowledge from the Internet with synthesized VQA problems. LiveVQA consists of
3,602 single- and multi-hop visual questions from 6 news websites across 14
news categories, featuring high-quality image-text coherence and authentic
information. Our evaluation across 15 MLLMs (e.g., GPT-4o, Gemma-3, and
Qwen-2.5-VL family) demonstrates that stronger models perform better overall,
with advanced visual reasoning capabilities proving crucial for complex
multi-hop questions. Despite excellent performance on textual problems, models
with tools like search engines still show significant gaps when addressing
visual questions requiring the latest visual knowledge, highlighting important
areas for future research.
|
2504.05291 | Tariq Iqbal | Haley N. Green, Tariq Iqbal | Using Physiological Measures, Gaze, and Facial Expressions to Model
Human Trust in a Robot Partner | Accepted at the IEEE International Conference on Robotics and
Automation (ICRA), 2025 | IEEE International Conference on Robotics and Automation (ICRA),
2025 | null | null | cs.RO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | With robots becoming increasingly prevalent in various domains, it has become
crucial to equip them with tools to achieve greater fluency in interactions
with humans. One of the promising areas for further exploration lies in human
trust. A real-time, objective model of human trust could be used to maximize
productivity, preserve safety, and mitigate failure. In this work, we attempt
to use physiological measures, gaze, and facial expressions to model human
trust in a robot partner. We are the first to design an in-person, human-robot
supervisory interaction study to create a dedicated trust dataset. Using this
dataset, we train machine learning algorithms to identify the objective
measures that are most indicative of trust in a robot partner, advancing trust
prediction in human-robot interactions. Our findings indicate that a
combination of sensor modalities (blood volume pulse, electrodermal activity,
skin temperature, and gaze) can enhance the accuracy of detecting human trust
in a robot partner. Furthermore, the Extra Trees, Random Forest, and Decision
Trees classifiers exhibit consistently better performance in measuring the
person's trust in the robot partner. These results lay the groundwork for
constructing a real-time trust model for human-robot interaction, which could
foster more efficient interactions between humans and robots.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 17:45:17 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Green",
"Haley N.",
""
],
[
"Iqbal",
"Tariq",
""
]
] | TITLE: Using Physiological Measures, Gaze, and Facial Expressions to Model
Human Trust in a Robot Partner
ABSTRACT: With robots becoming increasingly prevalent in various domains, it has become
crucial to equip them with tools to achieve greater fluency in interactions
with humans. One of the promising areas for further exploration lies in human
trust. A real-time, objective model of human trust could be used to maximize
productivity, preserve safety, and mitigate failure. In this work, we attempt
to use physiological measures, gaze, and facial expressions to model human
trust in a robot partner. We are the first to design an in-person, human-robot
supervisory interaction study to create a dedicated trust dataset. Using this
dataset, we train machine learning algorithms to identify the objective
measures that are most indicative of trust in a robot partner, advancing trust
prediction in human-robot interactions. Our findings indicate that a
combination of sensor modalities (blood volume pulse, electrodermal activity,
skin temperature, and gaze) can enhance the accuracy of detecting human trust
in a robot partner. Furthermore, the Extra Trees, Random Forest, and Decision
Trees classifiers exhibit consistently better performance in measuring the
person's trust in the robot partner. These results lay the groundwork for
constructing a real-time trust model for human-robot interaction, which could
foster more efficient interactions between humans and robots.
|
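A minimal sketch of the modeling setup in the record above: train an Extra Trees classifier on multimodal features and check cross-validated accuracy. The features and labels below are synthetic stand-ins, not the study's dataset.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# Synthetic stand-in features: [blood volume pulse, EDA, skin temperature, gaze-on-robot ratio]
X = rng.normal(size=(200, 4))
y = (X[:, 1] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=200) > 0).astype(int)  # trust / no trust

clf = ExtraTreesClassifier(n_estimators=200, random_state=0)
print(cross_val_score(clf, X, y, cv=5).mean())
```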
2504.05298 | Yu Sun | Karan Dalal, Daniel Koceja, Gashon Hussein, Jiarui Xu, Yue Zhao,
Youjin Song, Shihao Han, Ka Chun Cheung, Jan Kautz, Carlos Guestrin,
Tatsunori Hashimoto, Sanmi Koyejo, Yejin Choi, Yu Sun, Xiaolong Wang | One-Minute Video Generation with Test-Time Training | CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Transformers today still struggle to generate one-minute videos because
self-attention layers are inefficient for long context. Alternatives such as
Mamba layers struggle with complex multi-scene stories because their hidden
states are less expressive. We experiment with Test-Time Training (TTT) layers,
whose hidden states themselves can be neural networks, therefore more
expressive. Adding TTT layers into a pre-trained Transformer enables it to
generate one-minute videos from text storyboards. For proof of concept, we
curate a dataset based on Tom and Jerry cartoons. Compared to baselines such as
Mamba 2, Gated DeltaNet, and sliding-window attention layers, TTT layers
generate much more coherent videos that tell complex stories, leading by 34 Elo
points in a human evaluation of 100 videos per method. Although promising,
results still contain artifacts, likely due to the limited capability of the
pre-trained 5B model. The efficiency of our implementation can also be
improved. We have only experimented with one-minute videos due to resource
constraints, but the approach can be extended to longer videos and more complex
stories. Sample videos, code and annotations are available at:
https://test-time-training.github.io/video-dit
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 17:56:31 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Dalal",
"Karan",
""
],
[
"Koceja",
"Daniel",
""
],
[
"Hussein",
"Gashon",
""
],
[
"Xu",
"Jiarui",
""
],
[
"Zhao",
"Yue",
""
],
[
"Song",
"Youjin",
""
],
[
"Han",
"Shihao",
""
],
[
"Cheung",
"Ka Chun",
""
],
[
"Kautz",
"Jan",
""
],
[
"Guestrin",
"Carlos",
""
],
[
"Hashimoto",
"Tatsunori",
""
],
[
"Koyejo",
"Sanmi",
""
],
[
"Choi",
"Yejin",
""
],
[
"Sun",
"Yu",
""
],
[
"Wang",
"Xiaolong",
""
]
] | TITLE: One-Minute Video Generation with Test-Time Training
ABSTRACT: Transformers today still struggle to generate one-minute videos because
self-attention layers are inefficient for long context. Alternatives such as
Mamba layers struggle with complex multi-scene stories because their hidden
states are less expressive. We experiment with Test-Time Training (TTT) layers,
whose hidden states can themselves be neural networks and are therefore more
expressive. Adding TTT layers into a pre-trained Transformer enables it to
generate one-minute videos from text storyboards. For proof of concept, we
curate a dataset based on Tom and Jerry cartoons. Compared to baselines such as
Mamba 2, Gated DeltaNet, and sliding-window attention layers, TTT layers
generate much more coherent videos that tell complex stories, leading by 34 Elo
points in a human evaluation of 100 videos per method. Although promising,
results still contain artifacts, likely due to the limited capability of the
pre-trained 5B model. The efficiency of our implementation can also be
improved. We have only experimented with one-minute videos due to resource
constraints, but the approach can be extended to longer videos and more complex
stories. Sample videos, code and annotations are available at:
https://test-time-training.github.io/video-dit
|
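The abstract above centers on Test-Time Training layers, whose hidden state is itself a small model updated by self-supervised gradient steps as the sequence is processed. The toy sketch below illustrates only that core idea; the dimensions, the fixed corruption projection, the learning rate, and the single-step update rule are simplifying assumptions and do not reproduce the paper's architecture.

```python
# Toy sketch of a Test-Time Training (TTT) layer: the hidden state is the
# weight matrix of a tiny inner model, updated by one gradient step of a
# self-supervised reconstruction loss per token. All sizes are assumptions.
import numpy as np

def ttt_layer(tokens, dim, lr=0.1, seed=0):
    rng = np.random.default_rng(seed)
    W = np.zeros((dim, dim))                    # inner-model weights = hidden state
    P = rng.normal(scale=0.1, size=(dim, dim))  # fixed projection producing a "view"
    outputs = []
    for x in tokens:                            # x has shape (dim,)
        x_view = P @ x                          # self-supervised input view
        err = W @ x_view - x                    # reconstruction error
        grad = np.outer(err, x_view)            # gradient of 0.5 * ||W x_view - x||^2
        W = W - lr * grad                       # test-time update of the hidden state
        outputs.append(W @ x_view)              # output from the updated state
    return np.stack(outputs)

# Toy usage: a "sequence" of 16 random 8-dimensional tokens.
seq = np.random.default_rng(1).normal(size=(16, 8))
print(ttt_layer(list(seq), dim=8).shape)        # (16, 8)
```

Because the state here is a set of trainable weights rather than a fixed-size vector, it can in principle retain more of a long sequence than a conventional recurrent hidden state, which is the property the abstract appeals to.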
2504.05305 | Sangbeom Lim Samuel | Sangbeom Lim, Junwan Kim, Heeji Yoon, Jaewoo Jung, Seungryong Kim | URECA: Unique Region Caption Anything | Project page: https://cvlab-kaist.github.io/URECA Code:
https://github.com/cvlab-kaist/URECA | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Region-level captioning aims to generate natural language descriptions for
specific image regions while highlighting their distinguishing features.
However, existing methods struggle to produce unique captions across multiple
levels of granularity, limiting their real-world applicability. To address the
need for detailed region-level understanding, we introduce the URECA dataset, a
large-scale dataset tailored for multi-granularity region captioning. Unlike
prior datasets that focus primarily on salient objects, the URECA dataset ensures a
unique and consistent mapping between regions and captions by incorporating a
diverse set of objects, parts, and background elements. Central to this is a
stage-wise data curation pipeline, where each stage incrementally refines
region selection and caption generation. By leveraging Multimodal Large
Language Models (MLLMs) at each stage, our pipeline produces distinctive and
contextually grounded captions with improved accuracy and semantic diversity.
Building upon this dataset, we present URECA, a novel captioning model designed
to effectively encode multi-granularity regions. URECA maintains essential
spatial properties such as position and shape through simple yet impactful
modifications to existing MLLMs, enabling fine-grained and semantically rich
region descriptions. Our approach introduces dynamic mask modeling and a
high-resolution mask encoder to enhance caption uniqueness. Experiments show
that URECA achieves state-of-the-art performance on the URECA dataset and
generalizes well to existing region-level captioning benchmarks.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 17:59:44 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Lim",
"Sangbeom",
""
],
[
"Kim",
"Junwan",
""
],
[
"Yoon",
"Heeji",
""
],
[
"Jung",
"Jaewoo",
""
],
[
"Kim",
"Seungryong",
""
]
] | TITLE: URECA: Unique Region Caption Anything
ABSTRACT: Region-level captioning aims to generate natural language descriptions for
specific image regions while highlighting their distinguishing features.
However, existing methods struggle to produce unique captions across multiple
levels of granularity, limiting their real-world applicability. To address the
need for detailed region-level understanding, we introduce the URECA dataset, a
large-scale dataset tailored for multi-granularity region captioning. Unlike
prior datasets that focus primarily on salient objects, the URECA dataset ensures a
unique and consistent mapping between regions and captions by incorporating a
diverse set of objects, parts, and background elements. Central to this is a
stage-wise data curation pipeline, where each stage incrementally refines
region selection and caption generation. By leveraging Multimodal Large
Language Models (MLLMs) at each stage, our pipeline produces distinctive and
contextually grounded captions with improved accuracy and semantic diversity.
Building upon this dataset, we present URECA, a novel captioning model designed
to effectively encode multi-granularity regions. URECA maintains essential
spatial properties such as position and shape through simple yet impactful
modifications to existing MLLMs, enabling fine-grained and semantically rich
region descriptions. Our approach introduces dynamic mask modeling and a
high-resolution mask encoder to enhance caption uniqueness. Experiments show
that URECA achieves state-of-the-art performance on the URECA dataset and
generalizes well to existing region-level captioning benchmarks.
|
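The URECA abstract stresses that a region's position and shape must survive encoding. As a loose illustration of how a binary region mask can be reduced to position- and shape-aware features, the toy function below pools the mask onto a coarse grid and appends a normalized bounding box; it is an assumption-laden stand-in, not the paper's dynamic mask modeling or high-resolution mask encoder.

```python
# Toy illustration: turn a binary region mask into features that retain coarse
# shape (grid occupancy) and position (normalized bounding box). This is not
# the URECA mask encoder; sizes and the pooling scheme are assumptions.
import numpy as np

def mask_features(mask, grid=16):
    h, w = mask.shape
    pooled = np.zeros((grid, grid))
    for i in range(grid):
        for j in range(grid):
            cell = mask[i * h // grid:(i + 1) * h // grid,
                        j * w // grid:(j + 1) * w // grid]
            pooled[i, j] = cell.mean() if cell.size else 0.0  # occupancy ~ shape
    ys, xs = np.nonzero(mask)
    if ys.size:
        # A normalized bounding box keeps coarse position information.
        bbox = np.array([ys.min() / h, xs.min() / w, ys.max() / h, xs.max() / w])
    else:
        bbox = np.zeros(4)
    return np.concatenate([pooled.ravel(), bbox])

# Toy usage: a rectangular region in a 256x256 image.
mask = np.zeros((256, 256), dtype=bool)
mask[64:128, 32:200] = True
print(mask_features(mask).shape)  # (16 * 16 + 4,) -> (260,)
```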
2208.10598 | Lanqin Yuan | Lanqin Yuan and Marian-Andrei Rizoiu | Generalizing Hate Speech Detection Using Multi-Task Learning: A Case
Study of Political Public Figures | null | Computer Speech & Language 89 (2025) 101690 | 10.1016/j.csl.2024.101690 | null | cs.CL cs.CY cs.SI | http://creativecommons.org/licenses/by/4.0/ | Automatic identification of hateful and abusive content is vital in combating
the spread of harmful online content and its damaging effects. Most existing
works evaluate models by examining the generalization error on train-test
splits on hate speech datasets. These datasets often differ in their
definitions and labeling criteria, leading to poor generalization performance
when predicting across new domains and datasets. This work proposes a new
Multi-task Learning (MTL) pipeline that trains simultaneously across multiple
hate speech datasets to construct a more encompassing classification model.
Using a dataset-level leave-one-out evaluation (designating a dataset for
testing and jointly training on all others), we trial the MTL detection on new,
previously unseen datasets. Our results consistently outperform a large sample
of existing work. We show strong results when examining the generalization
error in train-test splits and substantial improvements when predicting on
previously unseen datasets. Furthermore, we assemble a novel dataset, dubbed
PubFigs, focusing on the problematic speech of American Public Political
Figures. Using Amazon MTurk, we crowdsource-label more than $20,000$ tweets and
machine-label problematic speech in all the $305,235$ tweets in PubFigs. We
find that the abusive and hate tweeting mainly originates from right-leaning
figures and relates to six topics, including Islam, women, ethnicity, and
immigrants. We show that MTL builds embeddings that can simultaneously separate
abusive from hate speech, and identify its topics.
| [
{
"version": "v1",
"created": "Mon, 22 Aug 2022 21:13:38 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 05:08:13 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Yuan",
"Lanqin",
""
],
[
"Rizoiu",
"Marian-Andrei",
""
]
] | TITLE: Generalizing Hate Speech Detection Using Multi-Task Learning: A Case
Study of Political Public Figures
ABSTRACT: Automatic identification of hateful and abusive content is vital in combating
the spread of harmful online content and its damaging effects. Most existing
works evaluate models by examining the generalization error on train-test
splits on hate speech datasets. These datasets often differ in their
definitions and labeling criteria, leading to poor generalization performance
when predicting across new domains and datasets. This work proposes a new
Multi-task Learning (MTL) pipeline that trains simultaneously across multiple
hate speech datasets to construct a more encompassing classification model.
Using a dataset-level leave-one-out evaluation (designating a dataset for
testing and jointly training on all others), we trial the MTL detection on new,
previously unseen datasets. Our results consistently outperform a large sample
of existing work. We show strong results when examining the generalization
error in train-test splits and substantial improvements when predicting on
previously unseen datasets. Furthermore, we assemble a novel dataset, dubbed
PubFigs, focusing on the problematic speech of American Public Political
Figures. Using Amazon MTurk, we crowdsource-label more than $20,000$ tweets and
machine-label problematic speech in all the $305,235$ tweets in PubFigs. We
find that the abusive and hate tweeting mainly originates from right-leaning
figures and relates to six topics, including Islam, women, ethnicity, and
immigrants. We show that MTL builds embeddings that can simultaneously separate
abusive from hate speech, and identify its topics.
|
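The evaluation protocol described above, dataset-level leave-one-out, is straightforward to sketch: hold one corpus out for testing and jointly train on the union of the rest. In the toy example below, a TF-IDF plus logistic-regression classifier stands in for the paper's multi-task model, and the tiny corpora are invented placeholders rather than real hate speech datasets.

```python
# Toy sketch of dataset-level leave-one-out evaluation: train on the union of
# all other corpora and test on the held-out one. The classifier and the
# placeholder corpora are assumptions, not the paper's MTL pipeline.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

# Invented placeholder corpora; labels use 1 = problematic, 0 = benign.
datasets = {
    "corpus_a": (["text one", "text two", "text three", "text four"], [1, 0, 1, 0]),
    "corpus_b": (["text five", "text six", "text seven", "text eight"], [0, 1, 0, 1]),
    "corpus_c": (["text nine", "text ten", "text eleven", "text twelve"], [1, 1, 0, 0]),
}

for held_out in datasets:
    # Jointly train on every corpus except the held-out one.
    train_texts, train_labels = [], []
    for name, (texts, labels) in datasets.items():
        if name != held_out:
            train_texts += texts
            train_labels += labels
    model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
    model.fit(train_texts, train_labels)

    # Evaluate generalization to the unseen dataset.
    test_texts, test_labels = datasets[held_out]
    preds = model.predict(test_texts)
    print(held_out, "held-out F1 =", round(f1_score(test_labels, preds), 3))
```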