id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2504.03006 | Jing Gao | Jing Gao, Ce Zheng, Laszlo A. Jeni, Zackory Erickson | DiSRT-In-Bed: Diffusion-Based Sim-to-Real Transfer Framework for In-Bed
Human Mesh Recovery | 16 pages, 19 figures. Accepted to CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In-bed human mesh recovery can be crucial and enabling for several healthcare
applications, including sleep pattern monitoring, rehabilitation support, and
pressure ulcer prevention. However, it is difficult to collect large real-world
visual datasets in this domain, in part due to privacy and expense constraints,
which in turn presents significant challenges for training and deploying deep
learning models. Existing in-bed human mesh estimation methods often rely
heavily on real-world data, limiting their ability to generalize across
different in-bed scenarios, such as varying coverings and environmental
settings. To address this, we propose a Sim-to-Real Transfer Framework for
in-bed human mesh recovery from overhead depth images, which leverages
large-scale synthetic data alongside limited or no real-world samples. We
introduce a diffusion model that bridges the gap between synthetic data and
real data to support generalization in real-world in-bed pose and body
inference scenarios. Extensive experiments and ablation studies validate the
effectiveness of our framework, demonstrating significant improvements in
robustness and adaptability across diverse healthcare scenarios.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 19:57:16 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Gao",
"Jing",
""
],
[
"Zheng",
"Ce",
""
],
[
"Jeni",
"Laszlo A.",
""
],
[
"Erickson",
"Zackory",
""
]
] | TITLE: DiSRT-In-Bed: Diffusion-Based Sim-to-Real Transfer Framework for In-Bed
Human Mesh Recovery
ABSTRACT: In-bed human mesh recovery can be crucial and enabling for several healthcare
applications, including sleep pattern monitoring, rehabilitation support, and
pressure ulcer prevention. However, it is difficult to collect large real-world
visual datasets in this domain, in part due to privacy and expense constraints,
which in turn presents significant challenges for training and deploying deep
learning models. Existing in-bed human mesh estimation methods often rely
heavily on real-world data, limiting their ability to generalize across
different in-bed scenarios, such as varying coverings and environmental
settings. To address this, we propose a Sim-to-Real Transfer Framework for
in-bed human mesh recovery from overhead depth images, which leverages
large-scale synthetic data alongside limited or no real-world samples. We
introduce a diffusion model that bridges the gap between synthetic data and
real data to support generalization in real-world in-bed pose and body
inference scenarios. Extensive experiments and ablation studies validate the
effectiveness of our framework, demonstrating significant improvements in
robustness and adaptability across diverse healthcare scenarios.
|
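The sim-to-real bridging idea above can be made concrete with a minimal, hypothetical sketch: a real depth image is partially noised and then denoised by a diffusion model trained on synthetic depth maps, pulling it toward the synthetic distribution (an SDEdit-style scheme with DDIM-style updates). The `denoiser` network, its signature, and the noise schedule are assumptions for illustration, not the paper's actual module.

```python
# Minimal sketch of diffusion-based domain bridging (SDEdit-style), assuming a
# denoiser trained on synthetic depth maps; names and schedule are illustrative.
import torch

def bridge_to_synthetic(real_depth, denoiser, alphas_cumprod, t_start=400):
    """Noise a real depth map partway, then denoise with a synthetic-trained model."""
    a_t = alphas_cumprod[t_start]
    noise = torch.randn_like(real_depth)
    x = a_t.sqrt() * real_depth + (1 - a_t).sqrt() * noise  # forward-diffuse to t_start
    for t in range(t_start, 0, -1):                          # reverse denoising loop
        a_t, a_prev = alphas_cumprod[t], alphas_cumprod[t - 1]
        eps = denoiser(x, torch.tensor([t]))                 # predict injected noise (hypothetical API)
        x0 = (x - (1 - a_t).sqrt() * eps) / a_t.sqrt()       # estimate clean depth
        x = a_prev.sqrt() * x0 + (1 - a_prev).sqrt() * eps   # DDIM-style deterministic step
    return x  # depth map pulled toward the synthetic training distribution
```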
2504.03010 | Shaoyuan Xu Ph.D. | Shaoyuan Xu, Yang Cheng, Qian Lin, Jan P. Allebach | Emotion Recognition Using Convolutional Neural Networks | null | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Emotion plays an important role in daily life, as it helps people
communicate with and understand each other more efficiently. Facial expressions
can be classified into 7 categories: angry, disgust, fear, happy, neutral, sad
and surprise. How to detect and recognize these seven emotions has become a
popular topic in the past decade. In this paper, we develop an emotion
recognition system that operates on both still images and real-time videos
using deep learning.
We build our own emotion recognition classification and regression system
from scratch, which includes dataset collection, data preprocessing, model
training and testing. Given an image or a real-time video, our system is
able to show the classification and regression results for all of the 7
emotions. The proposed system was tested on 2 different datasets and achieved
an accuracy of over 80\%. Moreover, the result obtained from real-time testing
proves the feasibility of implementing convolutional neural networks in real
time to detect emotions accurately and efficiently.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 20:08:32 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Xu",
"Shaoyuan",
""
],
[
"Cheng",
"Yang",
""
],
[
"Lin",
"Qian",
""
],
[
"Allebach",
"Jan P.",
""
]
] | TITLE: Emotion Recognition Using Convolutional Neural Networks
ABSTRACT: Emotion plays an important role in daily life, as it helps people
communicate with and understand each other more efficiently. Facial expressions
can be classified into 7 categories: angry, disgust, fear, happy, neutral, sad
and surprise. How to detect and recognize these seven emotions has become a
popular topic in the past decade. In this paper, we develop an emotion
recognition system that operates on both still images and real-time videos
using deep learning.
We build our own emotion recognition classification and regression system
from scratch, which includes dataset collection, data preprocessing, model
training and testing. Given an image or a real-time video, our system is
able to show the classification and regression results for all of the 7
emotions. The proposed system was tested on 2 different datasets and achieved
an accuracy of over 80\%. Moreover, the result obtained from real-time testing
proves the feasibility of implementing convolutional neural networks in real
time to detect emotions accurately and efficiently.
|
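A minimal sketch of a 7-class facial-expression CNN of the kind the abstract describes, assuming 48x48 grayscale inputs (a common choice for this task); the architecture is a generic illustration, not the paper's exact network.

```python
# Illustrative 7-class facial-expression CNN in PyTorch; architecture is generic.
import torch.nn as nn

emotion_cnn = nn.Sequential(
    nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48x48 -> 24x24
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # -> 12x12
    nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2), # -> 6x6
    nn.Flatten(),
    nn.Linear(128 * 6 * 6, 256), nn.ReLU(), nn.Dropout(0.5),
    nn.Linear(256, 7),  # angry, disgust, fear, happy, neutral, sad, surprise
)
```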
2504.03011 | Junying Wang | Junying Wang, Jingyuan Liu, Xin Sun, Krishna Kumar Singh, Zhixin Shu,
He Zhang, Jimei Yang, Nanxuan Zhao, Tuanfeng Y. Wang, Simon S. Chen, Ulrich
Neumann, Jae Shin Yoon | Comprehensive Relighting: Generalizable and Consistent Monocular Human
Relighting and Harmonization | Project page:https://junyingw.github.io/paper/relighting. Accepted by
CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper introduces Comprehensive Relighting, the first all-in-one approach
that can both control and harmonize the lighting from an image or video of
humans with arbitrary body parts from any scene. Building such a generalizable
model is extremely challenging due to the lack of datasets, restricting existing
image-based relighting models to a specific scenario (e.g., face or static
human). To address this challenge, we repurpose a pre-trained diffusion model
as a general image prior and jointly model the human relighting and background
harmonization in the coarse-to-fine framework. To further enhance the temporal
coherence of the relighting, we introduce an unsupervised temporal lighting
model that learns the lighting cycle consistency from many real-world videos
without any ground truth. At inference time, our temporal lighting module is
combined with the diffusion models through spatio-temporal feature blending
algorithms without extra training, and we apply a new guided refinement as a
post-processing step to preserve the high-frequency details of the input image. In
the experiments, Comprehensive Relighting shows strong generalizability and
lighting temporal coherence, outperforming existing image-based human
relighting and harmonization methods.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 20:10:50 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Wang",
"Junying",
""
],
[
"Liu",
"Jingyuan",
""
],
[
"Sun",
"Xin",
""
],
[
"Singh",
"Krishna Kumar",
""
],
[
"Shu",
"Zhixin",
""
],
[
"Zhang",
"He",
""
],
[
"Yang",
"Jimei",
""
],
[
"Zhao",
"Nanxuan",
""
],
[
"Wang",
"Tuanfeng Y.",
""
],
[
"Chen",
"Simon S.",
""
],
[
"Neumann",
"Ulrich",
""
],
[
"Yoon",
"Jae Shin",
""
]
] | TITLE: Comprehensive Relighting: Generalizable and Consistent Monocular Human
Relighting and Harmonization
ABSTRACT: This paper introduces Comprehensive Relighting, the first all-in-one approach
that can both control and harmonize the lighting from an image or video of
humans with arbitrary body parts from any scene. Building such a generalizable
model is extremely challenging due to the lack of datasets, restricting existing
image-based relighting models to a specific scenario (e.g., face or static
human). To address this challenge, we repurpose a pre-trained diffusion model
as a general image prior and jointly model the human relighting and background
harmonization in the coarse-to-fine framework. To further enhance the temporal
coherence of the relighting, we introduce an unsupervised temporal lighting
model that learns the lighting cycle consistency from many real-world videos
without any ground truth. At inference time, our temporal lighting module is
combined with the diffusion models through spatio-temporal feature blending
algorithms without extra training, and we apply a new guided refinement as a
post-processing step to preserve the high-frequency details of the input image. In
the experiments, Comprehensive Relighting shows strong generalizability and
lighting temporal coherence, outperforming existing image-based human
relighting and harmonization methods.
|
2504.03026 | Yiran Xu | Yiran Xu, Siqi Xie, Zhuofang Li, Harris Shadmany, Yinxiao Li, Luciano
Sbaiz, Miaosen Wang, Junjie Ke, Jose Lezama, Hang Qi, Han Zhang, Jesse
Berent, Ming-Hsuan Yang, Irfan Essa, Jia-Bin Huang, Feng Yang | HALO: Human-Aligned End-to-end Image Retargeting with Layered
Transformations | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | Image retargeting aims to change the aspect ratio of an image while
maintaining its content and structure with fewer visual artifacts. Existing
methods still generate many artifacts or fail to maintain original content or
structure. To address this, we introduce HALO, an end-to-end trainable solution
for image retargeting. Since humans are more sensitive to distortions in
salient areas than non-salient areas of an image, HALO decomposes the input
image into salient/non-salient layers and applies different warping fields to
different layers. To further minimize the structure distortion in the output
images, we propose a perceptual structure similarity loss, which measures the
structure similarity between input and output images and aligns with human
perception. Both quantitative results and a user study on the RetargetMe
dataset show that HALO achieves SOTA. In particular, our method achieves an 18.4%
higher user preference compared to the baselines on average.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 20:53:19 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Xu",
"Yiran",
""
],
[
"Xie",
"Siqi",
""
],
[
"Li",
"Zhuofang",
""
],
[
"Shadmany",
"Harris",
""
],
[
"Li",
"Yinxiao",
""
],
[
"Sbaiz",
"Luciano",
""
],
[
"Wang",
"Miaosen",
""
],
[
"Ke",
"Junjie",
""
],
[
"Lezama",
"Jose",
""
],
[
"Qi",
"Hang",
""
],
[
"Zhang",
"Han",
""
],
[
"Berent",
"Jesse",
""
],
[
"Yang",
"Ming-Hsuan",
""
],
[
"Essa",
"Irfan",
""
],
[
"Huang",
"Jia-Bin",
""
],
[
"Yang",
"Feng",
""
]
] | TITLE: HALO: Human-Aligned End-to-end Image Retargeting with Layered
Transformations
ABSTRACT: Image retargeting aims to change the aspect ratio of an image while
maintaining its content and structure with fewer visual artifacts. Existing
methods still generate many artifacts or fail to maintain original content or
structure. To address this, we introduce HALO, an end-to-end trainable solution
for image retargeting. Since humans are more sensitive to distortions in
salient areas than non-salient areas of an image, HALO decomposes the input
image into salient/non-salient layers and applies different warping fields to
different layers. To further minimize the structure distortion in the output
images, we propose a perceptual structure similarity loss, which measures the
structure similarity between input and output images and aligns with human
perception. Both quantitative results and a user study on the RetargetMe
dataset show that HALO achieves SOTA. In particular, our method achieves an 18.4%
higher user preference compared to the baselines on average.
|
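The perceptual structure similarity loss is described only at a high level; as an illustration, a structure-comparison term can be built from local patch statistics, as in the structure component of SSIM. The sketch below assumes the input and output have been brought to a common resolution (in retargeting the comparison would run through the warp); it is a stand-in, not the paper's learned loss.

```python
# Toy structure-similarity loss between two same-sized images, using the
# structure term of SSIM on local patches.
import torch
import torch.nn.functional as F

def structure_loss(x, y, win=7, eps=1e-6):
    mu_x = F.avg_pool2d(x, win, 1, win // 2)
    mu_y = F.avg_pool2d(y, win, 1, win // 2)
    var_x = F.avg_pool2d(x * x, win, 1, win // 2) - mu_x ** 2
    var_y = F.avg_pool2d(y * y, win, 1, win // 2) - mu_y ** 2
    cov = F.avg_pool2d(x * y, win, 1, win // 2) - mu_x * mu_y
    s = (cov + eps) / (var_x.clamp(min=0).sqrt() * var_y.clamp(min=0).sqrt() + eps)
    return (1 - s).mean()  # low when local structures are aligned
```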
2504.03036 | Zébulon Goriely | Zébulon Goriely and Paula Buttery | IPA-CHILDES & G2P+: Feature-Rich Resources for Cross-Lingual Phonology
and Phonemic Language Modeling | 19 pages, 7 figures. Submitted to CoNLL 2025 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | In this paper, we introduce two resources: (i) G2P+, a tool for converting
orthographic datasets to a consistent phonemic representation; and (ii) IPA
CHILDES, a phonemic dataset of child-centered speech across 31 languages. Prior
tools for grapheme-to-phoneme conversion result in phonemic vocabularies that
are inconsistent with established phonemic inventories, an issue which G2P+
addresses by leveraging the inventories in the Phoible database. Using this
tool, we augment CHILDES with phonemic transcriptions to produce IPA CHILDES.
This new resource fills several gaps in existing phonemic datasets, which often
lack multilingual coverage, spontaneous speech, and a focus on child-directed
language. We demonstrate the utility of this dataset for phonological research
by training phoneme language models on 11 languages and probing them for
distinctive features, finding that the distributional properties of phonemes
are sufficient to learn major class and place features cross-lingually.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 21:22:19 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Goriely",
"Zébulon",
""
],
[
"Buttery",
"Paula",
""
]
] | TITLE: IPA-CHILDES & G2P+: Feature-Rich Resources for Cross-Lingual Phonology
and Phonemic Language Modeling
ABSTRACT: In this paper, we introduce two resources: (i) G2P+, a tool for converting
orthographic datasets to a consistent phonemic representation; and (ii) IPA
CHILDES, a phonemic dataset of child-centered speech across 31 languages. Prior
tools for grapheme-to-phoneme conversion result in phonemic vocabularies that
are inconsistent with established phonemic inventories, an issue which G2P+
addresses by leveraging the inventories in the Phoible database. Using this
tool, we augment CHILDES with phonemic transcriptions to produce IPA CHILDES.
This new resource fills several gaps in existing phonemic datasets, which often
lack multilingual coverage, spontaneous speech, and a focus on child-directed
language. We demonstrate the utility of this dataset for phonological research
by training phoneme language models on 11 languages and probing them for
distinctive features, finding that the distributional properties of phonemes
are sufficient to learn major class and place features cross-lingually.
|
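The consistency idea behind G2P+ can be illustrated with a toy grapheme-to-phoneme converter that validates its output against a fixed phonemic inventory, as Phoible provides per language. The tiny rule set and inventory below are hypothetical, purely to show the check.

```python
# Sketch of the G2P+ consistency idea: convert graphemes to phonemes, then
# validate every output symbol against the language's phonemic inventory.
ENGLISH_INVENTORY = {"k", "æ", "t", "d", "ɒ", "g"}           # toy subset
G2P_RULES = {"c": "k", "a": "æ", "t": "t", "d": "d", "o": "ɒ", "g": "g"}

def g2p(word, rules, inventory):
    phonemes = [rules[ch] for ch in word if ch in rules]
    unknown = [p for p in phonemes if p not in inventory]
    if unknown:
        raise ValueError(f"phonemes outside inventory: {unknown}")
    return phonemes

print(g2p("cat", G2P_RULES, ENGLISH_INVENTORY))  # ['k', 'æ', 't']
```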
2504.03041 | Huiming Sun | Huiming Sun, Yikang Li, Kangning Yang, Ruineng Li, Daitao Xing, Yangbo
Xie, Lan Fu, Kaiyu Zhang, Ming Chen, Jiaming Ding, Jiang Geng, Jie Cai, Zibo
Meng, Chiuman Ho | VIP: Video Inpainting Pipeline for Real World Human Removal | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Inpainting for real-world human and pedestrian removal in high-resolution
video clips presents significant challenges, particularly in achieving
high-quality outcomes, ensuring temporal consistency, and managing complex
object interactions that involve humans, their belongings, and their shadows.
In this paper, we introduce VIP (Video Inpainting Pipeline), a novel promptless
video inpainting framework for real-world human removal applications. VIP
enhances a state-of-the-art text-to-video model with a motion module and
employs a Variational Autoencoder (VAE) for progressive denoising in the latent
space. Additionally, we implement an efficient human-and-belongings
segmentation for precise mask generation. Extensive experimental results
demonstrate that VIP achieves superior temporal consistency and visual fidelity
across diverse real-world scenarios, surpassing state-of-the-art methods on
challenging datasets. Our key contributions include the development of the VIP
pipeline, a reference frame integration technique, and the Dual-Fusion Latent
Segment Refinement method, all of which address the complexities of inpainting
in long, high-resolution video sequences.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 21:40:10 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Sun",
"Huiming",
""
],
[
"Li",
"Yikang",
""
],
[
"Yang",
"Kangning",
""
],
[
"Li",
"Ruineng",
""
],
[
"Xing",
"Daitao",
""
],
[
"Xie",
"Yangbo",
""
],
[
"Fu",
"Lan",
""
],
[
"Zhang",
"Kaiyu",
""
],
[
"Chen",
"Ming",
""
],
[
"Ding",
"Jiaming",
""
],
[
"Geng",
"Jiang",
""
],
[
"Cai",
"Jie",
""
],
[
"Meng",
"Zibo",
""
],
[
"Ho",
"Chiuman",
""
]
] | TITLE: VIP: Video Inpainting Pipeline for Real World Human Removal
ABSTRACT: Inpainting for real-world human and pedestrian removal in high-resolution
video clips presents significant challenges, particularly in achieving
high-quality outcomes, ensuring temporal consistency, and managing complex
object interactions that involve humans, their belongings, and their shadows.
In this paper, we introduce VIP (Video Inpainting Pipeline), a novel promptless
video inpainting framework for real-world human removal applications. VIP
enhances a state-of-the-art text-to-video model with a motion module and
employs a Variational Autoencoder (VAE) for progressive denoising in the latent
space. Additionally, we implement an efficient human-and-belongings
segmentation for precise mask generation. Extensive experimental results
demonstrate that VIP achieves superior temporal consistency and visual fidelity
across diverse real-world scenarios, surpassing state-of-the-art methods on
challenging datasets. Our key contributions include the development of the VIP
pipeline, a reference frame integration technique, and the Dual-Fusion Latent
Segment Refinement method, all of which address the complexities of inpainting
in long, high-resolution video sequences.
|
2504.03047 | Reef Alturki | Reef Alturki, Adrian Hilton, Jean-Yves Guillemaut | Attention-Aware Multi-View Pedestrian Tracking | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In spite of the recent advancements in multi-object tracking, occlusion poses
a significant challenge. Multi-camera setups have been used to address this
challenge by providing comprehensive coverage of the scene. Recent multi-view
pedestrian detection models have highlighted the potential of an early-fusion
strategy, projecting feature maps of all views to a common ground plane or the
Bird's Eye View (BEV), and then performing detection. This strategy has been
shown to improve both detection and tracking performance. However, the
perspective transformation results in significant distortion on the ground
plane, affecting the robustness of the appearance features of the pedestrians.
To tackle this limitation, we propose a novel model that incorporates attention
mechanisms in a multi-view pedestrian tracking scenario. Our model utilizes an
early-fusion strategy for detection, and a cross-attention mechanism to
establish robust associations between pedestrians in different frames, while
efficiently propagating pedestrian features across frames, resulting in a more
robust feature representation for each pedestrian. Extensive experiments
demonstrate that our model outperforms state-of-the-art models, with an IDF1
score of $96.1\%$ on Wildtrack dataset, and $85.7\%$ on MultiviewX dataset.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 21:53:08 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Alturki",
"Reef",
""
],
[
"Hilton",
"Adrian",
""
],
[
"Guillemaut",
"Jean-Yves",
""
]
] | TITLE: Attention-Aware Multi-View Pedestrian Tracking
ABSTRACT: In spite of the recent advancements in multi-object tracking, occlusion poses
a significant challenge. Multi-camera setups have been used to address this
challenge by providing comprehensive coverage of the scene. Recent multi-view
pedestrian detection models have highlighted the potential of an early-fusion
strategy, projecting feature maps of all views to a common ground plane or the
Bird's Eye View (BEV), and then performing detection. This strategy has been
shown to improve both detection and tracking performance. However, the
perspective transformation results in significant distortion on the ground
plane, affecting the robustness of the appearance features of the pedestrians.
To tackle this limitation, we propose a novel model that incorporates attention
mechanisms in a multi-view pedestrian tracking scenario. Our model utilizes an
early-fusion strategy for detection, and a cross-attention mechanism to
establish robust associations between pedestrians in different frames, while
efficiently propagating pedestrian features across frames, resulting in a more
robust feature representation for each pedestrian. Extensive experiments
demonstrate that our model outperforms state-of-the-art models, with an IDF1
score of $96.1\%$ on Wildtrack dataset, and $85.7\%$ on MultiviewX dataset.
|
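As an illustration of the cross-attention association step described above, the sketch below matches pedestrian embeddings across two frames and propagates features via the attention weights; it is a schematic stand-in, not the paper's module.

```python
# Toy cross-attention association between pedestrian features of two frames.
import torch

def associate(feats_t, feats_t1, temperature=0.1):
    """feats_t: (N, D), feats_t1: (M, D) L2-normalized pedestrian embeddings."""
    sim = feats_t @ feats_t1.T / temperature   # (N, M) similarity logits
    attn = sim.softmax(dim=1)                  # soft assignment over frame t+1
    propagated = attn @ feats_t1               # attention-weighted feature update
    matches = sim.argmax(dim=1)                # hard matches for track linking
    return matches, propagated
```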
2504.03051 | Chengyang He | Chengyang He, Wenlong Zhang, Violet Xinying Chen, Yue Ning, Ping Wang | Task as Context Prompting for Accurate Medical Symptom Coding Using
Large Language Models | 11 pages, 5 figures, 5 Tables, ACM/IEEE International Conference on
Connected Health: Applications, Systems and Engineering Technologies (CHASE
'25), June 24--26, 2025, New York, NY, USA | null | 10.1145/3721201.3721383 | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | Accurate medical symptom coding from unstructured clinical text, such as
vaccine safety reports, is a critical task with applications in
pharmacovigilance and safety monitoring. Symptom coding, as tailored in this
study, involves identifying and linking nuanced symptom mentions to
standardized vocabularies like MedDRA, differentiating it from broader medical
coding tasks. Traditional approaches to this task, which treat symptom
extraction and linking as independent workflows, often fail to handle the
variability and complexity of clinical narratives, especially for rare cases.
Recent advancements in Large Language Models (LLMs) offer new opportunities but
face challenges in achieving consistent performance. To address these issues,
we propose Task as Context (TACO) Prompting, a novel framework that unifies
extraction and linking tasks by embedding task-specific context into LLM
prompts. Our study also introduces SYMPCODER, a human-annotated dataset derived
from Vaccine Adverse Event Reporting System (VAERS) reports, and a two-stage
evaluation framework to comprehensively assess both symptom linking and mention
fidelity. Our comprehensive evaluation of multiple LLMs, including Llama2-chat,
Jackalope-7b, GPT-3.5 Turbo, GPT-4 Turbo, and GPT-4o, demonstrates TACO's
effectiveness in improving flexibility and accuracy for tailored tasks like
symptom coding, paving the way for more specific coding tasks and advancing
clinical text processing methodologies.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 21:57:17 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"He",
"Chengyang",
""
],
[
"Zhang",
"Wenlong",
""
],
[
"Chen",
"Violet Xinying",
""
],
[
"Ning",
"Yue",
""
],
[
"Wang",
"Ping",
""
]
] | TITLE: Task as Context Prompting for Accurate Medical Symptom Coding Using
Large Language Models
ABSTRACT: Accurate medical symptom coding from unstructured clinical text, such as
vaccine safety reports, is a critical task with applications in
pharmacovigilance and safety monitoring. Symptom coding, as tailored in this
study, involves identifying and linking nuanced symptom mentions to
standardized vocabularies like MedDRA, differentiating it from broader medical
coding tasks. Traditional approaches to this task, which treat symptom
extraction and linking as independent workflows, often fail to handle the
variability and complexity of clinical narratives, especially for rare cases.
Recent advancements in Large Language Models (LLMs) offer new opportunities but
face challenges in achieving consistent performance. To address these issues,
we propose Task as Context (TACO) Prompting, a novel framework that unifies
extraction and linking tasks by embedding task-specific context into LLM
prompts. Our study also introduces SYMPCODER, a human-annotated dataset derived
from Vaccine Adverse Event Reporting System (VAERS) reports, and a two-stage
evaluation framework to comprehensively assess both symptom linking and mention
fidelity. Our comprehensive evaluation of multiple LLMs, including Llama2-chat,
Jackalope-7b, GPT-3.5 Turbo, GPT-4 Turbo, and GPT-4o, demonstrates TACO's
effectiveness in improving flexibility and accuracy for tailored tasks like
symptom coding, paving the way for more specific coding tasks and advancing
clinical text processing methodologies.
|
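The core of Task-as-Context prompting, as described above, is embedding the task definition directly in the LLM prompt so extraction and linking happen in one pass. A minimal, hypothetical prompt builder (the template wording is an assumption; only the unified extract-and-link idea comes from the abstract):

```python
# Illustrative Task-as-Context style prompt construction.
def taco_prompt(report_text, terminology="MedDRA"):
    return (
        "Task: From the vaccine safety report below, extract each symptom "
        f"mention and link it to a standardized {terminology} term.\n"
        "Output one line per symptom as: <mention> -> <linked term>.\n\n"
        f"Report: {report_text}"
    )

# Usable with any chat-completion LLM client:
prompt = taco_prompt("Patient reported severe headache and tingling in arms.")
```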
2504.03079 | Maria Zurek | Henry Klest, Maria Żurek, Tegan D. Beattie, Manoj Jadhav, Sylvester
Joosten, Bobae Kim, Minho Kim, Jessica Metcalfe, Zisis Papandreou, Jared
Richards | Evaluation of the Response to Electrons and Pions in the Scintillating
Fiber and Lead Calorimeter for the Future Electron-Ion Collider | null | null | null | null | physics.ins-det hep-ex nucl-ex | http://creativecommons.org/licenses/by/4.0/ | The performance of the Baby Barrel Electromagnetic Calorimeter (Baby BCAL) -
a small-scale lead-scintillating-fiber (Pb/ScFi) prototype of the GlueX Barrel
Electromagnetic Calorimeter (BCAL) - was tested in a dedicated beam campaign at
the Fermilab Test Beam Facility (FTBF). This study provides a benchmark for the
Pb/ScFi component of the future Barrel Imaging Calorimeter (BIC) in the ePIC
detector at the Electron-Ion Collider (EIC). The detector response to electrons
and pions was studied at beam energies between 4 and 10 GeV, extending previous
GlueX tests [NIM A 596 (2008) 327-337 and arXiv:1801.03088] to a higher energy
regime.
The calibrated detector exhibits good linearity within uncertainties, and its
electron energy resolution meets EIC requirements. The data further constrain
the constant term in the energy resolution to below 1.9%, improving upon
previous constraints at lower energies. Simulations reproduce key features of
the electron and pion data within the limitations of the collected dataset and
the FTBF test environment. Electron-pion separation in the test beam setup was
analyzed using multiple methods, incorporating varying degrees of beam-related
effects. The inclusion of longitudinal shower profile information enhanced the
separation performance, underscoring its relevance for the full-scale BIC in
ePIC. These results provide essential benchmarks for the Pb/ScFi section of the
future BIC, validating detector simulations and guiding optimization strategies
for electron-pion discrimination.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 22:59:24 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Klest",
"Henry",
""
],
[
"Żurek",
"Maria",
""
],
[
"Beattie",
"Tegan D.",
""
],
[
"Jadhav",
"Manoj",
""
],
[
"Joosten",
"Sylvester",
""
],
[
"Kim",
"Bobae",
""
],
[
"Kim",
"Minho",
""
],
[
"Metcalfe",
"Jessica",
""
],
[
"Papandreou",
"Zisis",
""
],
[
"Richards",
"Jared",
""
]
] | TITLE: Evaluation of the Response to Electrons and Pions in the Scintillating
Fiber and Lead Calorimeter for the Future Electron-Ion Collider
ABSTRACT: The performance of the Baby Barrel Electromagnetic Calorimeter (Baby BCAL) -
a small-scale lead-scintillating-fiber (Pb/ScFi) prototype of the GlueX Barrel
Electromagnetic Calorimeter (BCAL) - was tested in a dedicated beam campaign at
the Fermilab Test Beam Facility (FTBF). This study provides a benchmark for the
Pb/ScFi component of the future Barrel Imaging Calorimeter (BIC) in the ePIC
detector at the Electron-Ion Collider (EIC). The detector response to electrons
and pions was studied at beam energies between 4 and 10 GeV, extending previous
GlueX tests [NIM A 596 (2008) 327-337 and arXiv:1801.03088] to a higher energy
regime.
The calibrated detector exhibits good linearity within uncertainties, and its
electron energy resolution meets EIC requirements. The data further constrain
the constant term in the energy resolution to below 1.9%, improving upon
previous constraints at lower energies. Simulations reproduce key features of
the electron and pion data within the limitations of the collected dataset and
the FTBF test environment. Electron-pion separation in the test beam setup was
analyzed using multiple methods, incorporating varying degrees of beam-related
effects. The inclusion of longitudinal shower profile information enhanced the
separation performance, underscoring its relevance for the full-scale BIC in
ePIC. These results provide essential benchmarks for the Pb/ScFi section of the
future BIC, validating detector simulations and guiding optimization strategies
for electron-pion discrimination.
|
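The "constant term" quoted above refers to the standard calorimeter resolution parametrization, sigma_E/E = a/sqrt(E) combined in quadrature with a constant term b. A sketch of fitting the stochastic term a and constant term b to measured points follows; the data values are invented for illustration, not the paper's measurements.

```python
# Fit the conventional resolution model sigma_E/E = sqrt((a/sqrt(E))^2 + b^2).
import numpy as np
from scipy.optimize import curve_fit

def resolution(E, a, b):
    return np.sqrt((a / np.sqrt(E)) ** 2 + b ** 2)

E = np.array([4.0, 6.0, 8.0, 10.0])           # beam energies in GeV (illustrative)
res = np.array([0.035, 0.029, 0.026, 0.024])  # measured sigma_E/E (illustrative)
(a, b), _ = curve_fit(resolution, E, res, p0=(0.06, 0.01))
print(f"stochastic term a = {a:.3f} GeV^0.5, constant term b = {b:.3%}")
```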
2504.03089 | Kunal Dargan | Prashant Kumar, Dheeraj Vattikonda, Kshitij Madhav Bhat, Kunal Dargan,
Prem Kalra | SLACK: Attacking LiDAR-based SLAM with Adversarial Point Injections | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The widespread adoption of learning-based methods for LiDAR makes
autonomous vehicles vulnerable to adversarial attacks through adversarial
\textit{point injections (PiJ)}. It poses serious security challenges for
navigation and map generation. Despite its critical nature, no major work
exists that studies learning-based attacks on LiDAR-based SLAM. Our work
proposes SLACK, an end-to-end deep generative adversarial model to attack LiDAR
scans with several point injections without deteriorating LiDAR quality. To
facilitate SLACK, we design a novel yet simple autoencoder that augments
contrastive learning with segmentation-based attention for precise
reconstructions. SLACK demonstrates superior performance on the task of
\textit{point injections (PiJ)} compared to the best baselines on KITTI and
CARLA-64 datasets while maintaining accurate scan quality. We qualitatively and
quantitatively demonstrate PiJ attacks using a fraction of LiDAR points. It
severely degrades navigation and map quality without deteriorating the LiDAR
scan quality.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 23:52:49 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Kumar",
"Prashant",
""
],
[
"Vattikonda",
"Dheeraj",
""
],
[
"Bhat",
"Kshitij Madhav",
""
],
[
"Dargan",
"Kunal",
""
],
[
"Kalra",
"Prem",
""
]
] | TITLE: SLACK: Attacking LiDAR-based SLAM with Adversarial Point Injections
ABSTRACT: The widespread adoption of learning-based methods for LiDAR makes
autonomous vehicles vulnerable to adversarial attacks through adversarial
\textit{point injections (PiJ)}. It poses serious security challenges for
navigation and map generation. Despite its critical nature, no major work
exists that studies learning-based attacks on LiDAR-based SLAM. Our work
proposes SLACK, an end-to-end deep generative adversarial model to attack LiDAR
scans with several point injections without deteriorating LiDAR quality. To
facilitate SLACK, we design a novel yet simple autoencoder that augments
contrastive learning with segmentation-based attention for precise
reconstructions. SLACK demonstrates superior performance on the task of
\textit{point injections (PiJ)} compared to the best baselines on KITTI and
CARLA-64 datasets while maintaining accurate scan quality. We qualitatively and
quantitatively demonstrate PiJ attacks using a fraction of LiDAR points. It
severely degrades navigation and map quality without deteriorating the LiDAR
scan quality.
|
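Schematically, a point-injection (PiJ) attack appends a small set of extra points to the LiDAR point cloud; in SLACK those points come from a learned generator, whereas the sketch below uses random perturbations of existing points purely to show the data flow.

```python
# Schematic point-injection attack on a LiDAR scan (random points stand in for
# the learned adversarial generator).
import numpy as np

def inject_points(scan, n_inject=50, seed=0):
    """scan: (N, 3) array of x, y, z points; returns scan with injected points."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(scan), n_inject)
    offsets = rng.normal(scale=0.2, size=(n_inject, 3))  # small spatial perturbations
    fake = scan[idx] + offsets                           # points near real geometry
    return np.vstack([scan, fake])                       # a small fraction of extra points
```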
2504.03092 | Md Zahidul Islam | Md Zahidul Islam, Md Shahidul Islam, Biswajit Chandra das, Syed Ali
Reza, Proshanta Kumar Bhowmik, Kanchon Kumar Bishnu, Md Shafiqur Rahman,
Redoyan Chowdhury, Laxmi Pant | Machine Learning-Based Detection and Analysis of Suspicious Activities
in Bitcoin Wallet Transactions in the USA | 20 pages,7 figures | null | 10.62754/joe.v4i1.6214 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | The dramatic adoption of Bitcoin and other cryptocurrencies in the USA has
revolutionized the financial landscape and provided unprecedented investment
and transaction efficiency opportunities. The prime objective of this research
project is to develop machine learning algorithms capable of effectively
identifying and tracking suspicious activity in Bitcoin wallet transactions.
With high-tech analysis, the study aims to create a model capable of
identifying trends and outliers that can expose illicit activity. The current
study specifically focuses on Bitcoin transaction information in America, with
a strong emphasis placed on the importance of knowing about the immediate
environment through which such transactions pass. The dataset is
composed of in-depth Bitcoin wallet transactional information, including
important factors such as transaction values, timestamps, network flows, and
addresses for wallets. All entries in the dataset expose information about
financial transactions between wallets, including received and sent
transactions, and such information is significant for analysis and trends that
can represent suspicious activity. This study deployed three accredited
algorithms, most notably, Logistic Regression, Random Forest, and Support
Vector Machines. In retrospect, Random Forest emerged as the best model with
the highest F1 Score, showcasing its ability to handle non-linear relationships
in the data. Insights revealed significant patterns in wallet activity, such as
the correlation between unredeemed transactions and final balances. The
application of machine learning algorithms to tracking cryptocurrencies is a
tool for creating transparent and secure U.S. markets.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 00:07:32 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Islam",
"Md Zahidul",
""
],
[
"Islam",
"Md Shahidul",
""
],
[
"das",
"Biswajit Chandra",
""
],
[
"Reza",
"Syed Ali",
""
],
[
"Bhowmik",
"Proshanta Kumar",
""
],
[
"Bishnu",
"Kanchon Kumar",
""
],
[
"Rahman",
"Md Shafiqur",
""
],
[
"Chowdhury",
"Redoyan",
""
],
[
"Pant",
"Laxmi",
""
]
] | TITLE: Machine Learning-Based Detection and Analysis of Suspicious Activities
in Bitcoin Wallet Transactions in the USA
ABSTRACT: The dramatic adoption of Bitcoin and other cryptocurrencies in the USA has
revolutionized the financial landscape and provided unprecedented investment
and transaction efficiency opportunities. The prime objective of this research
project is to develop machine learning algorithms capable of effectively
identifying and tracking suspicious activity in Bitcoin wallet transactions.
With high-tech analysis, the study aims to create a model capable of
identifying trends and outliers that can expose illicit activity. The current
study specifically focuses on Bitcoin transaction information in America, with
a strong emphasis placed on the importance of knowing about the immediate
environment through which such transactions pass. The dataset is
composed of in-depth Bitcoin wallet transactional information, including
important factors such as transaction values, timestamps, network flows, and
addresses for wallets. All entries in the dataset expose information about
financial transactions between wallets, including received and sent
transactions, and such information is significant for analysis and trends that
can represent suspicious activity. This study deployed three accredited
algorithms, most notably, Logistic Regression, Random Forest, and Support
Vector Machines. In retrospect, Random Forest emerged as the best model with
the highest F1 Score, showcasing its ability to handle non-linear relationships
in the data. Insights revealed significant patterns in wallet activity, such as
the correlation between unredeemed transactions and final balances. The
application of machine learning algorithms to tracking cryptocurrencies is a
tool for creating transparent and secure U.S. markets.
|
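A sketch of the three-model comparison described above using scikit-learn; the synthetic, imbalanced data stands in for the wallet-transaction features (values, timestamps, flows) in the study's dataset.

```python
# Compare Logistic Regression, Random Forest, and SVM by F1 score, as in the study.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Synthetic, imbalanced stand-in for the wallet-transaction feature matrix.
X, y = make_classification(n_samples=2000, n_features=12, weights=[0.95], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, stratify=y, random_state=0)

models = {
    "LogisticRegression": LogisticRegression(max_iter=1000),
    "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
    "SVM": SVC(),
}
for name, model in models.items():
    model.fit(X_tr, y_tr)
    print(name, f1_score(y_te, model.predict(X_te)))  # the study reports RF with the best F1
```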
2504.03093 | Zhiqun Zuo | Zhiqun Zuo and Ding Zhu and Mohammad Mahdi Khalili | Post-processing for Fair Regression via Explainable SVD | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | This paper presents a post-processing algorithm for training fair neural
network regression models that satisfy statistical parity, utilizing an
explainable singular value decomposition (SVD) of the weight matrix. We propose
a linear transformation of the weight matrix, whereby the singular values
derived from the SVD of the transformed matrix directly correspond to the
differences in the first and second moments of the output distributions across
two groups. Consequently, we can convert the fairness constraints into
constraints on the singular values. We analytically solve the problem of
finding the optimal weights under these constraints. Experimental validation on
various datasets demonstrates that our method achieves a similar or superior
fairness-accuracy trade-off compared to the baselines without using the
sensitive attribute at inference time.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 00:10:01 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Zuo",
"Zhiqun",
""
],
[
"Zhu",
"Ding",
""
],
[
"Khalili",
"Mohammad Mahdi",
""
]
] | TITLE: Post-processing for Fair Regression via Explainable SVD
ABSTRACT: This paper presents a post-processing algorithm for training fair neural
network regression models that satisfy statistical parity, utilizing an
explainable singular value decomposition (SVD) of the weight matrix. We propose
a linear transformation of the weight matrix, whereby the singular values
derived from the SVD of the transformed matrix directly correspond to the
differences in the first and second moments of the output distributions across
two groups. Consequently, we can convert the fairness constraints into
constraints on the singular values. We analytically solve the problem of
finding the optimal weights under these constraints. Experimental validation on
various datasets demonstrates that our method achieves a similar or superior
fairness-accuracy trade-off compared to the baselines without using the
sensitive attribute at inference time.
|
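A toy version of the post-processing idea: decompose a weight matrix with SVD and shrink its singular values to tighten a fairness constraint. The paper's contribution is the exact correspondence between singular values and group moment gaps; the uniform clipping rule below is a simplified stand-in.

```python
# Simplified SVD-based post-processing: clip singular values of a weight matrix.
import numpy as np

def postprocess_weights(W, tau):
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    s_fair = np.minimum(s, tau)          # constrain singular values by threshold tau
    return U @ np.diag(s_fair) @ Vt      # reconstruct the constrained weight matrix

W = np.random.randn(16, 64)              # e.g., a final regression layer
W_fair = postprocess_weights(W, tau=1.0)
```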
2504.03096 | Zhen Hao Sia | Zhen Hao Sia, Yogesh Singh Rawat | Scaling Open-Vocabulary Action Detection | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In this work, we focus on scaling open-vocabulary action detection. Existing
approaches for action detection are predominantly limited to closed-set
scenarios and rely on complex, parameter-heavy architectures. Extending these
models to the open-vocabulary setting poses two key challenges: (1) the lack of
large-scale datasets with many action classes for robust training, and (2)
parameter-heavy adaptations to a pretrained vision-language contrastive model
to convert it for detection, risking overfitting the additional non-pretrained
parameters to base action classes. Firstly, we introduce an encoder-only
multimodal model for video action detection, reducing the reliance on
parameter-heavy additions for video action detection. Secondly, we introduce a
simple weakly supervised training strategy to exploit an existing closed-set
action detection dataset for pretraining. Finally, we depart from the ill-posed
base-to-novel benchmark used by prior works in open-vocabulary action detection
and devise a new benchmark to evaluate on existing closed-set action detection
datasets without ever using them for training, showing novel results to serve
as baselines for future work.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 00:28:42 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Sia",
"Zhen Hao",
""
],
[
"Rawat",
"Yogesh Singh",
""
]
] | TITLE: Scaling Open-Vocabulary Action Detection
ABSTRACT: In this work, we focus on scaling open-vocabulary action detection. Existing
approaches for action detection are predominantly limited to closed-set
scenarios and rely on complex, parameter-heavy architectures. Extending these
models to the open-vocabulary setting poses two key challenges: (1) the lack of
large-scale datasets with many action classes for robust training, and (2)
parameter-heavy adaptations to a pretrained vision-language contrastive model
to convert it for detection, risking overfitting the additional non-pretrained
parameters to base action classes. Firstly, we introduce an encoder-only
multimodal model for video action detection, reducing the reliance on
parameter-heavy additions for video action detection. Secondly, we introduce a
simple weakly supervised training strategy to exploit an existing closed-set
action detection dataset for pretraining. Finally, we depart from the ill-posed
base-to-novel benchmark used by prior works in open-vocabulary action detection
and devise a new benchmark to evaluate on existing closed-set action detection
datasets without ever using them for training, showing novel results to serve
as baselines for future work.
|
2504.03101 | Weili Cao | Weili Cao, Jianyou Wang, Youze Zheng, Longtian Bao, Qirui Zheng,
Taylor Berg-Kirkpatrick, Ramamohan Paturi, Leon Bergen | Single-Pass Document Scanning for Question Answering | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Handling extremely large documents for question answering is challenging:
chunk-based embedding methods often lose track of important global context,
while full-context transformers can be prohibitively expensive for hundreds of
thousands of tokens. We propose a single-pass document scanning approach that
processes the entire text in linear time, preserving global coherence while
deciding which sentences are most relevant to the query. On 41 QA benchmarks,
our single-pass scanner consistently outperforms chunk-based embedding methods
and competes with large language models at a fraction of the computational
cost. By conditioning on the entire preceding context without chunk breaks, the
method preserves global coherence, which is especially important for long
documents. Overall, single-pass document scanning offers a simple solution for
question answering over massive text. All code, datasets, and model checkpoints
are available at https://github.com/MambaRetriever/MambaRetriever
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 01:08:32 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Cao",
"Weili",
""
],
[
"Wang",
"Jianyou",
""
],
[
"Zheng",
"Youze",
""
],
[
"Bao",
"Longtian",
""
],
[
"Zheng",
"Qirui",
""
],
[
"Berg-Kirkpatrick",
"Taylor",
""
],
[
"Paturi",
"Ramamohan",
""
],
[
"Bergen",
"Leon",
""
]
] | TITLE: Single-Pass Document Scanning for Question Answering
ABSTRACT: Handling extremely large documents for question answering is challenging:
chunk-based embedding methods often lose track of important global context,
while full-context transformers can be prohibitively expensive for hundreds of
thousands of tokens. We propose a single-pass document scanning approach that
processes the entire text in linear time, preserving global coherence while
deciding which sentences are most relevant to the query. On 41 QA benchmarks,
our single-pass scanner consistently outperforms chunk-based embedding methods
and competes with large language models at a fraction of the computational
cost. By conditioning on the entire preceding context without chunk breaks, the
method preserves global coherence, which is especially important for long
documents. Overall, single-pass document scanning offers a simple solution for
question answering over massive text. All code, datasets, and model checkpoints
are available at https://github.com/MambaRetriever/MambaRetriever
|
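The linear-time, single-pass idea can be sketched with any recurrent scorer: a state carries global context left to right while every sentence receives a relevance score in one pass. The repository name suggests a Mamba-style state-space backbone; the GRU below is a simple stand-in for illustration only.

```python
# Minimal single-pass relevance scorer over sentence embeddings.
import torch
import torch.nn as nn

class SinglePassScanner(nn.Module):
    def __init__(self, dim=256):
        super().__init__()
        self.rnn = nn.GRU(dim, dim, batch_first=True)
        self.score = nn.Linear(dim, 1)

    def forward(self, sent_embs):              # (1, num_sentences, dim)
        states, _ = self.rnn(sent_embs)        # context accumulates left to right
        return self.score(states).squeeze(-1)  # one relevance score per sentence

scanner = SinglePassScanner()
scores = scanner(torch.randn(1, 500, 256))     # scores for a 500-sentence document
```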
2504.03107 | Sanghyuck Lee | Sanghyuck Lee, Sangkeun Park, Jaesung Lee | Exploiting Fine-Grained Skip Behaviors for Micro-Video Recommendation | 9 pages, 5 figures. Published in Proceedings of the AAAI Conference
on Artificial Intelligence (AAAI), 2025 | null | null | null | cs.IR | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The growing trend of sharing short videos on social media platforms, where
users capture and share moments from their daily lives, has led to an increase
in research efforts focused on micro-video recommendations. However,
conventional methods oversimplify the modeling of skip behavior, categorizing
interactions solely as positive or negative based on whether skipping occurs.
This study was motivated by the importance of the first few seconds of
micro-videos, leading to a refinement of signals into three distinct
categories: highly positive, less positive, and negative. Specifically, we
classify skip interactions occurring within a short time as negatives, while
those occurring after a delay are categorized as less positive. The proposed
dual-level graph and hierarchical ranking loss are designed to effectively
learn these fine-grained interactions. Our experiments demonstrated that the
proposed method outperformed three conventional methods across eight evaluation
measures on two public datasets.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 01:25:26 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Lee",
"Sanghyuck",
""
],
[
"Park",
"Sangkeun",
""
],
[
"Lee",
"Jaesung",
""
]
] | TITLE: Exploiting Fine-Grained Skip Behaviors for Micro-Video Recommendation
ABSTRACT: The growing trend of sharing short videos on social media platforms, where
users capture and share moments from their daily lives, has led to an increase
in research efforts focused on micro-video recommendations. However,
conventional methods oversimplify the modeling of skip behavior, categorizing
interactions solely as positive or negative based on whether skipping occurs.
This study was motivated by the importance of the first few seconds of
micro-videos, leading to a refinement of signals into three distinct
categories: highly positive, less positive, and negative. Specifically, we
classify skip interactions occurring within a short time as negatives, while
those occurring after a delay are categorized as less positive. The proposed
dual-level graph and hierarchical ranking loss are designed to effectively
learn these fine-grained interactions. Our experiments demonstrated that the
proposed method outperformed three conventional methods across eight evaluation
measures on two public datasets.
|
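The three-way signal refinement described above reduces to a simple labeling rule on skip timing; the threshold below is illustrative, not the paper's value.

```python
# Three-way interaction labeling based on skip timing.
def label_interaction(skipped, watch_seconds, threshold=3.0):
    if not skipped:
        return "highly_positive"      # video watched without skipping
    if watch_seconds < threshold:
        return "negative"             # skip within the first few seconds
    return "less_positive"            # delayed skip: weak interest

print(label_interaction(skipped=True, watch_seconds=1.5))   # negative
print(label_interaction(skipped=True, watch_seconds=7.0))   # less_positive
```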
2504.03108 | Xuanyu Liu | Xuanyu Liu, Huiyun Yao, Jinggui Gao, Zhongyi Guo, Xue Zhang, Yulin
Dong | Multi-Granularity Vision Fastformer with Fusion Mechanism for Skin
Lesion Segmentation | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Background: Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs)
are the main techniques used in medical image segmentation. However, CNNs are
limited to local contextual information, and ViTs' quadratic complexity results
in significant computational costs. At the same time, equipping the model to
distinguish lesion boundaries with varying degrees of severity is also a
challenge encountered in skin lesion segmentation. Purpose: This research aims
to optimize the balance between computational costs and long-range dependency
modelling and achieve excellent generalization across lesions with different
degrees of severity. Methods: We propose a lightweight U-shape network that
utilizes Vision Fastformer with Fusion Mechanism (VFFM-UNet). We inherit the
advantages of Fastformer's additive attention mechanism, combining element-wise
product and matrix product for comprehensive feature extraction and channel
reduction to save computational costs. In order to accurately identify the
lesion boundaries with varying degrees of severity, we designed Fusion
Mechanism including Multi-Granularity Fusion and Channel Fusion, which can
process the feature maps in the granularity and channel levels to obtain
different contextual information. Results: Comprehensive experiments on the
ISIC2017, ISIC2018 and PH2 datasets demonstrate that VFFM-UNet outperforms
existing state-of-the-art models regarding parameter numbers, computational
complexity and segmentation performance. In short, compared to MISSFormer, our
model achieves superior segmentation performance while reducing parameter and
computation costs by 101x and 15x, respectively. Conclusions: Both quantitative
and qualitative analyses show that VFFM-UNet sets a new benchmark by reaching
an ideal balance between parameter numbers, computational complexity, and
segmentation performance compared to existing state-of-the-art models.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 01:27:43 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Liu",
"Xuanyu",
""
],
[
"Yao",
"Huiyun",
""
],
[
"Gao",
"Jinggui",
""
],
[
"Guo",
"Zhongyi",
""
],
[
"Zhang",
"Xue",
""
],
[
"Dong",
"Yulin",
""
]
] | TITLE: Multi-Granularity Vision Fastformer with Fusion Mechanism for Skin
Lesion Segmentation
ABSTRACT: Background: Convolutional Neural Networks (CNNs) and Vision Transformers (ViTs)
are the main techniques used in medical image segmentation. However, CNNs are
limited to local contextual information, and ViTs' quadratic complexity results
in significant computational costs. At the same time, equipping the model to
distinguish lesion boundaries with varying degrees of severity is also a
challenge encountered in skin lesion segmentation. Purpose: This research aims
to optimize the balance between computational costs and long-range dependency
modelling and achieve excellent generalization across lesions with different
degrees of severity. Methods: We propose a lightweight U-shape network that
utilizes Vision Fastformer with Fusion Mechanism (VFFM-UNet). We inherit the
advantages of Fastformer's additive attention mechanism, combining element-wise
product and matrix product for comprehensive feature extraction and channel
reduction to save computational costs. In order to accurately identify the
lesion boundaries with varying degrees of severity, we designed Fusion
Mechanism including Multi-Granularity Fusion and Channel Fusion, which can
process the feature maps in the granularity and channel levels to obtain
different contextual information. Results: Comprehensive experiments on the
ISIC2017, ISIC2018 and PH2 datasets demonstrate that VFFM-UNet outperforms
existing state-of-the-art models regarding parameter numbers, computational
complexity and segmentation performance. In short, compared to MISSFormer, our
model achieves superior segmentation performance while reducing parameter and
computation costs by 101x and 15x, respectively. Conclusions: Both quantitative
and qualitative analyses show that VFFM-UNet sets a new benchmark by reaching
an ideal balance between parameter numbers, computational complexity, and
segmentation performance compared to existing state-of-the-art models.
|
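VFFM-UNet inherits Fastformer's additive attention, which replaces quadratic token-token attention with global query/key vectors and element-wise products, giving linear complexity. A simplified sketch follows; dimensions and the final transform are pared down relative to the full Fastformer design.

```python
# Simplified Fastformer-style additive attention (linear in sequence length).
import torch
import torch.nn as nn

class AdditiveAttention(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.q, self.k, self.v = nn.Linear(dim, dim), nn.Linear(dim, dim), nn.Linear(dim, dim)
        self.wq, self.wk = nn.Linear(dim, 1), nn.Linear(dim, 1)
        self.out = nn.Linear(dim, dim)

    def forward(self, x):                                      # x: (B, N, D)
        q, k, v = self.q(x), self.k(x), self.v(x)
        g = (self.wq(q).softmax(1) * q).sum(1, keepdim=True)   # global query, O(N)
        p = g * k                                              # element-wise product, broadcast
        h = (self.wk(p).softmax(1) * p).sum(1, keepdim=True)   # global key
        return self.out(h * v) + q                             # element-wise interaction + residual

attn = AdditiveAttention(64)
y = attn(torch.randn(2, 196, 64))
```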
2504.03118 | Ziteng Wei | Ziteng Wei, Qiang He, Bing Li, Feifei Chen, Yun Yang | NuWa: Deriving Lightweight Task-Specific Vision Transformers for Edge
Devices | 8 pages, 12 figures, 6 tables | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision Transformers (ViTs) excel in computer vision tasks but lack
flexibility for edge devices' diverse needs. A vital issue is that ViTs
pre-trained to cover a broad range of tasks are \textit{over-qualified} for
edge devices that usually demand only part of a ViT's knowledge for specific
tasks. Their task-specific accuracy on these edge devices is suboptimal. We
discovered that small ViTs that focus on device-specific tasks can improve
model accuracy and, at the same time, accelerate model inference. This paper
presents NuWa, an approach that derives small ViTs from the base ViT for edge
devices with specific task requirements. NuWa can transfer task-specific
knowledge extracted from the base ViT into small ViTs that fully leverage
constrained resources on edge devices to maximize model accuracy with inference
latency assurance. Experiments with three base ViTs on three public datasets
demonstrate that compared with state-of-the-art solutions, NuWa improves model
accuracy by up to $\text{11.83}\%$ and accelerates model inference by
1.29$\times$ - 2.79$\times$. Code for reproduction is available at
https://anonymous.4open.science/r/Task_Specific-3A5E.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 02:19:01 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Wei",
"Ziteng",
""
],
[
"He",
"Qiang",
""
],
[
"Li",
"Bing",
""
],
[
"Chen",
"Feifei",
""
],
[
"Yang",
"Yun",
""
]
] | TITLE: NuWa: Deriving Lightweight Task-Specific Vision Transformers for Edge
Devices
ABSTRACT: Vision Transformers (ViTs) excel in computer vision tasks but lack
flexibility for edge devices' diverse needs. A vital issue is that ViTs
pre-trained to cover a broad range of tasks are \textit{over-qualified} for
edge devices that usually demand only part of a ViT's knowledge for specific
tasks. Their task-specific accuracy on these edge devices is suboptimal. We
discovered that small ViTs that focus on device-specific tasks can improve
model accuracy and, at the same time, accelerate model inference. This paper
presents NuWa, an approach that derives small ViTs from the base ViT for edge
devices with specific task requirements. NuWa can transfer task-specific
knowledge extracted from the base ViT into small ViTs that fully leverage
constrained resources on edge devices to maximize model accuracy with inference
latency assurance. Experiments with three base ViTs on three public datasets
demonstrate that compared with state-of-the-art solutions, NuWa improves model
accuracy by up to $\text{11.83}\%$ and accelerates model inference by
1.29$\times$ - 2.79$\times$. Code for reproduction is available at
https://anonymous.4open.science/r/Task_Specific-3A5E.
|
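A generic sketch of transferring task-specific knowledge from a base ViT into a small ViT via distillation on the device's task data; NuWa's actual derivation procedure is more involved, so this only illustrates the teacher-student setup it builds on.

```python
# One distillation step from an over-qualified base ViT to a small task-specific ViT.
import torch
import torch.nn.functional as F

def distill_step(small_vit, base_vit, images, optimizer, T=2.0):
    with torch.no_grad():
        teacher_logits = base_vit(images)      # broad, over-qualified model
    student_logits = small_vit(images)         # task-specific small model
    loss = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * T * T                                  # standard temperature scaling
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```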
2504.03128 | Ka Him Wong | Kahim Wong, Jicheng Zhou, Kemou Li, Yain-Whar Si, Xiaowei Wu, and
Jiantao Zhou | FontGuard: A Robust Font Watermarking Approach Leveraging Deep Font
Knowledge | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The proliferation of AI-generated content raises significant concerns about
forensic and security issues such as source tracing and copyright protection,
highlighting the need for effective watermarking technologies. Font-based text
watermarking has emerged as an effective solution to embed information, which
could ensure copyright, traceability, and compliance of the generated text
content. Existing font watermarking methods usually neglect essential font
knowledge, which leads to watermarked fonts of low quality and limited
embedding capacity. These methods are also vulnerable to real-world
distortions, low-resolution fonts, and inaccurate character segmentation. In
this paper, we introduce FontGuard, a novel font watermarking model that
harnesses the capabilities of font models and language-guided contrastive
learning. Unlike previous methods that focus solely on the pixel-level
alteration, FontGuard modifies fonts by altering hidden style features,
resulting in better font quality upon watermark embedding. We also leverage the
font manifold to increase the embedding capacity of our proposed method by
generating substantial font variants closely resembling the original font.
Furthermore, in the decoder, we employ image-text contrastive learning to
reconstruct the embedded bits, which can achieve desirable robustness against
various real-world transmission distortions. FontGuard outperforms
state-of-the-art methods by +5.4%, +7.4%, and +5.8% in decoding accuracy under
synthetic, cross-media, and online social network distortions, respectively,
while improving the visual quality by 52.7% in terms of LPIPS. Moreover,
FontGuard uniquely allows the generation of watermarked fonts for unseen fonts
without re-training the network. The code and dataset are available at
https://github.com/KAHIMWONG/FontGuard.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 02:39:33 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Wong",
"Kahim",
""
],
[
"Zhou",
"Jicheng",
""
],
[
"Li",
"Kemou",
""
],
[
"Si",
"Yain-Whar",
""
],
[
"Wu",
"Xiaowei",
""
],
[
"Zhou",
"Jiantao",
""
]
] | TITLE: FontGuard: A Robust Font Watermarking Approach Leveraging Deep Font
Knowledge
ABSTRACT: The proliferation of AI-generated content raises significant concerns about
forensic and security issues such as source tracing and copyright protection,
highlighting the need for effective watermarking technologies. Font-based text
watermarking has emerged as an effective solution to embed information, which
could ensure copyright, traceability, and compliance of the generated text
content. Existing font watermarking methods usually neglect essential font
knowledge, which leads to watermarked fonts of low quality and limited
embedding capacity. These methods are also vulnerable to real-world
distortions, low-resolution fonts, and inaccurate character segmentation. In
this paper, we introduce FontGuard, a novel font watermarking model that
harnesses the capabilities of font models and language-guided contrastive
learning. Unlike previous methods that focus solely on pixel-level
alteration, FontGuard modifies fonts by altering hidden style features,
resulting in better font quality upon watermark embedding. We also leverage the
font manifold to increase the embedding capacity of our proposed method by
generating substantial font variants closely resembling the original font.
Furthermore, in the decoder, we employ image-text contrastive learning to
reconstruct the embedded bits, which can achieve desirable robustness against
various real-world transmission distortions. FontGuard outperforms
state-of-the-art methods by +5.4%, +7.4%, and +5.8% in decoding accuracy under
synthetic, cross-media, and online social network distortions, respectively,
while improving the visual quality by 52.7% in terms of LPIPS. Moreover,
FontGuard uniquely allows the generation of watermarked fonts for unseen fonts
without re-training the network. The code and dataset are available at
https://github.com/KAHIMWONG/FontGuard.
|
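To make the decoder idea above concrete, the following is a minimal, hedged sketch of an image-text contrastive (InfoNCE-style) loss in Python; the embedding dimension, temperature, and symmetric formulation are assumptions and do not reproduce FontGuard's actual decoder:

import torch
import torch.nn.functional as F

def contrastive_decode_loss(img_emb, txt_emb, temperature=0.07):
    # Pair each watermarked-glyph embedding with the embedding of the
    # bit string it should decode to; off-diagonal pairs act as negatives.
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature          # (B, B)
    targets = torch.arange(img_emb.size(0), device=img_emb.device)
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2     # symmetric loss

loss = contrastive_decode_loss(torch.randn(8, 256), torch.randn(8, 256))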
2504.03153 | Sathish Kumar | Natalie Tirabassi, Sathish A. P. Kumar, Sumit Jha and Arvind
Ramanathan | MORAL: A Multimodal Reinforcement Learning Framework for Decision Making
in Autonomous Laboratories | 9 pages, 14 figures and 3 tables | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | We propose MORAL (a multimodal reinforcement learning framework for decision
making in autonomous laboratories) that enhances sequential decision-making in
autonomous robotic laboratories through the integration of visual and textual
inputs. Using the BridgeData V2 dataset, we generate fine-tuned image captions
with a pretrained BLIP-2 vision-language model and combine them with visual
features through an early fusion strategy. The fused representations are
processed using Deep Q-Network (DQN) and Proximal Policy Optimization (PPO)
agents. Experimental results demonstrate that multimodal agents achieve a 20%
improvement in task completion rates and significantly outperform visual-only
and textual-only baselines after sufficient training. Compared to
transformer-based and recurrent multimodal RL models, our approach achieves
superior performance in cumulative reward and caption quality metrics (BLEU,
METEOR, ROUGE-L). These results highlight the impact of semantically aligned
language cues in enhancing agent learning efficiency and generalization. The
proposed framework contributes to the advancement of multimodal reinforcement
learning and embodied AI systems in dynamic, real-world environments.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 04:15:52 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Tirabassi",
"Natalie",
""
],
[
"Kumar",
"Sathish A. P.",
""
],
[
"Jha",
"Sumit",
""
],
[
"Ramanathan",
"Arvind",
""
]
] | TITLE: MORAL: A Multimodal Reinforcement Learning Framework for Decision Making
in Autonomous Laboratories
ABSTRACT: We propose MORAL (a multimodal reinforcement learning framework for decision
making in autonomous laboratories) that enhances sequential decision-making in
autonomous robotic laboratories through the integration of visual and textual
inputs. Using the BridgeData V2 dataset, we generate fine-tuned image captions
with a pretrained BLIP-2 vision-language model and combine them with visual
features through an early fusion strategy. The fused representations are
processed using Deep Q-Network (DQN) and Proximal Policy Optimization (PPO)
agents. Experimental results demonstrate that multimodal agents achieve a 20%
improvement in task completion rates and significantly outperform visual-only
and textual-only baselines after sufficient training. Compared to
transformer-based and recurrent multimodal RL models, our approach achieves
superior performance in cumulative reward and caption quality metrics (BLEU,
METEOR, ROUGE-L). These results highlight the impact of semantically aligned
language cues in enhancing agent learning efficiency and generalization. The
proposed framework contributes to the advancement of multimodal reinforcement
learning and embodied AI systems in dynamic, real-world environments.
|
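The early fusion step described above can be sketched in a few lines; the feature dimensions (512 visual, 768 caption) and the single projection layer are assumptions for illustration, not MORAL's actual architecture:

import torch
import torch.nn as nn

class EarlyFusion(nn.Module):
    def __init__(self, vis_dim=512, txt_dim=768, out_dim=256):
        super().__init__()
        self.proj = nn.Linear(vis_dim + txt_dim, out_dim)

    def forward(self, vis_feat, txt_feat):
        # Concatenate modalities first, then learn a joint representation
        fused = torch.cat([vis_feat, txt_feat], dim=-1)
        return torch.relu(self.proj(fused))

state = EarlyFusion()(torch.randn(1, 512), torch.randn(1, 768))
# `state` would then be consumed by the DQN or PPO agent.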
2504.03162 | Ruoyu Chen | Zihan Gu, Ruoyu Chen, Hua Zhang, Yue Hu, Xiaochun Cao | Beyond Progress Measures: Theoretical Insights into the Mechanism of
Grokking | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Grokking, referring to the abrupt improvement in test accuracy after extended
overfitting, offers valuable insights into the mechanisms of model
generalization. Existing research based on progress measures implies that
grokking relies on understanding the optimization dynamics when the loss
function is dominated solely by the weight decay term. However, we find that
this optimization merely leads to token uniformity, which is not a sufficient
condition for grokking. In this work, we investigate the grokking mechanism
underlying the Transformer in the task of prime number operations. Based on
theoretical analysis and experimental validation, we present the following
insights: (i) The weight decay term encourages uniformity across all tokens in
the embedding space when it is minimized. (ii) The occurrence of grokking is
jointly determined by the uniformity of the embedding space and the
distribution of the training dataset. Building on these insights, we provide a
unified perspective for understanding various previously proposed progress
measures and introduce a novel, concise, and effective progress measure that
could trace the changes in test loss more accurately. Finally, to demonstrate
the versatility of our theoretical framework, we design a dedicated dataset to
validate our theory on ResNet-18, successfully showcasing the occurrence of
grokking.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 04:42:38 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Gu",
"Zihan",
""
],
[
"Chen",
"Ruoyu",
""
],
[
"Zhang",
"Hua",
""
],
[
"Hu",
"Yue",
""
],
[
"Cao",
"Xiaochun",
""
]
] | TITLE: Beyond Progress Measures: Theoretical Insights into the Mechanism of
Grokking
ABSTRACT: Grokking, referring to the abrupt improvement in test accuracy after extended
overfitting, offers valuable insights into the mechanisms of model
generalization. Existing research based on progress measures implies that
grokking relies on understanding the optimization dynamics when the loss
function is dominated solely by the weight decay term. However, we find that
this optimization merely leads to token uniformity, which is not a sufficient
condition for grokking. In this work, we investigate the grokking mechanism
underlying the Transformer in the task of prime number operations. Based on
theoretical analysis and experimental validation, we present the following
insights: (i) The weight decay term encourages uniformity across all tokens in
the embedding space when it is minimized. (ii) The occurrence of grokking is
jointly determined by the uniformity of the embedding space and the
distribution of the training dataset. Building on these insights, we provide a
unified perspective for understanding various previously proposed progress
measures and introduce a novel, concise, and effective progress measure that
could trace the changes in test loss more accurately. Finally, to demonstrate
the versatility of our theoretical framework, we design a dedicated dataset to
validate our theory on ResNet-18, successfully showcasing the occurrence of
grokking.
|
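One simple way to quantify the token uniformity discussed above is mean pairwise cosine similarity over the embedding table; this stand-in metric is an assumption and is not the paper's proposed progress measure:

import torch
import torch.nn.functional as F

def token_uniformity(embedding):
    # embedding: (V, D) token embedding table; values near 1 indicate
    # that all token embeddings have collapsed toward a common direction.
    e = F.normalize(embedding, dim=-1)
    sim = e @ e.t()                                  # (V, V) cosine matrix
    v = sim.size(0)
    off_diag = sim.sum() - sim.diagonal().sum()      # drop self-similarity
    return (off_diag / (v * (v - 1))).item()

print(token_uniformity(torch.randn(113, 128)))  # random init: close to 0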
2504.03165 | Xuanyu Lei | Weitao Li, Kaiming Liu, Xiangyu Zhang, Xuanyu Lei, Weizhi Ma, Yang Liu | Efficient Dynamic Clustering-Based Document Compression for
Retrieval-Augmented-Generation | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Retrieval-Augmented Generation (RAG) has emerged as a widely adopted approach
for knowledge integration during large language model (LLM) inference in recent
years. However, current RAG implementations face challenges in effectively
addressing noise, repetition and redundancy in retrieved content, primarily due
to their limited ability to exploit fine-grained inter-document relationships.
To address these limitations, we propose an \textbf{E}fficient \textbf{D}ynamic
\textbf{C}lustering-based document \textbf{C}ompression framework
(\textbf{EDC\textsuperscript{2}-RAG}) that effectively utilizes latent
inter-document relationships while simultaneously removing irrelevant
information and redundant content. We validate our approach, built upon
GPT-3.5, on widely used knowledge-QA and hallucination-detection datasets. The
results show that this method achieves consistent performance improvements
across various scenarios and experimental settings, demonstrating strong
robustness and applicability. Our code and datasets can be found at
https://github.com/Tsinghua-dhy/EDC-2-RAG.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 04:43:13 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Li",
"Weitao",
""
],
[
"Liu",
"Kaiming",
""
],
[
"Zhang",
"Xiangyu",
""
],
[
"Lei",
"Xuanyu",
""
],
[
"Ma",
"Weizhi",
""
],
[
"Liu",
"Yang",
""
]
] | TITLE: Efficient Dynamic Clustering-Based Document Compression for
Retrieval-Augmented-Generation
ABSTRACT: Retrieval-Augmented Generation (RAG) has emerged as a widely adopted approach
for knowledge integration during large language model (LLM) inference in recent
years. However, current RAG implementations face challenges in effectively
addressing noise, repetition and redundancy in retrieved content, primarily due
to their limited ability to exploit fine-grained inter-document relationships.
To address these limitations, we propose an \textbf{E}fficient \textbf{D}ynamic
\textbf{C}lustering-based document \textbf{C}ompression framework
(\textbf{EDC\textsuperscript{2}-RAG}) that effectively utilizes latent
inter-document relationships while simultaneously removing irrelevant
information and redundant content. We validate our approach, built upon
GPT-3.5, on widely used knowledge-QA and hallucination-detection datasets. The
results show that this method achieves consistent performance improvements
across various scenarios and experimental settings, demonstrating strong
robustness and applicability. Our code and datasets can be found at
https://github.com/Tsinghua-dhy/EDC-2-RAG.
|
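A minimal sketch of clustering-based document compression, under the assumption that near-duplicate retrieved chunks are clustered by embedding distance and only one representative per cluster is kept; the threshold and linkage choices are illustrative, not the paper's configuration:

import numpy as np
from sklearn.cluster import AgglomerativeClustering

def compress_documents(embeddings, docs, distance_threshold=0.3):
    clustering = AgglomerativeClustering(
        n_clusters=None, distance_threshold=distance_threshold,
        metric="cosine", linkage="average").fit(embeddings)
    kept, seen = [], set()
    for doc, label in zip(docs, clustering.labels_):
        if label not in seen:      # keep the highest-ranked doc per cluster
            seen.add(label)
            kept.append(doc)
    return kept

emb = np.random.rand(6, 128)
print(compress_documents(emb, [f"doc{i}" for i in range(6)]))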
2504.03167 | Sila Lertbanjongngam | Haruhiko Yoshioka, Sila Lertbanjongngam, Masayuki Inaba, Youmei Fan,
Takashi Nakano, Kazumasa Shimari, Raula Gaikovina Kula, Kenichi Matsumoto | Do Developers Depend on Deprecated Library Versions? A Mining Study of
Log4j | Accepted for publication in 22nd international conference on Mining
Software Repositories (MSR 2025) : 5 pages, 6 figures | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Log4j has become a widely adopted logging library for Java programs due to
its long history and high reliability. Its widespread use is notable not only
because of its maturity but also due to the complexity and depth of its
features, which have made it an essential tool for many developers. However,
Log4j 1.x, which reached its end of support (deprecated), poses significant
security risks and has numerous deprecated features that can be exploited by
attackers. Despite this, some clients may still rely on this library. We aim to
understand whether clients are still using Log4j 1.x despite its official
support ending. We utilized the Mining Software Repositories 2025 challenge
dataset, which provides a large and representative sample of open-source
software projects. We analyzed over 10,000 log entries from this dataset using
the Goblin framework to identify
trends in usage rates for both Log4j 1.x and Log4j-core 2.x. Specifically, our
study addressed two key issues: (1) We examined the usage rates and trends for
these two libraries, highlighting any notable differences or patterns in their
adoption. (2) We demonstrate that projects initiated after a deprecated library
has reached the end of its support lifecycle can still maintain significant
popularity. These findings highlight that deprecated libraries are still
popular, with the next step being to understand the reasoning behind these
adoptions.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 04:49:36 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Yoshioka",
"Haruhiko",
""
],
[
"Lertbanjongngam",
"Sila",
""
],
[
"Inaba",
"Masayuki",
""
],
[
"Fan",
"Youmei",
""
],
[
"Nakano",
"Takashi",
""
],
[
"Shimari",
"Kazumasa",
""
],
[
"Kula",
"Raula Gaikovina",
""
],
[
"Matsumoto",
"Kenichi",
""
]
] | TITLE: Do Developers Depend on Deprecated Library Versions? A Mining Study of
Log4j
ABSTRACT: Log4j has become a widely adopted logging library for Java programs due to
its long history and high reliability. Its widespread use is notable not only
because of its maturity but also due to the complexity and depth of its
features, which have made it an essential tool for many developers. However,
Log4j 1.x, which reached its end of support (deprecated), poses significant
security risks and has numerous deprecated features that can be exploited by
attackers. Despite this, some clients may still rely on this library. We aim to
understand whether clients are still using Log4j 1.x despite its official
support ending. We utilized the Mining Software Repositories 2025 challenge
dataset, which provides a large and representative sample of open-source
software projects. We analyzed over 10,000 log entries from this dataset using
the Goblin framework to identify
trends in usage rates for both Log4j 1.x and Log4j-core 2.x. Specifically, our
study addressed two key issues: (1) We examined the usage rates and trends for
these two libraries, highlighting any notable differences or patterns in their
adoption. (2) We demonstrate that projects initiated after a deprecated library
has reached the end of its support lifecycle can still maintain significant
popularity. These findings highlight that deprecated libraries are still
popular, with the next step being to understand the reasoning behind these
adoptions.
|
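The trend analysis described above boils down to counting dependency records per library line over time. A toy pandas sketch follows; the column names and rows are invented stand-ins for the mined Goblin data:

import pandas as pd

deps = pd.DataFrame({
    "artifact": ["log4j:log4j", "org.apache.logging.log4j:log4j-core",
                 "log4j:log4j"],
    "version":  ["1.2.17", "2.20.0", "1.2.16"],
    "year":     [2023, 2023, 2024],
})
# Usage counts per year for the deprecated 1.x line vs. log4j-core 2.x
trend = deps.groupby(["year", "artifact"]).size()
print(trend)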
2504.03168 | Ross Greer | Lucas Choi and Ross Greer | Finding the Reflection Point: Unpadding Images to Remove Data
Augmentation Artifacts in Large Open Source Image Datasets for Machine
Learning | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we address a novel image restoration problem relevant to
machine learning dataset curation: the detection and removal of noisy mirrored
padding artifacts. While data augmentation techniques like padding are
necessary for standardizing image dimensions, they can introduce artifacts that
degrade model evaluation when datasets are repurposed across domains. We
propose a systematic algorithm to precisely delineate the reflection boundary
through a minimum mean squared error approach with thresholding and remove
reflective padding. Our method effectively identifies the transition between
authentic content and its mirrored counterpart, even in the presence of
compression or interpolation noise. We demonstrate our algorithm's efficacy on
the SHEL5k dataset, showing significant performance improvements in zero-shot
object detection tasks using OWLv2, with average precision increasing from 0.47
to 0.61 for hard hat detection and from 0.68 to 0.73 for person detection. By
addressing annotation inconsistencies and distorted objects in padded regions,
our approach enhances dataset integrity, enabling more reliable model
evaluation across computer vision tasks.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 04:54:10 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Choi",
"Lucas",
""
],
[
"Greer",
"Ross",
""
]
] | TITLE: Finding the Reflection Point: Unpadding Images to Remove Data
Augmentation Artifacts in Large Open Source Image Datasets for Machine
Learning
ABSTRACT: In this paper, we address a novel image restoration problem relevant to
machine learning dataset curation: the detection and removal of noisy mirrored
padding artifacts. While data augmentation techniques like padding are
necessary for standardizing image dimensions, they can introduce artifacts that
degrade model evaluation when datasets are repurposed across domains. We
propose a systematic algorithm to precisely delineate the reflection boundary
through a minimum mean squared error approach with thresholding and remove
reflective padding. Our method effectively identifies the transition between
authentic content and its mirrored counterpart, even in the presence of
compression or interpolation noise. We demonstrate our algorithm's efficacy on
the SHEL5k dataset, showing significant performance improvements in zero-shot
object detection tasks using OWLv2, with average precision increasing from 0.47
to 0.61 for hard hat detection and from 0.68 to 0.73 for person detection. By
addressing annotation inconsistencies and distorted objects in padded regions,
our approach enhances dataset integrity, enabling more reliable model
evaluation across computer vision tasks.
|
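A minimal sketch of the minimum-MSE reflection search described above: slide a candidate boundary down the image and keep the row whose mirrored band matches best below a noise threshold. The band width and threshold values are assumptions:

import numpy as np

def find_reflection_row(img, band=16, threshold=20.0):
    # Return the row where mirrored padding begins, or None if no
    # candidate's mirror-band MSE falls below the threshold.
    h = img.shape[0]
    best_row, best_mse = None, threshold
    for r in range(band, h - band):
        above = img[r - band:r].astype(np.float64)
        below = img[r:r + band][::-1].astype(np.float64)   # mirrored band
        mse = np.mean((above - below) ** 2)
        if mse < best_mse:
            best_row, best_mse = r, mse
    return best_row

base = np.random.randint(0, 255, (100, 64), dtype=np.uint8)
padded = np.vstack([base, base[::-1][:30]])   # reflect-pad 30 rows
print(find_reflection_row(padded))            # prints 100; crop with img[:row]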
2504.03171 | Zeyang Zheng | Zeyang Zheng, Arman Hosseini, Dong Chen, Omid Shoghli, and Arsalan
Heydarian | Real-Time Roadway Obstacle Detection for Electric Scooters Using Deep
Learning and Multi-Sensor Fusion | Accepted at ASCE International Conference on Computing in Civil
Engineering (i3ce) | null | null | null | cs.CV cs.AI cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The increasing adoption of electric scooters (e-scooters) in urban areas has
coincided with a rise in traffic accidents and injuries, largely due to their
small wheels, lack of suspension, and sensitivity to uneven surfaces. While
deep learning-based object detection has been widely used to improve automobile
safety, its application for e-scooter obstacle detection remains unexplored.
This study introduces a novel ground obstacle detection system for e-scooters,
integrating an RGB camera and a depth camera to enhance real-time road hazard
detection. Additionally, the Inertial Measurement Unit (IMU) measures linear
vertical acceleration to identify surface vibrations, guiding the selection of
six obstacle categories: tree branches, manhole covers, potholes, pine cones,
non-directional cracks, and truncated domes. All sensors, including the RGB
camera, depth camera, and IMU, are integrated within the Intel RealSense Camera
D435i. A deep learning model powered by YOLO detects road hazards and utilizes
depth data to estimate obstacle proximity. Evaluated on seven hours of
naturalistic riding data, the system achieves a high mean average precision
(mAP) of 0.827 and demonstrates excellent real-time performance. This approach
provides an effective solution to enhance e-scooter safety through advanced
computer vision and data fusion. The dataset is accessible at
https://zenodo.org/records/14583718, and the project code is hosted on
https://github.com/Zeyang-Zheng/Real-Time-Roadway-Obstacle-Detection-for-Electric-Scooters.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 05:01:16 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Zheng",
"Zeyang",
""
],
[
"Hosseini",
"Arman",
""
],
[
"Chen",
"Dong",
""
],
[
"Shoghli",
"Omid",
""
],
[
"Heydarian",
"Arsalan",
""
]
] | TITLE: Real-Time Roadway Obstacle Detection for Electric Scooters Using Deep
Learning and Multi-Sensor Fusion
ABSTRACT: The increasing adoption of electric scooters (e-scooters) in urban areas has
coincided with a rise in traffic accidents and injuries, largely due to their
small wheels, lack of suspension, and sensitivity to uneven surfaces. While
deep learning-based object detection has been widely used to improve automobile
safety, its application for e-scooter obstacle detection remains unexplored.
This study introduces a novel ground obstacle detection system for e-scooters,
integrating an RGB camera and a depth camera to enhance real-time road hazard
detection. Additionally, the Inertial Measurement Unit (IMU) measures linear
vertical acceleration to identify surface vibrations, guiding the selection of
six obstacle categories: tree branches, manhole covers, potholes, pine cones,
non-directional cracks, and truncated domes. All sensors, including the RGB
camera, depth camera, and IMU, are integrated within the Intel RealSense Camera
D435i. A deep learning model powered by YOLO detects road hazards and utilizes
depth data to estimate obstacle proximity. Evaluated on seven hours of
naturalistic riding data, the system achieves a high mean average precision
(mAP) of 0.827 and demonstrates excellent real-time performance. This approach
provides an effective solution to enhance e-scooter safety through advanced
computer vision and data fusion. The dataset is accessible at
https://zenodo.org/records/14583718, and the project code is hosted on
https://github.com/Zeyang-Zheng/Real-Time-Roadway-Obstacle-Detection-for-Electric-Scooters.
|
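The detection-depth fusion described above can be sketched as taking a robust depth statistic inside each YOLO box; the box format, meter units, and use of the median are assumptions for illustration:

import numpy as np

def obstacle_distance(depth_map, box):
    # box: (x1, y1, x2, y2) pixel coordinates of a detected obstacle
    x1, y1, x2, y2 = box
    patch = depth_map[y1:y2, x1:x2]
    valid = patch[patch > 0]          # drop pixels with missing depth
    return float(np.median(valid)) if valid.size else float("nan")

depth = np.random.uniform(0.3, 5.0, size=(480, 640))   # fake depth frame (m)
print(obstacle_distance(depth, (100, 200, 180, 260)))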
2504.03173 | Hongliang Zhang | Hongliang Zhang, Jiguo Yu, Fenghua Xu, Chunqiang Hu, Yongzhao Zhang,
Xiaofen Wang, Zhongyuan Yu, Xiaosong Zhang | PPFPL: Cross-silo Privacy-preserving Federated Prototype Learning
Against Data Poisoning Attacks on Non-IID Data | null | null | null | null | cs.CR cs.DC | http://creativecommons.org/licenses/by/4.0/ | Privacy-Preserving Federated Learning (PPFL) allows multiple clients to
collaboratively train a deep learning model by submitting hidden model updates.
Nonetheless, PPFL is vulnerable to data poisoning attacks due to the
distributed training nature of clients. Existing solutions have struggled to
improve the performance of cross-silo PPFL in poisoned Non-IID data. To address
the issues, this paper proposes a privacy-preserving federated prototype
learning framework, named PPFPL, which enhances the cross-silo FL performance
in poisoned Non-IID data while effectively resisting data poisoning attacks.
Specifically, we adopt prototypes as client-submitted model updates to
eliminate the impact of tampered data distribution on federated learning.
Moreover, we utilize two servers to achieve Byzantine-robust aggregation by
secure aggregation protocol, which greatly reduces the impact of malicious
clients. Theoretical analyses confirm the convergence of PPFPL, and
experimental results on publicly available datasets show that PPFPL is
effective at resisting data poisoning attacks under Non-IID conditions.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 05:05:24 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Zhang",
"Hongliang",
""
],
[
"Yu",
"Jiguo",
""
],
[
"Xu",
"Fenghua",
""
],
[
"Hu",
"Chunqiang",
""
],
[
"Zhang",
"Yongzhao",
""
],
[
"Wang",
"Xiaofen",
""
],
[
"Yu",
"Zhongyuan",
""
],
[
"Zhang",
"Xiaosong",
""
]
] | TITLE: PPFPL: Cross-silo Privacy-preserving Federated Prototype Learning
Against Data Poisoning Attacks on Non-IID Data
ABSTRACT: Privacy-Preserving Federated Learning (PPFL) allows multiple clients to
collaboratively train a deep learning model by submitting hidden model updates.
Nonetheless, PPFL is vulnerable to data poisoning attacks due to the
distributed training nature of clients. Existing solutions have struggled to
improve the performance of cross-silo PPFL in poisoned Non-IID data. To address
the issues, this paper proposes a privacy-preserving federated prototype
learning framework, named PPFPL, which enhances the cross-silo FL performance
in poisoned Non-IID data while effectively resisting data poisoning attacks.
Specifically, we adopt prototypes as client-submitted model updates to
eliminate the impact of tampered data distribution on federated learning.
Moreover, we utilize two servers to achieve Byzantine-robust aggregation by
secure aggregation protocol, which greatly reduces the impact of malicious
clients. Theoretical analyses confirm the convergence of PPFPL, and
experimental results on publicly available datasets show that PPFPL is
effective at resisting data poisoning attacks under Non-IID conditions.
|
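A hedged sketch of the prototype idea above: instead of raw gradients, each client would submit per-class mean feature vectors. The shapes and the zero-fill for absent classes are assumptions:

import torch

def class_prototypes(features, labels, num_classes):
    # features: (N, D) penultimate-layer activations; labels: (N,)
    protos = torch.zeros(num_classes, features.size(1))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = features[mask].mean(dim=0)   # class-c prototype
    return protos

protos = class_prototypes(torch.randn(32, 64),
                          torch.randint(0, 10, (32,)), num_classes=10)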
2504.03188 | Kotaro Ikeda | Kotaro Ikeda, Masanori Koyama, Jinzhe Zhang, Kohei Hayashi and Kenji
Fukumizu | Simultaneous Learning of Optimal Transports for Training All-to-All
Flow-Based Condition Transfer Model | 29 pages, 17 figures | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | In this paper, we propose a flow-based method for learning all-to-all
transfer maps among conditional distributions, approximating pairwise optimal
transport. The proposed method addresses the challenge of handling continuous
conditions, which often involve a large set of conditions with sparse empirical
observations per condition. We introduce a novel cost function that enables
simultaneous learning of optimal transports for all pairs of conditional
distributions. Our method is supported by a theoretical guarantee that, in the
limit, it converges to pairwise optimal transports among infinite pairs of
conditional distributions. The learned transport maps are subsequently used to
couple data points in conditional flow matching. We demonstrate the
effectiveness of this method on synthetic and benchmark datasets, as well as on
chemical datasets where continuous physical properties are defined as
conditions.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 05:32:54 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Ikeda",
"Kotaro",
""
],
[
"Koyama",
"Masanori",
""
],
[
"Zhang",
"Jinzhe",
""
],
[
"Hayashi",
"Kohei",
""
],
[
"Fukumizu",
"Kenji",
""
]
] | TITLE: Simultaneous Learning of Optimal Transports for Training All-to-All
Flow-Based Condition Transfer Model
ABSTRACT: In this paper, we propose a flow-based method for learning all-to-all
transfer maps among conditional distributions, approximating pairwise optimal
transport. The proposed method addresses the challenge of handling continuous
conditions, which often involve a large set of conditions with sparse empirical
observations per condition. We introduce a novel cost function that enables
simultaneous learning of optimal transports for all pairs of conditional
distributions. Our method is supported by a theoretical guarantee that, in the
limit, it converges to pairwise optimal transports among infinite pairs of
conditional distributions. The learned transport maps are subsequently used to
couple data points in conditional flow matching. We demonstrate the
effectiveness of this method on synthetic and benchmark datasets, as well as on
chemical datasets where continuous physical properties are defined as
conditions.
|
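As background for the coupling step mentioned above, here is a minibatch entropic-OT pairing sketch using the POT library; the cost normalization and regularization strength are assumptions, and this is not the paper's simultaneous all-pairs cost function:

import numpy as np
import ot  # POT: pip install pot

def ot_pairing(x0, x1, reg=0.05):
    n = x0.shape[0]
    a = b = np.full(n, 1.0 / n)          # uniform marginals
    M = ot.dist(x0, x1)                  # squared Euclidean cost
    M /= M.max()                         # normalize for numerical stability
    G = ot.sinkhorn(a, b, M, reg)        # entropic transport plan
    return G.argmax(axis=1)              # pair each x0[i] with an x1[j]

pairs = ot_pairing(np.random.randn(16, 2), np.random.randn(16, 2))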
2504.03198 | Jiaxin Guo | Jiaxin Guo, Wenzhen Dong, Tianyu Huang, Hao Ding, Ziyi Wang, Haomin
Kuang, Qi Dou, Yun-Hui Liu | Endo3R: Unified Online Reconstruction from Dynamic Monocular Endoscopic
Video | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reconstructing 3D scenes from monocular surgical videos can enhance surgeons'
perception and therefore plays a vital role in various computer-assisted
surgery tasks. However, achieving scale-consistent reconstruction remains an
open challenge due to inherent issues in endoscopic videos, such as dynamic
deformations and textureless surfaces. Despite recent advances, current methods
either rely on calibration or instrument priors to estimate scale, or employ
SfM-like multi-stage pipelines, leading to error accumulation and requiring
offline optimization. In this paper, we present Endo3R, a unified 3D foundation
model for online scale-consistent reconstruction from monocular surgical video,
without any priors or extra optimization. Our model unifies the tasks by
predicting globally aligned pointmaps, scale-consistent video depths, and
camera parameters without any offline optimization. The core contribution of
our method is expanding the capability of the recent pairwise reconstruction
model to long-term incremental dynamic reconstruction by an uncertainty-aware
dual memory mechanism. The mechanism maintains history tokens of both
short-term dynamics and long-term spatial consistency. Notably, to tackle the
highly dynamic nature of surgical scenes, we measure the uncertainty of tokens
via Sampson distance and filter out tokens with high uncertainty. Given the
scarcity of endoscopic datasets with ground-truth depth and camera poses, we
further devise a self-supervised mechanism with a novel dynamics-aware flow
loss. Abundant experiments on SCARED and Hamlyn datasets demonstrate our
superior performance in zero-shot surgical video depth prediction and camera
pose estimation with online efficiency. Project page:
https://wrld.github.io/Endo3R/.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 06:05:22 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Guo",
"Jiaxin",
""
],
[
"Dong",
"Wenzhen",
""
],
[
"Huang",
"Tianyu",
""
],
[
"Ding",
"Hao",
""
],
[
"Wang",
"Ziyi",
""
],
[
"Kuang",
"Haomin",
""
],
[
"Dou",
"Qi",
""
],
[
"Liu",
"Yun-Hui",
""
]
] | TITLE: Endo3R: Unified Online Reconstruction from Dynamic Monocular Endoscopic
Video
ABSTRACT: Reconstructing 3D scenes from monocular surgical videos can enhance surgeons'
perception and therefore plays a vital role in various computer-assisted
surgery tasks. However, achieving scale-consistent reconstruction remains an
open challenge due to inherent issues in endoscopic videos, such as dynamic
deformations and textureless surfaces. Despite recent advances, current methods
either rely on calibration or instrument priors to estimate scale, or employ
SfM-like multi-stage pipelines, leading to error accumulation and requiring
offline optimization. In this paper, we present Endo3R, a unified 3D foundation
model for online scale-consistent reconstruction from monocular surgical video,
without any priors or extra optimization. Our model unifies the tasks by
predicting globally aligned pointmaps, scale-consistent video depths, and
camera parameters without any offline optimization. The core contribution of
our method is expanding the capability of the recent pairwise reconstruction
model to long-term incremental dynamic reconstruction by an uncertainty-aware
dual memory mechanism. The mechanism maintains history tokens of both
short-term dynamics and long-term spatial consistency. Notably, to tackle the
highly dynamic nature of surgical scenes, we measure the uncertainty of tokens
via Sampson distance and filter out tokens with high uncertainty. Given the
scarcity of endoscopic datasets with ground-truth depth and camera poses, we
further devise a self-supervised mechanism with a novel dynamics-aware flow
loss. Abundant experiments on SCARED and Hamlyn datasets demonstrate our
superior performance in zero-shot surgical video depth prediction and camera
pose estimation with online efficiency. Project page:
https://wrld.github.io/Endo3R/.
|
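For reference, the Sampson distance named above is the standard first-order geometric error for two-view correspondences; the sketch below gives that textbook formula only, and how Endo3R maps it to token uncertainty is not reproduced:

import numpy as np

def sampson_distance(F, x1, x2):
    # F: 3x3 fundamental matrix; x1, x2: (N, 3) homogeneous points
    Fx1 = x1 @ F.T           # epipolar lines in image 2
    Ftx2 = x2 @ F            # epipolar lines in image 1
    num = np.einsum("ij,ij->i", x2, Fx1) ** 2
    den = Fx1[:, 0]**2 + Fx1[:, 1]**2 + Ftx2[:, 0]**2 + Ftx2[:, 1]**2
    return num / den

F_mat = np.random.randn(3, 3)
pts = np.concatenate([np.random.randn(5, 2), np.ones((5, 1))], axis=1)
print(sampson_distance(F_mat, pts, pts))   # filter tokens above a threshold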
2504.03221 | Abu Saleh Musa Miah Dr. | Jungpil Shin, Abu Saleh Musa Miah, Sota Konnai, Shu Hoshitaka, Pankoo
Kim | Electromyography-Based Gesture Recognition: Hierarchical Feature
Extraction for Enhanced Spatial-Temporal Dynamics | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Hand gesture recognition using multichannel surface electromyography (sEMG)
is challenging due to unstable predictions and inefficient time-varying feature
enhancement. To overcome the lack of effective signal-based time-varying
features, we propose a lightweight squeeze-and-excitation deep learning
approach with multi-stream spatial-temporal time-varying feature extraction to
build an effective sEMG-based hand gesture recognition system. Each branch of
the proposed model was designed to extract hierarchical features, capturing
both global and detailed spatial-temporal relationships to ensure feature
effectiveness. The first branch, utilizing a Bidirectional-TCN (Bi-TCN),
focuses on capturing long-term temporal dependencies by modelling past and
future temporal contexts, providing a holistic view of gesture dynamics. The
second branch, incorporating a 1D Convolutional layer, separable CNN, and
Squeeze-and-Excitation (SE) block, efficiently extracts spatial-temporal
features while emphasizing critical feature channels, enhancing feature
relevance. The third branch, combining a Temporal Convolutional Network (TCN)
and Bidirectional LSTM (BiLSTM), captures bidirectional temporal relationships
and time-varying patterns. Outputs from all branches are fused using
concatenation to capture subtle variations in the data and then refined with a
channel attention module, selectively focusing on the most informative features
while improving computational efficiency. The proposed model was tested on the
Ninapro DB2, DB4, and DB5 datasets, achieving accuracy rates of 96.41%, 92.40%,
and 93.34%, respectively. These results demonstrate the capability of the
system to handle complex sEMG dynamics, offering advancements in prosthetic
limb control and human-machine interface technologies with significant
implications for assistive technologies.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 07:11:12 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Shin",
"Jungpil",
""
],
[
"Miah",
"Abu Saleh Musa",
""
],
[
"Konnai",
"Sota",
""
],
[
"Hoshitaka",
"Shu",
""
],
[
"Kim",
"Pankoo",
""
]
] | TITLE: Electromyography-Based Gesture Recognition: Hierarchical Feature
Extraction for Enhanced Spatial-Temporal Dynamics
ABSTRACT: Hand gesture recognition using multichannel surface electromyography (sEMG)
is challenging due to unstable predictions and inefficient time-varying feature
enhancement. To overcome the lack of effective signal-based time-varying
features, we propose a lightweight squeeze-and-excitation deep learning
approach with multi-stream spatial-temporal time-varying feature extraction to
build an effective sEMG-based hand gesture recognition system. Each branch of
the proposed model was designed to extract hierarchical features, capturing
both global and detailed spatial-temporal relationships to ensure feature
effectiveness. The first branch, utilizing a Bidirectional-TCN (Bi-TCN),
focuses on capturing long-term temporal dependencies by modelling past and
future temporal contexts, providing a holistic view of gesture dynamics. The
second branch, incorporating a 1D Convolutional layer, separable CNN, and
Squeeze-and-Excitation (SE) block, efficiently extracts spatial-temporal
features while emphasizing critical feature channels, enhancing feature
relevance. The third branch, combining a Temporal Convolutional Network (TCN)
and Bidirectional LSTM (BiLSTM), captures bidirectional temporal relationships
and time-varying patterns. Outputs from all branches are fused using
concatenation to capture subtle variations in the data and then refined with a
channel attention module, selectively focusing on the most informative features
while improving computational efficiency. The proposed model was tested on the
Ninapro DB2, DB4, and DB5 datasets, achieving accuracy rates of 96.41%, 92.40%,
and 93.34%, respectively. These results demonstrate the capability of the
system to handle complex sEMG dynamics, offering advancements in prosthetic
limb control and human-machine interface technologies with significant
implications for assistive technologies.
|
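The Squeeze-and-Excitation block named in the second branch above is standard and easy to sketch for 1D signals; the reduction ratio is an assumption, and the full three-branch model is not reproduced:

import torch
import torch.nn as nn

class SEBlock1d(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid())

    def forward(self, x):                 # x: (B, C, T) sEMG features
        w = x.mean(dim=-1)                # squeeze: global average over time
        w = self.fc(w).unsqueeze(-1)      # excitation: per-channel weights
        return x * w                      # emphasize informative channels

out = SEBlock1d(64)(torch.randn(2, 64, 200))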
2504.03229 | Youngjae Jeon | Youngjae Jeon, Eunho Heo, Jinmo Lee, Taewon Uhm, Dongjin Lee | A Robust Method for Fault Detection and Severity Estimation in
Mechanical Vibration Data | 8 pages, 9 figures | 2025 IEEE International Conference on Prognostics and Health
Management (ICPHM) | null | null | eess.SY cs.SY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This paper proposes a robust method for fault detection and severity
estimation in multivariate time-series data to enhance predictive maintenance
of mechanical systems. We use the Temporal Graph Convolutional Network (T-GCN)
model to capture both spatial and temporal dependencies among variables. This
enables accurate future state predictions under varying operational conditions.
To address the challenge of fluctuating anomaly scores that reduce fault
severity estimation accuracy, we introduce a novel fault severity index based
on the mean and standard deviation of anomaly scores. This generates a
continuous and reliable severity measurement. We validate the proposed method
using two experimental datasets: an open IMS bearing dataset and data collected
from a fanjet electric propulsion system. Results demonstrate that our method
significantly reduces abrupt fluctuations and inconsistencies in anomaly
scores. This provides a more dependable foundation for maintenance planning and
risk management in safety-critical applications.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 07:22:29 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Jeon",
"Youngjae",
""
],
[
"Heo",
"Eunho",
""
],
[
"Lee",
"Jinmo",
""
],
[
"Uhm",
"Taewon",
""
],
[
"Lee",
"Dongjin",
""
]
] | TITLE: A Robust Method for Fault Detection and Severity Estimation in
Mechanical Vibration Data
ABSTRACT: This paper proposes a robust method for fault detection and severity
estimation in multivariate time-series data to enhance predictive maintenance
of mechanical systems. We use the Temporal Graph Convolutional Network (T-GCN)
model to capture both spatial and temporal dependencies among variables. This
enables accurate future state predictions under varying operational conditions.
To address the challenge of fluctuating anomaly scores that reduce fault
severity estimation accuracy, we introduce a novel fault severity index based
on the mean and standard deviation of anomaly scores. This generates a
continuous and reliable severity measurement. We validate the proposed method
using two experimental datasets: an open IMS bearing dataset and data collected
from a fanjet electric propulsion system. Results demonstrate that our method
significantly reduces abrupt fluctuations and inconsistencies in anomaly
scores. This provides a more dependable foundation for maintenance planning and
risk management in safety-critical applications.
|
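A sketch of a severity index built from the mean and standard deviation of windowed anomaly scores, as the abstract above describes; the window length and the sum-of-mean-and-std form are assumptions about one plausible instantiation:

import numpy as np

def severity_index(scores, window=50):
    # Smooth fluctuating raw anomaly scores into a continuous severity curve
    out = np.empty(len(scores))
    for t in range(len(scores)):
        w = scores[max(0, t - window + 1): t + 1]
        out[t] = w.mean() + w.std()       # penalize both level and spread
    return out

raw = np.abs(np.random.randn(500)) + np.linspace(0, 2, 500)  # drifting fault
print(severity_index(raw)[-5:])           # rises smoothly with degradation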
2504.03235 | Ibne Farabi Shihab | Ibne Farabi Shihab and Anuj Sharma | Crash Time Matters: HybridMamba for Fine-Grained Temporal Localization
in Traffic Surveillance Footage | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traffic crash detection in long-form surveillance videos is critical for
emergency response and infrastructure planning but remains difficult due to the
brief and rare nature of crash events. We introduce HybridMamba, a novel
architecture that combines visual transformers with state-space temporal
modeling to achieve accurate crash time localization. Our method uses
multi-level token compression and hierarchical temporal processing to remain
computationally efficient without sacrificing temporal resolution. Evaluated on
a large-scale dataset from the Iowa Department of Transportation, HybridMamba
achieves a mean absolute error of 1.50 seconds, with 65.2 percent of
predictions within one second of the ground truth. It outperforms recent
video-language models such as TimeChat and VideoLLaMA2 by up to 2.8 seconds,
while using significantly fewer parameters. Our results demonstrate strong
generalization across videos ranging from 2 to 40 minutes in diverse
conditions. HybridMamba offers a robust and efficient solution for fine-grained
temporal localization in traffic surveillance. The code will be released upon
publication.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 07:35:11 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Shihab",
"Ibne Farabi",
""
],
[
"Sharma",
"Anuj",
""
]
] | TITLE: Crash Time Matters: HybridMamba for Fine-Grained Temporal Localization
in Traffic Surveillance Footage
ABSTRACT: Traffic crash detection in long-form surveillance videos is critical for
emergency response and infrastructure planning but remains difficult due to the
brief and rare nature of crash events. We introduce HybridMamba, a novel
architecture that combines visual transformers with state-space temporal
modeling to achieve accurate crash time localization. Our method uses
multi-level token compression and hierarchical temporal processing to remain
computationally efficient without sacrificing temporal resolution. Evaluated on
a large-scale dataset from the Iowa Department of Transportation, HybridMamba
achieves a mean absolute error of 1.50 seconds, with 65.2 percent of
predictions within one second of the ground truth. It outperforms recent
video-language models such as TimeChat and VideoLLaMA2 by up to 2.8 seconds,
while using significantly fewer parameters. Our results demonstrate strong
generalization across videos ranging from 2 to 40 minutes in diverse
conditions. HybridMamba offers a robust and efficient solution for fine-grained
temporal localization in traffic surveillance. The code will be released upon
publication.
|
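The multi-level token compression mentioned above can be illustrated under the assumption that it resembles adaptive pooling of frame tokens to a fixed budget; HybridMamba's actual module is not reproduced here:

import torch
import torch.nn.functional as F

def compress_tokens(tokens, target_len):
    # tokens: (B, T, D) frame tokens -> (B, target_len, D)
    return F.adaptive_avg_pool1d(tokens.transpose(1, 2),
                                 target_len).transpose(1, 2)

x = compress_tokens(torch.randn(1, 4800, 256), 512)  # long video -> budget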
2504.03238 | Efklidis Katsaros | Akis Nousias, Efklidis Katsaros, Evangelos Syrmos, Panagiotis
Radoglou-Grammatikis, Thomas Lagkas, Vasileios Argyriou, Ioannis Moscholios,
Evangelos Markakis, Sotirios Goudos and Panagiotis Sarigiannidis | Malware Detection in Docker Containers: An Image is Worth a Thousand
Logs | Accepted at ICC-W | null | null | null | cs.CR cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Malware detection is increasingly challenged by evolving techniques like
obfuscation and polymorphism, limiting the effectiveness of traditional
methods. Meanwhile, the widespread adoption of software containers has
introduced new security challenges, including the growing threat of malicious
software injection, where a container, once compromised, can serve as entry
point for further cyberattacks. In this work, we address these security issues
by introducing a method to identify compromised containers through machine
learning analysis of their file systems. We cast the entire software containers
into large RGB images via their tarball representations, and propose to use
established Convolutional Neural Network architectures on a streaming,
patch-based manner. To support our experiments, we release the COSOCO
dataset--the first of its kind--containing 3364 large-scale RGB images of
benign and compromised software containers at
https://huggingface.co/datasets/k3ylabs/cosoco-image-dataset. Our method
detects more malware and achieves higher F1 and Recall scores than all
individual and ensembles of VirusTotal engines, demonstrating its effectiveness
and setting a new standard for identifying malware-compromised software
containers.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 07:38:16 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Nousias",
"Akis",
""
],
[
"Katsaros",
"Efklidis",
""
],
[
"Syrmos",
"Evangelos",
""
],
[
"Radoglou-Grammatikis",
"Panagiotis",
""
],
[
"Lagkas",
"Thomas",
""
],
[
"Argyriou",
"Vasileios",
""
],
[
"Moscholios",
"Ioannis",
""
],
[
"Markakis",
"Evangelos",
""
],
[
"Goudos",
"Sotirios",
""
],
[
"Sarigiannidis",
"Panagiotis",
""
]
] | TITLE: Malware Detection in Docker Containers: An Image is Worth a Thousand
Logs
ABSTRACT: Malware detection is increasingly challenged by evolving techniques like
obfuscation and polymorphism, limiting the effectiveness of traditional
methods. Meanwhile, the widespread adoption of software containers has
introduced new security challenges, including the growing threat of malicious
software injection, where a container, once compromised, can serve as entry
point for further cyberattacks. In this work, we address these security issues
by introducing a method to identify compromised containers through machine
learning analysis of their file systems. We cast entire software containers
into large RGB images via their tarball representations and propose to apply
established Convolutional Neural Network architectures in a streaming,
patch-based manner. To support our experiments, we release the COSOCO
dataset--the first of its kind--containing 3364 large-scale RGB images of
benign and compromised software containers at
https://huggingface.co/datasets/k3ylabs/cosoco-image-dataset. Our method
detects more malware and achieves higher F1 and Recall scores than all
individual and ensembles of VirusTotal engines, demonstrating its effectiveness
and setting a new standard for identifying malware-compromised software
containers.
|
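Casting a tarball into an RGB image, as described above, amounts to reinterpreting its bytes as pixel rows. A minimal sketch follows; the row width and zero-padding are assumptions, not COSOCO's exact rendering:

import numpy as np

def bytes_to_rgb(blob, width=1024):
    data = np.frombuffer(blob, dtype=np.uint8)
    n = width * 3
    data = np.pad(data, (0, (-len(data)) % n))   # pad to whole pixel rows
    return data.reshape(-1, width, 3)            # (H, W, 3) uint8 image

img = bytes_to_rgb(bytes(range(256)) * 50, width=64)  # stand-in for a .tar
# Real usage: bytes_to_rgb(open("container.tar", "rb").read())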
2504.03254 | YiMin Wei | Yimin Wei, Aoran Xiao, Yexian Ren, Yuting Zhu, Hongruixuan Chen,
Junshi Xia, Naoto Yokoya | SARLANG-1M: A Benchmark for Vision-Language Modeling in SAR Image
Understanding | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Synthetic Aperture Radar (SAR) is a crucial remote sensing technology,
enabling all-weather, day-and-night observation with strong surface penetration
for precise and continuous environmental monitoring and analysis. However, SAR
image interpretation remains challenging due to its complex physical imaging
mechanisms and significant visual disparities from human perception. Recently,
Vision-Language Models (VLMs) have demonstrated remarkable success in RGB image
understanding, offering powerful open-vocabulary interpretation and flexible
language interaction. However, their application to SAR images is severely
constrained by the absence of SAR-specific knowledge in their training
distributions, leading to suboptimal performance. To address this limitation,
we introduce SARLANG-1M, a large-scale benchmark tailored for multimodal SAR
image understanding, with a primary focus on integrating SAR with textual
modality. SARLANG-1M comprises more than 1 million high-quality SAR image-text
pairs collected from over 59 cities worldwide. It features hierarchical
resolutions (ranging from 0.1 to 25 meters), fine-grained semantic descriptions
(including both concise and detailed captions), diverse remote sensing
categories (1,696 object types and 16 land cover classes), and multi-task
question-answering pairs spanning seven applications and 1,012 question types.
Extensive experiments on mainstream VLMs demonstrate that fine-tuning with
SARLANG-1M significantly enhances their performance in SAR image
interpretation, reaching performance comparable to human experts. The dataset
and code will be made publicly available at
https://github.com/Jimmyxichen/SARLANG-1M.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 08:09:53 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Wei",
"Yimin",
""
],
[
"Xiao",
"Aoran",
""
],
[
"Ren",
"Yexian",
""
],
[
"Zhu",
"Yuting",
""
],
[
"Chen",
"Hongruixuan",
""
],
[
"Xia",
"Junshi",
""
],
[
"Yokoya",
"Naoto",
""
]
] | TITLE: SARLANG-1M: A Benchmark for Vision-Language Modeling in SAR Image
Understanding
ABSTRACT: Synthetic Aperture Radar (SAR) is a crucial remote sensing technology,
enabling all-weather, day-and-night observation with strong surface penetration
for precise and continuous environmental monitoring and analysis. However, SAR
image interpretation remains challenging due to its complex physical imaging
mechanisms and significant visual disparities from human perception. Recently,
Vision-Language Models (VLMs) have demonstrated remarkable success in RGB image
understanding, offering powerful open-vocabulary interpretation and flexible
language interaction. However, their application to SAR images is severely
constrained by the absence of SAR-specific knowledge in their training
distributions, leading to suboptimal performance. To address this limitation,
we introduce SARLANG-1M, a large-scale benchmark tailored for multimodal SAR
image understanding, with a primary focus on integrating SAR with textual
modality. SARLANG-1M comprises more than 1 million high-quality SAR image-text
pairs collected from over 59 cities worldwide. It features hierarchical
resolutions (ranging from 0.1 to 25 meters), fine-grained semantic descriptions
(including both concise and detailed captions), diverse remote sensing
categories (1,696 object types and 16 land cover classes), and multi-task
question-answering pairs spanning seven applications and 1,012 question types.
Extensive experiments on mainstream VLMs demonstrate that fine-tuning with
SARLANG-1M significantly enhances their performance in SAR image
interpretation, reaching performance comparable to human experts. The dataset
and code will be made publicly available at
https://github.com/Jimmyxichen/SARLANG-1M.
|
2504.03258 | Shuxiao Ding | Shuxiao Ding, Yutong Yang, Julian Wiederer, Markus Braun, Peizheng Li,
Juergen Gall, Bin Yang | TQD-Track: Temporal Query Denoising for 3D Multi-Object Tracking | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Query denoising has become a standard training strategy for DETR-based
detectors by addressing the slow convergence issue. Besides that, query
denoising can be used to increase the diversity of training samples for
modeling complex scenarios which is critical for Multi-Object Tracking (MOT),
showing its potential in MOT applications. Existing approaches integrate query
denoising within the tracking-by-attention paradigm. However, as the denoising
process only happens within a single frame, it cannot help the tracker learn
temporal information. In addition, the attention mask in query
denoising prevents information exchange between denoising and object queries,
limiting its potential in improving association using self-attention. To
address these issues, we propose TQD-Track, which introduces Temporal Query
Denoising (TQD) tailored for MOT, enabling denoising queries to carry temporal
information and instance-specific feature representation. We introduce diverse
noise types onto denoising queries that simulate real-world challenges in MOT.
We analyze our proposed TQD for different tracking paradigms, and find out the
paradigm with explicit learned data association module, e.g.
tracking-by-detection or alternating detection and association, benefit from
TQD by a larger margin. For these paradigms, we further design an association
mask in the association module to ensure the consistent interaction between
track and detection queries as during inference. Extensive experiments on the
nuScenes dataset demonstrate that our approach consistently enhances different
tracking methods by only changing the training process, especially the
paradigms with explicit association module.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 08:18:48 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Ding",
"Shuxiao",
""
],
[
"Yang",
"Yutong",
""
],
[
"Wiederer",
"Julian",
""
],
[
"Braun",
"Markus",
""
],
[
"Li",
"Peizheng",
""
],
[
"Gall",
"Juergen",
""
],
[
"Yang",
"Bin",
""
]
] | TITLE: TQD-Track: Temporal Query Denoising for 3D Multi-Object Tracking
ABSTRACT: Query denoising has become a standard training strategy for DETR-based
detectors by addressing the slow convergence issue. Besides that, query
denoising can be used to increase the diversity of training samples for
modeling complex scenarios which is critical for Multi-Object Tracking (MOT),
showing its potential in MOT applications. Existing approaches integrate query
denoising within the tracking-by-attention paradigm. However, as the denoising
process only happens within a single frame, it cannot help the tracker learn
temporal information. In addition, the attention mask in query
denoising prevents information exchange between denoising and object queries,
limiting its potential in improving association using self-attention. To
address these issues, we propose TQD-Track, which introduces Temporal Query
Denoising (TQD) tailored for MOT, enabling denoising queries to carry temporal
information and instance-specific feature representation. We introduce diverse
noise types onto denoising queries that simulate real-world challenges in MOT.
We analyze our proposed TQD for different tracking paradigms and find that
paradigms with an explicitly learned data association module, e.g.
tracking-by-detection or alternating detection and association, benefit from
TQD by a larger margin. For these paradigms, we further design an association
mask in the association module to ensure that the interaction between track
and detection queries is consistent with inference. Extensive experiments on the
nuScenes dataset demonstrate that our approach consistently enhances different
tracking methods by only changing the training process, especially the
paradigms with explicit association module.
|
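A hedged sketch of constructing denoising queries: perturb ground-truth track states with noise mimicking MOT failure modes. The box format, noise scale, and drop probability are illustrative assumptions:

import torch

def make_denoising_queries(gt_boxes, center_noise=0.1, drop_prob=0.1):
    # gt_boxes: (N, 4) normalized (cx, cy, w, h) ground-truth tracks
    noised = gt_boxes.clone()
    noised[:, :2] += center_noise * (torch.rand_like(noised[:, :2]) - 0.5)
    keep = torch.rand(len(noised)) > drop_prob   # simulate missed tracks
    return noised[keep]

queries = make_denoising_queries(torch.rand(12, 4))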
2504.03279 | Qichen Wang | Qichen Wang, Bingnan Chen, Binyang Dai, Ke Yi, Feifei Li, Liang Lin | Yannakakis+: Practical Acyclic Query Evaluation with Theoretical
Guarantees | Technical report for the SIGMOD 2025 paper | null | null | null | cs.DB | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Acyclic conjunctive queries form the backbone of most analytical workloads,
and have been extensively studied in the literature from both theoretical and
practical angles. However, there is still a large divide between theory and
practice. While the 40-year-old Yannakakis algorithm has strong theoretical
running time guarantees, it has not been adopted in real systems due to its
high hidden constant factor. In this paper, we strive to close this gap by
proposing Yannakakis+, an improved version of the Yannakakis algorithm, which
is more practically efficient while preserving its theoretical guarantees. Our
experiments demonstrate that Yannakakis+ consistently outperforms the original
Yannakakis algorithm by 2x to 5x across a wide range of queries and datasets.
Another nice feature of our new algorithm is that it generates a traditional
DAG query plan consisting of standard relational operators, allowing
Yannakakis+ to be easily plugged into any standard SQL engine. Our system
prototype currently supports four different SQL engines (DuckDB, PostgreSQL,
SparkSQL, and AnalyticDB from Alibaba Cloud), and our experiments show that
Yannakakis+ is able to deliver better performance than their native query plans
on 160 out of the 162 queries tested, with an average speedup of 2.41x and a
maximum speedup of 47,059x.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 09:04:58 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Wang",
"Qichen",
""
],
[
"Chen",
"Bingnan",
""
],
[
"Dai",
"Binyang",
""
],
[
"Yi",
"Ke",
""
],
[
"Li",
"Feifei",
""
],
[
"Lin",
"Liang",
""
]
] | TITLE: Yannakakis+: Practical Acyclic Query Evaluation with Theoretical
Guarantees
ABSTRACT: Acyclic conjunctive queries form the backbone of most analytical workloads,
and have been extensively studied in the literature from both theoretical and
practical angles. However, there is still a large divide between theory and
practice. While the 40-year-old Yannakakis algorithm has strong theoretical
running time guarantees, it has not been adopted in real systems due to its
high hidden constant factor. In this paper, we strive to close this gap by
proposing Yannakakis+, an improved version of the Yannakakis algorithm, which
is more practically efficient while preserving its theoretical guarantees. Our
experiments demonstrate that Yannakakis+ consistently outperforms the original
Yannakakis algorithm by 2x to 5x across a wide range of queries and datasets.
Another nice feature of our new algorithm is that it generates a traditional
DAG query plan consisting of standard relational operators, allowing
Yannakakis+ to be easily plugged into any standard SQL engine. Our system
prototype currently supports four different SQL engines (DuckDB, PostgreSQL,
SparkSQL, and AnalyticDB from Alibaba Cloud), and our experiments show that
Yannakakis+ is able to deliver better performance than their native query plans
on 160 out of the 162 queries tested, with an average speedup of 2.41x and a
maximum speedup of 47,059x.
|
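For context, the classic Yannakakis algorithm that the paper improves on reduces each relation with semi-joins before the final join, so no dangling tuples are ever materialized. A toy pandas sketch on an invented two-relation schema:

import pandas as pd

R = pd.DataFrame({"a": [1, 2, 3], "b": [10, 20, 30]})
S = pd.DataFrame({"b": [10, 30, 40], "c": [100, 300, 400]})

# Semi-join passes: drop tuples with no join partner on the other side
R_reduced = R[R["b"].isin(S["b"])]
S_reduced = S[S["b"].isin(R_reduced["b"])]

# The final join now touches no dangling tuples
print(R_reduced.merge(S_reduced, on="b"))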
2504.03295 | Bingqian Wang | Bingqian Wang and Quan Fang and Jiachen Sun and Xiaoxiao Ma | Stance-Driven Multimodal Controlled Statement Generation: New Dataset
and Task | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Formulating statements that support diverse or controversial stances on
specific topics is vital for platforms that enable user expression, reshape
political discourse, and drive social critique and information dissemination.
With the rise of Large Language Models (LLMs), controllable text generation
towards specific stances has become a promising research area with applications
in shaping public opinion and commercial marketing. However, current datasets
often focus solely on pure text, lacking multimodal content and effective
context, particularly for stance detection. In this paper, we
formally define and study the new problem of stance-driven controllable content
generation for tweets with text and images, where given a multimodal post (text
and image/video), a model generates a stance-controlled response. To this end,
we create the Multimodal Stance Generation Dataset (StanceGen2024), the first
resource explicitly designed for multimodal stance-controllable text generation
in political discourse. It includes posts and user comments from the 2024 U.S.
presidential election, featuring text, images, videos, and stance annotations
to explore how multimodal political content shapes stance expression.
Furthermore, we propose a Stance-Driven Multimodal Generation (SDMG) framework
that integrates weighted fusion of multimodal features and stance guidance to
improve semantic consistency and stance control. We release the dataset and
code (https://anonymous.4open.science/r/StanceGen-BE9D) for public use and
further research.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 09:20:19 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Wang",
"Bingqian",
""
],
[
"Fang",
"Quan",
""
],
[
"Sun",
"Jiachen",
""
],
[
"Ma",
"Xiaoxiao",
""
]
] | TITLE: Stance-Driven Multimodal Controlled Statement Generation: New Dataset
and Task
ABSTRACT: Formulating statements that support diverse or controversial stances on
specific topics is vital for platforms that enable user expression, reshape
political discourse, and drive social critique and information dissemination.
With the rise of Large Language Models (LLMs), controllable text generation
towards specific stances has become a promising research area with applications
in shaping public opinion and commercial marketing. However, current datasets
often focus solely on pure text, lacking multimodal content and effective
context, particularly for stance detection. In this paper, we
formally define and study the new problem of stance-driven controllable content
generation for tweets with text and images, where given a multimodal post (text
and image/video), a model generates a stance-controlled response. To this end,
we create the Multimodal Stance Generation Dataset (StanceGen2024), the first
resource explicitly designed for multimodal stance-controllable text generation
in political discourse. It includes posts and user comments from the 2024 U.S.
presidential election, featuring text, images, videos, and stance annotations
to explore how multimodal political content shapes stance expression.
Furthermore, we propose a Stance-Driven Multimodal Generation (SDMG) framework
that integrates weighted fusion of multimodal features and stance guidance to
improve semantic consistency and stance control. We release the dataset and
code (https://anonymous.4open.science/r/StanceGen-BE9D) for public use and
further research.
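As an illustration of the "weighted fusion of multimodal features" mentioned above, here is a minimal PyTorch sketch; the dimensions and the single learnable mixing weight are assumptions, since the abstract does not specify the SDMG fusion design.

```python
import torch
import torch.nn as nn

class WeightedFusion(nn.Module):
    """Minimal sketch of weighted fusion of text and image features.

    The real SDMG fusion is not specified in the abstract; the sizes and
    the learnable-scalar weighting used here are assumptions.
    """
    def __init__(self, text_dim=768, image_dim=512, fused_dim=768):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, fused_dim)
        self.image_proj = nn.Linear(image_dim, fused_dim)
        self.alpha = nn.Parameter(torch.tensor(0.5))  # learnable mix weight

    def forward(self, text_feat, image_feat):
        t = self.text_proj(text_feat)
        v = self.image_proj(image_feat)
        w = torch.sigmoid(self.alpha)  # keep the weight in (0, 1)
        return w * t + (1.0 - w) * v

fusion = WeightedFusion()
fused = fusion(torch.randn(4, 768), torch.randn(4, 512))
print(fused.shape)  # torch.Size([4, 768]), fed to the stance-guided decoder
```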
|
2504.03302 | Afshin Khadangi | Afshin Khadangi, Amir Sartipi, Igor Tchappi, Ramin Bahmani | Noise Augmented Fine Tuning for Mitigating Hallucinations in Large
Language Models | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) often produce inaccurate or misleading
content, known as hallucinations. To address this challenge, we introduce Noise-Augmented
Fine-Tuning (NoiseFiT), a novel framework that leverages adaptive noise
injection based on the signal-to-noise ratio (SNR) to enhance model robustness.
In particular, NoiseFiT selectively perturbs layers identified as either
high-SNR (more robust) or low-SNR (potentially under-regularized) using a
dynamically scaled Gaussian noise. We further propose a hybrid loss that
combines standard cross-entropy, soft cross-entropy, and consistency
regularization to ensure stable and accurate outputs under noisy training
conditions. Our theoretical analysis shows that adaptive noise injection is
both unbiased and variance-preserving, providing strong guarantees for
convergence in expectation. Empirical results on multiple test and benchmark
datasets demonstrate that NoiseFiT significantly reduces hallucination rates,
often improving or matching baseline performance in key tasks. These findings
highlight the promise of noise-driven strategies for achieving robust,
trustworthy language modeling without incurring prohibitive computational
overhead. Given the comprehensive and detailed nature of our experiments, we
have publicly released the fine-tuning logs, benchmark evaluation artifacts,
and source code online at W&B, Hugging Face, and GitHub, respectively, to
foster further research, accessibility and reproducibility.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 09:27:19 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Khadangi",
"Afshin",
""
],
[
"Sartipi",
"Amir",
""
],
[
"Tchappi",
"Igor",
""
],
[
"Bahmani",
"Ramin",
""
]
] | TITLE: Noise Augmented Fine Tuning for Mitigating Hallucinations in Large
Language Models
ABSTRACT: Large language models (LLMs) often produce inaccurate or misleading
content, known as hallucinations. To address this challenge, we introduce Noise-Augmented
Fine-Tuning (NoiseFiT), a novel framework that leverages adaptive noise
injection based on the signal-to-noise ratio (SNR) to enhance model robustness.
In particular, NoiseFiT selectively perturbs layers identified as either
high-SNR (more robust) or low-SNR (potentially under-regularized) using a
dynamically scaled Gaussian noise. We further propose a hybrid loss that
combines standard cross-entropy, soft cross-entropy, and consistency
regularization to ensure stable and accurate outputs under noisy training
conditions. Our theoretical analysis shows that adaptive noise injection is
both unbiased and variance-preserving, providing strong guarantees for
convergence in expectation. Empirical results on multiple test and benchmark
datasets demonstrate that NoiseFiT significantly reduces hallucination rates,
often improving or matching baseline performance in key tasks. These findings
highlight the promise of noise-driven strategies for achieving robust,
trustworthy language modeling without incurring prohibitive computational
overhead. Given the comprehensive and detailed nature of our experiments, we
have publicly released the fine-tuning logs, benchmark evaluation artifacts,
and source code online at W&B, Hugging Face, and GitHub, respectively, to
foster further research, accessibility and reproducibility.
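The following sketch illustrates SNR-guided Gaussian noise injection in the spirit of the abstract. The SNR proxy, threshold, and scaling rule are illustrative assumptions, not the paper's exact procedure.

```python
import torch

def snr(weight, eps=1e-8):
    """Crude signal-to-noise proxy: |mean| over std of a weight tensor."""
    return weight.mean().abs() / (weight.std() + eps)

def inject_adaptive_noise(model, base_scale=0.01, snr_threshold=0.05):
    """Perturb selected layers with Gaussian noise scaled by their std.

    Selecting layers by an SNR threshold and scaling noise by the layer's
    own std are assumptions made for illustration.
    """
    with torch.no_grad():
        for name, param in model.named_parameters():
            if param.dim() < 2:                  # skip biases / norm params
                continue
            if snr(param) < snr_threshold:       # "low-SNR" layers get noise
                noise = torch.randn_like(param) * param.std() * base_scale
                param.add_(noise)

model = torch.nn.Sequential(torch.nn.Linear(16, 16), torch.nn.Linear(16, 4))
inject_adaptive_noise(model)  # would be called during fine-tuning steps
```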
|
2504.03322 | Wan Tian | Wan Tian, Zhongfeng Qin | Block Toeplitz Sparse Precision Matrix Estimation for Large-Scale
Interval-Valued Time Series Forecasting | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modeling and forecasting interval-valued time series (ITS) have attracted
considerable attention due to their growing presence in various contexts. To
the best of our knowledge, there have been no efforts to model large-scale ITS.
In this paper, we propose a feature extraction procedure for large-scale ITS,
which involves key steps such as auto-segmentation and clustering, and feature
transfer learning. This procedure can be seamlessly integrated with any
suitable prediction models for forecasting purposes. Specifically, we transform
the automatic segmentation and clustering of ITS into the estimation of
Toeplitz sparse precision matrices and an assignment set. The
majorization-minimization algorithm is employed to convert this highly
non-convex optimization problem into two subproblems. We derive an efficient
dynamic programming method and an alternating direction method to solve these
two subproblems alternately and establish their convergence properties. By
employing the Joint Recurrence Plot (JRP) to convert each subsequence into an
image and assigning a class label to each cluster, an image dataset is
constructed. Then, an appropriate neural network is chosen, trained on this
image dataset, and used to extract features for the subsequent forecasting
step. Real data applications
demonstrate that the proposed method can effectively obtain invariant
representations of the raw data and enhance forecasting performance.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 09:57:05 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Tian",
"Wan",
""
],
[
"Qin",
"Zhongfeng",
""
]
] | TITLE: Block Toeplitz Sparse Precision Matrix Estimation for Large-Scale
Interval-Valued Time Series Forecasting
ABSTRACT: Modeling and forecasting interval-valued time series (ITS) have attracted
considerable attention due to their growing presence in various contexts. To
the best of our knowledge, there have been no efforts to model large-scale ITS.
In this paper, we propose a feature extraction procedure for large-scale ITS,
which involves key steps such as auto-segmentation and clustering, and feature
transfer learning. This procedure can be seamlessly integrated with any
suitable prediction models for forecasting purposes. Specifically, we transform
the automatic segmentation and clustering of ITS into the estimation of
Toeplitz sparse precision matrices and an assignment set. The
majorization-minimization algorithm is employed to convert this highly
non-convex optimization problem into two subproblems. We derive an efficient
dynamic programming method and an alternating direction method to solve these
two subproblems alternately and establish their convergence properties. By
employing the Joint Recurrence Plot (JRP) to convert each subsequence into an
image and assigning a class label to each cluster, an image dataset is
constructed. Then, an appropriate neural network is chosen, trained on this
image dataset, and used to extract features for the subsequent forecasting
step. Real data applications
demonstrate that the proposed method can effectively obtain invariant
representations of the raw data and enhance forecasting performance.
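The Joint Recurrence Plot step can be illustrated directly: each dimension of a subsequence yields a recurrence matrix, and the JRP is their elementwise product. The threshold and the toy interval-valued series below are assumptions.

```python
import numpy as np

def joint_recurrence_plot(series, eps=0.1):
    """Joint Recurrence Plot of a multivariate series of shape (T, D).

    Each dimension contributes a recurrence matrix R_ij = 1 if
    |x_i - x_j| < eps; the JRP is their elementwise product. The
    threshold choice is an assumption for illustration.
    """
    T, D = series.shape
    jrp = np.ones((T, T), dtype=np.uint8)
    for d in range(D):
        dist = np.abs(series[:, d:d + 1] - series[:, d][None, :])
        jrp &= (dist < eps).astype(np.uint8)
    return jrp

# Toy interval-valued subsequence: lower and upper bounds as two channels.
t = np.linspace(0, 4 * np.pi, 100)
its = np.stack([np.sin(t), np.sin(t) + 0.2], axis=1)
img = joint_recurrence_plot(its, eps=0.15)
print(img.shape)  # a (100, 100) binary image, fed to a CNN downstream
```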
|
2504.03325 | Omar Amri | Omar Amri, Carla Seatzu, Alessandro Giua, Dimitri Lefebvre | Probabilistic State Estimation of Timed Probabilistic Discrete Event
Systems via Artificial Neural Networks [Draft Version] | null | null | null | null | eess.SY cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper is about the state estimation of timed probabilistic discrete
event systems. The main contribution is to propose general procedures for
developing state estimation approaches based on artificial neural networks. It
is assumed that no formal model of the system exists but a data set is
available, which contains the history of the timed behaviour of the system.
This dataset will be exploited to develop a neural network model that uses both
logical and temporal information gathered during the functioning of the system
as inputs and provides the state probability vector as output. Two main
approaches are successively proposed: (i) state estimation of timed
probabilistic discrete event systems over observations: in this case the state
estimate is reconstructed at the occurrence of each new observation; (ii) state
estimation of timed probabilistic discrete event systems over time: in this
case the state estimate is reconstructed at each clock time increment. For each
approach, the paper outlines the process of data preprocessing, model building
and implementation. This paper not only proposes groundbreaking approaches but
also opens the door to further exploitation of artificial neural networks for
the benefit of discrete event systems.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 10:09:07 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Amri",
"Omar",
""
],
[
"Seatzu",
"Carla",
""
],
[
"Giua",
"Alessandro",
""
],
[
"Lefebvre",
"Dimitri",
""
]
] | TITLE: Probabilistic State Estimation of Timed Probabilistic Discrete Event
Systems via Artificial Neural Networks [Draft Version]
ABSTRACT: This paper is about the state estimation of timed probabilistic discrete
event systems. The main contribution is to propose general procedures for
developing state estimation approaches based on artificial neural networks. It
is assumed that no formal model of the system exists but a data set is
available, which contains the history of the timed behaviour of the system.
This dataset will be exploited to develop a neural network model that uses both
logical and temporal information gathered during the functioning of the system
as inputs and provides the state probability vector as output. Two main
approaches are successively proposed: (i) state estimation of timed
probabilistic discrete event systems over observations: in this case the state
estimate is reconstructed at the occurrence of each new observation; (ii) state
estimation of timed probabilistic discrete event systems over time: in this
case the state estimate is reconstructed at each clock time increment. For each
approach, the paper outlines the process of data preprocessing, model building
and implementation. This paper not only proposes groundbreaking approaches but
also opens the door to further exploitation of artificial neural networks for
the benefit of discrete event systems.
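A minimal sketch of the kind of network the abstract describes, mapping an observed event plus elapsed time to a state probability vector; the sizes and architecture are illustrative assumptions, not the paper's model.

```python
import torch
import torch.nn as nn

class StateEstimator(nn.Module):
    """Sketch: map (one-hot event, elapsed time) to a state probability
    vector. Event/state counts and the MLP are illustrative assumptions.
    """
    def __init__(self, n_events=5, n_states=8, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_events + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, n_states),
        )

    def forward(self, event_onehot, elapsed_time):
        x = torch.cat([event_onehot, elapsed_time], dim=-1)
        return torch.softmax(self.net(x), dim=-1)  # sums to 1 per sample

est = StateEstimator()
event = torch.zeros(1, 5)
event[0, 2] = 1.0                               # observed event e2
probs = est(event, torch.tensor([[1.7]]))       # 1.7 time units elapsed
print(probs.sum().item())  # ~1.0: a valid state probability vector
```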
|
2504.03327 | Makoto Takamoto | Makoto Takamoto, Daniel O\~noro-Rubio, Wiem Ben Rim, Takashi Maruyama,
and Bhushan Kotnis | Optimal Embedding Guided Negative Sample Generation for Knowledge Graph
Link Prediction | 11 pages, 6 figures, 15 Tables, accepted and to be published in TMLR | null | null | null | cs.LG cs.CL cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Knowledge graph embedding (KGE) models encode the structural information of
knowledge graphs to predict new links. Effective training of these models
requires distinguishing between positive and negative samples with high
precision. Although prior research has shown that improving the quality of
negative samples can significantly enhance model accuracy, identifying
high-quality negative samples remains a challenging problem. This paper
theoretically investigates the condition under which negative samples lead to
optimal KG embedding and identifies a sufficient condition for an effective
negative sample distribution. Based on this theoretical foundation, we propose
\textbf{E}mbedding \textbf{MU}tation (\textsc{EMU}), a novel framework that
\emph{generates} negative samples satisfying this condition, in contrast to
conventional methods that focus on \emph{identifying} challenging negative
samples within the training data. Importantly, the simplicity of \textsc{EMU}
ensures seamless integration with existing KGE models and negative sampling
methods. To evaluate its efficacy, we conducted comprehensive experiments
across multiple datasets. The results consistently demonstrate significant
improvements in link prediction performance across various KGE models and
negative sampling methods. Notably, \textsc{EMU} enables performance
improvements comparable to those achieved by models with embedding dimension
five times larger. An implementation of the method and experiments are
available at https://github.com/nec-research/EMU-KG.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 10:10:18 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Takamoto",
"Makoto",
""
],
[
"Oñoro-Rubio",
"Daniel",
""
],
[
"Rim",
"Wiem Ben",
""
],
[
"Maruyama",
"Takashi",
""
],
[
"Kotnis",
"Bhushan",
""
]
] | TITLE: Optimal Embedding Guided Negative Sample Generation for Knowledge Graph
Link Prediction
ABSTRACT: Knowledge graph embedding (KGE) models encode the structural information of
knowledge graphs to predict new links. Effective training of these models
requires distinguishing between positive and negative samples with high
precision. Although prior research has shown that improving the quality of
negative samples can significantly enhance model accuracy, identifying
high-quality negative samples remains a challenging problem. This paper
theoretically investigates the condition under which negative samples lead to
optimal KG embedding and identifies a sufficient condition for an effective
negative sample distribution. Based on this theoretical foundation, we propose
\textbf{E}mbedding \textbf{MU}tation (\textsc{EMU}), a novel framework that
\emph{generates} negative samples satisfying this condition, in contrast to
conventional methods that focus on \emph{identifying} challenging negative
samples within the training data. Importantly, the simplicity of \textsc{EMU}
ensures seamless integration with existing KGE models and negative sampling
methods. To evaluate its efficacy, we conducted comprehensive experiments
across multiple datasets. The results consistently demonstrate significant
improvements in link prediction performance across various KGE models and
negative sampling methods. Notably, \textsc{EMU} enables performance
improvements comparable to those achieved by models with embedding dimension
five times larger. An implementation of the method and experiments are
available at https://github.com/nec-research/EMU-KG.
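The abstract does not spell out the mutation operator, so the sketch below shows one plausible reading of embedding mutation: swapping a random subset of dimensions of the positive tail embedding with dimensions drawn from other entities. Treat it as a hypothetical illustration, not the paper's derived distribution.

```python
import torch

def emu_style_negatives(tail_emb, entity_table, n_neg=8, mutation_rate=0.5):
    """Hypothetical embedding-mutation negatives for a KGE model.

    For each negative, a random subset of embedding dimensions of the
    positive tail is swapped with dimensions from randomly drawn entity
    embeddings. This dimension-swap mutation is an assumption; the paper
    derives its own sufficient condition on the negative distribution.
    """
    dim = tail_emb.shape[-1]
    donors = entity_table[torch.randint(len(entity_table), (n_neg,))]
    mask = (torch.rand(n_neg, dim) < mutation_rate).float()
    return mask * donors + (1.0 - mask) * tail_emb  # broadcast over n_neg

entities = torch.randn(1000, 64)   # hypothetical entity embedding table
positive_tail = entities[42]
negs = emu_style_negatives(positive_tail, entities)
print(negs.shape)  # torch.Size([8, 64]) negatives for the ranking loss
```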
|
2504.03329 | Francesca Ronchini | Francesca Ronchini, Ho-Hsiang Wu, Wei-Cheng Lin, Fabio Antonacci | Mind the Prompt: Prompting Strategies in Audio Generations for Improving
Sound Classification | Accepted at Generative Data Augmentation for Real-World Signal
Processing Applications Workshop | null | null | null | eess.AS cs.AI cs.SD eess.SP | http://creativecommons.org/licenses/by/4.0/ | This paper investigates the design of effective prompt strategies for
generating realistic datasets using Text-To-Audio (TTA) models. We also analyze
different techniques for efficiently combining these datasets to enhance their
utility in sound classification tasks. By evaluating two sound classification
datasets with two TTA models, we apply a range of prompt strategies. Our
findings reveal that task-specific prompt strategies significantly outperform
basic prompt approaches in data generation. Furthermore, merging datasets
generated using different TTA models proves to enhance classification
performance more effectively than merely increasing the training dataset size.
Overall, our results underscore the advantages of these methods as effective
data augmentation techniques using synthetic data.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 10:14:11 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Ronchini",
"Francesca",
""
],
[
"Wu",
"Ho-Hsiang",
""
],
[
"Lin",
"Wei-Cheng",
""
],
[
"Antonacci",
"Fabio",
""
]
] | TITLE: Mind the Prompt: Prompting Strategies in Audio Generations for Improving
Sound Classification
ABSTRACT: This paper investigates the design of effective prompt strategies for
generating realistic datasets using Text-To-Audio (TTA) models. We also analyze
different techniques for efficiently combining these datasets to enhance their
utility in sound classification tasks. By evaluating two sound classification
datasets with two TTA models, we apply a range of prompt strategies. Our
findings reveal that task-specific prompt strategies significantly outperform
basic prompt approaches in data generation. Furthermore, merging datasets
generated using different TTA models proves to enhance classification
performance more effectively than merely increasing the training dataset size.
Overall, our results underscore the advantages of these methods as effective
data augmentation techniques using synthetic data.
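A small sketch of the contrast the abstract draws between basic and task-specific prompt strategies; the templates and class labels below are invented for illustration only.

```python
import random

# Contrast between a basic prompt and a task-specific prompt strategy
# for a text-to-audio model. Templates are illustrative assumptions,
# not the paper's exact prompts.

def basic_prompt(label):
    return f"the sound of a {label}"

def task_specific_prompt(label):
    contexts = ["in a quiet room", "recorded outdoors", "in the distance",
                "with light background noise"]
    return f"a high quality recording of a {label} {random.choice(contexts)}"

for label in ["dog barking", "car horn"]:
    print(basic_prompt(label))
    print(task_specific_prompt(label))
# Each prompt would be sent to a TTA model to synthesize one training clip.
```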
|
2504.03334 | Christina Halmich | Christina Halmich, Lucas H\"oschler, Christoph Schranz, Christian
Borgelt | Data Augmentation of Time-Series Data in Human Movement Biomechanics: A
Scoping Review | Preprint under review at PLOS ONE | null | null | null | cs.LG cs.HC | http://creativecommons.org/licenses/by/4.0/ | The integration of machine learning and deep learning has transformed data
analytics in biomechanics, enabled by extensive wearable sensor data. However,
the field faces challenges such as limited large-scale datasets and high data
acquisition costs, which hinder the development of robust algorithms. Data
augmentation techniques show promise in addressing these issues, but their
application to biomechanical time-series data requires comprehensive
evaluation.
This scoping review investigates data augmentation methods for time-series
data in the biomechanics domain. It analyzes current approaches for augmenting
and generating time-series datasets, evaluates their effectiveness, and offers
recommendations for applying these techniques in biomechanics.
Four databases, PubMed, IEEE Xplore, Scopus, and Web of Science, were
searched for studies published between 2013 and 2024. Following PRISMA-ScR
guidelines, a two-stage screening identified 21 relevant publications.
Results show that there is no universally preferred method for augmenting
biomechanical time-series data; instead, methods vary based on study
objectives. A major issue identified is the absence of soft tissue artifacts in
synthetic data, leading to discrepancies referred to as the synthetic gap.
Moreover, many studies lack proper evaluation of augmentation methods, making
it difficult to assess their effects on model performance and data quality.
This review highlights the critical role of data augmentation in addressing
limited dataset availability and improving model generalization in
biomechanics. Tailoring augmentation strategies to the characteristics of
biomechanical data is essential for advancing predictive modeling. A better
understanding of how different augmentation methods impact data quality and
downstream tasks will be key to developing more effective and realistic
techniques.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 10:31:44 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Halmich",
"Christina",
""
],
[
"Höschler",
"Lucas",
""
],
[
"Schranz",
"Christoph",
""
],
[
"Borgelt",
"Christian",
""
]
] | TITLE: Data Augmentation of Time-Series Data in Human Movement Biomechanics: A
Scoping Review
ABSTRACT: The integration of machine learning and deep learning has transformed data
analytics in biomechanics, enabled by extensive wearable sensor data. However,
the field faces challenges such as limited large-scale datasets and high data
acquisition costs, which hinder the development of robust algorithms. Data
augmentation techniques show promise in addressing these issues, but their
application to biomechanical time-series data requires comprehensive
evaluation.
This scoping review investigates data augmentation methods for time-series
data in the biomechanics domain. It analyzes current approaches for augmenting
and generating time-series datasets, evaluates their effectiveness, and offers
recommendations for applying these techniques in biomechanics.
Four databases, PubMed, IEEE Xplore, Scopus, and Web of Science, were
searched for studies published between 2013 and 2024. Following PRISMA-ScR
guidelines, a two-stage screening identified 21 relevant publications.
Results show that there is no universally preferred method for augmenting
biomechanical time-series data; instead, methods vary based on study
objectives. A major issue identified is the absence of soft tissue artifacts in
synthetic data, leading to discrepancies referred to as the synthetic gap.
Moreover, many studies lack proper evaluation of augmentation methods, making
it difficult to assess their effects on model performance and data quality.
This review highlights the critical role of data augmentation in addressing
limited dataset availability and improving model generalization in
biomechanics. Tailoring augmentation strategies to the characteristics of
biomechanical data is essential for advancing predictive modeling. A better
understanding of how different augmentation methods impact data quality and
downstream tasks will be key to developing more effective and realistic
techniques.
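For readers new to the area, the simplest augmentations the review covers look like the following; the noise and scale parameters are illustrative assumptions, and the window is a synthetic stand-in for wearable-sensor data.

```python
import numpy as np

def jitter(x, sigma=0.03):
    """Additive Gaussian noise, the simplest time-series augmentation."""
    return x + np.random.normal(0.0, sigma, x.shape)

def scale(x, sigma=0.1):
    """Per-channel magnitude scaling of a (T, C) signal."""
    factors = np.random.normal(1.0, sigma, (1, x.shape[1]))
    return x * factors

window = np.random.randn(200, 3)     # one (T, C) sensor window
augmented = scale(jitter(window))
print(augmented.shape)               # (200, 3), one extra training sample
```

Note that such transformations perturb the signal but not the underlying soft tissue dynamics, which is one source of the "synthetic gap" the review identifies.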
|
2504.03342 | Keke Tang | Guide Yang, Chao Hou, Weilong Peng, Xiang Fang, Yongwei Nie, Peican
Zhu, and Keke Tang | EOOD: Entropy-based Out-of-distribution Detection | IJCNN 2025 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks (DNNs) often exhibit overconfidence when encountering
out-of-distribution (OOD) samples, posing significant challenges for
deployment. Since DNNs are trained on in-distribution (ID) datasets, the
information flow of ID samples through DNNs inevitably differs from that of OOD
samples. In this paper, we propose an Entropy-based Out-Of-distribution
Detection (EOOD) framework. EOOD first identifies the specific block where the
information flow differences between ID and OOD samples are more pronounced,
using both ID and pseudo-OOD samples. It then calculates the conditional
entropy on the selected block as the OOD confidence score. Comprehensive
experiments conducted across various ID and OOD settings demonstrate the
effectiveness of EOOD in OOD detection and its superiority over
state-of-the-art methods.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 10:57:03 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Yang",
"Guide",
""
],
[
"Hou",
"Chao",
""
],
[
"Peng",
"Weilong",
""
],
[
"Fang",
"Xiang",
""
],
[
"Nie",
"Yongwei",
""
],
[
"Zhu",
"Peican",
""
],
[
"Tang",
"Keke",
""
]
] | TITLE: EOOD: Entropy-based Out-of-distribution Detection
ABSTRACT: Deep neural networks (DNNs) often exhibit overconfidence when encountering
out-of-distribution (OOD) samples, posing significant challenges for
deployment. Since DNNs are trained on in-distribution (ID) datasets, the
information flow of ID samples through DNNs inevitably differs from that of OOD
samples. In this paper, we propose an Entropy-based Out-Of-distribution
Detection (EOOD) framework. EOOD first identifies the specific block where the
information flow differences between ID and OOD samples are more pronounced,
using both ID and pseudo-OOD samples. It then calculates the conditional
entropy on the selected block as the OOD confidence score. Comprehensive
experiments conducted across various ID and OOD settings demonstrate the
effectiveness of EOOD in OOD detection and its superiority over
state-of-the-art methods.
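A simplified sketch of an entropy-based OOD score computed from a selected block's features; normalizing the pooled activations with a softmax is an assumption made here for illustration, since the paper computes a conditional entropy on the chosen block.

```python
import torch

def entropy_ood_score(block_features):
    """Entropy of softmax-normalized block activations, per sample.

    Higher entropy is treated as more OOD-like. This is a simplification
    of the paper's conditional-entropy score, shown for illustration.
    """
    p = torch.softmax(block_features, dim=-1)
    return -(p * torch.log(p + 1e-12)).sum(dim=-1)

feats = torch.randn(4, 256)       # pooled features from the selected block
print(entropy_ood_score(feats))   # one confidence score per input sample
```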
|
2504.03347 | Nathan Clarke | Mohamad Hachem, Adam Lanfranchi, Nathan Clarke, Joakim Kavrestad | Optimizing Password Cracking for Digital Investigations | null | null | null | null | cs.CR | http://creativecommons.org/licenses/by/4.0/ | Efficient password cracking is a critical aspect of digital forensics,
enabling investigators to decrypt protected content during criminal
investigations. Traditional password cracking methods, including brute-force,
dictionary, and rule-based attacks, face challenges in balancing efficiency
with increasing computational complexity. This study explores rule-based
optimisation strategies to enhance the effectiveness of password cracking while
minimising resource consumption. By analysing publicly available password
datasets, we propose an optimised rule set that reduces computational
iterations by approximately 40%, significantly improving the speed of password
recovery. Additionally, the impact of national password recommendations was
examined, specifically the UK National Cyber Security Centre's three-word
password guideline, on password security and forensic recovery. Through
user-generated password surveys, we evaluate the crackability of three-word
passwords using dictionaries of varying common-word proportions. Results
indicate that while three-word passwords provide improved memorability and
usability, they remain vulnerable when common word combinations are used, with
up to 77.5% of passwords cracked using a 30% common-word dictionary subset. The
study underscores the importance of dynamic password cracking strategies that
account for evolving user behaviours and policy-driven password structures.
The findings contribute to both forensic efficiency and cyber security
awareness, highlighting the dual impact of password policies on security and
investigative capabilities. Future work will focus on refining rule-based
cracking techniques and expanding research on password composition trends.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 11:03:39 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Hachem",
"Mohamad",
""
],
[
"Lanfranchi",
"Adam",
""
],
[
"Clarke",
"Nathan",
""
],
[
"Kavrestad",
"Joakim",
""
]
] | TITLE: Optimizing Password Cracking for Digital Investigations
ABSTRACT: Efficient password cracking is a critical aspect of digital forensics,
enabling investigators to decrypt protected content during criminal
investigations. Traditional password cracking methods, including brute-force,
dictionary, and rule-based attacks, face challenges in balancing efficiency
with increasing computational complexity. This study explores rule-based
optimisation strategies to enhance the effectiveness of password cracking while
minimising resource consumption. By analysing publicly available password
datasets, we propose an optimised rule set that reduces computational
iterations by approximately 40%, significantly improving the speed of password
recovery. Additionally, the impact of national password recommendations was
examined, specifically the UK National Cyber Security Centre's three-word
password guideline, on password security and forensic recovery. Through
user-generated password surveys, we evaluate the crackability of three-word
passwords using dictionaries of varying common-word proportions. Results
indicate that while three-word passwords provide improved memorability and
usability, they remain vulnerable when common word combinations are used, with
up to 77.5% of passwords cracked using a 30% common-word dictionary subset. The
study underscores the importance of dynamic password cracking strategies that
account for evolving user behaviours and policy-driven password structures.
The findings contribute to both forensic efficiency and cyber security
awareness, highlighting the dual impact of password policies on security and
investigative capabilities. Future work will focus on refining rule-based
cracking techniques and expanding research on password composition trends.
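The vulnerability of three-word passwords comes down to simple counting, as the back-of-the-envelope sketch below shows; the vocabulary sizes are assumptions chosen only to illustrate the arithmetic.

```python
# Search-space arithmetic for three-word passwords. Word counts are
# assumptions chosen only to illustrate the calculation.
vocab = 40_000               # words a user might plausibly draw from
common = int(0.30 * vocab)   # a 30% "common word" dictionary subset

total = vocab ** 3           # ordered three-word combinations
all_common = common ** 3     # passwords built entirely from common words

print(f"total search space: {total:.3e}")
print(f"all-common subset:  {all_common:.3e}")
print(f"fraction of space:  {all_common / total:.3f}")  # 0.027

# Even though the common-word subset is under 3% of the space, it cracks
# far more real passwords (77.5% in the study) because users do not pick
# words uniformly at random.
```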
|
2504.03349 | Denis Coquenet | Denis Coquenet | Meta-DAN: towards an efficient prediction strategy for page-level
handwritten text recognition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in text recognition led to a paradigm shift for page-level
recognition, from multi-step segmentation-based approaches to end-to-end
attention-based ones. However, the na\"ive character-level autoregressive
decoding process results in long prediction times: it requires several seconds
to process a single page image on a modern GPU. We propose the Meta Document
Attention Network (Meta-DAN) as a novel decoding strategy to reduce the
prediction time while enabling a better context modeling. It relies on two main
components: windowed queries, to process several transformer queries
altogether, enlarging the context modeling with the near future; and multi-token
predictions, whose goal is to predict several tokens per query instead of only
the next one. We evaluate the proposed approach on 10 full-page handwritten
datasets and demonstrate state-of-the-art results on average in terms of
character error rate. Source code and weights of trained models are available
at https://github.com/FactoDeepLearning/meta_dan.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 11:06:09 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Coquenet",
"Denis",
""
]
] | TITLE: Meta-DAN: towards an efficient prediction strategy for page-level
handwritten text recognition
ABSTRACT: Recent advances in text recognition led to a paradigm shift for page-level
recognition, from multi-step segmentation-based approaches to end-to-end
attention-based ones. However, the na\"ive character-level autoregressive
decoding process results in long prediction times: it requires several seconds
to process a single page image on a modern GPU. We propose the Meta Document
Attention Network (Meta-DAN) as a novel decoding strategy to reduce the
prediction time while enabling a better context modeling. It relies on two main
components: windowed queries, to process several transformer queries
altogether, enlarging the context modeling with the near future; and multi-token
predictions, whose goal is to predict several tokens per query instead of only
the next one. We evaluate the proposed approach on 10 full-page handwritten
datasets and demonstrate state-of-the-art results on average in terms of
character error rate. Source code and weights of trained models are available
at https://github.com/FactoDeepLearning/meta_dan.
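The multi-token prediction idea can be sketched in a few lines: each decoder query emits logits for several future tokens, so a greedy pass decodes several tokens per query instead of one. The shapes below are invented; windowing, stopping criteria, and the real architecture are omitted.

```python
import torch

def multi_token_decode(logits):
    """Greedy multi-token decoding sketch.

    logits has shape (num_queries, tokens_per_query, vocab): each
    transformer query predicts several future tokens at once. This is an
    illustration of the idea, not the Meta-DAN implementation.
    """
    tokens = logits.argmax(dim=-1)   # greedy pick per position
    return tokens.reshape(-1)        # flatten queries into one sequence

queries, k, vocab = 4, 3, 100        # 4 queries x 3 tokens per query
fake_logits = torch.randn(queries, k, vocab)
sequence = multi_token_decode(fake_logits)
print(sequence.shape)  # 12 tokens decoded from only 4 decoder passes
```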
|
2504.03352 | Kaustubh Shivshankar Shejole Mr. | Kaustubh Shivshankar Shejole, Pushpak Bhattacharyya | Detecting Stereotypes and Anti-stereotypes the Correct Way Using Social
Psychological Underpinnings | null | null | null | null | cs.CL cs.CY cs.HC | http://creativecommons.org/licenses/by/4.0/ | Stereotypes are known to be highly pernicious, making their detection
critically important. However, current research predominantly focuses on
detecting and evaluating stereotypical biases in LLMs, leaving the study of
stereotypes in its early stages. Many studies have failed to clearly
distinguish between stereotypes and stereotypical biases, which has
significantly slowed progress in advancing research in this area. Stereotype
and anti-stereotype detection is a problem that requires knowledge of society;
hence, it is one of the most difficult areas in Responsible AI. This work
investigates this task, where we propose a four-tuple definition and provide
precise terminology distinguishing stereotype, anti-stereotype, stereotypical
bias, and bias, offering valuable insights into their various aspects. In this
paper, we propose StereoDetect, a high-quality benchmarking dataset curated for
this task by optimally utilizing current datasets such as StereoSet and
WinoQueer, involving a manual verification process and the transfer of semantic
information. We demonstrate that language models for reasoning with fewer than
10B parameters often get confused when detecting anti-stereotypes. We also
demonstrate the critical importance of well-curated datasets by comparing our
model with other current models for stereotype detection. The dataset and code
are available at https://github.com/KaustubhShejole/StereoDetect.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 11:14:38 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Shejole",
"Kaustubh Shivshankar",
""
],
[
"Bhattacharyya",
"Pushpak",
""
]
] | TITLE: Detecting Stereotypes and Anti-stereotypes the Correct Way Using Social
Psychological Underpinnings
ABSTRACT: Stereotypes are known to be highly pernicious, making their detection
critically important. However, current research predominantly focuses on
detecting and evaluating stereotypical biases in LLMs, leaving the study of
stereotypes in its early stages. Many studies have failed to clearly
distinguish between stereotypes and stereotypical biases, which has
significantly slowed progress in advancing research in this area. Stereotype
and anti-stereotype detection is a problem that requires knowledge of society;
hence, it is one of the most difficult areas in Responsible AI. This work
investigates this task, where we propose a four-tuple definition and provide
precise terminology distinguishing stereotype, anti-stereotype, stereotypical
bias, and bias, offering valuable insights into their various aspects. In this
paper, we propose StereoDetect, a high-quality benchmarking dataset curated for
this task by optimally utilizing current datasets such as StereoSet and
WinoQueer, involving a manual verification process and the transfer of semantic
information. We demonstrate that language models for reasoning with fewer than
10B parameters often get confused when detecting anti-stereotypes. We also
demonstrate the critical importance of well-curated datasets by comparing our
model with other current models for stereotype detection. The dataset and code
are available at https://github.com/KaustubhShejole/StereoDetect.
|
2504.03360 | Erik Johannes Husom | Erik Johannes Husom, Arda Goknil, Merve Astekin, Lwin Khin Shar, Andre
K{\aa}sen, Sagar Sen, Benedikt Andreas Mithassel, Ahmet Soylu | Sustainable LLM Inference for Edge AI: Evaluating Quantized LLMs for
Energy Efficiency, Output Accuracy, and Inference Latency | 30 pages, 14 figures | null | null | null | cs.CY cs.AI cs.CL cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Deploying Large Language Models (LLMs) on edge devices presents significant
challenges due to computational constraints, memory limitations, inference
speed, and energy consumption. Model quantization has emerged as a key
technique to enable efficient LLM inference by reducing model size and
computational overhead. In this study, we conduct a comprehensive analysis of
28 quantized LLMs from the Ollama library, which applies by default
Post-Training Quantization (PTQ) and weight-only quantization techniques,
deployed on an edge device (Raspberry Pi 4 with 4GB RAM). We evaluate energy
efficiency, inference performance, and output accuracy across multiple
quantization levels and task types. Models are benchmarked on five standardized
datasets (CommonsenseQA, BIG-Bench Hard, TruthfulQA, GSM8K, and HumanEval), and
we employ a high-resolution, hardware-based energy measurement tool to capture
real-world power consumption. Our findings reveal the trade-offs between energy
efficiency, inference speed, and accuracy in different quantization settings,
highlighting configurations that optimize LLM deployment for
resource-constrained environments. By integrating hardware-level energy
profiling with LLM benchmarking, this study provides actionable insights for
sustainable AI, bridging a critical gap in existing research on energy-aware
LLM deployment.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 11:29:30 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Husom",
"Erik Johannes",
""
],
[
"Goknil",
"Arda",
""
],
[
"Astekin",
"Merve",
""
],
[
"Shar",
"Lwin Khin",
""
],
[
"Kåsen",
"Andre",
""
],
[
"Sen",
"Sagar",
""
],
[
"Mithassel",
"Benedikt Andreas",
""
],
[
"Soylu",
"Ahmet",
""
]
] | TITLE: Sustainable LLM Inference for Edge AI: Evaluating Quantized LLMs for
Energy Efficiency, Output Accuracy, and Inference Latency
ABSTRACT: Deploying Large Language Models (LLMs) on edge devices presents significant
challenges due to computational constraints, memory limitations, inference
speed, and energy consumption. Model quantization has emerged as a key
technique to enable efficient LLM inference by reducing model size and
computational overhead. In this study, we conduct a comprehensive analysis of
28 quantized LLMs from the Ollama library, which applies by default
Post-Training Quantization (PTQ) and weight-only quantization techniques,
deployed on an edge device (Raspberry Pi 4 with 4GB RAM). We evaluate energy
efficiency, inference performance, and output accuracy across multiple
quantization levels and task types. Models are benchmarked on five standardized
datasets (CommonsenseQA, BIG-Bench Hard, TruthfulQA, GSM8K, and HumanEval), and
we employ a high-resolution, hardware-based energy measurement tool to capture
real-world power consumption. Our findings reveal the trade-offs between energy
efficiency, inference speed, and accuracy in different quantization settings,
highlighting configurations that optimize LLM deployment for
resource-constrained environments. By integrating hardware-level energy
profiling with LLM benchmarking, this study provides actionable insights for
sustainable AI, bridging a critical gap in existing research on energy-aware
LLM deployment.
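The study's key metrics reduce to simple energy accounting, sketched below with made-up numbers in place of the measured ones.

```python
# Minimal energy-accounting arithmetic of the kind used to compare
# quantized models; the numbers here are illustrative, not measurements.
avg_power_w = 5.8    # mean power draw measured during inference (W)
latency_s = 42.0     # wall-clock time to answer one prompt (s)
tokens_out = 128     # tokens generated for that prompt

energy_j = avg_power_w * latency_s      # joules for the whole answer
j_per_token = energy_j / tokens_out
tok_per_s = tokens_out / latency_s

print(f"{energy_j:.1f} J total, {j_per_token:.2f} J/token, "
      f"{tok_per_s:.2f} tok/s")
```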
|
2504.03369 | Chen Hu | Chen Hu, Enrica Tricomi, Eojin Rho, Daekyum Kim, Lorenzo Masia, Shan
Luo and Letizia Gionfrida | Point Cloud-based Grasping for Soft Hand Exoskeleton | null | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Grasping is a fundamental skill for interacting with and manipulating objects
in the environment. However, this ability can be challenging for individuals
with hand impairments. Soft hand exoskeletons designed to assist grasping can
enhance or restore essential hand functions, yet controlling these soft
exoskeletons to support users effectively remains difficult due to the
complexity of understanding the environment. This study presents a vision-based
predictive control framework that leverages contextual awareness from depth
perception to predict the grasping target and determine the next control state
for activation. Unlike data-driven approaches that require extensive labelled
datasets and struggle with generalizability, our method is grounded in
geometric modelling, enabling robust adaptation across diverse grasping
scenarios. The Grasping Ability Score (GAS) was used to evaluate performance,
with our system achieving a state-of-the-art GAS of 91% across 15 objects and
healthy participants, demonstrating its effectiveness across different object
types. The proposed approach maintained reconstruction success for unseen
objects, underscoring its enhanced generalizability compared to learning-based
models.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 11:40:04 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Hu",
"Chen",
""
],
[
"Tricomi",
"Enrica",
""
],
[
"Rho",
"Eojin",
""
],
[
"Kim",
"Daekyum",
""
],
[
"Masia",
"Lorenzo",
""
],
[
"Luo",
"Shan",
""
],
[
"Gionfrida",
"Letizia",
""
]
] | TITLE: Point Cloud-based Grasping for Soft Hand Exoskeleton
ABSTRACT: Grasping is a fundamental skill for interacting with and manipulating objects
in the environment. However, this ability can be challenging for individuals
with hand impairments. Soft hand exoskeletons designed to assist grasping can
enhance or restore essential hand functions, yet controlling these soft
exoskeletons to support users effectively remains difficult due to the
complexity of understanding the environment. This study presents a vision-based
predictive control framework that leverages contextual awareness from depth
perception to predict the grasping target and determine the next control state
for activation. Unlike data-driven approaches that require extensive labelled
datasets and struggle with generalizability, our method is grounded in
geometric modelling, enabling robust adaptation across diverse grasping
scenarios. The Grasping Ability Score (GAS) was used to evaluate performance,
with our system achieving a state-of-the-art GAS of 91% across 15 objects and
healthy participants, demonstrating its effectiveness across different object
types. The proposed approach maintained reconstruction success for unseen
objects, underscoring its enhanced generalizability compared to learning-based
models.
|
2504.03376 | Edern Le Bot | Edern Le Bot, R\'emi Giraud, Boris Mansencal, Thomas Tourdias, Jos\`e
V. Manjon, Pierrick Coup\'e | FLAIRBrainSeg: Fine-grained brain segmentation using FLAIR MRI only | 9 pages, 6 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This paper introduces a novel method for brain segmentation using only FLAIR
MRIs, specifically targeting cases where access to other imaging modalities is
limited. By leveraging existing automatic segmentation methods, we train a
network to approximate segmentations, typically obtained from T1-weighted MRIs.
Our method, called FLAIRBrainSeg, produces segmentations of 132 structures and
is robust to multiple sclerosis lesions. Experiments on both in-domain and
out-of-domain datasets demonstrate that our method outperforms
modality-agnostic approaches based on image synthesis, the only currently
available alternative for performing brain parcellation using FLAIR MRI alone.
This technique holds promise for scenarios where T1-weighted MRIs are
unavailable and offers a valuable alternative for clinicians and researchers in
need of reliable anatomical segmentation.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 11:47:18 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Bot",
"Edern Le",
""
],
[
"Giraud",
"Rémi",
""
],
[
"Mansencal",
"Boris",
""
],
[
"Tourdias",
"Thomas",
""
],
[
"Manjon",
"Josè V.",
""
],
[
"Coupé",
"Pierrick",
""
]
] | TITLE: FLAIRBrainSeg: Fine-grained brain segmentation using FLAIR MRI only
ABSTRACT: This paper introduces a novel method for brain segmentation using only FLAIR
MRIs, specifically targeting cases where access to other imaging modalities is
limited. By leveraging existing automatic segmentation methods, we train a
network to approximate segmentations, typically obtained from T1-weighted MRIs.
Our method, called FLAIRBrainSeg, produces segmentations of 132 structures and
is robust to multiple sclerosis lesions. Experiments on both in-domain and
out-of-domain datasets demonstrate that our method outperforms
modality-agnostic approaches based on image synthesis, the only currently
available alternative for performing brain parcellation using FLAIR MRI alone.
This technique holds promise for scenarios where T1-weighted MRIs are
unavailable and offers a valuable alternative for clinicians and researchers in
need of reliable anatomical segmentation.
|
2504.03397 | Aashi Shrinate | Aashi Shrinate and Twinkle Tripathy | Leveraging Network Topology in a Two-way Competition for Influence in
the Friedkin-Johnsen Model | null | null | null | null | eess.SY cs.SY | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In this paper, we consider two stubborn agents who compete for `influence'
over a strongly connected group of agents. This framework represents real-world
contests, such as competition among firms, two-party elections, and sports
rivalries, among others. Considering the stubbornness of agents to be an immutable
property, we utilise the network topology alone to increase the influence of a
preferred stubborn agent. We demonstrate this on a special class of strongly
connected networks by identifying the supporters of each of the stubborn agents
in such networks. Thereafter, we present sufficient conditions under which a
network perturbation always increases the influence of the preferred stubborn
agent. A key advantage of the proposed topology-based conditions is that they
hold independent of the edge weights in the network. Most importantly, we
assert that there exists a sequence of perturbations that can make the lesser
influential stubborn agent more influential. Finally, we demonstrate our
results over the Sampson's Monastery dataset.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 12:15:19 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Shrinate",
"Aashi",
""
],
[
"Tripathy",
"Twinkle",
""
]
] | TITLE: Leveraging Network Topology in a Two-way Competition for Influence in
the Friedkin-Johnsen Model
ABSTRACT: In this paper, we consider two stubborn agents who compete for `influence'
over a strongly connected group of agents. This framework represents real-world
contests, such as competition among firms, two-party elections, and sports
rivalries, among others. Considering the stubbornness of agents to be an immutable
property, we utilise the network topology alone to increase the influence of a
preferred stubborn agent. We demonstrate this on a special class of strongly
connected networks by identifying the supporters of each of the stubborn agents
in such networks. Thereafter, we present sufficient conditions under which a
network perturbation always increases the influence of the preferred stubborn
agent. A key advantage of the proposed topology-based conditions is that they
hold independent of the edge weights in the network. Most importantly, we
assert that there exists a sequence of perturbations that can make the lesser
influential stubborn agent more influential. Finally, we demonstrate our
results over the Sampson's Monastery dataset.
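For concreteness, here is a small simulation of the standard Friedkin-Johnsen update with two fully stubborn agents; the network weights are arbitrary illustrative values, not the paper's examples.

```python
import numpy as np

# Friedkin-Johnsen update x(t+1) = L W x(t) + (I - L) u with two fully
# stubborn agents (nodes 0 and 1) on a small strongly connected network.
W = np.array([[0.0, 0.0, 0.5, 0.5, 0.0],
              [0.0, 0.0, 0.0, 0.5, 0.5],
              [0.4, 0.2, 0.0, 0.2, 0.2],
              [0.3, 0.3, 0.2, 0.0, 0.2],
              [0.2, 0.4, 0.2, 0.2, 0.0]])  # row-stochastic influence matrix
lam = np.array([0.0, 0.0, 1.0, 1.0, 1.0])  # 0 = stubborn, 1 = fully open
u = np.array([1.0, -1.0, 0.0, 0.0, 0.0])   # stubborn agents hold +1 / -1

x = u.copy()
for _ in range(500):                        # iterate to steady state
    x = lam * (W @ x) + (1.0 - lam) * u

print(x.round(3))  # regular agents settle between the two stubborn camps
```

Rewiring or reweighting edges in W changes these steady-state opinions, which is exactly the lever the topology-based perturbations in the paper exploit.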
|
2504.03415 | Zhe Wang | Zhe Wang and Yifei Zhu | NeRFlex: Resource-aware Real-time High-quality Rendering of Complex
Scenes on Mobile Devices | This paper is accepted by 45th IEEE International Conference on
Distributed Computing Systems (ICDCS 2025) | null | null | null | cs.GR cs.CV cs.LG cs.MM cs.PF | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Neural Radiance Fields (NeRF) is a cutting-edge neural network-based
technique for novel view synthesis in 3D reconstruction. However, its
significant computational demands pose challenges for deployment on mobile
devices. While mesh-based NeRF solutions have shown potential in achieving
real-time rendering on mobile platforms, they often fail to deliver
high-quality reconstructions when rendering practical complex scenes.
Additionally, the non-negligible memory overhead caused by pre-computed
intermediate results complicates their practical application. To overcome these
challenges, we present NeRFlex, a resource-aware, high-resolution, real-time
rendering framework for complex scenes on mobile devices. NeRFlex integrates
mobile NeRF rendering with multi-NeRF representations that decompose a scene
into multiple sub-scenes, each represented by an individual NeRF network.
Crucially, NeRFlex considers both memory and computation constraints as
first-class citizens and redesigns the reconstruction process accordingly.
NeRFlex first designs a detail-oriented segmentation module to identify
sub-scenes with high-frequency details. For each NeRF network, a lightweight
profiler, built on domain knowledge, is used to accurately map configurations
to visual quality and memory usage. Based on these insights and the resource
constraints on mobile devices, NeRFlex presents a dynamic programming algorithm
to efficiently determine configurations for all NeRF representations, despite
the NP-hardness of the original decision problem. Extensive experiments on
real-world datasets and mobile devices demonstrate that NeRFlex achieves
real-time, high-quality rendering on commercial mobile devices.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 12:53:33 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Wang",
"Zhe",
""
],
[
"Zhu",
"Yifei",
""
]
] | TITLE: NeRFlex: Resource-aware Real-time High-quality Rendering of Complex
Scenes on Mobile Devices
ABSTRACT: Neural Radiance Fields (NeRF) is a cutting-edge neural network-based
technique for novel view synthesis in 3D reconstruction. However, its
significant computational demands pose challenges for deployment on mobile
devices. While mesh-based NeRF solutions have shown potential in achieving
real-time rendering on mobile platforms, they often fail to deliver
high-quality reconstructions when rendering practical complex scenes.
Additionally, the non-negligible memory overhead caused by pre-computed
intermediate results complicates their practical application. To overcome these
challenges, we present NeRFlex, a resource-aware, high-resolution, real-time
rendering framework for complex scenes on mobile devices. NeRFlex integrates
mobile NeRF rendering with multi-NeRF representations that decompose a scene
into multiple sub-scenes, each represented by an individual NeRF network.
Crucially, NeRFlex considers both memory and computation constraints as
first-class citizens and redesigns the reconstruction process accordingly.
NeRFlex first designs a detail-oriented segmentation module to identify
sub-scenes with high-frequency details. For each NeRF network, a lightweight
profiler, built on domain knowledge, is used to accurately map configurations
to visual quality and memory usage. Based on these insights and the resource
constraints on mobile devices, NeRFlex presents a dynamic programming algorithm
to efficiently determine configurations for all NeRF representations, despite
the NP-hardness of the original decision problem. Extensive experiments on
real-world datasets and mobile devices demonstrate that NeRFlex achieves
real-time, high-quality rendering on commercial mobile devices.
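The configuration-selection step has the flavor of a grouped knapsack, which the sketch below implements: pick one profiled (quality, memory) configuration per sub-NeRF to maximize total quality under a memory budget. The profiler numbers are made up, and backtracking of the chosen configurations is omitted for brevity.

```python
def choose_configs(options, budget):
    """options[i]: list of (quality, memory_mb) pairs for sub-NeRF i.
    Returns the best total quality achievable within the memory budget."""
    NEG = float("-inf")
    best = [NEG] * (budget + 1)
    best[0] = 0.0
    for opts in options:                 # one DP layer per sub-NeRF
        nxt = [NEG] * (budget + 1)
        for used, q in enumerate(best):
            if q == NEG:
                continue
            for gain, mem in opts:       # try every configuration
                if used + mem <= budget:
                    nxt[used + mem] = max(nxt[used + mem], q + gain)
        best = nxt
    return max(best)

profiles = [[(30.1, 120), (32.4, 260)],  # (quality, MB) per sub-scene
            [(28.9, 100), (31.0, 220)],
            [(29.5, 150), (30.8, 300)]]
print(choose_configs(profiles, budget=600))  # best quality (~90.8) <= 600 MB
```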
|
2504.03423 | Sathish Kumar | Sathish Kumar, Swaroop Damodaran, Naveen Kumar Kuruba, Sumit Jha, and
Arvind Ramanathan | DML-RAM: Deep Multimodal Learning Framework for Robotic Arm Manipulation
using Pre-trained Models | 7 pages , 4 figures | null | null | null | cs.LG cs.RO | http://creativecommons.org/licenses/by/4.0/ | This paper presents a novel deep learning framework for robotic arm
manipulation that integrates multimodal inputs using a late-fusion strategy.
Unlike traditional end-to-end or reinforcement learning approaches, our method
processes image sequences with pre-trained models and robot state data with
machine learning algorithms, fusing their outputs to predict continuous action
values for control. Evaluated on BridgeData V2 and Kuka datasets, the best
configuration (VGG16 + Random Forest) achieved MSEs of 0.0021 and 0.0028,
respectively, demonstrating strong predictive performance and robustness. The
framework supports modularity, interpretability, and real-time decision-making,
aligning with the goals of adaptive, human-in-the-loop cyber-physical systems.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 13:11:43 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Kumar",
"Sathish",
""
],
[
"Damodaran",
"Swaroop",
""
],
[
"Kuruba",
"Naveen Kumar",
""
],
[
"Jha",
"Sumit",
""
],
[
"Ramanathan",
"Arvind",
""
]
] | TITLE: DML-RAM: Deep Multimodal Learning Framework for Robotic Arm Manipulation
using Pre-trained Models
ABSTRACT: This paper presents a novel deep learning framework for robotic arm
manipulation that integrates multimodal inputs using a late-fusion strategy.
Unlike traditional end-to-end or reinforcement learning approaches, our method
processes image sequences with pre-trained models and robot state data with
machine learning algorithms, fusing their outputs to predict continuous action
values for control. Evaluated on BridgeData V2 and Kuka datasets, the best
configuration (VGG16 + Random Forest) achieved MSEs of 0.0021 and 0.0028,
respectively, demonstrating strong predictive performance and robustness. The
framework supports modularity, interpretability, and real-time decision-making,
aligning with the goals of adaptive, human-in-the-loop cyber-physical systems.
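The best-performing configuration reported above (VGG16 + Random Forest) suggests a late-fusion recipe like the sketch below; the shapes and placeholder data are assumptions, and preprocessing details are omitted.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from tensorflow.keras.applications import VGG16

# Late fusion sketch: frozen VGG16 image features concatenated with robot
# state, regressed to continuous actions by a random forest. All data
# below are random placeholders standing in for BridgeData V2 / Kuka.
backbone = VGG16(weights="imagenet", include_top=False, pooling="avg")

images = np.random.rand(32, 224, 224, 3).astype("float32")  # camera frames
state = np.random.rand(32, 7)       # e.g. joint angles and gripper pose
actions = np.random.rand(32, 4)     # continuous action targets

img_feats = backbone.predict(images, verbose=0)     # (32, 512) per frame
fused = np.concatenate([img_feats, state], axis=1)  # the late-fusion step

model = RandomForestRegressor(n_estimators=100).fit(fused, actions)
print(model.predict(fused[:2]))  # predicted action values for two samples
```

Keeping the vision backbone frozen and the regressor shallow is what makes this kind of pipeline modular and interpretable relative to end-to-end training.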
|
2504.03424 | Adam Moss | Adam Moss | The AI Cosmologist I: An Agentic System for Automated Data Analysis | 45 pages | null | null | null | astro-ph.IM astro-ph.CO astro-ph.GA cs.AI physics.data-an | http://creativecommons.org/licenses/by/4.0/ | We present the AI Cosmologist, an agentic system designed to automate
cosmological/astronomical data analysis and machine learning research
workflows. This implements a complete pipeline from idea generation to
experimental evaluation and research dissemination, mimicking the scientific
process typically performed by human researchers. The system employs
specialized agents for planning, coding, execution, analysis, and synthesis
that work together to develop novel approaches. Unlike traditional automated
machine-learning (AutoML) systems, the AI Cosmologist generates diverse
implementation strategies, writes complete code, handles execution errors,
analyzes results, and synthesizes new approaches based on experimental
outcomes. We demonstrate the AI Cosmologist's capabilities across several
machine learning tasks, showing
how it can successfully explore solution spaces, iterate based on experimental
results, and combine successful elements from different approaches. Our results
indicate that agentic systems can automate portions of the research process,
potentially accelerating scientific discovery. The code and experimental data
used in this paper are available on GitHub at
https://github.com/adammoss/aicosmologist. Example papers included in the
appendix demonstrate the system's capability to autonomously produce complete
scientific publications, starting from only the dataset and task description
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 13:12:08 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Moss",
"Adam",
""
]
] | TITLE: The AI Cosmologist I: An Agentic System for Automated Data Analysis
ABSTRACT: We present the AI Cosmologist, an agentic system designed to automate
cosmological/astronomical data analysis and machine learning research
workflows. This implements a complete pipeline from idea generation to
experimental evaluation and research dissemination, mimicking the scientific
process typically performed by human researchers. The system employs
specialized agents for planning, coding, execution, analysis, and synthesis
that work together to develop novel approaches. Unlike traditional automated
machine-learning (AutoML) systems, the AI Cosmologist generates diverse
implementation strategies, writes complete code, handles execution errors,
analyzes results, and synthesizes new approaches based on experimental
outcomes. We demonstrate the AI Cosmologist's capabilities across several
machine learning tasks, showing
how it can successfully explore solution spaces, iterate based on experimental
results, and combine successful elements from different approaches. Our results
indicate that agentic systems can automate portions of the research process,
potentially accelerating scientific discovery. The code and experimental data
used in this paper are available on GitHub at
https://github.com/adammoss/aicosmologist. Example papers included in the
appendix demonstrate the system's capability to autonomously produce complete
scientific publications, starting from only the dataset and task description.
|
2504.03434 | Batuhan Ozyurt | Batuhan Ozyurt, Roya Arkhmammadova, Deniz Yuret | Locations of Characters in Narratives: Andersen and Persuasion Datasets | 14 pages, 3 figures, 10 tables | null | null | null | cs.CL | http://creativecommons.org/licenses/by-sa/4.0/ | The ability of machines to grasp spatial understanding within narrative
contexts is an intriguing aspect of reading comprehension that continues to be
studied. Motivated by the goal to test the AI's competence in understanding the
relationship between characters and their respective locations in narratives,
we introduce two new datasets: Andersen and Persuasion. For the Andersen
dataset, we selected fifteen children's stories from "Andersen's Fairy Tales"
by Hans Christian Andersen and manually annotated the characters and their
respective locations throughout each story. Similarly, for the Persuasion
dataset, characters and their locations in the novel "Persuasion" by Jane
Austen were also manually annotated. We used these datasets to prompt Large
Language Models (LLMs). The prompts are created by extracting excerpts from the
stories or the novel and combining them with a question asking for the location of
a character mentioned in that excerpt. Out of the five LLMs we tested, the
best-performing one for the Andersen dataset accurately identified the location
in 61.85% of the examples, while for the Persuasion dataset, the
best-performing one did so in 56.06% of the cases.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 13:25:32 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Ozyurt",
"Batuhan",
""
],
[
"Arkhmammadova",
"Roya",
""
],
[
"Yuret",
"Deniz",
""
]
] | TITLE: Locations of Characters in Narratives: Andersen and Persuasion Datasets
ABSTRACT: The ability of machines to grasp spatial understanding within narrative
contexts is an intriguing aspect of reading comprehension that continues to be
studied. Motivated by the goal to test the AI's competence in understanding the
relationship between characters and their respective locations in narratives,
we introduce two new datasets: Andersen and Persuasion. For the Andersen
dataset, we selected fifteen children's stories from "Andersen's Fairy Tales"
by Hans Christian Andersen and manually annotated the characters and their
respective locations throughout each story. Similarly, for the Persuasion
dataset, characters and their locations in the novel "Persuasion" by Jane
Austen were also manually annotated. We used these datasets to prompt Large
Language Models (LLMs). The prompts are created by extracting excerpts from the
stories or the novel and combining them with a question asking for the location of
a character mentioned in that excerpt. Out of the five LLMs we tested, the
best-performing one for the Andersen dataset accurately identified the location
in 61.85% of the examples, while for the Persuasion dataset, the
best-performing one did so in 56.06% of the cases.
|
2504.03439 | Amin Dehghani | Mohammad Reza Yousefi, Ali Bakrani, Amin Dehghani | Early detection of diabetes through transfer learning-based eye (vision)
screening and improvement of machine learning model performance and advanced
parameter setting algorithms | 25 pages,12 Figures, 1 Table | null | null | null | eess.IV cs.CV eess.SP | http://creativecommons.org/licenses/by/4.0/ | Diabetic Retinopathy (DR) is a serious and common complication of diabetes,
caused by prolonged high blood sugar levels that damage the small retinal blood
vessels. If left untreated, DR can progress to retinal vein occlusion and
stimulate abnormal blood vessel growth, significantly increasing the risk of
blindness. Traditional diabetes diagnosis methods often utilize convolutional
neural networks (CNNs) to extract visual features from retinal images, followed
by classification algorithms such as decision trees and k-nearest neighbors
(KNN) for disease detection. However, these approaches face several challenges,
including low accuracy and sensitivity, lengthy machine learning (ML) model
training due to high data complexity and volume, and the use of limited
datasets for testing and evaluation. This study investigates the application of
transfer learning (TL) to enhance ML model performance in DR detection. Key
improvements include dimensionality reduction, optimized learning rate
adjustments, and advanced parameter tuning algorithms, aimed at increasing
efficiency and diagnostic accuracy. The proposed model achieved an overall
accuracy of 84% on the testing dataset, outperforming prior studies. The
highest class-specific accuracy reached 89%, with a maximum sensitivity of 97%
and an F1-score of 92%, demonstrating strong performance in identifying DR
cases. These findings suggest that TL-based DR screening is a promising
approach for early diagnosis, enabling timely interventions to prevent vision
loss and improve patient outcomes.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 13:30:21 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Yousefi",
"Mohammad Reza",
""
],
[
"Bakrani",
"Ali",
""
],
[
"Dehghani",
"Amin",
""
]
] | TITLE: Early detection of diabetes through transfer learning-based eye (vision)
screening and improvement of machine learning model performance and advanced
parameter setting algorithms
ABSTRACT: Diabetic Retinopathy (DR) is a serious and common complication of diabetes,
caused by prolonged high blood sugar levels that damage the small retinal blood
vessels. If left untreated, DR can progress to retinal vein occlusion and
stimulate abnormal blood vessel growth, significantly increasing the risk of
blindness. Traditional diabetes diagnosis methods often utilize convolutional
neural networks (CNNs) to extract visual features from retinal images, followed
by classification algorithms such as decision trees and k-nearest neighbors
(KNN) for disease detection. However, these approaches face several challenges,
including low accuracy and sensitivity, lengthy machine learning (ML) model
training due to high data complexity and volume, and the use of limited
datasets for testing and evaluation. This study investigates the application of
transfer learning (TL) to enhance ML model performance in DR detection. Key
improvements include dimensionality reduction, optimized learning rate
adjustments, and advanced parameter tuning algorithms, aimed at increasing
efficiency and diagnostic accuracy. The proposed model achieved an overall
accuracy of 84% on the testing dataset, outperforming prior studies. The
highest class-specific accuracy reached 89%, with a maximum sensitivity of 97%
and an F1-score of 92%, demonstrating strong performance in identifying DR
cases. These findings suggest that TL-based DR screening is a promising
approach for early diagnosis, enabling timely interventions to prevent vision
loss and improve patient outcomes.
|
2504.03450 | Van Anh Nguyen | Van-Anh Nguyen, Thanh-Toan Do, Mehrtash Harandi, Dinh Phung, Trung Le | Optimizing Specific and Shared Parameters for Efficient Parameter Tuning | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Foundation models, with a vast number of parameters and pretraining on
massive datasets, achieve state-of-the-art performance across various
applications. However, efficiently adapting them to downstream tasks with
minimal computational overhead remains a challenge. Parameter-Efficient
Transfer Learning (PETL) addresses this by fine-tuning only a small subset of
parameters while preserving pre-trained knowledge. In this paper, we propose
SaS, a novel PETL method that effectively mitigates distributional shifts
during fine-tuning. SaS integrates (1) a shared module that captures common
statistical characteristics across layers using low-rank projections and (2) a
layer-specific module that employs hypernetworks to generate tailored
parameters for each layer. This dual design ensures an optimal balance between
performance and parameter efficiency while introducing less than 0.05%
additional parameters, making it significantly more compact than existing
methods. Extensive experiments on diverse downstream tasks, few-shot settings
and domain generalization demonstrate that SaS significantly enhances
performance while maintaining superior parameter efficiency compared to
existing methods, highlighting the importance of capturing both shared and
layer-specific information in transfer learning. Code and data are available at
https://anonymous.4open.science/r/SaS-PETL-3565.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 13:43:54 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Nguyen",
"Van-Anh",
""
],
[
"Do",
"Thanh-Toan",
""
],
[
"Harandi",
"Mehrtash",
""
],
[
"Phung",
"Dinh",
""
],
[
"Le",
"Trung",
""
]
] | TITLE: Optimizing Specific and Shared Parameters for Efficient Parameter Tuning
ABSTRACT: Foundation models, with a vast number of parameters and pretraining on
massive datasets, achieve state-of-the-art performance across various
applications. However, efficiently adapting them to downstream tasks with
minimal computational overhead remains a challenge. Parameter-Efficient
Transfer Learning (PETL) addresses this by fine-tuning only a small subset of
parameters while preserving pre-trained knowledge. In this paper, we propose
SaS, a novel PETL method that effectively mitigates distributional shifts
during fine-tuning. SaS integrates (1) a shared module that captures common
statistical characteristics across layers using low-rank projections and (2) a
layer-specific module that employs hypernetworks to generate tailored
parameters for each layer. This dual design ensures an optimal balance between
performance and parameter efficiency while introducing less than 0.05%
additional parameters, making it significantly more compact than existing
methods. Extensive experiments on diverse downstream tasks, few-shot settings
and domain generalization demonstrate that SaS significantly enhances
performance while maintaining superior parameter efficiency compared to
existing methods, highlighting the importance of capturing both shared and
layer-specific information in transfer learning. Code and data are available at
https://anonymous.4open.science/r/SaS-PETL-3565.
|
2504.03463 | David Landry | David Landry, Claire Monteleoni and Anastase Charantonis | Generating ensembles of spatially-coherent in-situ forecasts using flow
matching | 16 pages, 7 figures | null | null | null | physics.ao-ph cs.LG | http://creativecommons.org/licenses/by/4.0/ | We propose a machine-learning-based methodology for in-situ weather forecast
postprocessing that is both spatially coherent and multivariate. Compared to
previous work, our Flow MAtching Postprocessing (FMAP) better represents the
correlation structures of the observations' distribution, while also improving
marginal performance at the stations. FMAP generates forecasts that are not
bound to what is already modeled by the underlying gridded prediction and can
infer new correlation structures from data. The resulting model can generate an
arbitrary number of forecasts from a limited number of numerical simulations,
allowing for low-cost forecasting systems. A single training is sufficient to
perform postprocessing at multiple lead times, in contrast with other methods
which use multiple trained networks at generation time. This work details our
methodology, including a spatial attention transformer backbone trained within
a flow matching generative modeling framework. FMAP shows promising performance
in experiments on the EUPPBench dataset, forecasting surface temperature and
wind gust values at station locations in western Europe up to five-day lead
times.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 14:12:53 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Landry",
"David",
""
],
[
"Monteleoni",
"Claire",
""
],
[
"Charantonis",
"Anastase",
""
]
] | TITLE: Generating ensembles of spatially-coherent in-situ forecasts using flow
matching
ABSTRACT: We propose a machine-learning-based methodology for in-situ weather forecast
postprocessing that is both spatially coherent and multivariate. Compared to
previous work, our Flow MAtching Postprocessing (FMAP) better represents the
correlation structures of the observations' distribution, while also improving
marginal performance at the stations. FMAP generates forecasts that are not
bound to what is already modeled by the underlying gridded prediction and can
infer new correlation structures from data. The resulting model can generate an
arbitrary number of forecasts from a limited number of numerical simulations,
allowing for low-cost forecasting systems. A single training is sufficient to
perform postprocessing at multiple lead times, in contrast with other methods
which use multiple trained networks at generation time. This work details our
methodology, including a spatial attention transformer backbone trained within
a flow matching generative modeling framework. FMAP shows promising performance
in experiments on the EUPPBench dataset, forecasting surface temperature and
wind gust values at station locations in western Europe up to five-day lead
times.
|
2504.03476 | Dengfeng Pan | Sheng Lian, Dengfeng Pan, Jianlong Cai, Guang-Yong Chen, Zhun Zhong,
Zhiming Luo, Shen Zhao, Shuo Li | ATM-Net: Anatomy-Aware Text-Guided Multi-Modal Fusion for Fine-Grained
Lumbar Spine Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate lumbar spine segmentation is crucial for diagnosing spinal
disorders. Existing methods typically use coarse-grained segmentation
strategies that lack the fine detail needed for precise diagnosis.
Additionally, their reliance on visual-only models hinders the capture of
anatomical semantics, leading to misclassified categories and poor segmentation
details. To address these limitations, we present ATM-Net, an innovative
framework that employs an anatomy-aware, text-guided, multi-modal fusion
mechanism for fine-grained segmentation of lumbar substructures, i.e.,
vertebrae (VBs), intervertebral discs (IDs), and spinal canal (SC). ATM-Net
adopts the Anatomy-aware Text Prompt Generator (ATPG) to adaptively convert
image annotations into anatomy-aware prompts in different views. These insights
are further integrated with image features via the Holistic Anatomy-aware
Semantic Fusion (HASF) module, building a comprehensive anatomical context. The
Channel-wise Contrastive Anatomy-Aware Enhancement (CCAE) module further
enhances class discrimination and refines segmentation through class-wise
channel-level multi-modal contrastive learning. Extensive experiments on the
MRSpineSeg and SPIDER datasets demonstrate that ATM-Net significantly
outperforms state-of-the-art methods, with consistent improvements regarding
class discrimination and segmentation details. For example, ATM-Net achieves
Dice of 79.39% and HD95 of 9.91 pixels on SPIDER, outperforming the competitive
SpineParseNet by 8.31% and 4.14 pixels, respectively.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 14:36:12 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Lian",
"Sheng",
""
],
[
"Pan",
"Dengfeng",
""
],
[
"Cai",
"Jianlong",
""
],
[
"Chen",
"Guang-Yong",
""
],
[
"Zhong",
"Zhun",
""
],
[
"Luo",
"Zhiming",
""
],
[
"Zhao",
"Shen",
""
],
[
"Li",
"Shuo",
""
]
] | TITLE: ATM-Net: Anatomy-Aware Text-Guided Multi-Modal Fusion for Fine-Grained
Lumbar Spine Segmentation
ABSTRACT: Accurate lumbar spine segmentation is crucial for diagnosing spinal
disorders. Existing methods typically use coarse-grained segmentation
strategies that lack the fine detail needed for precise diagnosis.
Additionally, their reliance on visual-only models hinders the capture of
anatomical semantics, leading to misclassified categories and poor segmentation
details. To address these limitations, we present ATM-Net, an innovative
framework that employs an anatomy-aware, text-guided, multi-modal fusion
mechanism for fine-grained segmentation of lumbar substructures, i.e.,
vertebrae (VBs), intervertebral discs (IDs), and spinal canal (SC). ATM-Net
adopts the Anatomy-aware Text Prompt Generator (ATPG) to adaptively convert
image annotations into anatomy-aware prompts in different views. These insights
are further integrated with image features via the Holistic Anatomy-aware
Semantic Fusion (HASF) module, building a comprehensive anatomical context. The
Channel-wise Contrastive Anatomy-Aware Enhancement (CCAE) module further
enhances class discrimination and refines segmentation through class-wise
channel-level multi-modal contrastive learning. Extensive experiments on the
MRSpineSeg and SPIDER datasets demonstrate that ATM-Net significantly
outperforms state-of-the-art methods, with consistent improvements regarding
class discrimination and segmentation details. For example, ATM-Net achieves
Dice of 79.39% and HD95 of 9.91 pixels on SPIDER, outperforming the competitive
SpineParseNet by 8.31% and 4.14 pixels, respectively.
|
2504.03478 | Spyros Kondylatos | Spyros Kondylatos, Nikolaos Ioannis Bountos, Ioannis Prapas, Angelos
Zavras, Gustau Camps-Valls, Ioannis Papoutsis | Probabilistic Machine Learning for Noisy Labels in Earth Observation | null | null | null | null | cs.LG cs.CV | http://creativecommons.org/licenses/by/4.0/ | Label noise poses a significant challenge in Earth Observation (EO), often
degrading the performance and reliability of supervised Machine Learning (ML)
models. Yet, given the critical nature of several EO applications, developing
robust and trustworthy ML solutions is essential. In this study, we take a step
in this direction by leveraging probabilistic ML to model input-dependent label
noise and quantify data uncertainty in EO tasks, accounting for the unique
noise sources inherent in the domain. We train uncertainty-aware probabilistic
models across a broad range of high-impact EO applications-spanning diverse
noise sources, input modalities, and ML configurations-and introduce a
dedicated pipeline to assess their accuracy and reliability. Our experimental
results show that the uncertainty-aware models consistently outperform the
standard deterministic approaches across most datasets and evaluation metrics.
Moreover, through rigorous uncertainty evaluation, we validate the reliability
of the predicted uncertainty estimates, enhancing the interpretability of model
predictions. Our findings emphasize the importance of modeling label noise and
incorporating uncertainty quantification in EO, paving the way for more
accurate, reliable, and trustworthy ML solutions in the field.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 14:36:33 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Kondylatos",
"Spyros",
""
],
[
"Bountos",
"Nikolaos Ioannis",
""
],
[
"Prapas",
"Ioannis",
""
],
[
"Zavras",
"Angelos",
""
],
[
"Camps-Valls",
"Gustau",
""
],
[
"Papoutsis",
"Ioannis",
""
]
] | TITLE: Probabilistic Machine Learning for Noisy Labels in Earth Observation
ABSTRACT: Label noise poses a significant challenge in Earth Observation (EO), often
degrading the performance and reliability of supervised Machine Learning (ML)
models. Yet, given the critical nature of several EO applications, developing
robust and trustworthy ML solutions is essential. In this study, we take a step
in this direction by leveraging probabilistic ML to model input-dependent label
noise and quantify data uncertainty in EO tasks, accounting for the unique
noise sources inherent in the domain. We train uncertainty-aware probabilistic
models across a broad range of high-impact EO applications (spanning diverse
noise sources, input modalities, and ML configurations) and introduce a
dedicated pipeline to assess their accuracy and reliability. Our experimental
results show that the uncertainty-aware models consistently outperform the
standard deterministic approaches across most datasets and evaluation metrics.
Moreover, through rigorous uncertainty evaluation, we validate the reliability
of the predicted uncertainty estimates, enhancing the interpretability of model
predictions. Our findings emphasize the importance of modeling label noise and
incorporating uncertainty quantification in EO, paving the way for more
accurate, reliable, and trustworthy ML solutions in the field.
|
2504.03486 | Shubham Kumar Nigam | Shubham Kumar Nigam, Balaramamahanthi Deepak Patnaik, Ajay Varghese
Thomas, Noel Shallum, Kripabandhu Ghosh and Arnab Bhattacharya | Structured Legal Document Generation in India: A Model-Agnostic Wrapper
Approach with VidhikDastaavej | null | null | null | null | cs.CL cs.AI cs.IR cs.LG | http://creativecommons.org/licenses/by/4.0/ | Automating legal document drafting can significantly enhance efficiency,
reduce manual effort, and streamline legal workflows. While prior research has
explored tasks such as judgment prediction and case summarization, the
structured generation of private legal documents in the Indian legal domain
remains largely unaddressed. To bridge this gap, we introduce VidhikDastaavej,
a novel, anonymized dataset of private legal documents, and develop NyayaShilp,
a fine-tuned legal document generation model specifically adapted to Indian
legal texts. We propose a Model-Agnostic Wrapper (MAW), a two-step framework
that first generates structured section titles and then iteratively produces
content while leveraging retrieval-based mechanisms to ensure coherence and
factual accuracy. We benchmark multiple open-source LLMs, including
instruction-tuned and domain-adapted versions, alongside proprietary models for
comparison. Our findings indicate that while direct fine-tuning on small
datasets does not always yield improvements, our structured wrapper
significantly enhances coherence, factual adherence, and overall document
quality while mitigating hallucinations. To ensure real-world applicability, we
developed a Human-in-the-Loop (HITL) Document Generation System, an interactive
user interface that enables users to specify document types, refine section
details, and generate structured legal drafts. This tool allows legal
professionals and researchers to generate, validate, and refine AI-generated
legal documents efficiently. Extensive evaluations, including expert
assessments, confirm that our framework achieves high reliability in structured
legal drafting. This research establishes a scalable and adaptable foundation
for AI-assisted legal drafting in India, offering an effective approach to
structured legal document generation.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 14:41:50 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Nigam",
"Shubham Kumar",
""
],
[
"Patnaik",
"Balaramamahanthi Deepak",
""
],
[
"Thomas",
"Ajay Varghese",
""
],
[
"Shallum",
"Noel",
""
],
[
"Ghosh",
"Kripabandhu",
""
],
[
"Bhattacharya",
"Arnab",
""
]
] | TITLE: Structured Legal Document Generation in India: A Model-Agnostic Wrapper
Approach with VidhikDastaavej
ABSTRACT: Automating legal document drafting can significantly enhance efficiency,
reduce manual effort, and streamline legal workflows. While prior research has
explored tasks such as judgment prediction and case summarization, the
structured generation of private legal documents in the Indian legal domain
remains largely unaddressed. To bridge this gap, we introduce VidhikDastaavej,
a novel, anonymized dataset of private legal documents, and develop NyayaShilp,
a fine-tuned legal document generation model specifically adapted to Indian
legal texts. We propose a Model-Agnostic Wrapper (MAW), a two-step framework
that first generates structured section titles and then iteratively produces
content while leveraging retrieval-based mechanisms to ensure coherence and
factual accuracy. We benchmark multiple open-source LLMs, including
instruction-tuned and domain-adapted versions, alongside proprietary models for
comparison. Our findings indicate that while direct fine-tuning on small
datasets does not always yield improvements, our structured wrapper
significantly enhances coherence, factual adherence, and overall document
quality while mitigating hallucinations. To ensure real-world applicability, we
developed a Human-in-the-Loop (HITL) Document Generation System, an interactive
user interface that enables users to specify document types, refine section
details, and generate structured legal drafts. This tool allows legal
professionals and researchers to generate, validate, and refine AI-generated
legal documents efficiently. Extensive evaluations, including expert
assessments, confirm that our framework achieves high reliability in structured
legal drafting. This research establishes a scalable and adaptable foundation
for AI-assisted legal drafting in India, offering an effective approach to
structured legal document generation.
|
2504.03490 | Zihao He | Zihao He, Shengchuan Zhang, Runze Hu, Yunhang Shen and Yan Zhang | BUFF: Bayesian Uncertainty Guided Diffusion Probabilistic Model for
Single Image Super-Resolution | 9 pages, 5 figures, AAAI 2025 | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Super-resolution (SR) techniques are critical for enhancing image quality,
particularly in scenarios where high-resolution imagery is essential yet
limited by hardware constraints. Existing diffusion models for SR have relied
predominantly on Gaussian models for noise generation, which often fall short
when dealing with the complex and variable texture inherent in natural scenes.
To address these deficiencies, we introduce the Bayesian Uncertainty Guided
Diffusion Probabilistic Model (BUFF). BUFF distinguishes itself by
incorporating a Bayesian network to generate high-resolution uncertainty masks.
These masks guide the diffusion process, allowing for the adjustment of noise
intensity in a manner that is both context-aware and adaptive. This novel
approach not only enhances the fidelity of super-resolved images to their
original high-resolution counterparts but also significantly mitigates
artifacts and blurring in areas characterized by complex textures and fine
details. The model demonstrates exceptional robustness against complex noise
patterns and showcases superior adaptability in handling textures and edges
within images. Empirical evidence, supported by visual results, illustrates the
model's robustness, especially in challenging scenarios, and its effectiveness
in addressing common SR issues such as blurring. Experimental evaluations
conducted on the DIV2K dataset reveal that BUFF achieves a notable improvement,
with a +0.61 SSIM increase over the baseline on BSD100, surpassing
traditional diffusion approaches by an additional +0.20 dB average PSNR gain.
These findings underscore the potential of Bayesian methods in enhancing
diffusion processes for SR, paving the way for future advancements in the
field.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 14:43:45 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"He",
"Zihao",
""
],
[
"Zhang",
"Shengchuan",
""
],
[
"Hu",
"Runze",
""
],
[
"Shen",
"Yunhang",
""
],
[
"Zhang",
"Yan",
""
]
] | TITLE: BUFF: Bayesian Uncertainty Guided Diffusion Probabilistic Model for
Single Image Super-Resolution
ABSTRACT: Super-resolution (SR) techniques are critical for enhancing image quality,
particularly in scenarios where high-resolution imagery is essential yet
limited by hardware constraints. Existing diffusion models for SR have relied
predominantly on Gaussian models for noise generation, which often fall short
when dealing with the complex and variable texture inherent in natural scenes.
To address these deficiencies, we introduce the Bayesian Uncertainty Guided
Diffusion Probabilistic Model (BUFF). BUFF distinguishes itself by
incorporating a Bayesian network to generate high-resolution uncertainty masks.
These masks guide the diffusion process, allowing for the adjustment of noise
intensity in a manner that is both context-aware and adaptive. This novel
approach not only enhances the fidelity of super-resolved images to their
original high-resolution counterparts but also significantly mitigates
artifacts and blurring in areas characterized by complex textures and fine
details. The model demonstrates exceptional robustness against complex noise
patterns and showcases superior adaptability in handling textures and edges
within images. Empirical evidence, supported by visual results, illustrates the
model's robustness, especially in challenging scenarios, and its effectiveness
in addressing common SR issues such as blurring. Experimental evaluations
conducted on the DIV2K dataset reveal that BUFF achieves a notable improvement,
with a +0.61 SSIM increase over the baseline on BSD100, surpassing
traditional diffusion approaches by an additional +0.20 dB average PSNR gain.
These findings underscore the potential of Bayesian methods in enhancing
diffusion processes for SR, paving the way for future advancements in the
field.
|
2504.03491 | Johannes Kirschner | Luis Barba, Johannes Kirschner, Tomas Aidukas, Manuel Guizar-Sicairos,
Benjam\'in B\'ejar | Diffusion Active Learning: Towards Data-Driven Experimental Design in
Computed Tomography | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce Diffusion Active Learning, a novel approach that combines
generative diffusion modeling with data-driven sequential experimental design
to adaptively acquire data for inverse problems. Although broadly applicable,
we focus on scientific computed tomography (CT) for experimental validation,
where structured prior datasets are available, and reducing data requirements
directly translates to shorter measurement times and lower X-ray doses. We
first pre-train an unconditional diffusion model on domain-specific CT
reconstructions. The diffusion model acts as a learned prior that is
data-dependent and captures the structure of the underlying data distribution,
which is then used in two ways: It drives the active learning process and also
improves the quality of the reconstructions. During the active learning loop,
we employ a variant of diffusion posterior sampling to generate conditional
data samples from the posterior distribution, ensuring consistency with the
current measurements. Using these samples, we quantify the uncertainty in the
current estimate to select the most informative next measurement. Our results
show substantial reductions in data acquisition requirements, corresponding to
lower X-ray doses, while simultaneously improving image reconstruction quality
across multiple real-world tomography datasets.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 14:46:48 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Barba",
"Luis",
""
],
[
"Kirschner",
"Johannes",
""
],
[
"Aidukas",
"Tomas",
""
],
[
"Guizar-Sicairos",
"Manuel",
""
],
[
"Béjar",
"Benjamín",
""
]
] | TITLE: Diffusion Active Learning: Towards Data-Driven Experimental Design in
Computed Tomography
ABSTRACT: We introduce Diffusion Active Learning, a novel approach that combines
generative diffusion modeling with data-driven sequential experimental design
to adaptively acquire data for inverse problems. Although broadly applicable,
we focus on scientific computed tomography (CT) for experimental validation,
where structured prior datasets are available, and reducing data requirements
directly translates to shorter measurement times and lower X-ray doses. We
first pre-train an unconditional diffusion model on domain-specific CT
reconstructions. The diffusion model acts as a learned prior that is
data-dependent and captures the structure of the underlying data distribution,
which is then used in two ways: It drives the active learning process and also
improves the quality of the reconstructions. During the active learning loop,
we employ a variant of diffusion posterior sampling to generate conditional
data samples from the posterior distribution, ensuring consistency with the
current measurements. Using these samples, we quantify the uncertainty in the
current estimate to select the most informative next measurement. Our results
show substantial reductions in data acquisition requirements, corresponding to
lower X-ray doses, while simultaneously improving image reconstruction quality
across multiple real-world tomography datasets.
|
2504.03494 | Alexander Windmann | Alexander Windmann, Henrik Steude, Daniel Boschmann, Oliver Niggemann | Quantifying Robustness: A Benchmarking Framework for Deep Learning
Forecasting in Cyber-Physical Systems | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Cyber-Physical Systems (CPS) in domains such as manufacturing and energy
distribution generate complex time series data crucial for Prognostics and
Health Management (PHM). While Deep Learning (DL) methods have demonstrated
strong forecasting capabilities, their adoption in industrial CPS remains
limited due to insufficient robustness. Existing robustness evaluations primarily
focus on formal verification or adversarial perturbations, inadequately
representing the complexities encountered in real-world CPS scenarios. To
address this, we introduce a practical robustness definition grounded in
distributional robustness, explicitly tailored to industrial CPS, and propose a
systematic framework for robustness evaluation. Our framework simulates
realistic disturbances, such as sensor drift, noise and irregular sampling,
enabling thorough robustness analyses of forecasting models on real-world CPS
datasets. The robustness definition provides a standardized score to quantify
and compare model performance across diverse datasets, assisting in informed
model selection and architecture design. Through extensive empirical studies
evaluating prominent DL architectures (including recurrent, convolutional,
attention-based, modular, and structured state-space models) we demonstrate the
applicability and effectiveness of our approach. We publicly release our
robustness benchmark to encourage further research and reproducibility.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 14:50:48 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Windmann",
"Alexander",
""
],
[
"Steude",
"Henrik",
""
],
[
"Boschmann",
"Daniel",
""
],
[
"Niggemann",
"Oliver",
""
]
] | TITLE: Quantifying Robustness: A Benchmarking Framework for Deep Learning
Forecasting in Cyber-Physical Systems
ABSTRACT: Cyber-Physical Systems (CPS) in domains such as manufacturing and energy
distribution generate complex time series data crucial for Prognostics and
Health Management (PHM). While Deep Learning (DL) methods have demonstrated
strong forecasting capabilities, their adoption in industrial CPS remains
limited due to insufficient robustness. Existing robustness evaluations primarily
focus on formal verification or adversarial perturbations, inadequately
representing the complexities encountered in real-world CPS scenarios. To
address this, we introduce a practical robustness definition grounded in
distributional robustness, explicitly tailored to industrial CPS, and propose a
systematic framework for robustness evaluation. Our framework simulates
realistic disturbances, such as sensor drift, noise and irregular sampling,
enabling thorough robustness analyses of forecasting models on real-world CPS
datasets. The robustness definition provides a standardized score to quantify
and compare model performance across diverse datasets, assisting in informed
model selection and architecture design. Through extensive empirical studies
evaluating prominent DL architectures (including recurrent, convolutional,
attention-based, modular, and structured state-space models) we demonstrate the
applicability and effectiveness of our approach. We publicly release our
robustness benchmark to encourage further research and reproducibility.
|
2504.03497 | Alex Young | Alex Young and Luan Vin\'icius Fiorio and Bo Yang and Boris Karanov
and Wim van Houtum and Ronald M. Aarts | Hybrid Real- and Complex-valued Neural Network Architecture | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | We propose a \emph{hybrid} real- and complex-valued \emph{neural network}
(HNN) architecture, designed to combine the computational efficiency of
real-valued processing with the ability to effectively handle complex-valued
data. We illustrate the limitations of using real-valued neural networks
(RVNNs) for inherently complex-valued problems by showing how such a network learnt to
perform complex-valued convolution, but with notable inefficiencies stemming
from its real-valued constraints. To create the HNN, we propose to use building
blocks containing both real- and complex-valued paths, where information
between domains is exchanged through domain conversion functions. We also
introduce novel complex-valued activation functions, with higher generalisation
and parameterisation efficiency. HNN-specific architecture search techniques
are described to navigate the larger solution space. Experiments with the
AudioMNIST dataset demonstrate that the HNN reduces cross-entropy loss and
consumes fewer parameters compared to an RVNN for all considered cases. Such
results highlight the potential for the use of partially complex-valued
processing in neural networks and applications for HNNs in many signal
processing domains.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 14:52:44 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Young",
"Alex",
""
],
[
"Fiorio",
"Luan Vinícius",
""
],
[
"Yang",
"Bo",
""
],
[
"Karanov",
"Boris",
""
],
[
"van Houtum",
"Wim",
""
],
[
"Aarts",
"Ronald M.",
""
]
] | TITLE: Hybrid Real- and Complex-valued Neural Network Architecture
ABSTRACT: We propose a \emph{hybrid} real- and complex-valued \emph{neural network}
(HNN) architecture, designed to combine the computational efficiency of
real-valued processing with the ability to effectively handle complex-valued
data. We illustrate the limitations of using real-valued neural networks
(RVNNs) for inherently complex-valued problems by showing how such a network learnt to
perform complex-valued convolution, but with notable inefficiencies stemming
from its real-valued constraints. To create the HNN, we propose to use building
blocks containing both real- and complex-valued paths, where information
between domains is exchanged through domain conversion functions. We also
introduce novel complex-valued activation functions, with higher generalisation
and parameterisation efficiency. HNN-specific architecture search techniques
are described to navigate the larger solution space. Experiments with the
AudioMNIST dataset demonstrate that the HNN reduces cross-entropy loss and
consumes fewer parameters compared to an RVNN for all considered cases. Such
results highlight the potential for the use of partially complex-valued
processing in neural networks and applications for HNNs in many signal
processing domains.
|
2504.03501 | Ilan Naiman | Ilan Naiman, Emanuel Ben-Baruch, Oron Anschel, Alon Shoshan, Igor
Kviatkovsky, Manoj Aggarwal, Gerard Medioni | LV-MAE: Learning Long Video Representations through Masked-Embedding
Autoencoders | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In this work, we introduce long-video masked-embedding autoencoders (LV-MAE),
a self-supervised learning framework for long video representation. Our
approach treats short- and long-span dependencies as two separate tasks. Such
decoupling allows for more intuitive video processing, where short-span
spatiotemporal primitives are first encoded and are then used to capture
long-range dependencies across consecutive video segments. To achieve this, we
leverage advanced off-the-shelf multimodal encoders to extract representations
from short segments within the long video, followed by pre-training a
masked-embedding autoencoder capturing high-level interactions across segments.
LV-MAE is highly efficient to train and enables the processing of much longer
videos by alleviating the constraint on the number of input frames.
Furthermore, unlike existing methods that typically pre-train on short-video
datasets, our approach offers self-supervised pre-training using long video
samples (e.g., 20+ minute video clips) at scale. Using LV-MAE representations,
we achieve state-of-the-art results on three long-video benchmarks -- LVU,
COIN, and Breakfast -- employing only a simple classification head for either
attentive or linear probing. Finally, to assess LV-MAE pre-training and
visualize its reconstruction quality, we leverage the video-language aligned
space of short video representations to monitor LV-MAE through video-text
retrieval.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 14:56:27 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Naiman",
"Ilan",
""
],
[
"Ben-Baruch",
"Emanuel",
""
],
[
"Anschel",
"Oron",
""
],
[
"Shoshan",
"Alon",
""
],
[
"Kviatkovsky",
"Igor",
""
],
[
"Aggarwal",
"Manoj",
""
],
[
"Medioni",
"Gerard",
""
]
] | TITLE: LV-MAE: Learning Long Video Representations through Masked-Embedding
Autoencoders
ABSTRACT: In this work, we introduce long-video masked-embedding autoencoders (LV-MAE),
a self-supervised learning framework for long video representation. Our
approach treats short- and long-span dependencies as two separate tasks. Such
decoupling allows for more intuitive video processing, where short-span
spatiotemporal primitives are first encoded and are then used to capture
long-range dependencies across consecutive video segments. To achieve this, we
leverage advanced off-the-shelf multimodal encoders to extract representations
from short segments within the long video, followed by pre-training a
masked-embedding autoencoder capturing high-level interactions across segments.
LV-MAE is highly efficient to train and enables the processing of much longer
videos by alleviating the constraint on the number of input frames.
Furthermore, unlike existing methods that typically pre-train on short-video
datasets, our approach offers self-supervised pre-training using long video
samples (e.g., 20+ minute video clips) at scale. Using LV-MAE representations,
we achieve state-of-the-art results on three long-video benchmarks -- LVU,
COIN, and Breakfast -- employing only a simple classification head for either
attentive or linear probing. Finally, to assess LV-MAE pre-training and
visualize its reconstruction quality, we leverage the video-language aligned
space of short video representations to monitor LV-MAE through video-text
retrieval.
|
2504.03510 | Shu Tan | Tan Shu, Li Shen | FADConv: A Frequency-Aware Dynamic Convolution for Farmland
Non-agriculturalization Identification and Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Cropland non-agriculturalization refers to the conversion of arable land into
non-agricultural uses such as forests, residential areas, and construction
sites. This phenomenon not only directly leads to the loss of cropland
resources but also poses systemic threats to food security and agricultural
sustainability. Accurate identification of cropland and non-cropland areas is
crucial for detecting and addressing this issue. Traditional CNNs employ static
convolution layers, while dynamic convolution studies demonstrate that
adaptively weighting multiple convolutional kernels through attention
mechanisms can enhance accuracy. However, existing dynamic convolution methods
relying on Global Average Pooling (GAP) for attention weight allocation suffer
from information loss, limiting segmentation precision. This paper proposes
Frequency-Aware Dynamic Convolution (FADConv) and a Frequency Attention (FAT)
module to address these limitations. Building upon the foundational structure
of dynamic convolution, we designed FADConv by integrating 2D Discrete Cosine
Transform (2D DCT) to capture frequency domain features and fuse them. FAT
module generates high-quality attention weights that replace the traditional
GAP method, making the combination of dynamic convolution kernels more
reasonable. Experiments on the GID and Hi-CNA datasets demonstrate that FADConv
significantly improves segmentation accuracy with minimal computational
overhead. For instance, ResNet18 with FADConv achieves 1.9% and 2.7% increases
in F1-score and IoU for cropland segmentation on GID, with only 58.87M
additional MAdds. Compared to other dynamic convolution approaches, FADConv
exhibits superior performance in cropland segmentation tasks.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 15:13:37 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Shu",
"Tan",
""
],
[
"Shen",
"Li",
""
]
] | TITLE: FADConv: A Frequency-Aware Dynamic Convolution for Farmland
Non-agriculturalization Identification and Segmentation
ABSTRACT: Cropland non-agriculturalization refers to the conversion of arable land into
non-agricultural uses such as forests, residential areas, and construction
sites. This phenomenon not only directly leads to the loss of cropland
resources but also poses systemic threats to food security and agricultural
sustainability. Accurate identification of cropland and non-cropland areas is
crucial for detecting and addressing this issue. Traditional CNNs employ static
convolution layers, while dynamic convolution studies demonstrate that
adaptively weighting multiple convolutional kernels through attention
mechanisms can enhance accuracy. However, existing dynamic convolution methods
relying on Global Average Pooling (GAP) for attention weight allocation suffer
from information loss, limiting segmentation precision. This paper proposes
Frequency-Aware Dynamic Convolution (FADConv) and a Frequency Attention (FAT)
module to address these limitations. Building upon the foundational structure
of dynamic convolution, we designed FADConv by integrating 2D Discrete Cosine
Transform (2D DCT) to capture frequency domain features and fuse them. FAT
module generates high-quality attention weights that replace the traditional
GAP method,making the combination between dynamic convolution kernels more
reasonable.Experiments on the GID and Hi-CNA datasets demonstrate that FADConv
significantly improves segmentation accuracy with minimal computational
overhead. For instance, ResNet18 with FADConv achieves 1.9% and 2.7% increases
in F1-score and IoU for cropland segmentation on GID, with only 58.87M
additional MAdds. Compared to other dynamic convolution approaches, FADConv
exhibits superior performance in cropland segmentation tasks.
|
2504.03520 | Hazem Ibrahim | Chen Wei Kuo, Kevin Chu, Nouar AlDahoul, Hazem Ibrahim, Talal Rahwan,
Yasir Zaki | Neutralizing the Narrative: AI-Powered Debiasing of Online News Articles | 23 pages, 3 figures | null | null | null | cs.CL cs.CY | http://creativecommons.org/licenses/by/4.0/ | Bias in news reporting significantly impacts public perception, particularly
regarding crime, politics, and societal issues. Traditional bias detection
methods, predominantly reliant on human moderation, suffer from subjective
interpretations and scalability constraints. Here, we introduce an AI-driven
framework leveraging advanced large language models (LLMs), specifically
GPT-4o, GPT-4o Mini, Gemini Pro, Gemini Flash, Llama 8B, and Llama 3B, to
systematically identify and mitigate biases in news articles. To this end, we
collect an extensive dataset consisting of over 30,000 crime-related articles
from five politically diverse news sources spanning a decade (2013-2023). Our
approach employs a two-stage methodology: (1) bias detection, where each LLM
scores and justifies biased content at the paragraph level, validated through
human evaluation for ground truth establishment, and (2) iterative debiasing
using GPT-4o Mini, verified by both automated reassessment and human reviewers.
Empirical results indicate GPT-4o Mini's superior accuracy in bias detection
and effectiveness in debiasing. Furthermore, our analysis reveals temporal and
geographical variations in media bias correlating with socio-political dynamics
and real-world events. This study contributes to scalable computational
methodologies for bias mitigation, promoting fairness and accountability in
news reporting.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 15:17:53 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Kuo",
"Chen Wei",
""
],
[
"Chu",
"Kevin",
""
],
[
"AlDahoul",
"Nouar",
""
],
[
"Ibrahim",
"Hazem",
""
],
[
"Rahwan",
"Talal",
""
],
[
"Zaki",
"Yasir",
""
]
] | TITLE: Neutralizing the Narrative: AI-Powered Debiasing of Online News Articles
ABSTRACT: Bias in news reporting significantly impacts public perception, particularly
regarding crime, politics, and societal issues. Traditional bias detection
methods, predominantly reliant on human moderation, suffer from subjective
interpretations and scalability constraints. Here, we introduce an AI-driven
framework leveraging advanced large language models (LLMs), specifically
GPT-4o, GPT-4o Mini, Gemini Pro, Gemini Flash, Llama 8B, and Llama 3B, to
systematically identify and mitigate biases in news articles. To this end, we
collect an extensive dataset consisting of over 30,000 crime-related articles
from five politically diverse news sources spanning a decade (2013-2023). Our
approach employs a two-stage methodology: (1) bias detection, where each LLM
scores and justifies biased content at the paragraph level, validated through
human evaluation for ground truth establishment, and (2) iterative debiasing
using GPT-4o Mini, verified by both automated reassessment and human reviewers.
Empirical results indicate GPT-4o Mini's superior accuracy in bias detection
and effectiveness in debiasing. Furthermore, our analysis reveals temporal and
geographical variations in media bias correlating with socio-political dynamics
and real-world events. This study contributes to scalable computational
methodologies for bias mitigation, promoting fairness and accountability in
news reporting.
|
2504.03546 | Khai Le-Duc | Khai Le-Duc, Tuyen Tran, Bach Phan Tat, Nguyen Kim Hai Bui, Quan Dang,
Hung-Phong Tran, Thanh-Thuy Nguyen, Ly Nguyen, Tuan-Minh Phan, Thi Thu Phuong
Tran, Chris Ngo, Nguyen X. Khanh, Thanh Nguyen-Tang | MultiMed-ST: Large-scale Many-to-many Multilingual Medical Speech
Translation | Preprint, 122 pages | null | null | null | cs.CL cs.AI cs.LG cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | Multilingual speech translation (ST) in the medical domain enhances patient
care by enabling efficient communication across language barriers, alleviating
specialized workforce shortages, and facilitating improved diagnosis and
treatment, particularly during pandemics. In this work, we present the first
systematic study on medical ST, to our best knowledge, by releasing
MultiMed-ST, a large-scale ST dataset for the medical domain, spanning all
translation directions in five languages: Vietnamese, English, German, French,
Traditional Chinese and Simplified Chinese, together with the models. With
290,000 samples, our dataset is the largest medical machine translation (MT)
dataset and the largest many-to-many multilingual ST among all domains.
Secondly, we present the most extensive analysis study in ST research to date,
including: empirical baselines, bilingual-multilingual comparative study,
end-to-end vs. cascaded comparative study, task-specific vs. multi-task
sequence-to-sequence (seq2seq) comparative study, code-switch analysis, and
quantitative-qualitative error analysis. All code, data, and models are
available online: https://github.com/leduckhai/MultiMed-ST.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 15:49:17 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Le-Duc",
"Khai",
""
],
[
"Tran",
"Tuyen",
""
],
[
"Tat",
"Bach Phan",
""
],
[
"Bui",
"Nguyen Kim Hai",
""
],
[
"Dang",
"Quan",
""
],
[
"Tran",
"Hung-Phong",
""
],
[
"Nguyen",
"Thanh-Thuy",
""
],
[
"Nguyen",
"Ly",
""
],
[
"Phan",
"Tuan-Minh",
""
],
[
"Tran",
"Thi Thu Phuong",
""
],
[
"Ngo",
"Chris",
""
],
[
"Khanh",
"Nguyen X.",
""
],
[
"Nguyen-Tang",
"Thanh",
""
]
] | TITLE: MultiMed-ST: Large-scale Many-to-many Multilingual Medical Speech
Translation
ABSTRACT: Multilingual speech translation (ST) in the medical domain enhances patient
care by enabling efficient communication across language barriers, alleviating
specialized workforce shortages, and facilitating improved diagnosis and
treatment, particularly during pandemics. In this work, we present the first
systematic study on medical ST, to our best knowledge, by releasing
MultiMed-ST, a large-scale ST dataset for the medical domain, spanning all
translation directions in five languages: Vietnamese, English, German, French,
Traditional Chinese and Simplified Chinese, together with the models. With
290,000 samples, our dataset is the largest medical machine translation (MT)
dataset and the largest many-to-many multilingual ST dataset among all domains.
Secondly, we present the most extensive analysis study in ST research to date,
including: empirical baselines, bilingual-multilingual comparative study,
end-to-end vs. cascaded comparative study, task-specific vs. multi-task
sequence-to-sequence (seq2seq) comparative study, code-switch analysis, and
quantitative-qualitative error analysis. All code, data, and models are
available online: https://github.com/leduckhai/MultiMed-ST.
|
2504.03563 | Kuan-Chuan Peng | Kaidong Li, Tianxiao Zhang, Kuan-Chuan Peng, Guanghui Wang | PF3Det: A Prompted Foundation Feature Assisted Visual LiDAR 3D Detector | This paper is accepted to the CVPR 2025 Workshop on Distillation of
Foundation Models for Autonomous Driving (WDFM-AD) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D object detection is crucial for autonomous driving, leveraging both LiDAR
point clouds for precise depth information and camera images for rich semantic
information. Therefore, the multi-modal methods that combine both modalities
offer more robust detection results. However, efficiently fusing LiDAR points
and images remains challenging due to the domain gaps. In addition, the
performance of many models is limited by the amount of high-quality labeled
data, which is expensive to create. The recent advances in foundation models,
which use large-scale pre-training on different modalities, enable better
multi-modal fusion. Combining the prompt engineering techniques for efficient
training, we propose the Prompted Foundational 3D Detector (PF3Det), which
integrates foundation model encoders and soft prompts to enhance LiDAR-camera
feature fusion. PF3Det achieves the state-of-the-art results under limited
training data, improving NDS by 1.19% and mAP by 2.42% on the nuScenes dataset,
demonstrating its efficiency in 3D detection.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 16:11:25 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Li",
"Kaidong",
""
],
[
"Zhang",
"Tianxiao",
""
],
[
"Peng",
"Kuan-Chuan",
""
],
[
"Wang",
"Guanghui",
""
]
] | TITLE: PF3Det: A Prompted Foundation Feature Assisted Visual LiDAR 3D Detector
ABSTRACT: 3D object detection is crucial for autonomous driving, leveraging both LiDAR
point clouds for precise depth information and camera images for rich semantic
information. Therefore, the multi-modal methods that combine both modalities
offer more robust detection results. However, efficiently fusing LiDAR points
and images remains challenging due to the domain gaps. In addition, the
performance of many models is limited by the amount of high-quality labeled
data, which is expensive to create. The recent advances in foundation models,
which use large-scale pre-training on different modalities, enable better
multi-modal fusion. Combining the prompt engineering techniques for efficient
training, we propose the Prompted Foundational 3D Detector (PF3Det), which
integrates foundation model encoders and soft prompts to enhance LiDAR-camera
feature fusion. PF3Det achieves the state-of-the-art results under limited
training data, improving NDS by 1.19% and mAP by 2.42% on the nuScenes dataset,
demonstrating its efficiency in 3D detection.
|
2504.03581 | Xiangnan Feng | Xiangnan Feng, Johannes Wachs, Simone Daniotti, Frank Neffke | The building blocks of software work explain coding careers and language
popularity | 31 pages, 12 figures | null | null | null | econ.GN cs.CY q-fin.EC | http://creativecommons.org/licenses/by/4.0/ | Recent waves of technological transformation have fueled debates about the
changing nature of work. Yet to understand the future of work, we need to know
more about what people actually do in their jobs, going beyond educational
credentials or job descriptions. Here we analyze work in the global software
industry using tens of millions of Question and Answer posts on Stack Overflow
to create a fine-grained taxonomy of software tasks, the elementary building
blocks of software development work. These tasks predict salaries and job
requirements in real-world job ads. We also observe how individuals learn
within tasks and diversify into new tasks. Tasks that people acquire tend to be
related to their old ones, but of lower value, suggesting that they are easier.
An exception is users of Python, an increasingly popular programming language
known for its versatility. Python users enter tasks that tend to be
higher-value, providing an explanation for the language's growing popularity
based on the tasks Python enables its users to perform. In general, these
insights demonstrate the value of task taxonomies extracted at scale from large
datasets: they offer high resolution and near real-time descriptions of
changing labor markets. In the case of software tasks, they map such changes
for jobs at the forefront of a digitizing global economy.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 16:39:20 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Feng",
"Xiangnan",
""
],
[
"Wachs",
"Johannes",
""
],
[
"Daniotti",
"Simone",
""
],
[
"Neffke",
"Frank",
""
]
] | TITLE: The building blocks of software work explain coding careers and language
popularity
ABSTRACT: Recent waves of technological transformation have fueled debates about the
changing nature of work. Yet to understand the future of work, we need to know
more about what people actually do in their jobs, going beyond educational
credentials or job descriptions. Here we analyze work in the global software
industry using tens of millions of Question and Answer posts on Stack Overflow
to create a fine-grained taxonomy of software tasks, the elementary building
blocks of software development work. These tasks predict salaries and job
requirements in real-world job ads. We also observe how individuals learn
within tasks and diversify into new tasks. Tasks that people acquire tend to be
related to their old ones, but of lower value, suggesting that they are easier.
An exception is users of Python, an increasingly popular programming language
known for its versatility. Python users enter tasks that tend to be
higher-value, providing an explanation for the language's growing popularity
based on the tasks Python enables its users to perform. In general, these
insights demonstrate the value of task taxonomies extracted at scale from large
datasets: they offer high resolution and near real-time descriptions of
changing labor markets. In the case of software tasks, they map such changes
for jobs at the forefront of a digitizing global economy.
|
2504.03589 | Badhan Kumar Das | Badhan Kumar Das, Gengyan Zhao, Han Liu, Thomas J. Re, Dorin
Comaniciu, Eli Gibson, Andreas Maier | AdaViT: Adaptive Vision Transformer for Flexible Pretrain and Finetune
with Variable 3D Medical Image Modalities | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Pretraining techniques, whether supervised or self-supervised, are widely
used in deep learning to enhance model performance. In real-world clinical
scenarios, different sets of magnetic resonance (MR) contrasts are often
acquired for different subjects/cases, creating challenges for deep learning
models that assume consistent input modalities across all cases and between
pretraining and finetuning. Existing methods struggle to maintain performance
when there is an input modality/contrast set mismatch with the pretrained
model, often resulting in degraded accuracy. We propose an adaptive Vision
Transformer (AdaViT) framework capable of handling a variable set of input
modalities for each case. We utilize a dynamic tokenizer to encode different
input image modalities to tokens and take advantage of the characteristics of
the transformer to build an attention mechanism across variable-length token
sequences. Through extensive experiments, we demonstrate that this
architecture effectively transfers supervised pretrained models to new
datasets with different input modality/contrast sets, resulting in superior
performance on zero-shot testing, few-shot finetuning, and backward transfer
in brain infarct and brain tumor segmentation tasks. Additionally, for
self-supervised pretraining, the proposed method is able to maximize the use
of the pretraining data and facilitates transfer to diverse downstream tasks
with variable sets of input modalities.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 16:57:06 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Das",
"Badhan Kumar",
""
],
[
"Zhao",
"Gengyan",
""
],
[
"Liu",
"Han",
""
],
[
"Re",
"Thomas J.",
""
],
[
"Comaniciu",
"Dorin",
""
],
[
"Gibson",
"Eli",
""
],
[
"Maier",
"Andreas",
""
]
] | TITLE: AdaViT: Adaptive Vision Transformer for Flexible Pretrain and Finetune
with Variable 3D Medical Image Modalities
ABSTRACT: Pretraining techniques, whether supervised or self-supervised, are widely
used in deep learning to enhance model performance. In real-world clinical
scenarios, different sets of magnetic resonance (MR) contrasts are often
acquired for different subjects/cases, creating challenges for deep learning
models that assume consistent input modalities across all cases and between
pretraining and finetuning. Existing methods struggle to maintain performance
when there is an input modality/contrast set mismatch with the pretrained
model, often resulting in degraded accuracy. We propose an adaptive Vision
Transformer (AdaViT) framework capable of handling a variable set of input
modalities for each case. We utilize a dynamic tokenizer to encode different
input image modalities to tokens and take advantage of the characteristics of
the transformer to build an attention mechanism across variable-length token
sequences. Through extensive experiments, we demonstrate that this
architecture effectively transfers supervised pretrained models to new
datasets with different input modality/contrast sets, resulting in superior
performance on zero-shot testing, few-shot finetuning, and backward transfer
in brain infarct and brain tumor segmentation tasks. Additionally, for
self-supervised pretraining, the proposed method is able to maximize the use
of the pretraining data and facilitates transfer to diverse downstream tasks
with variable sets of input modalities.
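A minimal sketch of what a dynamic tokenizer over a variable modality set
could look like, assuming per-modality linear patch projections plus learned
modality embeddings; the names (DynamicTokenizer, "t1", "flair") and shapes
are hypothetical, not AdaViT's actual implementation.

import torch
import torch.nn as nn

class DynamicTokenizer(nn.Module):
    """Hypothetical sketch: tokenize whichever MR contrasts are present."""

    def __init__(self, modalities, patch_dim: int, embed_dim: int):
        super().__init__()
        self.proj = nn.ModuleDict({m: nn.Linear(patch_dim, embed_dim)
                                   for m in modalities})
        # A learned embedding marks which modality each token came from.
        self.mod_embed = nn.ParameterDict({m: nn.Parameter(torch.zeros(embed_dim))
                                           for m in modalities})

    def forward(self, inputs: dict):
        # inputs: {modality: (B, num_patches, patch_dim)} -- any subset works,
        # so the downstream transformer sees a variable-length token sequence.
        tokens = [self.proj[m](x) + self.mod_embed[m] for m, x in inputs.items()]
        return torch.cat(tokens, dim=1)

tok = DynamicTokenizer(["t1", "t2", "flair"], patch_dim=256, embed_dim=128)
seq = tok({"t1": torch.randn(2, 64, 256), "flair": torch.randn(2, 64, 256)})
print(seq.shape)  # torch.Size([2, 128, 128]); "t2" simply absent for this case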
|
2504.03600 | Jun Ma | Jun Ma, Zongxin Yang, Sumin Kim, Bihui Chen, Mohammed Baharoon,
Adibvafa Fallahpour, Reza Asakereh, Hongwei Lyu, and Bo Wang | MedSAM2: Segment Anything in 3D Medical Images and Videos | https://medsam2.github.io/ | null | null | null | eess.IV cs.AI cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Medical image and video segmentation is a critical task for precision
medicine, which has witnessed considerable progress in developing task or
modality-specific and generalist models for 2D images. However, there have been
limited studies on building general-purpose models for 3D images and videos
with comprehensive user studies. Here, we present MedSAM2, a promptable
segmentation foundation model for 3D image and video segmentation. The model is
developed by fine-tuning the Segment Anything Model 2 on a large medical
dataset with over 455,000 3D image-mask pairs and 76,000 frames, outperforming
previous models across a wide range of organs, lesions, and imaging modalities.
Furthermore, we implement a human-in-the-loop pipeline to facilitate the
creation of large-scale datasets, resulting in, to the best of our knowledge,
the most extensive user study to date, involving the annotation of 5,000 CT
lesions, 3,984 liver MRI lesions, and 251,550 echocardiogram video frames,
demonstrating that MedSAM2 can reduce manual costs by more than 85%. MedSAM2 is
also integrated into widely used platforms with user-friendly interfaces for
local and cloud deployment, making it a practical tool for supporting
efficient, scalable, and high-quality segmentation in both research and
healthcare environments.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 17:13:37 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Ma",
"Jun",
""
],
[
"Yang",
"Zongxin",
""
],
[
"Kim",
"Sumin",
""
],
[
"Chen",
"Bihui",
""
],
[
"Baharoon",
"Mohammed",
""
],
[
"Fallahpour",
"Adibvafa",
""
],
[
"Asakereh",
"Reza",
""
],
[
"Lyu",
"Hongwei",
""
],
[
"Wang",
"Bo",
""
]
] | TITLE: MedSAM2: Segment Anything in 3D Medical Images and Videos
ABSTRACT: Medical image and video segmentation is a critical task for precision
medicine, which has witnessed considerable progress in developing task or
modality-specific and generalist models for 2D images. However, there have been
limited studies on building general-purpose models for 3D images and videos
with comprehensive user studies. Here, we present MedSAM2, a promptable
segmentation foundation model for 3D image and video segmentation. The model is
developed by fine-tuning the Segment Anything Model 2 on a large medical
dataset with over 455,000 3D image-mask pairs and 76,000 frames, outperforming
previous models across a wide range of organs, lesions, and imaging modalities.
Furthermore, we implement a human-in-the-loop pipeline to facilitate the
creation of large-scale datasets, resulting in, to the best of our knowledge,
the most extensive user study to date, involving the annotation of 5,000 CT
lesions, 3,984 liver MRI lesions, and 251,550 echocardiogram video frames,
demonstrating that MedSAM2 can reduce manual costs by more than 85%. MedSAM2 is
also integrated into widely used platforms with user-friendly interfaces for
local and cloud deployment, making it a practical tool for supporting
efficient, scalable, and high-quality segmentation in both research and
healthcare environments.
|
2504.03602 | Kai Lascheit | Kai Lascheit, Daniel Barath, Marc Pollefeys, Leonidas Guibas, Francis
Engelmann | Robust Human Registration with Body Part Segmentation on Noisy Point
Clouds | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Registering human meshes to 3D point clouds is essential for applications
such as augmented reality and human-robot interaction but often yields
imprecise results due to noise and background clutter in real-world data. We
introduce a hybrid approach that incorporates body-part segmentation into the
mesh fitting process, enhancing both human pose estimation and segmentation
accuracy. Our method first assigns body part labels to individual points, which
then guide a two-step SMPL-X fitting: initial pose and orientation estimation
using body part centroids, followed by global refinement of the point cloud
alignment. Additionally, we demonstrate that the fitted human mesh can refine
body part labels, leading to improved segmentation. Evaluations on the
cluttered and noisy real-world datasets InterCap, EgoBody, and BEHAVE show that
our approach significantly outperforms prior methods in both pose estimation
and segmentation accuracy. Code and results are available on our project
website: https://segfit.github.io
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 17:17:33 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Lascheit",
"Kai",
""
],
[
"Barath",
"Daniel",
""
],
[
"Pollefeys",
"Marc",
""
],
[
"Guibas",
"Leonidas",
""
],
[
"Engelmann",
"Francis",
""
]
] | TITLE: Robust Human Registration with Body Part Segmentation on Noisy Point
Clouds
ABSTRACT: Registering human meshes to 3D point clouds is essential for applications
such as augmented reality and human-robot interaction but often yields
imprecise results due to noise and background clutter in real-world data. We
introduce a hybrid approach that incorporates body-part segmentation into the
mesh fitting process, enhancing both human pose estimation and segmentation
accuracy. Our method first assigns body part labels to individual points, which
then guide a two-step SMPL-X fitting: initial pose and orientation estimation
using body part centroids, followed by global refinement of the point cloud
alignment. Additionally, we demonstrate that the fitted human mesh can refine
body part labels, leading to improved segmentation. Evaluations on the
cluttered and noisy real-world datasets InterCap, EgoBody, and BEHAVE show that
our approach significantly outperforms prior methods in both pose estimation
and segmentation accuracy. Code and results are available on our project
website: https://segfit.github.io
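The centroid-based initialization step lends itself to a compact
illustration: given part labels on the cloud and template body-part
centroids, a rigid pose can be estimated with the Kabsch algorithm. This is
a hedged sketch of that first step only (the full SMPL-X refinement is
omitted), and template_centroids keys are assumed to match the label values.

import numpy as np

def initial_pose_from_centroids(pc_points, pc_labels, template_centroids):
    # pc_points: (M, 3) point cloud; pc_labels: (M,) part label per point,
    # assumed to use the same keys as template_centroids (hypothetical names).
    parts = sorted(template_centroids)
    src = np.stack([template_centroids[p] for p in parts])             # template
    dst = np.stack([pc_points[pc_labels == p].mean(axis=0) for p in parts])
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    u, _, vt = np.linalg.svd(src_c.T @ dst_c)                          # Kabsch
    d = np.sign(np.linalg.det(vt.T @ u.T))         # guard against reflections
    rot = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
    trans = dst.mean(0) - rot @ src.mean(0)
    return rot, trans   # rigid init, to be refined by full SMPL-X fitting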
|
2504.03607 | Suhas Lohit | Yuyang Hu, Suhas Lohit, Ulugbek S. Kamilov, Tim K. Marks | Multimodal Diffusion Bridge with Attention-Based SAR Fusion for
Satellite Image Cloud Removal | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep learning has achieved some success in addressing the challenge of cloud
removal in optical satellite images by fusing them with synthetic aperture radar
(SAR) images. Recently, diffusion models have emerged as powerful tools for
cloud removal, delivering higher-quality estimation by sampling from cloud-free
distributions, compared to earlier methods. However, diffusion models initiate
sampling from pure Gaussian noise, which complicates the sampling trajectory
and results in suboptimal performance. Also, current methods fall short in
effectively fusing SAR and optical data. To address these limitations, we
propose Diffusion Bridges for Cloud Removal, DB-CR, which directly bridges
between the cloudy and cloud-free image distributions. In addition, we propose
a novel multimodal diffusion bridge architecture with a two-branch backbone for
multimodal image restoration, incorporating an efficient backbone and dedicated
cross-modality fusion blocks to effectively extract and fuse features from
synthetic aperture radar (SAR) and optical images. By formulating cloud removal
as a diffusion-bridge problem and leveraging this tailored architecture, DB-CR
achieves high-fidelity results while being computationally efficient. We
evaluated DB-CR on the SEN12MS-CR cloud-removal dataset, demonstrating that it
achieves state-of-the-art results.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 17:25:49 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Hu",
"Yuyang",
""
],
[
"Lohit",
"Suhas",
""
],
[
"Kamilov",
"Ulugbek S.",
""
],
[
"Marks",
"Tim K.",
""
]
] | TITLE: Multimodal Diffusion Bridge with Attention-Based SAR Fusion for
Satellite Image Cloud Removal
ABSTRACT: Deep learning has achieved some success in addressing the challenge of cloud
removal in optical satellite images by fusing them with synthetic aperture radar
(SAR) images. Recently, diffusion models have emerged as powerful tools for
cloud removal, delivering higher-quality estimation by sampling from cloud-free
distributions, compared to earlier methods. However, diffusion models initiate
sampling from pure Gaussian noise, which complicates the sampling trajectory
and results in suboptimal performance. Also, current methods fall short in
effectively fusing SAR and optical data. To address these limitations, we
propose Diffusion Bridges for Cloud Removal, DB-CR, which directly bridges
between the cloudy and cloud-free image distributions. In addition, we propose
a novel multimodal diffusion bridge architecture with a two-branch backbone for
multimodal image restoration, incorporating an efficient backbone and dedicated
cross-modality fusion blocks to effectively extract and fuse features from
synthetic aperture radar (SAR) and optical images. By formulating cloud removal
as a diffusion-bridge problem and leveraging this tailored architecture, DB-CR
achieves high-fidelity results while being computationally efficient. We
evaluated DB-CR on the SEN12MS-CR cloud-removal dataset, demonstrating that it
achieves state-of-the-art results.
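The abstract does not give DB-CR's exact bridge formulation; the sketch below
shows a generic Brownian-bridge interpolant between paired cloudy and
cloud-free images, which is the standard way a diffusion bridge avoids
starting from pure Gaussian noise. The sigma value and endpoint convention
are assumptions.

import torch

def bridge_sample(x_cloudy, x_clean, t, sigma=0.1):
    # Brownian-bridge interpolant between a paired cloudy/cloud-free image:
    # the trajectory starts at the cloudy image (t=0) and ends at the clean
    # one (t=1), so sampling never has to begin from pure Gaussian noise.
    t = t.view(-1, 1, 1, 1)                      # t: (B,) in [0, 1]
    mean = (1.0 - t) * x_cloudy + t * x_clean
    std = sigma * torch.sqrt(t * (1.0 - t))      # noise vanishes at endpoints
    return mean + std * torch.randn_like(x_clean)

# Training step sketch: x_t = bridge_sample(cloudy, clean, t) and the network
# regresses the clean image from (x_t, t) plus fused SAR features:
# loss = F.mse_loss(model(x_t, t, sar_feats), clean)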
|
2504.03612 | Bingxiang He | Bingxiang He, Wenbin Zhang, Jiaxi Song, Cheng Qian, Zixuan Fu, Bowen
Sun, Ning Ding, Haiwen Hong, Longtao Huang, Hui Xue, Ganqu Cui, Wanxiang Che,
Zhiyuan Liu, Maosong Sun | AIR: A Systematic Analysis of Annotations, Instructions, and Response
Pairs in Preference Dataset | 29 pages, 11 figures | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Preference learning is critical for aligning large language models (LLMs)
with human values, yet its success hinges on high-quality datasets comprising
three core components: Preference \textbf{A}nnotations, \textbf{I}nstructions,
and \textbf{R}esponse Pairs. Current approaches conflate these components,
obscuring their individual impacts and hindering systematic optimization. In
this work, we propose \textbf{AIR}, a component-wise analysis framework that
systematically isolates and optimizes each component while evaluating their
synergistic effects. Through rigorous experimentation, AIR reveals actionable
principles: annotation simplicity (point-wise generative scoring), instruction
inference stability (variance-based filtering across LLMs), and response pair
quality (moderate margins + high absolute scores). When combined, these
principles yield +5.3 average gains over the baseline method, even with only 14k
high-quality pairs. Our work shifts preference dataset design from ad hoc
scaling to component-aware optimization, offering a blueprint for efficient,
reproducible alignment.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 17:33:07 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"He",
"Bingxiang",
""
],
[
"Zhang",
"Wenbin",
""
],
[
"Song",
"Jiaxi",
""
],
[
"Qian",
"Cheng",
""
],
[
"Fu",
"Zixuan",
""
],
[
"Sun",
"Bowen",
""
],
[
"Ding",
"Ning",
""
],
[
"Hong",
"Haiwen",
""
],
[
"Huang",
"Longtao",
""
],
[
"Xue",
"Hui",
""
],
[
"Cui",
"Ganqu",
""
],
[
"Che",
"Wanxiang",
""
],
[
"Liu",
"Zhiyuan",
""
],
[
"Sun",
"Maosong",
""
]
] | TITLE: AIR: A Systematic Analysis of Annotations, Instructions, and Response
Pairs in Preference Dataset
ABSTRACT: Preference learning is critical for aligning large language models (LLMs)
with human values, yet its success hinges on high-quality datasets comprising
three core components: Preference \textbf{A}nnotations, \textbf{I}nstructions,
and \textbf{R}esponse Pairs. Current approaches conflate these components,
obscuring their individual impacts and hindering systematic optimization. In
this work, we propose \textbf{AIR}, a component-wise analysis framework that
systematically isolates and optimizes each component while evaluating their
synergistic effects. Through rigorous experimentation, AIR reveals actionable
principles: annotation simplicity (point-wise generative scoring), instruction
inference stability (variance-based filtering across LLMs), and response pair
quality (moderate margins + high absolute scores). When combined, these
principles yield +5.3 average gains over the baseline method, even with only 14k
high-quality pairs. Our work shifts preference dataset design from ad hoc
scaling to component-aware optimization, offering a blueprint for efficient,
reproducible alignment.
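The three stated principles translate naturally into small selection
routines. Below is a hedged sketch of two of them, variance-based instruction
filtering across LLM judges and margin-plus-absolute-score response-pair
selection; all thresholds (max_var, lo, hi, min_abs) are illustrative, not
the paper's values.

import statistics

def filter_instructions(scores_by_instruction, max_var=0.5):
    # Keep instructions whose quality scores agree across several LLM judges
    # (low inter-judge variance); max_var is an illustrative threshold.
    return [ins for ins, scores in scores_by_instruction.items()
            if statistics.pvariance(scores) <= max_var]

def select_pair(responses, lo=1.0, hi=3.0, min_abs=7.0):
    # responses: list of (text, score). Pick a (chosen, rejected) pair with a
    # moderate margin and a high absolute chosen score; thresholds illustrative.
    ranked = sorted(responses, key=lambda r: r[1], reverse=True)
    for rej in ranked[1:]:
        margin = ranked[0][1] - rej[1]
        if lo <= margin <= hi and ranked[0][1] >= min_abs:
            return ranked[0], rej
    return None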
|
2504.03621 | Laziz Hamdi | Laziz Hamdi, Amine Tamasna, Pascal Boisson, Thierry Paquet | VISTA-OCR: Towards generative and interactive end to end OCR models | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | We introduce \textbf{VISTA-OCR} (Vision and Spatially-aware Text Analysis
OCR), a lightweight architecture that unifies text detection and recognition
within a single generative model. Unlike conventional methods that require
separate branches with dedicated parameters for text recognition and detection,
our approach leverages a Transformer decoder to sequentially generate text
transcriptions and their spatial coordinates in a unified branch. Built on an
encoder-decoder architecture, VISTA-OCR is progressively trained, starting with
the visual feature extraction phase, followed by multitask learning with
multimodal token generation. To address the increasing demand for versatile OCR
systems capable of advanced tasks, such as content-based text localization,
we introduce new prompt-controllable OCR tasks during pre-training. To
enhance the model's capabilities, we built a new
dataset composed of real-world examples enriched with bounding box annotations
and synthetic samples. Although recent Vision Large Language Models (VLLMs) can
efficiently perform these tasks, their high computational cost remains a
barrier for practical deployment. In contrast, our VISTA$_{\text{omni}}$
variant processes both handwritten and printed documents with only 150M
parameters, interactively, by prompting. Extensive experiments on multiple
datasets demonstrate that VISTA-OCR achieves better performance compared to
state-of-the-art specialized models on standard OCR tasks while showing strong
potential for more sophisticated OCR applications, addressing the growing need
for interactive OCR systems. All code and annotations for VISTA-OCR will be
made publicly available upon acceptance.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 17:39:53 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Hamdi",
"Laziz",
""
],
[
"Tamasna",
"Amine",
""
],
[
"Boisson",
"Pascal",
""
],
[
"Paquet",
"Thierry",
""
]
] | TITLE: VISTA-OCR: Towards generative and interactive end to end OCR models
ABSTRACT: We introduce \textbf{VISTA-OCR} (Vision and Spatially-aware Text Analysis
OCR), a lightweight architecture that unifies text detection and recognition
within a single generative model. Unlike conventional methods that require
separate branches with dedicated parameters for text recognition and detection,
our approach leverages a Transformer decoder to sequentially generate text
transcriptions and their spatial coordinates in a unified branch. Built on an
encoder-decoder architecture, VISTA-OCR is progressively trained, starting with
the visual feature extraction phase, followed by multitask learning with
multimodal token generation. To address the increasing demand for versatile OCR
systems capable of advanced tasks, such as content-based text localization,
we introduce new prompt-controllable OCR tasks during pre-training. To
enhance the model's capabilities, we built a new
dataset composed of real-world examples enriched with bounding box annotations
and synthetic samples. Although recent Vision Large Language Models (VLLMs) can
efficiently perform these tasks, their high computational cost remains a
barrier for practical deployment. In contrast, our VISTA$_{\text{omni}}$
variant processes both handwritten and printed documents with only 150M
parameters, interactively, by prompting. Extensive experiments on multiple
datasets demonstrate that VISTA-OCR achieves better performance compared to
state-of-the-art specialized models on standard OCR tasks while showing strong
potential for more sophisticated OCR applications, addressing the growing need
for interactive OCR systems. All code and annotations for VISTA-OCR will be
made publicly available upon acceptance.
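One way to picture a unified branch that sequentially generates
transcriptions and their spatial coordinates is a serializer that interleaves
quantized location tokens with character tokens; the <loc_i>/<sep> vocabulary
below is a hypothetical stand-in for VISTA-OCR's actual token set.

def serialize(regions, num_bins=1000):
    # regions: list of (text, (x0, y0, x1, y1)) with coordinates normalized
    # to [0, 1]. The <loc_i>/<sep> vocabulary is a hypothetical stand-in.
    seq = []
    for text, bbox in regions:
        seq += [f"<loc_{round(v * (num_bins - 1))}>" for v in bbox]
        seq += list(text) + ["<sep>"]
    return seq

print(serialize([("Hi", (0.1, 0.2, 0.3, 0.25))]))
# ['<loc_100>', '<loc_200>', '<loc_300>', '<loc_250>', 'H', 'i', '<sep>']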
|
2504.03625 | Ryan G. Dempsey | Ryan G. Dempsey, Jonathan Ethier, Halim Yanikomeroglu | Reciprocity-Aware Convolutional Neural Networks for Map-Based Path Loss
Prediction | 6 pages, 6 figures, 7 tables | null | null | null | cs.LG eess.SP | http://creativecommons.org/licenses/by-sa/4.0/ | Path loss modeling is a widely used technique for estimating point-to-point
losses along a communications link from transmitter (Tx) to receiver (Rx).
Accurate path loss predictions can optimize use of the radio frequency spectrum
and minimize unwanted interference. Modern path loss modeling often leverages
data-driven approaches, using machine learning to train models on drive test
measurement datasets. Drive tests primarily represent downlink scenarios, where
the Tx is located on a building and the Rx is located on a moving vehicle.
Consequently, trained models are frequently reserved for downlink coverage
estimation, lacking representation of uplink scenarios. In this paper, we
demonstrate that data augmentation can be used to train a path loss model that
is generalized to uplink, downlink, and backhaul scenarios, training using only
downlink drive test measurements. By adding a small number of synthetic samples
representing uplink scenarios to the training set, root mean squared error is
reduced by >8 dB on uplink examples in the test set.
| [
{
"version": "v1",
"created": "Fri, 4 Apr 2025 17:44:14 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Dempsey",
"Ryan G.",
""
],
[
"Ethier",
"Jonathan",
""
],
[
"Yanikomeroglu",
"Halim",
""
]
] | TITLE: Reciprocity-Aware Convolutional Neural Networks for Map-Based Path Loss
Prediction
ABSTRACT: Path loss modeling is a widely used technique for estimating point-to-point
losses along a communications link from transmitter (Tx) to receiver (Rx).
Accurate path loss predictions can optimize use of the radio frequency spectrum
and minimize unwanted interference. Modern path loss modeling often leverages
data-driven approaches, using machine learning to train models on drive test
measurement datasets. Drive tests primarily represent downlink scenarios, where
the Tx is located on a building and the Rx is located on a moving vehicle.
Consequently, trained models are frequently reserved for downlink coverage
estimation, lacking representation of uplink scenarios. In this paper, we
demonstrate that data augmentation can be used to train a path loss model that
is generalized to uplink, downlink, and backhaul scenarios, training using only
downlink drive test measurements. By adding a small number of synthetic samples
representing uplink scenarios to the training set, root mean squared error is
reduced by >8 dB on uplink examples in the test set.
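Because the radio channel is reciprocal at a fixed frequency, an uplink
sample can be synthesized from a downlink one by exchanging the transmitter
and receiver endpoints while keeping the measured path loss. A minimal pandas
sketch with hypothetical column names (the paper's exact feature set is not
specified in the abstract):

import pandas as pd

def add_uplink_samples(df: pd.DataFrame, n: int) -> pd.DataFrame:
    # Swap the Tx and Rx endpoint columns on a small sample of downlink rows;
    # the measured path loss column is left untouched, since the channel is
    # reciprocal at a given frequency. Column names here are hypothetical.
    swapped = df.sample(n, random_state=0).rename(columns={
        "tx_lat": "rx_lat", "tx_lon": "rx_lon", "tx_height": "rx_height",
        "rx_lat": "tx_lat", "rx_lon": "tx_lon", "rx_height": "tx_height"})
    return pd.concat([df, swapped], ignore_index=True)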
|
2111.04333 | Su Wang | Su Wang, Zhiliang Wang, Tao Zhou, Xia Yin, Dongqi Han, Han Zhang,
Hongbin Sun, Xingang Shi, Jiahai Yang | threaTrace: Detecting and Tracing Host-based Threats in Node Level
Through Provenance Graph Learning | 13 pages, 6 figures | null | 10.1109/TIFS.2022.3208815 | null | cs.CR cs.LG | http://creativecommons.org/licenses/by/4.0/ | Host-based threats such as Program Attack, Malware Implantation, and Advanced
Persistent Threats (APT) are commonly adopted by modern attackers. Recent
studies propose leveraging the rich contextual information in data provenance
to detect threats in a host. Data provenance is a directed acyclic graph
constructed from system audit data. Nodes in a provenance graph represent
system entities (e.g., $processes$ and $files$) and edges represent system
calls in the direction of information flow. However, previous studies, which
extract features of the whole provenance graph, are not sensitive to the small
number of threat-related entities and thus result in low performance when
hunting stealthy threats.
We present threaTrace, an anomaly-based detector that detects host-based
threats at system entity level without prior knowledge of attack patterns. We
tailor GraphSAGE, an inductive graph neural network, to learn every benign
entity's role in a provenance graph. threaTrace is a real-time system that
scales to monitoring a long-running host and is capable of detecting
host-based intrusions in their early phases. We evaluate threaTrace on three
public datasets. The results show that threaTrace outperforms three
state-of-the-art host intrusion detection systems.
| [
{
"version": "v1",
"created": "Mon, 8 Nov 2021 08:48:26 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Wang",
"Su",
""
],
[
"Wang",
"Zhiliang",
""
],
[
"Zhou",
"Tao",
""
],
[
"Yin",
"Xia",
""
],
[
"Han",
"Dongqi",
""
],
[
"Zhang",
"Han",
""
],
[
"Sun",
"Hongbin",
""
],
[
"Shi",
"Xingang",
""
],
[
"Yang",
"Jiahai",
""
]
] | TITLE: threaTrace: Detecting and Tracing Host-based Threats in Node Level
Through Provenance Graph Learning
ABSTRACT: Host-based threats such as Program Attack, Malware Implantation, and Advanced
Persistent Threats (APT) are commonly adopted by modern attackers. Recent
studies propose leveraging the rich contextual information in data provenance
to detect threats in a host. Data provenance is a directed acyclic graph
constructed from system audit data. Nodes in a provenance graph represent
system entities (e.g., $processes$ and $files$) and edges represent system
calls in the direction of information flow. However, previous studies, which
extract features of the whole provenance graph, are not sensitive to the small
number of threat-related entities and thus result in low performance when
hunting stealthy threats.
We present threaTrace, an anomaly-based detector that detects host-based
threats at system entity level without prior knowledge of attack patterns. We
tailor GraphSAGE, an inductive graph neural network, to learn every benign
entity's role in a provenance graph. threaTrace is a real-time system that
scales to monitoring a long-running host and is capable of detecting
host-based intrusions in their early phases. We evaluate threaTrace on three
public datasets. The results show that threaTrace outperforms three
state-of-the-art host intrusion detection systems.
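threaTrace tailors GraphSAGE to provenance graphs; for orientation, here is
the vanilla GraphSAGE mean-aggregation layer it builds on, in a dense-
adjacency toy form rather than the paper's tailored, scalable version.

import torch
import torch.nn as nn
import torch.nn.functional as F

class SAGEMeanLayer(nn.Module):
    """Vanilla GraphSAGE layer with mean aggregation (dense toy form)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.lin = nn.Linear(2 * in_dim, out_dim)

    def forward(self, x, adj):
        # x: (N, in_dim) node features; adj: (N, N) provenance-graph adjacency
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        neigh = adj @ x / deg                       # mean over each node's neighbors
        h = self.lin(torch.cat([x, neigh], dim=1))  # combine self and neighborhood
        return F.normalize(F.relu(h), dim=1)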
|
2305.06361 | Chenguang Wang | Chenguang Wang, Zhang-Hua Fu, Pinyan Lu, Tianshu Yu | Efficient Training of Multi-task Neural Solver for Combinatorial
Optimization | Accepted by TMLR | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Efficiently training a multi-task neural solver for various combinatorial
optimization problems (COPs) has been less studied so far. Naive application of
conventional multi-task learning approaches often falls short in delivering a
high-quality, unified neural solver. This deficiency primarily stems from the
significant computational demands and a lack of adequate consideration for the
complexities inherent in COPs. In this paper, we propose a general and
efficient training paradigm to deliver a unified combinatorial multi-task
neural solver. To this end, we resort to the theoretical loss decomposition for
multiple tasks under an encoder-decoder framework, which enables more efficient
training via proper bandit task-sampling algorithms through an intra-task
influence matrix. By employing theoretically grounded approximations, our
method significantly enhances overall performance, regardless of whether it is
within constrained training budgets, across equivalent training epochs, or in
terms of generalization capabilities, when compared to conventional training
schedules. On the real-world datasets of TSPLib and CVRPLib, our method also
achieved the best results compared to single task learning and multi-task
learning approaches. Additionally, the influence matrix provides empirical
evidence supporting common practices in the field of learning to optimize,
further substantiating the effectiveness of our approach. Our code is
open-sourced and available at https://github.com/LOGO-CUHKSZ/MTL-COP.
| [
{
"version": "v1",
"created": "Wed, 10 May 2023 14:20:34 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Oct 2023 06:35:46 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Mar 2025 11:32:37 GMT"
},
{
"version": "v4",
"created": "Thu, 3 Apr 2025 11:31:44 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Wang",
"Chenguang",
""
],
[
"Fu",
"Zhang-Hua",
""
],
[
"Lu",
"Pinyan",
""
],
[
"Yu",
"Tianshu",
""
]
] | TITLE: Efficient Training of Multi-task Neural Solver for Combinatorial
Optimization
ABSTRACT: Efficiently training a multi-task neural solver for various combinatorial
optimization problems (COPs) has been less studied so far. Naive application of
conventional multi-task learning approaches often falls short in delivering a
high-quality, unified neural solver. This deficiency primarily stems from the
significant computational demands and a lack of adequate consideration for the
complexities inherent in COPs. In this paper, we propose a general and
efficient training paradigm to deliver a unified combinatorial multi-task
neural solver. To this end, we resort to the theoretical loss decomposition for
multiple tasks under an encoder-decoder framework, which enables more efficient
training via proper bandit task-sampling algorithms through an intra-task
influence matrix. By employing theoretically grounded approximations, our
method significantly enhances overall performance, regardless of whether it is
within constrained training budgets, across equivalent training epochs, or in
terms of generalization capabilities, when compared to conventional training
schedules. On the real-world datasets of TSPLib and CVRPLib, our method also
achieved the best results compared to single task learning and multi-task
learning approaches. Additionally, the influence matrix provides empirical
evidence supporting common practices in the field of learning to optimize,
further substantiating the effectiveness of our approach. Our code is
open-sourced and available at https://github.com/LOGO-CUHKSZ/MTL-COP.
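The paper's bandit task sampler is driven by an intra-task influence matrix;
as a generic point of reference, the sketch below shows a plain EXP3 sampler
over tasks, where the reward fed back per update (e.g., a rescaled loss
decrease) is an assumption, not the paper's exact signal.

import math, random

class Exp3TaskSampler:
    """Plain EXP3 bandit over tasks; rewards assumed rescaled to [0, 1]."""

    def __init__(self, num_tasks: int, gamma: float = 0.1):
        self.w = [1.0] * num_tasks
        self.gamma, self.k = gamma, num_tasks

    def probs(self):
        total = sum(self.w)
        return [(1 - self.gamma) * wi / total + self.gamma / self.k
                for wi in self.w]

    def sample(self) -> int:
        return random.choices(range(self.k), weights=self.probs())[0]

    def update(self, task: int, reward: float):
        p = self.probs()[task]
        self.w[task] *= math.exp(self.gamma * reward / (p * self.k))

sampler = Exp3TaskSampler(num_tasks=3)
task = sampler.sample()           # pick a COP to train on this step
sampler.update(task, reward=0.4)  # e.g., rescaled loss decrease on that task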
|
2311.01479 | Litian Liu | Litian Liu, Yao Qin | Detecting Out-of-Distribution Through the Lens of Neural Collapse | CVPR 2025 main conference paper | null | null | null | cs.LG eess.IV | http://creativecommons.org/licenses/by/4.0/ | Out-of-Distribution (OOD) detection is critical for safe deployment; however,
existing detectors often struggle to generalize across datasets of varying
scales and model architectures, and some can incur high computational costs in
real-world applications. Inspired by the phenomenon of Neural Collapse, we
propose a versatile and efficient OOD detection method. Specifically, we
re-characterize prior observations that in-distribution (ID) samples form
clusters, demonstrating that, with appropriate centering, these clusters align
closely with model weight vectors. Additionally, we reveal that ID features
tend to expand into a simplex Equiangular Tight Frame, explaining the common
observation that ID features are situated farther from the origin than OOD
features. Incorporating both insights from Neural Collapse, our OOD detector
leverages feature proximity to weight vectors and complements this approach by
using feature norms to effectively filter out OOD samples. Extensive
experiments on off-the-shelf models demonstrate the robustness of our OOD
detector across diverse scenarios, mitigating generalization discrepancies and
enhancing overall performance, with inference latency comparable to that of the
basic softmax-confidence detector. Code is available here:
https://github.com/litianliu/NCI-OOD.
| [
{
"version": "v1",
"created": "Thu, 2 Nov 2023 05:18:28 GMT"
},
{
"version": "v2",
"created": "Tue, 7 Nov 2023 01:40:19 GMT"
},
{
"version": "v3",
"created": "Thu, 23 May 2024 04:25:02 GMT"
},
{
"version": "v4",
"created": "Fri, 24 May 2024 16:30:30 GMT"
},
{
"version": "v5",
"created": "Thu, 30 May 2024 18:59:12 GMT"
},
{
"version": "v6",
"created": "Mon, 14 Oct 2024 04:26:21 GMT"
},
{
"version": "v7",
"created": "Thu, 3 Apr 2025 04:16:58 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Liu",
"Litian",
""
],
[
"Qin",
"Yao",
""
]
] | TITLE: Detecting Out-of-Distribution Through the Lens of Neural Collapse
ABSTRACT: Out-of-Distribution (OOD) detection is critical for safe deployment; however,
existing detectors often struggle to generalize across datasets of varying
scales and model architectures, and some can incur high computational costs in
real-world applications. Inspired by the phenomenon of Neural Collapse, we
propose a versatile and efficient OOD detection method. Specifically, we
re-characterize prior observations that in-distribution (ID) samples form
clusters, demonstrating that, with appropriate centering, these clusters align
closely with model weight vectors. Additionally, we reveal that ID features
tend to expand into a simplex Equiangular Tight Frame, explaining the common
observation that ID features are situated farther from the origin than OOD
features. Incorporating both insights from Neural Collapse, our OOD detector
leverages feature proximity to weight vectors and complements this approach by
using feature norms to effectively filter out OOD samples. Extensive
experiments on off-the-shelf models demonstrate the robustness of our OOD
detector across diverse scenarios, mitigating generalization discrepancies and
enhancing overall performance, with inference latency comparable to that of the
basic softmax-confidence detector. Code is available here:
https://github.com/litianliu/NCI-OOD.
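A schematic of the two Neural Collapse ingredients the abstract names,
feature-to-weight proximity after centering plus a feature-norm filter; the
exact score and threshold in the released NCI-OOD code may differ.

import numpy as np

def nc_ood_score(feats, mu, W, tau):
    # feats: (N, d) penultimate features; mu: (d,) training feature mean;
    # W: (C, d) classifier weight vectors; tau: feature-norm threshold.
    # Higher score = more in-distribution; low-norm samples are flagged OOD.
    z = feats - mu                                    # center the features
    z_n = z / np.linalg.norm(z, axis=1, keepdims=True)
    w_n = W / np.linalg.norm(W, axis=1, keepdims=True)
    proximity = (z_n @ w_n.T).max(axis=1)           # best cosine to any class weight
    norm_ok = np.linalg.norm(feats, axis=1) >= tau  # OOD features sit nearer the origin
    return np.where(norm_ok, proximity, -np.inf)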
|
2402.10512 | Jiale Li | Jiale Li, Zhihang Liu, Sean Longyu Ma, Chiu-Wing Sham, Chong Fu | A Novel Computing Paradigm for MobileNetV3 using Memristor | Published at the 2025 International Joint Conference on Neural
Networks (IJCNN 2025) | null | null | null | cs.AR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The increasing computational demands of deep learning models pose significant
challenges for edge devices. To address this, we propose a memristor-based
circuit design for MobileNetV3, specifically for image classification tasks.
Our design leverages the low power consumption and high integration density of
memristors, making it suitable for edge computing. The architecture includes
optimized memristive convolutional modules, batch normalization modules,
activation function modules, global average pooling modules, and fully
connected modules. Experimental results on the CIFAR-10 dataset show that our
memristor-based MobileNetV3 achieves over 90% accuracy while significantly
reducing inference time and energy consumption compared to traditional
implementations. This work demonstrates the potential of memristor-based
designs for efficient deployment of deep learning models in
resource-constrained environments.
| [
{
"version": "v1",
"created": "Fri, 16 Feb 2024 08:57:31 GMT"
},
{
"version": "v2",
"created": "Thu, 1 Aug 2024 07:13:51 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Apr 2025 04:00:06 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Li",
"Jiale",
""
],
[
"Liu",
"Zhihang",
""
],
[
"Ma",
"Sean Longyu",
""
],
[
"Sham",
"Chiu-Wing",
""
],
[
"Fu",
"Chong",
""
]
] | TITLE: A Novel Computing Paradigm for MobileNetV3 using Memristor
ABSTRACT: The increasing computational demands of deep learning models pose significant
challenges for edge devices. To address this, we propose a memristor-based
circuit design for MobileNetV3, specifically for image classification tasks.
Our design leverages the low power consumption and high integration density of
memristors, making it suitable for edge computing. The architecture includes
optimized memristive convolutional modules, batch normalization modules,
activation function modules, global average pooling modules, and fully
connected modules. Experimental results on the CIFAR-10 dataset show that our
memristor-based MobileNetV3 achieves over 90% accuracy while significantly
reducing inference time and energy consumption compared to traditional
implementations. This work demonstrates the potential of memristor-based
designs for efficient deployment of deep learning models in
resource-constrained environments.
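An idealized software model of the core crossbar operation (signed weights
split across a G+/G- conductance pair, a few programmable levels, currents
summed per Kirchhoff's law) shows where quantization error enters; the device
values below are illustrative assumptions, not the paper's circuit parameters.

import numpy as np

def crossbar_matvec(weights, x, g_min=1e-6, g_max=1e-4, levels=16):
    # Signed weights are split across a G+/G- conductance pair per cell,
    # quantized to a few programmable levels, and read out as a current
    # difference (Kirchhoff summation along each column).
    scale = np.abs(weights).max()
    g_pos = np.clip(weights, 0, None) / scale * (g_max - g_min) + g_min
    g_neg = np.clip(-weights, 0, None) / scale * (g_max - g_min) + g_min
    step = (g_max - g_min) / (levels - 1)
    quant = lambda g: np.round((g - g_min) / step) * step + g_min
    i_out = quant(g_pos) @ x - quant(g_neg) @ x
    return i_out * scale / (g_max - g_min)  # rescale currents to weight units

y = crossbar_matvec(np.random.randn(8, 16), np.random.rand(16))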
|
2402.16442 | Maximilian B\"other | Maximilian B\"other, Abraham Sebastian, Pranjal Awasthi, Ana Klimovic,
Srikumar Ramalingam | On Distributed Larger-Than-Memory Subset Selection With Pairwise
Submodular Functions | accepted at MLSys 2025 | null | null | null | cs.LG cs.AI cs.CV cs.DC math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modern datasets span billions of samples, making training on all available
data infeasible. Selecting a high-quality subset helps in reducing training
costs and enhancing model quality. Submodularity, a discrete analogue of
convexity, is commonly used for solving such subset selection problems.
However, existing algorithms for optimizing submodular functions are
sequential, and the prior distributed methods require at least one central
machine to fit the target subset in DRAM. At billion datapoint scale, even the
subset may not fit a single machine, and the sequential algorithms are
prohibitively slow. In this paper, we relax the requirement of having a central
machine for the target subset by proposing a novel distributed bounding
algorithm with provable approximation guarantees. The algorithm iteratively
bounds the minimum and maximum utility values to select high quality points and
discard the unimportant ones. When bounding does not find the complete subset,
we use a multi-round, partition-based distributed greedy algorithm to identify
the remaining subset. We discuss how to implement these algorithms in a
distributed data processing framework and empirically analyze different
configurations. We find high-quality subsets on CIFAR-100 and ImageNet with
marginal or no loss in quality compared to centralized methods, and scale to a
dataset with 13 billion points.
| [
{
"version": "v1",
"created": "Mon, 26 Feb 2024 09:38:39 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Mar 2025 13:02:27 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Apr 2025 08:19:38 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Böther",
"Maximilian",
""
],
[
"Sebastian",
"Abraham",
""
],
[
"Awasthi",
"Pranjal",
""
],
[
"Klimovic",
"Ana",
""
],
[
"Ramalingam",
"Srikumar",
""
]
] | TITLE: On Distributed Larger-Than-Memory Subset Selection With Pairwise
Submodular Functions
ABSTRACT: Modern datasets span billions of samples, making training on all available
data infeasible. Selecting a high-quality subset helps in reducing training
costs and enhancing model quality. Submodularity, a discrete analogue of
convexity, is commonly used for solving such subset selection problems.
However, existing algorithms for optimizing submodular functions are
sequential, and the prior distributed methods require at least one central
machine to fit the target subset in DRAM. At billion datapoint scale, even the
subset may not fit a single machine, and the sequential algorithms are
prohibitively slow. In this paper, we relax the requirement of having a central
machine for the target subset by proposing a novel distributed bounding
algorithm with provable approximation guarantees. The algorithm iteratively
bounds the minimum and maximum utility values to select high quality points and
discard the unimportant ones. When bounding does not find the complete subset,
we use a multi-round, partition-based distributed greedy algorithm to identify
the remaining subset. We discuss how to implement these algorithms in a
distributed data processing framework and empirically analyze different
configurations. We find high-quality subsets on CIFAR-100 and ImageNet with
marginal or no loss in quality compared to centralized methods, and scale to a
dataset with 13 billion points.
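The objective class in question is pairwise submodular functions such as
facility location; the centralized sequential greedy that the paper
distributes looks like the sketch below. This is the baseline being scaled,
not the proposed bounding algorithm.

import numpy as np

def greedy_facility_location(sim, k):
    # sim[i, j]: similarity of point i to candidate j; maximizes the
    # pairwise-submodular objective F(S) = sum_i max_{j in S} sim[i, j].
    n = sim.shape[0]
    selected, cover = [], np.zeros(n)
    for _ in range(k):
        gains = np.maximum(sim, cover[:, None]).sum(axis=0) - cover.sum()
        gains[selected] = -np.inf              # never re-pick a point
        best = int(np.argmax(gains))
        selected.append(best)
        cover = np.maximum(cover, sim[:, best])
    return selected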
|
2404.11014 | Zhishu Shen | Kang Wang, Zhishu Shen, Zhen Lei, Tiehua Zhang | Towards Multi-agent Reinforcement Learning based Traffic Signal Control
through Spatio-temporal Hypergraphs | Accepted by IEEE Transactions on Mobile Computing | null | 10.1109/TMC.2025.3556243 | null | cs.MA cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Traffic signal control systems (TSCSs) are integral to intelligent traffic
management, fostering efficient vehicle flow. Traditional approaches often
simplify road networks into standard graphs, which results in a failure to
consider the dynamic nature of traffic data at neighboring intersections,
thereby neglecting higher-order interconnections necessary for real-time
control. To address this, we propose a novel TSCS framework to realize
intelligent traffic control. This framework collaborates with multiple
neighboring edge computing servers to collect traffic information across the
road network. To elevate the efficiency of traffic signal control, we have
crafted a multi-agent soft actor-critic (MA-SAC) reinforcement learning
algorithm. Within this algorithm, individual agents are deployed at each
intersection with a mandate to optimize traffic flow across the road network
collectively. Furthermore, we introduce hypergraph learning into the critic
network of MA-SAC to enable the spatio-temporal interactions from multiple
intersections in the road network. This method fuses hypergraph and
spatio-temporal graph structures to encode traffic data and capture the complex
spatio-temporal correlations between multiple intersections. Our empirical
evaluation, tested on varied datasets, demonstrates the superiority of our
framework in minimizing average vehicle travel times and sustaining
high-throughput performance. This work facilitates the development of more
intelligent urban traffic management solutions. We release the code to support
the reproducibility of this work at https://github.com/Edun-Eyes/TSC
| [
{
"version": "v1",
"created": "Wed, 17 Apr 2024 02:46:18 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 13:50:50 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Wang",
"Kang",
""
],
[
"Shen",
"Zhishu",
""
],
[
"Lei",
"Zhen",
""
],
[
"Zhang",
"Tiehua",
""
]
] | TITLE: Towards Multi-agent Reinforcement Learning based Traffic Signal Control
through Spatio-temporal Hypergraphs
ABSTRACT: Traffic signal control systems (TSCSs) are integral to intelligent traffic
management, fostering efficient vehicle flow. Traditional approaches often
simplify road networks into standard graphs, which results in a failure to
consider the dynamic nature of traffic data at neighboring intersections,
thereby neglecting higher-order interconnections necessary for real-time
control. To address this, we propose a novel TSCS framework to realize
intelligent traffic control. This framework collaborates with multiple
neighboring edge computing servers to collect traffic information across the
road network. To elevate the efficiency of traffic signal control, we have
crafted a multi-agent soft actor-critic (MA-SAC) reinforcement learning
algorithm. Within this algorithm, individual agents are deployed at each
intersection with a mandate to optimize traffic flow across the road network
collectively. Furthermore, we introduce hypergraph learning into the critic
network of MA-SAC to enable the spatio-temporal interactions from multiple
intersections in the road network. This method fuses hypergraph and
spatio-temporal graph structures to encode traffic data and capture the complex
spatio-temporal correlations between multiple intersections. Our empirical
evaluation, tested on varied datasets, demonstrates the superiority of our
framework in minimizing average vehicle travel times and sustaining
high-throughput performance. This work facilitates the development of more
intelligent urban traffic management solutions. We release the code to support
the reproducibility of this work at https://github.com/Edun-Eyes/TSC
|
2404.14745 | Runqi Wang | Runqi Wang and Caoyuan Ma and Guopeng Li and Hanrui Xu and Yuke Li and
Zheng Wang | You Think, You ACT: The New Task of Arbitrary Text to Motion Generation | Updated errors in author information | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text to Motion aims to generate human motions from texts. Existing settings
rely on limited Action Texts that include action labels, which limits
flexibility and practicability in scenarios difficult to describe directly.
This paper extends limited Action Texts to arbitrary ones. Scene texts without
explicit action labels can enhance the practicality of models in complex and
diverse industries such as virtual human interaction, robot behavior
generation, and film production, while also supporting the exploration of
potential implicit behavior patterns. However, newly introduced Scene Texts may
yield multiple reasonable output results, causing significant challenges in
existing data, framework, and evaluation. To address this practical issue, we
first create a new dataset HUMANML3D++ by extending texts of the largest
existing dataset HUMANML3D. Secondly, we propose a simple yet effective
framework that extracts action instructions from arbitrary texts and
subsequently generates motions. Furthermore, we also benchmark this new setting
with multi-solution metrics to address the inadequacies of existing
single-solution metrics. Extensive experiments indicate that Text to Motion in
this realistic setting is challenging, fostering new research in this practical
direction.
| [
{
"version": "v1",
"created": "Tue, 23 Apr 2024 04:54:32 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Jun 2024 07:46:24 GMT"
},
{
"version": "v3",
"created": "Tue, 27 Aug 2024 13:36:12 GMT"
},
{
"version": "v4",
"created": "Fri, 3 Jan 2025 07:20:48 GMT"
},
{
"version": "v5",
"created": "Thu, 3 Apr 2025 03:30:59 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Wang",
"Runqi",
""
],
[
"Ma",
"Caoyuan",
""
],
[
"Li",
"Guopeng",
""
],
[
"Xu",
"Hanrui",
""
],
[
"Li",
"Yuke",
""
],
[
"Wang",
"Zheng",
""
]
] | TITLE: You Think, You ACT: The New Task of Arbitrary Text to Motion Generation
ABSTRACT: Text to Motion aims to generate human motions from texts. Existing settings
rely on limited Action Texts that include action labels, which limits
flexibility and practicability in scenarios that are difficult to describe
directly.
This paper extends limited Action Texts to arbitrary ones. Scene texts without
explicit action labels can enhance the practicality of models in complex and
diverse industries such as virtual human interaction, robot behavior
generation, and film production, while also supporting the exploration of
potential implicit behavior patterns. However, newly introduced Scene Texts may
yield multiple reasonable output results, causing significant challenges in
existing data, frameworks, and evaluation. To address this practical issue, we
first create a new dataset HUMANML3D++ by extending texts of the largest
existing dataset HUMANML3D. Secondly, we propose a simple yet effective
framework that extracts action instructions from arbitrary texts and
subsequently generates motions. Furthermore, we also benchmark this new setting
with multi-solution metrics to address the inadequacies of existing
single-solution metrics. Extensive experiments indicate that Text to Motion in
this realistic setting is challenging, fostering new research in this practical
direction.
|
2405.05256 | Zhizhong Li | Prannay Kaul, Zhizhong Li, Hao Yang, Yonatan Dukler, Ashwin
Swaminathan, C. J. Taylor, Stefano Soatto | THRONE: An Object-based Hallucination Benchmark for the Free-form
Generations of Large Vision-Language Models | In CVPR 2024. Code https://github.com/amazon-science/THRONE | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Mitigating hallucinations in large vision-language models (LVLMs) remains an
open problem. Recent benchmarks do not address hallucinations in open-ended
free-form responses, which we term "Type I hallucinations". Instead, they focus
on hallucinations responding to very specific question formats -- typically a
multiple-choice response regarding a particular object or attribute -- which we
term "Type II hallucinations". Additionally, such benchmarks often require
external API calls to models which are subject to change. In practice, we
observe that a reduction in Type II hallucinations does not lead to a reduction
in Type I hallucinations but rather that the two forms of hallucinations are
often anti-correlated. To address this, we propose THRONE, a novel object-based
automatic framework for quantitatively evaluating Type I hallucinations in LVLM
free-form outputs. We use public language models (LMs) to identify
hallucinations in LVLM responses and compute informative metrics. By evaluating
a large selection of recent LVLMs using public datasets, we show that
improvements in existing metrics do not lead to a reduction in Type I
hallucinations, and that established benchmarks for measuring Type I
hallucinations are incomplete. Finally, we provide a simple and effective data
augmentation method to reduce Type I and Type II hallucinations as a strong
baseline. Code is now available at https://github.com/amazon-science/THRONE .
| [
{
"version": "v1",
"created": "Wed, 8 May 2024 17:59:11 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 17:59:23 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Kaul",
"Prannay",
""
],
[
"Li",
"Zhizhong",
""
],
[
"Yang",
"Hao",
""
],
[
"Dukler",
"Yonatan",
""
],
[
"Swaminathan",
"Ashwin",
""
],
[
"Taylor",
"C. J.",
""
],
[
"Soatto",
"Stefano",
""
]
] | TITLE: THRONE: An Object-based Hallucination Benchmark for the Free-form
Generations of Large Vision-Language Models
ABSTRACT: Mitigating hallucinations in large vision-language models (LVLMs) remains an
open problem. Recent benchmarks do not address hallucinations in open-ended
free-form responses, which we term "Type I hallucinations". Instead, they focus
on hallucinations responding to very specific question formats -- typically a
multiple-choice response regarding a particular object or attribute -- which we
term "Type II hallucinations". Additionally, such benchmarks often require
external API calls to models which are subject to change. In practice, we
observe that a reduction in Type II hallucinations does not lead to a reduction
in Type I hallucinations but rather that the two forms of hallucinations are
often anti-correlated. To address this, we propose THRONE, a novel object-based
automatic framework for quantitatively evaluating Type I hallucinations in LVLM
free-form outputs. We use public language models (LMs) to identify
hallucinations in LVLM responses and compute informative metrics. By evaluating
a large selection of recent LVLMs using public datasets, we show that
improvements in existing metrics do not lead to a reduction in Type I
hallucinations, and that established benchmarks for measuring Type I
hallucinations are incomplete. Finally, we provide a simple and effective data
augmentation method to reduce Type I and Type II hallucinations as a strong
baseline. Code is now available at https://github.com/amazon-science/THRONE .
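At its core, an object-based Type I metric compares the set of objects an LM
extracts from the free-form response against the annotated objects. A minimal
sketch follows; THRONE's full metrics are richer than this.

def object_hallucination_metrics(response_objects, gt_objects):
    # Precision falls when the response names objects absent from the image
    # (Type I hallucinations); recall tracks coverage of annotated objects.
    resp, gt = set(response_objects), set(gt_objects)
    tp = len(resp & gt)
    precision = tp / len(resp) if resp else 1.0
    recall = tp / len(gt) if gt else 1.0
    return precision, recall

# LM-extracted mentions vs. annotated objects for one image:
print(object_hallucination_metrics({"dog", "frisbee", "car"}, {"dog", "frisbee"}))
# (0.666..., 1.0) -- "car" is a hallucinated object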
|
2405.08498 | Daqian Shao | Daqian Shao, Ashkan Soleymani, Francesco Quinzan, Marta Kwiatkowska | Learning Decision Policies with Instrumental Variables through Double
Machine Learning | Accepted at ICML 2024 | PMLR/2024/235:44489-44514 | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A common issue in learning decision-making policies in data-rich settings is
spurious correlations in the offline dataset, which can be caused by hidden
confounders. Instrumental variable (IV) regression, which utilises a key
unconfounded variable known as the instrument, is a standard technique for
learning causal relationships between confounded action, outcome, and context
variables. Most recent IV regression algorithms use a two-stage approach, where
a deep neural network (DNN) estimator learnt in the first stage is directly
plugged into the second stage, in which another DNN is used to estimate the
causal effect. Naively plugging the estimator can cause heavy bias in the
second stage, especially when regularisation bias is present in the first stage
estimator. We propose DML-IV, a non-linear IV regression method that reduces
the bias in two-stage IV regressions and effectively learns high-performing
policies. We derive a novel learning objective to reduce bias and design the
DML-IV algorithm following the double/debiased machine learning (DML)
framework. The learnt DML-IV estimator has a strong convergence rate and
$O(N^{-1/2})$ suboptimality guarantees that match those when the dataset is
unconfounded. DML-IV outperforms state-of-the-art IV regression methods on IV
regression benchmarks and learns high-performing policies in the presence of
instruments.
| [
{
"version": "v1",
"created": "Tue, 14 May 2024 10:55:04 GMT"
},
{
"version": "v2",
"created": "Wed, 15 May 2024 12:05:18 GMT"
},
{
"version": "v3",
"created": "Fri, 28 Jun 2024 13:31:48 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Shao",
"Daqian",
""
],
[
"Soleymani",
"Ashkan",
""
],
[
"Quinzan",
"Francesco",
""
],
[
"Kwiatkowska",
"Marta",
""
]
] | TITLE: Learning Decision Policies with Instrumental Variables through Double
Machine Learning
ABSTRACT: A common issue in learning decision-making policies in data-rich settings is
spurious correlations in the offline dataset, which can be caused by hidden
confounders. Instrumental variable (IV) regression, which utilises a key
unconfounded variable known as the instrument, is a standard technique for
learning causal relationships between confounded action, outcome, and context
variables. Most recent IV regression algorithms use a two-stage approach, where
a deep neural network (DNN) estimator learnt in the first stage is directly
plugged into the second stage, in which another DNN is used to estimate the
causal effect. Naively plugging the estimator can cause heavy bias in the
second stage, especially when regularisation bias is present in the first stage
estimator. We propose DML-IV, a non-linear IV regression method that reduces
the bias in two-stage IV regressions and effectively learns high-performing
policies. We derive a novel learning objective to reduce bias and design the
DML-IV algorithm following the double/debiased machine learning (DML)
framework. The learnt DML-IV estimator has a strong convergence rate and
$O(N^{-1/2})$ suboptimality guarantees that match those when the dataset is
unconfounded. DML-IV outperforms state-of-the-art IV regression methods on IV
regression benchmarks and learns high-performing policies in the presence of
instruments.
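To make the two-stage structure concrete, here is classical linear 2SLS, the
ancestor of DML-IV: stage 1 projects the confounded action onto the
instrument, stage 2 regresses the outcome on that projection. DML-IV replaces
both stages with DNNs and adds DML-style debiasing, which this sketch does
not include.

import numpy as np

def two_stage_iv(z, a, y):
    # Stage 1: project the confounded action a onto the instrument z.
    Z = np.column_stack([np.ones(len(z)), z])
    a_hat = Z @ np.linalg.lstsq(Z, a, rcond=None)[0]
    # Stage 2: regress the outcome y on the unconfounded prediction a_hat.
    X = np.column_stack([np.ones(len(a_hat)), a_hat])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return beta[1]  # estimated causal effect of a on y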
|
2405.11573 | Aditya Challa Dr | Aditya Challa, Sravan Danda, Laurent Najman, Snehanshu Saha | Quantile Activation: Correcting a Failure Mode of ML Models | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Standard ML models fail to infer the context distribution and suitably adapt.
For instance, the learning fails when the underlying distribution is actually a
mixture of distributions with contradictory labels. Learning also fails if
there is a shift between train and test distributions. Standard neural network
architectures like MLPs or CNNs are not equipped to handle this.
In this article, we propose a simple activation function, quantile activation
(QAct), that addresses this problem without significantly increasing
computational costs. The core idea is to "adapt" the outputs of each neuron to
its context distribution. The proposed quantile activation (QAct) outputs the
relative quantile position of neuron activations within their context
distribution, diverging from the direct numerical outputs common in traditional
networks.
A specific case of the above failure mode is when there is an inherent
distribution shift, i.e., the test distribution differs slightly from the train
distribution. We validate the proposed activation function under covariate
shifts, using datasets designed to test robustness against distortions. Our
results demonstrate significantly better generalization across distortions
compared to conventional classifiers and other adaptive methods, across various
architectures. Although this paper presents a proof of concept, we find that
this approach unexpectedly outperforms DINOv2 (small), despite DINOv2 being
trained with a much larger network and dataset.
| [
{
"version": "v1",
"created": "Sun, 19 May 2024 14:42:19 GMT"
},
{
"version": "v2",
"created": "Tue, 24 Dec 2024 05:16:49 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Apr 2025 00:10:12 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Challa",
"Aditya",
""
],
[
"Danda",
"Sravan",
""
],
[
"Najman",
"Laurent",
""
],
[
"Saha",
"Snehanshu",
""
]
] | TITLE: Quantile Activation: Correcting a Failure Mode of ML Models
ABSTRACT: Standard ML models fail to infer the context distribution and suitably adapt.
For instance, the learning fails when the underlying distribution is actually a
mixture of distributions with contradictory labels. Learning also fails if
there is a shift between train and test distributions. Standard neural network
architectures like MLPs or CNNs are not equipped to handle this.
In this article, we propose a simple activation function, quantile activation
(QAct), that addresses this problem without significantly increasing
computational costs. The core idea is to "adapt" the outputs of each neuron to
its context distribution. The proposed quantile activation (QAct) outputs the
relative quantile position of neuron activations within their context
distribution, diverging from the direct numerical outputs common in traditional
networks.
A specific case of the above failure mode is when there is an inherent
distribution shift, i.e., the test distribution differs slightly from the train
distribution. We validate the proposed activation function under covariate
shifts, using datasets designed to test robustness against distortions. Our
results demonstrate significantly better generalization across distortions
compared to conventional classifiers and other adaptive methods, across various
architectures. Although this paper presents a proof of concept, we find that
this approach unexpectedly outperforms DINOv2 (small), despite DINOv2 being
trained with a much larger network and dataset.
|
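As a rough illustration of the idea, the sketch below maps each neuron's pre-activation to its empirical quantile within the current batch, which stands in for the "context distribution". This follows the abstract's description, not the authors' implementation; in particular, a differentiable surrogate for the ranking would be needed during training.

# Hedged sketch of a quantile-style activation over the batch context.
import torch

def quantile_activation(x: torch.Tensor) -> torch.Tensor:
    # x: (batch, features). For each feature, map every value to its
    # empirical quantile, i.e. its relative rank within the batch.
    # Note: argsort is non-differentiable; this shows the forward map only.
    ranks = x.argsort(dim=0).argsort(dim=0).float()
    q = ranks / max(x.shape[0] - 1, 1)          # relative position in [0, 1]
    return 2.0 * q - 1.0                        # centre to [-1, 1]

x = torch.randn(8, 4)
print(quantile_activation(x))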
2405.14672 | Hanrong Zhang | Hanrong Zhang, Zhenting Wang, Boheng Li, Fulin Lin, Tingxu Han, Mingyu
Jin, Chenlu Zhan, Mengnan Du, Hongwei Wang, Shiqing Ma | Invisible Backdoor Attack against Self-supervised Learning | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Self-supervised learning (SSL) models are vulnerable to backdoor attacks.
Existing backdoor attacks that are effective in SSL often involve noticeable
triggers, like colored patches or visible noise, which are vulnerable to human
inspection. This paper proposes an imperceptible and effective backdoor attack
against self-supervised models. We first find that existing imperceptible
triggers designed for supervised learning are less effective in compromising
self-supervised models. We then find that this ineffectiveness is attributable to
the overlap in distributions between the backdoor and augmented samples used in
SSL. Building on this insight, we design an attack using optimized triggers
disentangled from the augmentation transformations used in SSL, while remaining
imperceptible to human vision. Experiments on five datasets and six SSL
algorithms demonstrate our attack is highly effective and stealthy. It also has
strong resistance to existing backdoor defenses. Our code can be found at
https://github.com/Zhang-Henry/INACTIVE.
| [
{
"version": "v1",
"created": "Thu, 23 May 2024 15:08:31 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 08:05:03 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Zhang",
"Hanrong",
""
],
[
"Wang",
"Zhenting",
""
],
[
"Li",
"Boheng",
""
],
[
"Lin",
"Fulin",
""
],
[
"Han",
"Tingxu",
""
],
[
"Jin",
"Mingyu",
""
],
[
"Zhan",
"Chenlu",
""
],
[
"Du",
"Mengnan",
""
],
[
"Wang",
"Hongwei",
""
],
[
"Ma",
"Shiqing",
""
]
] | TITLE: Invisible Backdoor Attack against Self-supervised Learning
ABSTRACT: Self-supervised learning (SSL) models are vulnerable to backdoor attacks.
Existing backdoor attacks that are effective in SSL often involve noticeable
triggers, like colored patches or visible noise, which are vulnerable to human
inspection. This paper proposes an imperceptible and effective backdoor attack
against self-supervised models. We first find that existing imperceptible
triggers designed for supervised learning are less effective in compromising
self-supervised models. We then find that this ineffectiveness is attributable to
the overlap in distributions between the backdoor and augmented samples used in
SSL. Building on this insight, we design an attack using optimized triggers
disentangled from the augmentation transformations used in SSL, while remaining
imperceptible to human vision. Experiments on five datasets and six SSL
algorithms demonstrate our attack is highly effective and stealthy. It also has
strong resistance to existing backdoor defenses. Our code can be found at
https://github.com/Zhang-Henry/INACTIVE.
|
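A minimal sketch of the general recipe, under stated assumptions: optimize an additive trigger under a small L-infinity budget so that triggered images align with a target embedding. The encoder, objective, and budget below are stand-ins rather than the paper's exact formulation.

# Hedged sketch of optimizing an imperceptible additive backdoor trigger.
import torch
import torch.nn.functional as F

encoder = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3*32*32, 64))
images = torch.rand(16, 3, 32, 32)
target = torch.randn(64)                         # attacker's target embedding

eps = 8 / 255                                    # imperceptibility budget
trigger = torch.zeros(1, 3, 32, 32, requires_grad=True)
opt = torch.optim.Adam([trigger], lr=1e-2)

for _ in range(100):
    z = encoder((images + trigger).clamp(0, 1))
    # maximise alignment of triggered embeddings with the target
    loss = -F.cosine_similarity(z, target.expand_as(z), dim=1).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        trigger.clamp_(-eps, eps)                # keep the trigger invisible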
2405.17939 | Yuxin Liu | Yuxin Liu, Deepika Tiwari, Cristian Bogdan, Benoit Baudry | Detecting and removing bloated dependencies in CommonJS packages | Revision submitted to Journal of Systems and Software (JSS) | null | null | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | JavaScript packages are notoriously prone to bloat, a factor that
significantly impacts the performance and maintainability of web applications.
While web bundlers and tree-shaking can mitigate this issue in client-side
applications, state-of-the-art techniques have limitations in the detection and
removal of bloat in server-side applications. In this paper, we present the
first study to investigate bloated dependencies within server-side JavaScript
applications, focusing on those built with the widely used and highly dynamic
CommonJS module system. We propose a trace-based dynamic analysis that monitors
the OS file system to determine which dependencies are not accessed during
runtime. To evaluate our approach, we curate an original dataset of 91 CommonJS
packages with a total of 50,488 dependencies. Compared to the state-of-the-art
dynamic and static approaches, our trace-based analysis demonstrates higher
accuracy in detecting bloated dependencies. Our analysis identifies 50.6% of
the 50,488 dependencies as bloated: 13.8% of direct dependencies and 51.3% of
indirect dependencies. Furthermore, removing only the direct bloated
dependencies by cleaning the dependency configuration file can remove a
significant share of unnecessary bloated indirect dependencies while preserving
functional correctness.
| [
{
"version": "v1",
"created": "Tue, 28 May 2024 08:04:01 GMT"
},
{
"version": "v2",
"created": "Sat, 18 Jan 2025 07:29:36 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Apr 2025 09:50:06 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Liu",
"Yuxin",
""
],
[
"Tiwari",
"Deepika",
""
],
[
"Bogdan",
"Cristian",
""
],
[
"Baudry",
"Benoit",
""
]
] | TITLE: Detecting and removing bloated dependencies in CommonJS packages
ABSTRACT: JavaScript packages are notoriously prone to bloat, a factor that
significantly impacts the performance and maintainability of web applications.
While web bundlers and tree-shaking can mitigate this issue in client-side
applications, state-of-the-art techniques have limitations in the detection and
removal of bloat in server-side applications. In this paper, we present the
first study to investigate bloated dependencies within server-side JavaScript
applications, focusing on those built with the widely used and highly dynamic
CommonJS module system. We propose a trace-based dynamic analysis that monitors
the OS file system to determine which dependencies are not accessed during
runtime. To evaluate our approach, we curate an original dataset of 91 CommonJS
packages with a total of 50,488 dependencies. Compared to the state-of-the-art
dynamic and static approaches, our trace-based analysis demonstrates higher
accuracy in detecting bloated dependencies. Our analysis identifies 50.6% of
the 50,488 dependencies as bloated: 13.8% of direct dependencies and 51.3% of
indirect dependencies. Furthermore, removing only the direct bloated
dependencies by cleaning the dependency configuration file can remove a
significant share of unnecessary bloated indirect dependencies while preserving
functional correctness.
|
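The core check is easy to sketch: a declared dependency counts as bloated if none of its files under node_modules/ appear in the runtime file-access trace. The sketch below assumes a plain-text trace with one accessed path per line; the file names and trace format are illustrative, not the paper's tooling.

# Hedged sketch of flagging bloated direct dependencies from a file trace.
import json
import re

declared = set(json.load(open("package.json"))["dependencies"])

accessed = set()
for line in open("fs_trace.log"):                # one accessed path per line
    # extract the package name, handling scoped packages like @scope/name
    m = re.search(r"node_modules/((?:@[^/]+/)?[^/]+)/", line)
    if m:
        accessed.add(m.group(1))

bloated = declared - accessed
print(f"{len(bloated)}/{len(declared)} direct dependencies never accessed:")
for dep in sorted(bloated):
    print(" -", dep)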
2406.03230 | Amelia Kawasaki | Amelia Kawasaki, Andrew Davis, Houssam Abbas | Defending Large Language Models Against Attacks With Residual Stream
Activation Analysis | Included in Proceedings of the Conference on Applied Machine Learning
in Information Security (CAMLIS 2024), Arlington, Virginia, USA, October
24-25, 2024 | null | null | null | cs.CR cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The widespread adoption of Large Language Models (LLMs), exemplified by
OpenAI's ChatGPT, brings to the forefront the imperative to defend against
adversarial threats on these models. These attacks, which manipulate an LLM's
output by introducing malicious inputs, undermine the model's integrity and the
trust users place in its outputs. In response to this challenge, our paper
presents an innovative defensive strategy, given white-box access to an LLM,
that harnesses residual activation analysis between transformer layers of the
LLM. We apply a novel methodology for analyzing distinctive activation patterns
in the residual streams for attack prompt classification. We curate multiple
datasets to demonstrate how this method of classification has high accuracy
across multiple types of attack scenarios, including our newly-created attack
dataset. Furthermore, we enhance the model's resilience by integrating safety
fine-tuning techniques for LLMs in order to measure its effect on our
capability to detect attacks. The results underscore the effectiveness of our
approach in enhancing the detection and mitigation of adversarial inputs,
advancing the security framework within which LLMs operate.
| [
{
"version": "v1",
"created": "Wed, 5 Jun 2024 13:06:33 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Jun 2024 22:27:00 GMT"
},
{
"version": "v3",
"created": "Tue, 9 Jul 2024 04:39:46 GMT"
},
{
"version": "v4",
"created": "Wed, 13 Nov 2024 20:18:19 GMT"
},
{
"version": "v5",
"created": "Wed, 2 Apr 2025 22:12:47 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Kawasaki",
"Amelia",
""
],
[
"Davis",
"Andrew",
""
],
[
"Abbas",
"Houssam",
""
]
] | TITLE: Defending Large Language Models Against Attacks With Residual Stream
Activation Analysis
ABSTRACT: The widespread adoption of Large Language Models (LLMs), exemplified by
OpenAI's ChatGPT, brings to the forefront the imperative to defend against
adversarial threats on these models. These attacks, which manipulate an LLM's
output by introducing malicious inputs, undermine the model's integrity and the
trust users place in its outputs. In response to this challenge, our paper
presents an innovative defensive strategy, given white-box access to an LLM,
that harnesses residual activation analysis between transformer layers of the
LLM. We apply a novel methodology for analyzing distinctive activation patterns
in the residual streams for attack prompt classification. We curate multiple
datasets to demonstrate how this method of classification has high accuracy
across multiple types of attack scenarios, including our newly-created attack
dataset. Furthermore, we enhance the model's resilience by integrating safety
fine-tuning techniques for LLMs in order to measure its effect on our
capability to detect attacks. The results underscore the effectiveness of our
approach in enhancing the detection and mitigation of adversarial inputs,
advancing the security framework within which LLMs operate.
|
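As a hedged sketch of the pipeline, the snippet below captures residual-stream activations at one GPT-2 layer with a forward hook and fits a linear probe to separate attack from benign prompts; the two-prompt "dataset" is only a placeholder for the curated datasets described above, and the layer choice is arbitrary.

# Hedged sketch: residual-stream activations as features for attack detection.
import torch
from transformers import GPT2Model, GPT2Tokenizer
from sklearn.linear_model import LogisticRegression

tok = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2").eval()

captured = []
hook = model.h[6].register_forward_hook(        # residual stream after block 6
    lambda mod, inp, out: captured.append(out[0].mean(dim=1))  # mean-pool tokens
)

prompts = ["Ignore previous instructions and ...",
           "What is the capital of France?"]
labels = [1, 0]                                  # 1 = attack, 0 = benign
feats = []
with torch.no_grad():
    for p in prompts:
        captured.clear()
        model(**tok(p, return_tensors="pt"))
        feats.append(captured[0].squeeze(0).numpy())
hook.remove()

probe = LogisticRegression().fit(feats, labels)  # tiny demo; use real datasets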
2406.06965 | Ping Liu | Ping Liu, Qiqi Tao, Joey Tianyi Zhou | Evolving from Single-modal to Multi-modal Facial Deepfake Detection:
Progress and Challenges | P. Liu is with the Department of Computer Science and Engineering,
University of Nevada, Reno, NV, 89512. Q. Tao and J. Zhou are with Centre for
Frontier AI Research (CFAR), and Institute of High Performance Computing
(IHPC), A*STAR, Singapore. J. Zhou is also with Centre for Advanced
Technologies in Online Safety (CATOS), A*STAR, Singapore. J. Zhou is the
corresponding author | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | As synthetic media, including video, audio, and text, become increasingly
indistinguishable from real content, the risks of misinformation, identity
fraud, and social manipulation escalate. This survey traces the evolution of
deepfake detection from early single-modal methods to sophisticated multi-modal
approaches that integrate audio-visual and text-visual cues. We present a
structured taxonomy of detection techniques and analyze the transition from
GAN-based to diffusion model-driven deepfakes, which introduce new challenges
due to their heightened realism and robustness against detection. Unlike prior
surveys that primarily focus on single-modal detection or earlier deepfake
techniques, this work provides the most comprehensive study to date,
encompassing the latest advancements in multi-modal deepfake detection,
generalization challenges, proactive defense mechanisms, and emerging datasets
specifically designed to support new interpretability and reasoning tasks. We
further explore the role of Vision-Language Models (VLMs) and Multimodal Large
Language Models (MLLMs) in strengthening detection robustness against
increasingly sophisticated deepfake attacks. By systematically categorizing
existing methods and identifying emerging research directions, this survey
serves as a foundation for future advancements in combating AI-generated facial
forgeries. A curated list of all related papers can be found at
\href{https://github.com/qiqitao77/Comprehensive-Advances-in-Deepfake-Detection-Spanning-Diverse-Modalities}{https://github.com/qiqitao77/Awesome-Comprehensive-Deepfake-Detection}.
| [
{
"version": "v1",
"created": "Tue, 11 Jun 2024 05:48:04 GMT"
},
{
"version": "v2",
"created": "Sun, 14 Jul 2024 20:27:56 GMT"
},
{
"version": "v3",
"created": "Wed, 14 Aug 2024 15:38:49 GMT"
},
{
"version": "v4",
"created": "Thu, 3 Apr 2025 07:47:44 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Liu",
"Ping",
""
],
[
"Tao",
"Qiqi",
""
],
[
"Zhou",
"Joey Tianyi",
""
]
] | TITLE: Evolving from Single-modal to Multi-modal Facial Deepfake Detection:
Progress and Challenges
ABSTRACT: As synthetic media, including video, audio, and text, become increasingly
indistinguishable from real content, the risks of misinformation, identity
fraud, and social manipulation escalate. This survey traces the evolution of
deepfake detection from early single-modal methods to sophisticated multi-modal
approaches that integrate audio-visual and text-visual cues. We present a
structured taxonomy of detection techniques and analyze the transition from
GAN-based to diffusion model-driven deepfakes, which introduce new challenges
due to their heightened realism and robustness against detection. Unlike prior
surveys that primarily focus on single-modal detection or earlier deepfake
techniques, this work provides the most comprehensive study to date,
encompassing the latest advancements in multi-modal deepfake detection,
generalization challenges, proactive defense mechanisms, and emerging datasets
specifically designed to support new interpretability and reasoning tasks. We
further explore the role of Vision-Language Models (VLMs) and Multimodal Large
Language Models (MLLMs) in strengthening detection robustness against
increasingly sophisticated deepfake attacks. By systematically categorizing
existing methods and identifying emerging research directions, this survey
serves as a foundation for future advancements in combating AI-generated facial
forgeries. A curated list of all related papers can be found at
\href{https://github.com/qiqitao77/Comprehensive-Advances-in-Deepfake-Detection-Spanning-Diverse-Modalities}{https://github.com/qiqitao77/Awesome-Comprehensive-Deepfake-Detection}.
|
2406.14349 | Ilaria Vascotto | Ilaria Vascotto, Alex Rodriguez, Alessandro Bonaita, Luca Bortolussi | When Can You Trust Your Explanations? A Robustness Analysis on Feature
Importances | Accepted at the 3rd World Conference on eXplainable Artificial
Intelligence (to be held in July 2025) | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Recent legislative regulations have underlined the need for accountable and
transparent artificial intelligence systems and have contributed to a growing
interest in the Explainable Artificial Intelligence (XAI) field. Nonetheless,
the lack of standardized criteria to validate explanation methodologies remains
a major obstacle to developing trustworthy systems. We address a crucial yet
often overlooked aspect of XAI, the robustness of explanations, which plays a
central role in ensuring trust in both the system and the provided explanation.
To this end, we propose a novel approach to analyse the robustness of neural
network explanations to non-adversarial perturbations, leveraging the manifold
hypothesis to produce new perturbed datapoints that resemble the observed data
distribution. We additionally present an ensemble method to aggregate various
explanations, showing how merging explanations can be beneficial for both
understanding the model's decision and evaluating the robustness. The aim of
our work is to provide practitioners with a framework for evaluating the
trustworthiness of model explanations. Experimental results on feature
importances derived from neural networks applied to tabular datasets highlight
the importance of robust explanations in practical applications.
| [
{
"version": "v1",
"created": "Thu, 20 Jun 2024 14:17:57 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 14:59:16 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Vascotto",
"Ilaria",
""
],
[
"Rodriguez",
"Alex",
""
],
[
"Bonaita",
"Alessandro",
""
],
[
"Bortolussi",
"Luca",
""
]
] | TITLE: When Can You Trust Your Explanations? A Robustness Analysis on Feature
Importances
ABSTRACT: Recent legislative regulations have underlined the need for accountable and
transparent artificial intelligence systems and have contributed to a growing
interest in the Explainable Artificial Intelligence (XAI) field. Nonetheless,
the lack of standardized criteria to validate explanation methodologies remains
a major obstacle to developing trustworthy systems. We address a crucial yet
often overlooked aspect of XAI, the robustness of explanations, which plays a
central role in ensuring trust in both the system and the provided explanation.
To this end, we propose a novel approach to analyse the robustness of neural
network explanations to non-adversarial perturbations, leveraging the manifold
hypothesis to produce new perturbed datapoints that resemble the observed data
distribution. We additionally present an ensemble method to aggregate various
explanations, showing how merging explanations can be beneficial for both
understanding the model's decision and evaluating the robustness. The aim of
our work is to provide practitioners with a framework for evaluating the
trustworthiness of model explanations. Experimental results on feature
importances derived from neural networks applied to tabular datasets highlight
the importance of robust explanations in practical applications.
|
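A simple version of such a robustness check can be sketched as follows: perturb inputs with small noise (standing in here for the paper's manifold-based perturbations) and compare feature importances by rank correlation, where higher correlation indicates a more robust explanation. The classifier, importance method, and noise scale are our assumptions.

# Hedged sketch of an explanation-robustness check via rank correlation.
import numpy as np
from scipy.stats import spearmanr
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
clf = RandomForestClassifier(random_state=0).fit(X, y)

def importances(Xs, ys):
    return permutation_importance(clf, Xs, ys, random_state=0).importances_mean

base = importances(X, y)
rng = np.random.default_rng(0)
scores = []
for _ in range(5):
    Xp = X + 0.05 * rng.normal(size=X.shape)     # small input perturbation
    scores.append(spearmanr(base, importances(Xp, y)).correlation)
print("mean rank correlation of importances:", np.mean(scores))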
2406.17961 | Md Mahadi Hasan Nahid | Md Mahadi Hasan Nahid, Davood Rafiei | NormTab: Improving Symbolic Reasoning in LLMs Through Tabular Data
Normalization | EMNLP 2024 (Findings) | null | null | null | cs.CL cs.AI cs.DB cs.IR | http://creativecommons.org/licenses/by/4.0/ | In recent years, Large Language Models (LLMs) have demonstrated remarkable
capabilities in parsing textual data and generating code. However, their
performance in tasks involving tabular data, especially those requiring
symbolic reasoning, faces challenges due to the structural variance and
inconsistency in table cell values often found in web tables. In this paper, we
introduce NormTab, a novel framework aimed at enhancing the symbolic reasoning
performance of LLMs by normalizing web tables. We study table normalization as
a stand-alone, one-time preprocessing step using LLMs to support symbolic
reasoning on tabular data. Our experimental evaluation, conducted on
challenging web table datasets such as WikiTableQuestion and TabFact,
demonstrates that leveraging NormTab significantly improves symbolic reasoning
performance, showcasing the importance and effectiveness of web table
normalization for enhancing LLM-based symbolic reasoning tasks.
| [
{
"version": "v1",
"created": "Tue, 25 Jun 2024 22:40:03 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 20:52:21 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Nahid",
"Md Mahadi Hasan",
""
],
[
"Rafiei",
"Davood",
""
]
] | TITLE: NormTab: Improving Symbolic Reasoning in LLMs Through Tabular Data
Normalization
ABSTRACT: In recent years, Large Language Models (LLMs) have demonstrated remarkable
capabilities in parsing textual data and generating code. However, their
performance in tasks involving tabular data, especially those requiring
symbolic reasoning, faces challenges due to the structural variance and
inconsistency in table cell values often found in web tables. In this paper, we
introduce NormTab, a novel framework aimed at enhancing the symbolic reasoning
performance of LLMs by normalizing web tables. We study table normalization as
a stand-alone, one-time preprocessing step using LLMs to support symbolic
reasoning on tabular data. Our experimental evaluation, conducted on
challenging web table datasets such as WikiTableQuestion and TabFact,
demonstrates that leveraging NormTab significantly improves symbolic reasoning
performance, showcasing the importance and effectiveness of web table
normalization for enhancing LLM-based symbolic reasoning tasks.
|
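A rough sketch of what the one-time preprocessing step could look like is given below; call_llm is a hypothetical stand-in for whatever LLM client is used, and the prompt wording is ours, not the paper's.

# Hedged sketch of LLM-based table-column normalization as preprocessing.
def call_llm(prompt: str) -> str:
    # Hypothetical stub: plug in your actual LLM client here.
    raise NotImplementedError("plug in your LLM client here")

def normalize_column(column_name: str, values: list[str]) -> list[str]:
    prompt = (
        "Normalize the following table column so that every cell uses one "
        "consistent, machine-readable format (numbers without units, ISO "
        "dates, no footnote markers). Return one value per line.\n"
        f"Column: {column_name}\n" + "\n".join(values)
    )
    return call_llm(prompt).splitlines()

# e.g. ["1,200 km", "850km (approx.)", "unknown"] -> ["1200", "850", ""]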
2407.06249 | Zeyu Liu | Zeyu Leo Liu, Shrey Pandit, Xi Ye, Eunsol Choi, Greg Durrett | CodeUpdateArena: Benchmarking Knowledge Editing on API Updates | Under Review | null | null | null | cs.CL cs.SE | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) are increasingly being used to synthesize and
reason about source code. However, the static nature of these models' knowledge
does not reflect the fact that libraries and API functions they invoke are
continuously evolving, with functionality being added or changing. While
numerous benchmarks evaluate how LLMs can generate code, no prior work has
studied how an LLM's knowledge about code API functions can be updated. To fill
this gap, we present CodeUpdateArena, a benchmark for knowledge editing in the
code domain. An instance in our benchmark consists of a synthetic API function
update paired with a program synthesis example that uses the updated
functionality; our goal is to update an LLM to be able to solve this program
synthesis example without providing documentation of the update at inference
time. Compared to knowledge editing for facts encoded in text, success here is
more challenging: a code LLM must correctly reason about the semantics of the
modified function rather than just reproduce its syntax. Our dataset is
constructed by first prompting GPT-4 to generate atomic and executable function
updates. Then, for each update, we generate program synthesis examples whose
code solutions are prone to use the update. Our benchmark covers updates of
various types to 54 functions from seven diverse Python packages, with a total
of 670 program synthesis examples. Our experiments show that prepending
documentation of the update to open-source code LLMs (e.g., DeepSeek,
CodeLlama) does not allow them to incorporate changes for problem solving, and
existing knowledge editing techniques also have substantial room for
improvement. We hope our benchmark will inspire new methods for knowledge
updating in code LLMs.
| [
{
"version": "v1",
"created": "Mon, 8 Jul 2024 17:55:04 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Feb 2025 05:23:45 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Apr 2025 04:15:55 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Liu",
"Zeyu Leo",
""
],
[
"Pandit",
"Shrey",
""
],
[
"Ye",
"Xi",
""
],
[
"Choi",
"Eunsol",
""
],
[
"Durrett",
"Greg",
""
]
] | TITLE: CodeUpdateArena: Benchmarking Knowledge Editing on API Updates
ABSTRACT: Large language models (LLMs) are increasingly being used to synthesize and
reason about source code. However, the static nature of these models' knowledge
does not reflect the fact that libraries and API functions they invoke are
continuously evolving, with functionality being added or changing. While
numerous benchmarks evaluate how LLMs can generate code, no prior work has
studied how an LLM's knowledge about code API functions can be updated. To fill
this gap, we present CodeUpdateArena, a benchmark for knowledge editing in the
code domain. An instance in our benchmark consists of a synthetic API function
update paired with a program synthesis example that uses the updated
functionality; our goal is to update an LLM to be able to solve this program
synthesis example without providing documentation of the update at inference
time. Compared to knowledge editing for facts encoded in text, success here is
more challenging: a code LLM must correctly reason about the semantics of the
modified function rather than just reproduce its syntax. Our dataset is
constructed by first prompting GPT-4 to generate atomic and executable function
updates. Then, for each update, we generate program synthesis examples whose
code solutions are prone to use the update. Our benchmark covers updates of
various types to 54 functions from seven diverse Python packages, with a total
of 670 program synthesis examples. Our experiments show that prepending
documentation of the update to open-source code LLMs (e.g., DeepSeek,
CodeLlama) does not allow them to incorporate changes for problem solving, and
existing knowledge editing techniques also have substantial room for
improvement. We hope our benchmark will inspire new methods for knowledge
updating in code LLMs.
|
2407.07307 | Peifu Liu | Peifu Liu, Tingfa Xu, Jie Wang, Huan Chen, Huiyan Bai, Jianan Li | Dual-stage Hyperspectral Image Classification Model with Spectral
Supertoken | Accepted by ECCV 2024 | null | 10.1007/978-3-031-72754-2_21 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Hyperspectral image classification, a task that assigns pre-defined classes
to each pixel in a hyperspectral image of remote sensing scenes, often faces
challenges due to the neglect of correlations between spectrally similar
pixels. This oversight can lead to inaccurate edge definitions and difficulties
in managing minor spectral variations in contiguous areas. To address these
issues, we introduce the novel Dual-stage Spectral Supertoken Classifier
(DSTC), inspired by superpixel concepts. DSTC employs spectrum-derivative-based
pixel clustering to group pixels with similar spectral characteristics into
spectral supertokens. By projecting the classification of these tokens onto the
image space, we achieve pixel-level results that maintain regional
classification consistency and precise boundaries. Moreover, recognizing the
diversity within tokens, we propose a class-proportion-based soft label. This
label adaptively assigns weights to different categories based on their
prevalence, effectively managing data distribution imbalances and enhancing
classification performance. Comprehensive experiments on WHU-OHS, IP, KSC, and
UP datasets corroborate the robust classification capabilities of DSTC and the
effectiveness of its individual components. Code will be publicly available at
https://github.com/laprf/DSTC.
| [
{
"version": "v1",
"created": "Wed, 10 Jul 2024 01:58:30 GMT"
},
{
"version": "v2",
"created": "Sat, 13 Jul 2024 08:12:06 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Liu",
"Peifu",
""
],
[
"Xu",
"Tingfa",
""
],
[
"Wang",
"Jie",
""
],
[
"Chen",
"Huan",
""
],
[
"Bai",
"Huiyan",
""
],
[
"Li",
"Jianan",
""
]
] | TITLE: Dual-stage Hyperspectral Image Classification Model with Spectral
Supertoken
ABSTRACT: Hyperspectral image classification, a task that assigns pre-defined classes
to each pixel in a hyperspectral image of remote sensing scenes, often faces
challenges due to the neglect of correlations between spectrally similar
pixels. This oversight can lead to inaccurate edge definitions and difficulties
in managing minor spectral variations in contiguous areas. To address these
issues, we introduce the novel Dual-stage Spectral Supertoken Classifier
(DSTC), inspired by superpixel concepts. DSTC employs spectrum-derivative-based
pixel clustering to group pixels with similar spectral characteristics into
spectral supertokens. By projecting the classification of these tokens onto the
image space, we achieve pixel-level results that maintain regional
classification consistency and precise boundaries. Moreover, recognizing the
diversity within tokens, we propose a class-proportion-based soft label. This
label adaptively assigns weights to different categories based on their
prevalence, effectively managing data distribution imbalances and enhancing
classification performance. Comprehensive experiments on WHU-OHS, IP, KSC, and
UP datasets corroborate the robust classification capabilities of DSTC and the
effectiveness of its individual components. Code will be publicly available at
https://github.com/laprf/DSTC.
|
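The class-proportion soft label is simple to sketch: a supertoken containing pixels from several classes is labelled with the class frequencies inside it rather than a one-hot majority vote. The snippet below is our reading of that idea, not the authors' code.

# Hedged sketch of the class-proportion-based soft label for a supertoken.
import torch

def soft_label(pixel_labels: torch.Tensor, num_classes: int) -> torch.Tensor:
    # pixel_labels: 1-D tensor of class ids for all pixels in one supertoken
    counts = torch.bincount(pixel_labels, minlength=num_classes).float()
    return counts / counts.sum()                 # per-class proportions

token = torch.tensor([2, 2, 2, 5, 5, 0])
print(soft_label(token, num_classes=8))  # 0.5 for class 2, ~0.33 for class 5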
2407.09495 | Emiel van Miltenburg | Emiel van Miltenburg | Image captioning in different languages | null | null | null | null | cs.CL cs.CV | http://creativecommons.org/licenses/by/4.0/ | This short position paper provides a manually curated list of non-English
image captioning datasets (as of May 2024). Through this list, we can observe
the dearth of datasets in different languages: only 23 different languages are
represented. With the addition of the Crossmodal-3600 dataset (Thapliyal et
al., 2022, 36 languages) this number increases somewhat, but still this number
is small compared to the +/-500 institutional languages that are out there.
This paper closes with some open questions for the field of Vision & Language.
| [
{
"version": "v1",
"created": "Fri, 31 May 2024 09:37:54 GMT"
},
{
"version": "v2",
"created": "Wed, 30 Oct 2024 11:57:22 GMT"
},
{
"version": "v3",
"created": "Wed, 2 Apr 2025 19:27:35 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"van Miltenburg",
"Emiel",
""
]
] | TITLE: Image captioning in different languages
ABSTRACT: This short position paper provides a manually curated list of non-English
image captioning datasets (as of May 2024). Through this list, we can observe
the dearth of datasets in different languages: only 23 different languages are
represented. With the addition of the Crossmodal-3600 dataset (Thapliyal et
al., 2022, 36 languages) this number increases somewhat, but still this number
is small compared to the +/-500 institutional languages that are out there.
This paper closes with some open questions for the field of Vision & Language.
|
2408.01581 | Ankur Mahesh | Ankur Mahesh, William Collins, Boris Bonev, Noah Brenowitz, Yair
Cohen, Peter Harrington, Karthik Kashinath, Thorsten Kurth, Joshua North,
Travis OBrien, Michael Pritchard, David Pruitt, Mark Risser, Shashank
Subramanian, Jared Willard | Huge Ensembles Part II: Properties of a Huge Ensemble of Hindcasts
Generated with Spherical Fourier Neural Operators | null | null | null | null | cs.LG physics.ao-ph | http://creativecommons.org/licenses/by/4.0/ | In Part I, we created an ensemble based on Spherical Fourier Neural
Operators. As initial condition perturbations, we used bred vectors, and as
model perturbations, we used multiple checkpoints trained independently from
scratch. Based on diagnostics that assess the ensemble's physical fidelity, our
ensemble has comparable performance to operational weather forecasting systems.
However, it requires orders of magnitude fewer computational resources. Here in
Part II, we generate a huge ensemble (HENS), with 7,424 members initialized
each day of summer 2023. We enumerate the technical requirements for running
huge ensembles at this scale. HENS precisely samples the tails of the forecast
distribution and presents a detailed sampling of internal variability. HENS has
two primary applications: (1) as a large dataset with which to study the
statistics and drivers of extreme weather and (2) as a weather forecasting
system. For extreme climate statistics, HENS samples events 4$\sigma$ away from
the ensemble mean. At each grid cell, HENS increases the skill of the most
accurate ensemble member and enhances coverage of possible future trajectories.
As a weather forecasting model, HENS issues extreme weather forecasts with
better uncertainty quantification. It also reduces the probability of outlier
events, in which the verification value lies outside the ensemble forecast
distribution.
| [
{
"version": "v1",
"created": "Fri, 2 Aug 2024 21:31:34 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Feb 2025 00:13:29 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Apr 2025 07:40:12 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Mahesh",
"Ankur",
""
],
[
"Collins",
"William",
""
],
[
"Bonev",
"Boris",
""
],
[
"Brenowitz",
"Noah",
""
],
[
"Cohen",
"Yair",
""
],
[
"Harrington",
"Peter",
""
],
[
"Kashinath",
"Karthik",
""
],
[
"Kurth",
"Thorsten",
""
],
[
"North",
"Joshua",
""
],
[
"OBrien",
"Travis",
""
],
[
"Pritchard",
"Michael",
""
],
[
"Pruitt",
"David",
""
],
[
"Risser",
"Mark",
""
],
[
"Subramanian",
"Shashank",
""
],
[
"Willard",
"Jared",
""
]
] | TITLE: Huge Ensembles Part II: Properties of a Huge Ensemble of Hindcasts
Generated with Spherical Fourier Neural Operators
ABSTRACT: In Part I, we created an ensemble based on Spherical Fourier Neural
Operators. As initial condition perturbations, we used bred vectors, and as
model perturbations, we used multiple checkpoints trained independently from
scratch. Based on diagnostics that assess the ensemble's physical fidelity, our
ensemble has comparable performance to operational weather forecasting systems.
However, it requires orders of magnitude fewer computational resources. Here in
Part II, we generate a huge ensemble (HENS), with 7,424 members initialized
each day of summer 2023. We enumerate the technical requirements for running
huge ensembles at this scale. HENS precisely samples the tails of the forecast
distribution and presents a detailed sampling of internal variability. HENS has
two primary applications: (1) as a large dataset with which to study the
statistics and drivers of extreme weather and (2) as a weather forecasting
system. For extreme climate statistics, HENS samples events 4$\sigma$ away from
the ensemble mean. At each grid cell, HENS increases the skill of the most
accurate ensemble member and enhances coverage of possible future trajectories.
As a weather forecasting model, HENS issues extreme weather forecasts with
better uncertainty quantification. It also reduces the probability of outlier
events, in which the verification value lies outside the ensemble forecast
distribution.
|
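Two of the diagnostics above are easy to sketch with synthetic data standing in for hindcast fields: how far the extreme member reaches in ensemble-sigma units, and the outlier rate, i.e. how often the verification falls outside the ensemble envelope.

# Hedged sketch of two ensemble diagnostics on synthetic stand-in data.
import numpy as np

rng = np.random.default_rng(0)
members, cells = 7424, 1000
ens = rng.normal(size=(members, cells))          # ensemble forecasts
truth = rng.normal(size=cells)                   # verification field

sigma_reach = (ens.max(axis=0) - ens.mean(axis=0)) / ens.std(axis=0)
print("mean tail reach (sigma):", sigma_reach.mean())   # roughly 4 for N=7424

outlier = (truth < ens.min(axis=0)) | (truth > ens.max(axis=0))
print("outlier rate:", outlier.mean())  # ~2/(N+1) for an exchangeable ensemble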
2408.11748 | Shehreen Azad | Shehreen Azad, Yash Jain, Rishit Garg, Yogesh S Rawat, Vibhav Vineet | Understanding Depth and Height Perception in Large Visual-Language
Models | Accepted in CVPRW 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Geometric understanding - including depth and height perception - is
fundamental to intelligence and crucial for navigating our environment. Despite
the impressive capabilities of large Vision Language Models (VLMs), it remains
unclear how well they possess the geometric understanding required for
practical applications in visual perception. In this work, we focus on
evaluating the geometric understanding of these models, specifically targeting
their ability to perceive the depth and height of objects in an image. To
address this, we introduce GeoMeter, a suite of benchmark datasets -
encompassing 2D and 3D scenarios - to rigorously evaluate these aspects. By
benchmarking 18 state-of-the-art VLMs, we found that although they excel in
perceiving basic geometric properties like shape and size, they consistently
struggle with depth and height perception. Our analysis reveals that these
challenges stem from shortcomings in their depth and height reasoning
capabilities and inherent biases. This study aims to pave the way for
developing VLMs with enhanced geometric understanding by emphasizing depth and
height perception as critical components necessary for real-world applications.
| [
{
"version": "v1",
"created": "Wed, 21 Aug 2024 16:16:18 GMT"
},
{
"version": "v2",
"created": "Thu, 22 Aug 2024 18:49:48 GMT"
},
{
"version": "v3",
"created": "Fri, 30 Aug 2024 13:52:12 GMT"
},
{
"version": "v4",
"created": "Thu, 3 Apr 2025 15:06:48 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Azad",
"Shehreen",
""
],
[
"Jain",
"Yash",
""
],
[
"Garg",
"Rishit",
""
],
[
"Rawat",
"Yogesh S",
""
],
[
"Vineet",
"Vibhav",
""
]
] | TITLE: Understanding Depth and Height Perception in Large Visual-Language
Models
ABSTRACT: Geometric understanding - including depth and height perception - is
fundamental to intelligence and crucial for navigating our environment. Despite
the impressive capabilities of large Vision Language Models (VLMs), it remains
unclear how well they possess the geometric understanding required for
practical applications in visual perception. In this work, we focus on
evaluating the geometric understanding of these models, specifically targeting
their ability to perceive the depth and height of objects in an image. To
address this, we introduce GeoMeter, a suite of benchmark datasets -
encompassing 2D and 3D scenarios - to rigorously evaluate these aspects. By
benchmarking 18 state-of-the-art VLMs, we found that although they excel in
perceiving basic geometric properties like shape and size, they consistently
struggle with depth and height perception. Our analysis reveals that these
challenges stem from shortcomings in their depth and height reasoning
capabilities and inherent biases. This study aims to pave the way for
developing VLMs with enhanced geometric understanding by emphasizing depth and
height perception as critical components necessary for real-world applications.
|
2409.06845 | Minmin Yang | Minmin Yang | Face Mask Removal with Region-attentive Face Inpainting | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | During the COVID-19 pandemic, face masks have become ubiquitous in our lives.
Face masks can cause some face recognition models to fail since they cover
a significant portion of a face. In addition, removing face masks from captured
images or videos can be desirable, e.g., for better social interaction and for
image/video editing and enhancement purposes. Hence, we propose a generative
face inpainting method to effectively recover/reconstruct the masked part of a
face. Face inpainting is more challenging compared to traditional inpainting,
since it requires high fidelity while maintaining the identity at the same
time. Our proposed method includes a Multi-scale Channel-Spatial Attention
Module (M-CSAM) to mitigate the spatial information loss and learn the inter-
and intra-channel correlation. In addition, we introduce an approach that forces
the supervision signal to focus on masked regions instead of the whole image. We
also synthesize our own Masked-Faces dataset from the CelebA dataset by
incorporating five different types of face masks, including surgical mask,
regular mask and scarves, which also cover the neck area. The experimental
results show that our proposed method outperforms different baselines in terms
of structural similarity index measure, peak signal-to-noise ratio and l1 loss,
while also providing better outputs qualitatively. Our code is publicly
available on GitHub.
| [
{
"version": "v1",
"created": "Tue, 10 Sep 2024 20:10:11 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 19:13:11 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Yang",
"Minmin",
""
]
] | TITLE: Face Mask Removal with Region-attentive Face Inpainting
ABSTRACT: During the COVID-19 pandemic, face masks have become ubiquitous in our lives.
Face masks can cause some face recognition models to fail since they cover
a significant portion of a face. In addition, removing face masks from captured
images or videos can be desirable, e.g., for better social interaction and for
image/video editing and enhancement purposes. Hence, we propose a generative
face inpainting method to effectively recover/reconstruct the masked part of a
face. Face inpainting is more challenging compared to traditional inpainting,
since it requires high fidelity while maintaining the identity at the same
time. Our proposed method includes a Multi-scale Channel-Spatial Attention
Module (M-CSAM) to mitigate the spatial information loss and learn the inter-
and intra-channel correlation. In addition, we introduce an approach that forces
the supervision signal to focus on masked regions instead of the whole image. We
also synthesize our own Masked-Faces dataset from the CelebA dataset by
incorporating five different types of face masks, including surgical mask,
regular mask and scarves, which also cover the neck area. The experimental
results show that our proposed method outperforms different baselines in terms
of structural similarity index measure, peak signal-to-noise ratio and l1 loss,
while also providing better outputs qualitatively. Our code is publicly
available on GitHub.
|
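A minimal sketch of the region-focused supervision mentioned above: compute the reconstruction loss only over the masked region. The shapes and the L1 choice are illustrative assumptions.

# Hedged sketch of a masked-region reconstruction loss.
import torch

def masked_l1_loss(pred, target, mask):
    # mask: (B, 1, H, W), 1 inside the region to inpaint, 0 elsewhere
    mask = mask.expand_as(pred)                  # broadcast across channels
    return ((pred - target).abs() * mask).sum() / mask.sum().clamp(min=1)

pred, target = torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64)
mask = torch.zeros(2, 1, 64, 64)
mask[:, :, 20:44, 16:48] = 1.0                   # synthetic face-mask region
print(masked_l1_loss(pred, target, mask))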
2409.09092 | Michael Juhasz | Michael Juhasz, Eric Chin, Youngsoo Choi, Joseph T. McKeown, Saad
Khairallah | Harnessing On-Machine Metrology Data for Prints with a Surrogate Model
for Laser Powder Directed Energy Deposition | 19 pages, 9 figures | null | null | null | eess.SY cond-mat.mtrl-sci cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this study, we leverage the massive amount of multi-modal on-machine
metrology data generated from Laser Powder Directed Energy Deposition (LP-DED)
to construct a comprehensive surrogate model of the 3D printing process. By
employing Dynamic Mode Decomposition with Control (DMDc), a data-driven
technique, we capture the complex physics inherent in this extensive dataset.
This physics-based surrogate model emphasizes thermodynamically significant
quantities, enabling us to accurately predict key process outcomes. The model
ingests 21 process parameters, including laser power, scan rate, and position,
while providing outputs such as melt pool temperature, melt pool size, and
other essential observables. Furthermore, it incorporates uncertainty
quantification to provide bounds on these predictions, enhancing reliability
and confidence in the results. We then deploy the surrogate model on a new,
unseen part and monitor the printing process as validation of the method. Our
experimental results demonstrate that the predictions align with actual
measurements with high accuracy, confirming the effectiveness of our approach.
This methodology not only facilitates real-time predictions but also operates
at process-relevant speeds, establishing a basis for implementing feedback
control in LP-DED.
| [
{
"version": "v1",
"created": "Thu, 12 Sep 2024 00:45:04 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 18:19:57 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Juhasz",
"Michael",
""
],
[
"Chin",
"Eric",
""
],
[
"Choi",
"Youngsoo",
""
],
[
"McKeown",
"Joseph T.",
""
],
[
"Khairallah",
"Saad",
""
]
] | TITLE: Harnessing On-Machine Metrology Data for Prints with a Surrogate Model
for Laser Powder Directed Energy Deposition
ABSTRACT: In this study, we leverage the massive amount of multi-modal on-machine
metrology data generated from Laser Powder Directed Energy Deposition (LP-DED)
to construct a comprehensive surrogate model of the 3D printing process. By
employing Dynamic Mode Decomposition with Control (DMDc), a data-driven
technique, we capture the complex physics inherent in this extensive dataset.
This physics-based surrogate model emphasizes thermodynamically significant
quantities, enabling us to accurately predict key process outcomes. The model
ingests 21 process parameters, including laser power, scan rate, and position,
while providing outputs such as melt pool temperature, melt pool size, and
other essential observables. Furthermore, it incorporates uncertainty
quantification to provide bounds on these predictions, enhancing reliability
and confidence in the results. We then deploy the surrogate model on a new,
unseen part and monitor the printing process as validation of the method. Our
experimental results demonstrate that the predictions align with actual
measurements with high accuracy, confirming the effectiveness of our approach.
This methodology not only facilitates real-time predictions but also operates
at process-relevant speeds, establishing a basis for implementing feedback
control in LP-DED.
|
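The DMDc core step is compact enough to sketch: stack state and control snapshots and recover [A B] with a pseudoinverse, so that x_{k+1} ≈ A x_k + B u_k. The toy system below borrows the paper's 21 inputs but is otherwise synthetic.

# Hedged sketch of Dynamic Mode Decomposition with Control (DMDc).
import numpy as np

rng = np.random.default_rng(0)
n_state, n_ctrl, T = 5, 21, 400
A_true = 0.95 * np.eye(n_state) + 0.01 * rng.normal(size=(n_state, n_state))
B_true = rng.normal(size=(n_state, n_ctrl))

U = rng.normal(size=(n_ctrl, T))
X = np.zeros((n_state, T + 1))
for k in range(T):                               # simulate snapshot data
    X[:, k + 1] = A_true @ X[:, k] + B_true @ U[:, k]

Omega = np.vstack([X[:, :-1], U])                # stacked [x; u] snapshots
G = X[:, 1:] @ np.linalg.pinv(Omega)             # [A B] via least squares
A_hat, B_hat = G[:, :n_state], G[:, n_state:]
print("A error:", np.linalg.norm(A_hat - A_true))  # near zero, noiseless case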
2409.11506 | Michael Omori | Michael Omori, Prasad Tadepalli | Chess Rating Estimation from Moves and Clock Times Using a CNN-LSTM | Accepted CG 2024 (11 pages, 2 figures) | null | 10.1007/978-3-031-86585-5_1 | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Current chess rating systems update ratings incrementally and may not always
accurately reflect a player's true strength, especially for
rapidly improving players or very rusty players. To overcome this, we explore a
method to estimate player ratings directly from game moves and clock times. We
compiled a benchmark dataset from Lichess with over one million games,
encompassing various time controls and including move sequences and clock
times. Our model architecture comprises a CNN to learn positional features,
which are then integrated with clock-time data into a Bidirectional LSTM,
predicting player ratings after each move. The model achieved an MAE of 182
rating points on the test data. Additionally, we applied our model to the 2024
IEEE Big Data Cup Chess Puzzle Difficulty Competition dataset, predicted puzzle
ratings and achieved competitive results. This model is the first to estimate
chess ratings without hand-crafted features and the first to output a
rating prediction after each move. Our method highlights the potential of using
move-based rating estimation for enhancing rating systems and potentially other
applications such as cheating detection.
| [
{
"version": "v1",
"created": "Tue, 17 Sep 2024 19:19:16 GMT"
},
{
"version": "v2",
"created": "Sat, 16 Nov 2024 00:39:44 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Omori",
"Michael",
""
],
[
"Tadepalli",
"Prasad",
""
]
] | TITLE: Chess Rating Estimation from Moves and Clock Times Using a CNN-LSTM
ABSTRACT: Current chess rating systems update ratings incrementally and may not always
accurately reflect a player's true strength, especially for
rapidly improving players or very rusty players. To overcome this, we explore a
method to estimate player ratings directly from game moves and clock times. We
compiled a benchmark dataset from Lichess with over one million games,
encompassing various time controls and including move sequences and clock
times. Our model architecture comprises a CNN to learn positional features,
which are then integrated with clock-time data into a Bidirectional LSTM,
predicting player ratings after each move. The model achieved an MAE of 182
rating points on the test data. Additionally, we applied our model to the 2024
IEEE Big Data Cup Chess Puzzle Difficulty Competition dataset, predicted puzzle
ratings and achieved competitive results. This model is the first to estimate
chess ratings without hand-crafted features and the first to output a
rating prediction after each move. Our method highlights the potential of using
move-based rating estimation for enhancing rating systems and potentially other
applications such as cheating detection.
|
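The described architecture maps naturally onto a short PyTorch sketch: a small CNN encodes each position, the clock time is appended per move, and a bidirectional LSTM emits a rating estimate after every move. The layer sizes and the 12-plane board encoding are our assumptions, not the paper's exact configuration.

# Hedged sketch of the CNN-LSTM rating estimator described above.
import torch
import torch.nn as nn

class RatingEstimator(nn.Module):
    def __init__(self, planes=12, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(planes, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.lstm = nn.LSTM(64 + 1, hidden, batch_first=True,
                            bidirectional=True)
        self.head = nn.Linear(2 * hidden, 1)     # rating after each move

    def forward(self, boards, clocks):
        # boards: (B, T, planes, 8, 8); clocks: (B, T)
        B, T = boards.shape[:2]
        feats = self.cnn(boards.flatten(0, 1)).view(B, T, -1)
        x = torch.cat([feats, clocks.unsqueeze(-1)], dim=-1)
        out, _ = self.lstm(x)
        return self.head(out).squeeze(-1)        # (B, T) per-move ratings

model = RatingEstimator()
print(model(torch.randn(2, 30, 12, 8, 8), torch.rand(2, 30)).shape)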