id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2504.04052 | Youn-Yeol Yu | Youn-Yeol Yu, Jeongwhan Choi, Jaehyeon Park, Kookjin Lee, Noseong Park | PIORF: Physics-Informed Ollivier-Ricci Flow for Long-Range Interactions
in Mesh Graph Neural Networks | Accepted to ICLR 2025. Youn-Yeol Yu and Jeongwhan Choi contributed
equally to this work | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, data-driven simulators based on graph neural networks have gained
attention in modeling physical systems on unstructured meshes. However, they
struggle with long-range dependencies in fluid flows, particularly in refined
mesh regions. This challenge, known as the 'over-squashing' problem, hinders
information propagation. While existing graph rewiring methods address this
issue to some extent, they only consider graph topology, overlooking the
underlying physical phenomena. We propose Physics-Informed Ollivier-Ricci Flow
(PIORF), a novel rewiring method that combines physical correlations with graph
topology. PIORF uses Ollivier-Ricci curvature (ORC) to identify bottleneck
regions and connects these areas with nodes exhibiting high velocity gradients,
enabling long-range interactions and mitigating over-squashing. Our approach is
computationally efficient in rewiring edges and can scale to larger
simulations. Experimental results on 3 fluid dynamics benchmark datasets show
that PIORF consistently outperforms baseline models and existing rewiring
methods, achieving up to 26.2% improvement.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 04:14:05 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Yu",
"Youn-Yeol",
""
],
[
"Choi",
"Jeongwhan",
""
],
[
"Park",
"Jaehyeon",
""
],
[
"Lee",
"Kookjin",
""
],
[
"Park",
"Noseong",
""
]
] | TITLE: PIORF: Physics-Informed Ollivier-Ricci Flow for Long-Range Interactions
in Mesh Graph Neural Networks
ABSTRACT: Recently, data-driven simulators based on graph neural networks have gained
attention in modeling physical systems on unstructured meshes. However, they
struggle with long-range dependencies in fluid flows, particularly in refined
mesh regions. This challenge, known as the 'over-squashing' problem, hinders
information propagation. While existing graph rewiring methods address this
issue to some extent, they only consider graph topology, overlooking the
underlying physical phenomena. We propose Physics-Informed Ollivier-Ricci Flow
(PIORF), a novel rewiring method that combines physical correlations with graph
topology. PIORF uses Ollivier-Ricci curvature (ORC) to identify bottleneck
regions and connects these areas with nodes exhibiting high velocity gradients,
enabling long-range interactions and mitigating over-squashing. Our approach is
computationally efficient in rewiring edges and can scale to larger
simulations. Experimental results on 3 fluid dynamics benchmark datasets show
that PIORF consistently outperforms baseline models and existing rewiring
methods, achieving up to 26.2% improvement.
|
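The abstract names Ollivier-Ricci curvature (ORC) as the bottleneck detector. PIORF's full rewiring, including the velocity-gradient coupling, is not reproduced here; as a hedged sketch of the ORC ingredient alone, the edge curvature of a small unweighted graph can be computed with uniform neighbor measures and the 1-Wasserstein distance posed as a small linear program (function names are illustrative):

```python
from collections import deque

import numpy as np
from scipy.optimize import linprog

def bfs_dist(adj, src):
    # Hop distances from src in an unweighted graph given as {node: [neighbors]}.
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

def ollivier_ricci(adj, u, v):
    # kappa(u, v) = 1 - W1(m_u, m_v), where m_x is uniform over the neighbors
    # of x and the ground distance is shortest-path hops; W1 solved as an LP.
    Nu, Nv = sorted(adj[u]), sorted(adj[v])
    m, n = len(Nu), len(Nv)
    cost = np.array([[bfs_dist(adj, a)[b] for b in Nv] for a in Nu], float)
    A_eq, b_eq = [], []
    for i in range(m):                    # each source neighbor ships mass 1/m
        row = np.zeros(m * n); row[i * n:(i + 1) * n] = 1.0
        A_eq.append(row); b_eq.append(1.0 / m)
    for j in range(n):                    # each target neighbor receives 1/n
        col = np.zeros(m * n); col[j::n] = 1.0
        A_eq.append(col); b_eq.append(1.0 / n)
    res = linprog(cost.ravel(), A_eq=np.array(A_eq), b_eq=np.array(b_eq))
    return 1.0 - res.fun
```

Edges with strongly negative curvature are the over-squashing bottlenecks that rewiring methods target.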
2504.04055 | Mahid Ahmed | Mahid Ahmed, Ali Dogru, Chaoyang Zhang, and Chao Meng | Learning-Based Multi-Criteria Decision Model for Site Selection Problems | 6 pages, 4 figures, Proceedings of the IISE Annual Conference & Expo
2025 | null | null | null | cs.LG | http://creativecommons.org/publicdomain/zero/1.0/ | Strategically locating sawmills is critical for the efficiency,
profitability, and sustainability of timber supply chains, yet it involves a
series of complex decisions affected by various factors, such as
proximity to resources and markets, proximity to roads and rail lines, distance
from the urban area, slope, labor market, and existing sawmill data. Although
conventional Multi-Criteria Decision-Making (MCDM) approaches utilize these
factors while locating facilities, they are susceptible to bias since they rely
heavily on expert opinions to determine the relative factor weights. Machine
learning (ML) models provide an objective, data-driven alternative for site
selection that derives these weights directly from the patterns in large
datasets without requiring subjective weighting. Additionally, ML models
autonomously identify critical features, eliminating the need for subjective
feature selection. In this study, we propose integrated ML and MCDM methods and
showcase the utility of this integrated model to improve sawmill location
decisions via a case study in Mississippi. This integrated model is flexible
and applicable to site selection problems across various industries.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 04:17:30 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Ahmed",
"Mahid",
""
],
[
"Dogru",
"Ali",
""
],
[
"Zhang",
"Chaoyang",
""
],
[
"Meng",
"Chao",
""
]
] | TITLE: Learning-Based Multi-Criteria Decision Model for Site Selection Problems
ABSTRACT: Strategically locating sawmills is critical for the efficiency,
profitability, and sustainability of timber supply chains, yet it involves a
series of complex decisions affected by various factors, such as
proximity to resources and markets, proximity to roads and rail lines, distance
from the urban area, slope, labor market, and existing sawmill data. Although
conventional Multi-Criteria Decision-Making (MCDM) approaches utilize these
factors while locating facilities, they are susceptible to bias since they rely
heavily on expert opinions to determine the relative factor weights. Machine
learning (ML) models provide an objective, data-driven alternative for site
selection that derives these weights directly from the patterns in large
datasets without requiring subjective weighting. Additionally, ML models
autonomously identify critical features, eliminating the need for subjective
feature selection. In this study, we propose integrated ML and MCDM methods and
showcase the utility of this integrated model to improve sawmill location
decisions via a case study in Mississippi. This integrated model is flexible
and applicable to site selection problems across various industries.
|
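The abstract argues for data-driven criterion weights in place of expert-assigned ones. As a rough illustration of objective weighting in an MCDM pipeline, the classic entropy weight method plus a weighted-sum score is sketched below; this is one standard technique, not necessarily the ML-derived weighting the paper itself uses:

```python
import numpy as np

def entropy_weights(X):
    # X: (sites, criteria) matrix of positive benefit criteria (larger = better).
    # Criteria with more dispersion across sites get larger weights.
    P = X / X.sum(axis=0)                          # column-wise proportions
    with np.errstate(divide="ignore", invalid="ignore"):
        logs = np.where(P > 0, np.log(P), 0.0)
    E = -(P * logs).sum(axis=0) / np.log(len(X))   # entropy per criterion
    d = 1.0 - E                                    # diversification degree
    return d / d.sum()

def rank_sites(X):
    # Weighted-sum score on max-normalized criteria; best site first.
    w = entropy_weights(X)
    score = (X / X.max(axis=0)) @ w
    return np.argsort(-score)
```

A constant criterion carries no information about the sites, so the entropy weighting drives its weight to zero automatically, with no expert opinion involved.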
2504.04061 | Haohua Que | Haojia Gao, Haohua Que, Kunrong Li, Weihao Shan, Mingkai Liu, Rong
Zhao, Lei Mu, Xinghua Yang, Qi Wei, Fei Qiao | Mapping at First Sense: A Lightweight Neural Network-Based Indoor
Structures Prediction Method for Robot Autonomous Exploration | null | null | null | null | cs.RO cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Autonomous exploration in unknown environments is a critical challenge in
robotics, particularly for applications such as indoor navigation, search and
rescue, and service robotics. Traditional exploration strategies, such as
frontier-based methods, often struggle to efficiently utilize prior knowledge
of structural regularities in indoor spaces. To address this limitation, we
propose Mapping at First Sense, a lightweight neural network-based approach
that predicts unobserved areas in local maps, thereby enhancing exploration
efficiency. The core of our method, SenseMapNet, integrates convolutional and
transformer-based architectures to infer occluded regions while maintaining
computational efficiency for real-time deployment on resource-constrained
robots. Additionally, we introduce SenseMapDataset, a curated dataset
constructed from KTH and HouseExpo environments, which facilitates training and
evaluation of neural models for indoor exploration. Experimental results
demonstrate that SenseMapNet achieves an SSIM (structural similarity) of 0.78,
LPIPS (perceptual quality) of 0.68, and an FID (feature distribution alignment)
of 239.79, outperforming conventional methods in map reconstruction quality.
Compared to traditional frontier-based exploration, our method reduces
exploration time by 46.5% (from 2335.56s to 1248.68s) while maintaining a high
coverage rate (88%) and achieving a reconstruction accuracy of 88%. The
proposed method represents a promising step toward efficient, learning-driven
robotic exploration in structured environments.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 05:19:09 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Gao",
"Haojia",
""
],
[
"Que",
"Haohua",
""
],
[
"Li",
"Kunrong",
""
],
[
"Shan",
"Weihao",
""
],
[
"Liu",
"Mingkai",
""
],
[
"Zhao",
"Rong",
""
],
[
"Mu",
"Lei",
""
],
[
"Yang",
"Xinghua",
""
],
[
"Wei",
"Qi",
""
],
[
"Qiao",
"Fei",
""
]
] | TITLE: Mapping at First Sense: A Lightweight Neural Network-Based Indoor
Structures Prediction Method for Robot Autonomous Exploration
ABSTRACT: Autonomous exploration in unknown environments is a critical challenge in
robotics, particularly for applications such as indoor navigation, search and
rescue, and service robotics. Traditional exploration strategies, such as
frontier-based methods, often struggle to efficiently utilize prior knowledge
of structural regularities in indoor spaces. To address this limitation, we
propose Mapping at First Sense, a lightweight neural network-based approach
that predicts unobserved areas in local maps, thereby enhancing exploration
efficiency. The core of our method, SenseMapNet, integrates convolutional and
transformer-based architectures to infer occluded regions while maintaining
computational efficiency for real-time deployment on resource-constrained
robots. Additionally, we introduce SenseMapDataset, a curated dataset
constructed from KTH and HouseExpo environments, which facilitates training and
evaluation of neural models for indoor exploration. Experimental results
demonstrate that SenseMapNet achieves an SSIM (structural similarity) of 0.78,
LPIPS (perceptual quality) of 0.68, and an FID (feature distribution alignment)
of 239.79, outperforming conventional methods in map reconstruction quality.
Compared to traditional frontier-based exploration, our method reduces
exploration time by 46.5% (from 2335.56s to 1248.68s) while maintaining a high
coverage rate (88%) and achieving a reconstruction accuracy of 88%. The
proposed method represents a promising step toward efficient, learning-driven
robotic exploration in structured environments.
|
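SSIM, one of the reported reconstruction metrics, has a closed form. A single-window (global) variant is sketched below for reference; real evaluations such as this paper's almost certainly use a local sliding-window implementation (e.g. scikit-image's), which averages this statistic over patches:

```python
import numpy as np

def global_ssim(x, y, data_range=1.0):
    # Single-window SSIM over two images with values in [0, data_range].
    c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))
```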
2504.04062 | Kepu Zhang | Kepu Zhang, Zhongxiang Sun, Weijie Yu, Xiaoxue Zang, Kai Zheng, Yang
Song, Han Li, Jun Xu | QE-RAG: A Robust Retrieval-Augmented Generation Benchmark for Query
Entry Errors | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Retrieval-augmented generation (RAG) has become a widely adopted approach for
enhancing the factual accuracy of large language models (LLMs). While current
benchmarks evaluate the performance of RAG methods from various perspectives,
they share a common assumption that user queries used for retrieval are
error-free. However, in real-world interactions between users and LLMs, query
entry errors such as keyboard proximity errors, visual similarity errors, and
spelling errors are frequent. The robustness of current RAG methods against
such errors remains largely unexplored. To bridge this gap, we propose
QE-RAG, the first robust RAG benchmark designed specifically to evaluate
performance against query entry errors. We augment six widely used datasets by
injecting three common types of query entry errors into randomly selected user
queries at rates of 20% and 40%, simulating typical user behavior in
real-world scenarios. We analyze the impact of these errors on LLM outputs and
find that corrupted queries degrade model performance, which can be mitigated
through query correction and training a robust retriever for retrieving
relevant documents. Based on these insights, we propose a contrastive
learning-based robust retriever training method and a retrieval-augmented query
correction method. Extensive in-domain and cross-domain experiments reveal
that: (1) state-of-the-art RAG methods including sequential, branching, and
iterative methods, exhibit poor robustness to query entry errors; (2) our
method significantly enhances the robustness of RAG when handling query entry
errors, and it is compatible with existing RAG methods, further improving their
robustness.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 05:24:08 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Zhang",
"Kepu",
""
],
[
"Sun",
"Zhongxiang",
""
],
[
"Yu",
"Weijie",
""
],
[
"Zang",
"Xiaoxue",
""
],
[
"Zheng",
"Kai",
""
],
[
"Song",
"Yang",
""
],
[
"Li",
"Han",
""
],
[
"Xu",
"Jun",
""
]
] | TITLE: QE-RAG: A Robust Retrieval-Augmented Generation Benchmark for Query
Entry Errors
ABSTRACT: Retrieval-augmented generation (RAG) has become a widely adopted approach for
enhancing the factual accuracy of large language models (LLMs). While current
benchmarks evaluate the performance of RAG methods from various perspectives,
they share a common assumption that user queries used for retrieval are
error-free. However, in real-world interactions between users and LLMs, query
entry errors such as keyboard proximity errors, visual similarity errors, and
spelling errors are frequent. The robustness of current RAG methods against
such errors remains largely unexplored. To bridge this gap, we propose
QE-RAG, the first robust RAG benchmark designed specifically to evaluate
performance against query entry errors. We augment six widely used datasets by
injecting three common types of query entry errors into randomly selected user
queries at rates of 20% and 40%, simulating typical user behavior in
real-world scenarios. We analyze the impact of these errors on LLM outputs and
find that corrupted queries degrade model performance, which can be mitigated
through query correction and training a robust retriever for retrieving
relevant documents. Based on these insights, we propose a contrastive
learning-based robust retriever training method and a retrieval-augmented query
correction method. Extensive in-domain and cross-domain experiments reveal
that: (1) state-of-the-art RAG methods including sequential, branching, and
iterative methods, exhibit poor robustness to query entry errors; (2) our
method significantly enhances the robustness of RAG when handling query entry
errors, and it is compatible with existing RAG methods, further improving their
robustness.
|
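The three error types the benchmark injects (keyboard proximity, visual similarity, spelling) are easy to sketch. The snippet below is an illustrative corruption routine, not the benchmark's actual code; the `KEY_NEIGHBORS` and `VISUAL_CONFUSIONS` tables are tiny hypothetical slices of the full QWERTY-adjacency and glyph-confusion maps such a benchmark would use:

```python
import random

# Hypothetical, truncated lookup tables for illustration only.
KEY_NEIGHBORS = {"a": "qwsz", "e": "wrds", "o": "ipkl", "s": "awedxz"}
VISUAL_CONFUSIONS = {"l": "1", "o": "0", "i": "l"}

def corrupt_query(query, rate, rng):
    # With probability `rate`, apply one random entry error to the query:
    # a keyboard-proximity swap, a visual-similarity swap, or a spelling
    # slip (dropped character) -- mirroring the benchmark's three error types.
    if rng.random() >= rate:
        return query
    chars = list(query)
    idx = rng.randrange(len(chars))
    c = chars[idx].lower()
    kind = rng.choice(["keyboard", "visual", "spelling"])
    if kind == "keyboard" and c in KEY_NEIGHBORS:
        chars[idx] = rng.choice(KEY_NEIGHBORS[c])
    elif kind == "visual" and c in VISUAL_CONFUSIONS:
        chars[idx] = VISUAL_CONFUSIONS[c]
    else:
        del chars[idx]          # fallback: spelling error (deletion)
    return "".join(chars)
```

Running this over 20% or 40% of a dataset's queries reproduces the corruption rates the abstract describes.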
2504.04066 | Mengyuan Liu | Mengyuan Liu, Yixiao Chen, Anning Tian, Xinmeng Wu, Mozhi Shen,
Tianchou Gong, Jeongkyu Lee | Performance Analysis of Deep Learning Models for Femur Segmentation in
MRI Scan | null | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Convolutional neural networks like U-Net excel in medical image segmentation,
while attention mechanisms and KAN enhance feature extraction. Meta's SAM 2
uses Vision Transformers for prompt-based segmentation without fine-tuning.
However, biases in these models impact generalization with limited data. In
this study, we systematically evaluate and compare the performance of three
CNN-based models, i.e., U-Net, Attention U-Net, and U-KAN, and one
transformer-based model, i.e., SAM 2, for segmenting femur bone structures in
MRI scans. The dataset comprises 11,164 MRI scans with detailed annotations of
femoral regions. Performance is assessed using the Dice Similarity Coefficient,
which ranges from 0.932 to 0.954. Attention U-Net achieves the highest overall
scores, while U-KAN demonstrates superior performance in anatomical regions
with a smaller region of interest, leveraging its enhanced learning capacity to
improve segmentation accuracy.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 05:47:56 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Liu",
"Mengyuan",
""
],
[
"Chen",
"Yixiao",
""
],
[
"Tian",
"Anning",
""
],
[
"Wu",
"Xinmeng",
""
],
[
"Shen",
"Mozhi",
""
],
[
"Gong",
"Tianchou",
""
],
[
"Lee",
"Jeongkyu",
""
]
] | TITLE: Performance Analysis of Deep Learning Models for Femur Segmentation in
MRI Scan
ABSTRACT: Convolutional neural networks like U-Net excel in medical image segmentation,
while attention mechanisms and KAN enhance feature extraction. Meta's SAM 2
uses Vision Transformers for prompt-based segmentation without fine-tuning.
However, biases in these models impact generalization with limited data. In
this study, we systematically evaluate and compare the performance of three
CNN-based models, i.e., U-Net, Attention U-Net, and U-KAN, and one
transformer-based model, i.e., SAM 2, for segmenting femur bone structures in
MRI scans. The dataset comprises 11,164 MRI scans with detailed annotations of
femoral regions. Performance is assessed using the Dice Similarity Coefficient,
which ranges from 0.932 to 0.954. Attention U-Net achieves the highest overall
scores, while U-KAN demonstrates superior performance in anatomical regions
with a smaller region of interest, leveraging its enhanced learning capacity to
improve segmentation accuracy.
|
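The Dice Similarity Coefficient reported above (0.932 to 0.954) measures overlap between a predicted and a reference binary mask, DSC = 2|A ∩ B| / (|A| + |B|). A minimal numpy implementation:

```python
import numpy as np

def dice(pred, target, eps=1e-7):
    # Dice Similarity Coefficient between two binary masks;
    # eps keeps the ratio defined when both masks are empty.
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)
```

DSC is 1.0 for identical masks, near 0 for disjoint ones, and penalizes both false positives and false negatives symmetrically.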
2504.04072 | Satvik Golechha | Satvik Golechha, Adri\`a Garriga-Alonso | Among Us: A Sandbox for Agentic Deception | 17 pages, preprint | null | null | null | cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Studying deception in AI agents is important and difficult due to the lack of
model organisms and sandboxes that elicit the behavior without asking the model
to act under specific conditions or inserting intentional backdoors. Extending
upon $\textit{AmongAgents}$, a text-based social-deduction game environment, we
aim to fix this by introducing Among Us as a rich sandbox where LLM-agents
exhibit human-style deception naturally while they think, speak, and act with
other agents or humans. We introduce Deception ELO as an unbounded measure of
deceptive capability, suggesting that frontier models win more because they're
better at deception, not at detecting it. We evaluate the effectiveness of AI
safety techniques (LLM-monitoring of outputs, linear probes on various
datasets, and sparse autoencoders) for detecting lying and deception in Among
Us, and find that they generalize very well out-of-distribution. We open-source
our sandbox as a benchmark for future alignment research and hope that this is
a good testbed to improve safety techniques to detect and remove
agentically-motivated deception, and to anticipate deceptive abilities in LLMs.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 06:09:32 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Golechha",
"Satvik",
""
],
[
"Garriga-Alonso",
"Adrià",
""
]
] | TITLE: Among Us: A Sandbox for Agentic Deception
ABSTRACT: Studying deception in AI agents is important and difficult due to the lack of
model organisms and sandboxes that elicit the behavior without asking the model
to act under specific conditions or inserting intentional backdoors. Extending
upon $\textit{AmongAgents}$, a text-based social-deduction game environment, we
aim to fix this by introducing Among Us as a rich sandbox where LLM-agents
exhibit human-style deception naturally while they think, speak, and act with
other agents or humans. We introduce Deception ELO as an unbounded measure of
deceptive capability, suggesting that frontier models win more because they're
better at deception, not at detecting it. We evaluate the effectiveness of AI
safety techniques (LLM-monitoring of outputs, linear probes on various
datasets, and sparse autoencoders) for detecting lying and deception in Among
Us, and find that they generalize very well out-of-distribution. We open-source
our sandbox as a benchmark for future alignment research and hope that this is
a good testbed to improve safety techniques to detect and remove
agentically-motivated deception, and to anticipate deceptive abilities in LLMs.
|
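Deception ELO is described only as an unbounded capability measure; the paper presumably adapts the standard Elo update to impostor-game outcomes. For reference, a sketch of the standard update it would build on (function name and K-factor are illustrative):

```python
def elo_update(r_winner, r_loser, k=32.0):
    # Standard Elo: expected score from the rating gap, then a K-factor step.
    expected = 1.0 / (1.0 + 10.0 ** ((r_loser - r_winner) / 400.0))
    delta = k * (1.0 - expected)
    return r_winner + delta, r_loser - delta
```

Because ratings shift more after upsets than after expected wins, repeated games separate models by deceptive skill without imposing an upper bound on the scale.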
2504.04076 | Bing Wang | Bing Wang, Bingrui Zhao, Ximing Li, Changchun Li, Wanfu Gao,
Shengsheng Wang | Collaboration and Controversy Among Experts: Rumor Early Detection by
Tuning a Comment Generator | 11 pages, 5 figures. Accepted by SIGIR 2025. Code:
https://github.com/wangbing1416/CAMERED | null | null | null | cs.CL cs.SI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Over the past decade, social media platforms have been key in spreading
rumors, leading to significant negative impacts. To counter this, the community
has developed various Rumor Detection (RD) algorithms to automatically identify
them using user comments as evidence. However, these RD methods often fail in
the early stages of rumor propagation when only limited user comments are
available, leading the community to focus on a more challenging topic named
Rumor Early Detection (RED). Typically, existing RED methods learn from limited
semantics in early comments. However, our preliminary experiment reveals that
the RED models always perform best when the number of training and test
comments is consistent and extensive. This inspires us to address the RED issue
by generating more human-like comments to support this hypothesis. To implement
this idea, we tune a comment generator by simulating expert collaboration and
controversy and propose a new RED framework named CAMERED. Specifically, we
integrate a mixture-of-experts structure into a generative language model and
present a novel routing network for expert collaboration. Additionally, we
synthesize a knowledgeable dataset and design an adversarial learning strategy
to align the style of generated comments with real-world comments. We further
integrate generated and original comments with a mutual controversy fusion
module. Experimental results show that CAMERED outperforms state-of-the-art RED
baseline models and generation methods, demonstrating its effectiveness.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 06:21:01 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wang",
"Bing",
""
],
[
"Zhao",
"Bingrui",
""
],
[
"Li",
"Ximing",
""
],
[
"Li",
"Changchun",
""
],
[
"Gao",
"Wanfu",
""
],
[
"Wang",
"Shengsheng",
""
]
] | TITLE: Collaboration and Controversy Among Experts: Rumor Early Detection by
Tuning a Comment Generator
ABSTRACT: Over the past decade, social media platforms have been key in spreading
rumors, leading to significant negative impacts. To counter this, the community
has developed various Rumor Detection (RD) algorithms to automatically identify
them using user comments as evidence. However, these RD methods often fail in
the early stages of rumor propagation when only limited user comments are
available, leading the community to focus on a more challenging topic named
Rumor Early Detection (RED). Typically, existing RED methods learn from limited
semantics in early comments. However, our preliminary experiment reveals that
the RED models always perform best when the number of training and test
comments is consistent and extensive. This inspires us to address the RED issue
by generating more human-like comments to support this hypothesis. To implement
this idea, we tune a comment generator by simulating expert collaboration and
controversy and propose a new RED framework named CAMERED. Specifically, we
integrate a mixture-of-experts structure into a generative language model and
present a novel routing network for expert collaboration. Additionally, we
synthesize a knowledgeable dataset and design an adversarial learning strategy
to align the style of generated comments with real-world comments. We further
integrate generated and original comments with a mutual controversy fusion
module. Experimental results show that CAMERED outperforms state-of-the-art RED
baseline models and generation methods, demonstrating its effectiveness.
|
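The abstract does not specify CAMERED's routing network, so the snippet below is only a generic top-k mixture-of-experts gating sketch (all names and the top-k scheme are assumptions) to make the "routing network for expert collaboration" idea concrete:

```python
import numpy as np

def moe_route(x, gate_w, expert_outs, top_k=2):
    # Generic top-k gating: score experts from the input, keep the best k,
    # renormalize their softmax weights, and mix those experts' outputs.
    scores = x @ gate_w                      # (num_experts,)
    top = np.argsort(-scores)[:top_k]
    w = np.exp(scores[top] - scores[top].max())
    w = w / w.sum()
    return sum(wi * expert_outs[i] for wi, i in zip(w, top))
```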
2504.04083 | Ramakanth Kavuluru | Aviv Brokman and Xuguang Ai and Yuhang Jiang and Shashank Gupta and
Ramakanth Kavuluru | A Benchmark for End-to-End Zero-Shot Biomedical Relation Extraction with
LLMs: Experiments with OpenAI Models | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Objective: Zero-shot methodology promises to cut down on costs of dataset
annotation and domain expertise needed to make use of NLP. Generative large
language models trained to align with human goals have achieved high zero-shot
performance across a wide variety of tasks. As of yet, it is unclear how well
these models perform on biomedical relation extraction (RE). To address this
knowledge gap, we explore patterns in the performance of OpenAI LLMs across a
diverse sampling of RE tasks.
Methods: We use OpenAI GPT-4-turbo and their reasoning model o1 to conduct
end-to-end RE experiments on seven datasets. We use the JSON generation
capabilities of GPT models to generate structured output in two ways: (1) by
defining an explicit schema describing the structure of relations, and (2)
using a setting that infers the structure from the prompt language.
Results: Our work is the first to study and compare the performance of
GPT-4 and o1 for the end-to-end zero-shot biomedical RE task across a broad
array of datasets. We found the zero-shot performances to be proximal to that
of fine-tuned methods. The limitations of this approach are that it performs
poorly on instances containing many relations and errs on the boundaries of
textual mentions.
Conclusion: Recent large language models exhibit promising zero-shot
capabilities in complex biomedical RE tasks, offering competitive performance
with reduced dataset curation and NLP modeling needs at the cost of increased
computing, potentially increasing medical community accessibility. Addressing
the limitations we identify could further boost reliability. The code, data,
and prompts for all our experiments are publicly available:
https://github.com/bionlproc/ZeroShotRE
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 07:08:54 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Brokman",
"Aviv",
""
],
[
"Ai",
"Xuguang",
""
],
[
"Jiang",
"Yuhang",
""
],
[
"Gupta",
"Shashank",
""
],
[
"Kavuluru",
"Ramakanth",
""
]
] | TITLE: A Benchmark for End-to-End Zero-Shot Biomedical Relation Extraction with
LLMs: Experiments with OpenAI Models
ABSTRACT: Objective: Zero-shot methodology promises to cut down on costs of dataset
annotation and domain expertise needed to make use of NLP. Generative large
language models trained to align with human goals have achieved high zero-shot
performance across a wide variety of tasks. As of yet, it is unclear how well
these models perform on biomedical relation extraction (RE). To address this
knowledge gap, we explore patterns in the performance of OpenAI LLMs across a
diverse sampling of RE tasks.
Methods: We use OpenAI GPT-4-turbo and their reasoning model o1 to conduct
end-to-end RE experiments on seven datasets. We use the JSON generation
capabilities of GPT models to generate structured output in two ways: (1) by
defining an explicit schema describing the structure of relations, and (2)
using a setting that infers the structure from the prompt language.
Results: Our work is the first to study and compare the performance of
GPT-4 and o1 for the end-to-end zero-shot biomedical RE task across a broad
array of datasets. We found the zero-shot performances to be proximal to that
of fine-tuned methods. The limitations of this approach are that it performs
poorly on instances containing many relations and errs on the boundaries of
textual mentions.
Conclusion: Recent large language models exhibit promising zero-shot
capabilities in complex biomedical RE tasks, offering competitive performance
with reduced dataset curation and NLP modeling needs at the cost of increased
computing, potentially increasing medical community accessibility. Addressing
the limitations we identify could further boost reliability. The code, data,
and prompts for all our experiments are publicly available:
https://github.com/bionlproc/ZeroShotRE
|
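The explicit-schema setting the Methods section describes amounts to asking the model for JSON of a fixed shape and validating what comes back. The schema dict and parser below are illustrative (the paper's actual schema is in its repository, not reproduced here); the parser enforces the required keys and drops malformed triples:

```python
import json

# Hypothetical relation schema in the spirit of the explicit-schema setting:
# the model is asked to emit JSON matching this shape.
RELATION_SCHEMA = {
    "type": "object",
    "properties": {
        "relations": {
            "type": "array",
            "items": {
                "type": "object",
                "properties": {
                    "head": {"type": "string"},
                    "relation": {"type": "string"},
                    "tail": {"type": "string"},
                },
                "required": ["head", "relation", "tail"],
            },
        }
    },
    "required": ["relations"],
}

def parse_relations(raw):
    # Parse a model's JSON output and keep only well-formed (head,
    # relation, tail) triples, enforcing RELATION_SCHEMA's required keys.
    data = json.loads(raw)
    out = []
    for r in data.get("relations", []):
        if all(isinstance(r.get(k), str) for k in ("head", "relation", "tail")):
            out.append((r["head"], r["relation"], r["tail"]))
    return out
```

Validating rather than trusting the output matters here, since the abstract notes the models err on textual-mention boundaries and on relation-dense instances.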
2504.04085 | Xiao-Hui Li | Xiao-Hui Li and Fei Yin and Cheng-Lin Liu | DocSAM: Unified Document Image Segmentation via Query Decomposition and
Heterogeneous Mixed Learning | This paper has been accepted by CVPR 2025 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Document image segmentation is crucial for document analysis and recognition
but remains challenging due to the diversity of document formats and
segmentation tasks. Existing methods often address these tasks separately,
resulting in limited generalization and resource wastage. This paper introduces
DocSAM, a transformer-based unified framework designed for various document
image segmentation tasks, such as document layout analysis, multi-granularity
text segmentation, and table structure recognition, by modelling these tasks as
a combination of instance and semantic segmentation. Specifically, DocSAM
employs Sentence-BERT to map category names from each dataset into semantic
queries that match the dimensionality of instance queries. These two sets of
queries interact through an attention mechanism and are cross-attended with
image features to predict instance and semantic segmentation masks. Instance
categories are predicted by computing the dot product between instance and
semantic queries, followed by softmax normalization of scores. Consequently,
DocSAM can be jointly trained on heterogeneous datasets, enhancing robustness
and generalization while reducing computational and storage resources.
Comprehensive evaluations show that DocSAM surpasses existing methods in
accuracy, efficiency, and adaptability, highlighting its potential for
advancing document image understanding and segmentation across various
applications. Codes are available at https://github.com/xhli-git/DocSAM.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 07:14:53 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Li",
"Xiao-Hui",
""
],
[
"Yin",
"Fei",
""
],
[
"Liu",
"Cheng-Lin",
""
]
] | TITLE: DocSAM: Unified Document Image Segmentation via Query Decomposition and
Heterogeneous Mixed Learning
ABSTRACT: Document image segmentation is crucial for document analysis and recognition
but remains challenging due to the diversity of document formats and
segmentation tasks. Existing methods often address these tasks separately,
resulting in limited generalization and resource wastage. This paper introduces
DocSAM, a transformer-based unified framework designed for various document
image segmentation tasks, such as document layout analysis, multi-granularity
text segmentation, and table structure recognition, by modelling these tasks as
a combination of instance and semantic segmentation. Specifically, DocSAM
employs Sentence-BERT to map category names from each dataset into semantic
queries that match the dimensionality of instance queries. These two sets of
queries interact through an attention mechanism and are cross-attended with
image features to predict instance and semantic segmentation masks. Instance
categories are predicted by computing the dot product between instance and
semantic queries, followed by softmax normalization of scores. Consequently,
DocSAM can be jointly trained on heterogeneous datasets, enhancing robustness
and generalization while reducing computational and storage resources.
Comprehensive evaluations show that DocSAM surpasses existing methods in
accuracy, efficiency, and adaptability, highlighting its potential for
advancing document image understanding and segmentation across various
applications. Codes are available at https://github.com/xhli-git/DocSAM.
|
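The category-prediction step the abstract spells out (dot product between instance and semantic queries, then softmax normalization) reduces to a few lines; shapes below are illustrative:

```python
import numpy as np

def softmax(z, axis=-1):
    # Numerically stable softmax.
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def instance_category_probs(instance_q, semantic_q):
    # Dot product between instance queries (num_instances, d) and semantic
    # category queries (num_categories, d), then softmax over categories,
    # as described in the abstract.
    logits = instance_q @ semantic_q.T       # (num_instances, num_categories)
    return softmax(logits, axis=-1)
```

Because the category axis is built from Sentence-BERT embeddings of each dataset's category names, the same head serves heterogeneous datasets with different label sets.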
2504.04086 | Zekai Shen | Zekai Shen, Haitao Yuan, Xiaowei Mao, Congkang Lv, Shengnan Guo,
Youfang Lin, Huaiyu Wan | Towards An Efficient and Effective En Route Travel Time Estimation
Framework | Accepted by DASFAA 2025 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | En route travel time estimation (ER-TTE) focuses on predicting the travel
time of the remaining route. Existing ER-TTE methods always re-estimate, which
significantly hinders real-time performance, especially when faced with
the computational demands of simultaneous user requests. This results in delays
and reduced responsiveness in ER-TTE services. We propose a general, efficient
framework, U-ERTTE, combining an Uncertainty-Guided Decision mechanism (UGD) and
Fine-Tuning with Meta-Learning (FTML) to address these challenges. UGD
quantifies the uncertainty and provides confidence intervals for the entire
route. It selectively re-estimates only when the actual travel time deviates
from the predicted confidence intervals, thereby optimizing the efficiency of
ER-TTE. To ensure accurate confidence intervals and accurate predictions
when re-estimation is needed, FTML is employed to train the model, enabling it to
learn general driving patterns and specific features to adapt to specific
tasks. Extensive experiments on two large-scale real datasets demonstrate that
the U-ERTTE framework significantly enhances inference speed and throughput
while maintaining high effectiveness. Our code is available at
https://github.com/shenzekai/U-ERTTE
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 07:15:26 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Shen",
"Zekai",
""
],
[
"Yuan",
"Haitao",
""
],
[
"Mao",
"Xiaowei",
""
],
[
"Lv",
"Congkang",
""
],
[
"Guo",
"Shengnan",
""
],
[
"Lin",
"Youfang",
""
],
[
"Wan",
"Huaiyu",
""
]
] | TITLE: Towards An Efficient and Effective En Route Travel Time Estimation
Framework
ABSTRACT: En route travel time estimation (ER-TTE) focuses on predicting the travel
time of the remaining route. Existing ER-TTE methods always perform re-estimation,
which significantly hinders real-time performance, especially when faced with
the computational demands of simultaneous user requests. This results in delays
and reduced responsiveness in ER-TTE services. We propose a general, efficient
framework, U-ERTTE, combining an Uncertainty-Guided Decision mechanism (UGD) and
Fine-Tuning with Meta-Learning (FTML) to address these challenges. UGD
quantifies the uncertainty and provides confidence intervals for the entire
route. It selectively re-estimates only when the actual travel time deviates
from the predicted confidence intervals, thereby optimizing the efficiency of
ER-TTE. To ensure accurate confidence intervals and accurate predictions
when re-estimation is needed, FTML is employed to train the model, enabling it to
learn general driving patterns and specific features to adapt to specific
tasks. Extensive experiments on two large-scale real datasets demonstrate that
the U-ERTTE framework significantly enhances inference speed and throughput
while maintaining high effectiveness. Our code is available at
https://github.com/shenzekai/U-ERTTE
|
2504.04091 | David Manlove | Mathijs Barkel, Rachael Colley, Maxence Delorme, David Manlove,
William Pettersson | Operational research approaches and mathematical models for kidney
exchange: A literature survey and empirical evaluation | null | null | null | null | cs.DS | http://creativecommons.org/licenses/by/4.0/ | Kidney exchange is a transplant modality that has provided new opportunities
for living kidney donation in many countries around the world since 1991. It
has been extensively studied from an Operational Research (OR) perspective
since 2004. This article provides a comprehensive literature survey on OR
approaches to fundamental computational problems associated with kidney
exchange over the last two decades. We also summarise the key integer linear
programming (ILP) models for kidney exchange, showing how to model optimisation
problems involving only cycles and chains separately. This allows new combined
ILP models, not previously presented, to be obtained by amalgamating cycle and
chain models. We present a comprehensive empirical evaluation involving all
combined models from this paper in addition to bespoke software packages from
the literature involving advanced techniques. This focuses primarily on
computation times for 49 methods applied to 4,320 problem instances of varying
sizes that reflect the characteristics of real kidney exchange datasets,
corresponding to over 200,000 algorithm executions. We have made our
implementations of all cycle and chain models described in this paper, together
with all instances used for the experiments, and a web application to visualise
our experimental results, publicly available.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 07:35:12 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Barkel",
"Mathijs",
""
],
[
"Colley",
"Rachael",
""
],
[
"Delorme",
"Maxence",
""
],
[
"Manlove",
"David",
""
],
[
"Pettersson",
"William",
""
]
] | TITLE: Operational research approaches and mathematical models for kidney
exchange: A literature survey and empirical evaluation
ABSTRACT: Kidney exchange is a transplant modality that has provided new opportunities
for living kidney donation in many countries around the world since 1991. It
has been extensively studied from an Operational Research (OR) perspective
since 2004. This article provides a comprehensive literature survey on OR
approaches to fundamental computational problems associated with kidney
exchange over the last two decades. We also summarise the key integer linear
programming (ILP) models for kidney exchange, showing how to model optimisation
problems involving only cycles and chains separately. This allows new combined
ILP models, not previously presented, to be obtained by amalgamating cycle and
chain models. We present a comprehensive empirical evaluation involving all
combined models from this paper in addition to bespoke software packages from
the literature involving advanced techniques. This focuses primarily on
computation times for 49 methods applied to 4,320 problem instances of varying
sizes that reflect the characteristics of real kidney exchange datasets,
corresponding to over 200,000 algorithm executions. We have made our
implementations of all cycle and chain models described in this paper, together
with all instances used for the experiments, and a web application to visualise
our experimental results, publicly available.
|
2504.04099 | Chunzhao Xie | Chunzhao Xie, Tongxuan Liu, Lei Jiang, Yuting Zeng, jinrong Guo,
Yunheng Shen, Weizhe Huang, Jing Li, Xiaohua Xu | TARAC: Mitigating Hallucination in LVLMs via Temporal Attention
Real-time Accumulative Connection | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large Vision-Language Models have demonstrated remarkable performance across
various tasks; however, the challenge of hallucinations constrains their
practical applications. The hallucination problem arises from multiple factors,
including the inherent hallucinations in language models, the limitations of
visual encoders in perception, and biases introduced by multimodal data.
Extensive research has explored ways to mitigate hallucinations. For instance,
OPERA prevents the model from overly focusing on "anchor tokens", thereby
reducing hallucinations, whereas VCD mitigates hallucinations by employing a
contrastive decoding approach. In this paper, we investigate the correlation
between the decay of attention to image tokens and the occurrence of
hallucinations. Based on this finding, we propose Temporal Attention Real-time
Accumulative Connection (TARAC), a novel training-free method that dynamically
accumulates and updates LVLMs' attention on image tokens during generation. By
enhancing the model's attention to image tokens, TARAC mitigates hallucinations
caused by the decay of attention on image tokens. We validate the effectiveness
of TARAC across multiple models and datasets, demonstrating that our approach
substantially mitigates hallucinations. In particular, TARAC reduces $C_S$ by
25.2 and $C_I$ by 8.7 compared to VCD on the CHAIR benchmark.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 07:57:11 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Xie",
"Chunzhao",
""
],
[
"Liu",
"Tongxuan",
""
],
[
"Jiang",
"Lei",
""
],
[
"Zeng",
"Yuting",
""
],
[
"Guo",
"jinrong",
""
],
[
"Shen",
"Yunheng",
""
],
[
"Huang",
"Weizhe",
""
],
[
"Li",
"Jing",
""
],
[
"Xu",
"Xiaohua",
""
]
] | TITLE: TARAC: Mitigating Hallucination in LVLMs via Temporal Attention
Real-time Accumulative Connection
ABSTRACT: Large Vision-Language Models have demonstrated remarkable performance across
various tasks; however, the challenge of hallucinations constrains their
practical applications. The hallucination problem arises from multiple factors,
including the inherent hallucinations in language models, the limitations of
visual encoders in perception, and biases introduced by multimodal data.
Extensive research has explored ways to mitigate hallucinations. For instance,
OPERA prevents the model from overly focusing on "anchor tokens", thereby
reducing hallucinations, whereas VCD mitigates hallucinations by employing a
contrastive decoding approach. In this paper, we investigate the correlation
between the decay of attention to image tokens and the occurrence of
hallucinations. Based on this finding, we propose Temporal Attention Real-time
Accumulative Connection (TARAC), a novel training-free method that dynamically
accumulates and updates LVLMs' attention on image tokens during generation. By
enhancing the model's attention to image tokens, TARAC mitigates hallucinations
caused by the decay of attention on image tokens. We validate the effectiveness
of TARAC across multiple models and datasets, demonstrating that our approach
substantially mitigates hallucinations. In particular, TARAC reduces $C_S$ by
25.2 and $C_I$ by 8.7 compared to VCD on the CHAIR benchmark.
|
2504.04104 | Mengbai Xiao | Haofei Yin, Mengbai Xiao, Rouzhou Lu, Xiao Zhang, Dongxiao Yu,
Guanghui Zhang | PipeDec: Low-Latency Pipeline-based Inference with Dynamic Speculative
Decoding towards Large-scale Models | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autoregressive large language model inference primarily consists of two
stages: pre-filling and decoding. Decoding involves sequential computation for
each token, which leads to significant latency. Speculative decoding is a
technique that leverages the draft model combined with large model verification
to enhance parallelism without sacrificing accuracy. However, existing external
prediction methods face challenges in adapting to multi-node serial
deployments. While they can maintain speedup under such conditions, the high
latency of multi-node deployments ultimately results in low overall efficiency.
We propose a speculative decoding framework named PipeDec to address the low
global resource utilization of single tasks in pipeline deployments, thereby
reducing decoding latency. We integrate a draft model into the pipeline of the
large model and immediately forward each prediction from the draft model to
subsequent pipeline stages. A dynamic prediction tree manages prediction
sequences across nodes, enabling efficient updating and pruning. This approach
leverages the draft model's predictions to utilize all pipeline nodes for
parallel decoding of a single task. Experiments were conducted using Llama 3.2
1B as the draft model in conjunction with a 14-stage parallel pipeline to
accelerate Llama 3.1 70B on six different types of datasets. During the decoding
phase of a single task, PipeDec achieved a 4.46x-7.79x speedup compared to
traditional pipeline parallelism and a 2.2x-2.69x speedup compared to baseline
tree-based speculative decoding methods. The code will be released after the
review process.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 08:31:10 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Yin",
"Haofei",
""
],
[
"Xiao",
"Mengbai",
""
],
[
"Lu",
"Rouzhou",
""
],
[
"Zhang",
"Xiao",
""
],
[
"Yu",
"Dongxiao",
""
],
[
"Zhang",
"Guanghui",
""
]
] | TITLE: PipeDec: Low-Latency Pipeline-based Inference with Dynamic Speculative
Decoding towards Large-scale Models
ABSTRACT: Autoregressive large language model inference primarily consists of two
stages: pre-filling and decoding. Decoding involves sequential computation for
each token, which leads to significant latency. Speculative decoding is a
technique that leverages the draft model combined with large model verification
to enhance parallelism without sacrificing accuracy. However, existing external
prediction methods face challenges in adapting to multi-node serial
deployments. While they can maintain speedup under such conditions, the high
latency of multi-node deployments ultimately results in low overall efficiency.
We propose a speculative decoding framework named PipeDec to address the low
global resource utilization of single tasks in pipeline deployments, thereby
reducing decoding latency. We integrate a draft model into the pipeline of the
large model and immediately forward each prediction from the draft model to
subsequent pipeline stages. A dynamic prediction tree manages prediction
sequences across nodes, enabling efficient updating and pruning. This approach
leverages the draft model's predictions to utilize all pipeline nodes for
parallel decoding of a single task. Experiments were conducted using Llama 3.2
1B as the draft model in conjunction with a 14-stage parallel pipeline to
accelerate Llama 3.1 70B on six different types of datasets. During the decoding
phase of a single task, PipeDec achieved a 4.46x-7.79x speedup compared to
traditional pipeline parallelism and a 2.2x-2.69x speedup compared to baseline
tree-based speculative decoding methods. The code will be released after the
review process.
|
2504.04105 | Ruiqi Zhang | Ruiqi Zhang, Jingfeng Wu, Licong Lin, Peter L. Bartlett | Minimax Optimal Convergence of Gradient Descent in Logistic Regression
via Large and Adaptive Stepsizes | 27 pages | null | null | null | stat.ML cs.LG | http://creativecommons.org/licenses/by/4.0/ | We study $\textit{gradient descent}$ (GD) for logistic regression on linearly
separable data with stepsizes that adapt to the current risk, scaled by a
constant hyperparameter $\eta$. We show that after at most $1/\gamma^2$ burn-in
steps, GD achieves a risk upper bounded by $\exp(-\Theta(\eta))$, where
$\gamma$ is the margin of the dataset. As $\eta$ can be arbitrarily large, GD
attains an arbitrarily small risk $\textit{immediately after the burn-in
steps}$, though the risk evolution may be $\textit{non-monotonic}$.
We further construct hard datasets with margin $\gamma$, where any batch or
online first-order method requires $\Omega(1/\gamma^2)$ steps to find a linear
separator. Thus, GD with large, adaptive stepsizes is $\textit{minimax
optimal}$ among first-order batch methods. Notably, the classical
$\textit{Perceptron}$ (Novikoff, 1962), a first-order online method, also
achieves a step complexity of $1/\gamma^2$, matching GD even in constants.
Finally, our GD analysis extends to a broad class of loss functions and
certain two-layer networks.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 08:34:20 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Zhang",
"Ruiqi",
""
],
[
"Wu",
"Jingfeng",
""
],
[
"Lin",
"Licong",
""
],
[
"Bartlett",
"Peter L.",
""
]
] | TITLE: Minimax Optimal Convergence of Gradient Descent in Logistic Regression
via Large and Adaptive Stepsizes
ABSTRACT: We study $\textit{gradient descent}$ (GD) for logistic regression on linearly
separable data with stepsizes that adapt to the current risk, scaled by a
constant hyperparameter $\eta$. We show that after at most $1/\gamma^2$ burn-in
steps, GD achieves a risk upper bounded by $\exp(-\Theta(\eta))$, where
$\gamma$ is the margin of the dataset. As $\eta$ can be arbitrarily large, GD
attains an arbitrarily small risk $\textit{immediately after the burn-in
steps}$, though the risk evolution may be $\textit{non-monotonic}$.
We further construct hard datasets with margin $\gamma$, where any batch or
online first-order method requires $\Omega(1/\gamma^2)$ steps to find a linear
separator. Thus, GD with large, adaptive stepsizes is $\textit{minimax
optimal}$ among first-order batch methods. Notably, the classical
$\textit{Perceptron}$ (Novikoff, 1962), a first-order online method, also
achieves a step complexity of $1/\gamma^2$, matching GD even in constants.
Finally, our GD analysis extends to a broad class of loss functions and
certain two-layer networks.
|
2504.04120 | Bingxu Wang | Bingxu Wang, Kunzhi Cai, Yuqi Zhang and Yachong Guo | Transformer representation learning is necessary for dynamic multi-modal
physiological data on small-cohort patients | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Postoperative delirium (POD), a severe neuropsychiatric complication
affecting nearly 50% of high-risk surgical patients, is defined as an acute
disorder of attention and cognition. It remains significantly underdiagnosed in
intensive care units (ICUs) due to subjective monitoring methods. Early and
accurate diagnosis of POD is critical and achievable. Here, we propose a POD
prediction framework comprising a Transformer representation model followed by
traditional machine learning algorithms. Our approach utilizes multi-modal
physiological data, including amplitude-integrated electroencephalography
(aEEG), vital signs, electrocardiographic monitor data as well as hemodynamic
parameters. We curated the first multi-modal POD dataset encompassing two
patient types and evaluated various Transformer architectures for
representation learning. Empirical results indicate consistent improvements
in sensitivity and Youden index for patient TYPE I using Transformer
representations, particularly our fusion adaptation of Pathformer. By enabling
effective delirium diagnosis from postoperative day 1 to 3, our extensive
experimental findings emphasize the potential of multi-modal physiological data
and highlight the necessity of representation learning via multi-modal
Transformer architecture in clinical diagnosis.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 09:31:39 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wang",
"Bingxu",
""
],
[
"Cai",
"Kunzhi",
""
],
[
"Zhang",
"Yuqi",
""
],
[
"Guo",
"Yachong",
""
]
] | TITLE: Transformer representation learning is necessary for dynamic multi-modal
physiological data on small-cohort patients
ABSTRACT: Postoperative delirium (POD), a severe neuropsychiatric complication
affecting nearly 50% of high-risk surgical patients, is defined as an acute
disorder of attention and cognition. It remains significantly underdiagnosed in
intensive care units (ICUs) due to subjective monitoring methods. Early and
accurate diagnosis of POD is critical and achievable. Here, we propose a POD
prediction framework comprising a Transformer representation model followed by
traditional machine learning algorithms. Our approach utilizes multi-modal
physiological data, including amplitude-integrated electroencephalography
(aEEG), vital signs, electrocardiographic monitor data as well as hemodynamic
parameters. We curated the first multi-modal POD dataset encompassing two
patient types and evaluated various Transformer architectures for
representation learning. Empirical results indicate consistent improvements
in sensitivity and Youden index for patient TYPE I using Transformer
representations, particularly our fusion adaptation of Pathformer. By enabling
effective delirium diagnosis from postoperative day 1 to 3, our extensive
experimental findings emphasize the potential of multi-modal physiological data
and highlight the necessity of representation learning via multi-modal
Transformer architecture in clinical diagnosis.
|
2504.04121 | Lixiang Xu | Lixiang Xu, Xianwei Ding, Xin Yuan, Zhanlong Wang, Lu Bai, Enhong
Chen, Philip S. Yu, and Yuanyan Tang | Improving Question Embeddings with Cognitive Representation Optimization
for Knowledge Tracing | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Knowledge Tracing (KT) aims to track changes in students' knowledge
status and predict their future answers based on their historical answer
records. Current research on KT modeling focuses on predicting students' future
performance based on existing, unupdated records of student learning
interactions. However, these approaches ignore the distractors (such as
slipping and guessing) in the answering process and overlook that static
cognitive representations are temporary and limited. Most of them assume that
there are no distractors in the answering process and that the record
representations fully represent the students' level of understanding and
proficiency in knowledge. In this case, many inconsistency and incoordination
issues may arise in the original records. Therefore, we propose a Cognitive
Representation Optimization for Knowledge Tracing (CRO-KT) model, which
utilizes a dynamic programming algorithm to optimize the structure of cognitive
representations. This ensures that the structure matches the students'
cognitive patterns in terms of the difficulty of the exercises. Furthermore, we
use the co-optimization algorithm to optimize the cognitive representations of
the sub-target exercises in terms of the overall situation of exercise
responses by considering all the exercises with co-relationships as a single
goal. Meanwhile, the CRO-KT model fuses the learned relational embeddings from
the bipartite graph with the optimized record representations in a weighted
manner, enhancing the expression of students' cognition. Finally, experiments
are conducted on three publicly available datasets to validate the
effectiveness of the proposed cognitive representation optimization model.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 09:32:03 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Xu",
"Lixiang",
""
],
[
"Ding",
"Xianwei",
""
],
[
"Yuan",
"Xin",
""
],
[
"Wang",
"Zhanlong",
""
],
[
"Bai",
"Lu",
""
],
[
"Chen",
"Enhong",
""
],
[
"Yu",
"Philip S.",
""
],
[
"Tang",
"Yuanyan",
""
]
] | TITLE: Improving Question Embeddings with Cognitive Representation Optimization
for Knowledge Tracing
ABSTRACT: Knowledge Tracing (KT) aims to track changes in students' knowledge
status and predict their future answers based on their historical answer
records. Current research on KT modeling focuses on predicting students' future
performance based on existing, unupdated records of student learning
interactions. However, these approaches ignore the distractors (such as
slipping and guessing) in the answering process and overlook that static
cognitive representations are temporary and limited. Most of them assume that
there are no distractors in the answering process and that the record
representations fully represent the students' level of understanding and
proficiency in knowledge. In this case, many inconsistency and incoordination
issues may arise in the original records. Therefore, we propose a Cognitive
Representation Optimization for Knowledge Tracing (CRO-KT) model, which
utilizes a dynamic programming algorithm to optimize the structure of cognitive
representations. This ensures that the structure matches the students'
cognitive patterns in terms of the difficulty of the exercises. Furthermore, we
use the co-optimization algorithm to optimize the cognitive representations of
the sub-target exercises in terms of the overall situation of exercise
responses by considering all the exercises with co-relationships as a single
goal. Meanwhile, the CRO-KT model fuses the learned relational embeddings from
the bipartite graph with the optimized record representations in a weighted
manner, enhancing the expression of students' cognition. Finally, experiments
are conducted on three publicly available datasets to validate the
effectiveness of the proposed cognitive representation optimization model.
|
2504.04124 | Abdul Hannan Khan | Muhammad Ahmed Ullah Khan, Abdul Hannan Khan, Andreas Dengel | EMF: Event Meta Formers for Event-based Real-time Traffic Object
Detection | 10 pages, 2 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Event cameras have higher temporal resolution, and require less storage and
bandwidth compared to traditional RGB cameras. However, due to the relatively
lagging performance of event-based approaches, event cameras have not yet
replaced traditional cameras in performance-critical applications like
autonomous driving. Recent approaches in event-based object detection try to
bridge this gap by employing computationally expensive transformer-based
solutions. However, due to their resource-intensive components, these solutions
fail to exploit the sparsity and higher temporal resolution of event cameras
efficiently. Moreover, these solutions are adopted from the vision domain,
lacking specificity to the event cameras. In this work, we explore efficient
and performant alternatives to recurrent vision transformer models and propose
a novel event-based object detection backbone. The proposed backbone employs a
novel Event Progression Extractor module, tailored specifically for event data,
and uses the MetaFormer concept with efficient convolution-based components. We
evaluate the resultant model on well-established traffic object detection
benchmarks and conduct cross-dataset evaluation to test its ability to
generalize. The proposed model outperforms the state-of-the-art on the Prophesee
Gen1 dataset by 1.6 mAP while reducing inference time by 14%. Our proposed EMF
becomes the fastest DNN-based architecture in the domain, outperforming the most
efficient event-based object detectors. Moreover, the proposed model shows
better ability to generalize to unseen data and scales better with the
abundance of data.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 09:48:40 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Khan",
"Muhammad Ahmed Ullah",
""
],
[
"Khan",
"Abdul Hannan",
""
],
[
"Dengel",
"Andreas",
""
]
] | TITLE: EMF: Event Meta Formers for Event-based Real-time Traffic Object
Detection
ABSTRACT: Event cameras have higher temporal resolution, and require less storage and
bandwidth compared to traditional RGB cameras. However, due to the relatively
lagging performance of event-based approaches, event cameras have not yet
replaced traditional cameras in performance-critical applications like
autonomous driving. Recent approaches in event-based object detection try to
bridge this gap by employing computationally expensive transformer-based
solutions. However, due to their resource-intensive components, these solutions
fail to exploit the sparsity and higher temporal resolution of event cameras
efficiently. Moreover, these solutions are adopted from the vision domain,
lacking specificity to the event cameras. In this work, we explore efficient
and performant alternatives to recurrent vision transformer models and propose
a novel event-based object detection backbone. The proposed backbone employs a
novel Event Progression Extractor module, tailored specifically for event data,
and uses the MetaFormer concept with efficient convolution-based components. We
evaluate the resultant model on well-established traffic object detection
benchmarks and conduct cross-dataset evaluation to test its ability to
generalize. The proposed model outperforms the state-of-the-art on the Prophesee
Gen1 dataset by 1.6 mAP while reducing inference time by 14%. Our proposed EMF
becomes the fastest DNN-based architecture in the domain, outperforming the most
efficient event-based object detectors. Moreover, the proposed model shows
better ability to generalize to unseen data and scales better with the
abundance of data.
|
2504.04126 | Zhenzhi Wang | Zhenzhi Wang, Yixuan Li, Yanhong Zeng, Yuwei Guo, Dahua Lin, Tianfan
Xue, Bo Dai | Multi-identity Human Image Animation with Structural Video Diffusion | 11 pages | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Generating human videos from a single image while ensuring high visual
quality and precise control is a challenging task, especially in complex
scenarios involving multiple individuals and interactions with objects.
Existing methods, while effective for single-human cases, often fail to handle
the intricacies of multi-identity interactions because they struggle to
associate the correct pairs of human appearance and pose condition and model
the distribution of 3D-aware dynamics. To address these limitations, we present
Structural Video Diffusion, a novel framework designed for generating realistic
multi-human videos. Our approach introduces two core innovations:
identity-specific embeddings to maintain consistent appearances across
individuals and a structural learning mechanism that incorporates depth and
surface-normal cues to model human-object interactions. Additionally, we expand
existing human video datasets with 25K new videos featuring diverse multi-human
and object interaction scenarios, providing a robust foundation for training.
Experimental results demonstrate that Structural Video Diffusion achieves
superior performance in generating lifelike, coherent videos for multiple
subjects with dynamic and rich interactions, advancing the state of
human-centric video generation.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 10:03:49 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wang",
"Zhenzhi",
""
],
[
"Li",
"Yixuan",
""
],
[
"Zeng",
"Yanhong",
""
],
[
"Guo",
"Yuwei",
""
],
[
"Lin",
"Dahua",
""
],
[
"Xue",
"Tianfan",
""
],
[
"Dai",
"Bo",
""
]
] | TITLE: Multi-identity Human Image Animation with Structural Video Diffusion
ABSTRACT: Generating human videos from a single image while ensuring high visual
quality and precise control is a challenging task, especially in complex
scenarios involving multiple individuals and interactions with objects.
Existing methods, while effective for single-human cases, often fail to handle
the intricacies of multi-identity interactions because they struggle to
associate the correct pairs of human appearance and pose condition and model
the distribution of 3D-aware dynamics. To address these limitations, we present
Structural Video Diffusion, a novel framework designed for generating realistic
multi-human videos. Our approach introduces two core innovations:
identity-specific embeddings to maintain consistent appearances across
individuals and a structural learning mechanism that incorporates depth and
surface-normal cues to model human-object interactions. Additionally, we expand
existing human video datasets with 25K new videos featuring diverse multi-human
and object interaction scenarios, providing a robust foundation for training.
Experimental results demonstrate that Structural Video Diffusion achieves
superior performance in generating lifelike, coherent videos for multiple
subjects with dynamic and rich interactions, advancing the state of
human-centric video generation.
|
2504.04128 | Chaoxiong Ma | Chaoxiong Ma and Yan Liang and Huixia Zhang and Hao Sun | Guaranteeing consistency in evidence fusion: A novel perspective on
credibility | 29 pages, 10 figures | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We observe that available credible evidence fusion schemes suffer from
potential inconsistency because credibility calculation and Dempster's
combination rule-based fusion are performed sequentially in an open-loop style.
This paper constructs evidence credibility from the perspective of the degree
of support for events within the frame of discernment (FOD) and proposes
an iterative credible evidence fusion (ICEF) scheme to overcome the inconsistency
from the viewpoint of closed-loop control. On one hand, the ICEF introduces the fusion result
into credibility assessment to establish the correlation between credibility
and the fusion result. On the other hand, arithmetic-geometric divergence is
promoted based on the exponential normalization of plausibility and belief
functions to measure evidence conflict, called plausibility-belief
arithmetic-geometric divergence (PBAGD), which is superior in capturing the
correlation and difference of FOD subsets, identifying abnormal sources, and
reducing their fusion weights. The ICEF is compared with traditional methods by
combining different evidence difference measure forms via numerical examples to
verify its performance. Simulations on numerical examples and benchmark
datasets reflect the adaptability of PBAGD to the proposed fusion strategy.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 10:12:32 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Ma",
"Chaoxiong",
""
],
[
"Liang",
"Yan",
""
],
[
"Zhang",
"Huixia",
""
],
[
"Sun",
"Hao",
""
]
] | TITLE: Guaranteeing consistency in evidence fusion: A novel perspective on
credibility
ABSTRACT: It is shown that available credible evidence fusion schemes suffer from
the potential inconsistency because credibility calculation and Dempster's
combination rule-based fusion are sequentially performed in an open-loop style.
This paper constructs evidence credibility from the perspective of the degree
of support for events within the frame of discernment (FOD) and proposes an
iterative credible evidence fusion (ICEF) scheme to overcome the inconsistency
from the viewpoint of closed-loop control. On one hand, ICEF introduces the fusion result
into credibility assessment to establish the correlation between credibility
and the fusion result. On the other hand, arithmetic-geometric divergence is
proposed based on the exponential normalization of plausibility and belief
functions to measure evidence conflict, called plausibility-belief
arithmetic-geometric divergence (PBAGD), which is superior in capturing the
correlation and difference of FOD subsets, identifying abnormal sources, and
reducing their fusion weights. The ICEF is compared with traditional methods by
combining different evidence difference measure forms via numerical examples to
verify its performance. Simulations on numerical examples and benchmark
datasets reflect the adaptability of PBAGD to the proposed fusion strategy.
|
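The abstract above centers on Dempster's combination rule, applied in an open-loop style. As a self-contained illustration of that rule only (not the paper's ICEF scheme or its PBAGD divergence), combining two mass functions over frozenset focal elements can be sketched as:

```python
def dempster_combine(m1, m2):
    """Dempster's rule: multiply masses over intersecting focal elements,
    discard conflicting (empty-intersection) mass, then renormalize."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb
    if conflict >= 1.0:
        raise ValueError("total conflict: combination undefined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}

# Two sources over a frame {A, B}; 0.6*0.7 of the mass conflicts.
A, B = frozenset("A"), frozenset("B")
m1 = {A: 0.6, frozenset("AB"): 0.4}
m2 = {B: 0.7, frozenset("AB"): 0.3}
fused = dempster_combine(m1, m2)
```

The renormalization by `1 - conflict` is what makes the rule sensitive to abnormal sources, which is the behavior the credibility weighting in the paper is designed to temper.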
2504.04130 | Iulian-Marius T\u{a}iatu | Andrei-Alexandru Preda, Iulian-Marius T\u{a}iatu, Dumitru-Clementin
Cercel | Scaling Federated Learning Solutions with Kubernetes for Synthesizing
Histopathology Images | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-sa/4.0/ | In the field of deep learning, large architectures often obtain the best
performance for many tasks, but also require massive datasets. In the
histological domain, tissue images are expensive to obtain and constitute
sensitive medical information, raising concerns about data scarcity and
privacy. Vision Transformers are state-of-the-art computer vision models that
have proven helpful in many tasks, including image classification. In this
work, we combine vision Transformers with generative adversarial networks to
generate histopathological images related to colorectal cancer and test their
quality by augmenting a training dataset, leading to improved classification
accuracy. Then, we replicate this performance using the federated learning
technique and a realistic Kubernetes setup with multiple nodes, simulating a
scenario where the training dataset is split among several hospitals unable to
share their information directly due to privacy concerns.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 10:32:56 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Preda",
"Andrei-Alexandru",
""
],
[
"Tăiatu",
"Iulian-Marius",
""
],
[
"Cercel",
"Dumitru-Clementin",
""
]
] | TITLE: Scaling Federated Learning Solutions with Kubernetes for Synthesizing
Histopathology Images
ABSTRACT: In the field of deep learning, large architectures often obtain the best
performance for many tasks, but also require massive datasets. In the
histological domain, tissue images are expensive to obtain and constitute
sensitive medical information, raising concerns about data scarcity and
privacy. Vision Transformers are state-of-the-art computer vision models that
have proven helpful in many tasks, including image classification. In this
work, we combine vision Transformers with generative adversarial networks to
generate histopathological images related to colorectal cancer and test their
quality by augmenting a training dataset, leading to improved classification
accuracy. Then, we replicate this performance using the federated learning
technique and a realistic Kubernetes setup with multiple nodes, simulating a
scenario where the training dataset is split among several hospitals unable to
share their information directly due to privacy concerns.
|
2504.04131 | Michael Bommarito | Michael J Bommarito, Daniel Martin Katz, Jillian Bommarito | Precise Legal Sentence Boundary Detection for Retrieval at Scale:
NUPunkt and CharBoundary | 12 pages, 5 figures, 6 tables | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | We present NUPunkt and CharBoundary, two sentence boundary detection
libraries optimized for high-precision, high-throughput processing of legal
text in large-scale applications such as due diligence, e-discovery, and legal
research. These libraries address the critical challenges posed by legal
documents containing specialized citations, abbreviations, and complex sentence
structures that confound general-purpose sentence boundary detectors.
Our experimental evaluation on five diverse legal datasets comprising over
25,000 documents and 197,000 annotated sentence boundaries demonstrates that
NUPunkt achieves 91.1% precision while processing 10 million characters per
second with modest memory requirements (432 MB). CharBoundary models offer
balanced and adjustable precision-recall tradeoffs, with the large model
achieving the highest F1 score (0.782) among all tested methods.
Notably, NUPunkt provides a 29-32% precision improvement over general-purpose
tools while maintaining exceptional throughput, processing multi-million
document collections in minutes rather than hours. Both libraries run
efficiently on standard CPU hardware without requiring specialized
accelerators. NUPunkt is implemented in pure Python with zero external
dependencies, while CharBoundary relies only on scikit-learn and optional ONNX
runtime integration for optimized performance. Both libraries are available
under the MIT license, can be installed via PyPI, and can be interactively
tested at https://sentences.aleainstitute.ai/.
These libraries address critical precision issues in retrieval-augmented
generation systems by preserving coherent legal concepts across sentences,
where each percentage improvement in precision yields exponentially greater
reductions in context fragmentation, creating cascading benefits throughout
retrieval pipelines and significantly enhancing downstream reasoning quality.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 10:48:34 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Bommarito",
"Michael J",
""
],
[
"Katz",
"Daniel Martin",
""
],
[
"Bommarito",
"Jillian",
""
]
] | TITLE: Precise Legal Sentence Boundary Detection for Retrieval at Scale:
NUPunkt and CharBoundary
ABSTRACT: We present NUPunkt and CharBoundary, two sentence boundary detection
libraries optimized for high-precision, high-throughput processing of legal
text in large-scale applications such as due diligence, e-discovery, and legal
research. These libraries address the critical challenges posed by legal
documents containing specialized citations, abbreviations, and complex sentence
structures that confound general-purpose sentence boundary detectors.
Our experimental evaluation on five diverse legal datasets comprising over
25,000 documents and 197,000 annotated sentence boundaries demonstrates that
NUPunkt achieves 91.1% precision while processing 10 million characters per
second with modest memory requirements (432 MB). CharBoundary models offer
balanced and adjustable precision-recall tradeoffs, with the large model
achieving the highest F1 score (0.782) among all tested methods.
Notably, NUPunkt provides a 29-32% precision improvement over general-purpose
tools while maintaining exceptional throughput, processing multi-million
document collections in minutes rather than hours. Both libraries run
efficiently on standard CPU hardware without requiring specialized
accelerators. NUPunkt is implemented in pure Python with zero external
dependencies, while CharBoundary relies only on scikit-learn and optional ONNX
runtime integration for optimized performance. Both libraries are available
under the MIT license, can be installed via PyPI, and can be interactively
tested at https://sentences.aleainstitute.ai/.
These libraries address critical precision issues in retrieval-augmented
generation systems by preserving coherent legal concepts across sentences,
where each percentage improvement in precision yields exponentially greater
reductions in context fragmentation, creating cascading benefits throughout
retrieval pipelines and significantly enhancing downstream reasoning quality.
|
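The precision/recall figures quoted for NUPunkt and CharBoundary follow the standard definitions over predicted versus gold boundary offsets. A minimal sketch of that evaluation (the helper and offsets here are illustrative, not the libraries' own evaluation code):

```python
def boundary_prf(predicted, gold):
    """Precision, recall, and F1 over sentence-boundary character offsets."""
    predicted, gold = set(predicted), set(gold)
    tp = len(predicted & gold)                       # correctly placed boundaries
    precision = tp / len(predicted) if predicted else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Hypothetical offsets: two of three predictions match the gold standard.
p, r, f1 = boundary_prf([10, 42, 77], [10, 42, 60])
```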
2504.04138 | Mridul Kumar | Mridul Kumar, Deepali Jain, Zeeshan Saifi, Soami Daya Krishnananda | Predicting Soil Macronutrient Levels: A Machine Learning Approach Models
Trained on pH, Conductivity, and Average Power of Acid-Base Solutions | null | null | null | null | cs.LG cs.AI physics.bio-ph q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Soil macronutrients, particularly potassium ions (K$^+$), are indispensable
for plant health, underpinning various physiological and biological processes,
and facilitating the management of both biotic and abiotic stresses. Deficient
macronutrient content results in stunted growth, delayed maturation, and
increased vulnerability to environmental stressors, thereby accentuating the
imperative for precise soil nutrient monitoring. Traditional techniques such as
chemical assays, atomic absorption spectroscopy, inductively coupled plasma
optical emission spectroscopy, and electrochemical methods, albeit advanced,
are prohibitively expensive and time-intensive, thus unsuitable for real-time
macronutrient assessment. In this study, we propose an innovative soil testing
protocol utilizing a dataset derived from synthetic solutions to model soil
behaviour. The dataset encompasses physical properties including conductivity
and pH, with a focus on three key macronutrients: nitrogen (N),
phosphorus (P), and potassium (K). Four machine learning algorithms were
applied to the dataset, with random forest regressors and neural networks being
selected for the prediction of soil nutrient concentrations. Comparative
analysis with laboratory soil testing results revealed prediction errors of
23.6% for phosphorus and 16% for potassium using the random forest model, and
26.3% for phosphorus and 21.8% for potassium using the neural network model.
This methodology illustrates a cost-effective and efficacious strategy for
real-time soil nutrient monitoring, offering substantial advancements over
conventional techniques and enhancing the capability to sustain optimal
nutrient levels conducive to robust crop growth.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 11:04:48 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Kumar",
"Mridul",
""
],
[
"Jain",
"Deepali",
""
],
[
"Saifi",
"Zeeshan",
""
],
[
"Krishnananda",
"Soami Daya",
""
]
] | TITLE: Predicting Soil Macronutrient Levels: A Machine Learning Approach Models
Trained on pH, Conductivity, and Average Power of Acid-Base Solutions
ABSTRACT: Soil macronutrients, particularly potassium ions (K$^+$), are indispensable
for plant health, underpinning various physiological and biological processes,
and facilitating the management of both biotic and abiotic stresses. Deficient
macronutrient content results in stunted growth, delayed maturation, and
increased vulnerability to environmental stressors, thereby accentuating the
imperative for precise soil nutrient monitoring. Traditional techniques such as
chemical assays, atomic absorption spectroscopy, inductively coupled plasma
optical emission spectroscopy, and electrochemical methods, albeit advanced,
are prohibitively expensive and time-intensive, thus unsuitable for real-time
macronutrient assessment. In this study, we propose an innovative soil testing
protocol utilizing a dataset derived from synthetic solutions to model soil
behaviour. The dataset encompasses physical properties including conductivity
and pH, with a focus on three key macronutrients: nitrogen (N),
phosphorus (P), and potassium (K). Four machine learning algorithms were
applied to the dataset, with random forest regressors and neural networks being
selected for the prediction of soil nutrient concentrations. Comparative
analysis with laboratory soil testing results revealed prediction errors of
23.6% for phosphorus and 16% for potassium using the random forest model, and
26.3% for phosphorus and 21.8% for potassium using the neural network model.
This methodology illustrates a cost-effective and efficacious strategy for
real-time soil nutrient monitoring, offering substantial advancements over
conventional techniques and enhancing the capability to sustain optimal
nutrient levels conducive to robust crop growth.
|
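The "prediction errors of 23.6% ... and 16%" above are relative errors against laboratory measurements; the abstract does not specify the exact metric, so the mean absolute percentage error below is an assumption, with hypothetical concentration values:

```python
def mape(predicted, measured):
    """Mean absolute percentage error of model predictions vs. lab values."""
    assert len(predicted) == len(measured) and len(measured) > 0
    return 100.0 * sum(abs(p - m) / abs(m)
                       for p, m in zip(predicted, measured)) / len(measured)

# Hypothetical predicted vs. lab potassium readings: (20% + 5%) / 2 = 12.5%.
err = mape([120.0, 95.0], [100.0, 100.0])
```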
2504.04158 | Yunlong Lin | Yunlong Lin, Zixu Lin, Haoyu Chen, Panwang Pan, Chenxin Li, Sixiang
Chen, Yeying Jin, Wenbo Li, Xinghao Ding | JarvisIR: Elevating Autonomous Driving Perception with Intelligent Image
Restoration | 25 pages, 15 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision-centric perception systems struggle with unpredictable and coupled
weather degradations in the wild. Current solutions are often limited, as they
either depend on specific degradation priors or suffer from significant domain
gaps. To enable robust and autonomous operation in real-world conditions, we
propose JarvisIR, a VLM-powered agent that leverages the VLM as a controller to
manage multiple expert restoration models. To further enhance system
robustness, reduce hallucinations, and improve generalizability in real-world
adverse weather, JarvisIR employs a novel two-stage framework consisting of
supervised fine-tuning and human feedback alignment. Specifically, to address
the lack of paired data in real-world scenarios, the human feedback alignment
enables the VLM to be fine-tuned effectively on large-scale real-world data in
an unsupervised manner. To support the training and evaluation of JarvisIR, we
introduce CleanBench, a comprehensive dataset consisting of high-quality and
large-scale instruction-response pairs, including 150K synthetic entries and
80K real entries. Extensive experiments demonstrate that JarvisIR exhibits
superior decision-making and restoration capabilities. Compared with existing
methods, it achieves a 50% improvement in the average of all perception metrics
on CleanBench-Real. Project page: https://cvpr2025-jarvisir.github.io/.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 12:38:55 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Lin",
"Yunlong",
""
],
[
"Lin",
"Zixu",
""
],
[
"Chen",
"Haoyu",
""
],
[
"Pan",
"Panwang",
""
],
[
"Li",
"Chenxin",
""
],
[
"Chen",
"Sixiang",
""
],
[
"Jin",
"Yeying",
""
],
[
"Li",
"Wenbo",
""
],
[
"Ding",
"Xinghao",
""
]
] | TITLE: JarvisIR: Elevating Autonomous Driving Perception with Intelligent Image
Restoration
ABSTRACT: Vision-centric perception systems struggle with unpredictable and coupled
weather degradations in the wild. Current solutions are often limited, as they
either depend on specific degradation priors or suffer from significant domain
gaps. To enable robust and autonomous operation in real-world conditions, we
propose JarvisIR, a VLM-powered agent that leverages the VLM as a controller to
manage multiple expert restoration models. To further enhance system
robustness, reduce hallucinations, and improve generalizability in real-world
adverse weather, JarvisIR employs a novel two-stage framework consisting of
supervised fine-tuning and human feedback alignment. Specifically, to address
the lack of paired data in real-world scenarios, the human feedback alignment
enables the VLM to be fine-tuned effectively on large-scale real-world data in
an unsupervised manner. To support the training and evaluation of JarvisIR, we
introduce CleanBench, a comprehensive dataset consisting of high-quality and
large-scale instruction-response pairs, including 150K synthetic entries and
80K real entries. Extensive experiments demonstrate that JarvisIR exhibits
superior decision-making and restoration capabilities. Compared with existing
methods, it achieves a 50% improvement in the average of all perception metrics
on CleanBench-Real. Project page: https://cvpr2025-jarvisir.github.io/.
|
2504.04178 | Bohao Wang | Bohao Wang, Feng Liu, Jiawei Chen, Xingyu Lou, Changwang Zhang, Jun
Wang, Yuegang Sun, Yan Feng, Chun Chen, Can Wang | MSL: Not All Tokens Are What You Need for Tuning LLM as a Recommender | null | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs), known for their comprehension capabilities and
extensive knowledge, have been increasingly applied to recommendation systems
(RS). Given the fundamental gap between the mechanism of LLMs and the
requirement of RS, researchers have focused on fine-tuning LLMs with
recommendation-specific data to enhance their performance. Language Modeling
Loss (LML), originally designed for language generation tasks, is commonly
adopted. However, we identify two critical limitations of LML: 1) it exhibits
significant divergence from the recommendation objective; 2) it erroneously
treats all fictitious item descriptions as negative samples, introducing
misleading training signals.
To address these limitations, we propose a novel Masked Softmax Loss (MSL)
tailored for fine-tuning LLMs on recommendation. MSL improves LML by
identifying and masking invalid tokens that could lead to fictitious item
descriptions during loss computation. This strategy can effectively avoid the
interference from erroneous negative signals and ensure good alignment with the
recommendation objective supported by theoretical guarantees. During
implementation, we identify a potential challenge related to gradient vanishing
of MSL. To overcome this, we further introduce the temperature coefficient and
propose an Adaptive Temperature Strategy (ATS) that adaptively adjusts the
temperature without requiring extensive hyperparameter tuning. Extensive
experiments conducted on four public datasets further validate the
effectiveness of MSL, achieving an average improvement of 42.24% in NDCG@10.
The code is available at https://github.com/WANGBohaO-jpg/MSL.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 13:48:33 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wang",
"Bohao",
""
],
[
"Liu",
"Feng",
""
],
[
"Chen",
"Jiawei",
""
],
[
"Lou",
"Xingyu",
""
],
[
"Zhang",
"Changwang",
""
],
[
"Wang",
"Jun",
""
],
[
"Sun",
"Yuegang",
""
],
[
"Feng",
"Yan",
""
],
[
"Chen",
"Chun",
""
],
[
"Wang",
"Can",
""
]
] | TITLE: MSL: Not All Tokens Are What You Need for Tuning LLM as a Recommender
ABSTRACT: Large language models (LLMs), known for their comprehension capabilities and
extensive knowledge, have been increasingly applied to recommendation systems
(RS). Given the fundamental gap between the mechanism of LLMs and the
requirement of RS, researchers have focused on fine-tuning LLMs with
recommendation-specific data to enhance their performance. Language Modeling
Loss (LML), originally designed for language generation tasks, is commonly
adopted. However, we identify two critical limitations of LML: 1) it exhibits
significant divergence from the recommendation objective; 2) it erroneously
treats all fictitious item descriptions as negative samples, introducing
misleading training signals.
To address these limitations, we propose a novel Masked Softmax Loss (MSL)
tailored for fine-tuning LLMs on recommendation. MSL improves LML by
identifying and masking invalid tokens that could lead to fictitious item
descriptions during loss computation. This strategy can effectively avoid the
interference from erroneous negative signals and ensure good alignment with the
recommendation objective supported by theoretical guarantees. During
implementation, we identify a potential challenge related to gradient vanishing
of MSL. To overcome this, we further introduce the temperature coefficient and
propose an Adaptive Temperature Strategy (ATS) that adaptively adjusts the
temperature without requiring extensive hyperparameter tuning. Extensive
experiments conducted on four public datasets further validate the
effectiveness of MSL, achieving an average improvement of 42.24% in NDCG@10.
The code is available at https://github.com/WANGBohaO-jpg/MSL.
|
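The core idea of the Masked Softmax Loss above, excluding invalid tokens from the softmax normalization so they never act as negative samples, can be sketched in a few lines. This is a generic illustration with assumed mask semantics, not the paper's released implementation:

```python
import numpy as np

def masked_softmax_loss(logits, target, valid_mask):
    """Cross-entropy over a restricted candidate set: tokens with
    valid_mask == False are set to -inf and drop out of the denominator."""
    masked = np.where(valid_mask, logits, -np.inf)
    shifted = masked - masked.max()          # numerical stability
    exp = np.exp(shifted)                    # exp(-inf) -> 0 for masked tokens
    probs = exp / exp.sum()
    return -np.log(probs[target])

# Masking token 2 removes it from the normalization, so the loss on the
# true token 0 is lower than with the unrestricted softmax.
logits = np.array([2.0, 1.0, 1.5])
full = masked_softmax_loss(logits, 0, np.array([True, True, True]))
masked = masked_softmax_loss(logits, 0, np.array([True, True, False]))
```

A temperature coefficient, as in the paper's ATS, would simply divide `logits` before masking.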
2504.04185 | Yuanchao Wu | Dong Liu, Yuanchao Wu, Bowen Tong, Jiansong Deng | SDEIT: Semantic-Driven Electrical Impedance Tomography | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Regularization methods using prior knowledge are essential in solving
ill-posed inverse problems such as Electrical Impedance Tomography (EIT).
However, designing effective regularization and integrating prior information
into EIT remains challenging due to the complexity and variability of
anatomical structures. In this work, we introduce SDEIT, a novel
semantic-driven framework that integrates Stable Diffusion 3.5 into EIT,
marking the first use of large-scale text-to-image generation models in EIT.
SDEIT employs natural language prompts as semantic priors to guide the
reconstruction process. By coupling an implicit neural representation (INR)
network with a plug-and-play optimization scheme that leverages SD-generated
images as generative priors, SDEIT improves structural consistency and recovers
fine details. Importantly, this method does not rely on paired training
datasets, increasing its adaptability to varied EIT scenarios. Extensive
experiments on both simulated and experimental data demonstrate that SDEIT
outperforms state-of-the-art techniques, offering superior accuracy and
robustness. This work opens a new pathway for integrating multimodal priors
into ill-posed inverse problems like EIT.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 14:08:58 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Liu",
"Dong",
""
],
[
"Wu",
"Yuanchao",
""
],
[
"Tong",
"Bowen",
""
],
[
"Deng",
"Jiansong",
""
]
] | TITLE: SDEIT: Semantic-Driven Electrical Impedance Tomography
ABSTRACT: Regularization methods using prior knowledge are essential in solving
ill-posed inverse problems such as Electrical Impedance Tomography (EIT).
However, designing effective regularization and integrating prior information
into EIT remains challenging due to the complexity and variability of
anatomical structures. In this work, we introduce SDEIT, a novel
semantic-driven framework that integrates Stable Diffusion 3.5 into EIT,
marking the first use of large-scale text-to-image generation models in EIT.
SDEIT employs natural language prompts as semantic priors to guide the
reconstruction process. By coupling an implicit neural representation (INR)
network with a plug-and-play optimization scheme that leverages SD-generated
images as generative priors, SDEIT improves structural consistency and recovers
fine details. Importantly, this method does not rely on paired training
datasets, increasing its adaptability to varied EIT scenarios. Extensive
experiments on both simulated and experimental data demonstrate that SDEIT
outperforms state-of-the-art techniques, offering superior accuracy and
robustness. This work opens a new pathway for integrating multimodal priors
into ill-posed inverse problems like EIT.
|
2504.04187 | Chuadhry Mujeeb Ahmed | Chuadhry Mujeeb Ahmed (Newcastle University UK) | AttackLLM: LLM-based Attack Pattern Generation for an Industrial Control
System | null | null | null | null | cs.CR cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Malicious examples are crucial for evaluating the robustness of machine
learning algorithms under attack, particularly in Industrial Control Systems
(ICS). However, collecting normal and attack data in ICS environments is
challenging due to the scarcity of testbeds and the high cost of human
expertise. Existing datasets are often limited by the domain expertise of
practitioners, making the process costly and inefficient. The lack of
comprehensive attack pattern data poses a significant problem for developing
robust anomaly detection methods. In this paper, we propose a novel approach
that combines data-centric and design-centric methodologies to generate attack
patterns using large language models (LLMs). Our results demonstrate that the
attack patterns generated by LLMs not only surpass the quality and quantity of
those created by human experts but also offer a scalable solution that does not
rely on expensive testbeds or pre-existing attack examples. This multi-agent
based approach presents a promising avenue for enhancing the security and
resilience of ICS environments.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 14:11:47 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Ahmed",
"Chuadhry Mujeeb",
"",
"Newcastle University UK"
]
] | TITLE: AttackLLM: LLM-based Attack Pattern Generation for an Industrial Control
System
ABSTRACT: Malicious examples are crucial for evaluating the robustness of machine
learning algorithms under attack, particularly in Industrial Control Systems
(ICS). However, collecting normal and attack data in ICS environments is
challenging due to the scarcity of testbeds and the high cost of human
expertise. Existing datasets are often limited by the domain expertise of
practitioners, making the process costly and inefficient. The lack of
comprehensive attack pattern data poses a significant problem for developing
robust anomaly detection methods. In this paper, we propose a novel approach
that combines data-centric and design-centric methodologies to generate attack
patterns using large language models (LLMs). Our results demonstrate that the
attack patterns generated by LLMs not only surpass the quality and quantity of
those created by human experts but also offer a scalable solution that does not
rely on expensive testbeds or pre-existing attack examples. This multi-agent
based approach presents a promising avenue for enhancing the security and
resilience of ICS environments.
|
2504.04188 | Qunwei Li | Qunwei Li, Linghui Li, Jianbin Lin, Wenliang Zhong | Towards Principled Learning for Re-ranking in Recommender Systems | null | null | null | null | cs.IR cs.LG | http://creativecommons.org/licenses/by/4.0/ | As the final stage of recommender systems, re-ranking presents ordered item
lists to users that best match their interests. It plays such a critical role
and has become a trending research topic with much attention from both academia
and industry. Recent advances in re-ranking focus on attentive listwise
modeling of interactions and mutual influences among items to be re-ranked.
However, principles to guide the learning process of a re-ranker, and to
measure the quality of the output of the re-ranker, have always been missing.
In this paper, we study such principles to learn a good re-ranker. Two
principles are proposed, including convergence consistency and adversarial
consistency. These two principles can be applied in the learning of a generic
re-ranker and improve its performance. We validate such a finding by various
baseline methods over different datasets.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 14:14:36 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Li",
"Qunwei",
""
],
[
"Li",
"Linghui",
""
],
[
"Lin",
"Jianbin",
""
],
[
"Zhong",
"Wenliang",
""
]
] | TITLE: Towards Principled Learning for Re-ranking in Recommender Systems
ABSTRACT: As the final stage of recommender systems, re-ranking presents ordered item
lists to users that best match their interests. It plays such a critical role
and has become a trending research topic with much attention from both academia
and industry. Recent advances in re-ranking focus on attentive listwise
modeling of interactions and mutual influences among items to be re-ranked.
However, principles to guide the learning process of a re-ranker, and to
measure the quality of the output of the re-ranker, have always been missing.
In this paper, we study such principles to learn a good re-ranker. Two
principles are proposed, including convergence consistency and adversarial
consistency. These two principles can be applied in the learning of a generic
re-ranker and improve its performance. We validate such a finding by various
baseline methods over different datasets.
|
2504.04199 | Zihuai Zhao | Zihuai Zhao, Wenqi Fan, Yao Wu, Qing Li | Investigating and Mitigating Stereotype-aware Unfairness in LLM-based
Recommendations | null | null | null | null | cs.IR | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Large Language Models (LLMs) have demonstrated unprecedented language
understanding and reasoning capabilities to capture diverse user preferences
and advance personalized recommendations. Despite the growing interest in
LLM-based personalized recommendations, unique challenges are brought to the
trustworthiness of LLM-based recommender systems (LLM-RS), since LLMs are
likely to inherit stereotypes that are embedded ubiquitously in word embeddings
due to their training on large-scale uncurated datasets. This leads to LLM-RS
exhibiting stereotypical linguistic associations between users and items.
However, there remains a lack of studies investigating the simultaneous
existence of stereotypes between users and items in LLM-RS. To bridge this gap,
this study reveals a new variant of fairness between stereotype groups
containing both users and items, to quantify discrimination against stereotypes
in LLM-RS. Moreover, in this paper, to mitigate stereotype-aware unfairness in
textual user and item information, we propose a novel framework (MoS), in which
an insightful stereotype-wise routing strategy over multiple
stereotype-relevant experts is designed to learn unbiased representations
against different stereotypes in LLM-RS. Extensive experiments are conducted
to analyze the influence of stereotype-aware fairness in LLM-RS and the
effectiveness of our proposed methods, which consistently outperform
competitive benchmarks under various fairness settings.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 15:09:39 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Zhao",
"Zihuai",
""
],
[
"Fan",
"Wenqi",
""
],
[
"Wu",
"Yao",
""
],
[
"Li",
"Qing",
""
]
] | TITLE: Investigating and Mitigating Stereotype-aware Unfairness in LLM-based
Recommendations
ABSTRACT: Large Language Models (LLMs) have demonstrated unprecedented language
understanding and reasoning capabilities to capture diverse user preferences
and advance personalized recommendations. Despite the growing interest in
LLM-based personalized recommendations, unique challenges are brought to the
trustworthiness of LLM-based recommender systems (LLM-RS), since LLMs are
likely to inherit stereotypes that are embedded ubiquitously in word embeddings
due to their training on large-scale uncurated datasets. This leads to LLM-RS
exhibiting stereotypical linguistic associations between users and items.
However, there remains a lack of studies investigating the simultaneous
existence of stereotypes between users and items in LLM-RS. To bridge this gap,
this study reveals a new variant of fairness between stereotype groups
containing both users and items, to quantify discrimination against stereotypes
in LLM-RS. Moreover, in this paper, to mitigate stereotype-aware unfairness in
textual user and item information, we propose a novel framework (MoS), in which
an insightful stereotype-wise routing strategy over multiple
stereotype-relevant experts is designed to learn unbiased representations
against different stereotypes in LLM-RS. Extensive experiments are conducted
to analyze the influence of stereotype-aware fairness in LLM-RS and the
effectiveness of our proposed methods, which consistently outperform
competitive benchmarks under various fairness settings.
|
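The "stereotype-wise routing strategy over multiple stereotype-relevant experts" in MoS is described only at a high level above. A generic softmax-gated mixture-of-experts forward pass, with all names, shapes, and the linear-expert form assumed for illustration, looks like:

```python
import numpy as np

def moe_forward(x, gate_w, expert_ws):
    """Softmax gate over experts; output is the gate-weighted sum of
    each expert's linear transform of the input vector."""
    logits = x @ gate_w                          # (n_experts,)
    g = np.exp(logits - logits.max())
    g = g / g.sum()                              # gate probabilities
    outs = np.stack([x @ w for w in expert_ws])  # (n_experts, d_out)
    return g @ outs                              # (d_out,)

rng = np.random.default_rng(0)
x = rng.normal(size=4)
gate_w = rng.normal(size=(4, 3))                 # 3 hypothetical experts
expert_ws = [rng.normal(size=(4, 2)) for _ in range(3)]
y = moe_forward(x, gate_w, expert_ws)
```

In the paper's setting the gate would route on stereotype-relevant features so each expert learns a representation debiased against a different stereotype group.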
2504.04217 | Milad Rabiei | Farbod Younesi, Milad Rabiei, Soroush Keivanfard, Mohsen Sharifi,
Marzieh Ghayour Najafabadi, Bahar Moadeli, Arshia Jafari, Mohammad Hossein
Moaiyeri | An Optimized Density-Based Lane Keeping System for A Cost-Efficient
Autonomous Vehicle Platform: AurigaBot V1 | 12 pages, 14 figures | null | null | null | cs.RO cs.SY eess.SY | http://creativecommons.org/licenses/by/4.0/ | The development of self-driving cars has garnered significant attention from
researchers, universities, and industries worldwide. Autonomous vehicles
integrate numerous subsystems, including lane tracking, object detection, and
vehicle control, which require thorough testing and validation. Scaled-down
vehicles offer a cost-effective and accessible platform for experimentation,
providing researchers with opportunities to optimize algorithms under
constraints of limited computational power. This paper presents a four-wheeled
autonomous vehicle platform designed to facilitate research and prototyping in
autonomous driving. Key contributions include (1) a novel density-based
clustering approach utilizing histogram statistics for landmark tracking, (2) a
lateral controller, and (3) the integration of these innovations into a
cohesive platform. Additionally, the paper explores object detection through
systematic dataset augmentation and introduces an autonomous parking procedure.
The results demonstrate the platform's effectiveness in achieving reliable lane
tracking under varying lighting conditions, smooth trajectory following, and
consistent object detection performance. Though developed for small-scale
vehicles, these modular solutions are adaptable for full-scale autonomous
systems, offering a versatile and cost-efficient framework for advancing
research and industry applications.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 16:07:08 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Younesi",
"Farbod",
""
],
[
"Rabiei",
"Milad",
""
],
[
"Keivanfard",
"Soroush",
""
],
[
"Sharifi",
"Mohsen",
""
],
[
"Najafabadi",
"Marzieh Ghayour",
""
],
[
"Moadeli",
"Bahar",
""
],
[
"Jafari",
"Arshia",
""
],
[
"Moaiyeri",
"Mohammad Hossein",
""
]
] | TITLE: An Optimized Density-Based Lane Keeping System for A Cost-Efficient
Autonomous Vehicle Platform: AurigaBot V1
ABSTRACT: The development of self-driving cars has garnered significant attention from
researchers, universities, and industries worldwide. Autonomous vehicles
integrate numerous subsystems, including lane tracking, object detection, and
vehicle control, which require thorough testing and validation. Scaled-down
vehicles offer a cost-effective and accessible platform for experimentation,
providing researchers with opportunities to optimize algorithms under
constraints of limited computational power. This paper presents a four-wheeled
autonomous vehicle platform designed to facilitate research and prototyping in
autonomous driving. Key contributions include (1) a novel density-based
clustering approach utilizing histogram statistics for landmark tracking, (2) a
lateral controller, and (3) the integration of these innovations into a
cohesive platform. Additionally, the paper explores object detection through
systematic dataset augmentation and introduces an autonomous parking procedure.
The results demonstrate the platform's effectiveness in achieving reliable lane
tracking under varying lighting conditions, smooth trajectory following, and
consistent object detection performance. Though developed for small-scale
vehicles, these modular solutions are adaptable for full-scale autonomous
systems, offering a versatile and cost-efficient framework for advancing
research and industry applications.
|
2504.04237 | Zhiyu He | Zhiyu He, Zhixin Ling, Jiayu Li, Zhiqiang Guo, Weizhi Ma, Xinchen Luo,
Min Zhang, Guorui Zhou | Short Video Segment-level User Dynamic Interests Modeling in
Personalized Recommendation | This paper has been accepted by SIGIR 2025 | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The rapid growth of short videos has necessitated effective recommender
systems to match users with content tailored to their evolving preferences.
Current video recommendation models primarily treat each video as a whole,
overlooking the dynamic nature of user preferences for specific video
segments. In contrast, our research focuses on segment-level user interest
modeling, which is crucial for understanding how users' preferences evolve
during video browsing. To capture users' dynamic segment interests, we propose
an innovative model that integrates a hybrid representation module, a
multi-modal user-video encoder, and a segment interest decoder. Our model
addresses the challenges of capturing dynamic interest patterns, missing
segment-level labels, and fusing different modalities, achieving precise
segment-level interest prediction. We present two downstream tasks to evaluate
the effectiveness of our segment interest modeling approach: video-skip
prediction and short video recommendation. Our experiments on real-world short
video datasets with diverse modalities show promising results on both tasks. It
demonstrates that segment-level interest modeling brings a deep understanding
of user engagement and enhances video recommendations. We also release a unique
dataset that includes segment-level video data and diverse user behaviors,
enabling further research in segment-level interest modeling. This work
pioneers a novel perspective on understanding user segment-level preference,
offering the potential for more personalized and engaging short video
experiences.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 17:45:32 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"He",
"Zhiyu",
""
],
[
"Ling",
"Zhixin",
""
],
[
"Li",
"Jiayu",
""
],
[
"Guo",
"Zhiqiang",
""
],
[
"Ma",
"Weizhi",
""
],
[
"Luo",
"Xinchen",
""
],
[
"Zhang",
"Min",
""
],
[
"Zhou",
"Guorui",
""
]
] | TITLE: Short Video Segment-level User Dynamic Interests Modeling in
Personalized Recommendation
ABSTRACT: The rapid growth of short videos has necessitated effective recommender
systems to match users with content tailored to their evolving preferences.
Current video recommendation models primarily treat each video as a whole,
overlooking the dynamic nature of user preferences for specific video
segments. In contrast, our research focuses on segment-level user interest
modeling, which is crucial for understanding how users' preferences evolve
during video browsing. To capture users' dynamic segment interests, we propose
an innovative model that integrates a hybrid representation module, a
multi-modal user-video encoder, and a segment interest decoder. Our model
addresses the challenges of capturing dynamic interest patterns, missing
segment-level labels, and fusing different modalities, achieving precise
segment-level interest prediction. We present two downstream tasks to evaluate
the effectiveness of our segment interest modeling approach: video-skip
prediction and short video recommendation. Our experiments on real-world short
video datasets with diverse modalities show promising results on both tasks. It
demonstrates that segment-level interest modeling brings a deep understanding
of user engagement and enhances video recommendations. We also release a unique
dataset that includes segment-level video data and diverse user behaviors,
enabling further research in segment-level interest modeling. This work
pioneers a novel perspective on understanding user segment-level preference,
offering the potential for more personalized and engaging short video
experiences.
|
2504.04244 | Avijit Saha Asru | Avijit Saha Asru, Hamed Khosravi, Imtiaz Ahmed, Abdullahil Azeem | From Automation to Autonomy in Smart Manufacturing: A Bayesian
Optimization Framework for Modeling Multi-Objective Experimentation and
Sequential Decision Making | null | International Journal of Advanced Manufacturing Technology (2025) | 10.1007/s00170-025-15407-z | null | cs.LG cs.AI cs.SY eess.SY math.OC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Discovering novel materials with desired properties is essential for driving
innovation. Industry 4.0 and smart manufacturing have promised transformative
advances in this area through real-time data integration and automated
production planning and control. However, the reliance on automation alone has
often fallen short, lacking the flexibility needed for complex processes. To
fully unlock the potential of smart manufacturing, we must evolve from
automation to autonomous systems that go beyond rigid programming and can
dynamically optimize the search for solutions. Current discovery approaches are
often slow, requiring numerous trials to find optimal combinations, and costly,
particularly when optimizing multiple properties simultaneously. This paper
proposes a Bayesian multi-objective sequential decision-making (BMSDM)
framework that can intelligently select experiments as manufacturing
progresses, guiding us toward the discovery of optimal design faster and more
efficiently. The framework leverages sequential learning through Bayesian
Optimization, which iteratively refines a statistical model representing the
underlying manufacturing process. This statistical model acts as a surrogate,
allowing for efficient exploration and optimization without requiring numerous
real-world experiments. This approach can significantly reduce the time and
cost of data collection required by traditional experimental designs. The
proposed framework is compared with traditional DoE methods and two other
multi-objective optimization methods. Using a manufacturing dataset, we
evaluate and compare the performance of these approaches across five evaluation
metrics. BMSDM comprehensively outperforms the competing methods in
multi-objective decision-making scenarios. Our proposed approach represents a
significant leap forward in creating an intelligent autonomous platform capable
of novel material discovery.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 18:21:20 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Asru",
"Avijit Saha",
""
],
[
"Khosravi",
"Hamed",
""
],
[
"Ahmed",
"Imtiaz",
""
],
[
"Azeem",
"Abdullahil",
""
]
] | TITLE: From Automation to Autonomy in Smart Manufacturing: A Bayesian
Optimization Framework for Modeling Multi-Objective Experimentation and
Sequential Decision Making
ABSTRACT: Discovering novel materials with desired properties is essential for driving
innovation. Industry 4.0 and smart manufacturing have promised transformative
advances in this area through real-time data integration and automated
production planning and control. However, the reliance on automation alone has
often fallen short, lacking the flexibility needed for complex processes. To
fully unlock the potential of smart manufacturing, we must evolve from
automation to autonomous systems that go beyond rigid programming and can
dynamically optimize the search for solutions. Current discovery approaches are
often slow, requiring numerous trials to find optimal combinations, and costly,
particularly when optimizing multiple properties simultaneously. This paper
proposes a Bayesian multi-objective sequential decision-making (BMSDM)
framework that can intelligently select experiments as manufacturing
progresses, guiding us toward the discovery of optimal design faster and more
efficiently. The framework leverages sequential learning through Bayesian
Optimization, which iteratively refines a statistical model representing the
underlying manufacturing process. This statistical model acts as a surrogate,
allowing for efficient exploration and optimization without requiring numerous
real-world experiments. This approach can significantly reduce the time and
cost of data collection required by traditional experimental designs. The
proposed framework is compared with traditional DoE methods and two other
multi-objective optimization methods. Using a manufacturing dataset, we
evaluate and compare the performance of these approaches across five evaluation
metrics. BMSDM comprehensively outperforms the competing methods in
multi-objective decision-making scenarios. Our proposed approach represents a
significant leap forward in creating an intelligent autonomous platform capable
of novel material discovery.
|
2504.04252 | Muhammad Osama Zeeshan Zeeshan | Muhammad Osama Zeeshan and Marco Pedersoli and Alessandro Lameiras
Koerich and Eric Granger | Progressive Multi-Source Domain Adaptation for Personalized Facial
Expression Recognition | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Personalized facial expression recognition (FER) involves adapting a machine
learning model using samples from labeled sources and unlabeled target domains.
Given the challenges of recognizing subtle expressions with considerable
interpersonal variability, state-of-the-art unsupervised domain adaptation
(UDA) methods focus on the multi-source UDA (MSDA) setting, where each domain
corresponds to a specific subject, and improve model accuracy and robustness.
However, when adapting to a specific target, the diverse nature of multiple
source domains translates to a large shift between source and target data.
State-of-the-art MSDA methods for FER address this domain shift by considering
all the sources to adapt to the target representations. Nevertheless, adapting
to a target subject presents significant challenges due to large distributional
differences between source and target domains, often resulting in negative
transfer. In addition, integrating all sources simultaneously increases
computational costs and causes misalignment with the target. To address these
issues, we propose a progressive MSDA approach that gradually introduces
information from subjects based on their similarity to the target subject. This
will ensure that only the most relevant sources from the target are selected,
which helps avoid the negative transfer caused by dissimilar sources. We first
exploit the closest sources to reduce the distribution shift with the target
and then move towards the furthest while only considering the most relevant
sources based on the predetermined threshold. Furthermore, to mitigate
catastrophic forgetting caused by the incremental introduction of source
subjects, we implemented a density-based memory mechanism that preserves the
most relevant historical source samples for adaptation. Our experiments show
the effectiveness of our proposed method on pain datasets: Biovid and
UNBC-McMaster.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 19:14:51 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Zeeshan",
"Muhammad Osama",
""
],
[
"Pedersoli",
"Marco",
""
],
[
"Koerich",
"Alessandro Lameiras",
""
],
[
    "Granger",
"Eric",
""
]
] | TITLE: Progressive Multi-Source Domain Adaptation for Personalized Facial
Expression Recognition
ABSTRACT: Personalized facial expression recognition (FER) involves adapting a machine
learning model using samples from labeled sources and unlabeled target domains.
Given the challenges of recognizing subtle expressions with considerable
interpersonal variability, state-of-the-art unsupervised domain adaptation
(UDA) methods focus on the multi-source UDA (MSDA) setting, where each domain
corresponds to a specific subject, and improve model accuracy and robustness.
However, when adapting to a specific target, the diverse nature of multiple
source domains translates to a large shift between source and target data.
State-of-the-art MSDA methods for FER address this domain shift by considering
all the sources to adapt to the target representations. Nevertheless, adapting
to a target subject presents significant challenges due to large distributional
differences between source and target domains, often resulting in negative
transfer. In addition, integrating all sources simultaneously increases
computational costs and causes misalignment with the target. To address these
issues, we propose a progressive MSDA approach that gradually introduces
information from subjects based on their similarity to the target subject. This
will ensure that only the most relevant sources from the target are selected,
which helps avoid the negative transfer caused by dissimilar sources. We first
exploit the closest sources to reduce the distribution shift with the target
and then move towards the furthest while only considering the most relevant
sources based on the predetermined threshold. Furthermore, to mitigate
catastrophic forgetting caused by the incremental introduction of source
subjects, we implemented a density-based memory mechanism that preserves the
most relevant historical source samples for adaptation. Our experiments show
the effectiveness of our proposed method on pain datasets: Biovid and
UNBC-McMaster.
|
2504.04259 | Maximilian Eberlein | Clemens C. Christoph, Maximilian Eberlein, Filippos Katsimalis, Arturo
Roberti, Aristotelis Sympetheros, Michel R. Vogt, Davide Liconti, Chenyu
Yang, Barnabas Gavin Cangan, Ronan J. Hinchet, Robert K. Katzschmann | ORCA: An Open-Source, Reliable, Cost-Effective, Anthropomorphic Robotic
Hand for Uninterrupted Dexterous Task Learning | This work has been submitted to the IEEE for possible publication | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | General-purpose robots should possess humanlike dexterity and agility to
perform tasks with the same versatility as us. A human-like form factor further
enables the use of vast datasets of human-hand interactions. However, the
primary bottleneck in dexterous manipulation lies not only in software but
arguably even more in hardware. Robotic hands that approach human capabilities
are often prohibitively expensive, bulky, or require enterprise-level
maintenance, limiting their accessibility for broader research and practical
applications. What if the research community could get started with reliable
dexterous hands within a day? We present the open-source ORCA hand, a reliable
and anthropomorphic 17-DoF tendon-driven robotic hand with integrated tactile
sensors, fully assembled in less than eight hours and built for a material cost
below 2,000 CHF. We showcase ORCA's key design features such as popping joints,
auto-calibration, and tensioning systems that significantly reduce complexity
while increasing reliability, accuracy, and robustness. We benchmark the ORCA
hand across a variety of tasks, ranging from teleoperation and imitation
learning to zero-shot sim-to-real reinforcement learning. Furthermore, we
demonstrate its durability, withstanding more than 10,000 continuous operation
cycles - equivalent to approximately 20 hours - without hardware failure, the
only constraint being the duration of the experiment itself. All design files,
source code, and documentation will be available at https://www.orcahand.com/.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 19:34:34 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Christoph",
"Clemens C.",
""
],
[
"Eberlein",
"Maximilian",
""
],
[
"Katsimalis",
"Filippos",
""
],
[
"Roberti",
"Arturo",
""
],
[
"Sympetheros",
"Aristotelis",
""
],
[
"Vogt",
"Michel R.",
""
],
[
"Liconti",
"Davide",
""
],
[
"Yang",
"Chenyu",
""
],
[
"Cangan",
"Barnabas Gavin",
""
],
[
"Hinchet",
"Ronan J.",
""
],
[
"Katzschmann",
"Robert K.",
""
]
] | TITLE: ORCA: An Open-Source, Reliable, Cost-Effective, Anthropomorphic Robotic
Hand for Uninterrupted Dexterous Task Learning
ABSTRACT: General-purpose robots should possess humanlike dexterity and agility to
perform tasks with the same versatility as us. A human-like form factor further
enables the use of vast datasets of human-hand interactions. However, the
primary bottleneck in dexterous manipulation lies not only in software but
arguably even more in hardware. Robotic hands that approach human capabilities
are often prohibitively expensive, bulky, or require enterprise-level
maintenance, limiting their accessibility for broader research and practical
applications. What if the research community could get started with reliable
dexterous hands within a day? We present the open-source ORCA hand, a reliable
and anthropomorphic 17-DoF tendon-driven robotic hand with integrated tactile
sensors, fully assembled in less than eight hours and built for a material cost
below 2,000 CHF. We showcase ORCA's key design features such as popping joints,
auto-calibration, and tensioning systems that significantly reduce complexity
while increasing reliability, accuracy, and robustness. We benchmark the ORCA
hand across a variety of tasks, ranging from teleoperation and imitation
learning to zero-shot sim-to-real reinforcement learning. Furthermore, we
demonstrate its durability, withstanding more than 10,000 continuous operation
cycles - equivalent to approximately 20 hours - without hardware failure, the
only constraint being the duration of the experiment itself. All design files,
source code, and documentation will be available at https://www.orcahand.com/.
|
2504.04271 | Mete Ahishali | Mete Ahishali, Anis Ur Rahman, Einari Heinaro, Samuli Junttila | ADA-Net: Attention-Guided Domain Adaptation Network with Contrastive
Learning for Standing Dead Tree Segmentation Using Aerial Imagery | null | null | null | null | cs.CV cs.LG eess.IV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Information on standing dead trees is important for understanding forest
ecosystem functioning and resilience but has been lacking over large geographic
regions. Climate change has caused large-scale tree mortality events that can
remain undetected due to limited data. In this study, we propose a novel method
for segmenting standing dead trees using aerial multispectral orthoimages.
Because access to annotated datasets has been a significant problem in forest
remote sensing due to the need for forest expertise, we introduce a method for
domain transfer by leveraging domain adaptation to learn a transformation from
a source domain X to target domain Y. In this Image-to-Image translation task,
we aim to utilize available annotations in the target domain by pre-training a
segmentation network. When images from a new study site without annotations are
introduced (source domain X), these images are transformed into the target
domain. Then, transfer learning is applied by inferring the pre-trained network
on domain-adapted images. In addition to investigating the feasibility of
current domain adaptation approaches for this objective, we propose a novel
approach called the Attention-guided Domain Adaptation Network (ADA-Net) with
enhanced contrastive learning. Accordingly, the ADA-Net approach provides new
state-of-the-art domain adaptation performance levels outperforming existing
approaches. We have evaluated the proposed approach using two datasets from
Finland and the US. The USA images are converted to the Finland domain, and we
show that the synthetic USA2Finland dataset exhibits similar characteristics to
the Finland domain images. The software implementation is shared at
https://github.com/meteahishali/ADA-Net. The data is publicly available at
https://www.kaggle.com/datasets/meteahishali/aerial-imagery-for-standing-dead-tree-segmentation.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 19:55:02 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Ahishali",
"Mete",
""
],
[
"Rahman",
"Anis Ur",
""
],
[
"Heinaro",
"Einari",
""
],
[
"Junttila",
"Samuli",
""
]
] | TITLE: ADA-Net: Attention-Guided Domain Adaptation Network with Contrastive
Learning for Standing Dead Tree Segmentation Using Aerial Imagery
ABSTRACT: Information on standing dead trees is important for understanding forest
ecosystem functioning and resilience but has been lacking over large geographic
regions. Climate change has caused large-scale tree mortality events that can
remain undetected due to limited data. In this study, we propose a novel method
for segmenting standing dead trees using aerial multispectral orthoimages.
Because access to annotated datasets has been a significant problem in forest
remote sensing due to the need for forest expertise, we introduce a method for
domain transfer by leveraging domain adaptation to learn a transformation from
a source domain X to target domain Y. In this Image-to-Image translation task,
we aim to utilize available annotations in the target domain by pre-training a
segmentation network. When images from a new study site without annotations are
introduced (source domain X), these images are transformed into the target
domain. Then, transfer learning is applied by inferring the pre-trained network
on domain-adapted images. In addition to investigating the feasibility of
current domain adaptation approaches for this objective, we propose a novel
approach called the Attention-guided Domain Adaptation Network (ADA-Net) with
enhanced contrastive learning. Accordingly, the ADA-Net approach provides new
state-of-the-art domain adaptation performance levels outperforming existing
approaches. We have evaluated the proposed approach using two datasets from
Finland and the US. The USA images are converted to the Finland domain, and we
show that the synthetic USA2Finland dataset exhibits similar characteristics to
the Finland domain images. The software implementation is shared at
https://github.com/meteahishali/ADA-Net. The data is publicly available at
https://www.kaggle.com/datasets/meteahishali/aerial-imagery-for-standing-dead-tree-segmentation.
|
2504.04275 | T\'ulio Sousa De Gois | T\'ulio Sousa de Gois, Paloma Batista Cardoso | negativas: a prototype for searching and classifying sentential negation
in speech data | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Negation is a universal feature of natural languages. In Brazilian
Portuguese, the most commonly used negation particle is n\~ao, which can scope
over nouns or verbs. When it scopes over a verb, n\~ao can occur in three
positions: pre-verbal (NEG1), double negation (NEG2), or post-verbal (NEG3),
e.g., n\~ao gosto, n\~ao gosto n\~ao, gosto n\~ao ("I do not like it"). From a
variationist perspective, these structures are different forms of expressing
negation. Pragmatically, they serve distinct communicative functions, such as
politeness and modal evaluation. Despite their grammatical acceptability, these
forms differ in frequency. NEG1 dominates across Brazilian regions, while NEG2
and NEG3 appear more rarely, suggesting their use is contextually restricted.
This low frequency challenges research, often resulting in subjective,
non-generalizable interpretations of verbal negation with n\~ao. To address
this, we developed negativas, a tool for automatically identifying NEG1, NEG2,
and NEG3 in transcribed data. The tool's development involved four stages: i)
analyzing a dataset of 22 interviews from the Falares Sergipanos database,
annotated by three linguists, ii) writing code using natural language
processing (NLP) techniques, iii) running the tool, iv) evaluating accuracy.
Inter-annotator consistency, measured using Fleiss' Kappa, was moderate (0.57).
The tool identified 3,338 instances of n\~ao, classifying 2,085 as NEG1, NEG2,
or NEG3, achieving a 93% success rate. However, negativas has limitations. NEG1
accounted for 91.5% of identified structures, while NEG2 and NEG3 represented
7.2% and 1.2%, respectively. The tool struggled with NEG2, sometimes
misclassifying instances as overlapping structures (NEG1/NEG2/NEG3).
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 20:09:04 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"de Gois",
"Túlio Sousa",
""
],
[
"Cardoso",
"Paloma Batista",
""
]
] | TITLE: negativas: a prototype for searching and classifying sentential negation
in speech data
ABSTRACT: Negation is a universal feature of natural languages. In Brazilian
Portuguese, the most commonly used negation particle is n\~ao, which can scope
over nouns or verbs. When it scopes over a verb, n\~ao can occur in three
positions: pre-verbal (NEG1), double negation (NEG2), or post-verbal (NEG3),
e.g., n\~ao gosto, n\~ao gosto n\~ao, gosto n\~ao ("I do not like it"). From a
variationist perspective, these structures are different forms of expressing
negation. Pragmatically, they serve distinct communicative functions, such as
politeness and modal evaluation. Despite their grammatical acceptability, these
forms differ in frequency. NEG1 dominates across Brazilian regions, while NEG2
and NEG3 appear more rarely, suggesting their use is contextually restricted.
This low frequency challenges research, often resulting in subjective,
non-generalizable interpretations of verbal negation with n\~ao. To address
this, we developed negativas, a tool for automatically identifying NEG1, NEG2,
and NEG3 in transcribed data. The tool's development involved four stages: i)
analyzing a dataset of 22 interviews from the Falares Sergipanos database,
annotated by three linguists, ii) writing code using natural language
processing (NLP) techniques, iii) running the tool, iv) evaluating accuracy.
Inter-annotator consistency, measured using Fleiss' Kappa, was moderate (0.57).
The tool identified 3,338 instances of n\~ao, classifying 2,085 as NEG1, NEG2,
or NEG3, achieving a 93% success rate. However, negativas has limitations. NEG1
accounted for 91.5% of identified structures, while NEG2 and NEG3 represented
7.2% and 1.2%, respectively. The tool struggled with NEG2, sometimes
misclassifying instances as overlapping structures (NEG1/NEG2/NEG3).
|
2504.04279 | Hongchao Fang | Hongchao Fang, Can Qin, Ran Xu, Feng Liu, Yixin Liu, Lichao Sun,
Dongwon Lee, Lifu Huang, Wenpeng Yin | Could AI Trace and Explain the Origins of AI-Generated Images and Text? | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | AI-generated content is becoming increasingly prevalent in the real world,
leading to serious ethical and societal concerns. For instance, adversaries
might exploit large multimodal models (LMMs) to create images that violate
ethical or legal standards, while paper reviewers may misuse large language
models (LLMs) to generate reviews without genuine intellectual effort. While
prior work has explored detecting AI-generated images and texts, and
occasionally tracing their source models, there is a lack of a systematic and
fine-grained comparative study. Important dimensions--such as AI-generated
images vs. text, fully vs. partially AI-generated images, and general vs.
malicious use cases--remain underexplored. Furthermore, whether AI systems like
GPT-4o can explain why certain forged content is attributed to specific
generative models is still an open question, with no existing benchmark
addressing this. To fill this gap, we introduce AI-FAKER, a comprehensive
multimodal dataset with over 280,000 samples spanning multiple LLMs and LMMs,
covering both general and malicious use cases for AI-generated images and
texts. Our experiments reveal two key findings: (i) AI authorship detection
depends not only on the generated output but also on the model's original
training intent; and (ii) GPT-4o provides highly consistent but less specific
explanations when analyzing content produced by OpenAI's own models, such as
DALL-E and GPT-4o itself.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 20:51:54 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Fang",
"Hongchao",
""
],
[
"Qin",
"Can",
""
],
[
"Xu",
"Ran",
""
],
[
"Liu",
"Feng",
""
],
[
"Liu",
"Yixin",
""
],
[
"Sun",
"Lichao",
""
],
[
"Lee",
"Dongwon",
""
],
[
"Huang",
"Lifu",
""
],
[
"Yin",
"Wenpeng",
""
]
] | TITLE: Could AI Trace and Explain the Origins of AI-Generated Images and Text?
ABSTRACT: AI-generated content is becoming increasingly prevalent in the real world,
leading to serious ethical and societal concerns. For instance, adversaries
might exploit large multimodal models (LMMs) to create images that violate
ethical or legal standards, while paper reviewers may misuse large language
models (LLMs) to generate reviews without genuine intellectual effort. While
prior work has explored detecting AI-generated images and texts, and
occasionally tracing their source models, there is a lack of a systematic and
fine-grained comparative study. Important dimensions--such as AI-generated
images vs. text, fully vs. partially AI-generated images, and general vs.
malicious use cases--remain underexplored. Furthermore, whether AI systems like
GPT-4o can explain why certain forged content is attributed to specific
generative models is still an open question, with no existing benchmark
addressing this. To fill this gap, we introduce AI-FAKER, a comprehensive
multimodal dataset with over 280,000 samples spanning multiple LLMs and LMMs,
covering both general and malicious use cases for AI-generated images and
texts. Our experiments reveal two key findings: (i) AI authorship detection
depends not only on the generated output but also on the model's original
training intent; and (ii) GPT-4o provides highly consistent but less specific
explanations when analyzing content produced by OpenAI's own models, such as
DALL-E and GPT-4o itself.
|
2504.04283 | Xiao Lin | Xiao Lin, Zhichen Zeng, Tianxin Wei, Zhining Liu, Yuzhong chen,
Hanghang Tong | CATS: Mitigating Correlation Shift for Multivariate Time Series
Classification | null | null | null | null | cs.LG cs.AI stat.ML | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Unsupervised Domain Adaptation (UDA) leverages labeled source data to train
models for unlabeled target data. Given the prevalence of multivariate time
series (MTS) data across various domains, the UDA task for MTS classification
has emerged as a critical challenge. However, for MTS data, correlations
between variables often vary across domains, whereas most existing UDA works
for MTS classification have overlooked this essential characteristic. To bridge
this gap, we introduce a novel domain shift, {\em correlation shift}, measuring
domain differences in multivariate correlation. To mitigate correlation shift,
we propose a scalable and parameter-efficient \underline{C}orrelation
\underline{A}dapter for M\underline{TS} (CATS). Designed as a plug-and-play
technique compatible with various Transformer variants, CATS employs temporal
convolution to capture local temporal patterns and a graph attention module to
model the changing multivariate correlation. The adapter reweights the target
correlations to align the source correlations with a theoretically guaranteed
precision. A correlation alignment loss is further proposed to mitigate
correlation shift, bypassing the alignment challenge from the non-i.i.d. nature
of MTS data. Extensive experiments on four real-world datasets demonstrate that
(1) compared with vanilla Transformer-based models, CATS increases over $10\%$
average accuracy while only adding around $1\%$ parameters, and (2) all
Transformer variants equipped with CATS either reach or surpass
state-of-the-art baselines.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 21:08:47 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Lin",
"Xiao",
""
],
[
"Zeng",
"Zhichen",
""
],
[
"Wei",
"Tianxin",
""
],
[
"Liu",
"Zhining",
""
],
[
"chen",
"Yuzhong",
""
],
[
"Tong",
"Hanghang",
""
]
] | TITLE: CATS: Mitigating Correlation Shift for Multivariate Time Series
Classification
ABSTRACT: Unsupervised Domain Adaptation (UDA) leverages labeled source data to train
models for unlabeled target data. Given the prevalence of multivariate time
series (MTS) data across various domains, the UDA task for MTS classification
has emerged as a critical challenge. However, for MTS data, correlations
between variables often vary across domains, whereas most existing UDA works
for MTS classification have overlooked this essential characteristic. To bridge
this gap, we introduce a novel domain shift, {\em correlation shift}, measuring
domain differences in multivariate correlation. To mitigate correlation shift,
we propose a scalable and parameter-efficient \underline{C}orrelation
\underline{A}dapter for M\underline{TS} (CATS). Designed as a plug-and-play
technique compatible with various Transformer variants, CATS employs temporal
convolution to capture local temporal patterns and a graph attention module to
model the changing multivariate correlation. The adapter reweights the target
correlations to align the source correlations with a theoretically guaranteed
precision. A correlation alignment loss is further proposed to mitigate
correlation shift, bypassing the alignment challenge from the non-i.i.d. nature
of MTS data. Extensive experiments on four real-world datasets demonstrate that
(1) compared with vanilla Transformer-based models, CATS increases over $10\%$
average accuracy while only adding around $1\%$ parameters, and (2) all
Transformer variants equipped with CATS either reach or surpass
state-of-the-art baselines.
|
2504.04289 | Junyi Geng | Yufei Jiang, Yuanzhu Zhan, Harsh Vardhan Gupta, Chinmay Borde, Junyi
Geng | A Self-Supervised Learning Approach with Differentiable Optimization for
UAV Trajectory Planning | null | null | null | null | cs.RO cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While Unmanned Aerial Vehicles (UAVs) have gained significant traction across
various fields, path planning in 3D environments remains a critical challenge,
particularly under size, weight, and power (SWAP) constraints. Traditional
modular planning systems often introduce latency and suboptimal performance due
to limited information sharing and local minima issues. End-to-end learning
approaches streamline the pipeline by mapping sensory observations directly to
actions but require large-scale datasets, face significant sim-to-real gaps, or
lack dynamical feasibility. In this paper, we propose a self-supervised UAV
trajectory planning pipeline that integrates a learning-based depth perception
with differentiable trajectory optimization. A 3D cost map guides UAV behavior
without expert demonstrations or human labels. Additionally, we incorporate a
neural network-based time allocation strategy to improve the efficiency and
optimality. The system thus combines robust learning-based perception with
reliable physics-based optimization for improved generalizability and
interpretability. Both simulation and real-world experiments validate our
approach across various environments, demonstrating its effectiveness and
robustness. Our method achieves a 31.33% improvement in position tracking error
and 49.37% reduction in control effort compared to the state-of-the-art.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 22:09:13 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Jiang",
"Yufei",
""
],
[
"Zhan",
"Yuanzhu",
""
],
[
"Gupta",
"Harsh Vardhan",
""
],
[
"Borde",
"Chinmay",
""
],
[
"Geng",
"Junyi",
""
]
] | TITLE: A Self-Supervised Learning Approach with Differentiable Optimization for
UAV Trajectory Planning
ABSTRACT: While Unmanned Aerial Vehicles (UAVs) have gained significant traction across
various fields, path planning in 3D environments remains a critical challenge,
particularly under size, weight, and power (SWAP) constraints. Traditional
modular planning systems often introduce latency and suboptimal performance due
to limited information sharing and local minima issues. End-to-end learning
approaches streamline the pipeline by mapping sensory observations directly to
actions but require large-scale datasets, face significant sim-to-real gaps, or
lack dynamical feasibility. In this paper, we propose a self-supervised UAV
trajectory planning pipeline that integrates a learning-based depth perception
with differentiable trajectory optimization. A 3D cost map guides UAV behavior
without expert demonstrations or human labels. Additionally, we incorporate a
neural network-based time allocation strategy to improve the efficiency and
optimality. The system thus combines robust learning-based perception with
reliable physics-based optimization for improved generalizability and
interpretability. Both simulation and real-world experiments validate our
approach across various environments, demonstrating its effectiveness and
robustness. Our method achieves a 31.33% improvement in position tracking error
and 49.37% reduction in control effort compared to the state-of-the-art.
|
2504.04299 | Mohammad (Matt) Namvarpour | Mohammad (Matt) Namvarpour, Harrison Pauwels, Afsaneh Razi | AI-induced sexual harassment: Investigating Contextual Characteristics
and User Reactions of Sexual Harassment by a Companion Chatbot | Accepted for publication at CSCW 2025. This is a pre-publication
version; the final version will be available through the ACM Digital Library | null | null | null | cs.HC cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Advancements in artificial intelligence (AI) have led to the rise of
conversational agents like Replika, designed to provide social interaction and
emotional support. However, reports of these AI systems engaging in
inappropriate sexual behaviors with users have raised significant concerns. In
this study, we conducted a thematic analysis of user reviews from the Google
Play Store to investigate instances of sexual harassment by the Replika
chatbot. From a dataset of 35,105 negative reviews, we identified 800 relevant
cases for analysis. Our findings revealed that users frequently experience
unsolicited sexual advances, persistent inappropriate behavior, and failures of
the chatbot to respect user boundaries. Users expressed feelings of discomfort,
violation of privacy, and disappointment, particularly when seeking a platonic
or therapeutic AI companion. This study highlights the potential harms
associated with AI companions and underscores the need for developers to
implement effective safeguards and ethical guidelines to prevent such
incidents. By shedding light on user experiences of AI-induced harassment, we
contribute to the understanding of AI-related risks and emphasize the
importance of corporate responsibility in developing safer and more ethical AI
systems.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 23:04:37 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Mohammad",
"",
"",
"Matt"
],
[
"Namvarpour",
"",
""
],
[
"Pauwels",
"Harrison",
""
],
[
"Razi",
"Afsaneh",
""
]
] | TITLE: AI-induced sexual harassment: Investigating Contextual Characteristics
and User Reactions of Sexual Harassment by a Companion Chatbot
ABSTRACT: Advancements in artificial intelligence (AI) have led to the rise of
conversational agents like Replika, designed to provide social interaction and
emotional support. However, reports of these AI systems engaging in
inappropriate sexual behaviors with users have raised significant concerns. In
this study, we conducted a thematic analysis of user reviews from the Google
Play Store to investigate instances of sexual harassment by the Replika
chatbot. From a dataset of 35,105 negative reviews, we identified 800 relevant
cases for analysis. Our findings revealed that users frequently experience
unsolicited sexual advances, persistent inappropriate behavior, and failures of
the chatbot to respect user boundaries. Users expressed feelings of discomfort,
violation of privacy, and disappointment, particularly when seeking a platonic
or therapeutic AI companion. This study highlights the potential harms
associated with AI companions and underscores the need for developers to
implement effective safeguards and ethical guidelines to prevent such
incidents. By shedding light on user experiences of AI-induced harassment, we
contribute to the understanding of AI-related risks and emphasize the
importance of corporate responsibility in developing safer and more ethical AI
systems.
|
2504.04301 | Shenyang Liu | Saleh Almohaimeed, Shenyang Liu, May Alsofyani, Saad Almohaimeed,
Liqiang Wang | Sigma: A dataset for text-to-code semantic parsing with statistical
analysis | 2023 International Conference on Machine Learning and Applications
(ICMLA) This version includes more details than the conference version | null | 10.1109/ICMLA58977.2023.00125 | null | cs.LG cs.AI cs.DB | http://creativecommons.org/licenses/by/4.0/ | In the domain of semantic parsing, significant progress has been achieved in
Text-to-SQL and question-answering tasks, both of which focus on extracting
information from data sources in their native formats. However, the inherent
constraints of their formal meaning representations, such as SQL programming
language or basic logical forms, hinder their ability to analyze data from
various perspectives, such as conducting statistical analyses. To address this
limitation and inspire research in this field, we design SIGMA, a new dataset
for Text-to-Code semantic parsing with statistical analysis. SIGMA comprises
6000 questions with corresponding Python code labels, spanning across 160
databases. Half of the questions involve query types, which return information
in its original format, while the remaining 50% are statistical analysis
questions, which perform statistical operations on the data. The Python code
labels in our dataset cover 4 query types and 40 types of statistical
analysis patterns. We evaluated the SIGMA dataset using three different
baseline models: LGESQL, SmBoP, and SLSQL. The experimental results show that
the LGESQL model with ELECTRA outperforms all other models, achieving 83.37%
structure accuracy. In terms of execution accuracy, the SmBoP model, when
combined with GraPPa and T5, reaches 76.38%.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 23:30:20 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Almohaimeed",
"Saleh",
""
],
[
"Liu",
"Shenyang",
""
],
[
"Alsofyani",
"May",
""
],
[
"Almohaimeed",
"Saad",
""
],
[
"Wang",
"Liqiang",
""
]
] | TITLE: Sigma: A dataset for text-to-code semantic parsing with statistical
analysis
ABSTRACT: In the domain of semantic parsing, significant progress has been achieved in
Text-to-SQL and question-answering tasks, both of which focus on extracting
information from data sources in their native formats. However, the inherent
constraints of their formal meaning representations, such as SQL programming
language or basic logical forms, hinder their ability to analyze data from
various perspectives, such as conducting statistical analyses. To address this
limitation and inspire research in this field, we design SIGMA, a new dataset
for Text-to-Code semantic parsing with statistical analysis. SIGMA comprises
6000 questions with corresponding Python code labels, spanning across 160
databases. Half of the questions involve query types, which return information
in its original format, while the remaining 50% are statistical analysis
questions, which perform statistical operations on the data. The Python code
labels in our dataset cover 4 query types and 40 types of statistical
analysis patterns. We evaluated the SIGMA dataset using three different
baseline models: LGESQL, SmBoP, and SLSQL. The experimental results show that
the LGESQL model with ELECTRA outperforms all other models, achieving 83.37%
structure accuracy. In terms of execution accuracy, the SmBoP model, when
combined with GraPPa and T5, reaches 76.38%.
|
2504.04302 | Anjan Bellamkonda | Anjan Bellamkonda, Laksh Bharani and Harivatsan Selvam | AbsInf: A Lightweight Object to Represent float('inf') in Dijkstra's
Algorithm | 13 pages, 3 figures. One bar chart was created using OPENAI's ChatGPT
and included as Figure 2. One image was downloaded from Wikipedia and cited
in the References section (used via local file instead of URL). Benchmarks
performed using CPython 3.12.0 and Python 3.13 across Azure and local Windows
machines. Code available at https://github.com/AnjanB3012/abstract-infinity | null | null | null | cs.PL cs.DS | http://creativecommons.org/licenses/by/4.0/ | We introduce AbsInf, a lightweight abstract object designed as a
high-performance alternative to Python's native float('inf') within pathfinding
algorithms. Implemented as a C-based Python extension, AbsInf bypasses IEEE-754
float coercion and dynamic type dispatch, offering constant-time dominance
comparisons and arithmetic neutrality. When integrated into Dijkstra's
algorithm without altering its logic, AbsInf reduces runtime by up to 17.2%,
averaging 9.74% across diverse synthetic and real-world graph datasets. This
optimization highlights the performance trade-offs in high-frequency
algorithmic constructs, where a symbolic use of infinity permits efficient
abstraction. Our findings contribute to the broader discourse on lightweight
architectural enhancements for interpreted languages, particularly in
performance-critical control flows.
| [
{
"version": "v1",
"created": "Sat, 5 Apr 2025 23:37:55 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Bellamkonda",
"Anjan",
""
],
[
"Bharani",
"Laksh",
""
],
[
"Selvam",
"Harivatsan",
""
]
] | TITLE: AbsInf: A Lightweight Object to Represent float('inf') in Dijkstra's
Algorithm
ABSTRACT: We introduce AbsInf, a lightweight abstract object designed as a
high-performance alternative to Python's native float('inf') within pathfinding
algorithms. Implemented as a C-based Python extension, AbsInf bypasses IEEE-754
float coercion and dynamic type dispatch, offering constant-time dominance
comparisons and arithmetic neutrality. When integrated into Dijkstra's
algorithm without altering its logic, AbsInf reduces runtime by up to 17.2%,
averaging 9.74% across diverse synthetic and real-world graph datasets. This
optimization highlights the performance trade-offs in high-frequency
algorithmic constructs, where a symbolic use of infinity permits efficient
abstraction. Our findings contribute to the broader discourse on lightweight
architectural enhancements for interpreted languages, particularly in
performance-critical control flows.
|
2504.04336 | Cong Sun | Cong Sun and Kurt Teichman and Yiliang Zhou and Brian Critelli and
David Nauheim and Graham Keir and Xindi Wang and Judy Zhong and Adam E
Flanders and George Shih and Yifan Peng | Generative Large Language Models Trained for Detecting Errors in
Radiology Reports | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this retrospective study, a dataset was constructed with two parts. The
first part included 1,656 synthetic chest radiology reports generated by GPT-4
using specified prompts, with 828 being error-free synthetic reports and 828
containing errors. The second part included 614 reports: 307 error-free reports
between 2011 and 2016 from the MIMIC-CXR database and 307 corresponding
synthetic reports with errors generated by GPT-4 on the basis of these
MIMIC-CXR reports and specified prompts. All errors were categorized into four
types: negation, left/right, interval change, and transcription errors. Then,
several models, including Llama-3, GPT-4, and BiomedBERT, were refined using
zero-shot prompting, few-shot prompting, or fine-tuning strategies. Finally,
the performance of these models was evaluated using the F1 score, 95\%
confidence interval (CI) and paired-sample t-tests on our constructed dataset,
with the prediction results further assessed by radiologists. Using zero-shot
prompting, the fine-tuned Llama-3-70B-Instruct model achieved the best
performance with the following F1 scores: 0.769 for negation errors, 0.772 for
left/right errors, 0.750 for interval change errors, 0.828 for transcription
errors, and 0.780 overall. In the real-world evaluation phase, two radiologists
reviewed 200 randomly selected reports output by the model. Of these, 99 were
confirmed to contain errors detected by the models by both radiologists, and
163 were confirmed to contain model-detected errors by at least one
radiologist. Generative LLMs, fine-tuned on synthetic and MIMIC-CXR radiology
reports, greatly enhanced error detection in radiology reports.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 03:02:36 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Sun",
"Cong",
""
],
[
"Teichman",
"Kurt",
""
],
[
"Zhou",
"Yiliang",
""
],
[
"Critelli",
"Brian",
""
],
[
"Nauheim",
"David",
""
],
[
"Keir",
"Graham",
""
],
[
"Wang",
"Xindi",
""
],
[
"Zhong",
"Judy",
""
],
[
"Flanders",
"Adam E",
""
],
[
"Shih",
"George",
""
],
[
"Peng",
"Yifan",
""
]
] | TITLE: Generative Large Language Models Trained for Detecting Errors in
Radiology Reports
ABSTRACT: In this retrospective study, a dataset was constructed with two parts. The
first part included 1,656 synthetic chest radiology reports generated by GPT-4
using specified prompts, with 828 being error-free synthetic reports and 828
containing errors. The second part included 614 reports: 307 error-free reports
between 2011 and 2016 from the MIMIC-CXR database and 307 corresponding
synthetic reports with errors generated by GPT-4 on the basis of these
MIMIC-CXR reports and specified prompts. All errors were categorized into four
types: negation, left/right, interval change, and transcription errors. Then,
several models, including Llama-3, GPT-4, and BiomedBERT, were refined using
zero-shot prompting, few-shot prompting, or fine-tuning strategies. Finally,
the performance of these models was evaluated using the F1 score, 95\%
confidence interval (CI) and paired-sample t-tests on our constructed dataset,
with the prediction results further assessed by radiologists. Using zero-shot
prompting, the fine-tuned Llama-3-70B-Instruct model achieved the best
performance with the following F1 scores: 0.769 for negation errors, 0.772 for
left/right errors, 0.750 for interval change errors, 0.828 for transcription
errors, and 0.780 overall. In the real-world evaluation phase, two radiologists
reviewed 200 randomly selected reports output by the model. Of these, 99 were
confirmed to contain errors detected by the models by both radiologists, and
163 were confirmed to contain model-detected errors by at least one
radiologist. Generative LLMs, fine-tuned on synthetic and MIMIC-CXR radiology
reports, greatly enhanced error detection in radiology reports.
|
2504.04338 | Xunjiang Gu | Alexander Naumann, Xunjiang Gu, Tolga Dimlioglu, Mariusz Bojarski,
Alperen Degirmenci, Alexander Popov, Devansh Bisla, Marco Pavone, Urs
M\"uller, Boris Ivanovic | Data Scaling Laws for End-to-End Autonomous Driving | 15 pages, 11 figures, 4 tables, CVPR 2025 Workshop on Autonomous
Driving | null | null | null | cs.RO cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Autonomous vehicle (AV) stacks have traditionally relied on decomposed
approaches, with separate modules handling perception, prediction, and
planning. However, this design introduces information loss during inter-module
communication, increases computational overhead, and can lead to compounding
errors. To address these challenges, recent works have proposed architectures
that integrate all components into an end-to-end differentiable model, enabling
holistic system optimization. This shift emphasizes data engineering over
software integration, offering the potential to enhance system performance by
simply scaling up training resources. In this work, we evaluate the performance
of a simple end-to-end driving architecture on internal driving datasets
ranging in size from 16 to 8192 hours with both open-loop metrics and
closed-loop simulations. Specifically, we investigate how much additional
training data is needed to achieve a target performance gain, e.g., a 5%
improvement in motion prediction accuracy. By understanding the relationship
between model performance and training dataset size, we aim to provide insights
for data-driven decision-making in autonomous driving development.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 03:23:48 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Naumann",
"Alexander",
""
],
[
"Gu",
"Xunjiang",
""
],
[
"Dimlioglu",
"Tolga",
""
],
[
"Bojarski",
"Mariusz",
""
],
[
"Degirmenci",
"Alperen",
""
],
[
"Popov",
"Alexander",
""
],
[
"Bisla",
"Devansh",
""
],
[
"Pavone",
"Marco",
""
],
[
"Müller",
"Urs",
""
],
[
"Ivanovic",
"Boris",
""
]
] | TITLE: Data Scaling Laws for End-to-End Autonomous Driving
ABSTRACT: Autonomous vehicle (AV) stacks have traditionally relied on decomposed
approaches, with separate modules handling perception, prediction, and
planning. However, this design introduces information loss during inter-module
communication, increases computational overhead, and can lead to compounding
errors. To address these challenges, recent works have proposed architectures
that integrate all components into an end-to-end differentiable model, enabling
holistic system optimization. This shift emphasizes data engineering over
software integration, offering the potential to enhance system performance by
simply scaling up training resources. In this work, we evaluate the performance
of a simple end-to-end driving architecture on internal driving datasets
ranging in size from 16 to 8192 hours with both open-loop metrics and
closed-loop simulations. Specifically, we investigate how much additional
training data is needed to achieve a target performance gain, e.g., a 5%
improvement in motion prediction accuracy. By understanding the relationship
between model performance and training dataset size, we aim to provide insights
for data-driven decision-making in autonomous driving development.
|
2504.04339 | Peng Gao | Peng Gao, Yujian Lee, Zailong Chen, Hui zhang, Xubo Liu, Yiyang Hu,
Guquang Jing | NCL-CIR: Noise-aware Contrastive Learning for Composed Image Retrieval | Has been accepted by ICASSP2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Composed Image Retrieval (CIR) seeks to find a target image using a
multi-modal query, which combines an image with modification text to pinpoint
the target. While recent CIR methods have shown promise, they mainly focus on
exploring relationships between the query pairs (image and text) through data
augmentation or model design. These methods often assume perfect alignment
between queries and target images, an idealized scenario rarely encountered in
practice. In reality, pairs are often partially or completely mismatched due to
issues like inaccurate modification texts, low-quality target images, and
annotation errors. Ignoring these mismatches leads to numerous False Positive
Pairs (FPPs), denoted as noise pairs in the dataset, causing the model to overfit
and ultimately reducing its performance. To address this problem, we propose
the Noise-aware Contrastive Learning for CIR (NCL-CIR), comprising two key
components: the Weight Compensation Block (WCB) and the Noise-pair Filter Block
(NFB). The WCB coupled with diverse weight maps can ensure more stable token
representations of multi-modal queries and target images. Meanwhile, the NFB,
in conjunction with the Gaussian Mixture Model (GMM), predicts noise pairs by
evaluating loss distributions, and generates soft labels correspondingly,
allowing for the design of the soft-label based Noise Contrastive Estimation
(NCE) loss function. Consequently, the overall architecture helps to mitigate
the influence of mismatched and partially matched samples, with experimental
results demonstrating that NCL-CIR achieves exceptional performance on the
benchmark datasets.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 03:27:23 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Gao",
"Peng",
""
],
[
"Lee",
"Yujian",
""
],
[
"Chen",
"Zailong",
""
],
[
"zhang",
"Hui",
""
],
[
"Liu",
"Xubo",
""
],
[
"Hu",
"Yiyang",
""
],
[
"Jing",
"Guquang",
""
]
] | TITLE: NCL-CIR: Noise-aware Contrastive Learning for Composed Image Retrieval
ABSTRACT: Composed Image Retrieval (CIR) seeks to find a target image using a
multi-modal query, which combines an image with modification text to pinpoint
the target. While recent CIR methods have shown promise, they mainly focus on
exploring relationships between the query pairs (image and text) through data
augmentation or model design. These methods often assume perfect alignment
between queries and target images, an idealized scenario rarely encountered in
practice. In reality, pairs are often partially or completely mismatched due to
issues like inaccurate modification texts, low-quality target images, and
annotation errors. Ignoring these mismatches leads to numerous False Positive
Pairs (FPPs), denoted as noise pairs in the dataset, causing the model to overfit
and ultimately reducing its performance. To address this problem, we propose
the Noise-aware Contrastive Learning for CIR (NCL-CIR), comprising two key
components: the Weight Compensation Block (WCB) and the Noise-pair Filter Block
(NFB). The WCB coupled with diverse weight maps can ensure more stable token
representations of multi-modal queries and target images. Meanwhile, the NFB,
in conjunction with the Gaussian Mixture Model (GMM), predicts noise pairs by
evaluating loss distributions, and generates soft labels correspondingly,
allowing for the design of the soft-label based Noise Contrastive Estimation
(NCE) loss function. Consequently, the overall architecture helps to mitigate
the influence of mismatched and partially matched samples, with experimental
results demonstrating that NCL-CIR achieves exceptional performance on the
benchmark datasets.
|
2504.04340 | Ying Zhao | Ying Zhao | AnomalyHybrid: A Domain-agnostic Generative Framework for General
Anomaly Detection | Accepted to CVPR 2025 workshop on Harnessing Generative Models for
Synthetic Visual Datasets (SyntaGen) | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Anomaly generation is an effective way to mitigate data scarcity for anomaly
detection task. Most existing works shine at industrial anomaly generation with
multiple specialists or large generative models, rarely generalizing to
anomalies in other applications. In this paper, we present AnomalyHybrid, a
domain-agnostic framework designed to generate authentic and diverse anomalies
simply by combining the reference and target images. AnomalyHybrid is a
Generative Adversarial Network(GAN)-based framework having two decoders that
integrate the appearance of the reference image into the depth and edge
structures of the target image, respectively. With the help of the depth
decoder, AnomalyHybrid achieves authentic generation, especially for anomalies
with changing depth values, such as protrusion and dent. Moreover, it relaxes
the fine-grained structural control of the edge decoder and brings more
diversity. Without using
annotations, AnomalyHybrid is easily trained with sets of color, depth and edge
of same images having different augmentations. Extensive experiments carried on
HeliconiusButterfly, MVTecAD and MVTec3D datasets demonstrate that
AnomalyHybrid surpasses the GAN-based state-of-the-art on anomaly generation
and its downstream anomaly classification, detection and segmentation tasks. On
MVTecAD dataset, AnomalyHybrid achieves 2.06/0.32 IS/LPIPS for anomaly
generation, 52.6 Acc for anomaly classification with ResNet34, 97.3/72.9 AP for
image/pixel-level anomaly detection with a simple UNet.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 03:28:30 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Zhao",
"Ying",
""
]
] | TITLE: AnomalyHybrid: A Domain-agnostic Generative Framework for General
Anomaly Detection
ABSTRACT: Anomaly generation is an effective way to mitigate data scarcity for anomaly
detection task. Most existing works shine at industrial anomaly generation with
multiple specialists or large generative models, rarely generalizing to
anomalies in other applications. In this paper, we present AnomalyHybrid, a
domain-agnostic framework designed to generate authentic and diverse anomalies
simply by combining the reference and target images. AnomalyHybrid is a
Generative Adversarial Network(GAN)-based framework having two decoders that
integrate the appearance of the reference image into the depth and edge
structures of the target image, respectively. With the help of the depth
decoder, AnomalyHybrid achieves authentic generation, especially for anomalies
with changing depth values, such as protrusion and dent. Moreover, it relaxes
the fine-grained structural control of the edge decoder and brings more
diversity. Without using
annotations, AnomalyHybrid is easily trained with sets of color, depth and edge
of same images having different augmentations. Extensive experiments carried on
HeliconiusButterfly, MVTecAD and MVTec3D datasets demonstrate that
AnomalyHybrid surpasses the GAN-based state-of-the-art on anomaly generation
and its downstream anomaly classification, detection and segmentation tasks. On
MVTecAD dataset, AnomalyHybrid achieves 2.06/0.32 IS/LPIPS for anomaly
generation, 52.6 Acc for anomaly classification with ResNet34, 97.3/72.9 AP for
image/pixel-level anomaly detection with a simple UNet.
|
2504.04348 | Xiaohui Jiang | Shihao Wang, Zhiding Yu, Xiaohui Jiang, Shiyi Lan, Min Shi, Nadine
Chang, Jan Kautz, Ying Li, Jose M. Alvarez | OmniDrive: A Holistic Vision-Language Dataset for Autonomous Driving
with Counterfactual Reasoning | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The advances in vision-language models (VLMs) have led to a growing interest
in autonomous driving to leverage their strong reasoning capabilities. However,
extending these capabilities from 2D to full 3D understanding is crucial for
real-world applications. To address this challenge, we propose OmniDrive, a
holistic vision-language dataset that aligns agent models with 3D driving tasks
through counterfactual reasoning. This approach enhances decision-making by
evaluating potential scenarios and their outcomes, similar to human drivers
considering alternative actions. Our counterfactual-based synthetic data
annotation process generates large-scale, high-quality datasets, providing
denser supervision signals that bridge planning trajectories and language-based
reasoning. Further, we explore two advanced OmniDrive-Agent frameworks, namely
Omni-L and Omni-Q, to assess the importance of vision-language alignment versus
3D perception, revealing critical insights into designing effective LLM-agents.
Significant improvements on the DriveLM Q&A benchmark and nuScenes open-loop
planning demonstrate the effectiveness of our dataset and methods.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 03:54:21 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wang",
"Shihao",
""
],
[
"Yu",
"Zhiding",
""
],
[
"Jiang",
"Xiaohui",
""
],
[
"Lan",
"Shiyi",
""
],
[
"Shi",
"Min",
""
],
[
"Chang",
"Nadine",
""
],
[
"Kautz",
"Jan",
""
],
[
"Li",
"Ying",
""
],
[
"Alvarez",
"Jose M.",
""
]
] | TITLE: OmniDrive: A Holistic Vision-Language Dataset for Autonomous Driving
with Counterfactual Reasoning
ABSTRACT: The advances in vision-language models (VLMs) have led to a growing interest
in autonomous driving to leverage their strong reasoning capabilities. However,
extending these capabilities from 2D to full 3D understanding is crucial for
real-world applications. To address this challenge, we propose OmniDrive, a
holistic vision-language dataset that aligns agent models with 3D driving tasks
through counterfactual reasoning. This approach enhances decision-making by
evaluating potential scenarios and their outcomes, similar to human drivers
considering alternative actions. Our counterfactual-based synthetic data
annotation process generates large-scale, high-quality datasets, providing
denser supervision signals that bridge planning trajectories and language-based
reasoning. Further, we explore two advanced OmniDrive-Agent frameworks, namely
Omni-L and Omni-Q, to assess the importance of vision-language alignment versus
3D perception, revealing critical insights into designing effective LLM-agents.
Significant improvements on the DriveLM Q&A benchmark and nuScenes open-loop
planning demonstrate the effectiveness of our dataset and methods.
|
2504.04363 | Shenyang Liu | Shenyang Liu, Saleh Almohaimeed, Liqiang Wang | REFORMER: A ChatGPT-Driven Data Synthesis Framework Elevating
Text-to-SQL Models | 2024 International Conference on Machine Learning and Applications
(ICMLA) | null | 10.1109/ICMLA61862.2024.00119 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | The existing Text-to-SQL models suffer from a shortage of training data,
inhibiting their ability to fully facilitate the applications of SQL queries in
new domains. To address this challenge, various data synthesis techniques have
been employed to generate more diverse and higher quality data. In this paper,
we propose REFORMER, a framework that leverages ChatGPT's prowess without the
need for additional training, to facilitate the synthesis of (question, SQL
query) pairs tailored to new domains. Our data augmentation approach is based
on a "retrieve-and-edit" method, where we generate new questions by filling
masked question using explanation of SQL queries with the help of ChatGPT.
Furthermore, we demonstrate that cycle consistency remains a valuable method of
validation when applied appropriately. Our experimental results show that
REFORMER consistently outperforms previous data augmentation methods. To
further investigate the power of ChatGPT and create a general data augmentation
method, we also generate the new data by paraphrasing the question in the
dataset and by paraphrasing the description of a new SQL query that is
generated by ChatGPT as well. Our results affirm that paraphrasing questions
generated by ChatGPT helps augment the original data.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 05:27:37 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Liu",
"Shenyang",
""
],
[
"Almohaimeed",
"Saleh",
""
],
[
"Wang",
"Liqiang",
""
]
] | TITLE: REFORMER: A ChatGPT-Driven Data Synthesis Framework Elevating
Text-to-SQL Models
ABSTRACT: The existing Text-to-SQL models suffer from a shortage of training data,
inhibiting their ability to fully facilitate the applications of SQL queries in
new domains. To address this challenge, various data synthesis techniques have
been employed to generate more diverse and higher quality data. In this paper,
we propose REFORMER, a framework that leverages ChatGPT's prowess without the
need for additional training, to facilitate the synthesis of (question, SQL
query) pairs tailored to new domains. Our data augmentation approach is based
on a "retrieve-and-edit" method, where we generate new questions by filling
masked question using explanation of SQL queries with the help of ChatGPT.
Furthermore, we demonstrate that cycle consistency remains a valuable method of
validation when applied appropriately. Our experimental results show that
REFORMER consistently outperforms previous data augmentation methods. To
further investigate the power of ChatGPT and create a general data augmentation
method, we also generate the new data by paraphrasing the question in the
dataset and by paraphrasing the description of a new SQL query that is
generated by ChatGPT as well. Our results affirm that paraphrasing questions
generated by ChatGPT helps augment the original data.
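The cycle-consistency validation mentioned above can be made concrete with an execution-based check: run the original SQL and the SQL recovered from the synthesized question against a sample database, and keep the pair only if the result sets match. The toy schema, helper name, and matching rule below are illustrative assumptions, not REFORMER's exact procedure.

```python
import sqlite3

def cycle_consistent(sql_a, sql_b, setup_sql):
    """Return True if two SQL queries yield the same result set on a
    sample database. Sketch of an execution-based cycle-consistency
    filter; names and criteria here are assumptions for illustration.
    """
    con = sqlite3.connect(":memory:")
    con.executescript(setup_sql)
    rows_a = sorted(con.execute(sql_a).fetchall())
    rows_b = sorted(con.execute(sql_b).fetchall())
    con.close()
    return rows_a == rows_b

setup = """
CREATE TABLE singer (name TEXT, age INTEGER);
INSERT INTO singer VALUES ('Ann', 30), ('Bo', 45), ('Cy', 52);
"""
orig = "SELECT name FROM singer WHERE age > 40"
roundtrip = "SELECT name FROM singer WHERE age >= 41"  # hypothetical re-parse
print(cycle_consistent(orig, roundtrip, setup))  # prints True
```

A pair whose round-tripped SQL returns different rows would be rejected by this filter.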
|
2504.04367 | Vinod P | Sameera K. M., Vinod P., Anderson Rocha, Rafidha Rehiman K. A., Mauro
Conti | WeiDetect: Weibull Distribution-Based Defense against Poisoning Attacks
in Federated Learning for Network Intrusion Detection Systems | null | null | null | null | cs.CR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In the era of data expansion, ensuring data privacy has become increasingly
critical, posing significant challenges to traditional AI-based applications.
In addition, the increasing adoption of IoT devices has introduced significant
cybersecurity challenges, making traditional Network Intrusion Detection
Systems (NIDS) less effective against evolving threats, and privacy concerns
and regulatory restrictions limit their deployment. Federated Learning (FL) has
emerged as a promising solution, allowing decentralized model training while
maintaining data privacy to solve these issues. However, despite implementing
privacy-preserving technologies, FL systems remain vulnerable to adversarial
attacks. Furthermore, data distributions among clients are typically
heterogeneous (non-IID) in FL scenarios. We propose WeiDetect, a two-phase,
server-side defense
mechanism for FL-based NIDS that detects malicious participants to address
these challenges. In the first phase, local models are evaluated using a
validation dataset to generate validation scores. These scores are then
analyzed using a Weibull distribution, identifying and removing malicious
models. We conducted experiments to evaluate the effectiveness of our approach
in diverse attack settings. Our evaluation included two popular datasets,
CIC-Darknet2020 and CSE-CIC-IDS2018, tested under non-IID data distributions.
Our findings highlight that WeiDetect outperforms state-of-the-art defense
approaches, improving target-class recall by up to 70% and enhancing the
global model's F1 score by 1% to 14%.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 05:31:24 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"M.",
"Sameera K.",
""
],
[
"P.",
"Vinod",
""
],
[
"Rocha",
"Anderson",
""
],
[
"A.",
"Rafidha Rehiman K.",
""
],
[
"Conti",
"Mauro",
""
]
] | TITLE: WeiDetect: Weibull Distribution-Based Defense against Poisoning Attacks
in Federated Learning for Network Intrusion Detection Systems
ABSTRACT: In the era of data expansion, ensuring data privacy has become increasingly
critical, posing significant challenges to traditional AI-based applications.
In addition, the increasing adoption of IoT devices has introduced significant
cybersecurity challenges, making traditional Network Intrusion Detection
Systems (NIDS) less effective against evolving threats, and privacy concerns
and regulatory restrictions limit their deployment. Federated Learning (FL) has
emerged as a promising solution, allowing decentralized model training while
maintaining data privacy to solve these issues. However, despite implementing
privacy-preserving technologies, FL systems remain vulnerable to adversarial
attacks. Furthermore, data distributions among clients are typically
heterogeneous (non-IID) in FL scenarios. We propose WeiDetect, a two-phase,
server-side defense
mechanism for FL-based NIDS that detects malicious participants to address
these challenges. In the first phase, local models are evaluated using a
validation dataset to generate validation scores. These scores are then
analyzed using a Weibull distribution, identifying and removing malicious
models. We conducted experiments to evaluate the effectiveness of our approach
in diverse attack settings. Our evaluation included two popular datasets,
CIC-Darknet2020 and CSE-CIC-IDS2018, tested under non-IID data distributions.
Our findings highlight that WeiDetect outperforms state-of-the-art defense
approaches, improving target-class recall by up to 70% and enhancing the
global model's F1 score by 1% to 14%.
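The second-phase idea above (fit a Weibull distribution to all validation scores, then flag participants in the low tail) can be sketched as follows. The fitting setup, tail cutoff, and toy scores are assumptions for illustration, not the authors' exact procedure.

```python
import numpy as np
from scipy.stats import weibull_min

def weibull_filter(scores, tail_prob=0.05):
    """Flag participants whose validation score lies in the low tail of a
    Weibull distribution fitted to all scores. Minimal sketch of the
    second phase described in the abstract; cutoff rule is an assumption.
    """
    shape, loc, scale = weibull_min.fit(scores, floc=0.0)
    cutoff = weibull_min.ppf(tail_prob, shape, loc=loc, scale=scale)
    return np.asarray(scores) < cutoff  # True = likely malicious

# Toy example: 50 benign validation scores plus two poisoned clients
benign = weibull_min.ppf(np.linspace(0.01, 0.99, 50), 2.0)
poisoned = np.array([0.01, 0.02])  # anomalously low validation scores
scores = np.concatenate([benign, poisoned])
flags = weibull_filter(scores)
print(flags[-2:])  # the two poisoned clients are flagged
```

In a real deployment the scores would come from evaluating each client's local model on a server-held validation set.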
|
2504.04371 | Satyajeet Sahoo Mr | Satyajeet Sahoo and Jhareswar Maiti | A Novel Cholesky Kernel based Support Vector Classifier | null | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Support Vector Machine (SVM) is a popular supervised classification model
that works by first finding the margin boundaries for the training data classes
and then calculating the decision boundary, which is then used to classify the
test data. This study demonstrates limitations of traditional support vector
classification, which uses Cartesian coordinate geometry to find the margin and
decision boundaries in an input space using only a few support vectors, without
considering data variance and correlation. Subsequently, the study proposes a
new Cholesky Kernel that adjusts for the effects of variance-covariance
structure of the data in the decision boundary equation and margin
calculations. The study demonstrates that the SVM model is valid only in
Euclidean space, and that the Cholesky kernel obtained by decomposing the covariance
matrix acts as a transformation matrix, which when applied on the original data
transforms the data from the input space to the Euclidean space. The
effectiveness of the Cholesky kernel based SVM classifier is demonstrated by
classifying the Wisconsin Breast Cancer (Diagnostic) Dataset and comparing with
traditional SVM approaches. The Cholesky kernel based SVM model shows marked
improvement in the precision, recall and F1 scores compared to linear and other
kernel SVMs.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 05:57:33 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Sahoo",
"Satyajeet",
""
],
[
"Maiti",
"Jhareswar",
""
]
] | TITLE: A Novel Cholesky Kernel based Support Vector Classifier
ABSTRACT: Support Vector Machine (SVM) is a popular supervised classification model
that works by first finding the margin boundaries for the training data classes
and then calculating the decision boundary, which is then used to classify the
test data. This study demonstrates limitations of traditional support vector
classification, which uses Cartesian coordinate geometry to find the margin and
decision boundaries in an input space using only a few support vectors, without
considering data variance and correlation. Subsequently, the study proposes a
new Cholesky Kernel that adjusts for the effects of variance-covariance
structure of the data in the decision boundary equation and margin
calculations. The study demonstrates that the SVM model is valid only in
Euclidean space, and that the Cholesky kernel obtained by decomposing the covariance
matrix acts as a transformation matrix, which when applied on the original data
transforms the data from the input space to the Euclidean space. The
effectiveness of the Cholesky kernel based SVM classifier is demonstrated by
classifying the Wisconsin Breast Cancer (Diagnostic) Dataset and comparing with
traditional SVM approaches. The Cholesky kernel based SVM model shows marked
improvement in the precision, recall and F1 scores compared to linear and other
kernel SVMs.
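The transformation described above, applying the inverse Cholesky factor of the covariance to map data into a space with uncorrelated, unit-variance features, can be sketched in a few lines. Function names and toy data are illustrative, not the authors' implementation.

```python
import numpy as np

def cholesky_whiten(X):
    """Whiten data using the inverse Cholesky factor of its sample
    covariance: with Sigma = L L^T, applying L^{-1} maps the input space
    to a Euclidean space where a standard linear SVM is valid. Sketch of
    the paper's idea under assumed details.
    """
    Xc = X - X.mean(axis=0)
    L = np.linalg.cholesky(np.cov(Xc, rowvar=False))
    # Solve L Z^T = Xc^T, i.e. Z = Xc L^{-T}
    return np.linalg.solve(L, Xc.T).T

rng = np.random.default_rng(0)
X = rng.multivariate_normal([0.0, 0.0], [[4.0, 1.5], [1.5, 2.0]], size=500)
Z = cholesky_whiten(X)
print(np.cov(Z, rowvar=False).round(6))  # identity: decorrelated, unit variance
```

An ordinary linear SVM fitted on `Z` then corresponds to a covariance-adjusted decision boundary in the original space.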
|
2504.04373 | Shenyang Liu | Shenyang Liu, Yang Gao, Shaoyan Zhai, Liqiang Wang | StyleRec: A Benchmark Dataset for Prompt Recovery in Writing Style
Transformation | 2024 IEEE International Conference on Big Data (BigData) | null | 10.1109/BigData62323.2024.10825143 | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Prompt Recovery, reconstructing prompts from the outputs of large language
models (LLMs), has grown in importance as LLMs become ubiquitous. Most users
access LLMs through APIs without internal model weights, relying only on
outputs and logits, which complicates recovery. This paper explores a unique
prompt recovery task focused on reconstructing prompts for style transfer and
rephrasing, rather than typical question-answering. We introduce a dataset
created with LLM assistance, ensuring quality through multiple techniques, and
test methods like zero-shot, few-shot, jailbreak, chain-of-thought,
fine-tuning, and a novel canonical-prompt fallback for poor-performing cases.
Our results show that one-shot and fine-tuning yield the best outcomes but
highlight flaws in traditional sentence similarity metrics for evaluating
prompt recovery. Contributions include (1) a benchmark dataset, (2)
comprehensive experiments on prompt recovery strategies, and (3) identification
of limitations in current evaluation metrics, all of which advance general
prompt recovery research, where the structure of the input prompt is
unrestricted.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 06:02:28 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Liu",
"Shenyang",
""
],
[
"Gao",
"Yang",
""
],
[
"Zhai",
"Shaoyan",
""
],
[
"Wang",
"Liqiang",
""
]
] | TITLE: StyleRec: A Benchmark Dataset for Prompt Recovery in Writing Style
Transformation
ABSTRACT: Prompt Recovery, reconstructing prompts from the outputs of large language
models (LLMs), has grown in importance as LLMs become ubiquitous. Most users
access LLMs through APIs without internal model weights, relying only on
outputs and logits, which complicates recovery. This paper explores a unique
prompt recovery task focused on reconstructing prompts for style transfer and
rephrasing, rather than typical question-answering. We introduce a dataset
created with LLM assistance, ensuring quality through multiple techniques, and
test methods like zero-shot, few-shot, jailbreak, chain-of-thought,
fine-tuning, and a novel canonical-prompt fallback for poor-performing cases.
Our results show that one-shot and fine-tuning yield the best outcomes but
highlight flaws in traditional sentence similarity metrics for evaluating
prompt recovery. Contributions include (1) a benchmark dataset, (2)
comprehensive experiments on prompt recovery strategies, and (3) identification
of limitations in current evaluation metrics, all of which advance general
prompt recovery research, where the structure of the input prompt is
unrestricted.
|
2504.04374 | Jiyu Tian | Jiyu Tian, Mingchu Li, Liming Chen, Zumin Wang | iADCPS: Time Series Anomaly Detection for Evolving Cyber-physical
Systems via Incremental Meta-learning | null | null | null | null | cs.CR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Anomaly detection for cyber-physical systems (ADCPS) is crucial in
identifying faults and potential attacks by analyzing the time series of sensor
measurements and actuator states. However, current methods lack adaptation to
data distribution shifts in both temporal and spatial dimensions as
cyber-physical systems evolve. To tackle this issue, we propose an incremental
meta-learning-based approach, namely iADCPS, which can continuously update the
model through limited evolving normal samples to reconcile the distribution gap
between evolving and historical time series. Specifically, we first introduce a
temporal mixup strategy to align data for data-level generalization, which is
then combined with a one-class meta-learning approach for model-level
generalization. Furthermore, we develop a non-parametric dynamic threshold to
adaptively adjust the threshold based on the probability density of the
abnormal scores without any anomaly supervision. We empirically evaluate the
effectiveness of iADCPS using three publicly available datasets: PUMP, SWaT,
and WADI. The experimental results demonstrate that our method achieves 99.0%,
93.1%, and 78.7% F1-Score, respectively, which outperforms the state-of-the-art
(SOTA) ADCPS method, especially in the context of evolving CPSs.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 06:02:31 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Tian",
"Jiyu",
""
],
[
"Li",
"Mingchu",
""
],
[
"Chen",
"Liming",
""
],
[
"Wang",
"Zumin",
""
]
] | TITLE: iADCPS: Time Series Anomaly Detection for Evolving Cyber-physical
Systems via Incremental Meta-learning
ABSTRACT: Anomaly detection for cyber-physical systems (ADCPS) is crucial in
identifying faults and potential attacks by analyzing the time series of sensor
measurements and actuator states. However, current methods lack adaptation to
data distribution shifts in both temporal and spatial dimensions as
cyber-physical systems evolve. To tackle this issue, we propose an incremental
meta-learning-based approach, namely iADCPS, which can continuously update the
model through limited evolving normal samples to reconcile the distribution gap
between evolving and historical time series. Specifically, We first introduce a
temporal mixup strategy to align data for data-level generalization which is
then combined with the one-class meta-learning approach for model-level
generalization. Furthermore, we develop a non-parametric dynamic threshold to
adaptively adjust the threshold based on the probability density of the
abnormal scores without any anomaly supervision. We empirically evaluate the
effectiveness of iADCPS using three publicly available datasets: PUMP, SWaT,
and WADI. The experimental results demonstrate that our method achieves 99.0%,
93.1%, and 78.7% F1-Score, respectively, which outperforms the state-of-the-art
(SOTA) ADCPS method, especially in the context of evolving CPSs.
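A non-parametric dynamic threshold of the kind described above can be sketched with a plain Gaussian kernel density estimate over the anomaly scores. The 1%-of-peak cutoff rule and all names here are assumptions for illustration, since the abstract does not give iADCPS's exact rule.

```python
import numpy as np

def dynamic_threshold(scores, bandwidth=None, grid=512):
    """Pick an anomaly threshold from the probability density of the
    scores, with no anomaly labels: place the cutoff where the density
    falls far below the main mode. Numpy-only sketch under assumed
    details.
    """
    scores = np.asarray(scores, dtype=float)
    if bandwidth is None:  # Silverman's rule of thumb
        bandwidth = 1.06 * scores.std() * len(scores) ** (-1 / 5)
    xs = np.linspace(scores.min(), scores.max() + 3 * bandwidth, grid)
    dens = np.exp(-0.5 * ((xs[:, None] - scores[None, :]) / bandwidth) ** 2).sum(axis=1)
    dens /= len(scores) * bandwidth * np.sqrt(2 * np.pi)
    peak = int(dens.argmax())
    # first grid point past the mode where density drops below 1% of peak
    past = np.where(dens[peak:] < 0.01 * dens[peak])[0]
    return xs[peak + past[0]] if past.size else xs[-1]

rng = np.random.default_rng(0)
normal_scores = rng.normal(0.1, 0.05, size=2000)  # mostly low scores
thr = dynamic_threshold(normal_scores)
```

As the score distribution drifts with the evolving system, re-estimating the density moves the threshold automatically, with no anomaly supervision required.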
|
2504.04375 | Ruoyan Li | Ruoyan Li, Zijie Huang, Yizhou Sun, Wei Wang | From Coarse to Fine: A Physics-Informed Self-Guided Flow Diffusion Model | null | null | null | null | cs.CE | http://creativecommons.org/licenses/by/4.0/ | Machine learning methods are widely explored as a promising way to
reconstruct high-fidelity computational fluid dynamics (CFD) data from
faster-to-compute low-fidelity input. Diffusion models have achieved great
success as they can reconstruct high-fidelity data from low-fidelity inputs at
arbitrary resolution without re-training. However, most existing approaches
assume that low-fidelity data is generated artificially via downsampling
high-fidelity data. In reality, low-fidelity data is produced by numerical
solvers that use a coarser resolution from the start, leading to substantial
differences compared to high-fidelity data, especially in the long-range.
Solver-generated low-fidelity data usually sacrifices fine-grained details,
such as small-scale vortices compared to high-fidelity ones. To bridge this
gap, we propose \model, a novel diffusion model for reconstruction, where both
low- and high-fidelity data are straight from numerical solvers. Our findings
show that state-of-the-art models struggle to generate fine-scale details when
faced with solver-generated low-fidelity inputs. To address this challenge, we
propose an \textit{Importance Weight} strategy during training that serves as a
form of self-guidance, along with a training-free \textit{Residual Correction}
approach during inference that embeds physical insights into the model.
Together, these techniques steer the diffusion model toward more accurate
reconstructions. Experimental results on four 2D turbulent flow datasets
demonstrate the efficacy of our proposed method.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 06:03:44 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Li",
"Ruoyan",
""
],
[
"Huang",
"Zijie",
""
],
[
"Sun",
"Yizhou",
""
],
[
"Wang",
"Wei",
""
]
] | TITLE: From Coarse to Fine: A Physics-Informed Self-Guided Flow Diffusion Model
ABSTRACT: Machine learning methods are widely explored as a promising way to
reconstruct high-fidelity computational fluid dynamics (CFD) data from
faster-to-compute low-fidelity input. Diffusion models have achieved great
success as they can reconstruct high-fidelity data from low-fidelity inputs at
arbitrary resolution without re-training. However, most existing approaches
assume that low-fidelity data is generated artificially via downsampling
high-fidelity data. In reality, low-fidelity data is produced by numerical
solvers that use a coarser resolution from the start, leading to substantial
differences compared to high-fidelity data, especially in the long-range.
Solver-generated low-fidelity data usually sacrifices fine-grained details,
such as small-scale vortices compared to high-fidelity ones. To bridge this
gap, we propose \model, a novel diffusion model for reconstruction, where both
low- and high-fidelity data are straight from numerical solvers. Our findings
show that state-of-the-art models struggle to generate fine-scale details when
faced with solver-generated low-fidelity inputs. To address this challenge, we
propose an \textit{Importance Weight} strategy during training that serves as a
form of self-guidance, along with a training-free \textit{Residual Correction}
approach during inference that embeds physical insights into the model.
Together, these techniques steer the diffusion model toward more accurate
reconstructions. Experimental results on four 2D turbulent flow datasets
demonstrate the efficacy of our proposed method.
|
2504.04377 | Priyanshu Kumar | Priyanshu Kumar, Devansh Jain, Akhila Yerukola, Liwei Jiang, Himanshu
Beniwal, Thomas Hartvigsen, Maarten Sap | PolyGuard: A Multilingual Safety Moderation Tool for 17 Languages | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Truly multilingual safety moderation efforts for Large Language Models (LLMs)
have been hindered by a narrow focus on a small set of languages (e.g.,
English, Chinese) as well as a limited scope of safety definition, resulting in
significant gaps in moderation capabilities. To bridge these gaps, we release
POLYGUARD, a new state-of-the-art multilingual safety model for safeguarding
LLM generations, and the corresponding training and evaluation datasets.
POLYGUARD is trained on POLYGUARDMIX, the largest multilingual safety training
corpus to date containing 1.91M samples across 17 languages (e.g., Chinese,
Czech, English, Hindi). We also introduce POLYGUARDPROMPTS, a high-quality
multilingual benchmark with 29K samples for the evaluation of safety
guardrails. Created by combining naturally occurring multilingual human-LLM
interactions and human-verified machine translations of an English-only safety
dataset (WildGuardMix; Han et al., 2024), our datasets contain prompt-output
pairs with labels of prompt harmfulness, response harmfulness, and response
refusal. Through extensive evaluations across multiple safety and toxicity
benchmarks, we demonstrate that POLYGUARD outperforms existing state-of-the-art
open-weight and commercial safety classifiers by 5.5%. Our contributions
advance efforts toward safer multilingual LLMs for all global users.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 06:09:21 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Kumar",
"Priyanshu",
""
],
[
"Jain",
"Devansh",
""
],
[
"Yerukola",
"Akhila",
""
],
[
"Jiang",
"Liwei",
""
],
[
"Beniwal",
"Himanshu",
""
],
[
"Hartvigsen",
"Thomas",
""
],
[
"Sap",
"Maarten",
""
]
] | TITLE: PolyGuard: A Multilingual Safety Moderation Tool for 17 Languages
ABSTRACT: Truly multilingual safety moderation efforts for Large Language Models (LLMs)
have been hindered by a narrow focus on a small set of languages (e.g.,
English, Chinese) as well as a limited scope of safety definition, resulting in
significant gaps in moderation capabilities. To bridge these gaps, we release
POLYGUARD, a new state-of-the-art multilingual safety model for safeguarding
LLM generations, and the corresponding training and evaluation datasets.
POLYGUARD is trained on POLYGUARDMIX, the largest multilingual safety training
corpus to date containing 1.91M samples across 17 languages (e.g., Chinese,
Czech, English, Hindi). We also introduce POLYGUARDPROMPTS, a high-quality
multilingual benchmark with 29K samples for the evaluation of safety
guardrails. Created by combining naturally occurring multilingual human-LLM
interactions and human-verified machine translations of an English-only safety
dataset (WildGuardMix; Han et al., 2024), our datasets contain prompt-output
pairs with labels of prompt harmfulness, response harmfulness, and response
refusal. Through extensive evaluations across multiple safety and toxicity
benchmarks, we demonstrate that POLYGUARD outperforms existing state-of-the-art
open-weight and commercial safety classifiers by 5.5%. Our contributions
advance efforts toward safer multilingual LLMs for all global users.
|
2504.04383 | Ximing Lu | Ximing Lu, Seungju Han, David Acuna, Hyunwoo Kim, Jaehun Jung, Shrimai
Prabhumoye, Niklas Muennighoff, Mostofa Patwary, Mohammad Shoeybi, Bryan
Catanzaro, Yejin Choi | Retro-Search: Exploring Untaken Paths for Deeper and Efficient Reasoning | Code and data will be publicly released upon internal approval | null | null | null | cs.AI cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Large reasoning models exhibit remarkable reasoning capabilities via long,
elaborate reasoning trajectories. Supervised fine-tuning on such reasoning
traces, also known as distillation, can be a cost-effective way to boost
reasoning capabilities of student models. However, empirical observations
reveal that these reasoning trajectories are often suboptimal, switching
excessively between different lines of thought, resulting in under-thinking,
over-thinking, and even degenerate responses. We introduce Retro-Search, an
MCTS-inspired search algorithm, for distilling higher quality reasoning paths
from large reasoning models. Retro-Search retrospectively revises reasoning
paths to discover better, yet shorter traces, which can then lead to student
models with enhanced reasoning capabilities with shorter, thus faster
inference. Our approach can enable two use cases: self-improvement, where
models are fine-tuned on their own Retro-Search-ed thought traces, and
weak-to-strong improvement, where a weaker model revises stronger model's
thought traces via Retro-Search. For self-improvement, R1-distill-7B, fine-tuned
on its own Retro-Search-ed traces, reduces the average reasoning length by
31.2% while improving performance by 7.7% across seven math benchmarks. For
weak-to-strong improvement, we retrospectively revise R1-671B's traces from the
OpenThoughts dataset using R1-distill-32B as the Retro-Search-er, a model 20x
smaller. Qwen2.5-32B, fine-tuned on this refined data, achieves performance
comparable to R1-distill-32B, yielding an 11.3% reduction in reasoning length
and a 2.4% performance improvement compared to fine-tuning on the original
OpenThoughts data. Our work counters recently emergent viewpoints that question
the relevance of search algorithms in the era of large reasoning models, by
demonstrating that there are still opportunities for algorithmic advancements,
even for frontier models.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 06:23:27 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Lu",
"Ximing",
""
],
[
"Han",
"Seungju",
""
],
[
"Acuna",
"David",
""
],
[
"Kim",
"Hyunwoo",
""
],
[
"Jung",
"Jaehun",
""
],
[
"Prabhumoye",
"Shrimai",
""
],
[
"Muennighoff",
"Niklas",
""
],
[
"Patwary",
"Mostofa",
""
],
[
"Shoeybi",
"Mohammad",
""
],
[
"Catanzaro",
"Bryan",
""
],
[
"Choi",
"Yejin",
""
]
] | TITLE: Retro-Search: Exploring Untaken Paths for Deeper and Efficient Reasoning
ABSTRACT: Large reasoning models exhibit remarkable reasoning capabilities via long,
elaborate reasoning trajectories. Supervised fine-tuning on such reasoning
traces, also known as distillation, can be a cost-effective way to boost
reasoning capabilities of student models. However, empirical observations
reveal that these reasoning trajectories are often suboptimal, switching
excessively between different lines of thought, resulting in under-thinking,
over-thinking, and even degenerate responses. We introduce Retro-Search, an
MCTS-inspired search algorithm, for distilling higher quality reasoning paths
from large reasoning models. Retro-Search retrospectively revises reasoning
paths to discover better, yet shorter traces, which can then lead to student
models with enhanced reasoning capabilities with shorter, thus faster
inference. Our approach can enable two use cases: self-improvement, where
models are fine-tuned on their own Retro-Search-ed thought traces, and
weak-to-strong improvement, where a weaker model revises stronger model's
thought traces via Retro-Search. For self-improvement, R1-distill-7B, fine-tuned
on its own Retro-Search-ed traces, reduces the average reasoning length by
31.2% while improving performance by 7.7% across seven math benchmarks. For
weak-to-strong improvement, we retrospectively revise R1-671B's traces from the
OpenThoughts dataset using R1-distill-32B as the Retro-Search-er, a model 20x
smaller. Qwen2.5-32B, fine-tuned on this refined data, achieves performance
comparable to R1-distill-32B, yielding an 11.3% reduction in reasoning length
and a 2.4% performance improvement compared to fine-tuning on the original
OpenThoughts data. Our work counters recently emergent viewpoints that question
the relevance of search algorithms in the era of large reasoning models, by
demonstrating that there are still opportunities for algorithmic advancements,
even for frontier models.
|
2504.04386 | Yi Xu | Yi Xu, Weicong Qin, Weijie Yu, Ming He, Jianping Fan, Jun Xu | Decoding Recommendation Behaviors of In-Context Learning LLMs Through
Gradient Descent | 12 pages, 9 figures | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, there has been a growing trend in utilizing large language models
(LLMs) for recommender systems, referred to as LLMRec. A notable approach
within this trend is not to fine-tune these models directly but instead to
leverage In-Context Learning (ICL) methods tailored for LLMRec, denoted as
LLM-ICL Rec. Many contemporary techniques focus on harnessing ICL content to
enhance LLMRec performance.
However, optimizing LLMRec with ICL content presents unresolved challenges.
Specifically, two key issues stand out: (1) the limited understanding of why
using a few demonstrations without model fine-tuning can lead to better
performance compared to zero-shot recommendations; and (2) the lack of evaluation
metrics for demonstrations in LLM-ICL Rec and the absence of theoretical
analysis and practical design for optimizing the generation of ICL content for
recommendation contexts.
To address these two main issues, we propose a theoretical model, the LLM-ICL
Recommendation Equivalent Gradient Descent model (LRGD) in this paper, which
connects recommendation generation with gradient descent dynamics. We
demonstrate that the ICL inference process in an LLM aligns with the training
procedure of its dual model, producing token predictions equivalent to the dual
model's testing outputs. Building on these theoretical insights, we propose an
evaluation metric for assessing demonstration quality. We integrate
perturbations and regularizations in LRGD to enhance the robustness of the
recommender system. To further improve demonstration effectiveness, prevent
performance collapse, and ensure long-term adaptability, we also propose a
two-stage optimization process in practice. Extensive experiments and detailed
analysis on three Amazon datasets validate the theoretical equivalence and
support the effectiveness of our theoretical analysis and practical module
design.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 06:36:45 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Xu",
"Yi",
""
],
[
"Qin",
"Weicong",
""
],
[
"Yu",
"Weijie",
""
],
[
"He",
"Ming",
""
],
[
"Fan",
"Jianping",
""
],
[
"Xu",
"Jun",
""
]
] | TITLE: Decoding Recommendation Behaviors of In-Context Learning LLMs Through
Gradient Descent
ABSTRACT: Recently, there has been a growing trend in utilizing large language models
(LLMs) for recommender systems, referred to as LLMRec. A notable approach
within this trend is not to fine-tune these models directly but instead to
leverage In-Context Learning (ICL) methods tailored for LLMRec, denoted as
LLM-ICL Rec. Many contemporary techniques focus on harnessing ICL content to
enhance LLMRec performance.
However, optimizing LLMRec with ICL content presents unresolved challenges.
Specifically, two key issues stand out: (1) the limited understanding of why
using a few demonstrations without model fine-tuning can lead to better
performance compared to zero-shot recommendations; and (2) the lack of evaluation
metrics for demonstrations in LLM-ICL Rec and the absence of theoretical
analysis and practical design for optimizing the generation of ICL content for
recommendation contexts.
To address these two main issues, we propose a theoretical model, the LLM-ICL
Recommendation Equivalent Gradient Descent model (LRGD) in this paper, which
connects recommendation generation with gradient descent dynamics. We
demonstrate that the ICL inference process in an LLM aligns with the training
procedure of its dual model, producing token predictions equivalent to the dual
model's testing outputs. Building on these theoretical insights, we propose an
evaluation metric for assessing demonstration quality. We integrate
perturbations and regularizations in LRGD to enhance the robustness of the
recommender system. To further improve demonstration effectiveness, prevent
performance collapse, and ensure long-term adaptability, we also propose a
two-stage optimization process in practice. Extensive experiments and detailed
analysis on three Amazon datasets validate the theoretical equivalence and
support the effectiveness of our theoretical analysis and practical module
design.
|
2504.04395 | Jake Grigsby | Jake Grigsby, Yuqi Xie, Justin Sasek, Steven Zheng, Yuke Zhu | Human-Level Competitive Pok\'emon via Scalable Offline Reinforcement
Learning with Transformers | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Competitive Pok\'emon Singles (CPS) is a popular strategy game where players
learn to exploit their opponent based on imperfect information in battles that
can last more than one hundred stochastic turns. AI research in CPS has been
led by heuristic tree search and online self-play, but the game may also create
a platform to study adaptive policies trained offline on large datasets. We
develop a pipeline to reconstruct the first-person perspective of an agent from
logs saved from the third-person perspective of a spectator, thereby unlocking
a dataset of real human battles spanning more than a decade that grows larger
every day. This dataset enables a black-box approach where we train large
sequence models to adapt to their opponent based solely on their input
trajectory while selecting moves without explicit search of any kind. We study
a progression from imitation learning to offline RL and offline fine-tuning on
self-play data in the hardcore competitive setting of Pok\'emon's four oldest
(and most partially observed) game generations. The resulting agents outperform
a recent LLM Agent approach and a strong heuristic search engine. While playing
anonymously in online battles against humans, our best agents climb to rankings
inside the top 10% of active players.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 07:35:15 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Grigsby",
"Jake",
""
],
[
"Xie",
"Yuqi",
""
],
[
"Sasek",
"Justin",
""
],
[
"Zheng",
"Steven",
""
],
[
"Zhu",
"Yuke",
""
]
] | TITLE: Human-Level Competitive Pok\'emon via Scalable Offline Reinforcement
Learning with Transformers
ABSTRACT: Competitive Pok\'emon Singles (CPS) is a popular strategy game where players
learn to exploit their opponent based on imperfect information in battles that
can last more than one hundred stochastic turns. AI research in CPS has been
led by heuristic tree search and online self-play, but the game may also create
a platform to study adaptive policies trained offline on large datasets. We
develop a pipeline to reconstruct the first-person perspective of an agent from
logs saved from the third-person perspective of a spectator, thereby unlocking
a dataset of real human battles spanning more than a decade that grows larger
every day. This dataset enables a black-box approach where we train large
sequence models to adapt to their opponent based solely on their input
trajectory while selecting moves without explicit search of any kind. We study
a progression from imitation learning to offline RL and offline fine-tuning on
self-play data in the hardcore competitive setting of Pok\'emon's four oldest
(and most partially observed) game generations. The resulting agents outperform
a recent LLM Agent approach and a strong heuristic search engine. While playing
anonymously in online battles against humans, our best agents climb to rankings
inside the top 10% of active players.
|
2504.04400 | Bowen Zheng | Bowen Zheng, Enze Liu, Zhongfu Chen, Zhongrui Ma, Yue Wang, Wayne Xin
Zhao, Ji-Rong Wen | Pre-training Generative Recommender with Multi-Identifier Item
Tokenization | null | null | null | null | cs.IR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generative recommendation autoregressively generates item identifiers to
recommend potential items. Existing methods typically adopt a one-to-one
mapping strategy, where each item is represented by a single identifier.
However, this scheme poses issues, such as suboptimal semantic modeling for
low-frequency items and limited diversity in token sequence data. To overcome
these limitations, we propose MTGRec, which leverages Multi-identifier item
Tokenization to augment token sequence data for Generative Recommender
pre-training. Our approach involves two key innovations: multi-identifier item
tokenization and curriculum recommender pre-training. For multi-identifier item
tokenization, we leverage the RQ-VAE as the tokenizer backbone and treat model
checkpoints from adjacent training epochs as semantically relevant tokenizers.
This allows each item to be associated with multiple identifiers, enabling a
single user interaction sequence to be converted into several token sequences
as different data groups. For curriculum recommender pre-training, we introduce
a curriculum learning scheme guided by data influence estimation, dynamically
adjusting the sampling probability of each data group during recommender
pre-training. After pre-training, we fine-tune the model using a single
tokenizer to ensure accurate item identification for recommendation. Extensive
experiments on three public benchmark datasets demonstrate that MTGRec
significantly outperforms both traditional and generative recommendation
baselines in terms of effectiveness and scalability.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 08:03:03 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Zheng",
"Bowen",
""
],
[
"Liu",
"Enze",
""
],
[
"Chen",
"Zhongfu",
""
],
[
"Ma",
"Zhongrui",
""
],
[
"Wang",
"Yue",
""
],
[
"Zhao",
"Wayne Xin",
""
],
[
"Wen",
"Ji-Rong",
""
]
] | TITLE: Pre-training Generative Recommender with Multi-Identifier Item
Tokenization
ABSTRACT: Generative recommendation autoregressively generates item identifiers to
recommend potential items. Existing methods typically adopt a one-to-one
mapping strategy, where each item is represented by a single identifier.
However, this scheme poses issues, such as suboptimal semantic modeling for
low-frequency items and limited diversity in token sequence data. To overcome
these limitations, we propose MTGRec, which leverages Multi-identifier item
Tokenization to augment token sequence data for Generative Recommender
pre-training. Our approach involves two key innovations: multi-identifier item
tokenization and curriculum recommender pre-training. For multi-identifier item
tokenization, we leverage the RQ-VAE as the tokenizer backbone and treat model
checkpoints from adjacent training epochs as semantically relevant tokenizers.
This allows each item to be associated with multiple identifiers, enabling a
single user interaction sequence to be converted into several token sequences
as different data groups. For curriculum recommender pre-training, we introduce
a curriculum learning scheme guided by data influence estimation, dynamically
adjusting the sampling probability of each data group during recommender
pre-training. After pre-training, we fine-tune the model using a single
tokenizer to ensure accurate item identification for recommendation. Extensive
experiments on three public benchmark datasets demonstrate that MTGRec
significantly outperforms both traditional and generative recommendation
baselines in terms of effectiveness and scalability.
|
2504.04405 | Bowen Zheng | Bowen Zheng, Hongyu Lu, Yu Chen, Wayne Xin Zhao, Ji-Rong Wen | Universal Item Tokenization for Transferable Generative Recommendation | null | null | null | null | cs.IR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recently, generative recommendation has emerged as a promising paradigm,
attracting significant research attention. The basic framework involves an item
tokenizer, which represents each item as a sequence of codes serving as its
identifier, and a generative recommender that predicts the next item by
autoregressively generating the target item identifier. However, in existing
methods, both the tokenizer and the recommender are typically domain-specific,
limiting their ability for effective transfer or adaptation to new domains. To
this end, we propose UTGRec, a Universal item Tokenization approach for
transferable Generative Recommendation. Specifically, we design a universal
item tokenizer for encoding rich item semantics by adapting a multimodal large
language model (MLLM). By devising tree-structured codebooks, we discretize
content representations into corresponding codes for item tokenization. To
effectively learn the universal item tokenizer on multiple domains, we
introduce two key techniques in our approach. For raw content reconstruction,
we employ dual lightweight decoders to reconstruct item text and images from
discrete representations to capture general knowledge embedded in the content.
For collaborative knowledge integration, we assume that co-occurring items are
similar and integrate collaborative signals through co-occurrence alignment and
reconstruction. Finally, we present a joint learning framework to pre-train and
adapt the transferable generative recommender across multiple domains.
Extensive experiments on four public datasets demonstrate the superiority of
UTGRec compared to both traditional and generative recommendation baselines.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 08:07:49 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Zheng",
"Bowen",
""
],
[
"Lu",
"Hongyu",
""
],
[
"Chen",
"Yu",
""
],
[
"Zhao",
"Wayne Xin",
""
],
[
"Wen",
"Ji-Rong",
""
]
] | TITLE: Universal Item Tokenization for Transferable Generative Recommendation
ABSTRACT: Recently, generative recommendation has emerged as a promising paradigm,
attracting significant research attention. The basic framework involves an item
tokenizer, which represents each item as a sequence of codes serving as its
identifier, and a generative recommender that predicts the next item by
autoregressively generating the target item identifier. However, in existing
methods, both the tokenizer and the recommender are typically domain-specific,
limiting their ability for effective transfer or adaptation to new domains. To
this end, we propose UTGRec, a Universal item Tokenization approach for
transferable Generative Recommendation. Specifically, we design a universal
item tokenizer for encoding rich item semantics by adapting a multimodal large
language model (MLLM). By devising tree-structured codebooks, we discretize
content representations into corresponding codes for item tokenization. To
effectively learn the universal item tokenizer on multiple domains, we
introduce two key techniques in our approach. For raw content reconstruction,
we employ dual lightweight decoders to reconstruct item text and images from
discrete representations to capture general knowledge embedded in the content.
For collaborative knowledge integration, we assume that co-occurring items are
similar and integrate collaborative signals through co-occurrence alignment and
reconstruction. Finally, we present a joint learning framework to pre-train and
adapt the transferable generative recommender across multiple domains.
Extensive experiments on four public datasets demonstrate the superiority of
UTGRec compared to both traditional and generative recommendation baselines.
|
2504.04422 | Luming Yin | Hongliang Liang, Luming Yin, Guohao Wu, Yuxiang Li, Qiuping Yi, and
Lei Wang | LeakGuard: Detecting Memory Leaks Accurately and Scalably | 21 pages, 5 figures, conference paper on memory leak detection | null | null | null | cs.CR cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Memory leaks are prevalent in various real-world software projects, thereby
leading to serious attacks like denial-of-service. Though prior methods for
detecting memory leaks have made significant advances, they often suffer from low
accuracy and weak scalability when testing large and complex programs. In this
paper, we present LeakGuard, a memory leak detection tool that provides
a satisfactory balance of accuracy and scalability. For accuracy, LeakGuard
analyzes the behaviors of library and developer-defined memory allocation and
deallocation functions in a path-sensitive manner and generates function
summaries for them in a bottom-up approach. Additionally, we develop a pointer
escape analysis technique to model the transfer of pointer ownership. For
scalability, LeakGuard examines each function of interest independently by
using its function summary and under-constrained symbolic execution technique,
which effectively mitigates the path explosion problem. Our extensive evaluation on
18 real-world software projects and standard benchmark datasets demonstrates
that LeakGuard achieves significant advancements in multiple aspects: it
exhibits superior MAD function identification capability compared to Goshawk,
outperforms five state-of-the-art methods in defect detection accuracy, and
successfully identifies 129 previously undetected memory leak bugs, all of
which have been independently verified and confirmed by the respective
development teams.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 09:11:37 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Liang",
"Hongliang",
""
],
[
"Yin",
"Luming",
""
],
[
"Wu",
"Guohao",
""
],
[
"Li",
"Yuxiang",
""
],
[
"Yi",
"Qiuping",
""
],
[
"Wang",
"Lei",
""
]
] | TITLE: LeakGuard: Detecting Memory Leaks Accurately and Scalably
ABSTRACT: Memory leaks are prevalent in various real-world software projects, thereby
leading to serious attacks like denial-of-service. Though prior methods for
detecting memory leaks have made significant advances, they often suffer from low
accuracy and weak scalability when testing large and complex programs. In this
paper, we present LeakGuard, a memory leak detection tool that provides
a satisfactory balance of accuracy and scalability. For accuracy, LeakGuard
analyzes the behaviors of library and developer-defined memory allocation and
deallocation functions in a path-sensitive manner and generates function
summaries for them in a bottom-up approach. Additionally, we develop a pointer
escape analysis technique to model the transfer of pointer ownership. For
scalability, LeakGuard examines each function of interest independently by
using its function summary and under-constrained symbolic execution technique,
which effectively mitigates the path explosion problem. Our extensive evaluation on
18 real-world software projects and standard benchmark datasets demonstrates
that LeakGuard achieves significant advancements in multiple aspects: it
exhibits superior MAD function identification capability compared to Goshawk,
outperforms five state-of-the-art methods in defect detection accuracy, and
successfully identifies 129 previously undetected memory leak bugs, all of
which have been independently verified and confirmed by the respective
development teams.
|
2504.04428 | Yuto Shibata | Yuto Shibata, Keitaro Tanaka, Yoshiaki Bando, Keisuke Imoto, Hirokatsu
Kataoka, Yoshimitsu Aoki | Formula-Supervised Sound Event Detection: Pre-Training Without Real Data | Accepted by ICASSP 2025 | null | null | null | cs.SD cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this paper, we propose a novel formula-driven supervised learning (FDSL)
framework for pre-training an environmental sound analysis model by leveraging
acoustic signals parametrically synthesized through formula-driven methods.
Specifically, we outline detailed procedures and evaluate their effectiveness
for sound event detection (SED). The SED task, which involves estimating the
types and timings of sound events, is particularly challenged by the difficulty
of acquiring a sufficient quantity of accurately labeled training data.
Moreover, it is well known that manually annotated labels often contain noise
and are significantly influenced by the subjective judgment of annotators. To
address these challenges, we propose a novel pre-training method that utilizes
a synthetic dataset, Formula-SED, where acoustic data are generated solely
based on mathematical formulas. The proposed method enables large-scale
pre-training by using the synthesis parameters applied at each time step as
ground truth labels, thereby eliminating label noise and bias. We demonstrate
that large-scale pre-training with Formula-SED significantly enhances model
accuracy and accelerates training, as evidenced by our results in the DESED
dataset used for DCASE2023 Challenge Task 4. The project page is at
https://yutoshibata07.github.io/Formula-SED/
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 09:47:26 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Shibata",
"Yuto",
""
],
[
"Tanaka",
"Keitaro",
""
],
[
"Bando",
"Yoshiaki",
""
],
[
"Imoto",
"Keisuke",
""
],
[
"Kataoka",
"Hirokatsu",
""
],
[
"Aoki",
"Yoshimitsu",
""
]
] | TITLE: Formula-Supervised Sound Event Detection: Pre-Training Without Real Data
ABSTRACT: In this paper, we propose a novel formula-driven supervised learning (FDSL)
framework for pre-training an environmental sound analysis model by leveraging
acoustic signals parametrically synthesized through formula-driven methods.
Specifically, we outline detailed procedures and evaluate their effectiveness
for sound event detection (SED). The SED task, which involves estimating the
types and timings of sound events, is particularly challenged by the difficulty
of acquiring a sufficient quantity of accurately labeled training data.
Moreover, it is well known that manually annotated labels often contain noise
and are significantly influenced by the subjective judgment of annotators. To
address these challenges, we propose a novel pre-training method that utilizes
a synthetic dataset, Formula-SED, where acoustic data are generated solely
based on mathematical formulas. The proposed method enables large-scale
pre-training by using the synthesis parameters applied at each time step as
ground truth labels, thereby eliminating label noise and bias. We demonstrate
that large-scale pre-training with Formula-SED significantly enhances model
accuracy and accelerates training, as evidenced by our results in the DESED
dataset used for DCASE2023 Challenge Task 4. The project page is at
https://yutoshibata07.github.io/Formula-SED/
|
2504.04435 | Bharani Jayakumar | Tatiana Merkulova and Bharani Jayakumar | Evaluation framework for Image Segmentation Algorithms | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | This paper presents a comprehensive evaluation framework for image
segmentation algorithms, encompassing naive methods, machine learning
approaches, and deep learning techniques. We begin by introducing the
fundamental concepts and importance of image segmentation, and the role of
interactive segmentation in enhancing accuracy. A detailed background theory
section explores various segmentation methods, including thresholding, edge
detection, region growing, feature extraction, random forests, support vector
machines, convolutional neural networks, U-Net, and Mask R-CNN. The
implementation and experimental setup are thoroughly described, highlighting
three primary approaches: algorithm assisting user, user assisting algorithm,
and hybrid methods. Evaluation metrics such as Intersection over Union (IoU),
computation time, and user interaction time are employed to measure
performance. A comparative analysis presents detailed results, emphasizing the
strengths, limitations, and trade-offs of each method. The paper concludes with
insights into the practical applicability of these approaches across various
scenarios and outlines future work, focusing on expanding datasets, developing
more representative approaches, integrating real-time feedback, and exploring
weakly supervised and self-supervised learning paradigms to enhance
segmentation accuracy and efficiency. Keywords: Image Segmentation, Interactive
Segmentation, Machine Learning, Deep Learning, Computer Vision
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 10:20:26 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Merkulova",
"Tatiana",
""
],
[
"Jayakumar",
"Bharani",
""
]
] | TITLE: Evaluation framework for Image Segmentation Algorithms
ABSTRACT: This paper presents a comprehensive evaluation framework for image
segmentation algorithms, encompassing naive methods, machine learning
approaches, and deep learning techniques. We begin by introducing the
fundamental concepts and importance of image segmentation, and the role of
interactive segmentation in enhancing accuracy. A detailed background theory
section explores various segmentation methods, including thresholding, edge
detection, region growing, feature extraction, random forests, support vector
machines, convolutional neural networks, U-Net, and Mask R-CNN. The
implementation and experimental setup are thoroughly described, highlighting
three primary approaches: algorithm assisting user, user assisting algorithm,
and hybrid methods. Evaluation metrics such as Intersection over Union (IoU),
computation time, and user interaction time are employed to measure
performance. A comparative analysis presents detailed results, emphasizing the
strengths, limitations, and trade-offs of each method. The paper concludes with
insights into the practical applicability of these approaches across various
scenarios and outlines future work, focusing on expanding datasets, developing
more representative approaches, integrating real-time feedback, and exploring
weakly supervised and self-supervised learning paradigms to enhance
segmentation accuracy and efficiency. Keywords: Image Segmentation, Interactive
Segmentation, Machine Learning, Deep Learning, Computer Vision
|
2504.04443 | Zheyu Chen | Zheyu Chen, Jinfeng Xu, Yutong Wei and Ziyue Peng | Squeeze and Excitation: A Weighted Graph Contrastive Learning for
Collaborative Filtering | Accepted by SIGIR 2025 | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Contrastive Learning (CL) has recently emerged as a powerful technique in
recommendation systems, particularly for its capability to harness
self-supervised signals from perturbed views to mitigate the persistent
challenge of data sparsity. The process of constructing perturbed views of the
user-item bipartite graph and performing contrastive learning between perturbed
views in a graph convolutional network (GCN) is called graph contrastive
learning (GCL), which aims to enhance the robustness of representation
learning. Although existing GCL-based models are effective, the weight
assignment method for perturbed views has not been fully explored. A critical
problem in existing GCL-based models is the irrational allocation of feature
attention. This problem limits the model's ability to effectively leverage
crucial features, resulting in suboptimal performance. To address this, we
propose a Weighted Graph Contrastive Learning framework (WeightedGCL).
Specifically, WeightedGCL applies a robust perturbation strategy, which
perturbs only the view of the final GCN layer. In addition, WeightedGCL
incorporates a squeeze and excitation network (SENet) to dynamically weight the
features of the perturbed views. Our WeightedGCL strengthens the model's focus
on crucial features and reduces the impact of less relevant information.
Extensive experiments on widely used datasets demonstrate that our WeightedGCL
achieves significant accuracy improvements compared to competitive baselines.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 11:30:59 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Chen",
"Zheyu",
""
],
[
"Xu",
"Jinfeng",
""
],
[
"Wei",
"Yutong",
""
],
[
"Peng",
"Ziyue",
""
]
] | TITLE: Squeeze and Excitation: A Weighted Graph Contrastive Learning for
Collaborative Filtering
ABSTRACT: Contrastive Learning (CL) has recently emerged as a powerful technique in
recommendation systems, particularly for its capability to harness
self-supervised signals from perturbed views to mitigate the persistent
challenge of data sparsity. The process of constructing perturbed views of the
user-item bipartite graph and performing contrastive learning between perturbed
views in a graph convolutional network (GCN) is called graph contrastive
learning (GCL), which aims to enhance the robustness of representation
learning. Although existing GCL-based models are effective, the weight
assignment method for perturbed views has not been fully explored. A critical
problem in existing GCL-based models is the irrational allocation of feature
attention. This problem limits the model's ability to effectively leverage
crucial features, resulting in suboptimal performance. To address this, we
propose a Weighted Graph Contrastive Learning framework (WeightedGCL).
Specifically, WeightedGCL applies a robust perturbation strategy, which
perturbs only the view of the final GCN layer. In addition, WeightedGCL
incorporates a squeeze and excitation network (SENet) to dynamically weight the
features of the perturbed views. Our WeightedGCL strengthens the model's focus
on crucial features and reduces the impact of less relevant information.
Extensive experiments on widely used datasets demonstrate that our WeightedGCL
achieves significant accuracy improvements compared to competitive baselines.
|
2504.04452 | Jinfeng Xu | Jinfeng Xu, Zheyu Chen, Wei Wang, Xiping Hu, Sang-Wook Kim, and Edith
C. H. Ngai | COHESION: Composite Graph Convolutional Network with Dual-Stage Fusion
for Multimodal Recommendation | Accepted by CIKM 2024 | null | null | null | cs.IR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent works in multimodal recommendations, which leverage diverse modal
information to address data sparsity and enhance recommendation accuracy, have
garnered considerable interest. Two key processes in multimodal recommendations
are modality fusion and representation learning. Previous approaches in
modality fusion often employ simplistic attentive or pre-defined strategies at
early or late stages, failing to effectively handle irrelevant information
among modalities. In representation learning, prior research has constructed
heterogeneous and homogeneous graph structures encapsulating user-item,
user-user, and item-item relationships to better capture user interests and
item profiles. Modality fusion and representation learning were considered as
two independent processes in previous work. In this paper, we reveal that these
two processes are complementary and can support each other. Specifically,
powerful representation learning enhances modality fusion, while effective
fusion improves representation quality. Stemming from these two processes, we
introduce a COmposite grapH convolutional nEtwork with dual-stage fuSION for
multimodal recommendation, named COHESION. Specifically, it introduces a
dual-stage fusion strategy to reduce the impact of irrelevant information,
refining all modalities using ID embedding in the early stage and fusing their
representations at the late stage. It also proposes a composite graph
convolutional network that utilizes user-item, user-user, and item-item graphs
to extract heterogeneous and homogeneous latent relationships within users and
items. Besides, it introduces a novel adaptive optimization to ensure balanced
and reasonable representations across modalities. Extensive experiments on
three widely used datasets demonstrate the significant superiority of COHESION
over various competitive baselines.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 11:42:49 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Xu",
"Jinfeng",
""
],
[
"Chen",
"Zheyu",
""
],
[
"Wang",
"Wei",
""
],
[
"Hu",
"Xiping",
""
],
[
"Kim",
"Sang-Wook",
""
],
[
"Ngai",
"Edith C. H.",
""
]
] | TITLE: COHESION: Composite Graph Convolutional Network with Dual-Stage Fusion
for Multimodal Recommendation
ABSTRACT: Recent works in multimodal recommendations, which leverage diverse modal
information to address data sparsity and enhance recommendation accuracy, have
garnered considerable interest. Two key processes in multimodal recommendations
are modality fusion and representation learning. Previous approaches in
modality fusion often employ simplistic attentive or pre-defined strategies at
early or late stages, failing to effectively handle irrelevant information
among modalities. In representation learning, prior research has constructed
heterogeneous and homogeneous graph structures encapsulating user-item,
user-user, and item-item relationships to better capture user interests and
item profiles. Modality fusion and representation learning were considered as
two independent processes in previous work. In this paper, we reveal that these
two processes are complementary and can support each other. Specifically,
powerful representation learning enhances modality fusion, while effective
fusion improves representation quality. Stemming from these two processes, we
introduce a COmposite grapH convolutional nEtwork with dual-stage fuSION for
the multimodal recommendation, named COHESION. Specifically, it introduces a
dual-stage fusion strategy to reduce the impact of irrelevant information,
refining all modalities using ID embedding in the early stage and fusing their
representations at the late stage. It also proposes a composite graph
convolutional network that utilizes user-item, user-user, and item-item graphs
to extract heterogeneous and homogeneous latent relationships within users and
items. Besides, it introduces a novel adaptive optimization to ensure balanced
and reasonable representations across modalities. Extensive experiments on
three widely used datasets demonstrate the significant superiority of COHESION
over various competitive baselines.
|
2504.04457 | Alejandro Fontan | Alejandro Fontan, Tobias Fischer, Javier Civera and Michael Milford | VSLAM-LAB: A Comprehensive Framework for Visual SLAM Methods and
Datasets | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Visual Simultaneous Localization and Mapping (VSLAM) research faces
significant challenges due to fragmented toolchains, complex system
configurations, and inconsistent evaluation methodologies. To address these
issues, we present VSLAM-LAB, a unified framework designed to streamline the
development, evaluation, and deployment of VSLAM systems. VSLAM-LAB simplifies
the entire workflow by enabling seamless compilation and configuration of VSLAM
algorithms, automated dataset downloading and preprocessing, and standardized
experiment design, execution, and evaluation--all accessible through a single
command-line interface. The framework supports a wide range of VSLAM systems
and datasets, offering broad compatibility and extendability while promoting
reproducibility through consistent evaluation metrics and analysis tools. By
reducing implementation complexity and minimizing configuration overhead,
VSLAM-LAB empowers researchers to focus on advancing VSLAM methodologies and
accelerates progress toward scalable, real-world solutions. We demonstrate the
ease with which user-relevant benchmarks can be created: here, we introduce
difficulty-level-based categories, but one could envision environment-specific
or condition-specific categories.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 12:02:19 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Fontan",
"Alejandro",
""
],
[
"Fischer",
"Tobias",
""
],
[
"Civera",
"Javier",
""
],
[
"Milford",
"Michael",
""
]
] | TITLE: VSLAM-LAB: A Comprehensive Framework for Visual SLAM Methods and
Datasets
ABSTRACT: Visual Simultaneous Localization and Mapping (VSLAM) research faces
significant challenges due to fragmented toolchains, complex system
configurations, and inconsistent evaluation methodologies. To address these
issues, we present VSLAM-LAB, a unified framework designed to streamline the
development, evaluation, and deployment of VSLAM systems. VSLAM-LAB simplifies
the entire workflow by enabling seamless compilation and configuration of VSLAM
algorithms, automated dataset downloading and preprocessing, and standardized
experiment design, execution, and evaluation--all accessible through a single
command-line interface. The framework supports a wide range of VSLAM systems
and datasets, offering broad compatibility and extendability while promoting
reproducibility through consistent evaluation metrics and analysis tools. By
reducing implementation complexity and minimizing configuration overhead,
VSLAM-LAB empowers researchers to focus on advancing VSLAM methodologies and
accelerates progress toward scalable, real-world solutions. We demonstrate the
ease with which user-relevant benchmarks can be created: here, we introduce
difficulty-level-based categories, but one could envision environment-specific
or condition-specific categories.
|
2504.04458 | Bashir Alam | Bashir Alam, Masa Cirkovic, Mete Harun Akcay, Md Kaf Shahrier,
Sebastien Lafond, Hergys Rexha, Kurt Benke, Sepinoud Azimi, and Janan Arslan | CALF: A Conditionally Adaptive Loss Function to Mitigate
Class-Imbalanced Segmentation | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Imbalanced datasets pose a considerable challenge in training deep learning
(DL) models for medical diagnostics, particularly for segmentation tasks.
Imbalance may be associated with annotation quality, limited annotated datasets,
rare cases, or small-scale regions of interest (ROIs). These conditions
adversely affect model training and performance, leading to segmentation
boundaries which deviate from the true ROIs. Traditional loss functions, such
as Binary Cross Entropy, replicate annotation biases and limit model
generalization. We propose a novel, statistically driven, conditionally
adaptive loss function (CALF) tailored to accommodate the conditions of
imbalanced datasets in DL training. It employs a data-driven methodology by
estimating imbalance severity using statistical methods of skewness and
kurtosis, then applies an appropriate transformation to balance the training
dataset while preserving data heterogeneity. This transformative approach
integrates a multifaceted process, encompassing preprocessing, dataset
filtering, and dynamic loss selection to achieve optimal outcomes. We benchmark
our method against conventional loss functions using qualitative and
quantitative evaluations. Experiments using large-scale open-source datasets
(i.e., UPENN-GBM, UCSF, LGG, and BraTS) validate our approach, demonstrating
substantial segmentation improvements. Code availability:
https://anonymous.4open.science/r/MICCAI-Submission-43F9/.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 12:03:33 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Alam",
"Bashir",
""
],
[
"Cirkovic",
"Masa",
""
],
[
"Akcay",
"Mete Harun",
""
],
[
"Shahrier",
"Md Kaf",
""
],
[
"Lafond",
"Sebastien",
""
],
[
"Rexha",
"Hergys",
""
],
[
"Benke",
"Kurt",
""
],
[
"Azimi",
"Sepinoud",
""
],
[
"Arslan",
"Janan",
""
]
] | TITLE: CALF: A Conditionally Adaptive Loss Function to Mitigate
Class-Imbalanced Segmentation
ABSTRACT: Imbalanced datasets pose a considerable challenge in training deep learning
(DL) models for medical diagnostics, particularly for segmentation tasks.
Imbalance may be associated with annotation quality, limited annotated datasets,
rare cases, or small-scale regions of interest (ROIs). These conditions
adversely affect model training and performance, leading to segmentation
boundaries which deviate from the true ROIs. Traditional loss functions, such
as Binary Cross Entropy, replicate annotation biases and limit model
generalization. We propose a novel, statistically driven, conditionally
adaptive loss function (CALF) tailored to accommodate the conditions of
imbalanced datasets in DL training. It employs a data-driven methodology by
estimating imbalance severity using statistical methods of skewness and
kurtosis, then applies an appropriate transformation to balance the training
dataset while preserving data heterogeneity. This transformative approach
integrates a multifaceted process, encompassing preprocessing, dataset
filtering, and dynamic loss selection to achieve optimal outcomes. We benchmark
our method against conventional loss functions using qualitative and
quantitative evaluations. Experiments using large-scale open-source datasets
(i.e., UPENN-GBM, UCSF, LGG, and BraTS) validate our approach, demonstrating
substantial segmentation improvements. Code availability:
https://anonymous.4open.science/r/MICCAI-Submission-43F9/.
|
2504.04463 | Guandong Li | Guandong Li, Mengxia Ye | Spatial-Geometry Enhanced 3D Dynamic Snake Convolutional Neural Network
for Hyperspectral Image Classification | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Deep neural networks face several challenges in hyperspectral image
classification, including complex and sparse ground object distributions, small
clustered structures, and elongated multi-branch features that often lead to
missing detections. To better adapt to ground object distributions and achieve
adaptive dynamic feature responses while skipping redundant information, this
paper proposes a Spatial-Geometry Enhanced 3D Dynamic Snake Network (SG-DSCNet)
based on an improved 3D-DenseNet model. The network employs Dynamic Snake
Convolution (DSCConv), which introduces deformable offsets to enhance kernel
flexibility through constrained self-learning, thereby improving regional
perception of ground objects. Additionally, we propose a multi-view feature
fusion strategy that generates multiple morphological kernel templates from
DSCConv to observe target structures from different perspectives and achieve
efficient feature fusion through summarizing key characteristics. This dynamic
approach enables the model to focus more flexibly on critical spatial
structures when processing different regions, rather than relying on fixed
receptive fields of single static kernels. The DSC module enhances model
representation capability through dynamic kernel aggregation without increasing
network depth or width. Experimental results demonstrate superior performance
on the IN, UP, and KSC datasets, outperforming mainstream hyperspectral
classification methods.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 12:21:39 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Li",
"Guandong",
""
],
[
"Ye",
"Mengxia",
""
]
] | TITLE: Spatial-Geometry Enhanced 3D Dynamic Snake Convolutional Neural Network
for Hyperspectral Image Classification
ABSTRACT: Deep neural networks face several challenges in hyperspectral image
classification, including complex and sparse ground object distributions, small
clustered structures, and elongated multi-branch features that often lead to
missing detections. To better adapt to ground object distributions and achieve
adaptive dynamic feature responses while skipping redundant information, this
paper proposes a Spatial-Geometry Enhanced 3D Dynamic Snake Network (SG-DSCNet)
based on an improved 3D-DenseNet model. The network employs Dynamic Snake
Convolution (DSCConv), which introduces deformable offsets to enhance kernel
flexibility through constrained self-learning, thereby improving regional
perception of ground objects. Additionally, we propose a multi-view feature
fusion strategy that generates multiple morphological kernel templates from
DSCConv to observe target structures from different perspectives and achieve
efficient feature fusion through summarizing key characteristics. This dynamic
approach enables the model to focus more flexibly on critical spatial
structures when processing different regions, rather than relying on fixed
receptive fields of single static kernels. The DSC module enhances model
representation capability through dynamic kernel aggregation without increasing
network depth or width. Experimental results demonstrate superior performance
on the IN, UP, and KSC datasets, outperforming mainstream hyperspectral
classification methods.
|
2504.04473 | Plaban Kumar Bhowmick | Archana Sahu, Plaban Kumar Bhowmick | Directed Graph-alignment Approach for Identification of Gaps in Short
Answers | 30 pages, 11 figures | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a method for automatically identifying missing
items, known as gaps, in student answers by comparing them against the
corresponding model answer/reference answers. The gaps can be identified at
word, phrase or sentence level. The identified gaps are useful in providing
feedback to the students for formative assessment. The problem of gap
identification has been modelled as an alignment of a pair of directed graphs
representing a student answer and the corresponding model answer for a given
question. To validate the proposed approach, gap-annotated student answers have
been developed considering answers from three widely known datasets in the
short answer grading domain, namely, University of North Texas (UNT),
SciEntsBank, and Beetle; this gap-annotated student answers' dataset is
available at: https://github.com/sahuarchana7/gaps-answers-dataset. Evaluation
metrics used in the traditional machine learning tasks have been adopted to
evaluate the task of gap identification. Though performance of the proposed
approach varies across the datasets and the types of the answers, overall the
performance is observed to be promising.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 13:04:28 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Sahu",
"Archana",
""
],
[
"Bhowmick",
"Plaban Kumar",
""
]
] | TITLE: Directed Graph-alignment Approach for Identification of Gaps in Short
Answers
ABSTRACT: In this paper, we present a method for automatically identifying
missing items, known as gaps, in student answers by comparing them against the
corresponding model answer/reference answers. The gaps can be identified at
word, phrase or sentence level. The identified gaps are useful in providing
feedback to the students for formative assessment. The problem of gap
identification has been modelled as an alignment of a pair of directed graphs
representing a student answer and the corresponding model answer for a given
question. To validate the proposed approach, gap-annotated student answers have
been developed considering answers from three widely known datasets in the
short answer grading domain, namely, University of North Texas (UNT),
SciEntsBank, and Beetle; this gap-annotated student answers' dataset is
available at: https://github.com/sahuarchana7/gaps-answers-dataset. Evaluation
metrics used in the traditional machine learning tasks have been adopted to
evaluate the task of gap identification. Though performance of the proposed
approach varies across the datasets and the types of the answers, overall the
performance is observed to be promising.
|
2504.04482 | Mengx Dai | Mengxia Dai, Wenqian Luo, Tianyang Li | Statistical Guarantees Of False Discovery Rate In Medical Instance
Segmentation Tasks Based on Conformal Risk Control | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Instance segmentation plays a pivotal role in medical image analysis by
enabling precise localization and delineation of lesions, tumors, and
anatomical structures. Although deep learning models such as Mask R-CNN and
BlendMask have achieved remarkable progress, their application in high-risk
medical scenarios remains constrained by confidence calibration issues, which
may lead to misdiagnosis. To address this challenge, we propose a robust
quality control framework based on conformal prediction theory. This framework
innovatively constructs a risk-aware dynamic threshold mechanism that
adaptively adjusts segmentation decision boundaries according to clinical
requirements. Specifically, we design a \textbf{calibration-aware loss function}
that dynamically tunes the segmentation threshold based on a user-defined risk
level $\alpha$. Utilizing exchangeable calibration data, this method ensures
that the expected FNR or FDR on test data remains below $\alpha$ with high
probability. The framework maintains compatibility with mainstream segmentation
models (e.g., Mask R-CNN, BlendMask+ResNet-50-FPN) and datasets (PASCAL VOC
format) without requiring architectural modifications. Empirical results
demonstrate that we rigorously bound the FDR metric marginally over the test
set via our developed calibration framework.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 13:31:19 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Dai",
"Mengxia",
""
],
[
"Luo",
"Wenqian",
""
],
[
"Li",
"Tianyang",
""
]
] | TITLE: Statistical Guarantees Of False Discovery Rate In Medical Instance
Segmentation Tasks Based on Conformal Risk Control
ABSTRACT: Instance segmentation plays a pivotal role in medical image analysis by
enabling precise localization and delineation of lesions, tumors, and
anatomical structures. Although deep learning models such as Mask R-CNN and
BlendMask have achieved remarkable progress, their application in high-risk
medical scenarios remains constrained by confidence calibration issues, which
may lead to misdiagnosis. To address this challenge, we propose a robust
quality control framework based on conformal prediction theory. This framework
innovatively constructs a risk-aware dynamic threshold mechanism that
adaptively adjusts segmentation decision boundaries according to clinical
requirements. Specifically, we design a \textbf{calibration-aware loss function}
that dynamically tunes the segmentation threshold based on a user-defined risk
level $\alpha$. Utilizing exchangeable calibration data, this method ensures
that the expected FNR or FDR on test data remains below $\alpha$ with high
probability. The framework maintains compatibility with mainstream segmentation
models (e.g., Mask R-CNN, BlendMask+ResNet-50-FPN) and datasets (PASCAL VOC
format) without requiring architectural modifications. Empirical results
demonstrate that we rigorously bound the FDR metric marginally over the test
set via our developed calibration framework.
|
2504.04494 | Marin Ben\v{c}evi\'c | Marin Ben\v{c}evi\'c, Robert \v{S}ojo, Irena Gali\'c | Skin Color Measurement from Dermatoscopic Images: An Evaluation on a
Synthetic Dataset | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This paper presents a comprehensive evaluation of skin color measurement
methods from dermatoscopic images using a synthetic dataset (S-SYNTH) with
controlled ground-truth melanin content, lesion shapes, hair models, and 18
distinct lighting conditions. This allows for rigorous assessment of the
robustness and invariance to lighting conditions. We assess four classes of
image colorimetry approaches: segmentation-based, patch-based, color
quantization, and neural networks. We use these methods to estimate the
Individual Typology Angle (ITA) and Fitzpatrick types from dermatoscopic
images. Our results show that segmentation-based and color quantization methods
yield robust, lighting-invariant estimates, whereas patch-based approaches
exhibit significant lighting-dependent biases that require calibration.
Furthermore, neural network models, particularly when combined with heavy
blurring to reduce overfitting, can provide light-invariant Fitzpatrick
predictions, although their generalization to real-world images remains
unverified. We conclude with practical recommendations for designing fair and
reliable skin color estimation methods.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 13:57:34 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Benčević",
"Marin",
""
],
[
"Šojo",
"Robert",
""
],
[
"Galić",
"Irena",
""
]
] | TITLE: Skin Color Measurement from Dermatoscopic Images: An Evaluation on a
Synthetic Dataset
ABSTRACT: This paper presents a comprehensive evaluation of skin color measurement
methods from dermatoscopic images using a synthetic dataset (S-SYNTH) with
controlled ground-truth melanin content, lesion shapes, hair models, and 18
distinct lighting conditions. This allows for rigorous assessment of the
robustness and invariance to lighting conditions. We assess four classes of
image colorimetry approaches: segmentation-based, patch-based, color
quantization, and neural networks. We use these methods to estimate the
Individual Typology Angle (ITA) and Fitzpatrick types from dermatoscopic
images. Our results show that segmentation-based and color quantization methods
yield robust, lighting-invariant estimates, whereas patch-based approaches
exhibit significant lighting-dependent biases that require calibration.
Furthermore, neural network models, particularly when combined with heavy
blurring to reduce overfitting, can provide light-invariant Fitzpatrick
predictions, although their generalization to real-world images remains
unverified. We conclude with practical recommendations for designing fair and
reliable skin color estimation methods.
|
2504.04497 | Wang Yuqing | Yuqing Wang, Yan Wang, Hailiang Tang, Xiaoji Niu | SELC: Self-Supervised Efficient Local Correspondence Learning for Low
Quality Images | 8 pages, 4 figures | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate and stable feature matching is critical for computer vision tasks,
particularly in applications such as Simultaneous Localization and Mapping
(SLAM). While recent learning-based feature matching methods have demonstrated
promising performance in challenging spatiotemporal scenarios, they still face
inherent trade-offs between accuracy and computational efficiency in specific
settings. In this paper, we propose a lightweight feature matching network
designed to establish sparse, stable, and consistent correspondence between
multiple frames. The proposed method eliminates the dependency on manual
annotations during training and mitigates feature drift through a hybrid
self-supervised paradigm. Extensive experiments validate three key advantages:
(1) Our method operates without dependency on external prior knowledge and
seamlessly incorporates its hybrid training mechanism into original datasets.
(2) Benchmarked against state-of-the-art deep learning-based methods, our
approach maintains equivalent computational efficiency at low-resolution scales
while achieving a 2-10x improvement in computational efficiency for
high-resolution inputs. (3) Comparative evaluations demonstrate that the
proposed hybrid self-supervised scheme effectively mitigates feature drift in
long-term tracking while maintaining consistent representation across image
sequences.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 14:14:43 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wang",
"Yuqing",
""
],
[
"Wang",
"Yan",
""
],
[
"Tang",
"Hailiang",
""
],
[
"Niu",
"Xiaoji",
""
]
] | TITLE: SELC: Self-Supervised Efficient Local Correspondence Learning for Low
Quality Images
ABSTRACT: Accurate and stable feature matching is critical for computer vision tasks,
particularly in applications such as Simultaneous Localization and Mapping
(SLAM). While recent learning-based feature matching methods have demonstrated
promising performance in challenging spatiotemporal scenarios, they still face
inherent trade-offs between accuracy and computational efficiency in specific
settings. In this paper, we propose a lightweight feature matching network
designed to establish sparse, stable, and consistent correspondence between
multiple frames. The proposed method eliminates the dependency on manual
annotations during training and mitigates feature drift through a hybrid
self-supervised paradigm. Extensive experiments validate three key advantages:
(1) Our method operates without dependency on external prior knowledge and
seamlessly incorporates its hybrid training mechanism into original datasets.
(2) Benchmarked against state-of-the-art deep learning-based methods, our
approach maintains equivalent computational efficiency at low-resolution scales
while achieving a 2-10x improvement in computational efficiency for
high-resolution inputs. (3) Comparative evaluations demonstrate that the
proposed hybrid self-supervised scheme effectively mitigates feature drift in
long-term tracking while maintaining consistent representation across image
sequences.
|
2504.04506 | Netta Shafir | Netta Shafir, Guy Hacohen, Daphna Weinshall | Active Learning with a Noisy Annotator | null | null | null | null | cs.LG cs.CV | http://creativecommons.org/licenses/by/4.0/ | Active Learning (AL) aims to reduce annotation costs by strategically
selecting the most informative samples for labeling. However, most active
learning methods struggle in the low-budget regime where only a few labeled
examples are available. This issue becomes even more pronounced when annotators
provide noisy labels. A common AL approach for the low- and mid-budget regimes
focuses on maximizing the coverage of the labeled set across the entire
dataset. We propose a novel framework called Noise-Aware Active Sampling (NAS)
that extends existing greedy, coverage-based active learning strategies to
handle noisy annotations. NAS identifies regions that remain uncovered due to
the selection of noisy representatives and enables resampling from these areas.
We introduce a simple yet effective noise filtering approach suitable for the
low-budget regime, which leverages the inner mechanism of NAS and can be
applied for noise filtering before model training. On multiple computer vision
benchmarks, including CIFAR100 and ImageNet subsets, NAS significantly improves
performance for standard active learning methods across different noise types
and rates.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 14:27:27 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Shafir",
"Netta",
""
],
[
"Hacohen",
"Guy",
""
],
[
"Weinshall",
"Daphna",
""
]
] | TITLE: Active Learning with a Noisy Annotator
ABSTRACT: Active Learning (AL) aims to reduce annotation costs by strategically
selecting the most informative samples for labeling. However, most active
learning methods struggle in the low-budget regime where only a few labeled
examples are available. This issue becomes even more pronounced when annotators
provide noisy labels. A common AL approach for the low- and mid-budget regimes
focuses on maximizing the coverage of the labeled set across the entire
dataset. We propose a novel framework called Noise-Aware Active Sampling (NAS)
that extends existing greedy, coverage-based active learning strategies to
handle noisy annotations. NAS identifies regions that remain uncovered due to
the selection of noisy representatives and enables resampling from these areas.
We introduce a simple yet effective noise filtering approach suitable for the
low-budget regime, which leverages the inner mechanism of NAS and can be
applied for noise filtering before model training. On multiple computer vision
benchmarks, including CIFAR100 and ImageNet subsets, NAS significantly improves
performance for standard active learning methods across different noise types
and rates.
|
2504.04510 | Shijian Wang | Shijian Wang, Linxin Song, Ryotaro Shimizu, Masayuki Goto, Hanqian Wu | Attributed Synthetic Data Generation for Zero-shot Domain-specific Image
Classification | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Zero-shot domain-specific image classification, which requires classifying
real images without ground-truth in-domain training examples, is challenging.
Recent research has leveraged knowledge from texts with a text-to-image model to generate in-domain
training images in zero-shot scenarios. However, existing methods heavily rely
on simple prompt strategies, limiting the diversity of synthetic training
images, thus leading to inferior performance compared to real images. In this
paper, we propose AttrSyn, which leverages large language models to generate
attributed prompts. These prompts allow for the generation of more diverse
attributed synthetic images. Experiments for zero-shot domain-specific image
classification on two fine-grained datasets show that training with synthetic
images generated by AttrSyn significantly outperforms CLIP's zero-shot
classification under most situations and consistently surpasses simple prompt
strategies.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 14:54:10 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wang",
"Shijian",
""
],
[
"Song",
"Linxin",
""
],
[
"Shimizu",
"Ryotaro",
""
],
[
"Goto",
"Masayuki",
""
],
[
"Wu",
"Hanqian",
""
]
] | TITLE: Attributed Synthetic Data Generation for Zero-shot Domain-specific Image
Classification
ABSTRACT: Zero-shot domain-specific image classification, which requires
classifying real images without ground-truth in-domain training examples, is
challenging. Recent research has leveraged knowledge from texts with a
text-to-image model to generate in-domain
training images in zero-shot scenarios. However, existing methods heavily rely
on simple prompt strategies, limiting the diversity of synthetic training
images, thus leading to inferior performance compared to real images. In this
paper, we propose AttrSyn, which leverages large language models to generate
attributed prompts. These prompts allow for the generation of more diverse
attributed synthetic images. Experiments for zero-shot domain-specific image
classification on two fine-grained datasets show that training with synthetic
images generated by AttrSyn significantly outperforms CLIP's zero-shot
classification under most situations and consistently surpasses simple prompt
strategies.
|
2504.04517 | Jiancheng Pan | Jiancheng Pan, Yanxing Liu, Xiao He, Long Peng, Jiahao Li, Yuze Sun,
Xiaomeng Huang | Enhance Then Search: An Augmentation-Search Strategy with Foundation
Models for Cross-Domain Few-Shot Object Detection | 9 pages, 6 figures | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Foundation models pretrained on extensive datasets, such as GroundingDINO and
LAE-DINO, have performed remarkably in the cross-domain few-shot object
detection (CD-FSOD) task. Through rigorous few-shot training, we found that the
integration of image-based data augmentation techniques and a grid-based
sub-domain search strategy significantly enhances the performance of these
foundation models. Building upon GroundingDINO, we employed several widely used
image augmentation methods and established optimization objectives to
effectively navigate the expansive domain space in search of optimal
sub-domains. This approach facilitates efficient few-shot object detection and
introduces a way to solve the CD-FSOD problem by efficiently searching
for the optimal parameter configuration from the foundation model. Our findings
substantially advance the practical deployment of vision-language models in
data-scarce environments, offering critical insights into optimizing their
cross-domain generalization capabilities without labor-intensive retraining.
Code is available at https://github.com/jaychempan/ETS.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 15:30:35 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Pan",
"Jiancheng",
""
],
[
"Liu",
"Yanxing",
""
],
[
"He",
"Xiao",
""
],
[
"Peng",
"Long",
""
],
[
"Li",
"Jiahao",
""
],
[
"Sun",
"Yuze",
""
],
[
"Huang",
"Xiaomeng",
""
]
] | TITLE: Enhance Then Search: An Augmentation-Search Strategy with Foundation
Models for Cross-Domain Few-Shot Object Detection
ABSTRACT: Foundation models pretrained on extensive datasets, such as GroundingDINO and
LAE-DINO, have performed remarkably in the cross-domain few-shot object
detection (CD-FSOD) task. Through rigorous few-shot training, we found that the
integration of image-based data augmentation techniques and a grid-based
sub-domain search strategy significantly enhances the performance of these
foundation models. Building upon GroundingDINO, we employed several widely used
image augmentation methods and established optimization objectives to
effectively navigate the expansive domain space in search of optimal
sub-domains. This approach facilitates efficient few-shot object detection and
introduces a way to solve the CD-FSOD problem by efficiently searching
for the optimal parameter configuration from the foundation model. Our findings
substantially advance the practical deployment of vision-language models in
data-scarce environments, offering critical insights into optimizing their
cross-domain generalization capabilities without labor-intensive retraining.
Code is available at https://github.com/jaychempan/ETS.
|
2504.04519 | Junjie Jiang | Junjie Jiang, Zelin Wang, Manqi Zhao, Yin Li, DongSheng Jiang | SAM2MOT: A Novel Paradigm of Multi-Object Tracking by Segmentation | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Segment Anything 2 (SAM2) enables robust single-object tracking using
segmentation. To extend this to multi-object tracking (MOT), we propose
SAM2MOT, introducing a novel Tracking by Segmentation paradigm. Unlike Tracking
by Detection or Tracking by Query, SAM2MOT directly generates tracking boxes
from segmentation masks, reducing reliance on detection accuracy. SAM2MOT has
two key advantages: zero-shot generalization, allowing it to work across
datasets without fine-tuning, and strong object association, inherited from
SAM2. To further improve performance, we integrate a trajectory manager system
for precise object addition and removal, and a cross-object interaction module
to handle occlusions. Experiments on DanceTrack, UAVDT, and BDD100K show
state-of-the-art results. Notably, SAM2MOT outperforms existing methods on
DanceTrack by +2.1 HOTA and +4.5 IDF1, highlighting its effectiveness in MOT.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 15:32:08 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Jiang",
"Junjie",
""
],
[
"Wang",
"Zelin",
""
],
[
"Zhao",
"Manqi",
""
],
[
"Li",
"Yin",
""
],
[
"Jiang",
"DongSheng",
""
]
] | TITLE: SAM2MOT: A Novel Paradigm of Multi-Object Tracking by Segmentation
ABSTRACT: Segment Anything 2 (SAM2) enables robust single-object tracking using
segmentation. To extend this to multi-object tracking (MOT), we propose
SAM2MOT, introducing a novel Tracking by Segmentation paradigm. Unlike Tracking
by Detection or Tracking by Query, SAM2MOT directly generates tracking boxes
from segmentation masks, reducing reliance on detection accuracy. SAM2MOT has
two key advantages: zero-shot generalization, allowing it to work across
datasets without fine-tuning, and strong object association, inherited from
SAM2. To further improve performance, we integrate a trajectory manager system
for precise object addition and removal, and a cross-object interaction module
to handle occlusions. Experiments on DanceTrack, UAVDT, and BDD100K show
state-of-the-art results. Notably, SAM2MOT outperforms existing methods on
DanceTrack by +2.1 HOTA and +4.5 IDF1, highlighting its effectiveness in MOT.
|
2504.04532 | Moinak Bhattacharya | Moinak Bhattacharya, Saumya Gupta, Annie Singh, Chao Chen, Gagandeep
Singh, Prateek Prasanna | BrainMRDiff: A Diffusion Model for Anatomically Consistent Brain MRI
Synthesis | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Accurate brain tumor diagnosis relies on the assessment of multiple Magnetic
Resonance Imaging (MRI) sequences. However, in clinical practice, the
acquisition of certain sequences may be affected by factors like motion
artifacts or contrast agent contraindications, leading to suboptimal outcomes,
such as poor image quality. This can then affect image interpretation by
radiologists. Synthesizing high quality MRI sequences has thus become a
critical research focus. Though recent advancements in controllable generative
AI have facilitated the synthesis of diagnostic quality MRI, ensuring
anatomical accuracy remains a significant challenge. Preserving critical
structural relationships between different anatomical regions is essential, as
even minor structural or topological inconsistencies can compromise diagnostic
validity. In this work, we propose BrainMRDiff, a novel topology-preserving,
anatomy-guided diffusion model for synthesizing brain MRI, leveraging brain and
tumor anatomies as conditioning inputs. To achieve this, we introduce two key
modules: Tumor+Structure Aggregation (TSA) and Topology-Guided Anatomy
Preservation (TGAP). TSA integrates diverse anatomical structures with tumor
information, forming a comprehensive conditioning mechanism for the diffusion
process. TGAP enforces topological consistency during the reverse denoising
diffusion process; both modules ensure that the generated image respects
anatomical integrity. Experimental results demonstrate that BrainMRDiff
surpasses existing baselines, achieving performance improvements of 23.33% on
the BraTS-AG dataset and 33.33% on the BraTS-Met dataset. Code will be made
publicly available soon.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 16:16:50 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Bhattacharya",
"Moinak",
""
],
[
"Gupta",
"Saumya",
""
],
[
"Singh",
"Annie",
""
],
[
"Chen",
"Chao",
""
],
[
"Singh",
"Gagandeep",
""
],
[
"Prasanna",
"Prateek",
""
]
] | TITLE: BrainMRDiff: A Diffusion Model for Anatomically Consistent Brain MRI
Synthesis
ABSTRACT: Accurate brain tumor diagnosis relies on the assessment of multiple Magnetic
Resonance Imaging (MRI) sequences. However, in clinical practice, the
acquisition of certain sequences may be affected by factors like motion
artifacts or contrast agent contraindications, leading to suboptimal outcomes,
such as poor image quality. This can then affect image interpretation by
radiologists. Synthesizing high quality MRI sequences has thus become a
critical research focus. Though recent advancements in controllable generative
AI have facilitated the synthesis of diagnostic quality MRI, ensuring
anatomical accuracy remains a significant challenge. Preserving critical
structural relationships between different anatomical regions is essential, as
even minor structural or topological inconsistencies can compromise diagnostic
validity. In this work, we propose BrainMRDiff, a novel topology-preserving,
anatomy-guided diffusion model for synthesizing brain MRI, leveraging brain and
tumor anatomies as conditioning inputs. To achieve this, we introduce two key
modules: Tumor+Structure Aggregation (TSA) and Topology-Guided Anatomy
Preservation (TGAP). TSA integrates diverse anatomical structures with tumor
information, forming a comprehensive conditioning mechanism for the diffusion
process. TGAP enforces topological consistency during the reverse denoising
diffusion process; both modules ensure that the generated image respects
anatomical integrity. Experimental results demonstrate that BrainMRDiff
surpasses existing baselines, achieving performance improvements of 23.33% on
the BraTS-AG dataset and 33.33% on the BraTS-Met dataset. Code will be made
publicly available soon.
|
2504.04533 | Han Wang | Han Wang and Donghe Chen and Tengjie Zheng and Lin Cheng and Shengping
Gong | Confidence-Aware Learning Optimal Terminal Guidance via Gaussian Process
Regression | null | null | null | null | eess.SY cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Modern aerospace guidance systems demand rigorous constraint satisfaction,
optimal performance, and computational efficiency. Traditional analytical
methods struggle to satisfy these requirements simultaneously. While
data-driven methods have shown promise in learning optimal guidance strategies,
challenges persist in generating well-distributed optimal datasets and
ensuring the reliability and trustworthiness of learned strategies. This paper
presents a confidence-aware learning framework that addresses these
limitations. First, a region-controllable optimal data generation method is
proposed leveraging Hamiltonian state transition matrices, enabling efficient
generation of optimal trajectories with a specified data distribution. Then, to
obtain a lightweight and effective dataset for efficient strategy learning, an
error-distribution-smoothing method is incorporated to employ data filtering,
which reduces dataset size by almost 90% while preserving prediction accuracy.
To assess the operational domain of the learned strategy, a confidence-aware
learning guidance strategy is proposed based on Gaussian process regression,
achieving constraint satisfaction even beyond training distributions. Numerical
simulations validate the effectiveness and reliability of the proposed learning
framework in terms of data generation, data filtering and strategy learning.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 16:17:29 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wang",
"Han",
""
],
[
"Chen",
"Donghe",
""
],
[
"Zheng",
"Tengjie",
""
],
[
"Cheng",
"Lin",
""
],
[
"Gong",
"Shengping",
""
]
] | TITLE: Confidence-Aware Learning Optimal Terminal Guidance via Gaussian Process
Regression
ABSTRACT: Modern aerospace guidance systems demand rigorous constraint satisfaction,
optimal performance, and computational efficiency. Traditional analytical
methods struggle to satisfy these requirements simultaneously. While
data-driven methods have shown promise in learning optimal guidance strategies,
challenges persist in generating well-distributed optimal datasets and
ensuring the reliability and trustworthiness of learned strategies. This paper
presents a confidence-aware learning framework that addresses these
limitations. First, a region-controllable optimal data generation method is
proposed leveraging Hamiltonian state transition matrices, enabling efficient
generation of optimal trajectories with a specified data distribution. Then, to
obtain a lightweight and effective dataset for efficient strategy learning, an
error-distribution-smoothing method is incorporated to employ data filtering,
which reduces dataset size by almost 90% while preserving prediction accuracy.
To assess the operational domain of the learned strategy, a confidence-aware
learning guidance strategy is proposed based on Gaussian process regression,
achieving constraint satisfaction even beyond training distributions. Numerical
simulations validate the effectiveness and reliability of the proposed learning
framework in terms of data generation, data filtering and strategy learning.
|
2504.04534 | Anantharaman Janakiraman | Anantharaman Janakiraman, Behnaz Ghoraani | An Empirical Comparison of Text Summarization: A Multi-Dimensional
Evaluation of Large Language Models | null | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Text summarization is crucial for mitigating information overload across
domains like journalism, medicine, and business. This research evaluates
summarization performance across 17 large language models (OpenAI, Google,
Anthropic, open-source) using a novel multi-dimensional framework. We assessed
models on seven diverse datasets (BigPatent, BillSum, CNN/DailyMail, PubMed,
SAMSum, WikiHow, XSum) at three output lengths (50, 100, 150 tokens) using
metrics for factual consistency, semantic similarity, lexical overlap, and
human-like quality, while also considering efficiency factors. Our findings
reveal significant performance differences, with specific models excelling in
factual accuracy (deepseek-v3), human-like quality (claude-3-5-sonnet), and
processing efficiency/cost-effectiveness (gemini-1.5-flash, gemini-2.0-flash).
Performance varies dramatically by dataset, with models struggling on technical
domains but performing well on conversational content. We identified a critical
tension between factual consistency (best at 50 tokens) and perceived quality
(best at 150 tokens). Our analysis provides evidence-based recommendations for
different use cases, from high-stakes applications requiring factual accuracy
to resource-constrained environments needing efficient processing. This
comprehensive approach enhances evaluation methodology by integrating quality
metrics with operational considerations, incorporating trade-offs between
accuracy, efficiency, and cost-effectiveness to guide model selection for
specific applications.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 16:24:22 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Janakiraman",
"Anantharaman",
""
],
[
"Ghoraani",
"Behnaz",
""
]
] | TITLE: An Empirical Comparison of Text Summarization: A Multi-Dimensional
Evaluation of Large Language Models
ABSTRACT: Text summarization is crucial for mitigating information overload across
domains like journalism, medicine, and business. This research evaluates
summarization performance across 17 large language models (OpenAI, Google,
Anthropic, open-source) using a novel multi-dimensional framework. We assessed
models on seven diverse datasets (BigPatent, BillSum, CNN/DailyMail, PubMed,
SAMSum, WikiHow, XSum) at three output lengths (50, 100, 150 tokens) using
metrics for factual consistency, semantic similarity, lexical overlap, and
human-like quality, while also considering efficiency factors. Our findings
reveal significant performance differences, with specific models excelling in
factual accuracy (deepseek-v3), human-like quality (claude-3-5-sonnet), and
processing efficiency/cost-effectiveness (gemini-1.5-flash, gemini-2.0-flash).
Performance varies dramatically by dataset, with models struggling on technical
domains but performing well on conversational content. We identified a critical
tension between factual consistency (best at 50 tokens) and perceived quality
(best at 150 tokens). Our analysis provides evidence-based recommendations for
different use cases, from high-stakes applications requiring factual accuracy
to resource-constrained environments needing efficient processing. This
comprehensive approach enhances evaluation methodology by integrating quality
metrics with operational considerations, incorporating trade-offs between
accuracy, efficiency, and cost-effectiveness to guide model selection for
specific applications.
|
2504.04540 | Weichen Zhang | Weichen Zhang, Ruiying Peng, Chen Gao, Jianjie Fang, Xin Zeng, Kaiyuan
Li, Ziyou Wang, Jinqiang Cui, Xin Wang, Xinlei Chen, Yong Li | The Point, the Vision and the Text: Does Point Cloud Boost Spatial
Reasoning of Large Language Models? | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | 3D Large Language Models (LLMs) leveraging spatial information in point
clouds for 3D spatial reasoning attract great attention. Despite some promising
results, the role of point clouds in 3D spatial reasoning remains
under-explored. In this work, we comprehensively evaluate and analyze these
models to answer the research question: \textit{Does point cloud truly boost
the spatial reasoning capacities of 3D LLMs?} We first evaluate the spatial
reasoning capacity of LLMs with different input modalities by replacing the
point cloud with the visual and text counterparts. We then propose a novel 3D
QA (Question-answering) benchmark, ScanReQA, that comprehensively evaluates
models' understanding of binary spatial relationships. Our findings reveal
several critical insights: 1) LLMs without point input can achieve
competitive performance even in a zero-shot manner; 2) existing 3D LLMs
struggle to comprehend the binary spatial relationships; 3) 3D LLMs exhibit
limitations in exploiting the structural coordinates in point clouds for
fine-grained spatial reasoning. We believe these conclusions can guide the
next steps for 3D LLMs and also offer insights for foundation models in other
modalities. We release datasets and reproducible codes in the anonymous project
page: https://3d-llm.xyz.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 16:38:48 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Zhang",
"Weichen",
""
],
[
"Peng",
"Ruiying",
""
],
[
"Gao",
"Chen",
""
],
[
"Fang",
"Jianjie",
""
],
[
"Zeng",
"Xin",
""
],
[
"Li",
"Kaiyuan",
""
],
[
"Wang",
"Ziyou",
""
],
[
"Cui",
"Jinqiang",
""
],
[
"Wang",
"Xin",
""
],
[
"Chen",
"Xinlei",
""
],
[
"Li",
"Yong",
""
]
] | TITLE: The Point, the Vision and the Text: Does Point Cloud Boost Spatial
Reasoning of Large Language Models?
ABSTRACT: 3D Large Language Models (LLMs) leveraging spatial information in point
clouds for 3D spatial reasoning attract great attention. Despite some promising
results, the role of point clouds in 3D spatial reasoning remains
under-explored. In this work, we comprehensively evaluate and analyze these
models to answer the research question: \textit{Does point cloud truly boost
the spatial reasoning capacities of 3D LLMs?} We first evaluate the spatial
reasoning capacity of LLMs with different input modalities by replacing the
point cloud with the visual and text counterparts. We then propose a novel 3D
QA (Question-answering) benchmark, ScanReQA, that comprehensively evaluates
models' understanding of binary spatial relationships. Our findings reveal
several critical insights: 1) LLMs without point input can achieve
competitive performance even in a zero-shot manner; 2) existing 3D LLMs
struggle to comprehend the binary spatial relationships; 3) 3D LLMs exhibit
limitations in exploiting the structural coordinates in point clouds for
fine-grained spatial reasoning. We believe these conclusions can guide the
next steps for 3D LLMs and also offer insights for foundation models in other
modalities. We release datasets and reproducible codes in the anonymous project
page: https://3d-llm.xyz.
|
2504.04541 | Bharadwaj Dogga | Bharadwaj Dogga, Anoop Sathyan, and Kelly Cohen | A model agnostic eXplainable AI based fuzzy framework for sensor
constrained Aerospace maintenance applications | null | null | null | null | cs.CE | http://creativecommons.org/licenses/by/4.0/ | Machine Learning methods have extensively evolved to support industrial big
data methods and their corresponding need in gas turbine maintenance and
prognostics. However, most unsupervised methods need extensively labeled data
to perform predictions across many dimensions. The cutting edge of small and
medium applications do not necessarily maintain operational sensors and data
acquisition with rising costs and diminishing profits. We propose a framework
to make sensor maintenance priority decisions using a combination of SHAP,
UMAP, and Fuzzy C-means clustering. An aerospace jet engine dataset is used as a
case study.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 16:41:29 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Dogga",
"Bharadwaj",
""
],
[
"Sathyan",
"Anoop",
""
],
[
"Cohen",
"Kelly",
""
]
] | TITLE: A model agnostic eXplainable AI based fuzzy framework for sensor
constrained Aerospace maintenance applications
ABSTRACT: Machine Learning methods have extensively evolved to support industrial big
data methods and their corresponding need in gas turbine maintenance and
prognostics. However, most unsupervised methods need extensively labeled data
to perform predictions across many dimensions. The cutting edge of small and
medium applications do not necessarily maintain operational sensors and data
acquisition with rising costs and diminishing profits. We propose a framework
to make sensor maintenance priority decisions using a combination of SHAP,
UMAP, and Fuzzy C-means clustering. An aerospace jet engine dataset is used as a
case study.
|
2504.04549 | Han Yuan | Han Yuan, Lican Kang, Yong Li | Opening the black box of deep learning: Validating the statistical
association between explainable artificial intelligence (XAI) and clinical
domain knowledge in fundus image-based glaucoma diagnosis | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | While deep learning has exhibited remarkable predictive capabilities in
various medical image tasks, its inherent black-box nature has hindered its
widespread implementation in real-world healthcare settings. Our objective is
to unveil the decision-making processes of deep learning models in the context
of glaucoma classification by employing several Class Activation Map (CAM)
techniques to generate model focus regions and comparing them with clinical
domain knowledge of the anatomical area (optic cup, optic disk, and blood
vessels). Four deep neural networks, including VGG-11, ResNet-18, DeiT-Tiny,
and Swin Transformer-Tiny, were developed using binary diagnostic labels of
glaucoma and five CAM methods (Grad-CAM, XGrad-CAM, Score-CAM, Eigen-CAM, and
Layer-CAM) were employed to highlight the model focus area. We applied the
paired-sample t-test to compare the percentage of anatomies in the model focus
area to the proportion of anatomies in the entire image. After that, Pearson's
and Spearman's correlation tests were implemented to examine the relationship
between model predictive ability and the percentage of anatomical structures in
the model focus area. On five public glaucoma datasets, all deep learning
models consistently displayed statistically significantly higher percentages of
anatomical structures in the focus area than the proportions of anatomical
structures in the entire image. Also, we validated the positive relationship
between the percentage of anatomical structures in the focus area and model
predictive performance. Our study provides evidence of the convergence of
decision logic between deep neural networks and human clinicians through
rigorous statistical tests. We anticipate that it can help alleviate
clinicians' concerns regarding the trustworthiness of deep learning in
healthcare. For reproducibility, the code and dataset have been released at
GitHub.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 16:57:34 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Yuan",
"Han",
""
],
[
"Kang",
"Lican",
""
],
[
"Li",
"Yong",
""
]
] | TITLE: Opening the black box of deep learning: Validating the statistical
association between explainable artificial intelligence (XAI) and clinical
domain knowledge in fundus image-based glaucoma diagnosis
ABSTRACT: While deep learning has exhibited remarkable predictive capabilities in
various medical image tasks, its inherent black-box nature has hindered its
widespread implementation in real-world healthcare settings. Our objective is
to unveil the decision-making processes of deep learning models in the context
of glaucoma classification by employing several Class Activation Map (CAM)
techniques to generate model focus regions and comparing them with clinical
domain knowledge of the anatomical area (optic cup, optic disk, and blood
vessels). Four deep neural networks, including VGG-11, ResNet-18, DeiT-Tiny,
and Swin Transformer-Tiny, were developed using binary diagnostic labels of
glaucoma and five CAM methods (Grad-CAM, XGrad-CAM, Score-CAM, Eigen-CAM, and
Layer-CAM) were employed to highlight the model focus area. We applied the
paired-sample t-test to compare the percentage of anatomies in the model focus
area to the proportion of anatomies in the entire image. After that, Pearson's
and Spearman's correlation tests were implemented to examine the relationship
between model predictive ability and the percentage of anatomical structures in
the model focus area. On five public glaucoma datasets, all deep learning
models consistently displayed statistically significantly higher percentages of
anatomical structures in the focus area than the proportions of anatomical
structures in the entire image. Also, we validated the positive relationship
between the percentage of anatomical structures in the focus area and model
predictive performance. Our study provides evidence of the convergence of
decision logic between deep neural networks and human clinicians through
rigorous statistical tests. We anticipate that it can help alleviate
clinicians' concerns regarding the trustworthiness of deep learning in
healthcare. For reproducibility, the code and dataset have been released at
GitHub.
|
2504.04550 | Alkesh Patel | Alkesh Patel, Vibhav Chitalia, Yinfei Yang | Advancing Egocentric Video Question Answering with Multimodal Large
Language Models | 8 pages | null | null | null | cs.CV cs.AI cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Egocentric Video Question Answering (QA) requires models to handle
long-horizon temporal reasoning, first-person perspectives, and specialized
challenges like frequent camera movement. This paper systematically evaluates
both proprietary and open-source Multimodal Large Language Models (MLLMs) on
QaEgo4Dv2 - a refined dataset of egocentric videos derived from QaEgo4D. Four
popular MLLMs (GPT-4o, Gemini-1.5-Pro, Video-LLaVa-7B and Qwen2-VL-7B-Instruct)
are assessed using zero-shot and fine-tuned approaches for both OpenQA and
CloseQA settings. We introduce QaEgo4Dv2 to mitigate annotation noise in
QaEgo4D, enabling more reliable comparison. Our results show that fine-tuned
Video-LLaVa-7B and Qwen2-VL-7B-Instruct achieve new state-of-the-art
performance, surpassing previous benchmarks by up to +2.6% ROUGE/METEOR (for
OpenQA) and +13% accuracy (for CloseQA). We also present a thorough error
analysis, indicating the model's difficulty in spatial reasoning and
fine-grained object recognition - key areas for future improvement.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 16:58:23 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Patel",
"Alkesh",
""
],
[
"Chitalia",
"Vibhav",
""
],
[
"Yang",
"Yinfei",
""
]
] | TITLE: Advancing Egocentric Video Question Answering with Multimodal Large
Language Models
ABSTRACT: Egocentric Video Question Answering (QA) requires models to handle
long-horizon temporal reasoning, first-person perspectives, and specialized
challenges like frequent camera movement. This paper systematically evaluates
both proprietary and open-source Multimodal Large Language Models (MLLMs) on
QaEgo4Dv2 - a refined dataset of egocentric videos derived from QaEgo4D. Four
popular MLLMs (GPT-4o, Gemini-1.5-Pro, Video-LLaVa-7B and Qwen2-VL-7B-Instruct)
are assessed using zero-shot and fine-tuned approaches for both OpenQA and
CloseQA settings. We introduce QaEgo4Dv2 to mitigate annotation noise in
QaEgo4D, enabling more reliable comparison. Our results show that fine-tuned
Video-LLaVa-7B and Qwen2-VL-7B-Instruct achieve new state-of-the-art
performance, surpassing previous benchmarks by up to +2.6% ROUGE/METEOR (for
OpenQA) and +13% accuracy (for CloseQA). We also present a thorough error
analysis, indicating the model's difficulty in spatial reasoning and
fine-grained object recognition - key areas for future improvement.
|
2504.04562 | Rui Gan | Rui Gan, Pei Li, Keke Long, Bocheng An, Junwei You, Keshu Wu, Bin Ran | Planning Safety Trajectories with Dual-Phase, Physics-Informed, and
Transportation Knowledge-Driven Large Language Models | null | null | null | null | cs.RO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Foundation models have demonstrated strong reasoning and generalization
capabilities in driving-related tasks, including scene understanding, planning,
and control. However, they still face challenges in hallucinations,
uncertainty, and long inference latency. While existing foundation models have
general knowledge of avoiding collisions, they often lack
transportation-specific safety knowledge. To overcome these limitations, we
introduce LetsPi, a physics-informed, dual-phase, knowledge-driven framework
for safe, human-like trajectory planning. To prevent hallucinations and
minimize uncertainty, this hybrid framework integrates Large Language Model
(LLM) reasoning with physics-informed social force dynamics. LetsPi leverages
the LLM to analyze driving scenes and historical information, providing
appropriate parameters and target destinations (goals) for the social force
model, which then generates the future trajectory. Moreover, the dual-phase
architecture balances reasoning and computational efficiency through its Memory
Collection phase and Fast Inference phase. The Memory Collection phase
leverages the physics-informed LLM to process and refine planning results
through reasoning, reflection, and memory modules, storing safe, high-quality
driving experiences in a memory bank. Surrogate safety measures and
physics-informed prompt techniques are introduced to enhance the LLM's
knowledge of transportation safety and physical force, respectively. The Fast
Inference phase extracts similar driving experiences as few-shot examples for
new scenarios, while simplifying input-output requirements to enable rapid
trajectory planning without compromising safety. Extensive experiments using
the HighD dataset demonstrate that LetsPi outperforms baseline models across
five safety metrics. See the PDF for the project GitHub link.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 17:34:33 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Gan",
"Rui",
""
],
[
"Li",
"Pei",
""
],
[
"Long",
"Keke",
""
],
[
"An",
"Bocheng",
""
],
[
"You",
"Junwei",
""
],
[
"Wu",
"Keshu",
""
],
[
"Ran",
"Bin",
""
]
] | TITLE: Planning Safety Trajectories with Dual-Phase, Physics-Informed, and
Transportation Knowledge-Driven Large Language Models
ABSTRACT: Foundation models have demonstrated strong reasoning and generalization
capabilities in driving-related tasks, including scene understanding, planning,
and control. However, they still face challenges in hallucinations,
uncertainty, and long inference latency. While existing foundation models have
general knowledge of avoiding collisions, they often lack
transportation-specific safety knowledge. To overcome these limitations, we
introduce LetsPi, a physics-informed, dual-phase, knowledge-driven framework
for safe, human-like trajectory planning. To prevent hallucinations and
minimize uncertainty, this hybrid framework integrates Large Language Model
(LLM) reasoning with physics-informed social force dynamics. LetsPi leverages
the LLM to analyze driving scenes and historical information, providing
appropriate parameters and target destinations (goals) for the social force
model, which then generates the future trajectory. Moreover, the dual-phase
architecture balances reasoning and computational efficiency through its Memory
Collection phase and Fast Inference phase. The Memory Collection phase
leverages the physics-informed LLM to process and refine planning results
through reasoning, reflection, and memory modules, storing safe, high-quality
driving experiences in a memory bank. Surrogate safety measures and
physics-informed prompt techniques are introduced to enhance the LLM's
knowledge of transportation safety and physical force, respectively. The Fast
Inference phase extracts similar driving experiences as few-shot examples for
new scenarios, while simplifying input-output requirements to enable rapid
trajectory planning without compromising safety. Extensive experiments using
the HighD dataset demonstrate that LetsPi outperforms baseline models across
five safety metrics. See the PDF for the project GitHub link.
|
2504.04566 | Muzammal Naseer | Maregu Assefa, Muzammal Naseer, Iyyakutti Iyappan Ganapathi, Syed
Sadaf Ali, Mohamed L Seghier, and Naoufel Werghi | DyCON: Dynamic Uncertainty-aware Consistency and Contrastive Learning
for Semi-supervised Medical Image Segmentation | Accepted to CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Semi-supervised learning in medical image segmentation leverages unlabeled
data to reduce annotation burdens through consistency learning. However,
current methods struggle with class imbalance and high uncertainty from
pathology variations, leading to inaccurate segmentation in 3D medical images.
To address these challenges, we present DyCON, a Dynamic Uncertainty-aware
Consistency and Contrastive Learning framework that enhances the generalization
of consistency methods with two complementary losses: Uncertainty-aware
Consistency Loss (UnCL) and Focal Entropy-aware Contrastive Loss (FeCL). UnCL
enforces global consistency by dynamically weighting the contribution of each
voxel to the consistency loss based on its uncertainty, preserving
high-uncertainty regions instead of filtering them out. Initially, UnCL
prioritizes learning from uncertain voxels with lower penalties, encouraging
the model to explore challenging regions. As training progresses, the penalty
shifts toward confident voxels to refine predictions and ensure global
consistency. Meanwhile, FeCL enhances local feature discrimination in
imbalanced regions by introducing dual focal mechanisms and adaptive confidence
adjustments into the contrastive principle. These mechanisms jointly
prioritize hard positives and negatives while focusing on uncertain sample
pairs, effectively capturing subtle lesion variations under class imbalance.
Extensive evaluations on four diverse medical image segmentation datasets
(ISLES'22, BraTS'19, LA, Pancreas) show DyCON's superior performance against
SOTA methods.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 17:50:22 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Assefa",
"Maregu",
""
],
[
"Naseer",
"Muzammal",
""
],
[
"Ganapathi",
"Iyyakutti Iyappan",
""
],
[
"Ali",
"Syed Sadaf",
""
],
[
"Seghier",
"Mohamed L",
""
],
[
"Werghi",
"Naoufel",
""
]
] | TITLE: DyCON: Dynamic Uncertainty-aware Consistency and Contrastive Learning
for Semi-supervised Medical Image Segmentation
ABSTRACT: Semi-supervised learning in medical image segmentation leverages unlabeled
data to reduce annotation burdens through consistency learning. However,
current methods struggle with class imbalance and high uncertainty from
pathology variations, leading to inaccurate segmentation in 3D medical images.
To address these challenges, we present DyCON, a Dynamic Uncertainty-aware
Consistency and Contrastive Learning framework that enhances the generalization
of consistency methods with two complementary losses: Uncertainty-aware
Consistency Loss (UnCL) and Focal Entropy-aware Contrastive Loss (FeCL). UnCL
enforces global consistency by dynamically weighting the contribution of each
voxel to the consistency loss based on its uncertainty, preserving
high-uncertainty regions instead of filtering them out. Initially, UnCL
prioritizes learning from uncertain voxels with lower penalties, encouraging
the model to explore challenging regions. As training progresses, the penalty
shifts toward confident voxels to refine predictions and ensure global
consistency. Meanwhile, FeCL enhances local feature discrimination in
imbalanced regions by introducing dual focal mechanisms and adaptive confidence
adjustments into the contrastive principle. These mechanisms jointly
prioritize hard positives and negatives while focusing on uncertain sample
pairs, effectively capturing subtle lesion variations under class imbalance.
Extensive evaluations on four diverse medical image segmentation datasets
(ISLES'22, BraTS'19, LA, Pancreas) show DyCON's superior performance against
SOTA methods.
|
2504.04569 | Chitranshu Harbola | Chitranshu Harbola and Anupam Purwar | KnowsLM: A framework for evaluation of small language models for
knowledge augmentation and humanised conversations | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | In the evolving landscape of conversational AI, generating concise,
context-aware, and human-like dialogue using small and medium-sized language
models (LLMs) remains a complex challenge. This study investigates the
influence of LoRA rank, dataset scale, and prompt prefix design on both
knowledge retention and stylistic alignment. While fine-tuning improves fluency
and enables stylistic customization, its ability to integrate unseen knowledge
is constrained -- particularly with smaller datasets. Conversely, RAG-augmented
models, equipped to incorporate external documents at inference, demonstrated
superior factual accuracy on out-of-distribution prompts, though they lacked
the stylistic consistency achieved by fine-tuning. Evaluations by LLM-based
judges across knowledge accuracy, conversational quality, and conciseness
suggest that fine-tuning is best suited for tone adaptation, whereas RAG excels
at real-time knowledge augmentation.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 17:58:08 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Harbola",
"Chitranshu",
""
],
[
"Purwar",
"Anupam",
""
]
] | TITLE: KnowsLM: A framework for evaluation of small language models for
knowledge augmentation and humanised conversations
ABSTRACT: In the evolving landscape of conversational AI, generating concise,
context-aware, and human-like dialogue using small and medium-sized language
models (LLMs) remains a complex challenge. This study investigates the
influence of LoRA rank, dataset scale, and prompt prefix design on both
knowledge retention and stylistic alignment. While fine-tuning improves fluency
and enables stylistic customization, its ability to integrate unseen knowledge
is constrained -- particularly with smaller datasets. Conversely, RAG-augmented
models, equipped to incorporate external documents at inference, demonstrated
superior factual accuracy on out-of-distribution prompts, though they lacked
the stylistic consistency achieved by fine-tuning. Evaluations by LLM-based
judges across knowledge accuracy, conversational quality, and conciseness
suggest that fine-tuning is best suited for tone adaptation, whereas RAG excels
at real-time knowledge augmentation.
|
2504.04572 | Mohamed Eltahir | Mohamed Eltahir, Osamah Sarraj, Mohammed Bremoo, Mohammed Khurd,
Abdulrahman Alfrihidi, Taha Alshatiri, Mohammad Almatrafi, Tanveer Hussain | Multimodal Lengthy Videos Retrieval Framework and Evaluation Metric | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Precise video retrieval requires multi-modal correlations to handle unseen
vocabulary and scenes, becoming more complex for lengthy videos where models
must perform effectively without prior training on a specific dataset. We
introduce a unified framework that combines a visual matching stream and an
aural matching stream with a unique subtitles-based video segmentation
approach. Additionally, the aural stream includes a complementary audio-based
two-stage retrieval mechanism that enhances performance on long-duration
videos. Considering the complex nature of retrieval from lengthy videos and its
corresponding evaluation, we introduce a new retrieval evaluation method
specifically designed for long-video retrieval to support further research. We
conducted experiments on the YouCook2 benchmark, showing promising retrieval
performance.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 18:18:09 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Eltahir",
"Mohamed",
""
],
[
"Sarraj",
"Osamah",
""
],
[
"Bremoo",
"Mohammed",
""
],
[
"Khurd",
"Mohammed",
""
],
[
"Alfrihidi",
"Abdulrahman",
""
],
[
"Alshatiri",
"Taha",
""
],
[
"Almatrafi",
"Mohammad",
""
],
[
"Hussain",
"Tanveer",
""
]
] | TITLE: Multimodal Lengthy Videos Retrieval Framework and Evaluation Metric
ABSTRACT: Precise video retrieval requires multi-modal correlations to handle unseen
vocabulary and scenes, becoming more complex for lengthy videos where models
must perform effectively without prior training on a specific dataset. We
introduce a unified framework that combines a visual matching stream and an
aural matching stream with a unique subtitles-based video segmentation
approach. Additionally, the aural stream includes a complementary audio-based
two-stage retrieval mechanism that enhances performance on long-duration
videos. Considering the complex nature of retrieval from lengthy videos and its
corresponding evaluation, we introduce a new retrieval evaluation method
specifically designed for long-video retrieval to support further research. We
conducted experiments on the YouCook2 benchmark, showing promising retrieval
performance.
|
2504.04573 | Jieyi Zhang | Jieyi Zhang, Wenqiang Xu, Zhenjun Yu, Pengfei Xie, Tutian Tang and
Cewu Lu | DexTOG: Learning Task-Oriented Dexterous Grasp with Language | null | IEEE Robotics and Automation Letters, vol. 10, no. 2, pp.
995-1002, Feb. 2025 | 10.1109/LRA.2024.3518116 | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | This study introduces a novel language-guided diffusion-based learning
framework, DexTOG, aimed at advancing the field of task-oriented grasping (TOG)
with dexterous hands. Unlike existing methods that mainly focus on 2-finger
grippers, this research addresses the complexities of dexterous manipulation,
where the system must identify non-unique optimal grasp poses under specific
task constraints, cater to multiple valid grasps, and search in a high
degree-of-freedom configuration space in grasp planning. The proposed DexTOG
includes a diffusion-based grasp pose generation model, DexDiffu, and a data
engine to support the DexDiffu. By leveraging DexTOG, we also propose a new
dataset, DexTOG-80K, which was developed using a shadow robot hand to perform
various tasks on 80 objects from 5 categories, showcasing the dexterity and
multi-tasking capabilities of the robotic hand. This research not only presents
a significant leap in dexterous TOG but also provides a comprehensive dataset
and simulation validation, setting a new benchmark in robotic manipulation
research.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 18:23:10 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Zhang",
"Jieyi",
""
],
[
"Xu",
"Wenqiang",
""
],
[
"Yu",
"Zhenjun",
""
],
[
"Xie",
"Pengfei",
""
],
[
"Tang",
"Tutian",
""
],
[
"Lu",
"Cewu",
""
]
] | TITLE: DexTOG: Learning Task-Oriented Dexterous Grasp with Language
ABSTRACT: This study introduces a novel language-guided diffusion-based learning
framework, DexTOG, aimed at advancing the field of task-oriented grasping (TOG)
with dexterous hands. Unlike existing methods that mainly focus on 2-finger
grippers, this research addresses the complexities of dexterous manipulation,
where the system must identify non-unique optimal grasp poses under specific
task constraints, cater to multiple valid grasps, and search in a high
degree-of-freedom configuration space in grasp planning. The proposed DexTOG
includes a diffusion-based grasp pose generation model, DexDiffu, and a data
engine to support the DexDiffu. By leveraging DexTOG, we also propose a new
dataset, DexTOG-80K, which was developed using a shadow robot hand to perform
various tasks on 80 objects from 5 categories, showcasing the dexterity and
multi-tasking capabilities of the robotic hand. This research not only presents
a significant leap in dexterous TOG but also provides a comprehensive dataset
and simulation validation, setting a new benchmark in robotic manipulation
research.
|
2504.04586 | Kyoungjun Park | Kyoungjun Park, Zhiyuan He, Cheng Luo, Yi Xu, Lili Qiu, Changhan Ge,
Muhammad Muaz, Yuqing Yang | Joint Optimization of Handoff and Video Rate in LEO Satellite Networks | null | null | null | null | cs.NI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Low Earth Orbit (LEO) satellite communication presents a promising solution
for delivering Internet access to users in remote regions. Given that video
content is expected to dominate network traffic in LEO satellite systems, this
study presents a new video-aware mobility management framework specifically
designed for such networks. By combining simulation models with real-world
datasets, we highlight the critical role of handoff strategies and throughput
prediction algorithms in both single-user and multi-user video streaming
scenarios. Building on these insights, we introduce a suite of innovative
algorithms that jointly determine satellite selection and video bitrate to
enhance users' quality of experience (QoE). Initially, we design model
predictive control (MPC) and reinforcement learning (RL) based methods for
individual users, then extend the approach to manage multiple users sharing a
satellite. Notably, we incorporate centralized training with distributed
inference in our RL design to develop distributed policies informed by a global
view. The effectiveness of our approach is validated through trace-driven
simulations and testbed experiments.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 18:58:22 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Park",
"Kyoungjun",
""
],
[
"He",
"Zhiyuan",
""
],
[
"Luo",
"Cheng",
""
],
[
"Xu",
"Yi",
""
],
[
"Qiu",
"Lili",
""
],
[
"Ge",
"Changhan",
""
],
[
"Muaz",
"Muhammad",
""
],
[
"Yang",
"Yuqing",
""
]
] | TITLE: Joint Optimization of Handoff and Video Rate in LEO Satellite Networks
ABSTRACT: Low Earth Orbit (LEO) satellite communication presents a promising solution
for delivering Internet access to users in remote regions. Given that video
content is expected to dominate network traffic in LEO satellite systems, this
study presents a new video-aware mobility management framework specifically
designed for such networks. By combining simulation models with real-world
datasets, we highlight the critical role of handoff strategies and throughput
prediction algorithms in both single-user and multi-user video streaming
scenarios. Building on these insights, we introduce a suite of innovative
algorithms that jointly determine satellite selection and video bitrate to
enhance users' quality of experience (QoE). Initially, we design model
predictive control (MPC) and reinforcement learning (RL) based methods for
individual users, then extend the approach to manage multiple users sharing a
satellite. Notably, we incorporate centralized training with distributed
inference in our RL design to develop distributed policies informed by a global
view. The effectiveness of our approach is validated through trace-driven
simulations and testbed experiments.
|
2504.04589 | Yicheng Gu | Yicheng Gu, Runsong Zhang, Lauri Juvela, Zhizheng Wu | Diff-SSL-G-Comp: Towards a Large-Scale and Diverse Dataset for Virtual
Analog Modeling | Submitted to DAFx 2025 | null | null | null | cs.SD eess.AS eess.SP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Virtual Analog (VA) modeling aims to simulate the behavior of hardware
circuits via algorithms to replicate their tone digitally. Dynamic Range
Compressor (DRC) is an audio processing module that controls the dynamics of a
track by reducing and amplifying the volumes of loud and quiet sounds, which is
essential in music production. In recent years, neural-network-based VA
modeling has shown great potential in producing high-fidelity models. However,
due to the lack of data quantity and diversity, their generalization ability in
different parameter settings and input sounds is still limited. To tackle this
problem, we present Diff-SSL-G-Comp, the first large-scale and diverse dataset
for modeling the SSL 500 G-Bus Compressor. Specifically, we manually collected
175 unmastered songs from the Cambridge Multitrack Library. We recorded the
compressed audio in 220 parameter combinations, resulting in an extensive
2528-hour dataset with diverse genres, instruments, tempos, and keys. Moreover,
to facilitate the use of our proposed dataset, we conducted benchmark
experiments on various open-source black-box and grey-box models, as well as
white-box plugins. We also conducted ablation studies in different data subsets
to illustrate the effectiveness of improved data diversity and quantity. The
dataset and demos are on our project page:
http://www.yichenggu.com/DiffSSLGComp/.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 19:19:53 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Gu",
"Yicheng",
""
],
[
"Zhang",
"Runsong",
""
],
[
"Juvela",
"Lauri",
""
],
[
"Wu",
"Zhizheng",
""
]
] | TITLE: Diff-SSL-G-Comp: Towards a Large-Scale and Diverse Dataset for Virtual
Analog Modeling
ABSTRACT: Virtual Analog (VA) modeling aims to simulate the behavior of hardware
circuits via algorithms to replicate their tone digitally. Dynamic Range
Compressor (DRC) is an audio processing module that controls the dynamics of a
track by reducing and amplifying the volumes of loud and quiet sounds, which is
essential in music production. In recent years, neural-network-based VA
modeling has shown great potential in producing high-fidelity models. However,
due to the lack of data quantity and diversity, their generalization ability in
different parameter settings and input sounds is still limited. To tackle this
problem, we present Diff-SSL-G-Comp, the first large-scale and diverse dataset
for modeling the SSL 500 G-Bus Compressor. Specifically, we manually collected
175 unmastered songs from the Cambridge Multitrack Library. We recorded the
compressed audio in 220 parameter combinations, resulting in an extensive
2528-hour dataset with diverse genres, instruments, tempos, and keys. Moreover,
to facilitate the use of our proposed dataset, we conducted benchmark
experiments in various open-sourced black-box and grey-box models, as well as
white-box plugins. We also conducted ablation studies in different data subsets
to illustrate the effectiveness of improved data diversity and quantity. The
dataset and demos are on our project page:
http://www.yichenggu.com/DiffSSLGComp/.
|
2504.04597 | Haebeom Jung | Haebeom Jung, Namtae Kim, Jungwoo Kim, Jaesik Park | Targetless LiDAR-Camera Calibration with Anchored 3D Gaussians | Project page: https://zang09.github.io/tlc-calib-site | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present a targetless LiDAR-camera calibration method that jointly
optimizes sensor poses and scene geometry from arbitrary scenes, without
relying on traditional calibration targets such as checkerboards or spherical
reflectors. Our approach leverages a 3D Gaussian-based scene representation. We
first freeze reliable LiDAR points as anchors, then jointly optimize the poses
and auxiliary Gaussian parameters in a fully differentiable manner using a
photometric loss. This joint optimization significantly reduces sensor
misalignment, resulting in higher rendering quality and consistently improved
PSNR compared to the carefully calibrated poses provided in popular datasets.
We validate our method through extensive experiments on two real-world
autonomous driving datasets, KITTI-360 and Waymo, each featuring distinct
sensor configurations. Additionally, we demonstrate the robustness of our
approach using a custom LiDAR-camera setup, confirming strong performance
across diverse hardware configurations.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 20:00:01 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Jung",
"Haebeom",
""
],
[
"Kim",
"Namtae",
""
],
[
"Kim",
"Jungwoo",
""
],
[
"Park",
"Jaesik",
""
]
] | TITLE: Targetless LiDAR-Camera Calibration with Anchored 3D Gaussians
ABSTRACT: We present a targetless LiDAR-camera calibration method that jointly
optimizes sensor poses and scene geometry from arbitrary scenes, without
relying on traditional calibration targets such as checkerboards or spherical
reflectors. Our approach leverages a 3D Gaussian-based scene representation. We
first freeze reliable LiDAR points as anchors, then jointly optimize the poses
and auxiliary Gaussian parameters in a fully differentiable manner using a
photometric loss. This joint optimization significantly reduces sensor
misalignment, resulting in higher rendering quality and consistently improved
PSNR compared to the carefully calibrated poses provided in popular datasets.
We validate our method through extensive experiments on two real-world
autonomous driving datasets, KITTI-360 and Waymo, each featuring distinct
sensor configurations. Additionally, we demonstrate the robustness of our
approach using a custom LiDAR-camera setup, confirming strong performance
across diverse hardware configurations.
|
2504.04613 | Kleanthis Malialis | Kleanthis Malialis and Stylianos Filippou and Christos G. Panayiotou
and Marios M. Polycarpou | SiameseDuo++: Active Learning from Data Streams with Dual Augmented
Siamese Networks | null | Neurocomputing, Volume 637, 2025, 130083, ISSN 0925-2312 | 10.1016/j.neucom.2025.130083 | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Data stream mining, also known as stream learning, is a growing area which
deals with learning from high-speed arriving data. Its relevance has surged
recently due to its wide range of applicability, such as critical
infrastructure monitoring, social media analysis, and recommender systems. The
design of stream learning methods faces significant research challenges; from
the nonstationary nature of the data (referred to as concept drift) and the
fact that data streams are typically not annotated with the ground truth, to
the requirement that such methods should process large amounts of data in
real-time with limited memory. This work proposes the SiameseDuo++ method,
which uses active learning to automatically select instances for a human expert
to label according to a budget. Specifically, it incrementally trains two
siamese neural networks which operate in synergy, augmented by generated
examples. Both the proposed active learning strategy and augmentation operate
in the latent space. SiameseDuo++ addresses the aforementioned challenges by
operating with limited memory and limited labelling budget. Simulation
experiments show that the proposed method outperforms strong baselines and
state-of-the-art methods in terms of learning speed and/or performance. To
promote open science we publicly release our code and datasets.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 20:45:25 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Malialis",
"Kleanthis",
""
],
[
"Filippou",
"Stylianos",
""
],
[
"Panayiotou",
"Christos G.",
""
],
[
"Polycarpou",
"Marios M.",
""
]
] | TITLE: SiameseDuo++: Active Learning from Data Streams with Dual Augmented
Siamese Networks
ABSTRACT: Data stream mining, also known as stream learning, is a growing area which
deals with learning from high-speed arriving data. Its relevance has surged
recently due to its wide range of applicability, such as critical
infrastructure monitoring, social media analysis, and recommender systems. The
design of stream learning methods faces significant research challenges; from
the nonstationary nature of the data (referred to as concept drift) and the
fact that data streams are typically not annotated with the ground truth, to
the requirement that such methods should process large amounts of data in
real-time with limited memory. This work proposes the SiameseDuo++ method,
which uses active learning to automatically select instances for a human expert
to label according to a budget. Specifically, it incrementally trains two
siamese neural networks which operate in synergy, augmented by generated
examples. Both the proposed active learning strategy and augmentation operate
in the latent space. SiameseDuo++ addresses the aforementioned challenges by
operating with limited memory and limited labelling budget. Simulation
experiments show that the proposed method outperforms strong baselines and
state-of-the-art methods in terms of learning speed and/or performance. To
promote open science we publicly release our code and datasets.
|
2504.04615 | Eleftherios Vlahakis | Eleftherios E. Vlahakis, Lars Lindemann and Dimos V. Dimarogonas | Conformal Data-driven Control of Stochastic Multi-Agent Systems under
Collaborative Signal Temporal Logic Specifications | 8 pages, 2 figures, submitted to CDC2025 | null | null | null | eess.SY cs.MA cs.SY | http://creativecommons.org/licenses/by/4.0/ | We study the control of stochastic discrete-time linear multi-agent systems
(MAS) subject to additive stochastic noise and collaborative signal temporal
logic (STL) specifications to be satisfied with a desired probability. Given
available disturbance datasets, we leverage conformal prediction (CP) to
address the underlying chance-constrained multi-agent STL synthesis problem in
a distribution-free manner. By introducing nonconformity scores as functions of
prediction regions (PRs) of error trajectories, we develop an iterative
PR-scaling and disturbance-feedback synthesis approach to bound training error
trajectory samples. These bounds are then calibrated using a separate dataset,
providing probabilistic guarantees via CP. Subsequently, we relax the
underlying stochastic optimal control problem by tightening the robustness
functions of collaborative tasks based on their Lipschitz constants and the
computed error bounds. To address scalability, we exploit the compositional
structure of the multi-agent STL formula and propose a
model-predictive-control-like algorithm, where agent-level problems are solved
in a distributed fashion. Lastly, we showcase the benefits of the proposed
method in comparison with [1] via an illustrative example.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 20:53:49 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Vlahakis",
"Eleftherios E.",
""
],
[
"Lindemann",
"Lars",
""
],
[
"Dimarogonas",
"Dimos V.",
""
]
] | TITLE: Conformal Data-driven Control of Stochastic Multi-Agent Systems under
Collaborative Signal Temporal Logic Specifications
ABSTRACT: We study the control of stochastic discrete-time linear multi-agent systems
(MAS) subject to additive stochastic noise and collaborative signal temporal
logic (STL) specifications to be satisfied with a desired probability. Given
available disturbance datasets, we leverage conformal prediction (CP) to
address the underlying chance-constrained multi-agent STL synthesis problem in
a distribution-free manner. By introducing nonconformity scores as functions of
prediction regions (PRs) of error trajectories, we develop an iterative
PR-scaling and disturbance-feedback synthesis approach to bound training error
trajectory samples. These bounds are then calibrated using a separate dataset,
providing probabilistic guarantees via CP. Subsequently, we relax the
underlying stochastic optimal control problem by tightening the robustness
functions of collaborative tasks based on their Lipschitz constants and the
computed error bounds. To address scalability, we exploit the compositional
structure of the multi-agent STL formula and propose a
model-predictive-control-like algorithm, where agent-level problems are solved
in a distributed fashion. Lastly, we showcase the benefits of the proposed
method in comparison with [1] via an illustrative example.
|
2504.04616 | Qi Zhang | Qi Zhang, Huitong Pan, Zhijia Chen, Longin Jan Latecki, Cornelia
Caragea, Eduard Dragut | DynClean: Training Dynamics-based Label Cleaning for
Distantly-Supervised Named Entity Recognition | Accepted to NAACL2025-Findings | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Distantly Supervised Named Entity Recognition (DS-NER) has attracted
attention due to its scalability and ability to automatically generate labeled
data. However, distant annotation introduces many mislabeled instances,
limiting its performance. Most existing work attempts to solve this
problem by developing intricate models to learn from the noisy labels. An
alternative approach is to attempt to clean the labeled data, thus increasing
the quality of distant labels. This approach has received little attention for
NER. In this paper, we propose a training dynamics-based label cleaning
approach, which leverages the behavior of a model as training progresses to
characterize the distantly annotated samples. We also introduce an automatic
threshold estimation strategy to locate the errors in distant labels. Extensive
experimental results demonstrate that: (1) models trained on our cleaned DS-NER
datasets, which were refined by directly removing identified erroneous
annotations, achieve significant improvements in F1-score, ranging from 3.18%
to 8.95%; and (2) our method outperforms numerous advanced DS-NER approaches
across four datasets.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 20:54:42 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Zhang",
"Qi",
""
],
[
"Pan",
"Huitong",
""
],
[
"Chen",
"Zhijia",
""
],
[
"Latecki",
"Longin Jan",
""
],
[
"Caragea",
"Cornelia",
""
],
[
"Dragut",
"Eduard",
""
]
] | TITLE: DynClean: Training Dynamics-based Label Cleaning for
Distantly-Supervised Named Entity Recognition
ABSTRACT: Distantly Supervised Named Entity Recognition (DS-NER) has attracted
attention due to its scalability and ability to automatically generate labeled
data. However, distant annotation introduces many mislabeled instances,
limiting its performance. Most existing work attempts to solve this
problem by developing intricate models to learn from the noisy labels. An
alternative approach is to attempt to clean the labeled data, thus increasing
the quality of distant labels. This approach has received little attention for
NER. In this paper, we propose a training dynamics-based label cleaning
approach, which leverages the behavior of a model as training progresses to
characterize the distantly annotated samples. We also introduce an automatic
threshold estimation strategy to locate the errors in distant labels. Extensive
experimental results demonstrate that: (1) models trained on our cleaned DS-NER
datasets, which were refined by directly removing identified erroneous
annotations, achieve significant improvements in F1-score, ranging from 3.18%
to 8.95%; and (2) our method outperforms numerous advanced DS-NER approaches
across four datasets.
|
2504.04640 | Eylon Caplan | Eylon Caplan, Tania Chakraborty, Dan Goldwasser | Splits! A Flexible Dataset for Evaluating a Model's Demographic Social
Inference | Under review for COLM 2025 | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Understanding how people of various demographics think, feel, and express
themselves (collectively called group expression) is essential for social
science and underlies the assessment of bias in Large Language Models (LLMs).
While LLMs can effectively summarize group expression when provided with
empirical examples, coming up with generalizable theories of how a group's
expression manifests in real-world text is challenging. In this paper, we
define a new task called Group Theorization, in which a system must write
theories that differentiate expression across demographic groups. We make
available a large dataset on this task, Splits!, constructed by splitting
Reddit posts by neutral topics (e.g. sports, cooking, and movies) and by
demographics (e.g. occupation, religion, and race). Finally, we suggest a
simple evaluation framework for assessing how effectively a method can generate
'better' theories about group expression, backed by human validation. We
publicly release the raw corpora and evaluation scripts for Splits! to help
researchers assess how methods infer--and potentially misrepresent--group
differences in expression. We make Splits! and our evaluation module available
at https://github.com/eyloncaplan/splits.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 23:17:07 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Caplan",
"Eylon",
""
],
[
"Chakraborty",
"Tania",
""
],
[
"Goldwasser",
"Dan",
""
]
] | TITLE: Splits! A Flexible Dataset for Evaluating a Model's Demographic Social
Inference
ABSTRACT: Understanding how people of various demographics think, feel, and express
themselves (collectively called group expression) is essential for social
science and underlies the assessment of bias in Large Language Models (LLMs).
While LLMs can effectively summarize group expression when provided with
empirical examples, coming up with generalizable theories of how a group's
expression manifests in real-world text is challenging. In this paper, we
define a new task called Group Theorization, in which a system must write
theories that differentiate expression across demographic groups. We make
available a large dataset on this task, Splits!, constructed by splitting
Reddit posts by neutral topics (e.g. sports, cooking, and movies) and by
demographics (e.g. occupation, religion, and race). Finally, we suggest a
simple evaluation framework for assessing how effectively a method can generate
'better' theories about group expression, backed by human validation. We
publicly release the raw corpora and evaluation scripts for Splits! to help
researchers assess how methods infer--and potentially misrepresent--group
differences in expression. We make Splits! and our evaluation module available
at https://github.com/eyloncaplan/splits.
|
2504.04642 | Hengrui Hu | Hengrui Hu, Anai N. Kothari, Anjishnu Banerjee | A Novel Algorithm for Personalized Federated Learning: Knowledge
Distillation with Weighted Combination Loss | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Federated learning (FL) offers a privacy-preserving framework for distributed
machine learning, enabling collaborative model training across diverse clients
without centralizing sensitive data. However, statistical heterogeneity,
characterized by non-independent and identically distributed (non-IID) client
data, poses significant challenges, leading to model drift and poor
generalization. This paper proposes a novel algorithm, pFedKD-WCL (Personalized
Federated Knowledge Distillation with Weighted Combination Loss), which
integrates knowledge distillation with bi-level optimization to address non-IID
challenges. pFedKD-WCL leverages the current global model as a teacher to guide
local models, optimizing both global convergence and local personalization
efficiently. We evaluate pFedKD-WCL on the MNIST dataset and a synthetic
dataset with non-IID partitioning, using multinomial logistic regression and
multilayer perceptron models. Experimental results demonstrate that pFedKD-WCL
outperforms state-of-the-art algorithms, including FedAvg, FedProx, Per-FedAvg,
and pFedMe, in terms of accuracy and convergence speed.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 23:22:03 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Hu",
"Hengrui",
""
],
[
"Kothari",
"Anai N.",
""
],
[
"Banerjee",
"Anjishnu",
""
]
] | TITLE: A Novel Algorithm for Personalized Federated Learning: Knowledge
Distillation with Weighted Combination Loss
ABSTRACT: Federated learning (FL) offers a privacy-preserving framework for distributed
machine learning, enabling collaborative model training across diverse clients
without centralizing sensitive data. However, statistical heterogeneity,
characterized by non-independent and identically distributed (non-IID) client
data, poses significant challenges, leading to model drift and poor
generalization. This paper proposes a novel algorithm, pFedKD-WCL (Personalized
Federated Knowledge Distillation with Weighted Combination Loss), which
integrates knowledge distillation with bi-level optimization to address non-IID
challenges. pFedKD-WCL leverages the current global model as a teacher to guide
local models, optimizing both global convergence and local personalization
efficiently. We evaluate pFedKD-WCL on the MNIST dataset and a synthetic
dataset with non-IID partitioning, using multinomial logistic regression and
multilayer perceptron models. Experimental results demonstrate that pFedKD-WCL
outperforms state-of-the-art algorithms, including FedAvg, FedProx, Per-FedAvg,
and pFedMe, in terms of accuracy and convergence speed.
|
2504.04645 | Tianyi Ren | Tianyi Ren, Juampablo Heras Rivera, Hitender Oswal, Yutong Pan,
Agamdeep Chopra, Jacob Ruzevick, and Mehmet Kurt | Here Comes the Explanation: A Shapley Perspective on Multi-contrast
Medical Image Segmentation | null | null | null | null | eess.IV cs.AI cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Deep learning has been successfully applied to medical image segmentation,
enabling accurate identification of regions of interest such as organs and
lesions. This approach works effectively across diverse datasets, including
those with single-image contrast, multi-contrast, and multimodal imaging data.
To improve human understanding of these black-box models, there is a growing
need for Explainable AI (XAI) techniques for model transparency and
accountability. Previous research has primarily focused on post hoc pixel-level
explanations, using gradient-based and perturbation-based approaches.
These methods rely on gradients or perturbations to explain model predictions.
However, these pixel-level explanations often struggle with the complexity
inherent in multi-contrast magnetic resonance imaging (MRI) segmentation tasks,
and the sparsely distributed explanations have limited clinical relevance. In
this study, we propose using contrast-level Shapley values to explain
state-of-the-art models trained on standard metrics used in brain tumor
segmentation. Our results demonstrate that Shapley analysis provides valuable
insights into different models' behavior used for tumor segmentation. We
demonstrated a bias for U-Net towards over-weighting T1-contrast and FLAIR,
while Swin-UNETR provided a cross-contrast understanding with balanced Shapley
distribution.
| [
{
"version": "v1",
"created": "Sun, 6 Apr 2025 23:52:07 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Ren",
"Tianyi",
""
],
[
"Rivera",
"Juampablo Heras",
""
],
[
"Oswal",
"Hitender",
""
],
[
"Pan",
"Yutong",
""
],
[
"Chopra",
"Agamdeep",
""
],
[
"Ruzevick",
"Jacob",
""
],
[
"Kurt",
"Mehmet",
""
]
] | TITLE: Here Comes the Explanation: A Shapley Perspective on Multi-contrast
Medical Image Segmentation
ABSTRACT: Deep learning has been successfully applied to medical image segmentation,
enabling accurate identification of regions of interest such as organs and
lesions. This approach works effectively across diverse datasets, including
those with single-image contrast, multi-contrast, and multimodal imaging data.
To improve human understanding of these black-box models, there is a growing
need for Explainable AI (XAI) techniques for model transparency and
accountability. Previous research has primarily focused on post hoc pixel-level
explanations, using gradient-based and perturbation-based approaches.
These methods rely on gradients or perturbations to explain model predictions.
However, these pixel-level explanations often struggle with the complexity
inherent in multi-contrast magnetic resonance imaging (MRI) segmentation tasks,
and the sparsely distributed explanations have limited clinical relevance. In
this study, we propose using contrast-level Shapley values to explain
state-of-the-art models trained on standard metrics used in brain tumor
segmentation. Our results demonstrate that Shapley analysis provides valuable
insights into different models' behavior used for tumor segmentation. We
demonstrated a bias for U-Net towards over-weighting T1-contrast and FLAIR,
while Swin-UNETR provided a cross-contrast understanding with balanced Shapley
distribution.
|
2504.04647 | Xinjie Li | Yujia Su, Xinjie Li, Lionel Z. Wang | Sub-Clustering for Class Distance Recalculation in Long-Tailed Drug
Classification | null | null | null | null | cs.LG q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | In the real world, long-tailed data distributions are prevalent, making it
challenging for models to effectively learn and classify tail classes. However,
we discover that in the field of drug chemistry, certain tail classes exhibit
higher identifiability during training due to their unique molecular structural
features, a finding that significantly contrasts with the conventional
understanding that tail classes are generally difficult to identify. Existing
imbalance learning methods, such as resampling and cost-sensitive reweighting,
overly rely on sample quantity priors, causing models to excessively focus on
tail classes at the expense of head class performance. To address this issue,
we propose a novel method that breaks away from the traditional static
evaluation paradigm based on sample size. Instead, we establish a dynamic
inter-class separability metric using feature distances between different
classes. Specifically, we employ a sub-clustering contrastive learning approach
to thoroughly learn the embedding features of each class, and we dynamically
compute the distances between class embeddings to capture the relative
positional evolution of samples from different classes in the feature space,
thereby rebalancing the weights of the classification loss function. We
conducted experiments on multiple existing long-tailed drug datasets and
achieved competitive results by improving the accuracy of tail classes without
compromising the performance of dominant classes.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 00:09:10 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Su",
"Yujia",
""
],
[
"Li",
"Xinjie",
""
],
[
"Wang",
"Lionel Z.",
""
]
] | TITLE: Sub-Clustering for Class Distance Recalculation in Long-Tailed Drug
Classification
ABSTRACT: In the real world, long-tailed data distributions are prevalent, making it
challenging for models to effectively learn and classify tail classes. However,
we discover that in the field of drug chemistry, certain tail classes exhibit
higher identifiability during training due to their unique molecular structural
features, a finding that significantly contrasts with the conventional
understanding that tail classes are generally difficult to identify. Existing
imbalance learning methods, such as resampling and cost-sensitive reweighting,
overly rely on sample quantity priors, causing models to excessively focus on
tail classes at the expense of head class performance. To address this issue,
we propose a novel method that breaks away from the traditional static
evaluation paradigm based on sample size. Instead, we establish a dynamic
inter-class separability metric using feature distances between different
classes. Specifically, we employ a sub-clustering contrastive learning approach
to thoroughly learn the embedding features of each class, and we dynamically
compute the distances between class embeddings to capture the relative
positional evolution of samples from different classes in the feature space,
thereby rebalancing the weights of the classification loss function. We
conducted experiments on multiple existing long-tailed drug datasets and
achieved competitive results by improving the accuracy of tail classes without
compromising the performance of dominant classes.
|
2504.04657 | Sathish Kumar | Tasnia Rahman, Sathish A. P. Kumar, Sumit Jha, and Arvind Ramanathan | ACE-RLHF: Automated Code Evaluation and Socratic Feedback Generation
Tool using Large Language Models and Reinforcement Learning with Human
Feedback | 9 pages, 3 figures | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Automated Program Repair tools are developed for generating feedback and
suggesting a repair method for erroneous code. State of the art (SOTA) code
repair methods rely on data-driven approaches and often fail to deliver
solutions for complicated programming questions. To interpret the natural
language of unprecedented programming problems, using Large Language Models
(LLMs) for code-feedback generation is crucial. LLMs generate more
comprehensible feedback than compiler-generated error messages, and
Reinforcement Learning with Human Feedback (RLHF) further enhances quality by
integrating a human-in-the-loop, which helps novice students learn programming
from scratch interactively. We apply RLHF fine-tuning to produce an
expected Socratic response, such as a question with a hint to solve the
programming issue. We propose a code feedback generation tool built by
fine-tuning LLMs with RLHF, Automated Code Evaluation with RLHF (ACE-RLHF),
combining two open-source LLM models with two different SOTA optimization
techniques. The quality of feedback is evaluated on two benchmark datasets
containing basic and competition-level programming questions, where the latter is
proposed by us. We achieved 2-5% higher accuracy than RL-free SOTA techniques
using Llama-3-7B-Proximal-policy optimization in automated evaluation and
similar or slightly higher accuracy compared to reward model-free RL with AI
Feedback (RLAIF). We achieved almost 40% higher accuracy with GPT-3.5 Best-of-n
optimization while performing manual evaluation.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 01:11:22 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Rahman",
"Tasnia",
""
],
[
"Kumar",
"Sathish A. P.",
""
],
[
"Jha",
"Sumit",
""
],
[
"Ramanathan",
"Arvind",
""
]
] | TITLE: ACE-RLHF: Automated Code Evaluation and Socratic Feedback Generation
Tool using Large Language Models and Reinforcement Learning with Human
Feedback
ABSTRACT: Automated Program Repair tools are developed for generating feedback and
suggesting a repair method for erroneous code. State of the art (SOTA) code
repair methods rely on data-driven approaches and often fail to deliver
solutions for complicated programming questions. To interpret the natural
language of unprecedented programming problems, using Large Language Models
(LLMs) for code-feedback generation is crucial. LLMs generate more
comprehensible feedback than compiler-generated error messages, and
Reinforcement Learning with Human Feedback (RLHF) further enhances quality by
integrating a human-in-the-loop, which helps novice students learn programming
from scratch interactively. We apply RLHF fine-tuning to produce an
expected Socratic response, such as a question with a hint to solve the
programming issue. We propose a code feedback generation tool built by
fine-tuning LLMs with RLHF, Automated Code Evaluation with RLHF (ACE-RLHF),
combining two open-source LLM models with two different SOTA optimization
techniques. The quality of feedback is evaluated on two benchmark datasets
containing basic and competition-level programming questions, where the latter is
proposed by us. We achieved 2-5% higher accuracy than RL-free SOTA techniques
using Llama-3-7B-Proximal-policy optimization in automated evaluation and
similar or slightly higher accuracy compared to reward model-free RL with AI
Feedback (RLAIF). We achieved almost 40% higher accuracy with GPT-3.5 Best-of-n
optimization while performing manual evaluation.
|
2504.04663 | Giuseppe Petrillo | Eugenio Lippiello, Cataldo Godano and Giuseppe Petrillo | Improve the estimate of the b-value in regional catalogs by means of
the b-more-positive method | 2 figures, 1 table | null | null | null | physics.geo-ph physics.data-an | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The b-value, which controls the slope of the frequency-magnitude distribution
of earthquakes, is a critical parameter in seismic forecasting. However,
accurately measuring the true b-value is challenging due to the temporal and
spatial variations in the completeness of instrumental seismic catalogs. In
this study, we systematically compare traditional methods for estimating the
b-value with newer approaches, specifically focusing on the b-more-positive
estimator based on positive magnitude difference statistics. We conduct this
comparison using both synthetic ETAS catalogs, with artificially introduced
incompleteness, and instrumental catalogs from five regions: Japan, Italy,
Southern California, Northern California, and New Zealand. Our results from
synthetic ETAS catalogs reveal that traditional estimators tend to
underestimate the b-value, while the b-more-positive estimator provides a more
accurate measurement. Similar patterns are observed in instrumental catalogs,
suggesting that traditional methods may also underestimate the true b-value in
real datasets.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 01:18:19 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Lippiello",
"Eugenio",
""
],
[
"Godano",
"Cataldo",
""
],
[
"Petrillo",
"Giuseppe",
""
]
] | TITLE: Improve the estimate of the b-value in regional catalogs by means of
the b-more-positive method
ABSTRACT: The b-value, which controls the slope of the frequency-magnitude distribution
of earthquakes, is a critical parameter in seismic forecasting. However,
accurately measuring the true b-value is challenging due to the temporal and
spatial variations in the completeness of instrumental seismic catalogs. In
this study, we systematically compare traditional methods for estimating the
b-value with newer approaches, specifically focusing on the b-more-positive
estimator based on positive magnitude difference statistics. We conduct this
comparison using both synthetic ETAS catalogs, with artificially introduced
incompleteness, and instrumental catalogs from five regions: Japan, Italy,
Southern California, Northern California, and New Zealand. Our results from
synthetic ETAS catalogs reveal that traditional estimators tend to
underestimate the b-value, while the b-more-positive estimator provides a more
accurate measurement. Similar patterns are observed in instrumental catalogs,
suggesting that traditional methods may also underestimate the true b-value in
real datasets.
|
2504.04664 | Abu Saleh Musa Miah Dr. | Md Bayazid Hossain, Md Anwarul Islam Himel, Md Abdur Rahim, Shabbir
Mahmood, Abu Saleh Musa Miah, Jungpil Shin | Classification of ADHD and Healthy Children Using EEG Based Multi-Band
Spatial Features Enhancement | null | null | null | null | eess.SP cs.CV | http://creativecommons.org/licenses/by/4.0/ | Attention Deficit Hyperactivity Disorder (ADHD) is a common
neurodevelopmental disorder in children, characterized by difficulties in
attention, hyperactivity, and impulsivity. Early and accurate diagnosis of ADHD
is critical for effective intervention and management. Electroencephalogram
(EEG) signals have emerged as a non-invasive and efficient tool for ADHD
detection due to their high temporal resolution and ability to capture neural
dynamics. In this study, we propose a method for classifying ADHD and healthy
children using EEG data from a benchmark dataset. There were 61 children with
ADHD and 60 healthy children, both boys and girls, aged 7 to 12. The EEG
signals, recorded from 19 channels, were processed to extract Power Spectral
Density (PSD) and Spectral Entropy (SE) features across five frequency bands,
resulting in a comprehensive 190-dimensional feature set. Among the evaluated
classifiers, a Support Vector Machine (SVM) with the RBF kernel demonstrated the
best performance with a mean cross-validation accuracy of
99.2\% and a standard deviation of 0.0079, indicating high robustness and
precision. These results highlight the potential of spatial features in
conjunction with machine learning for accurately classifying ADHD using EEG
data. This work contributes to developing non-invasive, data-driven tools for
early diagnosis and assessment of ADHD in children.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 01:19:14 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Hossain",
"Md Bayazid",
""
],
[
"Himel",
"Md Anwarul Islam",
""
],
[
"Rahim",
"Md Abdur",
""
],
[
"Mahmood",
"Shabbir",
""
],
[
"Miah",
"Abu Saleh Musa",
""
],
[
"Shin",
"Jungpil",
""
]
] | TITLE: Classification of ADHD and Healthy Children Using EEG Based Multi-Band
Spatial Features Enhancement
ABSTRACT: Attention Deficit Hyperactivity Disorder (ADHD) is a common
neurodevelopmental disorder in children, characterized by difficulties in
attention, hyperactivity, and impulsivity. Early and accurate diagnosis of ADHD
is critical for effective intervention and management. Electroencephalogram
(EEG) signals have emerged as a non-invasive and efficient tool for ADHD
detection due to their high temporal resolution and ability to capture neural
dynamics. In this study, we propose a method for classifying ADHD and healthy
children using EEG data from a benchmark dataset. There were 61 children with
ADHD and 60 healthy children, both boys and girls, aged 7 to 12. The EEG
signals, recorded from 19 channels, were processed to extract Power Spectral
Density (PSD) and Spectral Entropy (SE) features across five frequency bands,
resulting in a comprehensive 190-dimensional feature set. Among the evaluated
classifiers, a Support Vector Machine (SVM) with the RBF kernel demonstrated the
best performance with a mean cross-validation accuracy of
99.2\% and a standard deviation of 0.0079, indicating high robustness and
precision. These results highlight the potential of spatial features in
conjunction with machine learning for accurately classifying ADHD using EEG
data. This work contributes to developing non-invasive, data-driven tools for
early diagnosis and assessment of ADHD in children.
|
2504.04667 | Wan Tian | Wan Tian, Zhongfeng Qin | Interval-Valued Time Series Classification Using $D_K$-Distance | null | null | null | null | stat.ML cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, modeling and analysis of interval-valued time series have
garnered increasing attention in econometrics, finance, and statistics.
However, these studies have predominantly focused on statistical inference in
the forecasting of univariate and multivariate interval-valued time series,
overlooking another important aspect: classification. In this paper, we
introduce a classification approach that treats intervals as unified entities,
applicable to both univariate and multivariate interval-valued time series.
Specifically, we first extend the point-valued time series imaging methods to
interval-valued scenarios using the $D_K$-distance, enabling the imaging of
interval-valued time series. Then, we employ a suitable deep learning model for
classification on the obtained imaging dataset, aiming to achieve
classification for interval-valued time series. In theory, we derive a sharper
excess risk bound for deep multiclassifiers based on offset Rademacher
complexity. Finally, we validate the superiority of the proposed method through
comparisons with various existing point-valued time series classification
methods in both simulation studies and real data applications.
| [
{
"version": "v1",
"created": "Mon, 7 Apr 2025 01:31:31 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Tian",
"Wan",
""
],
[
"Qin",
"Zhongfeng",
""
]
] | TITLE: Interval-Valued Time Series Classification Using $D_K$-Distance
ABSTRACT: In recent years, modeling and analysis of interval-valued time series have
garnered increasing attention in econometrics, finance, and statistics.
However, these studies have predominantly focused on statistical inference in
the forecasting of univariate and multivariate interval-valued time series,
overlooking another important aspect: classification. In this paper, we
introduce a classification approach that treats intervals as unified entities,
applicable to both univariate and multivariate interval-valued time series.
Specifically, we first extend the point-valued time series imaging methods to
interval-valued scenarios using the $D_K$-distance, enabling the imaging of
interval-valued time series. Then, we employ a suitable deep learning model for
classification on the obtained imaging dataset, aiming to achieve
classification for interval-valued time series. In theory, we derive a sharper
excess risk bound for deep multiclassifiers based on offset Rademacher
complexity. Finally, we validate the superiority of the proposed method through
comparisons with various existing point-valued time series classification
methods in both simulation studies and real data applications.
|