id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
2408.07587 | Alessio Mora | Alessio Mora, Lorenzo Valerio, Paolo Bellavista, Andrea Passarella | FedQUIT: On-Device Federated Unlearning via a Quasi-Competent Virtual
Teacher | International Conference on Computer Vision 2025 (ICCV 2025) | null | null | null | cs.LG cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Federated Learning (FL) systems enable the collaborative training of machine
learning models without requiring centralized collection of individual data. FL
participants should have the ability to exercise their right to be forgotten,
ensuring their past contributions can be removed from the learned model upon
request. In this paper, we propose FedQUIT, a novel algorithm that uses
knowledge distillation to scrub the contribution of the data to forget from an
FL global model while preserving its generalization ability. FedQUIT directly
works on client devices that request to leave the federation, and leverages a
teacher-student framework. The FL global model acts as the teacher, and the
local model works as the student. To induce forgetting, FedQUIT tailors the
teacher's output on local data (the data to forget) penalizing the prediction
score of the true class. Unlike previous work, our method does not require
hardly viable assumptions for cross-device settings, such as storing historical
updates of participants or requiring access to proxy datasets. Experimental
results on various datasets and model architectures demonstrate that (i)
FedQUIT outperforms state-of-the-art competitors in forgetting data, (ii) has
the same computational requirements as a regular FedAvg round, and (iii)
reduces the cumulative communication costs by up to 117.6$\times$ compared to
retraining from scratch to restore the initial generalization performance after
unlearning.
| [
{
"version": "v1",
"created": "Wed, 14 Aug 2024 14:36:28 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Apr 2025 14:53:01 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Mora",
"Alessio",
""
],
[
"Valerio",
"Lorenzo",
""
],
[
"Bellavista",
"Paolo",
""
],
[
"Passarella",
"Andrea",
""
]
] | TITLE: FedQUIT: On-Device Federated Unlearning via a Quasi-Competent Virtual
Teacher
ABSTRACT: Federated Learning (FL) systems enable the collaborative training of machine
learning models without requiring centralized collection of individual data. FL
participants should have the ability to exercise their right to be forgotten,
ensuring their past contributions can be removed from the learned model upon
request. In this paper, we propose FedQUIT, a novel algorithm that uses
knowledge distillation to scrub the contribution of the data to forget from an
FL global model while preserving its generalization ability. FedQUIT directly
works on client devices that request to leave the federation, and leverages a
teacher-student framework. The FL global model acts as the teacher, and the
local model works as the student. To induce forgetting, FedQUIT tailors the
teacher's output on local data (the data to forget) penalizing the prediction
score of the true class. Unlike previous work, our method does not require
hardly viable assumptions for cross-device settings, such as storing historical
updates of participants or requiring access to proxy datasets. Experimental
results on various datasets and model architectures demonstrate that (i)
FedQUIT outperforms state-of-the-art competitors in forgetting data, (ii) has
the same computational requirements as a regular FedAvg round, and (iii)
reduces the cumulative communication costs by up to 117.6$\times$ compared to
retraining from scratch to restore the initial generalization performance after
unlearning.
|
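The FedQUIT abstract above describes distilling the local (student) model from a "virtual teacher" whose prediction score for the true class is penalized on the data to forget. A minimal sketch of that idea, assuming a logit-suppression rule and standard temperature-scaled KL distillation (both assumptions, not the authors' released implementation):

```python
import torch
import torch.nn.functional as F

def virtual_teacher_targets(teacher_logits, labels, penalty=-1e4):
    """Suppress the true-class score in the teacher's output (assumed rule)."""
    adjusted = teacher_logits.clone()
    adjusted[torch.arange(labels.size(0)), labels] = penalty  # forget the true class
    return F.softmax(adjusted, dim=-1)

def unlearning_step(student, teacher, x_forget, y_forget, optimizer, T=2.0):
    """One on-device distillation step that scrubs the forget data's contribution."""
    with torch.no_grad():
        targets = virtual_teacher_targets(teacher(x_forget) / T, y_forget)
    log_probs = F.log_softmax(student(x_forget) / T, dim=-1)
    loss = F.kl_div(log_probs, targets, reduction="batchmean") * (T ** 2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```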
2408.10265 | Arjhun Swaminathan | Arjhun Swaminathan, Mete Akg\"un | Distributed and Secure Kernel-Based Quantum Machine Learning | This paper contains 23 pages, 5 figures, 1 table and 3 appendices.
For associated supplementary code, see
https://github.com/mdppml/distributed-secure-kernel-based-QML | null | null | null | quant-ph cs.LG | http://creativecommons.org/licenses/by/4.0/ | Quantum computing promises to revolutionize machine learning, offering
significant efficiency gains in tasks such as clustering and distance
estimation. Additionally, it provides enhanced security through fundamental
principles like the measurement postulate and the no-cloning theorem, enabling
secure protocols such as quantum teleportation and quantum key distribution.
While advancements in secure quantum machine learning are notable, the
development of secure and distributed quantum analogues of kernel-based machine
learning techniques remains underexplored.
In this work, we present a novel approach for securely computing common
kernels, including polynomial, radial basis function (RBF), and Laplacian
kernels, when data is distributed, using quantum feature maps. Our methodology
introduces a robust framework that leverages quantum teleportation to ensure
secure and distributed kernel learning. The proposed architecture is validated
using IBM's Qiskit Aer Simulator on various public datasets.
| [
{
"version": "v1",
"created": "Fri, 16 Aug 2024 06:31:45 GMT"
},
{
"version": "v2",
"created": "Thu, 24 Oct 2024 12:33:41 GMT"
},
{
"version": "v3",
"created": "Sun, 6 Apr 2025 18:29:28 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Swaminathan",
"Arjhun",
""
],
[
"Akgün",
"Mete",
""
]
] | TITLE: Distributed and Secure Kernel-Based Quantum Machine Learning
ABSTRACT: Quantum computing promises to revolutionize machine learning, offering
significant efficiency gains in tasks such as clustering and distance
estimation. Additionally, it provides enhanced security through fundamental
principles like the measurement postulate and the no-cloning theorem, enabling
secure protocols such as quantum teleportation and quantum key distribution.
While advancements in secure quantum machine learning are notable, the
development of secure and distributed quantum analogues of kernel-based machine
learning techniques remains underexplored.
In this work, we present a novel approach for securely computing common
kernels, including polynomial, radial basis function (RBF), and Laplacian
kernels, when data is distributed, using quantum feature maps. Our methodology
introduces a robust framework that leverages quantum teleportation to ensure
secure and distributed kernel learning. The proposed architecture is validated
using IBM's Qiskit Aer Simulator on various public datasets.
|
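For the kernel-based paper above, the classical kernels it names (polynomial, RBF, Laplacian) and the block-wise assembly of a Gram matrix over data held by separate parties can be illustrated as follows; the secure, teleportation-based quantum evaluation itself is not reproduced here.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=1.0):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def laplacian_kernel(X, Y, gamma=1.0):
    d1 = np.abs(X[:, None, :] - Y[None, :, :]).sum(-1)
    return np.exp(-gamma * d1)

def polynomial_kernel(X, Y, degree=2, c=1.0):
    return (X @ Y.T + c) ** degree

# Two parties hold disjoint rows of the same feature space.
rng = np.random.default_rng(0)
X_a, X_b = rng.normal(size=(5, 4)), rng.normal(size=(7, 4))

# The full Gram matrix over the joint dataset is assembled block-wise, so each
# party only ever computes blocks that involve its own rows.
K = np.block([
    [rbf_kernel(X_a, X_a), rbf_kernel(X_a, X_b)],
    [rbf_kernel(X_b, X_a), rbf_kernel(X_b, X_b)],
])
print(K.shape)  # (12, 12)
```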
2408.11505 | Minghao Han | Minghao Han, Linhao Qu, Dingkang Yang, Xukun Zhang, Xiaoying Wang,
Lihua Zhang | MSCPT: Few-shot Whole Slide Image Classification with Multi-scale and
Context-focused Prompt Tuning | This work has been submitted to the IEEE TMI for possible publication | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multiple instance learning (MIL) has become a standard paradigm for the
weakly supervised classification of whole slide images (WSIs). However, this
paradigm relies on using a large number of labeled WSIs for training. The lack
of training data and the presence of rare diseases pose significant challenges
for these methods. Prompt tuning combined with pre-trained Vision-Language
models (VLMs) is an effective solution to the Few-shot Weakly Supervised WSI
Classification (FSWC) task. Nevertheless, applying prompt tuning methods
designed for natural images to WSIs presents three significant challenges: 1)
These methods fail to fully leverage the prior knowledge from the VLM's text
modality; 2) They overlook the essential multi-scale and contextual information
in WSIs, leading to suboptimal results; and 3) They lack exploration of
instance aggregation methods. To address these problems, we propose a
Multi-Scale and Context-focused Prompt Tuning (MSCPT) method for FSWC task.
Specifically, MSCPT employs the frozen large language model to generate
pathological visual language prior knowledge at multiple scales, guiding
hierarchical prompt tuning. Additionally, we design a graph prompt tuning
module to learn essential contextual information within WSI, and finally, a
non-parametric cross-guided instance aggregation module has been introduced to
derive the WSI-level features. Extensive experiments, visualizations, and
interpretability analyses were conducted on five datasets and three downstream
tasks using three VLMs, demonstrating the strong performance of our MSCPT. All
codes have been made publicly accessible at
https://github.com/Hanminghao/MSCPT.
| [
{
"version": "v1",
"created": "Wed, 21 Aug 2024 10:25:51 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 09:22:43 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Han",
"Minghao",
""
],
[
"Qu",
"Linhao",
""
],
[
"Yang",
"Dingkang",
""
],
[
"Zhang",
"Xukun",
""
],
[
"Wang",
"Xiaoying",
""
],
[
"Zhang",
"Lihua",
""
]
] | TITLE: MSCPT: Few-shot Whole Slide Image Classification with Multi-scale and
Context-focused Prompt Tuning
ABSTRACT: Multiple instance learning (MIL) has become a standard paradigm for the
weakly supervised classification of whole slide images (WSIs). However, this
paradigm relies on using a large number of labeled WSIs for training. The lack
of training data and the presence of rare diseases pose significant challenges
for these methods. Prompt tuning combined with pre-trained Vision-Language
models (VLMs) is an effective solution to the Few-shot Weakly Supervised WSI
Classification (FSWC) task. Nevertheless, applying prompt tuning methods
designed for natural images to WSIs presents three significant challenges: 1)
These methods fail to fully leverage the prior knowledge from the VLM's text
modality; 2) They overlook the essential multi-scale and contextual information
in WSIs, leading to suboptimal results; and 3) They lack exploration of
instance aggregation methods. To address these problems, we propose a
Multi-Scale and Context-focused Prompt Tuning (MSCPT) method for FSWC task.
Specifically, MSCPT employs the frozen large language model to generate
pathological visual language prior knowledge at multiple scales, guiding
hierarchical prompt tuning. Additionally, we design a graph prompt tuning
module to learn essential contextual information within WSI, and finally, a
non-parametric cross-guided instance aggregation module has been introduced to
derive the WSI-level features. Extensive experiments, visualizations, and
interpretability analyses were conducted on five datasets and three downstream
tasks using three VLMs, demonstrating the strong performance of our MSCPT. All
codes have been made publicly accessible at
https://github.com/Hanminghao/MSCPT.
|
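MSCPT's abstract mentions aggregating WSI patch instances into a slide-level feature. The sketch below shows generic attention-based MIL pooling, a common baseline for that step; it is not the paper's cross-guided aggregation module.

```python
import torch
import torch.nn as nn

class AttentionMILPool(nn.Module):
    """Generic attention-based instance aggregation over a bag of patch features."""
    def __init__(self, dim, hidden=128):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, instances):                            # instances: (num_patches, dim)
        weights = torch.softmax(self.score(instances), dim=0)  # attention over patches
        return (weights * instances).sum(dim=0)                # (dim,) slide-level feature

patches = torch.randn(512, 768)            # e.g. 512 patch embeddings from a frozen VLM
slide_feature = AttentionMILPool(768)(patches)
print(slide_feature.shape)                 # torch.Size([768])
```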
2408.11706 | Liyao Jiang | Liyao Jiang, Negar Hassanpour, Mohammad Salameh, Mohan Sai
Singamsetti, Fengyu Sun, Wei Lu, Di Niu | FRAP: Faithful and Realistic Text-to-Image Generation with Adaptive
Prompt Weighting | TMLR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Text-to-image (T2I) diffusion models have demonstrated impressive
capabilities in generating high-quality images given a text prompt. However,
ensuring the prompt-image alignment remains a considerable challenge, i.e.,
generating images that faithfully align with the prompt's semantics. Recent
works attempt to improve the faithfulness by optimizing the latent code, which
potentially could cause the latent code to go out-of-distribution and thus
produce unrealistic images. In this paper, we propose FRAP, a simple, yet
effective approach based on adaptively adjusting the per-token prompt weights
to improve prompt-image alignment and authenticity of the generated images. We
design an online algorithm to adaptively update each token's weight
coefficient, which is achieved by minimizing a unified objective function that
encourages object presence and the binding of object-modifier pairs. Through
extensive evaluations, we show FRAP generates images with significantly higher
prompt-image alignment to prompts from complex datasets, while having a lower
average latency compared to recent latent code optimization methods, e.g., 4
seconds faster than D&B on the COCO-Subject dataset. Furthermore, through
visual comparisons and evaluation of the CLIP-IQA-Real metric, we show that
FRAP not only improves prompt-image alignment but also generates more authentic
images with realistic appearances. We also explore combining FRAP with prompt
rewriting LLM to recover their degraded prompt-image alignment, where we
observe improvements in both prompt-image alignment and image quality. We
release the code at the following link:
https://github.com/LiyaoJiang1998/FRAP/.
| [
{
"version": "v1",
"created": "Wed, 21 Aug 2024 15:30:35 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Apr 2025 05:52:06 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Jiang",
"Liyao",
""
],
[
"Hassanpour",
"Negar",
""
],
[
"Salameh",
"Mohammad",
""
],
[
"Singamsetti",
"Mohan Sai",
""
],
[
"Sun",
"Fengyu",
""
],
[
"Lu",
"Wei",
""
],
[
"Niu",
"Di",
""
]
] | TITLE: FRAP: Faithful and Realistic Text-to-Image Generation with Adaptive
Prompt Weighting
ABSTRACT: Text-to-image (T2I) diffusion models have demonstrated impressive
capabilities in generating high-quality images given a text prompt. However,
ensuring the prompt-image alignment remains a considerable challenge, i.e.,
generating images that faithfully align with the prompt's semantics. Recent
works attempt to improve the faithfulness by optimizing the latent code, which
potentially could cause the latent code to go out-of-distribution and thus
produce unrealistic images. In this paper, we propose FRAP, a simple, yet
effective approach based on adaptively adjusting the per-token prompt weights
to improve prompt-image alignment and authenticity of the generated images. We
design an online algorithm to adaptively update each token's weight
coefficient, which is achieved by minimizing a unified objective function that
encourages object presence and the binding of object-modifier pairs. Through
extensive evaluations, we show FRAP generates images with significantly higher
prompt-image alignment to prompts from complex datasets, while having a lower
average latency compared to recent latent code optimization methods, e.g., 4
seconds faster than D&B on the COCO-Subject dataset. Furthermore, through
visual comparisons and evaluation of the CLIP-IQA-Real metric, we show that
FRAP not only improves prompt-image alignment but also generates more authentic
images with realistic appearances. We also explore combining FRAP with prompt
rewriting LLM to recover their degraded prompt-image alignment, where we
observe improvements in both prompt-image alignment and image quality. We
release the code at the following link:
https://github.com/LiyaoJiang1998/FRAP/.
|
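FRAP's abstract describes an online update of per-token prompt weights that minimizes an alignment objective. A schematic version, with the objective left as a placeholder (in the paper it is built from object presence and object-modifier binding signals inside the diffusion model, which are not reproduced here):

```python
import torch

def update_prompt_weights(text_embeds, alignment_loss_fn, steps=5, lr=0.1):
    """Adaptively re-weight per-token prompt embeddings by gradient descent on an
    alignment objective; `alignment_loss_fn` is a placeholder for that objective."""
    weights = torch.ones(text_embeds.size(0), 1, requires_grad=True)
    opt = torch.optim.Adam([weights], lr=lr)
    for _ in range(steps):
        loss = alignment_loss_fn(weights * text_embeds)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return weights.detach()

# Toy stand-in objective: pull the weighted embeddings toward a target direction.
target = torch.randn(1, 768)
toy_loss = lambda e: 1.0 - torch.cosine_similarity(e.mean(0, keepdim=True), target).mean()
w = update_prompt_weights(torch.randn(77, 768), toy_loss)
print(w.squeeze()[:5])
```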
2408.11795 | Feipeng Ma | Feipeng Ma, Yizhou Zhou, Zheyu Zhang, Shilin Yan, Hebei Li, Zilong He,
Siying Wu, Fengyun Rao, Yueyi Zhang, Xiaoyan Sun | EE-MLLM: A Data-Efficient and Compute-Efficient Multimodal Large
Language Model | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recent advancements in Multimodal Large Language Models (MLLMs) have
demonstrated satisfactory performance across various vision-language tasks.
Current approaches for vision and language interaction fall into two
categories: self-attention-based and cross-attention-based methods. However,
both approaches present inherent limitations, forcing a trade-off between data
and computational efficiency. To address this issue, we introduce the
Data-$\textbf{E}$fficient and Compute-$\textbf{E}$fficient $\textbf{MLLM}$
($\textbf{EE-MLLM}$). Specifically, we modify the original self-attention
mechanism in MLLM to a composite attention mechanism. This mechanism has two
key characteristics: 1) eliminating the computational overhead of
self-attention among visual tokens to achieve $\textbf{compute efficiency}$,
and 2) reusing the weights from each layer of LLM to facilitate effective
vision-language modality alignment for $\textbf{data efficiency}$. As a result,
EE-MLLM significantly outperforms Flamingo with limited training data, and
reduces the prefilling time to 79 ms on an H800 GPU, compared to LLaVA's 277
ms. To further investigate the efficiency of EE-MLLM, we present a
training-free variant named EE-MLLM-F, which reduces the computation cost of
self-attention-based method without additional training. Experimental results
demonstrate the effectiveness of EE-MLLM across a range of benchmarks,
including general-purpose datasets like MMBench and SeedBench, as well as
fine-grained tasks such as TextVQA and DocVQA.
| [
{
"version": "v1",
"created": "Wed, 21 Aug 2024 17:36:37 GMT"
},
{
"version": "v2",
"created": "Mon, 9 Sep 2024 18:57:01 GMT"
},
{
"version": "v3",
"created": "Sun, 6 Apr 2025 18:52:08 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Ma",
"Feipeng",
""
],
[
"Zhou",
"Yizhou",
""
],
[
"Zhang",
"Zheyu",
""
],
[
"Yan",
"Shilin",
""
],
[
"Li",
"Hebei",
""
],
[
"He",
"Zilong",
""
],
[
"Wu",
"Siying",
""
],
[
"Rao",
"Fengyun",
""
],
[
"Zhang",
"Yueyi",
""
],
[
"Sun",
"Xiaoyan",
""
]
] | TITLE: EE-MLLM: A Data-Efficient and Compute-Efficient Multimodal Large
Language Model
ABSTRACT: Recent advancements in Multimodal Large Language Models (MLLMs) have
demonstrated satisfactory performance across various vision-language tasks.
Current approaches for vision and language interaction fall into two
categories: self-attention-based and cross-attention-based methods. However,
both approaches present inherent limitations, forcing a trade-off between data
and computational efficiency. To address this issue, we introduce the
Data-$\textbf{E}$fficient and Compute-$\textbf{E}$fficient $\textbf{MLLM}$
($\textbf{EE-MLLM}$). Specifically, we modify the original self-attention
mechanism in MLLM to a composite attention mechanism. This mechanism has two
key characteristics: 1) eliminating the computational overhead of
self-attention among visual tokens to achieve $\textbf{compute efficiency}$,
and 2) reusing the weights from each layer of LLM to facilitate effective
vision-language modality alignment for $\textbf{data efficiency}$. As a result,
EE-MLLM significantly outperforms Flamingo with limited training data, and
reduces the prefilling time to 79 ms on an H800 GPU, compared to LLaVA's 277
ms. To further investigate the efficiency of EE-MLLM, we present a
training-free variant named EE-MLLM-F, which reduces the computation cost of
self-attention-based method without additional training. Experimental results
demonstrate the effectiveness of EE-MLLM across a range of benchmarks,
including general-purpose datasets like MMBench and SeedBench, as well as
fine-grained tasks such as TextVQA and DocVQA.
|
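One reading of EE-MLLM's composite attention is that visual tokens stop attending to one another while text tokens keep causal attention over everything. The mask below only illustrates that pattern; real compute savings come from skipping those attention blocks rather than masking them, and the weight-reuse component is not shown.

```python
import torch

def composite_attention_mask(num_visual, num_text):
    """Boolean mask (True = blocked) for one assumed composite attention pattern:
    causal attention everywhere, with visual-to-visual attention removed."""
    n = num_visual + num_text
    mask = torch.triu(torch.ones(n, n), diagonal=1).bool()   # causal mask
    vis = torch.arange(num_visual)
    mask[vis[:, None], vis[None, :]] = True                  # block visual-to-visual attention
    mask[vis, vis] = False                                   # keep each token's own position
    return mask

print(composite_attention_mask(3, 4).int())
```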
2408.14603 | Bongsoo Yi | Bongsoo Yi, Yue Kang, Yao Li | Biased Dueling Bandits with Stochastic Delayed Feedback | null | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | The dueling bandit problem, an essential variation of the traditional
multi-armed bandit problem, has become significantly prominent recently due to
its broad applications in online advertising, recommendation systems,
information retrieval, and more. However, in many real-world applications, the
feedback for actions is often subject to unavoidable delays and is not
immediately available to the agent. This partially observable issue poses a
significant challenge to existing dueling bandit literature, as it
significantly affects how quickly and accurately the agent can update their
policy on the fly. In this paper, we introduce and examine the biased dueling
bandit problem with stochastic delayed feedback, revealing that this new
practical problem will delve into a more realistic and intriguing scenario
involving a preference bias between the selections. We present two algorithms
designed to handle situations involving delay. Our first algorithm, requiring
complete delay distribution information, achieves the optimal regret bound for
the dueling bandit problem when there is no delay. The second algorithm is
tailored for situations where the distribution is unknown, but only the
expected value of delay is available. We provide a comprehensive regret
analysis for the two proposed algorithms and then evaluate their empirical
performance on both synthetic and real datasets.
| [
{
"version": "v1",
"created": "Mon, 26 Aug 2024 19:49:12 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Apr 2025 02:44:44 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Yi",
"Bongsoo",
""
],
[
"Kang",
"Yue",
""
],
[
"Li",
"Yao",
""
]
] | TITLE: Biased Dueling Bandits with Stochastic Delayed Feedback
ABSTRACT: The dueling bandit problem, an essential variation of the traditional
multi-armed bandit problem, has become significantly prominent recently due to
its broad applications in online advertising, recommendation systems,
information retrieval, and more. However, in many real-world applications, the
feedback for actions is often subject to unavoidable delays and is not
immediately available to the agent. This partially observable issue poses a
significant challenge to existing dueling bandit literature, as it
significantly affects how quickly and accurately the agent can update their
policy on the fly. In this paper, we introduce and examine the biased dueling
bandit problem with stochastic delayed feedback, revealing that this new
practical problem will delve into a more realistic and intriguing scenario
involving a preference bias between the selections. We present two algorithms
designed to handle situations involving delay. Our first algorithm, requiring
complete delay distribution information, achieves the optimal regret bound for
the dueling bandit problem when there is no delay. The second algorithm is
tailored for situations where the distribution is unknown, but only the
expected value of delay is available. We provide a comprehensive regret
analysis for the two proposed algorithms and then evaluate their empirical
performance on both synthetic and real datasets.
|
2408.15549 | Taiwei Shi | Taiwei Shi, Zhuoer Wang, Longqi Yang, Ying-Chun Lin, Zexue He,
Mengting Wan, Pei Zhou, Sujay Jauhar, Sihao Chen, Shan Xia, Hongfei Zhang,
Jieyu Zhao, Xiaofeng Xu, Xia Song, Jennifer Neville | WildFeedback: Aligning LLMs With In-situ User Interactions And Feedback | 24 pages | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | As large language models (LLMs) continue to advance, aligning these models
with human preferences has emerged as a critical challenge. Traditional
alignment methods, relying on human or LLM annotated datasets, are limited by
their resource-intensive nature, inherent subjectivity, misalignment with
real-world user preferences, and the risk of feedback loops that amplify model
biases. To overcome these limitations, we introduce WildFeedback, a novel
framework that leverages in-situ user feedback during conversations with LLMs
to create preference datasets automatically. Given a corpus of multi-turn
user-LLM conversations, WildFeedback identifies and classifies user feedback to
LLM responses between conversation turns. The user feedback is then used to
create examples of preferred and dispreferred responses according to users'
preference. Our experiments demonstrate that LLMs fine-tuned on the WildFeedback
dataset exhibit significantly improved alignment with user preferences, as
evidenced by both traditional benchmarks and our proposed checklist-guided
evaluation. By incorporating in-situ feedback from actual users, WildFeedback
addresses the scalability, subjectivity, and bias challenges that plague
existing approaches, marking a significant step toward developing LLMs that are
more responsive to the diverse and evolving needs of their users.
| [
{
"version": "v1",
"created": "Wed, 28 Aug 2024 05:53:46 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Feb 2025 06:14:31 GMT"
},
{
"version": "v3",
"created": "Sun, 6 Apr 2025 20:18:53 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Shi",
"Taiwei",
""
],
[
"Wang",
"Zhuoer",
""
],
[
"Yang",
"Longqi",
""
],
[
"Lin",
"Ying-Chun",
""
],
[
"He",
"Zexue",
""
],
[
"Wan",
"Mengting",
""
],
[
"Zhou",
"Pei",
""
],
[
"Jauhar",
"Sujay",
""
],
[
"Chen",
"Sihao",
""
],
[
"Xia",
"Shan",
""
],
[
"Zhang",
"Hongfei",
""
],
[
"Zhao",
"Jieyu",
""
],
[
"Xu",
"Xiaofeng",
""
],
[
"Song",
"Xia",
""
],
[
"Neville",
"Jennifer",
""
]
] | TITLE: WildFeedback: Aligning LLMs With In-situ User Interactions And Feedback
ABSTRACT: As large language models (LLMs) continue to advance, aligning these models
with human preferences has emerged as a critical challenge. Traditional
alignment methods, relying on human or LLM annotated datasets, are limited by
their resource-intensive nature, inherent subjectivity, misalignment with
real-world user preferences, and the risk of feedback loops that amplify model
biases. To overcome these limitations, we introduce WildFeedback, a novel
framework that leverages in-situ user feedback during conversations with LLMs
to create preference datasets automatically. Given a corpus of multi-turn
user-LLM conversations, WildFeedback identifies and classifies user feedback to
LLM responses between conversation turns. The user feedback is then used to
create examples of preferred and dispreferred responses according to users'
preference. Our experiments demonstrate that LLMs fine-tuned on the WildFeedback
dataset exhibit significantly improved alignment with user preferences, as
evidenced by both traditional benchmarks and our proposed checklist-guided
evaluation. By incorporating in-situ feedback from actual users, WildFeedback
addresses the scalability, subjectivity, and bias challenges that plague
existing approaches, marking a significant step toward developing LLMs that are
more responsive to the diverse and evolving needs of their users.
|
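WildFeedback's abstract describes mining preference pairs from in-situ user feedback between conversation turns. A rough sketch under an assumed conversation schema, with the feedback classifier left as a placeholder:

```python
def build_preference_pairs(conversation, classify_feedback):
    """Emit (preferred, dispreferred) response pairs whenever a user turn signals
    dissatisfaction with the previous assistant turn and the next assistant turn
    resolves it. `classify_feedback` and the turn schema are hypothetical."""
    pairs = []
    turns = conversation["turns"]                  # list of {"role", "text"} dicts (assumed)
    for i in range(1, len(turns) - 1):
        if turns[i]["role"] != "user":
            continue
        label = classify_feedback(turns[i]["text"])   # e.g. "negative" / "positive" / "neutral"
        if (label == "negative" and turns[i - 1]["role"] == "assistant"
                and turns[i + 1]["role"] == "assistant"):
            pairs.append({
                "prompt": turns[i - 2]["text"] if i >= 2 else "",
                "dispreferred": turns[i - 1]["text"],
                "preferred": turns[i + 1]["text"],
            })
    return pairs
```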
2409.02584 | Nishat Tasnime Diba | N. T. Diba, N. Akter, S. A. H. Chowdhury, J. E. Giti | BMI Prediction from Handwritten English Characters Using a Convolutional
Neural Network | The manuscript is being withdrawn due to identified issues that
require substantial revision and additional experiments. We plan to address
these concerns and resubmit a revised version in the future | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | A person's Body Mass Index, or BMI, is the most widely used parameter for
assessing their health. BMI is a crucial predictor of potential diseases that
may arise at higher body fat levels because it is correlated with body fat.
Conversely, a community's or an individual's nutritional status can be
determined using the BMI. Although deep learning models are used in several
studies to estimate BMI from face photos and other data, no previous research
established a clear connection between deep learning techniques for handwriting
analysis and BMI prediction. This article addresses this research gap with a
deep learning approach to estimating BMI from handwritten characters by
developing a convolutional neural network (CNN). A dataset containing samples
from 48 people in lowercase English scripts is successfully captured for the
BMI prediction task. The proposed CNN-based approach reports a commendable
accuracy of 99.92%. Performance comparison with other popular CNN architectures
reveals that AlexNet and InceptionV3 achieve the second and third-best
performance, with the accuracy of 99.69% and 99.53%, respectively.
| [
{
"version": "v1",
"created": "Wed, 4 Sep 2024 10:06:42 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Apr 2025 15:54:27 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Diba",
"N. T.",
""
],
[
"Akter",
"N.",
""
],
[
"Chowdhury",
"S. A. H.",
""
],
[
"Giti",
"J. E.",
""
]
] | TITLE: BMI Prediction from Handwritten English Characters Using a Convolutional
Neural Network
ABSTRACT: A person's Body Mass Index, or BMI, is the most widely used parameter for
assessing their health. BMI is a crucial predictor of potential diseases that
may arise at higher body fat levels because it is correlated with body fat.
Conversely, a community's or an individual's nutritional status can be
determined using the BMI. Although deep learning models are used in several
studies to estimate BMI from face photos and other data, no previous research
established a clear connection between deep learning techniques for handwriting
analysis and BMI prediction. This article addresses this research gap with a
deep learning approach to estimating BMI from handwritten characters by
developing a convolutional neural network (CNN). A dataset containing samples
from 48 people in lowercase English scripts is successfully captured for the
BMI prediction task. The proposed CNN-based approach reports a commendable
accuracy of 99.92%. Performance comparison with other popular CNN architectures
reveals that AlexNet and InceptionV3 achieve the second and third-best
performance, with the accuracy of 99.69% and 99.53%, respectively.
|
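As a generic illustration of the CNN-based BMI prediction setup above (the paper itself has been withdrawn for revision), a small convolutional regressor over handwriting crops might look like the following; the input size and regression head are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class HandwritingBMINet(nn.Module):
    """Minimal CNN sketch mapping handwritten-character crops to a BMI value."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 1)          # predicted BMI

    def forward(self, x):                     # x: (batch, 1, 64, 64) grayscale crops
        return self.head(self.features(x).flatten(1)).squeeze(-1)

model = HandwritingBMINet()
print(model(torch.randn(8, 1, 64, 64)).shape)   # torch.Size([8])
```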
2409.04025 | Yangguang Chen | Yangguang Chen, Tong Wang, Guanzhou Chen, Kun Zhu, Xiaoliang Tan,
Jiaqi Wang, Wenchao Guo, Qing Wang, Xiaolong Luo, Xiaodong Zhang | BFA-YOLO: A balanced multiscale object detection network for building
fa\c{c}ade attachments detection | 21 pages | null | 10.1016/j.aei.2025.103289 | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The detection of fa\c{c}ade elements on buildings, such as doors, windows,
balconies, air conditioning units, billboards, and glass curtain walls, is a
critical step in automating the creation of Building Information Modeling
(BIM). Yet, this field faces significant challenges, including the uneven
distribution of fa\c{c}ade elements, the presence of small objects, and
substantial background noise, which hamper detection accuracy. To address these
issues, we develop the BFA-YOLO model and the BFA-3D dataset in this study. The
BFA-YOLO model is an advanced architecture designed specifically for analyzing
multi-view images of fa\c{c}ade attachments. It integrates three novel
components: the Feature Balanced Spindle Module (FBSM) that tackles the issue
of uneven object distribution; the Target Dynamic Alignment Task Detection Head
(TDATH) that enhances the detection of small objects; and the Position Memory
Enhanced Self-Attention Mechanism (PMESA), aimed at reducing the impact of
background noise. These elements collectively enable BFA-YOLO to effectively
address each challenge, thereby improving model robustness and detection
precision. The BFA-3D dataset offers multi-view images with precise
annotations across a wide range of fa\c{c}ade attachment categories. This
dataset is developed to address the limitations present in existing fa\c{c}ade
detection datasets, which often feature a single perspective and insufficient
category coverage. Through comparative analysis, BFA-YOLO demonstrated
improvements of 1.8\% and 2.9\% in mAP$_{50}$ on the BFA-3D dataset and the
public Fa\c{c}ade-WHU dataset, respectively, when compared to the baseline
YOLOv8 model. These results highlight the superior performance of BFA-YOLO in
fa\c{c}ade element detection and the advancement of intelligent BIM
technologies.
| [
{
"version": "v1",
"created": "Fri, 6 Sep 2024 04:44:52 GMT"
},
{
"version": "v2",
"created": "Mon, 11 Nov 2024 06:23:21 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Chen",
"Yangguang",
""
],
[
"Wang",
"Tong",
""
],
[
"Chen",
"Guanzhou",
""
],
[
"Zhu",
"Kun",
""
],
[
"Tan",
"Xiaoliang",
""
],
[
"Wang",
"Jiaqi",
""
],
[
"Guo",
"Wenchao",
""
],
[
"Wang",
"Qing",
""
],
[
"Luo",
"Xiaolong",
""
],
[
"Zhang",
"Xiaodong",
""
]
] | TITLE: BFA-YOLO: A balanced multiscale object detection network for building
fa\c{c}ade attachments detection
ABSTRACT: The detection of fa\c{c}ade elements on buildings, such as doors, windows,
balconies, air conditioning units, billboards, and glass curtain walls, is a
critical step in automating the creation of Building Information Modeling
(BIM). Yet, this field faces significant challenges, including the uneven
distribution of fa\c{c}ade elements, the presence of small objects, and
substantial background noise, which hamper detection accuracy. To address these
issues, we develop the BFA-YOLO model and the BFA-3D dataset in this study. The
BFA-YOLO model is an advanced architecture designed specifically for analyzing
multi-view images of fa\c{c}ade attachments. It integrates three novel
components: the Feature Balanced Spindle Module (FBSM) that tackles the issue
of uneven object distribution; the Target Dynamic Alignment Task Detection Head
(TDATH) that enhances the detection of small objects; and the Position Memory
Enhanced Self-Attention Mechanism (PMESA), aimed at reducing the impact of
background noise. These elements collectively enable BFA-YOLO to effectively
address each challenge, thereby improving model robustness and detection
precision. The BFA-3D dataset offers multi-view images with precise
annotations across a wide range of fa\c{c}ade attachment categories. This
dataset is developed to address the limitations present in existing fa\c{c}ade
detection datasets, which often feature a single perspective and insufficient
category coverage. Through comparative analysis, BFA-YOLO demonstrated
improvements of 1.8\% and 2.9\% in mAP$_{50}$ on the BFA-3D dataset and the
public Fa\c{c}ade-WHU dataset, respectively, when compared to the baseline
YOLOv8 model. These results highlight the superior performance of BFA-YOLO in
fa\c{c}ade element detection and the advancement of intelligent BIM
technologies.
|
2409.04363 | Hao Luo | Hao Luo, Baoliang Chen, Lingyu Zhu, Peilin Chen and Shiqi Wang | RCNet: Deep Recurrent Collaborative Network for Multi-View Low-Light
Image Enhancement | Accepted by IEEE Transactions on Multimedia (TMM) | null | 10.1109/TMM.2024.3521760 | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Scene observation from multiple perspectives would bring a more comprehensive
visual experience. However, in the context of acquiring multiple views in the
dark, the highly correlated views are seriously alienated, making it
challenging to improve scene understanding with auxiliary views. Recent single
image-based enhancement methods may not be able to provide consistently
desirable restoration performance for all views due to the ignorance of
potential feature correspondence among different views. To alleviate this
issue, we make the first attempt to investigate multi-view low-light image
enhancement. First, we construct a new dataset called Multi-View Low-light
Triplets (MVLT), including 1,860 pairs of triple images with large illumination
ranges and wide noise distribution. Each triplet is equipped with three
different viewpoints towards the same scene. Second, we propose a deep
multi-view enhancement framework based on the Recurrent Collaborative Network
(RCNet). Specifically, in order to benefit from similar texture correspondence
across different views, we design the recurrent feature enhancement, alignment
and fusion (ReEAF) module, in which intra-view feature enhancement (Intra-view
EN) followed by inter-view feature alignment and fusion (Inter-view AF) is
performed to model the intra-view and inter-view feature propagation
sequentially via multi-view collaboration. In addition, two different modules
from enhancement to alignment (E2A) and from alignment to enhancement (A2E) are
developed to enable the interactions between Intra-view EN and Inter-view AF,
which explicitly utilize attentive feature weighting and sampling for
enhancement and alignment, respectively. Experimental results demonstrate that
our RCNet significantly outperforms other state-of-the-art methods. All of our
dataset, code, and model will be available at https://github.com/hluo29/RCNet.
| [
{
"version": "v1",
"created": "Fri, 6 Sep 2024 15:49:49 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Apr 2025 00:13:13 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Luo",
"Hao",
""
],
[
"Chen",
"Baoliang",
""
],
[
"Zhu",
"Lingyu",
""
],
[
"Chen",
"Peilin",
""
],
[
"Wang",
"Shiqi",
""
]
] | TITLE: RCNet: Deep Recurrent Collaborative Network for Multi-View Low-Light
Image Enhancement
ABSTRACT: Scene observation from multiple perspectives would bring a more comprehensive
visual experience. However, in the context of acquiring multiple views in the
dark, the highly correlated views are seriously alienated, making it
challenging to improve scene understanding with auxiliary views. Recent single
image-based enhancement methods may not be able to provide consistently
desirable restoration performance for all views due to the ignorance of
potential feature correspondence among different views. To alleviate this
issue, we make the first attempt to investigate multi-view low-light image
enhancement. First, we construct a new dataset called Multi-View Low-light
Triplets (MVLT), including 1,860 pairs of triple images with large illumination
ranges and wide noise distribution. Each triplet is equipped with three
different viewpoints towards the same scene. Second, we propose a deep
multi-view enhancement framework based on the Recurrent Collaborative Network
(RCNet). Specifically, in order to benefit from similar texture correspondence
across different views, we design the recurrent feature enhancement, alignment
and fusion (ReEAF) module, in which intra-view feature enhancement (Intra-view
EN) followed by inter-view feature alignment and fusion (Inter-view AF) is
performed to model the intra-view and inter-view feature propagation
sequentially via multi-view collaboration. In addition, two different modules
from enhancement to alignment (E2A) and from alignment to enhancement (A2E) are
developed to enable the interactions between Intra-view EN and Inter-view AF,
which explicitly utilize attentive feature weighting and sampling for
enhancement and alignment, respectively. Experimental results demonstrate that
our RCNet significantly outperforms other state-of-the-art methods. All of our
dataset, code, and model will be available at https://github.com/hluo29/RCNet.
|
2409.06481 | Nhat-Tan Bui Mr | Nhat-Tan Bui and Dinh-Hieu Hoang and Quoc-Huy Trinh and Minh-Triet
Tran and Truong Nguyen and Susan Gauch | NeIn: Telling What You Don't Want | Accepted to CVPR 2025 Workshop SyntaGen. Project page:
https://tanbuinhat.github.io/NeIn/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Negation is a fundamental linguistic concept used by humans to convey
information that they do not desire. Despite this, minimal research has focused
on negation within text-guided image editing. This lack of research means that
vision-language models (VLMs) for image editing may struggle to understand
negation, implying that they struggle to provide accurate results. One barrier
to achieving human-level intelligence is the lack of a standard collection by
which research into negation can be evaluated. This paper presents the first
large-scale dataset, Negative Instruction (NeIn), for studying negation within
instruction-based image editing. Our dataset comprises 366,957 quintuplets,
i.e., source image, original caption, selected object, negative sentence, and
target image in total, including 342,775 queries for training and 24,182
queries for benchmarking image editing methods. Specifically, we automatically
generate NeIn based on a large, existing vision-language dataset, MS-COCO, via
two steps: generation and filtering. During the generation phase, we leverage
two VLMs, BLIP and InstructPix2Pix (fine-tuned on MagicBrush dataset), to
generate NeIn's samples and the negative clauses that express the content of
the source image. In the subsequent filtering phase, we apply BLIP and
LLaVA-NeXT to remove erroneous samples. Additionally, we introduce an
evaluation protocol to assess the negation understanding for image editing
models. Extensive experiments using our dataset across multiple VLMs for
text-guided image editing demonstrate that even recent state-of-the-art VLMs
struggle to understand negative queries.
| [
{
"version": "v1",
"created": "Mon, 9 Sep 2024 04:54:34 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Apr 2025 20:42:51 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Bui",
"Nhat-Tan",
""
],
[
"Hoang",
"Dinh-Hieu",
""
],
[
"Trinh",
"Quoc-Huy",
""
],
[
"Tran",
"Minh-Triet",
""
],
[
"Nguyen",
"Truong",
""
],
[
"Gauch",
"Susan",
""
]
] | TITLE: NeIn: Telling What You Don't Want
ABSTRACT: Negation is a fundamental linguistic concept used by humans to convey
information that they do not desire. Despite this, minimal research has focused
on negation within text-guided image editing. This lack of research means that
vision-language models (VLMs) for image editing may struggle to understand
negation, implying that they struggle to provide accurate results. One barrier
to achieving human-level intelligence is the lack of a standard collection by
which research into negation can be evaluated. This paper presents the first
large-scale dataset, Negative Instruction (NeIn), for studying negation within
instruction-based image editing. Our dataset comprises 366,957 quintuplets,
i.e., source image, original caption, selected object, negative sentence, and
target image in total, including 342,775 queries for training and 24,182
queries for benchmarking image editing methods. Specifically, we automatically
generate NeIn based on a large, existing vision-language dataset, MS-COCO, via
two steps: generation and filtering. During the generation phase, we leverage
two VLMs, BLIP and InstructPix2Pix (fine-tuned on MagicBrush dataset), to
generate NeIn's samples and the negative clauses that express the content of
the source image. In the subsequent filtering phase, we apply BLIP and
LLaVA-NeXT to remove erroneous samples. Additionally, we introduce an
evaluation protocol to assess the negation understanding for image editing
models. Extensive experiments using our dataset across multiple VLMs for
text-guided image editing demonstrate that even recent state-of-the-art VLMs
struggle to understand negative queries.
|
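NeIn's two-step generate-then-filter construction can be sketched with placeholder model wrappers; the caption, editing, and verification calls below are hypothetical stand-ins for BLIP, InstructPix2Pix, and the filtering models, not real APIs of those libraries.

```python
def build_nein_samples(coco_samples, caption_model, edit_model, verifier):
    """Generate-then-filter pipeline sketched from the abstract; all three model
    callables and the sample schema are hypothetical placeholders."""
    samples = []
    for item in coco_samples:
        obj = item["selected_object"]
        negative = f"Do not include any {obj} in the image."
        caption = caption_model(item["image"])                    # describe the source image
        target = edit_model(item["image"], f"remove the {obj}")   # produce the target image
        if verifier(target, negative):                            # keep only consistent edits
            samples.append({
                "source": item["image"], "caption": caption,
                "object": obj, "negative_sentence": negative, "target": target,
            })
    return samples
```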
2409.10803 | Zeheng Wang | Zeheng Wang, Fangzhou Wang, Liang Li, Zirui Wang, Timothy van der
Laan, Ross C. C. Leon, Jing-Kai Huang, Muhammad Usman | Quantum Kernel Learning for Small Dataset Modeling in Semiconductor
Fabrication: Application to Ohmic Contact | Journal version draft | null | null | null | cs.LG cs.ET quant-ph | http://creativecommons.org/licenses/by/4.0/ | Complex semiconductor fabrication processes, such as Ohmic contact formation
in unconventional semiconductor devices, pose significant modeling challenges
due to a large number of operational variables and the difficulty of collecting
large, high-quality datasets. Classical machine learning (CML) models often
struggle in such scenarios, where the data is both high-dimensional and limited
in quantity, leading to overfitting and reduced predictive accuracy. To address
this challenge, we develop the first application of quantum machine learning
(QML) to model this semiconductor process, leveraging quantum systems' capacity
to efficiently capture complex correlations in high-dimensional spaces and
generalize well with small datasets. Using only 159 experimental samples
augmented via a variational autoencoder, we report a quantum kernel-based
regressor (SQKR) with a static 2-level ZZ feature map. The SQKR consistently
outperformed six mainstream CML models across all evaluation metrics, achieving
the lowest mean absolute error (MAE), mean squared error (MSE), and root mean
squared error (RMSE), with repeated experiments confirming its robustness.
Notably, SQKR achieved an MAE of 0.314 Ohm-mm with data from experimental
verification, demonstrating its ability to effectively model semiconductor
fabrication processes despite limited data availability. These results
highlight QML's unique capability to handle small yet high-dimensional datasets
in the semiconductor industry, making it a promising alternative to classical
approaches for semiconductor process modeling.
| [
{
"version": "v1",
"created": "Tue, 17 Sep 2024 00:44:49 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 02:57:39 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wang",
"Zeheng",
""
],
[
"Wang",
"Fangzhou",
""
],
[
"Li",
"Liang",
""
],
[
"Wang",
"Zirui",
""
],
[
"van der Laan",
"Timothy",
""
],
[
"Leon",
"Ross C. C.",
""
],
[
"Huang",
"Jing-Kai",
""
],
[
"Usman",
"Muhammad",
""
]
] | TITLE: Quantum Kernel Learning for Small Dataset Modeling in Semiconductor
Fabrication: Application to Ohmic Contact
ABSTRACT: Complex semiconductor fabrication processes, such as Ohmic contact formation
in unconventional semiconductor devices, pose significant modeling challenges
due to a large number of operational variables and the difficulty of collecting
large, high-quality datasets. Classical machine learning (CML) models often
struggle in such scenarios, where the data is both high-dimensional and limited
in quantity, leading to overfitting and reduced predictive accuracy. To address
this challenge, we develop the first application of quantum machine learning
(QML) to model this semiconductor process, leveraging quantum systems' capacity
to efficiently capture complex correlations in high-dimensional spaces and
generalize well with small datasets. Using only 159 experimental samples
augmented via a variational autoencoder, we report a quantum kernel-based
regressor (SQKR) with a static 2-level ZZ feature map. The SQKR consistently
outperformed six mainstream CML models across all evaluation metrics, achieving
the lowest mean absolute error (MAE), mean squared error (MSE), and root mean
squared error (RMSE), with repeated experiments confirming its robustness.
Notably, SQKR achieved an MAE of 0.314 Ohm-mm with data from experimental
verification, demonstrating its ability to effectively model semiconductor
fabrication processes despite limited data availability. These results
highlight QML's unique capability to handle small yet high-dimensional datasets
in the semiconductor industry, making it a promising alternative to classical
approaches for semiconductor process modeling.
|
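A quantum kernel regressor like the SQKR described above ultimately feeds a precomputed Gram matrix into a classical kernel method. The sketch below uses a classical RBF kernel as a stand-in for the quantum-fidelity kernel a ZZ feature map would produce.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)
X_train, X_test = rng.normal(size=(120, 8)), rng.normal(size=(39, 8))
y_train = rng.normal(size=120)

# Stand-in Gram matrices: a real quantum kernel regressor would fill these with
# state fidelities from a quantum feature map; an RBF kernel is used here instead.
K_train = rbf_kernel(X_train, X_train, gamma=0.5)
K_test = rbf_kernel(X_test, X_train, gamma=0.5)

reg = SVR(kernel="precomputed", C=10.0).fit(K_train, y_train)
pred = reg.predict(K_test)
print(pred[:3])
```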
2409.11744 | Weiyan Shi | Weiyan Shi, Haihong Zhang, Wei Wang, Kenny Tsu Wei Choo | Exploring Gaze Pattern Differences Between Autistic and Neurotypical
Children: Clustering, Visualisation, and Prediction | work in progress | null | null | null | cs.CV cs.AI cs.HC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Autism Spectrum Disorder (ASD) affects children's social and communication
abilities, with eye-tracking widely used to identify atypical gaze patterns.
While unsupervised clustering can automate the creation of areas of interest
for gaze feature extraction, the use of internal cluster validity indices, like
Silhouette Coefficient, to distinguish gaze pattern differences between ASD and
typically developing (TD) children remains underexplored. We explore whether
internal cluster validity indices can distinguish ASD from TD children.
Specifically, we apply seven clustering algorithms to gaze points and extract
63 internal cluster validity indices to reveal correlations with ASD diagnosis.
Using these indices, we train predictive models for ASD diagnosis. Experiments
on three datasets demonstrate high predictive accuracy (81\% AUC), validating
the effectiveness of these indices.
| [
{
"version": "v1",
"created": "Wed, 18 Sep 2024 06:56:06 GMT"
},
{
"version": "v2",
"created": "Wed, 12 Feb 2025 06:53:14 GMT"
},
{
"version": "v3",
"created": "Sun, 6 Apr 2025 05:59:18 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Shi",
"Weiyan",
""
],
[
"Zhang",
"Haihong",
""
],
[
"Wang",
"Wei",
""
],
[
"Choo",
"Kenny Tsu Wei",
""
]
] | TITLE: Exploring Gaze Pattern Differences Between Autistic and Neurotypical
Children: Clustering, Visualisation, and Prediction
ABSTRACT: Autism Spectrum Disorder (ASD) affects children's social and communication
abilities, with eye-tracking widely used to identify atypical gaze patterns.
While unsupervised clustering can automate the creation of areas of interest
for gaze feature extraction, the use of internal cluster validity indices, like
Silhouette Coefficient, to distinguish gaze pattern differences between ASD and
typically developing (TD) children remains underexplored. We explore whether
internal cluster validity indices can distinguish ASD from TD children.
Specifically, we apply seven clustering algorithms to gaze points and extract
63 internal cluster validity indices to reveal correlations with ASD diagnosis.
Using these indices, we train predictive models for ASD diagnosis. Experiments
on three datasets demonstrate high predictive accuracy (81\% AUC), validating
the effectiveness of these indices.
|
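The gaze-pattern pipeline above (cluster gaze points, compute internal cluster validity indices, train a classifier on them) maps directly onto standard scikit-learn components. The sketch uses a few indices and random placeholder labels rather than the study's 63 indices, seven clustering algorithms, and real diagnoses.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, davies_bouldin_score, calinski_harabasz_score
from sklearn.linear_model import LogisticRegression

def validity_features(gaze_xy, k_values=(2, 3, 4)):
    """A few internal cluster validity indices computed on one child's gaze points."""
    feats = []
    for k in k_values:
        labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(gaze_xy)
        feats += [silhouette_score(gaze_xy, labels),
                  davies_bouldin_score(gaze_xy, labels),
                  calinski_harabasz_score(gaze_xy, labels)]
    return feats

rng = np.random.default_rng(0)
X = np.array([validity_features(rng.normal(size=(200, 2))) for _ in range(40)])
y = rng.integers(0, 2, size=40)        # placeholder ASD/TD labels
clf = LogisticRegression(max_iter=1000).fit(X, y)
print(clf.score(X, y))
```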
2409.13725 | Grace Proebsting | Oghenefejiro Isaacs Anigboro, Charlie M. Crawford, Grace Proebsting,
Dana\"e Metaxa, Sorelle A. Friedler | Identity-related Speech Suppression in Generative AI Content Moderation | null | null | null | null | cs.CL cs.CY cs.HC | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Automated content moderation has long been used to help identify and filter
undesired user-generated content online. Generative AI systems now use such
filters to keep undesired generated content from being created by or shown to
users. From classrooms to Hollywood, as generative AI is increasingly used for
creative or expressive text generation, whose stories will these technologies
allow to be told, and whose will they suppress? In this paper, we define and
introduce measures of speech suppression, focusing on speech related to
different identity groups incorrectly filtered by a range of content moderation
APIs. Using both short-form, user-generated datasets traditional in content
moderation and longer generative AI-focused data, including two datasets we
introduce in this work, we create a benchmark for measurement of speech
suppression for nine identity groups. Across one traditional and four
generative AI-focused automated content moderation services tested, we find
that identity-related speech is more likely to be incorrectly suppressed than
other speech. We find differences in identity-related speech suppression for
traditional versus generative AI data, with APIs performing better on
generative AI data but worse on longer text instances, and by identity, with
identity-specific reasons for incorrect flagging behavior. Overall, we find
that on traditional short-form data incorrectly suppressed speech is likely to
be political, while for generative AI creative data it is likely to be
television violence.
| [
{
"version": "v1",
"created": "Mon, 9 Sep 2024 14:34:51 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Apr 2025 00:30:38 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Anigboro",
"Oghenefejiro Isaacs",
""
],
[
"Crawford",
"Charlie M.",
""
],
[
"Proebsting",
"Grace",
""
],
[
"Metaxa",
"Danaë",
""
],
[
"Friedler",
"Sorelle A.",
""
]
] | TITLE: Identity-related Speech Suppression in Generative AI Content Moderation
ABSTRACT: Automated content moderation has long been used to help identify and filter
undesired user-generated content online. Generative AI systems now use such
filters to keep undesired generated content from being created by or shown to
users. From classrooms to Hollywood, as generative AI is increasingly used for
creative or expressive text generation, whose stories will these technologies
allow to be told, and whose will they suppress? In this paper, we define and
introduce measures of speech suppression, focusing on speech related to
different identity groups incorrectly filtered by a range of content moderation
APIs. Using both short-form, user-generated datasets traditional in content
moderation and longer generative AI-focused data, including two datasets we
introduce in this work, we create a benchmark for measurement of speech
suppression for nine identity groups. Across one traditional and four
generative AI-focused automated content moderation services tested, we find
that identity-related speech is more likely to be incorrectly suppressed than
other speech. We find differences in identity-related speech suppression for
traditional versus generative AI data, with APIs performing better on
generative AI data but worse on longer text instances, and by identity, with
identity-specific reasons for incorrect flagging behavior. Overall, we find
that on traditional short-form data incorrectly suppressed speech is likely to
be political, while for generative AI creative data it is likely to be
television violence.
|
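A minimal version of the speech-suppression measurement described above is the per-group rate at which non-toxic text is incorrectly flagged by a moderation API; the rows and column names below are illustrative only.

```python
import pandas as pd

# Toy benchmark rows: identity group, ground-truth toxicity, and the API's decision.
df = pd.DataFrame({
    "group":   ["lgbtq", "lgbtq", "religion", "religion", "none", "none"],
    "toxic":   [False,   False,   False,      True,       False,  False],
    "flagged": [True,    False,   True,       True,       False,  True],
})

# Speech suppression here = share of non-toxic texts incorrectly flagged, per group.
benign = df[~df["toxic"]]
suppression = benign.groupby("group")["flagged"].mean()
print(suppression)
```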
2409.18219 | Kyle Stein | Kyle Stein, Arash Mahyari, Guillermo Francia III, Eman El-Sheikh | Packet Inspection Transformer: A Self-Supervised Journey to Unseen
Malware Detection with Few Samples | null | null | null | null | cs.CR cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | As networks continue to expand and become more interconnected, the need for
novel malware detection methods becomes more pronounced. Traditional security
measures are increasingly inadequate against the sophistication of modern cyber
attacks. Deep Packet Inspection (DPI) has been pivotal in enhancing network
security, offering an in-depth analysis of network traffic that surpasses
conventional monitoring techniques. DPI not only examines the metadata of
network packets, but also dives into the actual content being carried within
the packet payloads, providing a comprehensive view of the data flowing through
networks. While the integration of advanced deep learning techniques with DPI
has introduced modern methodologies into malware detection and network traffic
classification, state-of-the-art supervised learning approaches are limited by
their reliance on large amounts of annotated data and their inability to
generalize to novel, unseen malware threats. To address these limitations, this
paper leverages the recent advancements in self-supervised learning (SSL) and
few-shot learning (FSL). Our proposed self-supervised approach trains a
transformer via SSL to learn the embedding of packet content, including
payload, from vast amounts of unlabeled data by masking portions of packets,
leading to a learned representation that generalizes to various downstream
tasks. Once the representation is extracted from the packets, they are used to
train a malware detection algorithm. The representation obtained from the
transformer is then used to adapt the malware detector to novel types of
attacks using few-shot learning approaches. Our experimental results
demonstrate that our method achieves classification accuracies of up to 94.76%
on the UNSW-NB15 dataset and 83.25% on the CIC-IoT23 dataset.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2024 18:55:52 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Feb 2025 18:53:06 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Stein",
"Kyle",
""
],
[
"Mahyari",
"Arash",
""
],
[
"Francia",
"Guillermo",
"III"
],
[
"El-Sheikh",
"Eman",
""
]
] | TITLE: Packet Inspection Transformer: A Self-Supervised Journey to Unseen
Malware Detection with Few Samples
ABSTRACT: As networks continue to expand and become more interconnected, the need for
novel malware detection methods becomes more pronounced. Traditional security
measures are increasingly inadequate against the sophistication of modern cyber
attacks. Deep Packet Inspection (DPI) has been pivotal in enhancing network
security, offering an in-depth analysis of network traffic that surpasses
conventional monitoring techniques. DPI not only examines the metadata of
network packets, but also dives into the actual content being carried within
the packet payloads, providing a comprehensive view of the data flowing through
networks. While the integration of advanced deep learning techniques with DPI
has introduced modern methodologies into malware detection and network traffic
classification, state-of-the-art supervised learning approaches are limited by
their reliance on large amounts of annotated data and their inability to
generalize to novel, unseen malware threats. To address these limitations, this
paper leverages the recent advancements in self-supervised learning (SSL) and
few-shot learning (FSL). Our proposed self-supervised approach trains a
transformer via SSL to learn the embedding of packet content, including
payload, from vast amounts of unlabeled data by masking portions of packets,
leading to a learned representation that generalizes to various downstream
tasks. Once the representation is extracted from the packets, they are used to
train a malware detection algorithm. The representation obtained from the
transformer is then used to adapt the malware detector to novel types of
attacks using few-shot learning approaches. Our experimental results
demonstrate that our method achieves classification accuracies of up to 94.76%
on the UNSW-NB15 dataset and 83.25% on the CIC-IoT23 dataset.
|
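The record above describes masking portions of packet bytes and pretraining a transformer to reconstruct them before reusing the learned representation for malware detection. As an illustration only, here is a minimal masked-byte pretraining sketch in that style; the vocabulary size, masking ratio, model dimensions, and the random stand-in packets are assumptions, not details taken from the paper.

```python
# Minimal masked-byte pretraining sketch (illustrative; hyperparameters are assumptions).
import torch
import torch.nn as nn

VOCAB = 256          # one token per byte value
MASK_ID = 256        # extra id used as the [MASK] token
SEQ_LEN, D_MODEL = 64, 128

class PacketEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB + 1, D_MODEL)
        self.pos = nn.Parameter(torch.zeros(1, SEQ_LEN, D_MODEL))
        layer = nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(D_MODEL, VOCAB)   # predict the original byte values

    def forward(self, tokens):
        h = self.encoder(self.embed(tokens) + self.pos)
        return self.head(h), h                  # logits and per-byte representations

def mask_bytes(batch, ratio=0.15):
    """Replace a random subset of byte tokens with MASK_ID."""
    mask = torch.rand(batch.shape) < ratio
    corrupted = batch.clone()
    corrupted[mask] = MASK_ID
    return corrupted, mask

model = PacketEncoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
packets = torch.randint(0, VOCAB, (32, SEQ_LEN))   # stand-in for unlabeled payload bytes
corrupted, mask = mask_bytes(packets)
logits, _ = model(corrupted)
loss = nn.functional.cross_entropy(logits[mask], packets[mask])
loss.backward(); opt.step()
```

After pretraining of this kind, the per-byte representations (second return value) would be pooled and fed to a downstream detector, which is the role the learned embedding plays in the abstract above.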
2409.18370 | Yi Ding | Su Chen, Yi Ding, Hiroe Miyake, Xiaojun Li | Discovery and inversion of the viscoelastic wave equation in
inhomogeneous media | null | null | null | null | cs.LG physics.geo-ph | http://creativecommons.org/licenses/by/4.0/ | In scientific machine learning, the task of identifying partial differential
equations accurately from sparse and noisy data poses a significant challenge.
Current sparse regression methods may identify inaccurate equations on sparse
and noisy datasets and are not suitable for varying coefficients. To address
this issue, we propose a hybrid framework that combines two alternating
direction optimization phases: discovery and embedding. The discovery phase
employs current well-developed sparse regression techniques to preliminarily
identify governing equations from observations. The embedding phase implements
a recurrent convolutional neural network (RCNN), enabling efficient processes
for time-space iterations involved in discretized forms of the wave equation. The
RCNN model further optimizes the imperfect sparse regression results to obtain
more accurate functional terms and coefficients. Through alternating update of
discovery-embedding phases, essential physical equations can be robustly
identified from noisy and low-resolution measurements. To assess the
performance of the proposed framework, numerical experiments are conducted on
various scenarios involving the wave equation in elastic/viscoelastic and
homogeneous/inhomogeneous media. The results demonstrate that the proposed
method exhibits excellent robustness and accuracy, even when faced with high
levels of noise and limited data availability in both spatial and temporal
domains.
| [
{
"version": "v1",
"created": "Fri, 27 Sep 2024 01:05:45 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 01:39:29 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Chen",
"Su",
""
],
[
"Ding",
"Yi",
""
],
[
"Miyake",
"Hiroe",
""
],
[
"Li",
"Xiaojun",
""
]
] | TITLE: Discovery and inversion of the viscoelastic wave equation in
inhomogeneous media
ABSTRACT: In scientific machine learning, the task of identifying partial differential
equations accurately from sparse and noisy data poses a significant challenge.
Current sparse regression methods may identify inaccurate equations on sparse
and noisy datasets and are not suitable for varying coefficients. To address
this issue, we propose a hybrid framework that combines two alternating
direction optimization phases: discovery and embedding. The discovery phase
employs current well-developed sparse regression techniques to preliminarily
identify governing equations from observations. The embedding phase implements
a recurrent convolutional neural network (RCNN), enabling efficient processes
for time-space iterations involved in discretized forms of the wave equation. The
RCNN model further optimizes the imperfect sparse regression results to obtain
more accurate functional terms and coefficients. Through alternating update of
discovery-embedding phases, essential physical equations can be robustly
identified from noisy and low-resolution measurements. To assess the
performance of the proposed framework, numerical experiments are conducted on
various scenarios involving the wave equation in elastic/viscoelastic and
homogeneous/inhomogeneous media. The results demonstrate that the proposed
method exhibits excellent robustness and accuracy, even when faced with high
levels of noise and limited data availability in both spatial and temporal
domains.
|
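The discovery phase in the record above rests on sparse regression over a library of candidate terms. The sketch below shows a generic sequentially thresholded least-squares (STLSQ) routine of that kind, not the authors' implementation; the candidate library, threshold, and synthetic data are assumptions for illustration.

```python
# Generic sequentially thresholded least squares (STLSQ) sketch for sparse term selection.
# Illustrative only: the candidate library and threshold below are assumptions.
import numpy as np

def stlsq(Theta, dudt, threshold=0.1, iters=10):
    """Solve dudt ~= Theta @ xi with small coefficients pruned to zero."""
    xi = np.linalg.lstsq(Theta, dudt, rcond=None)[0]
    for _ in range(iters):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        big = ~small
        if big.any():
            xi[big] = np.linalg.lstsq(Theta[:, big], dudt, rcond=None)[0]
    return xi

# Toy demonstration: recover dudt = 2*u_xx - 0.5*u from noisy "measurements".
rng = np.random.default_rng(0)
u, u_x, u_xx = rng.normal(size=(3, 500))
dudt = 2.0 * u_xx - 0.5 * u + 0.01 * rng.normal(size=500)
Theta = np.column_stack([u, u_x, u_xx, u * u])   # candidate library of terms
print(stlsq(Theta, dudt))                        # approximately [-0.5, 0, 2.0, 0]
```

In the framework described above, coefficients selected this way would then be refined by the embedding phase rather than taken as final.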
2410.10637 | Leyang Wang | Daniel J. Williams, Leyang Wang, Qizhen Ying, Song Liu, Mladen Kolar | High-Dimensional Differential Parameter Inference in Exponential Family
using Time Score Matching | Daniel J. Williams and Leyang Wang contributed equally to this work | null | null | null | stat.ML cs.LG | http://creativecommons.org/licenses/by/4.0/ | This paper addresses differential inference in time-varying parametric
probabilistic models, like graphical models with changing structures. Instead
of estimating a high-dimensional model at each time point and estimating
changes later, we directly learn the differential parameter, i.e., the time
derivative of the parameter. The main idea is treating the time score function
of an exponential family model as a linear model of the differential parameter
for direct estimation. We use time score matching to estimate parameter
derivatives. We prove the consistency of a regularized score matching objective
and demonstrate the finite-sample normality of a debiased estimator in
high-dimensional settings. Our methodology effectively infers differential
structures in high-dimensional graphical models, verified on simulated and
real-world datasets. The code reproducing our experiments can be found at:
https://github.com/Leyangw/tsm.
| [
{
"version": "v1",
"created": "Mon, 14 Oct 2024 15:49:27 GMT"
},
{
"version": "v2",
"created": "Fri, 27 Dec 2024 19:00:40 GMT"
},
{
"version": "v3",
"created": "Sat, 5 Apr 2025 10:13:14 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Williams",
"Daniel J.",
""
],
[
"Wang",
"Leyang",
""
],
[
"Ying",
"Qizhen",
""
],
[
"Liu",
"Song",
""
],
[
"Kolar",
"Mladen",
""
]
] | TITLE: High-Dimensional Differential Parameter Inference in Exponential Family
using Time Score Matching
ABSTRACT: This paper addresses differential inference in time-varying parametric
probabilistic models, like graphical models with changing structures. Instead
of estimating a high-dimensional model at each time point and estimating
changes later, we directly learn the differential parameter, i.e., the time
derivative of the parameter. The main idea is treating the time score function
of an exponential family model as a linear model of the differential parameter
for direct estimation. We use time score matching to estimate parameter
derivatives. We prove the consistency of a regularized score matching objective
and demonstrate the finite-sample normality of a debiased estimator in
high-dimensional settings. Our methodology effectively infers differential
structures in high-dimensional graphical models, verified on simulated and
real-world datasets. The code reproducing our experiments can be found at:
https://github.com/Leyangw/tsm.
|
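For the exponential-family setting in the record above, the claim that the time score is a linear model of the differential parameter can be written out explicitly. The derivation below is a standard one, included only for illustration and using notation chosen here rather than taken from the paper. For a time-varying exponential family

$$p_t(x) = h(x)\,\exp\!\big(\eta(t)^\top T(x) - A(\eta(t))\big),$$

the time score is

$$\partial_t \log p_t(x) = \dot\eta(t)^\top T(x) - \dot\eta(t)^\top \nabla A(\eta(t)) = \dot\eta(t)^\top\big(T(x) - \mathbb{E}_{p_t}[T(x)]\big),$$

which is linear in the differential parameter $\dot\eta(t)$; estimating $\dot\eta(t)$ therefore reduces to a (regularized) linear problem under a time score matching objective.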
2410.12784 | Sijun Tan | Sijun Tan, Siyuan Zhuang, Kyle Montgomery, William Y. Tang, Alejandro
Cuadron, Chenguang Wang, Raluca Ada Popa, Ion Stoica | JudgeBench: A Benchmark for Evaluating LLM-based Judges | Published as a conference paper at ICLR 2025 | null | null | null | cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | LLM-based judges have emerged as a scalable alternative to human evaluation
and are increasingly used to assess, compare, and improve models. However, the
reliability of LLM-based judges themselves is rarely scrutinized. As LLMs
become more advanced, their responses grow more sophisticated, requiring
stronger judges to evaluate them. Existing benchmarks primarily focus on a
judge's alignment with human preferences, but often fail to account for more
challenging tasks where crowdsourced human preference is a poor indicator of
factual and logical correctness. To address this, we propose a novel evaluation
framework to objectively evaluate LLM-based judges. Based on this framework, we
propose JudgeBench, a benchmark for evaluating LLM-based judges on challenging
response pairs spanning knowledge, reasoning, math, and coding. JudgeBench
leverages a novel pipeline for converting existing difficult datasets into
challenging response pairs with preference labels reflecting objective
correctness. Our comprehensive evaluation on a collection of prompted judges,
fine-tuned judges, multi-agent judges, and reward models shows that JudgeBench
poses a significantly greater challenge than previous benchmarks, with many
strong models (e.g., GPT-4o) performing just slightly better than random
guessing. Overall, JudgeBench offers a reliable platform for assessing
increasingly advanced LLM-based judges. Data and code are available at
https://github.com/ScalerLab/JudgeBench.
| [
{
"version": "v1",
"created": "Wed, 16 Oct 2024 17:58:19 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Apr 2025 00:07:35 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Tan",
"Sijun",
""
],
[
"Zhuang",
"Siyuan",
""
],
[
"Montgomery",
"Kyle",
""
],
[
"Tang",
"William Y.",
""
],
[
"Cuadron",
"Alejandro",
""
],
[
"Wang",
"Chenguang",
""
],
[
"Popa",
"Raluca Ada",
""
],
[
"Stoica",
"Ion",
""
]
] | TITLE: JudgeBench: A Benchmark for Evaluating LLM-based Judges
ABSTRACT: LLM-based judges have emerged as a scalable alternative to human evaluation
and are increasingly used to assess, compare, and improve models. However, the
reliability of LLM-based judges themselves is rarely scrutinized. As LLMs
become more advanced, their responses grow more sophisticated, requiring
stronger judges to evaluate them. Existing benchmarks primarily focus on a
judge's alignment with human preferences, but often fail to account for more
challenging tasks where crowdsourced human preference is a poor indicator of
factual and logical correctness. To address this, we propose a novel evaluation
framework to objectively evaluate LLM-based judges. Based on this framework, we
propose JudgeBench, a benchmark for evaluating LLM-based judges on challenging
response pairs spanning knowledge, reasoning, math, and coding. JudgeBench
leverages a novel pipeline for converting existing difficult datasets into
challenging response pairs with preference labels reflecting objective
correctness. Our comprehensive evaluation on a collection of prompted judges,
fine-tuned judges, multi-agent judges, and reward models shows that JudgeBench
poses a significantly greater challenge than previous benchmarks, with many
strong models (e.g., GPT-4o) performing just slightly better than random
guessing. Overall, JudgeBench offers a reliable platform for assessing
increasingly advanced LLM-based judges. Data and code are available at
https://github.com/ScalerLab/JudgeBench.
|
2410.13184 | Shwai He | Shwai He, Tao Ge, Guoheng Sun, Bowei Tian, Xiaoyang Wang, Dong Yu | Router-Tuning: A Simple and Effective Approach for Enabling
Dynamic-Depth in Transformers | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Traditional transformer models often allocate a fixed amount of computational
resources to every input token, leading to inefficient and unnecessary
computation. To address this, the Mixture of Depths (MoD) was introduced to
dynamically adjust the computational depth by skipping less important layers.
Despite its promise, current MoD approaches remain under-explored and face two
main challenges: (1) high training costs due to the need to train the entire
model along with the routers that determine which layers to skip, and (2) the
risk of performance degradation when important layers are bypassed. In response
to the first issue, we propose Router-Tuning, a method that fine-tunes only the
router on a small dataset, drastically reducing the computational overhead
associated with full model training. For the second challenge, we propose
MindSkip, which deploys Attention with Dynamic Depths. This method preserves
the model's performance while significantly enhancing computational and memory
efficiency. Extensive experiments demonstrate that our approach delivers
competitive results while dramatically improving the computation efficiency,
e.g., 21\% speedup and only a 0.2\% performance drop. The code is released at
https://github.com/CASE-Lab-UMD/Router-Tuning.
| [
{
"version": "v1",
"created": "Thu, 17 Oct 2024 03:23:50 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Feb 2025 02:05:10 GMT"
},
{
"version": "v3",
"created": "Mon, 17 Feb 2025 04:52:10 GMT"
},
{
"version": "v4",
"created": "Sun, 6 Apr 2025 18:27:47 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"He",
"Shwai",
""
],
[
"Ge",
"Tao",
""
],
[
"Sun",
"Guoheng",
""
],
[
"Tian",
"Bowei",
""
],
[
"Wang",
"Xiaoyang",
""
],
[
"Yu",
"Dong",
""
]
] | TITLE: Router-Tuning: A Simple and Effective Approach for Enabling
Dynamic-Depth in Transformers
ABSTRACT: Traditional transformer models often allocate a fixed amount of computational
resources to every input token, leading to inefficient and unnecessary
computation. To address this, the Mixture of Depths (MoD) was introduced to
dynamically adjust the computational depth by skipping less important layers.
Despite its promise, current MoD approaches remain under-explored and face two
main challenges: (1) high training costs due to the need to train the entire
model along with the routers that determine which layers to skip, and (2) the
risk of performance degradation when important layers are bypassed. In response
to the first issue, we propose Router-Tuning, a method that fine-tunes only the
router on a small dataset, drastically reducing the computational overhead
associated with full model training. For the second challenge, we propose
MindSkip, which deploys Attention with Dynamic Depths. This method preserves
the model's performance while significantly enhancing computational and memory
efficiency. Extensive experiments demonstrate that our approach delivers
competitive results while dramatically improving the computation efficiency,
e.g., 21\% speedup and only a 0.2\% performance drop. The code is released at
https://github.com/CASE-Lab-UMD/Router-Tuning.
|
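The record above proposes training only small per-layer routers that decide whether a frozen block runs. The following is a minimal sketch of that router-only fine-tuning idea; the gating form, inference threshold, and model sizes are assumptions rather than the paper's exact design.

```python
# Minimal dynamic-depth sketch: a frozen block is skipped when a tiny trainable
# router predicts it is unimportant for the current input. Illustrative only.
import torch
import torch.nn as nn

class RoutedBlock(nn.Module):
    def __init__(self, block, d_model):
        super().__init__()
        self.block = block
        self.router = nn.Linear(d_model, 1)      # the only trainable part

    def forward(self, x):
        gate = torch.sigmoid(self.router(x.mean(dim=1)))   # one score per sequence
        out = self.block(x)
        # Soft gating keeps training differentiable; at inference, skip when gate < 0.5.
        return gate.unsqueeze(1) * out + (1 - gate.unsqueeze(1)) * x

d_model = 64
backbone = nn.ModuleList([
    RoutedBlock(nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), d_model)
    for _ in range(4)
])
# Freeze everything except the routers ("router-tuning").
for name, p in backbone.named_parameters():
    p.requires_grad = "router" in name

x = torch.randn(2, 16, d_model)
for blk in backbone:
    x = blk(x)
opt = torch.optim.Adam([p for p in backbone.parameters() if p.requires_grad], lr=1e-3)
loss = x.pow(2).mean()        # placeholder objective for the sketch
loss.backward(); opt.step()
```

Because only the router weights receive gradients, the training cost is dominated by forward passes through the frozen backbone, which is the efficiency argument made in the abstract above.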
2410.17166 | Julius R\"uckin | Julius R\"uckin, David Morilla-Cabello, Cyrill Stachniss, Eduardo
Montijano, Marija Popovi\'c | Towards Map-Agnostic Policies for Adaptive Informative Path Planning | 8 pages, 4 figures | null | null | null | cs.RO | http://creativecommons.org/licenses/by/4.0/ | Robots are frequently tasked to gather relevant sensor data in unknown
terrains. A key challenge for classical path planning algorithms used for
autonomous information gathering is adaptively replanning paths online as the
terrain is explored given limited onboard compute resources. Recently,
learning-based approaches emerged that train planning policies offline and
enable computationally efficient online replanning performing policy inference.
These approaches are designed and trained for terrain monitoring missions
assuming a single specific map representation, which limits their applicability
to different terrains. To address these issues, we propose a novel formulation
of the adaptive informative path planning problem unified across different map
representations, enabling training and deploying planning policies in a larger
variety of monitoring missions. Experimental results validate that our novel
formulation easily integrates with classical non-learning-based planning
approaches while maintaining their performance. Our trained planning policy
performs similarly to state-of-the-art map-specifically trained policies. We
validate our learned policy on unseen real-world terrain datasets.
| [
{
"version": "v1",
"created": "Tue, 22 Oct 2024 16:43:21 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 07:35:49 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Rückin",
"Julius",
""
],
[
"Morilla-Cabello",
"David",
""
],
[
"Stachniss",
"Cyrill",
""
],
[
"Montijano",
"Eduardo",
""
],
[
"Popović",
"Marija",
""
]
] | TITLE: Towards Map-Agnostic Policies for Adaptive Informative Path Planning
ABSTRACT: Robots are frequently tasked to gather relevant sensor data in unknown
terrains. A key challenge for classical path planning algorithms used for
autonomous information gathering is adaptively replanning paths online as the
terrain is explored given limited onboard compute resources. Recently,
learning-based approaches emerged that train planning policies offline and
enable computationally efficient online replanning performing policy inference.
These approaches are designed and trained for terrain monitoring missions
assuming a single specific map representation, which limits their applicability
to different terrains. To address these issues, we propose a novel formulation
of the adaptive informative path planning problem unified across different map
representations, enabling training and deploying planning policies in a larger
variety of monitoring missions. Experimental results validate that our novel
formulation easily integrates with classical non-learning-based planning
approaches while maintaining their performance. Our trained planning policy
performs similarly to state-of-the-art map-specifically trained policies. We
validate our learned policy on unseen real-world terrain datasets.
|
2410.17462 | Minhua Lin | Minhua Lin, Zhengzhang Chen, Yanchi Liu, Xujiang Zhao, Zongyu Wu,
Junxiang Wang, Xiang Zhang, Suhang Wang, Haifeng Chen | Decoding Time Series with LLMs: A Multi-Agent Framework for Cross-Domain
Annotation | 29 pages, 12 figures, 32 tables | null | null | null | cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Time series data is ubiquitous across various domains, including
manufacturing, finance, and healthcare. High-quality annotations are essential
for effectively understanding time series and facilitating downstream tasks;
however, obtaining such annotations is challenging, particularly in
mission-critical domains. In this paper, we propose TESSA, a multi-agent system
designed to automatically generate both general and domain-specific annotations
for time series data. TESSA introduces two agents: a general annotation agent
and a domain-specific annotation agent. The general agent captures common
patterns and knowledge across multiple source domains, leveraging both
time-series-wise and text-wise features to generate general annotations.
Meanwhile, the domain-specific agent utilizes limited annotations from the
target domain to learn domain-specific terminology and generate targeted
annotations. Extensive experiments on multiple synthetic and real-world
datasets demonstrate that TESSA effectively generates high-quality annotations,
outperforming existing methods.
| [
{
"version": "v1",
"created": "Tue, 22 Oct 2024 22:43:14 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 21:58:33 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Lin",
"Minhua",
""
],
[
"Chen",
"Zhengzhang",
""
],
[
"Liu",
"Yanchi",
""
],
[
"Zhao",
"Xujiang",
""
],
[
"Wu",
"Zongyu",
""
],
[
"Wang",
"Junxiang",
""
],
[
"Zhang",
"Xiang",
""
],
[
"Wang",
"Suhang",
""
],
[
"Chen",
"Haifeng",
""
]
] | TITLE: Decoding Time Series with LLMs: A Multi-Agent Framework for Cross-Domain
Annotation
ABSTRACT: Time series data is ubiquitous across various domains, including
manufacturing, finance, and healthcare. High-quality annotations are essential
for effectively understanding time series and facilitating downstream tasks;
however, obtaining such annotations is challenging, particularly in
mission-critical domains. In this paper, we propose TESSA, a multi-agent system
designed to automatically generate both general and domain-specific annotations
for time series data. TESSA introduces two agents: a general annotation agent
and a domain-specific annotation agent. The general agent captures common
patterns and knowledge across multiple source domains, leveraging both
time-series-wise and text-wise features to generate general annotations.
Meanwhile, the domain-specific agent utilizes limited annotations from the
target domain to learn domain-specific terminology and generate targeted
annotations. Extensive experiments on multiple synthetic and real-world
datasets demonstrate that TESSA effectively generates high-quality annotations,
outperforming existing methods.
|
2410.18074 | Patrick Rim | Suchisrit Gangopadhyay, Xien Chen, Michael Chu, Patrick Rim,
Hyoungseob Park, Alex Wong | UnCLe: Benchmarking Continual Learning for Unsupervised Depth Completion | Preprint | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | We propose UnCLe, a standardized benchmark for Unsupervised Continual
Learning of a multimodal depth estimation task: Depth completion aims to infer
a dense depth map from a pair of synchronized RGB image and sparse depth map.
We benchmark depth completion models under the practical scenario of
unsupervised learning over continuous streams of data. Existing methods are
typically trained on a static, or stationary, dataset. However, when adapting
to novel non-stationary distributions, they "catastrophically forget"
previously learned information. UnCLe simulates these non-stationary
distributions by adapting depth completion models to sequences of datasets
containing diverse scenes captured from distinct domains using different visual
and range sensors. We adopt representative methods from continual learning
paradigms and translate them to enable unsupervised continual learning of depth
completion. We benchmark these models for indoor and outdoor and investigate
the degree of catastrophic forgetting through standard quantitative metrics.
Furthermore, we introduce model inversion quality as an additional measure of
forgetting. We find that unsupervised continual learning of depth completion is
an open problem, and we invite researchers to leverage UnCLe as a development
platform.
| [
{
"version": "v1",
"created": "Wed, 23 Oct 2024 17:56:33 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Oct 2024 17:37:29 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Apr 2025 18:23:51 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Gangopadhyay",
"Suchisrit",
""
],
[
"Chen",
"Xien",
""
],
[
"Chu",
"Michael",
""
],
[
"Rim",
"Patrick",
""
],
[
"Park",
"Hyoungseob",
""
],
[
"Wong",
"Alex",
""
]
] | TITLE: UnCLe: Benchmarking Continual Learning for Unsupervised Depth Completion
ABSTRACT: We propose UnCLe, a standardized benchmark for Unsupervised Continual
Learning of a multimodal depth estimation task: Depth completion aims to infer
a dense depth map from a pair of synchronized RGB image and sparse depth map.
We benchmark depth completion models under the practical scenario of
unsupervised learning over continuous streams of data. Existing methods are
typically trained on a static, or stationary, dataset. However, when adapting
to novel non-stationary distributions, they "catastrophically forget"
previously learned information. UnCLe simulates these non-stationary
distributions by adapting depth completion models to sequences of datasets
containing diverse scenes captured from distinct domains using different visual
and range sensors. We adopt representative methods from continual learning
paradigms and translate them to enable unsupervised continual learning of depth
completion. We benchmark these models for indoor and outdoor and investigate
the degree of catastrophic forgetting through standard quantitative metrics.
Furthermore, we introduce model inversion quality as an additional measure of
forgetting. We find that unsupervised continual learning of depth completion is
an open problem, and we invite researchers to leverage UnCLe as a development
platform.
|
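The record above quantifies catastrophic forgetting with standard quantitative metrics. A common choice is the growth in error on earlier datasets relative to the best error previously achieved; the sketch below computes that measure on a made-up error matrix, purely for illustration (the benchmark's exact metrics and values are not reproduced here).

```python
# Sketch of a standard forgetting measure on an error matrix, where err[i, j] is the
# evaluation error on dataset j after training on the i-th dataset in the sequence.
# Illustrative only; the numbers below are made up and not from the benchmark.
import numpy as np

err = np.array([
    [1.0, 3.0, 3.5],   # after training on dataset 0
    [1.6, 1.1, 3.4],   # after training on dataset 1
    [2.1, 1.8, 1.2],   # after training on dataset 2
])
T = err.shape[0] - 1
# Forgetting on each earlier dataset: how much error grew versus the best error seen so far.
forgetting = err[T, :T] - err[:T + 1, :T].min(axis=0)
print(forgetting)           # e.g. [1.1, 0.7]
print(forgetting.mean())    # average forgetting across earlier datasets
```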
2410.18387 | Lehan Wang | Lehan Wang, Haonan Wang, Honglong Yang, Jiaji Mao, Zehong Yang, Jun
Shen, Xiaomeng Li | Interpretable Bilingual Multimodal Large Language Model for Diverse
Biomedical Tasks | Accepted in ICLR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several medical Multimodal Large Language Models (MLLMs) have been developed
to address tasks involving visual images with textual instructions across
various medical modalities, achieving impressive results. Most current medical
generalist models are region-agnostic, treating the entire image as a holistic
representation. However, they struggle to identify which specific regions they
are focusing on when generating a sentence. To mimic the behavior of doctors,
who typically begin by reviewing the entire image before concentrating on
specific regions for a thorough evaluation, we aim to enhance the capability of
medical MLLMs in understanding anatomical regions within entire medical scans.
To achieve it, we first formulate Region-Centric tasks and construct a
large-scale dataset, MedRegInstruct, to incorporate regional information into
training. Combining our collected dataset with other medical multimodal corpora
for training, we propose a Region-Aware medical MLLM, MedRegA, which is the
first bilingual generalist medical AI system to simultaneously handle
image-level and region-level medical vision-language tasks across a broad range
of modalities. Our MedRegA not only enables three region-centric tasks, but
also achieves the best performance for visual question answering, report
generation and medical image classification over 8 modalities, showcasing
significant versatility. Experiments demonstrate that our model can not only
accomplish powerful performance across various medical vision-language tasks in
bilingual settings, but also recognize and detect structures in multimodal
medical scans, boosting the interpretability and user interactivity of medical
MLLMs. Our project page is https://medrega.github.io.
| [
{
"version": "v1",
"created": "Thu, 24 Oct 2024 02:55:41 GMT"
},
{
"version": "v2",
"created": "Fri, 25 Oct 2024 02:14:24 GMT"
},
{
"version": "v3",
"created": "Tue, 25 Mar 2025 14:54:31 GMT"
},
{
"version": "v4",
"created": "Mon, 7 Apr 2025 09:01:19 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wang",
"Lehan",
""
],
[
"Wang",
"Haonan",
""
],
[
"Yang",
"Honglong",
""
],
[
"Mao",
"Jiaji",
""
],
[
"Yang",
"Zehong",
""
],
[
"Shen",
"Jun",
""
],
[
"Li",
"Xiaomeng",
""
]
] | TITLE: Interpretable Bilingual Multimodal Large Language Model for Diverse
Biomedical Tasks
ABSTRACT: Several medical Multimodal Large Language Models (MLLMs) have been developed
to address tasks involving visual images with textual instructions across
various medical modalities, achieving impressive results. Most current medical
generalist models are region-agnostic, treating the entire image as a holistic
representation. However, they struggle to identify which specific regions they
are focusing on when generating a sentence. To mimic the behavior of doctors,
who typically begin by reviewing the entire image before concentrating on
specific regions for a thorough evaluation, we aim to enhance the capability of
medical MLLMs in understanding anatomical regions within entire medical scans.
To achieve it, we first formulate Region-Centric tasks and construct a
large-scale dataset, MedRegInstruct, to incorporate regional information into
training. Combining our collected dataset with other medical multimodal corpora
for training, we propose a Region-Aware medical MLLM, MedRegA, which is the
first bilingual generalist medical AI system to simultaneously handle
image-level and region-level medical vision-language tasks across a broad range
of modalities. Our MedRegA not only enables three region-centric tasks, but
also achieves the best performance for visual question answering, report
generation and medical image classification over 8 modalities, showcasing
significant versatility. Experiments demonstrate that our model can not only
accomplish powerful performance across various medical vision-language tasks in
bilingual settings, but also recognize and detect structures in multimodal
medical scans, boosting the interpretability and user interactivity of medical
MLLMs. Our project page is https://medrega.github.io.
|
2410.18921 | Junyi Ye | A M Muntasir Rahman, Junyi Ye, Wei Yao, Sierra S. Liu, Jesse Yu,
Jonathan Yu, Wenpeng Yin, Guiling Wang | From Blind Solvers to Logical Thinkers: Benchmarking LLMs' Logical
Integrity on Faulty Mathematical Problems | null | null | null | null | cs.CL cs.AI cs.LO | http://creativecommons.org/licenses/by/4.0/ | Consider the math problem: "Lily received 3 cookies from her best friend
yesterday and ate 5 for breakfast. Today, her friend gave her 3 more cookies.
How many cookies does Lily have now?" Many large language models (LLMs) in
previous research approach this problem by calculating the answer "1" using the
equation "3 - 5 + 3." However, from a human perspective, we recognize the
inherent flaw in this problem: Lily cannot eat 5 cookies if she initially only
had 3. This discrepancy prompts a key question: Are current LLMs merely Blind
Solvers that apply mathematical operations without deeper reasoning, or can they
function as Logical Thinkers capable of identifying logical inconsistencies?
To explore this question, we propose a benchmark dataset, FaultyMath, which
includes faulty math problems of rich diversity: i) multiple mathematical
categories, e.g., algebra, geometry, number theory, etc., ii) varying levels of
difficulty, and iii) different origins of faultiness -- ranging from violations
of common sense and ambiguous statements to mathematical contradictions and
more. We evaluate a broad spectrum of LLMs, including open-source,
closed-source, and math-specialized models, using FaultyMath across three
dimensions: (i) How accurately can the models detect faulty math problems
without being explicitly prompted to do so? (ii) When provided with hints --
either correct or misleading -- about the validity of the problems, to what
extent do LLMs adapt to become reliable Logical Thinkers? (iii) How trustworthy
are the explanations generated by LLMs when they recognize a math problem as
flawed? Through extensive experimentation and detailed analysis, our results
demonstrate that existing LLMs largely function as Blind Solvers and fall short
of the reasoning capabilities required to perform as Logical Thinkers.
| [
{
"version": "v1",
"created": "Thu, 24 Oct 2024 17:10:39 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 20:06:36 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Rahman",
"A M Muntasir",
""
],
[
"Ye",
"Junyi",
""
],
[
"Yao",
"Wei",
""
],
[
"Liu",
"Sierra S.",
""
],
[
"Yu",
"Jesse",
""
],
[
"Yu",
"Jonathan",
""
],
[
"Yin",
"Wenpeng",
""
],
[
"Wang",
"Guiling",
""
]
] | TITLE: From Blind Solvers to Logical Thinkers: Benchmarking LLMs' Logical
Integrity on Faulty Mathematical Problems
ABSTRACT: Consider the math problem: "Lily received 3 cookies from her best friend
yesterday and ate 5 for breakfast. Today, her friend gave her 3 more cookies.
How many cookies does Lily have now?" Many large language models (LLMs) in
previous research approach this problem by calculating the answer "1" using the
equation "3 - 5 + 3." However, from a human perspective, we recognize the
inherent flaw in this problem: Lily cannot eat 5 cookies if she initially only
had 3. This discrepancy prompts a key question: Are current LLMs merely Blind
Solvers that apply mathematical operations without deeper reasoning, or can they
function as Logical Thinkers capable of identifying logical inconsistencies?
To explore this question, we propose a benchmark dataset, FaultyMath, which
includes faulty math problems of rich diversity: i) multiple mathematical
categories, e.g., algebra, geometry, number theory, etc., ii) varying levels of
difficulty, and iii) different origins of faultiness -- ranging from violations
of common sense and ambiguous statements to mathematical contradictions and
more. We evaluate a broad spectrum of LLMs, including open-source,
closed-source, and math-specialized models, using FaultyMath across three
dimensions: (i) How accurately can the models detect faulty math problems
without being explicitly prompted to do so? (ii) When provided with hints --
either correct or misleading -- about the validity of the problems, to what
extent do LLMs adapt to become reliable Logical Thinkers? (iii) How trustworthy
are the explanations generated by LLMs when they recognize a math problem as
flawed? Through extensive experimentation and detailed analysis, our results
demonstrate that existing LLMs largely function as Blind Solvers and fall short
of the reasoning capabilities required to perform as Logical Thinkers.
|
2410.19219 | Maithili Patel | Maithili Patel, Sonia Chernova | Robot Behavior Personalization from Sparse User Feedback | null | null | 10.1109/LRA.2025.3550833 | null | cs.RO cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As service robots become more general-purpose, they will need to adapt to
their users' preferences over a large set of all possible tasks that they can
perform. This includes preferences regarding which actions the users prefer to
delegate to robots as opposed to doing themselves. Existing personalization
approaches require task-specific data for each user. To handle diversity across
all household tasks and users, and nuances in user preferences across tasks, we
propose to learn a task adaptation function independently, which can be used in
tandem with any universal robot policy to customize robot behavior. We create
Task Adaptation using Abstract Concepts (TAACo) framework. TAACo can learn to
predict the user's preferred manner of assistance with any given task by
mediating reasoning through a representation composed of abstract concepts
built from user feedback. TAACo can generalize to an open set of household
tasks from a small amount of user feedback and explain its inferences through
intuitive concepts. We evaluate our model on a dataset we collected of 5
people's preferences, and show that TAACo outperforms GPT-4 by 16% and a
rule-based system by 54%, on prediction accuracy, with 40 samples of user
feedback.
| [
{
"version": "v1",
"created": "Fri, 25 Oct 2024 00:08:38 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Patel",
"Maithili",
""
],
[
"Chernova",
"Sonia",
""
]
] | TITLE: Robot Behavior Personalization from Sparse User Feedback
ABSTRACT: As service robots become more general-purpose, they will need to adapt to
their users' preferences over a large set of all possible tasks that they can
perform. This includes preferences regarding which actions the users prefer to
delegate to robots as opposed to doing themselves. Existing personalization
approaches require task-specific data for each user. To handle diversity across
all household tasks and users, and nuances in user preferences across tasks, we
propose to learn a task adaptation function independently, which can be used in
tandem with any universal robot policy to customize robot behavior. We create
Task Adaptation using Abstract Concepts (TAACo) framework. TAACo can learn to
predict the user's preferred manner of assistance with any given task by
mediating reasoning through a representation composed of abstract concepts
built from user feedback. TAACo can generalize to an open set of household
tasks from a small amount of user feedback and explain its inferences through
intuitive concepts. We evaluate our model on a dataset we collected of 5
people's preferences, and show that TAACo outperforms GPT-4 by 16% and a
rule-based system by 54%, on prediction accuracy, with 40 samples of user
feedback.
|
2410.20537 | Ranit Das | Ranit Das, David Shih | SIGMA: Single Interpolated Generative Model for Anomalies | 12 pages, 7 figures, v2: added timing comparison and sample quality
in other SRs | null | null | null | hep-ph cs.LG hep-ex physics.data-an | http://creativecommons.org/licenses/by/4.0/ | A key step in any resonant anomaly detection search is accurate modeling of
the background distribution in each signal region. Data-driven methods like
CATHODE accomplish this by training separate generative models on the
complement of each signal region, and interpolating them into their
corresponding signal regions. Having to re-train the generative model on
essentially the entire dataset for each signal region is a major computational
cost in a typical sliding window search with many signal regions. Here, we
present SIGMA, a new, fully data-driven, computationally-efficient method for
estimating background distributions. The idea is to train a single generative
model on all of the data and interpolate its parameters in sideband regions in
order to obtain a model for the background in the signal region. The SIGMA
method significantly reduces the computational cost compared to previous
approaches, while retaining a similar high quality of background modeling and
sensitivity to anomalous signals.
| [
{
"version": "v1",
"created": "Sun, 27 Oct 2024 18:00:00 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 19:46:57 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Das",
"Ranit",
""
],
[
"Shih",
"David",
""
]
] | TITLE: SIGMA: Single Interpolated Generative Model for Anomalies
ABSTRACT: A key step in any resonant anomaly detection search is accurate modeling of
the background distribution in each signal region. Data-driven methods like
CATHODE accomplish this by training separate generative models on the
complement of each signal region, and interpolating them into their
corresponding signal regions. Having to re-train the generative model on
essentially the entire dataset for each signal region is a major computational
cost in a typical sliding window search with many signal regions. Here, we
present SIGMA, a new, fully data-driven, computationally-efficient method for
estimating background distributions. The idea is to train a single generative
model on all of the data and interpolate its parameters in sideband regions in
order to obtain a model for the background in the signal region. The SIGMA
method significantly reduces the computational cost compared to previous
approaches, while retaining a similar high quality of background modeling and
sensitivity to anomalous signals.
|
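The core idea in the record above is to interpolate a single model's parameters from sideband regions into the signal region. The toy sketch below interpolates per-bin Gaussian parameters of one feature across a held-out region; it is a stand-in for the idea only, not the paper's generative-model implementation, and all numbers are synthetic.

```python
# Toy stand-in for sideband parameter interpolation (not the paper's generative model):
# fit per-bin Gaussian parameters of a feature outside the signal region, interpolate
# them in the resonance variable m, and evaluate the fit inside the signal region.
import numpy as np

rng = np.random.default_rng(1)
m_bins = np.linspace(0.0, 1.0, 11)                 # resonance-variable bins
true_mu = 2.0 + 0.5 * m_bins                       # smooth background trend (assumed)
samples = [rng.normal(mu, 0.3, size=2000) for mu in true_mu]

signal_region = (m_bins > 0.4) & (m_bins < 0.6)    # bins held out as the "signal region"
sideband = ~signal_region

# Fit a low-order polynomial to the sideband means and interpolate into the SR.
mu_hat = np.array([s.mean() for s in samples])
coeffs = np.polyfit(m_bins[sideband], mu_hat[sideband], deg=1)
mu_sr_pred = np.polyval(coeffs, m_bins[signal_region])
print(mu_sr_pred, true_mu[signal_region])          # interpolated vs. true background means
```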
2410.22972 | Alberto Carlo Maria Mancino | Alberto Carlo Maria Mancino, Salvatore Bufi, Angela Di Fazio, Antonio
Ferrara, Daniele Malitesta, Claudio Pomo, Tommaso Di Noia | DataRec: A Python Library for Standardized and Reproducible Data
Management in Recommender Systems | null | null | null | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | Recommender systems have demonstrated significant impact across diverse
domains, yet ensuring the reproducibility of experimental findings remains a
persistent challenge. A primary obstacle lies in the fragmented and often
opaque data management strategies employed during the preprocessing stage,
where decisions about dataset selection, filtering, and splitting can
substantially influence outcomes. To address these limitations, we introduce
DataRec, an open-source Python-based library specifically designed to unify and
streamline data handling in recommender system research. By providing
reproducible routines for dataset preparation, data versioning, and seamless
integration with other frameworks, DataRec promotes methodological
standardization, interoperability, and comparability across different
experimental setups. Our design is informed by an in-depth review of 55
state-of-the-art recommendation studies, ensuring that DataRec adopts best
practices while addressing common pitfalls in data management. Ultimately, our
contribution facilitates fair benchmarking, enhances reproducibility, and
fosters greater trust in experimental results within the broader recommender
systems community. The DataRec library, documentation, and examples are freely
available at https://github.com/sisinflab/DataRec.
| [
{
"version": "v1",
"created": "Wed, 30 Oct 2024 12:39:39 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Apr 2025 07:29:36 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Mancino",
"Alberto Carlo Maria",
""
],
[
"Bufi",
"Salvatore",
""
],
[
"Di Fazio",
"Angela",
""
],
[
"Ferrara",
"Antonio",
""
],
[
"Malitesta",
"Daniele",
""
],
[
"Pomo",
"Claudio",
""
],
[
"Di Noia",
"Tommaso",
""
]
] | TITLE: DataRec: A Python Library for Standardized and Reproducible Data
Management in Recommender Systems
ABSTRACT: Recommender systems have demonstrated significant impact across diverse
domains, yet ensuring the reproducibility of experimental findings remains a
persistent challenge. A primary obstacle lies in the fragmented and often
opaque data management strategies employed during the preprocessing stage,
where decisions about dataset selection, filtering, and splitting can
substantially influence outcomes. To address these limitations, we introduce
DataRec, an open-source Python-based library specifically designed to unify and
streamline data handling in recommender system research. By providing
reproducible routines for dataset preparation, data versioning, and seamless
integration with other frameworks, DataRec promotes methodological
standardization, interoperability, and comparability across different
experimental setups. Our design is informed by an in-depth review of 55
state-of-the-art recommendation studies, ensuring that DataRec adopts best
practices while addressing common pitfalls in data management. Ultimately, our
contribution facilitates fair benchmarking, enhances reproducibility, and
fosters greater trust in experimental results within the broader recommender
systems community. The DataRec library, documentation, and examples are freely
available at https://github.com/sisinflab/DataRec.
|
2410.23280 | Qingyu Shi | Qingyu Shi, Lu Qi, Jianzong Wu, Jinbin Bai, Jingbo Wang, Yunhai Tong,
Xiangtai Li | DreamRelation: Bridging Customization and Relation Generation | CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Customized image generation is essential for creating personalized content
based on user prompts, allowing large-scale text-to-image diffusion models to
more effectively meet individual needs. However, existing models often neglect
the relationships between customized objects in generated images. In contrast,
this work addresses this gap by focusing on relation-aware customized image
generation, which seeks to preserve the identities from image prompts while
maintaining the relationship specified in text prompts. Specifically, we
introduce DreamRelation, a framework that disentangles identity and relation
learning using a carefully curated dataset. Our training data consists of
relation-specific images, independent object images containing identity
information, and text prompts to guide relation generation. Then, we propose
two key modules to tackle the two main challenges: generating accurate and
natural relationships, especially when significant pose adjustments are
required, and avoiding object confusion in cases of overlap. First, we
introduce a keypoint matching loss that effectively guides the model in
adjusting object poses closely tied to their relationships. Second, we
incorporate local features of the image prompts to better distinguish between
objects, preventing confusion in overlapping cases. Extensive results on our
proposed benchmarks demonstrate the superiority of DreamRelation in generating
precise relations while preserving object identities across a diverse set of
objects and relationships.
| [
{
"version": "v1",
"created": "Wed, 30 Oct 2024 17:57:21 GMT"
},
{
"version": "v2",
"created": "Tue, 5 Nov 2024 05:28:46 GMT"
},
{
"version": "v3",
"created": "Sat, 22 Mar 2025 01:52:56 GMT"
},
{
"version": "v4",
"created": "Sat, 5 Apr 2025 14:15:09 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Shi",
"Qingyu",
""
],
[
"Qi",
"Lu",
""
],
[
"Wu",
"Jianzong",
""
],
[
"Bai",
"Jinbin",
""
],
[
"Wang",
"Jingbo",
""
],
[
"Tong",
"Yunhai",
""
],
[
"Li",
"Xiangtai",
""
]
] | TITLE: DreamRelation: Bridging Customization and Relation Generation
ABSTRACT: Customized image generation is essential for creating personalized content
based on user prompts, allowing large-scale text-to-image diffusion models to
more effectively meet individual needs. However, existing models often neglect
the relationships between customized objects in generated images. In contrast,
this work addresses this gap by focusing on relation-aware customized image
generation, which seeks to preserve the identities from image prompts while
maintaining the relationship specified in text prompts. Specifically, we
introduce DreamRelation, a framework that disentangles identity and relation
learning using a carefully curated dataset. Our training data consists of
relation-specific images, independent object images containing identity
information, and text prompts to guide relation generation. Then, we propose
two key modules to tackle the two main challenges: generating accurate and
natural relationships, especially when significant pose adjustments are
required, and avoiding object confusion in cases of overlap. First, we
introduce a keypoint matching loss that effectively guides the model in
adjusting object poses closely tied to their relationships. Second, we
incorporate local features of the image prompts to better distinguish between
objects, preventing confusion in overlapping cases. Extensive results on our
proposed benchmarks demonstrate the superiority of DreamRelation in generating
precise relations while preserving object identities across a diverse set of
objects and relationships.
|
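The record above mentions a keypoint matching loss for guiding object poses. Since the abstract does not give its exact form, the sketch below shows a generic masked keypoint regression loss as one plausible concretization; the tensor shapes and visibility masking are assumptions.

```python
# Generic masked keypoint-matching loss, shown only to make the idea concrete; the paper's
# exact formulation (how keypoints are predicted and matched) is not specified above.
import torch

def keypoint_matching_loss(pred_kpts, target_kpts, visibility):
    """pred_kpts, target_kpts: (B, K, 2); visibility: (B, K) with 1 for annotated keypoints."""
    sq_err = ((pred_kpts - target_kpts) ** 2).sum(dim=-1)          # per-keypoint squared distance
    return (sq_err * visibility).sum() / visibility.sum().clamp(min=1)

pred = torch.rand(2, 17, 2, requires_grad=True)
target = torch.rand(2, 17, 2)
vis = torch.ones(2, 17)
loss = keypoint_matching_loss(pred, target, vis)
loss.backward()
```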
2411.07815 | Xianghong Zou | Xianghong Zou, Jianping Li, Weitong Wu, Fuxun Liang, Bisheng Yang,
Zhen Dong | Reliable-loc: Robust sequential LiDAR global localization in large-scale
street scenes based on verifiable cues | null | null | null | null | cs.RO cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A wearable laser scanning (WLS) system has the advantages of flexibility and
portability. It can be used for determining the user's path within a prior map,
which is a huge demand for applications in pedestrian navigation, collaborative
mapping, augmented reality, and emergency rescue. However, existing LiDAR-based
global localization methods suffer from insufficient robustness, especially in
complex large-scale outdoor scenes with insufficient features and incomplete
coverage of the prior map. To address such challenges, we propose LiDAR-based
reliable global localization (Reliable-loc) exploiting the verifiable cues in
the sequential LiDAR data. First, we propose a Monte Carlo Localization (MCL)
based on spatially verifiable cues, utilizing the rich information embedded in
local features to adjust the particles' weights hence avoiding the particles
converging to erroneous regions. Second, we propose a localization status
monitoring mechanism guided by the sequential pose uncertainties and adaptively
switching the localization mode using the temporal verifiable cues to avoid the
crash of the localization system. To validate the proposed Reliable-loc,
comprehensive experiments have been conducted on a large-scale heterogeneous
point cloud dataset consisting of high-precision vehicle-mounted mobile laser
scanning (MLS) point clouds and helmet-mounted WLS point clouds, which cover
various street scenes with a length of over 30 km. The experimental results
indicate that Reliable-loc exhibits high robustness, accuracy, and efficiency
in large-scale, complex street scenes, with a position accuracy of 2.91 m, yaw
accuracy of 3.74 degrees, and achieves real-time performance. For the code and
detailed experimental results, please refer to
https://github.com/zouxianghong/Reliable-loc.
| [
{
"version": "v1",
"created": "Sat, 9 Nov 2024 07:28:39 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 03:12:39 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Zou",
"Xianghong",
""
],
[
"Li",
"Jianping",
""
],
[
"Wu",
"Weitong",
""
],
[
"Liang",
"Fuxun",
""
],
[
"Yang",
"Bisheng",
""
],
[
"Dong",
"Zhen",
""
]
] | TITLE: Reliable-loc: Robust sequential LiDAR global localization in large-scale
street scenes based on verifiable cues
ABSTRACT: A wearable laser scanning (WLS) system has the advantages of flexibility and
portability. It can be used for determining the user's path within a prior map,
which is a huge demand for applications in pedestrian navigation, collaborative
mapping, augmented reality, and emergency rescue. However, existing LiDAR-based
global localization methods suffer from insufficient robustness, especially in
complex large-scale outdoor scenes with insufficient features and incomplete
coverage of the prior map. To address such challenges, we propose LiDAR-based
reliable global localization (Reliable-loc) exploiting the verifiable cues in
the sequential LiDAR data. First, we propose a Monte Carlo Localization (MCL)
based on spatially verifiable cues, utilizing the rich information embedded in
local features to adjust the particles' weights, thereby preventing the
particles from converging to erroneous regions. Second, we propose a
localization status monitoring mechanism guided by the sequential pose
uncertainties, which adaptively switches the localization mode using the
temporal verifiable cues to avoid a crash of the localization system. To
validate the proposed Reliable-loc,
comprehensive experiments have been conducted on a large-scale heterogeneous
point cloud dataset consisting of high-precision vehicle-mounted mobile laser
scanning (MLS) point clouds and helmet-mounted WLS point clouds, which cover
various street scenes with a length of over 30 km. The experimental results
indicate that Reliable-loc exhibits high robustness, accuracy, and efficiency
in large-scale, complex street scenes, with a position accuracy of 2.91 m, yaw
accuracy of 3.74 degrees, and achieves real-time performance. For the code and
detailed experimental results, please refer to
https://github.com/zouxianghong/Reliable-loc.
|
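The record above builds on Monte Carlo Localization with reweighting driven by verifiable cues. The sketch below is a generic MCL predict-update-resample step; the observation model is a placeholder and does not reproduce Reliable-loc's spatially verifiable-cue scoring.

```python
# Generic Monte Carlo Localization step (illustrative; the observation model here is a
# placeholder, not Reliable-loc's spatially verifiable-cue scoring).
import numpy as np

rng = np.random.default_rng(0)
N = 500
particles = rng.uniform([0, 0, -np.pi], [100, 100, np.pi], size=(N, 3))  # x, y, yaw
weights = np.full(N, 1.0 / N)

def observation_likelihood(particles, scan_score):
    """Placeholder: score each particle by how well local features match the prior map."""
    return np.exp(-0.5 * ((particles[:, 0] - scan_score) / 5.0) ** 2)

def mcl_step(particles, weights, control, scan_score):
    # 1. Motion update: propagate particles with the odometry increment plus noise.
    particles = particles + control + rng.normal(scale=[0.2, 0.2, 0.02], size=particles.shape)
    # 2. Measurement update: reweight particles by the observation likelihood.
    weights = weights * observation_likelihood(particles, scan_score)
    weights = weights / weights.sum()
    # 3. Resample when the effective sample size collapses.
    if 1.0 / np.sum(weights ** 2) < len(particles) / 2:
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles, weights = particles[idx], np.full(len(particles), 1.0 / len(particles))
    return particles, weights

particles, weights = mcl_step(particles, weights, control=np.array([1.0, 0.0, 0.0]), scan_score=50.0)
print((weights[:, None] * particles).sum(axis=0))   # weighted pose estimate
```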
2411.09439 | Jinxiang Lai | Jinxiang Lai, Jie Zhang, Jun Liu, Jian Li, Xiaocheng Lu, Song Guo | Spider: Any-to-Many Multimodal LLM | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Multimodal LLMs (MLLMs) have emerged as an extension of Large Language Models
(LLMs), enabling the integration of various modalities. However, Any-to-Any
MLLMs are limited to generating pairwise modalities 'Text + X' within a single
response, such as Text + {Image or Audio or Video}. To address this limitation,
we introduce Spider, a novel efficient Any-to-Many Modalities Generation (AMMG)
framework, which can generate an arbitrary combination of modalities 'Text +
Xs', such as Text + {Image and Audio and Video}. To achieve efficient AMMG, our
Spider integrates three core components: a Base Model for basic X-to-X (i.e.,
Any-to-Any) modality processing, an Any-to-Many Instruction Template designed
for producing Xs signal prompts, and a novel Efficient Decoders-Controller for
controlling multimodal Decoders to generate Xs (many-modal) contents. To train
Spider, we constructed a novel Text-formatted Many-Modal (TMM) dataset, which
facilitates learning the X-to-Xs (i.e., Any-to-Many) capability necessary for
AMMG. Ultimately, the well-trained Spider generates a pseudo X-to-Xs dataset,
the first-ever X-to-Xs many-modal dataset, enhancing the potential for AMMG
tasks in future research. Overall, this work not only pushes the boundary of
multimodal interaction but also provides rich data support for advancing the
field. Code: https://github.com/Layjins/Spider
| [
{
"version": "v1",
"created": "Thu, 14 Nov 2024 16:58:19 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 16:13:38 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Lai",
"Jinxiang",
""
],
[
"Zhang",
"Jie",
""
],
[
"Liu",
"Jun",
""
],
[
"Li",
"Jian",
""
],
[
"Lu",
"Xiaocheng",
""
],
[
"Guo",
"Song",
""
]
] | TITLE: Spider: Any-to-Many Multimodal LLM
ABSTRACT: Multimodal LLMs (MLLMs) have emerged as an extension of Large Language Models
(LLMs), enabling the integration of various modalities. However, Any-to-Any
MLLMs are limited to generating pairwise modalities 'Text + X' within a single
response, such as Text + {Image or Audio or Video}. To address this limitation,
we introduce Spider, a novel efficient Any-to-Many Modalities Generation (AMMG)
framework, which can generate an arbitrary combination of modalities 'Text +
Xs', such as Text + {Image and Audio and Video}. To achieve efficient AMMG, our
Spider integrates three core components: a Base Model for basic X-to-X (i.e.,
Any-to-Any) modality processing, an Any-to-Many Instruction Template designed
for producing Xs signal prompts, and a novel Efficient Decoders-Controller for
controlling multimodal Decoders to generate Xs (many-modal) contents. To train
Spider, we constructed a novel Text-formatted Many-Modal (TMM) dataset, which
facilitates learning the X-to-Xs (i.e., Any-to-Many) capability necessary for
AMMG. Ultimately, the well-trained Spider generates a pseudo X-to-Xs dataset,
the first-ever X-to-Xs many-modal dataset, enhancing the potential for AMMG
tasks in future research. Overall, this work not only pushes the boundary of
multimodal interaction but also provides rich data support for advancing the
field. Code: https://github.com/Layjins/Spider
|
2411.09540 | Jia-Wei Chen | Zi-Xuan Huang, Jia-Wei Chen, Zhi-Peng Zhang, Chia-Mu Yu | Prompting the Unseen: Detecting Hidden Backdoors in Black-Box Models | This paper has been accepted by IEEE/IFIP DSN 2025 | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Visual prompting (VP) is a new technique that adapts well-trained frozen
models for source domain tasks to target domain tasks. This study examines VP's
benefits for black-box model-level backdoor detection. The visual prompt in VP
maps class subspaces between source and target domains. We identify a
misalignment, termed class subspace inconsistency, between clean and poisoned
datasets. Based on this, we introduce \textsc{BProm}, a black-box model-level
detection method to identify backdoors in suspicious models, if any.
\textsc{BProm} leverages the low classification accuracy of prompted models
when backdoors are present. Extensive experiments confirm \textsc{BProm}'s
effectiveness.
| [
{
"version": "v1",
"created": "Thu, 14 Nov 2024 15:56:11 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 08:55:40 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Huang",
"Zi-Xuan",
""
],
[
"Chen",
"Jia-Wei",
""
],
[
"Zhang",
"Zhi-Peng",
""
],
[
"Yu",
"Chia-Mu",
""
]
] | TITLE: Prompting the Unseen: Detecting Hidden Backdoors in Black-Box Models
ABSTRACT: Visual prompting (VP) is a new technique that adapts well-trained frozen
models for source domain tasks to target domain tasks. This study examines VP's
benefits for black-box model-level backdoor detection. The visual prompt in VP
maps class subspaces between source and target domains. We identify a
misalignment, termed class subspace inconsistency, between clean and poisoned
datasets. Based on this, we introduce \textsc{BProm}, a black-box model-level
detection method to identify backdoors in suspicious models, if any.
\textsc{BProm} leverages the low classification accuracy of prompted models
when backdoors are present. Extensive experiments confirm \textsc{BProm}'s
effectiveness.
|
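The record above flags a backdoored model when its prompted classification accuracy stays low. As a rough, hypothetical sketch of that signal (the paper's exact prompting procedure and threshold are not given in the abstract), one could learn a padding-style visual prompt on clean probe data with the suspicious model frozen and threshold the resulting accuracy:

```python
# Hypothetical sketch of the "low prompted accuracy" signal: the prompt form, probe data,
# training budget, and decision threshold are all assumptions for illustration.
import torch
import torch.nn as nn

class PaddedPrompt(nn.Module):
    def __init__(self, pad=4, size=32):
        super().__init__()
        self.pad = pad
        self.delta = nn.Parameter(torch.zeros(3, size + 2 * pad, size + 2 * pad))

    def forward(self, x):
        x = nn.functional.pad(x, [self.pad] * 4)      # place the image inside a learned frame
        return x + self.delta

def prompted_accuracy(suspect_model, probe_x, probe_y, steps=100):
    prompt = PaddedPrompt()
    opt = torch.optim.Adam(prompt.parameters(), lr=0.01)
    suspect_model.eval()
    for p in suspect_model.parameters():
        p.requires_grad_(False)
    for _ in range(steps):
        logits = suspect_model(prompt(probe_x))
        loss = nn.functional.cross_entropy(logits, probe_y)
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        acc = (suspect_model(prompt(probe_x)).argmax(1) == probe_y).float().mean().item()
    return acc

# flag = prompted_accuracy(model, probe_images, probe_labels) < 0.5   # threshold is an assumption
```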
2411.10442 | Weiyun Wang | Weiyun Wang, Zhe Chen, Wenhai Wang, Yue Cao, Yangzhou Liu, Zhangwei
Gao, Jinguo Zhu, Xizhou Zhu, Lewei Lu, Yu Qiao, Jifeng Dai | Enhancing the Reasoning Ability of Multimodal Large Language Models via
Mixed Preference Optimization | null | null | null | null | cs.CL cs.CV | http://creativecommons.org/licenses/by/4.0/ | Existing open-source multimodal large language models (MLLMs) generally
follow a training process involving pre-training and supervised fine-tuning.
However, these models suffer from distribution shifts, which limit their
multimodal reasoning, particularly in the Chain-of-Thought (CoT) performance.
To address this, we introduce a preference optimization (PO) process to enhance
the multimodal reasoning capabilities of MLLMs. Specifically, (1) on the data
side, we design an automated preference data construction pipeline to create
MMPR, a high-quality, large-scale multimodal reasoning preference dataset; and
(2) on the model side, we explore integrating PO with MLLMs, developing a
simple yet effective method, termed Mixed Preference Optimization (MPO), which
boosts multimodal CoT performance. Our approach enhances the multimodal
reasoning abilities of both InternVL2-8B and InternVL2-76B. Notably, our model,
InternVL2-8B-MPO, achieves an accuracy of 67.0 on MathVista, outperforming
InternVL2-8B by 8.7 points and achieving performance comparable to the
10$\times$ larger InternVL2-76B. We hope this study could inspire further
advancements in MLLMs. Code, data, and model are released.
| [
{
"version": "v1",
"created": "Fri, 15 Nov 2024 18:59:27 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 09:09:39 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wang",
"Weiyun",
""
],
[
"Chen",
"Zhe",
""
],
[
"Wang",
"Wenhai",
""
],
[
"Cao",
"Yue",
""
],
[
"Liu",
"Yangzhou",
""
],
[
"Gao",
"Zhangwei",
""
],
[
"Zhu",
"Jinguo",
""
],
[
"Zhu",
"Xizhou",
""
],
[
"Lu",
"Lewei",
""
],
[
"Qiao",
"Yu",
""
],
[
"Dai",
"Jifeng",
""
]
] | TITLE: Enhancing the Reasoning Ability of Multimodal Large Language Models via
Mixed Preference Optimization
ABSTRACT: Existing open-source multimodal large language models (MLLMs) generally
follow a training process involving pre-training and supervised fine-tuning.
However, these models suffer from distribution shifts, which limit their
multimodal reasoning, particularly in the Chain-of-Thought (CoT) performance.
To address this, we introduce a preference optimization (PO) process to enhance
the multimodal reasoning capabilities of MLLMs. Specifically, (1) on the data
side, we design an automated preference data construction pipeline to create
MMPR, a high-quality, large-scale multimodal reasoning preference dataset; and
(2) on the model side, we explore integrating PO with MLLMs, developing a
simple yet effective method, termed Mixed Preference Optimization (MPO), which
boosts multimodal CoT performance. Our approach enhances the multimodal
reasoning abilities of both InternVL2-8B and InternVL2-76B. Notably, our model,
InternVL2-8B-MPO, achieves an accuracy of 67.0 on MathVista, outperforming
InternVL2-8B by 8.7 points and achieving performance comparable to the
10$\times$ larger InternVL2-76B. We hope this study could inspire further
advancements in MLLMs. Code, data, and model are released.
|
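The Mixed Preference Optimization idea above pairs a preference-learning signal with conventional supervised training. A simplified sketch of such a mixed objective, assuming per-sample summed log-probabilities from the policy and a frozen reference model; the DPO-style formulation, the beta value, and the 0.8/0.2 weights are illustrative assumptions rather than the paper's recipe:

```python
# Illustrative mixed preference objective: a DPO-style preference term plus a
# supervised (SFT) term on the chosen response.
import torch
import torch.nn.functional as F

def mixed_preference_loss(logp_chosen, logp_rejected,
                          ref_logp_chosen, ref_logp_rejected,
                          sft_nll_chosen, beta=0.1,
                          w_pref=0.8, w_sft=0.2):
    """Inputs are per-sample summed log-probabilities (shape [batch]);
    sft_nll_chosen is the token-averaged NLL of the chosen response."""
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    pref_loss = -F.logsigmoid(beta * margin).mean()   # DPO-style preference term
    sft_loss = sft_nll_chosen.mean()                  # keeps generation quality
    return w_pref * pref_loss + w_sft * sft_loss
```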
2411.14927 | Zhenwei Yang | Zhenwei Yang, Jilei Mao, Wenxian Yang, Yibo Ai, Yu Kong, Haibao Yu,
Weidong Zhang | LiDAR-based End-to-end Temporal Perception for Vehicle-Infrastructure
Cooperation | 13 pages, 7 figures | null | 10.1109/JIOT.2025.3552526 | null | cs.CV cs.AI cs.LG cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Temporal perception, defined as the capability to detect and track objects
across temporal sequences, serves as a fundamental component in autonomous
driving systems. While single-vehicle perception systems encounter limitations,
stemming from incomplete perception due to object occlusion and inherent blind
spots, cooperative perception systems present their own challenges in terms of
sensor calibration precision and positioning accuracy. To address these issues,
we introduce LET-VIC, a LiDAR-based End-to-End Tracking framework for
Vehicle-Infrastructure Cooperation (VIC). First, we employ Temporal
Self-Attention and VIC Cross-Attention modules to effectively integrate
temporal and spatial information from both vehicle and infrastructure
perspectives. Then, we develop a novel Calibration Error Compensation (CEC)
module to mitigate sensor misalignment issues and facilitate accurate feature
alignment. Experiments on the V2X-Seq-SPD dataset demonstrate that LET-VIC
significantly outperforms baseline models. Compared to LET-V, LET-VIC achieves
a +15.0% improvement in mAP and a +17.3% improvement in AMOTA. Furthermore,
LET-VIC surpasses representative Tracking by Detection models, including
V2VNet, FFNet, and PointPillars, with at least a +13.7% improvement in mAP and
a +13.1% improvement in AMOTA without considering communication delays,
showcasing its robust detection and tracking performance. The experiments
demonstrate that the integration of multi-view perspectives, temporal
sequences, or CEC in end-to-end training significantly improves both detection
and tracking performance. All code will be open-sourced.
| [
{
"version": "v1",
"created": "Fri, 22 Nov 2024 13:34:29 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Apr 2025 07:03:43 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Yang",
"Zhenwei",
""
],
[
"Mao",
"Jilei",
""
],
[
"Yang",
"Wenxian",
""
],
[
"Ai",
"Yibo",
""
],
[
"Kong",
"Yu",
""
],
[
"Yu",
"Haibao",
""
],
[
"Zhang",
"Weidong",
""
]
] | TITLE: LiDAR-based End-to-end Temporal Perception for Vehicle-Infrastructure
Cooperation
ABSTRACT: Temporal perception, defined as the capability to detect and track objects
across temporal sequences, serves as a fundamental component in autonomous
driving systems. While single-vehicle perception systems encounter limitations,
stemming from incomplete perception due to object occlusion and inherent blind
spots, cooperative perception systems present their own challenges in terms of
sensor calibration precision and positioning accuracy. To address these issues,
we introduce LET-VIC, a LiDAR-based End-to-End Tracking framework for
Vehicle-Infrastructure Cooperation (VIC). First, we employ Temporal
Self-Attention and VIC Cross-Attention modules to effectively integrate
temporal and spatial information from both vehicle and infrastructure
perspectives. Then, we develop a novel Calibration Error Compensation (CEC)
module to mitigate sensor misalignment issues and facilitate accurate feature
alignment. Experiments on the V2X-Seq-SPD dataset demonstrate that LET-VIC
significantly outperforms baseline models. Compared to LET-V, LET-VIC achieves
a +15.0% improvement in mAP and a +17.3% improvement in AMOTA. Furthermore,
LET-VIC surpasses representative Tracking by Detection models, including
V2VNet, FFNet, and PointPillars, with at least a +13.7% improvement in mAP and
a +13.1% improvement in AMOTA without considering communication delays,
showcasing its robust detection and tracking performance. The experiments
demonstrate that the integration of multi-view perspectives, temporal
sequences, or CEC in end-to-end training significantly improves both detection
and tracking performance. All code will be open-sourced.
|
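The VIC Cross-Attention module described above fuses vehicle-side and infrastructure-side features. A rough sketch of such a fusion block in PyTorch, with the feature dimension, head count, and residual design chosen only for illustration:

```python
# Sketch of cross-attention fusion between vehicle and infrastructure BEV
# features; sizes and residual wiring are assumptions.
import torch
import torch.nn as nn

class VICCrossAttention(nn.Module):
    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, veh_bev, infra_bev):
        # veh_bev, infra_bev: [batch, num_bev_tokens, dim]
        fused, _ = self.attn(query=veh_bev, key=infra_bev, value=infra_bev)
        return self.norm(veh_bev + fused)   # residual keeps ego information

if __name__ == "__main__":
    veh, infra = torch.randn(2, 100, 256), torch.randn(2, 100, 256)
    print(VICCrossAttention()(veh, infra).shape)   # torch.Size([2, 100, 256])
```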
2411.15966 | Soumava Paul | Soumava Paul, Prakhar Kaushik, Alan Yuille | Gaussian Scenes: Pose-Free Sparse-View Scene Reconstruction using
Depth-Enhanced Diffusion Priors | Project page is available at https://gaussianscenes.github.io/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we introduce a generative approach for pose-free (without
camera parameters) reconstruction of 360 scenes from a sparse set of 2D images.
Pose-free scene reconstruction from incomplete, pose-free observations is
usually regularized with depth estimation or 3D foundational priors. While
recent advances have enabled sparse-view reconstruction of large complex scenes
(with a high degree of foreground and background detail) with known camera poses
using view-conditioned generative priors, these methods cannot be directly
adapted for the pose-free setting when ground-truth poses are not available
during evaluation. To address this, we propose an image-to-image generative
model designed to inpaint missing details and remove artifacts in novel view
renders and depth maps of a 3D scene. We introduce context and geometry
conditioning using Feature-wise Linear Modulation (FiLM) layers as a
lightweight alternative to cross-attention and also propose a novel confidence
measure for 3D Gaussian splat representations to allow for better detection of
these artifacts. By progressively integrating these novel views in a
Gaussian-SLAM-inspired process, we achieve a multi-view-consistent 3D
representation. Evaluations on the MipNeRF360 and DL3DV-10K benchmark datasets
demonstrate that our method surpasses existing pose-free techniques and
performs competitively with state-of-the-art posed (precomputed camera
parameters are given) reconstruction methods in complex 360 scenes.
| [
{
"version": "v1",
"created": "Sun, 24 Nov 2024 19:34:58 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Apr 2025 13:43:27 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Paul",
"Soumava",
""
],
[
"Kaushik",
"Prakhar",
""
],
[
"Yuille",
"Alan",
""
]
] | TITLE: Gaussian Scenes: Pose-Free Sparse-View Scene Reconstruction using
Depth-Enhanced Diffusion Priors
ABSTRACT: In this work, we introduce a generative approach for pose-free (without
camera parameters) reconstruction of 360 scenes from a sparse set of 2D images.
Pose-free scene reconstruction from incomplete, pose-free observations is
usually regularized with depth estimation or 3D foundational priors. While
recent advances have enabled sparse-view reconstruction of large complex scenes
(with a high degree of foreground and background detail) with known camera poses
using view-conditioned generative priors, these methods cannot be directly
adapted for the pose-free setting when ground-truth poses are not available
during evaluation. To address this, we propose an image-to-image generative
model designed to inpaint missing details and remove artifacts in novel view
renders and depth maps of a 3D scene. We introduce context and geometry
conditioning using Feature-wise Linear Modulation (FiLM) layers as a
lightweight alternative to cross-attention and also propose a novel confidence
measure for 3D Gaussian splat representations to allow for better detection of
these artifacts. By progressively integrating these novel views in a
Gaussian-SLAM-inspired process, we achieve a multi-view-consistent 3D
representation. Evaluations on the MipNeRF360 and DL3DV-10K benchmark datasets
demonstrate that our method surpasses existing pose-free techniques and
performs competitively with state-of-the-art posed (precomputed camera
parameters are given) reconstruction methods in complex 360 scenes.
|
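FiLM conditioning, which the record above uses as a lightweight alternative to cross-attention, amounts to predicting a per-channel scale and shift from the conditioning signal. A minimal sketch with placeholder feature and condition sizes:

```python
# Minimal FiLM (Feature-wise Linear Modulation) layer: one linear map produces
# a per-channel scale (gamma) and shift (beta) from the conditioning vector.
import torch
import torch.nn as nn

class FiLM(nn.Module):
    def __init__(self, cond_dim, num_channels):
        super().__init__()
        self.to_gamma_beta = nn.Linear(cond_dim, 2 * num_channels)

    def forward(self, features, condition):
        # features: [B, C, H, W]; condition: [B, cond_dim]
        gamma, beta = self.to_gamma_beta(condition).chunk(2, dim=-1)
        return gamma[:, :, None, None] * features + beta[:, :, None, None]

feats = torch.randn(4, 64, 32, 32)
cond = torch.randn(4, 128)
print(FiLM(cond_dim=128, num_channels=64)(feats, cond).shape)
```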
2411.16313 | Duo Wu | Duo Wu, Jinghe Wang, Yuan Meng, Yanning Zhang, Le Sun, Zhi Wang | CATP-LLM: Empowering Large Language Models for Cost-Aware Tool Planning | In submission | null | null | null | cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Utilizing large language models (LLMs) for tool planning has emerged as a
promising avenue for developing general AI systems, where LLMs automatically
schedule external tools (e.g. vision models) to tackle complex tasks based on
task descriptions. To push this paradigm toward practical applications, it is
crucial for LLMs to consider tool execution costs (e.g. execution time) for
tool planning. Unfortunately, prior studies overlook the tool execution costs,
leading to the generation of expensive plans whose costs outweigh task
performance. To fill this gap, we propose the Cost-Aware Tool Planning with
LLMs (CATP-LLM) framework, which for the first time provides a coherent design
to empower LLMs for cost-aware tool planning. Specifically, CATP-LLM
incorporates a tool planning language to enhance the LLM to generate
non-sequential plans of multiple branches for efficient concurrent tool
execution and cost reduction. Moreover, it further designs a cost-aware offline
reinforcement learning algorithm to fine-tune the LLM to optimize the
performance-cost trade-off in tool planning. In the absence of public cost-related
datasets, we further present OpenCATP, the first platform for cost-aware
planning evaluation. Experiments on OpenCATP show that CATP-LLM outperforms
GPT-4 even when using Llama2-7B as its backbone, achieving 28.2%-30.2% higher
average plan performance and 24.7%-45.8% lower costs, even on challenging
planning tasks. The code and dataset will be available at:
https://github.com/duowuyms/OpenCATP-LLM.
| [
{
"version": "v1",
"created": "Mon, 25 Nov 2024 12:05:49 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Apr 2025 15:06:17 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wu",
"Duo",
""
],
[
"Wang",
"Jinghe",
""
],
[
"Meng",
"Yuan",
""
],
[
"Zhang",
"Yanning",
""
],
[
"Sun",
"Le",
""
],
[
"Wang",
"Zhi",
""
]
] | TITLE: CATP-LLM: Empowering Large Language Models for Cost-Aware Tool Planning
ABSTRACT: Utilizing large language models (LLMs) for tool planning has emerged as a
promising avenue for developing general AI systems, where LLMs automatically
schedule external tools (e.g. vision models) to tackle complex tasks based on
task descriptions. To push this paradigm toward practical applications, it is
crucial for LLMs to consider tool execution costs (e.g. execution time) for
tool planning. Unfortunately, prior studies overlook the tool execution costs,
leading to the generation of expensive plans whose costs outweigh task
performance. To fill this gap, we propose the Cost-Aware Tool Planning with
LLMs (CATP-LLM) framework, which for the first time provides a coherent design
to empower LLMs for cost-aware tool planning. Specifically, CATP-LLM
incorporates a tool planning language to enhance the LLM to generate
non-sequential plans of multiple branches for efficient concurrent tool
execution and cost reduction. Moreover, it further designs a cost-aware offline
reinforcement learning algorithm to fine-tune the LLM to optimize the
performance-cost trade-off in tool planning. In the absence of public cost-related
datasets, we further present OpenCATP, the first platform for cost-aware
planning evaluation. Experiments on OpenCATP show that CATP-LLM outperforms
GPT-4 even when using Llama2-7B as its backbone, achieving 28.2%-30.2% higher
average plan performance and 24.7%-45.8% lower costs, even on challenging
planning tasks. The code and dataset will be available at:
https://github.com/duowuyms/OpenCATP-LLM.
|
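Cost-aware planning over non-sequential plans implies that wall-clock cost is governed by the critical path of the tool DAG rather than the sum of tool costs. A hedged sketch of that accounting and of a performance-cost trade-off score; the lambda weight and the toy plan are assumptions:

```python
# Critical-path cost of a branched tool plan plus a simple cost-aware score.
def critical_path_cost(plan, costs):
    """plan: dict tool -> list of prerequisite tools (a DAG);
    costs: dict tool -> execution cost. Returns the longest-path cost."""
    memo = {}
    def finish(tool):
        if tool not in memo:
            deps = plan.get(tool, [])
            memo[tool] = costs[tool] + (max(finish(d) for d in deps) if deps else 0.0)
        return memo[tool]
    return max(finish(t) for t in plan)

def cost_aware_reward(task_performance, plan, costs, lam=0.1):
    return task_performance - lam * critical_path_cost(plan, costs)

# Two branches run concurrently, then merge: cost is max(3, 2) + 1 + 2 = 6, not 8.
plan = {"detect": [], "caption": [], "fuse": ["detect", "caption"], "answer": ["fuse"]}
costs = {"detect": 3.0, "caption": 2.0, "fuse": 1.0, "answer": 2.0}
print(cost_aware_reward(0.9, plan, costs))
```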
2411.16537 | Chan Hee Song | Chan Hee Song, Valts Blukis, Jonathan Tremblay, Stephen Tyree, Yu Su,
Stan Birchfield | RoboSpatial: Teaching Spatial Understanding to 2D and 3D Vision-Language
Models for Robotics | CVPR 2025 (Oral); Project Website: https://chanh.ee/RoboSpatial | null | null | null | cs.CV cs.AI cs.CL cs.RO | http://creativecommons.org/licenses/by/4.0/ | Spatial understanding is a crucial capability that enables robots to perceive
their surroundings, reason about their environment, and interact with it
meaningfully. In modern robotics, these capabilities are increasingly provided
by vision-language models. However, these models face significant challenges in
spatial reasoning tasks, as their training data are based on general-purpose
image datasets that often lack sophisticated spatial understanding. For
example, datasets frequently do not capture reference frame comprehension, yet
effective spatial reasoning requires understanding whether to reason from ego-,
world-, or object-centric perspectives. To address this issue, we introduce
RoboSpatial, a large-scale dataset for spatial understanding in robotics. It
consists of real indoor and tabletop scenes, captured as 3D scans and
egocentric images, and annotated with rich spatial information relevant to
robotics. The dataset includes 1M images, 5k 3D scans, and 3M annotated spatial
relationships, and the pairing of 2D egocentric images with 3D scans makes it
both 2D- and 3D-ready. Our experiments show that models trained with
RoboSpatial outperform baselines on downstream tasks such as spatial affordance
prediction, spatial relationship prediction, and robot manipulation.
| [
{
"version": "v1",
"created": "Mon, 25 Nov 2024 16:21:34 GMT"
},
{
"version": "v2",
"created": "Tue, 25 Mar 2025 07:49:16 GMT"
},
{
"version": "v3",
"created": "Wed, 26 Mar 2025 07:30:26 GMT"
},
{
"version": "v4",
"created": "Sat, 5 Apr 2025 06:46:03 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Song",
"Chan Hee",
""
],
[
"Blukis",
"Valts",
""
],
[
"Tremblay",
"Jonathan",
""
],
[
"Tyree",
"Stephen",
""
],
[
"Su",
"Yu",
""
],
[
"Birchfield",
"Stan",
""
]
] | TITLE: RoboSpatial: Teaching Spatial Understanding to 2D and 3D Vision-Language
Models for Robotics
ABSTRACT: Spatial understanding is a crucial capability that enables robots to perceive
their surroundings, reason about their environment, and interact with it
meaningfully. In modern robotics, these capabilities are increasingly provided
by vision-language models. However, these models face significant challenges in
spatial reasoning tasks, as their training data are based on general-purpose
image datasets that often lack sophisticated spatial understanding. For
example, datasets frequently do not capture reference frame comprehension, yet
effective spatial reasoning requires understanding whether to reason from ego-,
world-, or object-centric perspectives. To address this issue, we introduce
RoboSpatial, a large-scale dataset for spatial understanding in robotics. It
consists of real indoor and tabletop scenes, captured as 3D scans and
egocentric images, and annotated with rich spatial information relevant to
robotics. The dataset includes 1M images, 5k 3D scans, and 3M annotated spatial
relationships, and the pairing of 2D egocentric images with 3D scans makes it
both 2D- and 3D-ready. Our experiments show that models trained with
RoboSpatial outperform baselines on downstream tasks such as spatial affordance
prediction, spatial relationship prediction, and robot manipulation.
|
2411.16788 | Aishwarya Agarwal | Aishwarya Agarwal, Srikrishna Karanam, Vineet Gandhi | TIDE: Training Locally Interpretable Domain Generalization Models
Enables Test-time Correction | 15 pages, 11 figures | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We consider the problem of single-source domain generalization. Existing
methods typically rely on extensive augmentations to synthetically cover
diverse domains during training. However, they struggle with semantic shifts
(e.g., background and viewpoint changes), as they often learn global features
instead of local concepts that tend to be domain invariant. To address this
gap, we propose an approach that compels models to leverage such local concepts
during prediction. Given no suitable dataset with per-class concepts and
localization maps exists, we first develop a novel pipeline to generate
annotations by exploiting the rich features of diffusion and large-language
models. Our next innovation is TIDE, a novel training scheme with a concept
saliency alignment loss that ensures model focus on the right per-concept
regions and a local concept contrastive loss that promotes learning
domain-invariant concept representations. This not only gives a robust model
but also can be visually interpreted using the predicted concept saliency maps.
Given these maps at test time, our final contribution is a new correction
algorithm that uses the corresponding local concept representations to
iteratively refine the prediction until it aligns with prototypical concept
representations that we store at the end of model training. We evaluate our
approach extensively on four standard DG benchmark datasets and substantially
outperform the current state-of-the-art (12% improvement on average) while also
demonstrating that our predictions can be visually interpreted.
| [
{
"version": "v1",
"created": "Mon, 25 Nov 2024 08:46:37 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Apr 2025 07:08:18 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Agarwal",
"Aishwarya",
""
],
[
"Karanam",
"Srikrishna",
""
],
[
"Gandhi",
"Vineet",
""
]
] | TITLE: TIDE: Training Locally Interpretable Domain Generalization Models
Enables Test-time Correction
ABSTRACT: We consider the problem of single-source domain generalization. Existing
methods typically rely on extensive augmentations to synthetically cover
diverse domains during training. However, they struggle with semantic shifts
(e.g., background and viewpoint changes), as they often learn global features
instead of local concepts that tend to be domain invariant. To address this
gap, we propose an approach that compels models to leverage such local concepts
during prediction. Given no suitable dataset with per-class concepts and
localization maps exists, we first develop a novel pipeline to generate
annotations by exploiting the rich features of diffusion and large-language
models. Our next innovation is TIDE, a novel training scheme with a concept
saliency alignment loss that ensures model focus on the right per-concept
regions and a local concept contrastive loss that promotes learning
domain-invariant concept representations. This not only gives a robust model
but also can be visually interpreted using the predicted concept saliency maps.
Given these maps at test time, our final contribution is a new correction
algorithm that uses the corresponding local concept representations to
iteratively refine the prediction until it aligns with prototypical concept
representations that we store at the end of model training. We evaluate our
approach extensively on four standard DG benchmark datasets and substantially
outperform the current state-of-the-art (12% improvement on average) while also
demonstrating that our predictions can be visually interpreted.
|
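The test-time correction described above repeatedly revisits the prediction until the image's local concept representations agree with the stored class prototypes. A loose sketch of one such loop; the blending step and similarity-based update rule are assumptions, not the paper's exact algorithm:

```python
# Iterative test-time correction against stored concept prototypes.
import torch.nn.functional as F

def correct_prediction(concept_feats, prototypes, init_pred, max_iters=10, step=0.3):
    """concept_feats: [num_concepts, dim] for one image;
    prototypes: [num_classes, num_concepts, dim] stored after training."""
    pred, feats = init_pred, concept_feats.clone()
    for _ in range(max_iters):
        # Pull local concept representations toward the current class's
        # prototypes, then re-evaluate the class choice.
        feats = (1 - step) * feats + step * prototypes[pred]
        sims = F.cosine_similarity(feats.unsqueeze(0), prototypes, dim=-1).mean(dim=-1)
        new_pred = int(sims.argmax())
        if new_pred == pred:        # stop once the prediction is stable
            break
        pred = new_pred
    return pred
```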
2411.17150 | Chanyoung Kim | Chanyoung Kim, Dayun Ju, Woojung Han, Ming-Hsuan Yang, Seong Jae Hwang | Distilling Spectral Graph for Object-Context Aware Open-Vocabulary
Semantic Segmentation | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Open-Vocabulary Semantic Segmentation (OVSS) has advanced with recent
vision-language models (VLMs), enabling segmentation beyond predefined
categories through various learning schemes. Notably, training-free methods
offer scalable, easily deployable solutions for handling unseen data, a key
goal of OVSS. Yet, a critical issue persists: lack of object-level context
consideration when segmenting complex objects in the challenging environment of
OVSS based on arbitrary query prompts. This oversight limits models' ability to
group semantically consistent elements within an object and map them precisely to
user-defined arbitrary classes. In this work, we introduce a novel approach
that overcomes this limitation by incorporating object-level contextual
knowledge within images. Specifically, our model enhances intra-object
consistency by distilling spectral-driven features from vision foundation
models into the attention mechanism of the visual encoder, enabling
semantically coherent components to form a single object mask. Additionally, we
refine the text embeddings with zero-shot object presence likelihood to ensure
accurate alignment with the specific objects represented in the images. By
leveraging object-level contextual knowledge, our proposed approach achieves
state-of-the-art performance with strong generalizability across diverse
datasets.
| [
{
"version": "v1",
"created": "Tue, 26 Nov 2024 06:34:48 GMT"
},
{
"version": "v2",
"created": "Sun, 16 Mar 2025 10:45:45 GMT"
},
{
"version": "v3",
"created": "Mon, 7 Apr 2025 04:25:44 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Kim",
"Chanyoung",
""
],
[
"Ju",
"Dayun",
""
],
[
"Han",
"Woojung",
""
],
[
"Yang",
"Ming-Hsuan",
""
],
[
"Hwang",
"Seong Jae",
""
]
] | TITLE: Distilling Spectral Graph for Object-Context Aware Open-Vocabulary
Semantic Segmentation
ABSTRACT: Open-Vocabulary Semantic Segmentation (OVSS) has advanced with recent
vision-language models (VLMs), enabling segmentation beyond predefined
categories through various learning schemes. Notably, training-free methods
offer scalable, easily deployable solutions for handling unseen data, a key
goal of OVSS. Yet, a critical issue persists: lack of object-level context
consideration when segmenting complex objects in the challenging environment of
OVSS based on arbitrary query prompts. This oversight limits models' ability to
group semantically consistent elements within an object and map them precisely to
user-defined arbitrary classes. In this work, we introduce a novel approach
that overcomes this limitation by incorporating object-level contextual
knowledge within images. Specifically, our model enhances intra-object
consistency by distilling spectral-driven features from vision foundation
models into the attention mechanism of the visual encoder, enabling
semantically coherent components to form a single object mask. Additionally, we
refine the text embeddings with zero-shot object presence likelihood to ensure
accurate alignment with the specific objects represented in the images. By
leveraging object-level contextual knowledge, our proposed approach achieves
state-of-the-art performance with strong generalizability across diverse
datasets.
|
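Refining text embeddings with a zero-shot object presence likelihood can be sketched as re-weighting each class embedding by how likely the image is to contain that class. The pooling, temperature, and sigmoid gating below are assumptions for illustration:

```python
# Re-weight CLIP-style class text embeddings by a crude presence likelihood.
import torch
import torch.nn.functional as F

def refine_text_embeddings(image_feats, text_embeds, temperature=0.07):
    """image_feats: [num_patches, dim] visual tokens of one image;
    text_embeds: [num_classes, dim] class embeddings."""
    image_feats = F.normalize(image_feats, dim=-1)
    text_embeds = F.normalize(text_embeds, dim=-1)
    global_feat = F.normalize(image_feats.mean(dim=0), dim=-1)           # crude pooling
    presence = torch.sigmoid(global_feat @ text_embeds.T / temperature)  # [num_classes]
    # Down-weight classes the image is unlikely to contain.
    return F.normalize(text_embeds * presence.unsqueeze(-1), dim=-1)
```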
2411.17190 | Gyeongjin Kang | Gyeongjin Kang, Jisang Yoo, Jihyeon Park, Seungtae Nam, Hyeonsoo Im,
Sangheon Shin, Sangpil Kim, Eunbyung Park | SelfSplat: Pose-Free and 3D Prior-Free Generalizable 3D Gaussian
Splatting | Project page: https://gynjn.github.io/selfsplat/ | null | null | null | cs.CV | http://creativecommons.org/publicdomain/zero/1.0/ | We propose SelfSplat, a novel 3D Gaussian Splatting model designed to perform
pose-free and 3D prior-free generalizable 3D reconstruction from unposed
multi-view images. These settings are inherently ill-posed due to the lack of
ground-truth data, learned geometric information, and the need to achieve
accurate 3D reconstruction without finetuning, making it difficult for
conventional methods to achieve high-quality results. Our model addresses these
challenges by effectively integrating explicit 3D representations with
self-supervised depth and pose estimation techniques, resulting in reciprocal
improvements in both pose accuracy and 3D reconstruction quality. Furthermore,
we incorporate a matching-aware pose estimation network and a depth refinement
module to enhance geometry consistency across views, ensuring more accurate and
stable 3D reconstructions. To present the performance of our method, we
evaluated it on large-scale real-world datasets, including RealEstate10K, ACID,
and DL3DV. SelfSplat achieves superior results over previous state-of-the-art
methods in both appearance and geometry quality, and also demonstrates strong
cross-dataset generalization capabilities. Extensive ablation studies and
analysis also validate the effectiveness of our proposed methods. Code and
pretrained models are available at https://gynjn.github.io/selfsplat/
| [
{
"version": "v1",
"created": "Tue, 26 Nov 2024 08:01:50 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Nov 2024 06:00:49 GMT"
},
{
"version": "v3",
"created": "Thu, 28 Nov 2024 04:44:33 GMT"
},
{
"version": "v4",
"created": "Tue, 25 Mar 2025 03:33:42 GMT"
},
{
"version": "v5",
"created": "Sun, 6 Apr 2025 06:08:27 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Kang",
"Gyeongjin",
""
],
[
"Yoo",
"Jisang",
""
],
[
"Park",
"Jihyeon",
""
],
[
"Nam",
"Seungtae",
""
],
[
"Im",
"Hyeonsoo",
""
],
[
"Shin",
"Sangheon",
""
],
[
"Kim",
"Sangpil",
""
],
[
"Park",
"Eunbyung",
""
]
] | TITLE: SelfSplat: Pose-Free and 3D Prior-Free Generalizable 3D Gaussian
Splatting
ABSTRACT: We propose SelfSplat, a novel 3D Gaussian Splatting model designed to perform
pose-free and 3D prior-free generalizable 3D reconstruction from unposed
multi-view images. These settings are inherently ill-posed due to the lack of
ground-truth data, learned geometric information, and the need to achieve
accurate 3D reconstruction without finetuning, making it difficult for
conventional methods to achieve high-quality results. Our model addresses these
challenges by effectively integrating explicit 3D representations with
self-supervised depth and pose estimation techniques, resulting in reciprocal
improvements in both pose accuracy and 3D reconstruction quality. Furthermore,
we incorporate a matching-aware pose estimation network and a depth refinement
module to enhance geometry consistency across views, ensuring more accurate and
stable 3D reconstructions. To present the performance of our method, we
evaluated it on large-scale real-world datasets, including RealEstate10K, ACID,
and DL3DV. SelfSplat achieves superior results over previous state-of-the-art
methods in both appearance and geometry quality, and also demonstrates strong
cross-dataset generalization capabilities. Extensive ablation studies and
analysis also validate the effectiveness of our proposed methods. Code and
pretrained models are available at https://gynjn.github.io/selfsplat/
|
2411.17911 | Hong-Hanh Nguyen-Le | Hong-Hanh Nguyen-Le, Van-Tuan Tran, Dinh-Thuc Nguyen and Nhien-An
Le-Khac | Passive Deepfake Detection Across Multi-modalities: A Comprehensive
Survey | 35 pages | null | null | null | cs.CV cs.CR | http://creativecommons.org/licenses/by-nc-sa/4.0/ | In recent years, deepfakes (DFs) have been utilized for malicious purposes,
such as individual impersonation, misinformation spreading, and artists' style
imitation, raising questions about ethical and security concerns. In this
survey, we provide a comprehensive review and comparison of passive DF
detection across multiple modalities, including image, video, audio, and
multi-modal, to explore the inter-modality relationships between them. Beyond
detection accuracy, we extend our analysis to encompass crucial performance
dimensions essential for real-world deployment: generalization capabilities
across novel generation techniques, robustness against adversarial
manipulations and postprocessing techniques, attribution precision in
identifying generation sources, and resilience under real-world operational
conditions. Additionally, we analyze the advantages and limitations of existing
datasets, benchmarks, and evaluation metrics for passive DF detection. Finally,
we propose future research directions that address these unexplored and
emerging issues in the field of passive DF detection. This survey offers
researchers and practitioners a comprehensive resource for understanding the
current landscape, methodological approaches, and promising future directions
in this rapidly evolving field.
| [
{
"version": "v1",
"created": "Tue, 26 Nov 2024 22:04:49 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Apr 2025 18:48:12 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Nguyen-Le",
"Hong-Hanh",
""
],
[
"Tran",
"Van-Tuan",
""
],
[
"Nguyen",
"Dinh-Thuc",
""
],
[
"Le-Khac",
"Nhien-An",
""
]
] | TITLE: Passive Deepfake Detection Across Multi-modalities: A Comprehensive
Survey
ABSTRACT: In recent years, deepfakes (DFs) have been utilized for malicious purposes,
such as individual impersonation, misinformation spreading, and artists' style
imitation, raising questions about ethical and security concerns. In this
survey, we provide a comprehensive review and comparison of passive DF
detection across multiple modalities, including image, video, audio, and
multi-modal, to explore the inter-modality relationships between them. Beyond
detection accuracy, we extend our analysis to encompass crucial performance
dimensions essential for real-world deployment: generalization capabilities
across novel generation techniques, robustness against adversarial
manipulations and postprocessing techniques, attribution precision in
identifying generation sources, and resilience under real-world operational
conditions. Additionally, we analyze the advantages and limitations of existing
datasets, benchmarks, and evaluation metrics for passive DF detection. Finally,
we propose future research directions that address these unexplored and
emerging issues in the field of passive DF detection. This survey offers
researchers and practitioners a comprehensive resource for understanding the
current landscape, methodological approaches, and promising future directions
in this rapidly evolving field.
|
2412.01440 | Zhixiang Wang | Zhixiang Wang, Xiaosen Wang, Bo Wang, Siheng Chen, Zhibo Wang, Xingjun
Ma, Yu-Gang Jiang | DiffPatch: Generating Customizable Adversarial Patches using Diffusion
Models | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Physical adversarial patches printed on clothing can enable individuals to
evade person detectors, but most existing methods prioritize attack
effectiveness over stealthiness, resulting in aesthetically unpleasing patches.
While generative adversarial networks and diffusion models can produce more
natural-looking patches, they often fail to balance stealthiness with attack
effectiveness and lack flexibility for user customization. To address these
limitations, we propose DiffPatch, a novel diffusion-based framework for
generating customizable and naturalistic adversarial patches. Our approach
allows users to start from a reference image (rather than random noise) and
incorporates masks to create patches of various shapes, not limited to squares.
To preserve the original semantics during the diffusion process, we employ
Null-text inversion to map random noise samples to a single input image and
generate patches through Incomplete Diffusion Optimization (IDO). Our method
achieves attack performance comparable to state-of-the-art non-naturalistic
patches while maintaining a natural appearance. Using DiffPatch, we construct
AdvT-shirt-1K, the first physical adversarial T-shirt dataset comprising over a
thousand images captured in diverse scenarios. AdvT-shirt-1K can serve as a
useful dataset for training or testing future defense methods.
| [
{
"version": "v1",
"created": "Mon, 2 Dec 2024 12:30:35 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Dec 2024 06:47:08 GMT"
},
{
"version": "v3",
"created": "Mon, 7 Apr 2025 15:38:19 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wang",
"Zhixiang",
""
],
[
"Wang",
"Xiaosen",
""
],
[
"Wang",
"Bo",
""
],
[
"Chen",
"Siheng",
""
],
[
"Wang",
"Zhibo",
""
],
[
"Ma",
"Xingjun",
""
],
[
"Jiang",
"Yu-Gang",
""
]
] | TITLE: DiffPatch: Generating Customizable Adversarial Patches using Diffusion
Models
ABSTRACT: Physical adversarial patches printed on clothing can enable individuals to
evade person detectors, but most existing methods prioritize attack
effectiveness over stealthiness, resulting in aesthetically unpleasing patches.
While generative adversarial networks and diffusion models can produce more
natural-looking patches, they often fail to balance stealthiness with attack
effectiveness and lack flexibility for user customization. To address these
limitations, we propose DiffPatch, a novel diffusion-based framework for
generating customizable and naturalistic adversarial patches. Our approach
allows users to start from a reference image (rather than random noise) and
incorporates masks to create patches of various shapes, not limited to squares.
To preserve the original semantics during the diffusion process, we employ
Null-text inversion to map random noise samples to a single input image and
generate patches through Incomplete Diffusion Optimization (IDO). Our method
achieves attack performance comparable to state-of-the-art non-naturalistic
patches while maintaining a natural appearance. Using DiffPatch, we construct
AdvT-shirt-1K, the first physical adversarial T-shirt dataset comprising over a
thousand images captured in diverse scenarios. AdvT-shirt-1K can serve as a
useful dataset for training or testing future defense methods.
|
2412.02205 | Luoxuan Weng | Luoxuan Weng, Yinghao Tang, Yingchaojie Feng, Zhuo Chang, Ruiqin Chen,
Haozhe Feng, Chen Hou, Danqing Huang, Yang Li, Huaming Rao, Haonan Wang,
Canshi Wei, Xiaofeng Yang, Yuhui Zhang, Yifeng Zheng, Xiuqi Huang, Minfeng
Zhu, Yuxin Ma, Bin Cui, Peng Chen, Wei Chen | DataLab: A Unified Platform for LLM-Powered Business Intelligence | Accepted to ICDE 2025 | null | null | null | cs.DB cs.AI cs.CL | http://creativecommons.org/licenses/by/4.0/ | Business intelligence (BI) transforms large volumes of data within modern
organizations into actionable insights for informed decision-making. Recently,
large language model (LLM)-based agents have streamlined the BI workflow by
automatically performing task planning, reasoning, and actions in executable
environments based on natural language (NL) queries. However, existing
approaches primarily focus on individual BI tasks such as NL2SQL and NL2VIS.
The fragmentation of tasks across different data roles and tools leads to
inefficiencies and potential errors due to the iterative and collaborative
nature of BI. In this paper, we introduce DataLab, a unified BI platform that
integrates a one-stop LLM-based agent framework with an augmented computational
notebook interface. DataLab supports various BI tasks for different data roles
in data preparation, analysis, and visualization by seamlessly combining LLM
assistance with user customization within a single environment. To achieve this
unification, we design a domain knowledge incorporation module tailored for
enterprise-specific BI tasks, an inter-agent communication mechanism to
facilitate information sharing across the BI workflow, and a cell-based context
management strategy to enhance context utilization efficiency in BI notebooks.
Extensive experiments demonstrate that DataLab achieves state-of-the-art
performance on various BI tasks across popular research benchmarks. Moreover,
DataLab maintains high effectiveness and efficiency on real-world datasets from
Tencent, achieving up to a 58.58% increase in accuracy and a 61.65% reduction
in token cost on enterprise-specific BI tasks.
| [
{
"version": "v1",
"created": "Tue, 3 Dec 2024 06:47:15 GMT"
},
{
"version": "v2",
"created": "Wed, 4 Dec 2024 16:12:08 GMT"
},
{
"version": "v3",
"created": "Mon, 7 Apr 2025 12:01:15 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Weng",
"Luoxuan",
""
],
[
"Tang",
"Yinghao",
""
],
[
"Feng",
"Yingchaojie",
""
],
[
"Chang",
"Zhuo",
""
],
[
"Chen",
"Ruiqin",
""
],
[
"Feng",
"Haozhe",
""
],
[
"Hou",
"Chen",
""
],
[
"Huang",
"Danqing",
""
],
[
"Li",
"Yang",
""
],
[
"Rao",
"Huaming",
""
],
[
"Wang",
"Haonan",
""
],
[
"Wei",
"Canshi",
""
],
[
"Yang",
"Xiaofeng",
""
],
[
"Zhang",
"Yuhui",
""
],
[
"Zheng",
"Yifeng",
""
],
[
"Huang",
"Xiuqi",
""
],
[
"Zhu",
"Minfeng",
""
],
[
"Ma",
"Yuxin",
""
],
[
"Cui",
"Bin",
""
],
[
"Chen",
"Peng",
""
],
[
"Chen",
"Wei",
""
]
] | TITLE: DataLab: A Unified Platform for LLM-Powered Business Intelligence
ABSTRACT: Business intelligence (BI) transforms large volumes of data within modern
organizations into actionable insights for informed decision-making. Recently,
large language model (LLM)-based agents have streamlined the BI workflow by
automatically performing task planning, reasoning, and actions in executable
environments based on natural language (NL) queries. However, existing
approaches primarily focus on individual BI tasks such as NL2SQL and NL2VIS.
The fragmentation of tasks across different data roles and tools leads to
inefficiencies and potential errors due to the iterative and collaborative
nature of BI. In this paper, we introduce DataLab, a unified BI platform that
integrates a one-stop LLM-based agent framework with an augmented computational
notebook interface. DataLab supports various BI tasks for different data roles
in data preparation, analysis, and visualization by seamlessly combining LLM
assistance with user customization within a single environment. To achieve this
unification, we design a domain knowledge incorporation module tailored for
enterprise-specific BI tasks, an inter-agent communication mechanism to
facilitate information sharing across the BI workflow, and a cell-based context
management strategy to enhance context utilization efficiency in BI notebooks.
Extensive experiments demonstrate that DataLab achieves state-of-the-art
performance on various BI tasks across popular research benchmarks. Moreover,
DataLab maintains high effectiveness and efficiency on real-world datasets from
Tencent, achieving up to a 58.58% increase in accuracy and a 61.65% reduction
in token cost on enterprise-specific BI tasks.
|
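Cell-based context management of the general kind described above can be sketched as ranking notebook cells by relevance to the current query and greedily packing them into a token budget. The `embed` function and the budget are placeholders:

```python
# Greedy, relevance-ranked selection of notebook cells under a token budget.
def select_context_cells(cells, query_embedding, embed, budget_tokens=2000):
    """cells: list of dicts like {"source": str, "tokens": int};
    embed: assumed function mapping text to an embedding vector."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = sum(x * x for x in a) ** 0.5
        nb = sum(y * y for y in b) ** 0.5
        return dot / (na * nb + 1e-8)

    scored = sorted(cells,
                    key=lambda c: cosine(embed(c["source"]), query_embedding),
                    reverse=True)
    context, used = [], 0
    for cell in scored:
        if used + cell["tokens"] <= budget_tokens:
            context.append(cell)
            used += cell["tokens"]
    return context
```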
2412.03848 | Omar Elezabi | Omar Elezabi, Marcos V. Conde, Zongwei Wu, Radu Timofte | INRetouch: Context Aware Implicit Neural Representation for Photography
Retouching | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Professional photo editing remains challenging, requiring extensive knowledge
of imaging pipelines and significant expertise. While recent deep learning
approaches, particularly style transfer methods, have attempted to automate
this process, they often struggle with output fidelity, editing control, and
complex retouching capabilities. We propose a novel retouch transfer approach
that learns from professional edits through before-after image pairs, enabling
precise replication of complex editing operations. We develop a context-aware
Implicit Neural Representation that learns to apply edits adaptively based on
image content and context, and is capable of learning from a single example.
Our method extracts implicit transformations from reference edits and
adaptively applies them to new images. To facilitate this research direction,
we introduce a comprehensive Photo Retouching Dataset comprising 100,000
high-quality images edited using over 170 professional Adobe Lightroom presets.
Through extensive evaluation, we demonstrate that our approach not only
surpasses existing methods in photo retouching but also enhances performance in
related image reconstruction tasks like Gamut Mapping and Raw Reconstruction.
By bridging the gap between professional editing capabilities and automated
solutions, our work presents a significant step toward making sophisticated
photo editing more accessible while maintaining high-fidelity results. Check
the project page at https://omaralezaby.github.io/inretouch for more results
and information about code and dataset availability.
| [
{
"version": "v1",
"created": "Thu, 5 Dec 2024 03:31:48 GMT"
},
{
"version": "v2",
"created": "Wed, 11 Dec 2024 16:26:09 GMT"
},
{
"version": "v3",
"created": "Sun, 6 Apr 2025 17:25:45 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Elezabi",
"Omar",
""
],
[
"Conde",
"Marcos V.",
""
],
[
"Wu",
"Zongwei",
""
],
[
"Timofte",
"Radu",
""
]
] | TITLE: INRetouch: Context Aware Implicit Neural Representation for Photography
Retouching
ABSTRACT: Professional photo editing remains challenging, requiring extensive knowledge
of imaging pipelines and significant expertise. While recent deep learning
approaches, particularly style transfer methods, have attempted to automate
this process, they often struggle with output fidelity, editing control, and
complex retouching capabilities. We propose a novel retouch transfer approach
that learns from professional edits through before-after image pairs, enabling
precise replication of complex editing operations. We develop a context-aware
Implicit Neural Representation that learns to apply edits adaptively based on
image content and context, and is capable of learning from a single example.
Our method extracts implicit transformations from reference edits and
adaptively applies them to new images. To facilitate this research direction,
we introduce a comprehensive Photo Retouching Dataset comprising 100,000
high-quality images edited using over 170 professional Adobe Lightroom presets.
Through extensive evaluation, we demonstrate that our approach not only
surpasses existing methods in photo retouching but also enhances performance in
related image reconstruction tasks like Gamut Mapping and Raw Reconstruction.
By bridging the gap between professional editing capabilities and automated
solutions, our work presents a significant step toward making sophisticated
photo editing more accessible while maintaining high-fidelity results. Check
the project page at https://omaralezaby.github.io/inretouch for more results
and information about code and dataset availability.
|
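Learning a retouch from a single before-after pair can be sketched as fitting a small network that maps input pixel colours (plus, in the full method, context features) to edited colours, then applying it to new images. Network size and training schedule below are arbitrary:

```python
# Toy fit of an implicit edit from one before/after pair.
import torch
import torch.nn as nn

edit_mlp = nn.Sequential(nn.Linear(3, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, 3))

def fit_edit(before, after, steps=2000, lr=1e-3):
    # before, after: [N, 3] tensors of RGB values in [0, 1] from one pair.
    opt = torch.optim.Adam(edit_mlp.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(edit_mlp(before), after)
        loss.backward()
        opt.step()
    return edit_mlp

# Applying the learned edit to a new image's pixels:
# edited = fit_edit(before_pixels, after_pixels)(new_pixels).clamp(0, 1)
```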
2412.03937 | Kiyohiro Nakayama | Kiyohiro Nakayama and Jan Ackermann and Timur Levent Kesdogan and Yang
Zheng and Maria Korosteleva and Olga Sorkine-Hornung and Leonidas J. Guibas
and Guandao Yang and Gordon Wetzstein | AIpparel: A Multimodal Foundation Model for Digital Garments | The project website is at https://georgenakayama.github.io/AIpparel/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Apparel is essential to human life, offering protection, mirroring cultural
identities, and showcasing personal style. Yet, the creation of garments
remains a time-consuming process, largely due to the manual work involved in
designing them. To simplify this process, we introduce AIpparel, a multimodal
foundation model for generating and editing sewing patterns. Our model
fine-tunes state-of-the-art large multimodal models (LMMs) on a custom-curated
large-scale dataset of over 120,000 unique garments, each with multimodal
annotations including text, images, and sewing patterns. Additionally, we
propose a novel tokenization scheme that concisely encodes these complex sewing
patterns so that LLMs can learn to predict them efficiently. AIpparel achieves
state-of-the-art performance in single-modal tasks, including text-to-garment
and image-to-garment prediction, and enables novel multimodal garment
generation applications such as interactive garment editing. The project
website is at https://georgenakayama.github.io/AIpparel/.
| [
{
"version": "v1",
"created": "Thu, 5 Dec 2024 07:35:19 GMT"
},
{
"version": "v2",
"created": "Fri, 13 Dec 2024 06:15:54 GMT"
},
{
"version": "v3",
"created": "Mon, 16 Dec 2024 02:39:18 GMT"
},
{
"version": "v4",
"created": "Tue, 25 Mar 2025 06:59:40 GMT"
},
{
"version": "v5",
"created": "Sat, 5 Apr 2025 21:29:28 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Nakayama",
"Kiyohiro",
""
],
[
"Ackermann",
"Jan",
""
],
[
"Kesdogan",
"Timur Levent",
""
],
[
"Zheng",
"Yang",
""
],
[
"Korosteleva",
"Maria",
""
],
[
"Sorkine-Hornung",
"Olga",
""
],
[
"Guibas",
"Leonidas J.",
""
],
[
"Yang",
"Guandao",
""
],
[
"Wetzstein",
"Gordon",
""
]
] | TITLE: AIpparel: A Multimodal Foundation Model for Digital Garments
ABSTRACT: Apparel is essential to human life, offering protection, mirroring cultural
identities, and showcasing personal style. Yet, the creation of garments
remains a time-consuming process, largely due to the manual work involved in
designing them. To simplify this process, we introduce AIpparel, a multimodal
foundation model for generating and editing sewing patterns. Our model
fine-tunes state-of-the-art large multimodal models (LMMs) on a custom-curated
large-scale dataset of over 120,000 unique garments, each with multimodal
annotations including text, images, and sewing patterns. Additionally, we
propose a novel tokenization scheme that concisely encodes these complex sewing
patterns so that LLMs can learn to predict them efficiently. AIpparel achieves
state-of-the-art performance in single-modal tasks, including text-to-garment
and image-to-garment prediction, and enables novel multimodal garment
generation applications such as interactive garment editing. The project
website is at https://georgenakayama.github.io/AIpparel/.
|
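A concise tokenization of sewing patterns could, for example, quantize panel vertex coordinates into discrete bins and wrap each panel in delimiter tokens. The vocabulary layout below is a hypothetical illustration, not the paper's actual scheme:

```python
# Hypothetical sewing-pattern tokenizer: quantized vertex coordinates plus
# panel delimiter tokens.
def tokenize_pattern(panels, num_bins=256, extent=2.0):
    """panels: list of panels, each a list of (x, y) vertices in metres,
    assumed to lie within [-extent/2, extent/2]."""
    PANEL_START, PANEL_END = num_bins, num_bins + 1   # special token ids
    def quantize(v):
        ratio = (v + extent / 2) / extent
        return max(0, min(num_bins - 1, int(ratio * num_bins)))

    tokens = []
    for panel in panels:
        tokens.append(PANEL_START)
        for x, y in panel:
            tokens.extend([quantize(x), quantize(y)])
        tokens.append(PANEL_END)
    return tokens

print(tokenize_pattern([[(-0.3, 0.4), (0.3, 0.4), (0.0, -0.5)]]))
```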
2412.04272 | Qingyang Mao | Qingyang Mao, Qi Liu, Zhi Li, Mingyue Cheng, Zheng Zhang, Rui Li | PoTable: Towards Systematic Thinking via Stage-oriented
Plan-then-Execute Reasoning on Tables | 10 pages, 6 figures | null | null | null | cs.IR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, table reasoning has garnered substantial research interest,
particularly its integration with Large Language Models (LLMs) which
revolutionize natural language applications. Existing typical LLM-based studies
realize step-by-step reasoning, improving capabilities in table
understanding and analysis. While these approaches emphasize autonomous
exploration to accomplish the task objective, they overlook systematic thinking
in the reasoning process, leading to potential risks of omitted steps,
disorganized logic and misleading results. In this paper, we propose PoTable, a
novel stage-oriented plan-then-execute reasoning approach that achieves
systematic thinking on tables. Specifically, PoTable deploys several distinct
tabular analytical stages with clear objectives and achieves stage-by-stage
reasoning. To accomplish the stage-specific goal, PoTable conducts
plan-then-execute reasoning, which first plans the operation chain under the
stage objective, and then executes each operation sequentially through code
generation, real-time running and feedback processing. As a result, PoTable can
produce reliable table reasoning results with highly accurate, step-by-step commented
and completely executable programs. It possesses a high degree of alignment
with a distinguished tabular data analyst, offering advantages of high accuracy
and explainability. Finally, we conduct extensive experiments over four
evaluation datasets from WikiTQ and TabFact benchmarks, where the results
demonstrate the effectiveness of PoTable, as well as its efficiency and
explainability.
| [
{
"version": "v1",
"created": "Thu, 5 Dec 2024 15:54:16 GMT"
},
{
"version": "v2",
"created": "Thu, 26 Dec 2024 02:24:52 GMT"
},
{
"version": "v3",
"created": "Sat, 5 Apr 2025 10:18:34 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Mao",
"Qingyang",
""
],
[
"Liu",
"Qi",
""
],
[
"Li",
"Zhi",
""
],
[
"Cheng",
"Mingyue",
""
],
[
"Zhang",
"Zheng",
""
],
[
"Li",
"Rui",
""
]
] | TITLE: PoTable: Towards Systematic Thinking via Stage-oriented
Plan-then-Execute Reasoning on Tables
ABSTRACT: In recent years, table reasoning has garnered substantial research interest,
particularly its integration with Large Language Models (LLMs) which
revolutionize natural language applications. Existing typical LLM-based studies
realize step-by-step reasoning, improving capabilities in table
understanding and analysis. While these approaches emphasize autonomous
exploration to accomplish the task objective, they overlook systematic thinking
in the reasoning process, leading to potential risks of omitted steps,
disorganized logic and misleading results. In this paper, we propose PoTable, a
novel stage-oriented plan-then-execute reasoning approach that achieves
systematic thinking on tables. Specifically, PoTable deploys several distinct
tabular analytical stages with clear objectives and achieves stage-by-stage
reasoning. To accomplish the stage-specific goal, PoTable conducts
plan-then-execute reasoning, which first plans the operation chain under the
stage objective, and then executes each operation sequentially through code
generation, real-time running and feedback processing. As a result, PoTable can
produce reliable table reasoning results with highly accurate, step-by-step commented
and completely executable programs. It possesses a high degree of alignment
with a distinguished tabular data analyst, offering advantages of high accuracy
and explainability. Finally, we conduct extensive experiments over four
evaluation datasets from WikiTQ and TabFact benchmarks, where the results
demonstrate the effectiveness of PoTable, as well as its efficiency and
explainability.
|
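The stage-oriented plan-then-execute procedure can be sketched as a loop that, per stage, plans an operation chain and then turns each operation into code, runs it, and feeds errors back for repair. `llm` (assumed to return a list of operation strings when planning and a code string otherwise) and `run_code` (a sandboxed executor returning a result and an error) are hypothetical helpers:

```python
# Schematic stage-oriented plan-then-execute loop over a table.
def reason_over_table(table, question, stages, llm, run_code):
    history = []
    for stage in stages:                      # e.g. clean -> filter -> aggregate -> answer
        plan = llm(f"Stage '{stage}' for question '{question}'. "
                   f"History so far: {history}. List the operations to perform.")
        for operation in plan:
            code = llm(f"Write Python operating on `table` to: {operation}")
            result, error = run_code(code, table)
            if error:                         # feedback loop: ask for a fix and retry once
                code = llm(f"The code failed with: {error}. Fix it.\n{code}")
                result, error = run_code(code, table)
            history.append({"stage": stage, "operation": operation,
                            "code": code, "result": result})
    return history[-1]["result"] if history else None
```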
2412.04307 | Changsheng Gao | Changsheng Gao, Yifan Ma, Qiaoxi Chen, Yenan Xu, Dong Liu, Weisi Lin | Feature Coding in the Era of Large Models: Dataset, Test Conditions, and
Benchmark | null | null | null | null | cs.MM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large models have achieved remarkable performance across various tasks, yet
they incur significant computational costs and privacy concerns during both
training and inference. Distributed deployment has emerged as a potential
solution, but it necessitates the exchange of intermediate information between
model segments, with feature representations serving as crucial information
carriers. To optimize information exchange, feature coding methods are applied
to reduce transmission and storage overhead. Despite its importance, feature
coding for large models remains an under-explored area. In this paper, we draw
attention to large model feature coding and make three contributions to this
field. First, we introduce a comprehensive dataset encompassing diverse
features generated by three representative types of large models. Second, we
establish unified test conditions, enabling standardized evaluation pipelines
and fair comparisons across future feature coding studies. Third, we introduce
two baseline methods derived from widely used image coding techniques and
benchmark their performance on the proposed dataset. These contributions aim to
advance the field of feature coding, facilitating more efficient large model
deployment. All source code and the dataset are now available at
\href{https://github.com/chansongoal/FCM-LM/tree/master}{https://github.com/chansongoal/FCM-LM/tree/master}.
| [
{
"version": "v1",
"created": "Thu, 5 Dec 2024 16:26:37 GMT"
},
{
"version": "v2",
"created": "Fri, 3 Jan 2025 13:17:32 GMT"
},
{
"version": "v3",
"created": "Mon, 7 Apr 2025 07:22:06 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Gao",
"Changsheng",
""
],
[
"Ma",
"Yifan",
""
],
[
"Chen",
"Qiaoxi",
""
],
[
"Xu",
"Yenan",
""
],
[
"Liu",
"Dong",
""
],
[
"Lin",
"Weisi",
""
]
] | TITLE: Feature Coding in the Era of Large Models: Dataset, Test Conditions, and
Benchmark
ABSTRACT: Large models have achieved remarkable performance across various tasks, yet
they incur significant computational costs and privacy concerns during both
training and inference. Distributed deployment has emerged as a potential
solution, but it necessitates the exchange of intermediate information between
model segments, with feature representations serving as crucial information
carriers. To optimize information exchange, feature coding methods are applied
to reduce transmission and storage overhead. Despite its importance, feature
coding for large models remains an under-explored area. In this paper, we draw
attention to large model feature coding and make three contributions to this
field. First, we introduce a comprehensive dataset encompassing diverse
features generated by three representative types of large models. Second, we
establish unified test conditions, enabling standardized evaluation pipelines
and fair comparisons across future feature coding studies. Third, we introduce
two baseline methods derived from widely used image coding techniques and
benchmark their performance on the proposed dataset. These contributions aim to
advance the field of feature coding, facilitating more efficient large model
deployment. All source code and the dataset are now available at
\href{https://github.com/chansongoal/FCM-LM/tree/master}{https://github.com/chansongoal/FCM-LM/tree/master}.
|
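A baseline feature codec in the spirit of reusing image-coding ideas can be as simple as uniform quantization followed by entropy coding. In the sketch below, zlib stands in for a proper image codec and the 8-bit depth is an arbitrary choice:

```python
# Uniform quantization + entropy coding of a feature tensor (toy baseline).
import zlib
import numpy as np

def encode_features(feats: np.ndarray, bits: int = 8):
    lo, hi = float(feats.min()), float(feats.max())
    levels = 2 ** bits - 1
    q = np.round((feats - lo) / max(hi - lo, 1e-12) * levels).astype(np.uint8)
    return zlib.compress(q.tobytes()), (lo, hi, feats.shape)

def decode_features(payload, meta, bits: int = 8):
    lo, hi, shape = meta
    levels = 2 ** bits - 1
    q = np.frombuffer(zlib.decompress(payload), dtype=np.uint8).reshape(shape)
    return q.astype(np.float32) / levels * (hi - lo) + lo

feats = np.random.randn(16, 768).astype(np.float32)
blob, meta = encode_features(feats)
print(len(blob), np.abs(decode_features(blob, meta) - feats).max())
```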
2412.04686 | Brian Moser | Simon Florian Koch, Brian Moser, Anton\'in Lindner, Valerio Dao,
Ignacio Asensi, Daniela Bortoletto, Marianne Brekkum, Florian Dachs, Hans
Ludwig Joos, Milou van Rijnbach, Abhishek Sharma, Ismet Siral, Carlos Solans,
Yingjie Wei | Measuring the ATLAS ITk Pixel Detector Material via Multiple Scattering
of Positrons at the CERN PS | 12 pages, 12 figures | Eur. Phys. J. C 85, 381 (2025) | 10.1140/epjc/s10052-025-14092-2 | null | physics.ins-det hep-ex | http://creativecommons.org/licenses/by/4.0/ | The ITk is a new silicon tracker for the ATLAS experiment designed to
increase detector resolution, readout capacity, and radiation hardness, in
preparation for the larger number of simultaneous proton-proton interactions at
the High Luminosity LHC. This paper presents the first direct measurement of
the material budget of an ATLAS ITk pixel module, performed at a testbeam at
the CERN Proton Synchrotron via the multiple scattering of low energy positrons
within the module volume. Using a four plane telescope of thin monolithic pixel
detectors from the MALTA collaboration, scattering datasets were recorded at a
beam energy of $1.2\,\text{GeV}$. Kink angle distributions were extracted from
tracks derived with and without information from the ITk pixel module, and were
fit to extract the RMS scattering angle, which was converted to a fractional
radiation length $x/X_0$. The average $x/X_0$ across the module was measured as
$[0.89 \pm 0.01 \text{ (resolution)} \pm 0.01 \text{ (subtraction)} \pm 0.08
\text{ (beam momentum band)}]\%$, which agrees within uncertainties with an
estimate of $0.88\%$ derived from material component expectations.
| [
{
"version": "v1",
"created": "Fri, 6 Dec 2024 00:57:04 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Koch",
"Simon Florian",
""
],
[
"Moser",
"Brian",
""
],
[
"Lindner",
"Antonín",
""
],
[
"Dao",
"Valerio",
""
],
[
"Asensi",
"Ignacio",
""
],
[
"Bortoletto",
"Daniela",
""
],
[
"Brekkum",
"Marianne",
""
],
[
"Dachs",
"Florian",
""
],
[
"Joos",
"Hans Ludwig",
""
],
[
"van Rijnbach",
"Milou",
""
],
[
"Sharma",
"Abhishek",
""
],
[
"Siral",
"Ismet",
""
],
[
"Solans",
"Carlos",
""
],
[
"Wei",
"Yingjie",
""
]
] | TITLE: Measuring the ATLAS ITk Pixel Detector Material via Multiple Scattering
of Positrons at the CERN PS
ABSTRACT: The ITk is a new silicon tracker for the ATLAS experiment designed to
increase detector resolution, readout capacity, and radiation hardness, in
preparation for the larger number of simultaneous proton-proton interactions at
the High Luminosity LHC. This paper presents the first direct measurement of
the material budget of an ATLAS ITk pixel module, performed at a testbeam at
the CERN Proton Synchrotron via the multiple scattering of low energy positrons
within the module volume. Using a four plane telescope of thin monolithic pixel
detectors from the MALTA collaboration, scattering datasets were recorded at a
beam energy of $1.2\,\text{GeV}$. Kink angle distributions were extracted from
tracks derived with and without information from the ITk pixel module, and were
fit to extract the RMS scattering angle, which was converted to a fractional
radiation length $x/X_0$. The average $x/X_0$ across the module was measured as
$[0.89 \pm 0.01 \text{ (resolution)} \pm 0.01 \text{ (subtraction)} \pm 0.08
\text{ (beam momentum band)}]\%$, which agrees within uncertainties with an
estimate of $0.88\%$ derived from material component expectations.
|
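The conversion from a fitted RMS scattering angle to a fractional radiation length can be done by numerically inverting the PDG Highland formula. A numerical sketch with beta taken as 1 for 1.2 GeV positrons; the measured angle below is a placeholder, not the paper's value:

```python
# Invert the PDG Highland formula to recover x/X0 from an RMS scattering angle.
import math
from scipy.optimize import brentq

def highland_theta0(x_over_X0, p_mev=1200.0, beta=1.0, z=1.0):
    """RMS multiple-scattering angle (radians) predicted by the Highland formula."""
    return (13.6 / (beta * p_mev)) * z * math.sqrt(x_over_X0) * \
           (1.0 + 0.038 * math.log(x_over_X0 * z**2 / beta**2))

def x_over_X0_from_angle(theta_rms, p_mev=1200.0):
    # The bracket comfortably contains material budgets around one percent.
    return brentq(lambda x: highland_theta0(x, p_mev) - theta_rms, 1e-5, 0.5)

theta_measured = 9.0e-4      # rad, illustrative value only
print(f"x/X0 ~ {100 * x_over_X0_from_angle(theta_measured):.2f} %")
```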
2412.05826 | Yuanbo Xiangli | Yuanbo Xiangli, Ruojin Cai, Hanyu Chen, Jeffrey Byrne, Noah Snavely | Doppelgangers++: Improved Visual Disambiguation with Geometric 3D
Features | Project page can be found in
https://doppelgangers25.github.io/doppelgangers_plusplus/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Accurate 3D reconstruction is frequently hindered by visual aliasing, where
visually similar but distinct surfaces (aka doppelgangers) are incorrectly
matched. These spurious matches distort the structure-from-motion (SfM)
process, leading to misplaced model elements and reduced accuracy. Prior
efforts addressed this with CNN classifiers trained on curated datasets, but
these approaches struggle to generalize across diverse real-world scenes and
can require extensive parameter tuning. In this work, we present
Doppelgangers++, a method to enhance doppelganger detection and improve 3D
reconstruction accuracy. Our contributions include a diversified training
dataset that incorporates geo-tagged images from everyday scenes to expand
robustness beyond landmark-based datasets. We further propose a
Transformer-based classifier that leverages 3D-aware features from the MASt3R
model, achieving superior precision and recall across both in-domain and
out-of-domain tests. Doppelgangers++ integrates seamlessly into standard SfM
and MASt3R-SfM pipelines, offering efficiency and adaptability across varied
scenes. To evaluate SfM accuracy, we introduce an automated, geotag-based
method for validating reconstructed models, eliminating the need for manual
inspection. Through extensive experiments, we demonstrate that Doppelgangers++
significantly enhances pairwise visual disambiguation and improves 3D
reconstruction quality in complex and diverse scenarios.
| [
{
"version": "v1",
"created": "Sun, 8 Dec 2024 06:08:47 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 18:16:23 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Xiangli",
"Yuanbo",
""
],
[
"Cai",
"Ruojin",
""
],
[
"Chen",
"Hanyu",
""
],
[
"Byrne",
"Jeffrey",
""
],
[
"Snavely",
"Noah",
""
]
] | TITLE: Doppelgangers++: Improved Visual Disambiguation with Geometric 3D
Features
ABSTRACT: Accurate 3D reconstruction is frequently hindered by visual aliasing, where
visually similar but distinct surfaces (aka doppelgangers) are incorrectly
matched. These spurious matches distort the structure-from-motion (SfM)
process, leading to misplaced model elements and reduced accuracy. Prior
efforts addressed this with CNN classifiers trained on curated datasets, but
these approaches struggle to generalize across diverse real-world scenes and
can require extensive parameter tuning. In this work, we present
Doppelgangers++, a method to enhance doppelganger detection and improve 3D
reconstruction accuracy. Our contributions include a diversified training
dataset that incorporates geo-tagged images from everyday scenes to expand
robustness beyond landmark-based datasets. We further propose a
Transformer-based classifier that leverages 3D-aware features from the MASt3R
model, achieving superior precision and recall across both in-domain and
out-of-domain tests. Doppelgangers++ integrates seamlessly into standard SfM
and MASt3R-SfM pipelines, offering efficiency and adaptability across varied
scenes. To evaluate SfM accuracy, we introduce an automated, geotag-based
method for validating reconstructed models, eliminating the need for manual
inspection. Through extensive experiments, we demonstrate that Doppelgangers++
significantly enhances pairwise visual disambiguation and improves 3D
reconstruction quality in complex and diverse scenarios.
|
2412.07775 | Zhen Liu | Zhen Liu, Tim Z. Xiao, Weiyang Liu, Yoshua Bengio, Dinghuai Zhang | Efficient Diversity-Preserving Diffusion Alignment via Gradient-Informed
GFlowNets | Technical Report (35 pages, 31 figures), Accepted at ICLR 2025 | null | null | null | cs.LG cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | While one commonly trains large diffusion models by collecting datasets on
target downstream tasks, it is often desired to align and finetune pretrained
diffusion models with some reward functions that are either designed by experts
or learned from small-scale datasets. Existing post-training methods for reward
finetuning of diffusion models typically suffer from lack of diversity in
generated samples, lack of prior preservation, and/or slow convergence in
finetuning. Inspired by recent successes in generative flow networks
(GFlowNets), a class of probabilistic models that sample with the unnormalized
density of a reward function, we propose a novel GFlowNet method dubbed
Nabla-GFlowNet (abbreviated as $\nabla$-GFlowNet), the first GFlowNet method
that leverages the rich signal in reward gradients, together with an objective
called $\nabla$-DB plus its variant residual $\nabla$-DB designed for
prior-preserving diffusion finetuning. We show that our proposed method
achieves fast yet diversity- and prior-preserving finetuning of Stable
Diffusion, a large-scale text-conditioned image diffusion model, on different
realistic reward functions.
| [
{
"version": "v1",
"created": "Tue, 10 Dec 2024 18:59:58 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Mar 2025 15:15:58 GMT"
},
{
"version": "v3",
"created": "Sun, 6 Apr 2025 19:31:55 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Liu",
"Zhen",
""
],
[
"Xiao",
"Tim Z.",
""
],
[
"Liu",
"Weiyang",
""
],
[
"Bengio",
"Yoshua",
""
],
[
"Zhang",
"Dinghuai",
""
]
] | TITLE: Efficient Diversity-Preserving Diffusion Alignment via Gradient-Informed
GFlowNets
ABSTRACT: While one commonly trains large diffusion models by collecting datasets on
target downstream tasks, it is often desired to align and finetune pretrained
diffusion models with some reward functions that are either designed by experts
or learned from small-scale datasets. Existing post-training methods for reward
finetuning of diffusion models typically suffer from lack of diversity in
generated samples, lack of prior preservation, and/or slow convergence in
finetuning. Inspired by recent successes in generative flow networks
(GFlowNets), a class of probabilistic models that sample with the unnormalized
density of a reward function, we propose a novel GFlowNet method dubbed
Nabla-GFlowNet (abbreviated as $\nabla$-GFlowNet), the first GFlowNet method
that leverages the rich signal in reward gradients, together with an objective
called $\nabla$-DB plus its variant residual $\nabla$-DB designed for
prior-preserving diffusion finetuning. We show that our proposed method
achieves fast yet diversity- and prior-preserving finetuning of Stable
Diffusion, a large-scale text-conditioned image diffusion model, on different
realistic reward functions.
|
2412.09402 | Lehan Wang | Lehan Wang, Chongchong Qi, Chubin Ou, Lin An, Mei Jin, Xiangbin Kong,
Xiaomeng Li | MultiEYE: Dataset and Benchmark for OCT-Enhanced Retinal Disease
Recognition from Fundus Images | Accepted at IEEE TMI 2024 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Existing multi-modal learning methods on fundus and OCT images mostly require
both modalities to be available and strictly paired for training and testing,
which appears less practical in clinical scenarios. To expand the scope of
clinical applications, we formulate a novel setting, "OCT-enhanced disease
recognition from fundus images", that allows for the use of unpaired
multi-modal data during the training phase and relies on the widespread fundus
photographs for testing. To benchmark this setting, we present the first large
multi-modal multi-class dataset for eye disease diagnosis, MultiEYE, and
propose an OCT-assisted Conceptual Distillation Approach (OCT-CoDA), which
employs semantically rich concepts to extract disease-related knowledge from
OCT images and transfer it to the fundus model. Specifically, we regard the
image-concept relation as a link to distill useful knowledge from the OCT
teacher model to the fundus student model, which considerably improves the
diagnostic performance based on fundus images and formulates the cross-modal
knowledge transfer into an explainable process. Through extensive experiments
on the multi-disease classification task, our proposed OCT-CoDA demonstrates
remarkable results and interpretability, showing great potential for clinical
application. Our dataset and code are available at
https://github.com/xmed-lab/MultiEYE.
| [
{
"version": "v1",
"created": "Thu, 12 Dec 2024 16:08:43 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 09:24:41 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wang",
"Lehan",
""
],
[
"Qi",
"Chongchong",
""
],
[
"Ou",
"Chubin",
""
],
[
"An",
"Lin",
""
],
[
"Jin",
"Mei",
""
],
[
"Kong",
"Xiangbin",
""
],
[
"Li",
"Xiaomeng",
""
]
] | TITLE: MultiEYE: Dataset and Benchmark for OCT-Enhanced Retinal Disease
Recognition from Fundus Images
ABSTRACT: Existing multi-modal learning methods on fundus and OCT images mostly require
both modalities to be available and strictly paired for training and testing,
which appears less practical in clinical scenarios. To expand the scope of
clinical applications, we formulate a novel setting, "OCT-enhanced disease
recognition from fundus images", that allows for the use of unpaired
multi-modal data during the training phase and relies on the widespread fundus
photographs for testing. To benchmark this setting, we present the first large
multi-modal multi-class dataset for eye disease diagnosis, MultiEYE, and
propose an OCT-assisted Conceptual Distillation Approach (OCT-CoDA), which
employs semantically rich concepts to extract disease-related knowledge from
OCT images and transfer it to the fundus model. Specifically, we regard the
image-concept relation as a link to distill useful knowledge from the OCT
teacher model to the fundus student model, which considerably improves the
diagnostic performance based on fundus images and formulates the cross-modal
knowledge transfer into an explainable process. Through extensive experiments
on the multi-disease classification task, our proposed OCT-CoDA demonstrates
remarkable results and interpretability, showing great potential for clinical
application. Our dataset and code are available at
https://github.com/xmed-lab/MultiEYE.
|
2412.10128 | Rittwika Kansabanik | Rittwika Kansabanik, Adrian Barbu | Feature Selection for Latent Factor Models | Accepted in the CVPR conference 2025 | null | null | null | cs.LG stat.AP | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Feature selection is crucial for pinpointing relevant features in
high-dimensional datasets, mitigating the 'curse of dimensionality,' and
enhancing machine learning performance. Traditional feature selection methods
for classification use data from all classes to select features for each class.
This paper explores feature selection methods that select features for each
class separately, using class models based on low-rank generative methods and
introducing a signal-to-noise ratio (SNR) feature selection criterion. This
novel approach has theoretical true feature recovery guarantees under certain
assumptions and is shown to outperform some existing feature selection methods
on standard classification datasets.
| [
{
"version": "v1",
"created": "Fri, 13 Dec 2024 13:20:10 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 17:23:13 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Kansabanik",
"Rittwika",
""
],
[
"Barbu",
"Adrian",
""
]
] | TITLE: Feature Selection for Latent Factor Models
ABSTRACT: Feature selection is crucial for pinpointing relevant features in
high-dimensional datasets, mitigating the 'curse of dimensionality,' and
enhancing machine learning performance. Traditional feature selection methods
for classification use data from all classes to select features for each class.
This paper explores feature selection methods that select features for each
class separately, using class models based on low-rank generative methods and
introducing a signal-to-noise ratio (SNR) feature selection criterion. This
novel approach has theoretical true feature recovery guarantees under certain
assumptions and is shown to outperform some existing feature selection methods
on standard classification datasets.
|
2412.12032 | Gaojian Wang | Gaojian Wang, Feng Lin, Tong Wu, Zhenguang Liu, Zhongjie Ba, Kui Ren | FSFM: A Generalizable Face Security Foundation Model via Self-Supervised
Facial Representation Learning | 21 pages, 11 figures, project page: https://fsfm-3c.github.io | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work asks: with abundant, unlabeled real faces, how to learn a robust
and transferable facial representation that boosts various face security tasks
with respect to generalization performance? We make the first attempt and
propose a self-supervised pretraining framework to learn fundamental
representations of real face images, FSFM, that leverages the synergy between
masked image modeling (MIM) and instance discrimination (ID). We explore
various facial masking strategies for MIM and present a simple yet powerful
CRFR-P masking, which explicitly forces the model to capture meaningful
intra-region consistency and challenging inter-region coherency. Furthermore,
we devise the ID network that naturally couples with MIM to establish
underlying local-to-global correspondence via tailored self-distillation. These
three learning objectives, namely 3C, empower encoding both local features and
global semantics of real faces. After pretraining, a vanilla ViT serves as a
universal vision foundation model for downstream face security tasks:
cross-dataset deepfake detection, cross-domain face anti-spoofing, and unseen
diffusion facial forgery detection. Extensive experiments on 10 public datasets
demonstrate that our model transfers better than supervised pretraining, visual
and facial self-supervised learning arts, and even outperforms task-specialized
SOTA methods.
| [
{
"version": "v1",
"created": "Mon, 16 Dec 2024 17:58:45 GMT"
},
{
"version": "v2",
"created": "Sat, 28 Dec 2024 13:48:32 GMT"
},
{
"version": "v3",
"created": "Sun, 6 Apr 2025 14:07:12 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wang",
"Gaojian",
""
],
[
"Lin",
"Feng",
""
],
[
"Wu",
"Tong",
""
],
[
"Liu",
"Zhenguang",
""
],
[
"Ba",
"Zhongjie",
""
],
[
"Ren",
"Kui",
""
]
] | TITLE: FSFM: A Generalizable Face Security Foundation Model via Self-Supervised
Facial Representation Learning
ABSTRACT: This work asks: with abundant, unlabeled real faces, how to learn a robust
and transferable facial representation that boosts various face security tasks
with respect to generalization performance? We make the first attempt and
propose a self-supervised pretraining framework to learn fundamental
representations of real face images, FSFM, that leverages the synergy between
masked image modeling (MIM) and instance discrimination (ID). We explore
various facial masking strategies for MIM and present a simple yet powerful
CRFR-P masking, which explicitly forces the model to capture meaningful
intra-region consistency and challenging inter-region coherency. Furthermore,
we devise the ID network that naturally couples with MIM to establish
underlying local-to-global correspondence via tailored self-distillation. These
three learning objectives, namely 3C, empower encoding both local features and
global semantics of real faces. After pretraining, a vanilla ViT serves as a
universal vision foundation model for downstream face security tasks:
cross-dataset deepfake detection, cross-domain face anti-spoofing, and unseen
diffusion facial forgery detection. Extensive experiments on 10 public datasets
demonstrate that our model transfers better than supervised pretraining, visual
and facial self-supervised learning arts, and even outperforms task-specialized
SOTA methods.
|
2412.12423 | Nikola Zubi\'c | Nikola Zubi\'c and Davide Scaramuzza | GG-SSMs: Graph-Generating State Space Models | 12 pages, 8 tables, 2 figures, CVPR 2025 Camera Ready paper | IEEE/CVF Conference on Computer Vision and Pattern Recognition
(CVPR), Nashville, 2025 | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | State Space Models (SSMs) are powerful tools for modeling sequential data in
computer vision and time series analysis domains. However, traditional SSMs are
limited by fixed, one-dimensional sequential processing, which restricts their
ability to model non-local interactions in high-dimensional data. While methods
like Mamba and VMamba introduce selective and flexible scanning strategies,
they rely on predetermined paths, which fails to efficiently capture complex
dependencies. We introduce Graph-Generating State Space Models (GG-SSMs), a
novel framework that overcomes these limitations by dynamically constructing
graphs based on feature relationships. Using Chazelle's Minimum Spanning Tree
algorithm, GG-SSMs adapt to the inherent data structure, enabling robust
feature propagation across dynamically generated graphs and efficiently
modeling complex dependencies. We validate GG-SSMs on 11 diverse datasets,
including event-based eye-tracking, ImageNet classification, optical flow
estimation, and six time series datasets. GG-SSMs achieve state-of-the-art
performance across all tasks, surpassing existing methods by significant
margins. Specifically, GG-SSM attains a top-1 accuracy of 84.9% on ImageNet,
outperforming prior SSMs by 1%, reducing the KITTI-15 error rate to 2.77%, and
improving eye-tracking detection rates by up to 0.33% with fewer parameters.
These results demonstrate that dynamic scanning based on feature relationships
significantly improves SSMs' representational power and efficiency, offering a
versatile tool for various applications in computer vision and beyond.
| [
{
"version": "v1",
"created": "Tue, 17 Dec 2024 00:07:29 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Apr 2025 10:05:26 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Zubić",
"Nikola",
""
],
[
"Scaramuzza",
"Davide",
""
]
] | TITLE: GG-SSMs: Graph-Generating State Space Models
ABSTRACT: State Space Models (SSMs) are powerful tools for modeling sequential data in
computer vision and time series analysis domains. However, traditional SSMs are
limited by fixed, one-dimensional sequential processing, which restricts their
ability to model non-local interactions in high-dimensional data. While methods
like Mamba and VMamba introduce selective and flexible scanning strategies,
they rely on predetermined paths, which fails to efficiently capture complex
dependencies. We introduce Graph-Generating State Space Models (GG-SSMs), a
novel framework that overcomes these limitations by dynamically constructing
graphs based on feature relationships. Using Chazelle's Minimum Spanning Tree
algorithm, GG-SSMs adapt to the inherent data structure, enabling robust
feature propagation across dynamically generated graphs and efficiently
modeling complex dependencies. We validate GG-SSMs on 11 diverse datasets,
including event-based eye-tracking, ImageNet classification, optical flow
estimation, and six time series datasets. GG-SSMs achieve state-of-the-art
performance across all tasks, surpassing existing methods by significant
margins. Specifically, GG-SSM attains a top-1 accuracy of 84.9% on ImageNet,
outperforming prior SSMs by 1%, reducing the KITTI-15 error rate to 2.77%, and
improving eye-tracking detection rates by up to 0.33% with fewer parameters.
These results demonstrate that dynamic scanning based on feature relationships
significantly improves SSMs' representational power and efficiency, offering a
versatile tool for various applications in computer vision and beyond.
|
2412.12463 | Aditya Ganeshan | Aditya Ganeshan, Thibault Groueix, Paul Guerrero, Radom\'ir M\v{e}ch,
Matthew Fisher, Daniel Ritchie | Pattern Analogies: Learning to Perform Programmatic Image Edits by
Analogy | CVPR 2024 - Website: https://bardofcodes.github.io/patterns/ | null | null | null | cs.CV cs.AI cs.GR cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Pattern images are everywhere in the digital and physical worlds, and tools
to edit them are valuable. But editing pattern images is tricky: desired edits
are often programmatic: structure-aware edits that alter the underlying program
which generates the pattern. One could attempt to infer this underlying
program, but current methods for doing so struggle with complex images and
produce unorganized programs that make editing tedious. In this work, we
introduce a novel approach to perform programmatic edits on pattern images. By
using a pattern analogy -- a pair of simple patterns to demonstrate the
intended edit -- and a learning-based generative model to execute these edits,
our method allows users to intuitively edit patterns. To enable this paradigm,
we introduce SplitWeave, a domain-specific language that, combined with a
framework for sampling synthetic pattern analogies, enables the creation of a
large, high-quality synthetic training dataset. We also present TriFuser, a
Latent Diffusion Model (LDM) designed to overcome critical issues that arise
when naively deploying LDMs to this task. Extensive experiments on real-world,
artist-sourced patterns reveal that our method faithfully performs the
demonstrated edit while also generalizing to related pattern styles beyond its
training distribution.
| [
{
"version": "v1",
"created": "Tue, 17 Dec 2024 01:52:12 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Apr 2025 16:33:40 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Ganeshan",
"Aditya",
""
],
[
"Groueix",
"Thibault",
""
],
[
"Guerrero",
"Paul",
""
],
[
"Měch",
"Radomír",
""
],
[
"Fisher",
"Matthew",
""
],
[
"Ritchie",
"Daniel",
""
]
] | TITLE: Pattern Analogies: Learning to Perform Programmatic Image Edits by
Analogy
ABSTRACT: Pattern images are everywhere in the digital and physical worlds, and tools
to edit them are valuable. But editing pattern images is tricky: desired edits
are often programmatic: structure-aware edits that alter the underlying program
which generates the pattern. One could attempt to infer this underlying
program, but current methods for doing so struggle with complex images and
produce unorganized programs that make editing tedious. In this work, we
introduce a novel approach to perform programmatic edits on pattern images. By
using a pattern analogy -- a pair of simple patterns to demonstrate the
intended edit -- and a learning-based generative model to execute these edits,
our method allows users to intuitively edit patterns. To enable this paradigm,
we introduce SplitWeave, a domain-specific language that, combined with a
framework for sampling synthetic pattern analogies, enables the creation of a
large, high-quality synthetic training dataset. We also present TriFuser, a
Latent Diffusion Model (LDM) designed to overcome critical issues that arise
when naively deploying LDMs to this task. Extensive experiments on real-world,
artist-sourced patterns reveal that our method faithfully performs the
demonstrated edit while also generalizing to related pattern styles beyond its
training distribution.
|
2412.13823 | Wangyu Wu | Wangyu Wu, Xianglin Qiu, Siqi Song, Xiaowei Huang, Fei Ma, Jimin Xiao | Prompt Categories Cluster for Weakly Supervised Semantic Segmentation | Accepted at CVPR 2025 ELVM | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Weakly Supervised Semantic Segmentation (WSSS), which leverages image-level
labels, has garnered significant attention due to its cost-effectiveness. The
previous methods mainly strengthen the inter-class differences to avoid class
semantic ambiguity which may lead to erroneous activation. However, they
overlook the positive function of some shared information between similar
classes. Categories within the same cluster share some similar features.
Allowing the model to recognize these features can further relieve the semantic
ambiguity between these classes. To effectively identify and utilize this
shared information, in this paper, we introduce a novel WSSS framework called
Prompt Categories Clustering (PCC). Specifically, we explore the ability of
Large Language Models (LLMs) to derive category clusters through prompts. These
clusters effectively represent the intrinsic relationships between categories.
By integrating this relational information into the training network, our model
is able to better learn the hidden connections between categories. Experimental
results demonstrate the effectiveness of our approach, showing its ability to
enhance performance on the PASCAL VOC 2012 dataset and surpass existing
state-of-the-art methods in WSSS.
| [
{
"version": "v1",
"created": "Wed, 18 Dec 2024 13:11:58 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 06:29:48 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wu",
"Wangyu",
""
],
[
"Qiu",
"Xianglin",
""
],
[
"Song",
"Siqi",
""
],
[
"Huang",
"Xiaowei",
""
],
[
"Ma",
"Fei",
""
],
[
"Xiao",
"Jimin",
""
]
] | TITLE: Prompt Categories Cluster for Weakly Supervised Semantic Segmentation
ABSTRACT: Weakly Supervised Semantic Segmentation (WSSS), which leverages image-level
labels, has garnered significant attention due to its cost-effectiveness. The
previous methods mainly strengthen the inter-class differences to avoid class
semantic ambiguity, which may lead to erroneous activation. However, they
overlook the positive function of some shared information between similar
classes. Categories within the same cluster share some similar features.
Allowing the model to recognize these features can further relieve the semantic
ambiguity between these classes. To effectively identify and utilize this
shared information, in this paper, we introduce a novel WSSS framework called
Prompt Categories Clustering (PCC). Specifically, we explore the ability of
Large Language Models (LLMs) to derive category clusters through prompts. These
clusters effectively represent the intrinsic relationships between categories.
By integrating this relational information into the training network, our model
is able to better learn the hidden connections between categories. Experimental
results demonstrate the effectiveness of our approach, showing its ability to
enhance performance on the PASCAL VOC 2012 dataset and surpass existing
state-of-the-art methods in WSSS.
|
2412.15190 | Akshay Dudhane | Sagar Soni, Akshay Dudhane, Hiyam Debary, Mustansar Fiaz, Muhammad
Akhtar Munir, Muhammad Sohail Danish, Paolo Fraccaro, Campbell D Watson,
Levente J Klein, Fahad Shahbaz Khan, Salman Khan | EarthDial: Turning Multi-sensory Earth Observations to Interactive
Dialogues | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Automated analysis of vast Earth observation data via interactive
Vision-Language Models (VLMs) can unlock new opportunities for environmental
monitoring, disaster response, and resource management. Existing generic VLMs
do not perform well on Remote Sensing data, while the recent Geo-spatial VLMs
remain restricted to a fixed resolution and few sensor modalities. In this
paper, we introduce EarthDial, a conversational assistant specifically designed
for Earth Observation (EO) data, transforming complex, multi-sensory Earth
observations into interactive, natural language dialogues. EarthDial supports
multi-spectral, multi-temporal, and multi-resolution imagery, enabling a wide
range of remote sensing tasks, including classification, detection, captioning,
question answering, visual reasoning, and visual grounding. To achieve this, we
introduce an extensive instruction tuning dataset comprising over 11.11M
instruction pairs covering RGB, Synthetic Aperture Radar (SAR), and
multispectral modalities such as Near-Infrared (NIR) and infrared. Furthermore,
EarthDial handles bi-temporal and multi-temporal sequence analysis for
applications like change detection. Our extensive experimental results on 44
downstream datasets demonstrate that EarthDial outperforms existing generic and
domain-specific models, achieving better generalization across various EO
tasks. Our source codes and pre-trained models are at
https://github.com/hiyamdebary/EarthDial.
| [
{
"version": "v1",
"created": "Thu, 19 Dec 2024 18:57:13 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 06:19:02 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Soni",
"Sagar",
""
],
[
"Dudhane",
"Akshay",
""
],
[
"Debary",
"Hiyam",
""
],
[
"Fiaz",
"Mustansar",
""
],
[
"Munir",
"Muhammad Akhtar",
""
],
[
"Danish",
"Muhammad Sohail",
""
],
[
"Fraccaro",
"Paolo",
""
],
[
"Watson",
"Campbell D",
""
],
[
"Klein",
"Levente J",
""
],
[
"Khan",
"Fahad Shahbaz",
""
],
[
"Khan",
"Salman",
""
]
] | TITLE: EarthDial: Turning Multi-sensory Earth Observations to Interactive
Dialogues
ABSTRACT: Automated analysis of vast Earth observation data via interactive
Vision-Language Models (VLMs) can unlock new opportunities for environmental
monitoring, disaster response, and resource management. Existing generic VLMs
do not perform well on Remote Sensing data, while the recent Geo-spatial VLMs
remain restricted to a fixed resolution and few sensor modalities. In this
paper, we introduce EarthDial, a conversational assistant specifically designed
for Earth Observation (EO) data, transforming complex, multi-sensory Earth
observations into interactive, natural language dialogues. EarthDial supports
multi-spectral, multi-temporal, and multi-resolution imagery, enabling a wide
range of remote sensing tasks, including classification, detection, captioning,
question answering, visual reasoning, and visual grounding. To achieve this, we
introduce an extensive instruction tuning dataset comprising over 11.11M
instruction pairs covering RGB, Synthetic Aperture Radar (SAR), and
multispectral modalities such as Near-Infrared (NIR) and infrared. Furthermore,
EarthDial handles bi-temporal and multi-temporal sequence analysis for
applications like change detection. Our extensive experimental results on 44
downstream datasets demonstrate that EarthDial outperforms existing generic and
domain-specific models, achieving better generalization across various EO
tasks. Our source codes and pre-trained models are at
https://github.com/hiyamdebary/EarthDial.
|
2412.16504 | Hao Du | Hao Du, Shang Liu, Lele Zheng, Yang Cao, Atsuyoshi Nakamura, Lei Chen | Privacy in Fine-tuning Large Language Models: Attacks, Defenses, and
Future Directions | accepted by PAKDD2025 | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fine-tuning has emerged as a critical process in leveraging Large Language
Models (LLMs) for specific downstream tasks, enabling these models to achieve
state-of-the-art performance across various domains. However, the fine-tuning
process often involves sensitive datasets, introducing privacy risks that
exploit the unique characteristics of this stage. In this paper, we provide a
comprehensive survey of privacy challenges associated with fine-tuning LLMs,
highlighting vulnerabilities to various privacy attacks, including membership
inference, data extraction, and backdoor attacks. We further review defense
mechanisms designed to mitigate privacy risks in the fine-tuning phase, such as
differential privacy, federated learning, and knowledge unlearning, discussing
their effectiveness and limitations in addressing privacy risks and maintaining
model utility. By identifying key gaps in existing research, we highlight
challenges and propose directions to advance the development of
privacy-preserving methods for fine-tuning LLMs, promoting their responsible
use in diverse applications.
| [
{
"version": "v1",
"created": "Sat, 21 Dec 2024 06:41:29 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Apr 2025 10:28:21 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Du",
"Hao",
""
],
[
"Liu",
"Shang",
""
],
[
"Zheng",
"Lele",
""
],
[
"Cao",
"Yang",
""
],
[
"Nakamura",
"Atsuyoshi",
""
],
[
"Chen",
"Lei",
""
]
] | TITLE: Privacy in Fine-tuning Large Language Models: Attacks, Defenses, and
Future Directions
ABSTRACT: Fine-tuning has emerged as a critical process in leveraging Large Language
Models (LLMs) for specific downstream tasks, enabling these models to achieve
state-of-the-art performance across various domains. However, the fine-tuning
process often involves sensitive datasets, introducing privacy risks that
exploit the unique characteristics of this stage. In this paper, we provide a
comprehensive survey of privacy challenges associated with fine-tuning LLMs,
highlighting vulnerabilities to various privacy attacks, including membership
inference, data extraction, and backdoor attacks. We further review defense
mechanisms designed to mitigate privacy risks in the fine-tuning phase, such as
differential privacy, federated learning, and knowledge unlearning, discussing
their effectiveness and limitations in addressing privacy risks and maintaining
model utility. By identifying key gaps in existing research, we highlight
challenges and propose directions to advance the development of
privacy-preserving methods for fine-tuning LLMs, promoting their responsible
use in diverse applications.
|
2412.16859 | Jongmin Yu | Jongmin Yu, Zhongtian Sun, Chen Bene Chi, Jinhong Yang, Shan Luo | Adversarially Domain-adaptive Latent Diffusion for Unsupervised Semantic
Segmentation | Accepted from CVPR 2025 Workshop PVUW | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Semantic segmentation requires extensive pixel-level annotation, motivating
unsupervised domain adaptation (UDA) to transfer knowledge from labelled source
domains to unlabelled or weakly labelled target domains. One of the most
efficient strategies involves using synthetic datasets generated within
controlled virtual environments, such as video games or traffic simulators,
which can automatically generate pixel-level annotations. However, even when
such datasets are available, learning a well-generalised representation that
captures both domains remains challenging, owing to probabilistic and geometric
discrepancies between the virtual world and real-world imagery. This work
introduces a semantic segmentation method based on latent diffusion models,
termed Inter-Coder Connected Latent Diffusion (ICCLD), alongside an
unsupervised domain adaptation approach. The model employs an inter-coder
connection to enhance contextual understanding and preserve fine details, while
adversarial learning aligns latent feature distributions across domains during
the latent diffusion process. Experiments on GTA5, Synthia, and Cityscapes
demonstrate that ICCLD outperforms state-of-the-art UDA methods, achieving mIoU
scores of 74.4 (GTA5$\rightarrow$Cityscapes) and 67.2
(Synthia$\rightarrow$Cityscapes).
| [
{
"version": "v1",
"created": "Sun, 22 Dec 2024 04:55:41 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 02:01:25 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Yu",
"Jongmin",
""
],
[
"Sun",
"Zhongtian",
""
],
[
"Chi",
"Chen Bene",
""
],
[
"Yang",
"Jinhong",
""
],
[
"Luo",
"Shan",
""
]
] | TITLE: Adversarially Domain-adaptive Latent Diffusion for Unsupervised Semantic
Segmentation
ABSTRACT: Semantic segmentation requires extensive pixel-level annotation, motivating
unsupervised domain adaptation (UDA) to transfer knowledge from labelled source
domains to unlabelled or weakly labelled target domains. One of the most
efficient strategies involves using synthetic datasets generated within
controlled virtual environments, such as video games or traffic simulators,
which can automatically generate pixel-level annotations. However, even when
such datasets are available, learning a well-generalised representation that
captures both domains remains challenging, owing to probabilistic and geometric
discrepancies between the virtual world and real-world imagery. This work
introduces a semantic segmentation method based on latent diffusion models,
termed Inter-Coder Connected Latent Diffusion (ICCLD), alongside an
unsupervised domain adaptation approach. The model employs an inter-coder
connection to enhance contextual understanding and preserve fine details, while
adversarial learning aligns latent feature distributions across domains during
the latent diffusion process. Experiments on GTA5, Synthia, and Cityscapes
demonstrate that ICCLD outperforms state-of-the-art UDA methods, achieving mIoU
scores of 74.4 (GTA5$\rightarrow$Cityscapes) and 67.2
(Synthia$\rightarrow$Cityscapes).
|
2412.20374 | Yan Luo | Yan Luo, Muhammad Osama Khan, Congcong Wen, Muhammad Muneeb Afzal,
Titus Fidelis Wuermeling, Min Shi, Yu Tian, Yi Fang, Mengyu Wang | FairDiffusion: Enhancing Equity in Latent Diffusion Models via Fair
Bayesian Perturbation | Published in Science Advances
(https://www.science.org/doi/full/10.1126/sciadv.ads4593). The data and code
are made publicly available at
https://github.com/Harvard-Ophthalmology-AI-Lab/FairDiffusion | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Recent progress in generative AI, especially diffusion models, has
demonstrated significant utility in text-to-image synthesis. Particularly in
healthcare, these models offer immense potential in generating synthetic
datasets and training medical students. However, despite these strong
performances, it remains uncertain if the image generation quality is
consistent across different demographic subgroups. To address this critical
concern, we present the first comprehensive study on the fairness of medical
text-to-image diffusion models. Our extensive evaluations of the popular Stable
Diffusion model reveal significant disparities across gender, race, and
ethnicity. To mitigate these biases, we introduce FairDiffusion, an
equity-aware latent diffusion model that enhances fairness in both image
generation quality and the semantic correlation of clinical features. In
addition, we also design and curate FairGenMed, the first dataset for studying
the fairness of medical generative models. Complementing this effort, we
further evaluate FairDiffusion on two widely-used external medical datasets:
HAM10000 (dermatoscopic images) and CheXpert (chest X-rays) to demonstrate
FairDiffusion's effectiveness in addressing fairness concerns across diverse
medical imaging modalities. Together, FairDiffusion and FairGenMed
significantly advance research in fair generative learning, promoting equitable
benefits of generative AI in healthcare.
| [
{
"version": "v1",
"created": "Sun, 29 Dec 2024 06:33:37 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Apr 2025 03:32:23 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Luo",
"Yan",
""
],
[
"Khan",
"Muhammad Osama",
""
],
[
"Wen",
"Congcong",
""
],
[
"Afzal",
"Muhammad Muneeb",
""
],
[
"Wuermeling",
"Titus Fidelis",
""
],
[
"Shi",
"Min",
""
],
[
"Tian",
"Yu",
""
],
[
"Fang",
"Yi",
""
],
[
"Wang",
"Mengyu",
""
]
] | TITLE: FairDiffusion: Enhancing Equity in Latent Diffusion Models via Fair
Bayesian Perturbation
ABSTRACT: Recent progress in generative AI, especially diffusion models, has
demonstrated significant utility in text-to-image synthesis. Particularly in
healthcare, these models offer immense potential in generating synthetic
datasets and training medical students. However, despite these strong
performances, it remains uncertain if the image generation quality is
consistent across different demographic subgroups. To address this critical
concern, we present the first comprehensive study on the fairness of medical
text-to-image diffusion models. Our extensive evaluations of the popular Stable
Diffusion model reveal significant disparities across gender, race, and
ethnicity. To mitigate these biases, we introduce FairDiffusion, an
equity-aware latent diffusion model that enhances fairness in both image
generation quality and the semantic correlation of clinical features. In
addition, we also design and curate FairGenMed, the first dataset for studying
the fairness of medical generative models. Complementing this effort, we
further evaluate FairDiffusion on two widely-used external medical datasets:
HAM10000 (dermatoscopic images) and CheXpert (chest X-rays) to demonstrate
FairDiffusion's effectiveness in addressing fairness concerns across diverse
medical imaging modalities. Together, FairDiffusion and FairGenMed
significantly advance research in fair generative learning, promoting equitable
benefits of generative AI in healthcare.
|
2501.00184 | Amirhossein Nadiri | Amirhossein Nadiri, Jing Li, Ali Faraji, Ghadeer Abuoda, Manos
Papagelis | TrajLearn: Trajectory Prediction Learning using Deep Generative Models | Accepted at ACM Transactions on Spatial Algorithms and Systems | null | null | null | cs.LG cs.CV cs.RO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Trajectory prediction aims to estimate an entity's future path using its
current position and historical movement data, benefiting fields like
autonomous navigation, robotics, and human movement analytics. Deep learning
approaches have become key in this area, utilizing large-scale trajectory
datasets to model movement patterns, but face challenges in managing complex
spatial dependencies and adapting to dynamic environments. To address these
challenges, we introduce TrajLearn, a novel model for trajectory prediction
that leverages generative modeling of higher-order mobility flows based on
hexagonal spatial representation. TrajLearn predicts the next $k$ steps by
integrating a customized beam search for exploring multiple potential paths
while maintaining spatial continuity. We conducted a rigorous evaluation of
TrajLearn, benchmarking it against leading state-of-the-art approaches and
meaningful baselines. The results indicate that TrajLearn achieves significant
performance gains, with improvements of up to ~40% across multiple real-world
trajectory datasets. In addition, we evaluated different prediction horizons
(i.e., various values of $k$), conducted resolution sensitivity analysis, and
performed ablation studies to assess the impact of key model components.
Furthermore, we developed a novel algorithm to generate mixed-resolution maps
by hierarchically subdividing hexagonal regions into finer segments within a
specified observation area. This approach supports selective detailing,
applying finer resolution to areas of interest or high activity (e.g., urban
centers) while using coarser resolution for less significant regions (e.g.,
rural areas), effectively reducing data storage requirements and computational
overhead. We promote reproducibility and adaptability by offering complete
code, data, and detailed documentation with flexible configuration options for
various applications.
| [
{
"version": "v1",
"created": "Mon, 30 Dec 2024 23:38:52 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 19:12:44 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Nadiri",
"Amirhossein",
""
],
[
"Li",
"Jing",
""
],
[
"Faraji",
"Ali",
""
],
[
"Abuoda",
"Ghadeer",
""
],
[
"Papagelis",
"Manos",
""
]
] | TITLE: TrajLearn: Trajectory Prediction Learning using Deep Generative Models
ABSTRACT: Trajectory prediction aims to estimate an entity's future path using its
current position and historical movement data, benefiting fields like
autonomous navigation, robotics, and human movement analytics. Deep learning
approaches have become key in this area, utilizing large-scale trajectory
datasets to model movement patterns, but face challenges in managing complex
spatial dependencies and adapting to dynamic environments. To address these
challenges, we introduce TrajLearn, a novel model for trajectory prediction
that leverages generative modeling of higher-order mobility flows based on
hexagonal spatial representation. TrajLearn predicts the next $k$ steps by
integrating a customized beam search for exploring multiple potential paths
while maintaining spatial continuity. We conducted a rigorous evaluation of
TrajLearn, benchmarking it against leading state-of-the-art approaches and
meaningful baselines. The results indicate that TrajLearn achieves significant
performance gains, with improvements of up to ~40% across multiple real-world
trajectory datasets. In addition, we evaluated different prediction horizons
(i.e., various values of $k$), conducted resolution sensitivity analysis, and
performed ablation studies to assess the impact of key model components.
Furthermore, we developed a novel algorithm to generate mixed-resolution maps
by hierarchically subdividing hexagonal regions into finer segments within a
specified observation area. This approach supports selective detailing,
applying finer resolution to areas of interest or high activity (e.g., urban
centers) while using coarser resolution for less significant regions (e.g.,
rural areas), effectively reducing data storage requirements and computational
overhead. We promote reproducibility and adaptability by offering complete
code, data, and detailed documentation with flexible configuration options for
various applications.
|
2501.00192 | Zhenting Wang | Zhenting Wang, Shuming Hu, Shiyu Zhao, Xiaowen Lin, Felix Juefei-Xu,
Zhuowei Li, Ligong Han, Harihar Subramanyam, Li Chen, Jianfa Chen, Nan Jiang,
Lingjuan Lyu, Shiqing Ma, Dimitris N. Metaxas, Ankit Jain | MLLM-as-a-Judge for Image Safety without Human Labeling | null | null | null | null | cs.CV cs.CL cs.CY cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image content safety has become a significant challenge with the rise of
visual media on online platforms. Meanwhile, in the age of AI-generated content
(AIGC), many image generation models are capable of producing harmful content,
such as images containing sexual or violent material. Thus, it becomes crucial
to identify such unsafe images based on established safety rules. Pre-trained
Multimodal Large Language Models (MLLMs) offer potential in this regard, given
their strong pattern recognition abilities. Existing approaches typically
fine-tune MLLMs with human-labeled datasets, which however brings a series of
drawbacks. First, relying on human annotators to label data following intricate
and detailed guidelines is both expensive and labor-intensive. Furthermore,
users of safety judgment systems may need to frequently update safety rules,
making fine-tuning on human-based annotation more challenging. This raises the
research question: Can we detect unsafe images by querying MLLMs in a zero-shot
setting using a predefined safety constitution (a set of safety rules)? Our
research showed that simply querying pre-trained MLLMs does not yield
satisfactory results. This lack of effectiveness stems from factors such as the
subjectivity of safety rules, the complexity of lengthy constitutions, and the
inherent biases in the models. To address these challenges, we propose an
MLLM-based method that includes objectifying safety rules, assessing the relevance
between rules and images, making quick judgments based on debiased token
probabilities with logically complete yet simplified precondition chains for
safety rules, and conducting more in-depth reasoning with cascaded
chain-of-thought processes if necessary. Experiment results demonstrate that
our method is highly effective for zero-shot image safety judgment tasks.
| [
{
"version": "v1",
"created": "Tue, 31 Dec 2024 00:06:04 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Apr 2025 17:30:18 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wang",
"Zhenting",
""
],
[
"Hu",
"Shuming",
""
],
[
"Zhao",
"Shiyu",
""
],
[
"Lin",
"Xiaowen",
""
],
[
"Juefei-Xu",
"Felix",
""
],
[
"Li",
"Zhuowei",
""
],
[
"Han",
"Ligong",
""
],
[
"Subramanyam",
"Harihar",
""
],
[
"Chen",
"Li",
""
],
[
"Chen",
"Jianfa",
""
],
[
"Jiang",
"Nan",
""
],
[
"Lyu",
"Lingjuan",
""
],
[
"Ma",
"Shiqing",
""
],
[
"Metaxas",
"Dimitris N.",
""
],
[
"Jain",
"Ankit",
""
]
] | TITLE: MLLM-as-a-Judge for Image Safety without Human Labeling
ABSTRACT: Image content safety has become a significant challenge with the rise of
visual media on online platforms. Meanwhile, in the age of AI-generated content
(AIGC), many image generation models are capable of producing harmful content,
such as images containing sexual or violent material. Thus, it becomes crucial
to identify such unsafe images based on established safety rules. Pre-trained
Multimodal Large Language Models (MLLMs) offer potential in this regard, given
their strong pattern recognition abilities. Existing approaches typically
fine-tune MLLMs with human-labeled datasets, which however brings a series of
drawbacks. First, relying on human annotators to label data following intricate
and detailed guidelines is both expensive and labor-intensive. Furthermore,
users of safety judgment systems may need to frequently update safety rules,
making fine-tuning on human-based annotation more challenging. This raises the
research question: Can we detect unsafe images by querying MLLMs in a zero-shot
setting using a predefined safety constitution (a set of safety rules)? Our
research showed that simply querying pre-trained MLLMs does not yield
satisfactory results. This lack of effectiveness stems from factors such as the
subjectivity of safety rules, the complexity of lengthy constitutions, and the
inherent biases in the models. To address these challenges, we propose an
MLLM-based method that includes objectifying safety rules, assessing the relevance
between rules and images, making quick judgments based on debiased token
probabilities with logically complete yet simplified precondition chains for
safety rules, and conducting more in-depth reasoning with cascaded
chain-of-thought processes if necessary. Experiment results demonstrate that
our method is highly effective for zero-shot image safety judgment tasks.
|
2501.02020 | Kedi Chen | Kedi Chen, Qin Chen, Jie Zhou, Xinqi Tao, Bowen Ding, Jingwen Xie,
Mingchen Xie, Peilong Li, Feng Zheng, Liang He | Enhancing Uncertainty Modeling with Semantic Graph for Hallucination
Detection | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Large Language Models (LLMs) are prone to hallucination with non-factual or
unfaithful statements, which undermines the applications in real-world
scenarios. Recent researches focus on uncertainty-based hallucination
detection, which utilizes the output probability of LLMs for uncertainty
calculation and does not rely on external knowledge or frequent sampling from
LLMs. Whereas, most approaches merely consider the uncertainty of each
independent token, while the intricate semantic relations among tokens and
sentences are not well studied, which limits the detection of hallucination
that spans over multiple tokens and sentences in the passage. In this paper, we
propose a method to enhance uncertainty modeling with semantic graph for
hallucination detection. Specifically, we first construct a semantic graph that
well captures the relations among entity tokens and sentences. Then, we
incorporate the relations between two entities for uncertainty propagation to
enhance sentence-level hallucination detection. Given that hallucination occurs
due to the conflict between sentences, we further present a graph-based
uncertainty calibration method that integrates the contradiction probability of
the sentence with its neighbors in the semantic graph for uncertainty
calculation. Extensive experiments on two datasets show the great advantages of
our proposed approach. In particular, we obtain a substantial improvement of
19.78% in passage-level hallucination detection.
| [
{
"version": "v1",
"created": "Thu, 2 Jan 2025 16:45:05 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 18:06:29 GMT"
},
{
"version": "v3",
"created": "Sat, 5 Apr 2025 15:39:03 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Chen",
"Kedi",
""
],
[
"Chen",
"Qin",
""
],
[
"Zhou",
"Jie",
""
],
[
"Tao",
"Xinqi",
""
],
[
"Ding",
"Bowen",
""
],
[
"Xie",
"Jingwen",
""
],
[
"Xie",
"Mingchen",
""
],
[
"Li",
"Peilong",
""
],
[
"Zheng",
"Feng",
""
],
[
"He",
"Liang",
""
]
] | TITLE: Enhancing Uncertainty Modeling with Semantic Graph for Hallucination
Detection
ABSTRACT: Large Language Models (LLMs) are prone to hallucination with non-factual or
unfaithful statements, which undermines the applications in real-world
scenarios. Recent researches focus on uncertainty-based hallucination
detection, which utilizes the output probability of LLMs for uncertainty
calculation and does not rely on external knowledge or frequent sampling from
LLMs. Whereas, most approaches merely consider the uncertainty of each
independent token, while the intricate semantic relations among tokens and
sentences are not well studied, which limits the detection of hallucination
that spans over multiple tokens and sentences in the passage. In this paper, we
propose a method to enhance uncertainty modeling with semantic graph for
hallucination detection. Specifically, we first construct a semantic graph that
well captures the relations among entity tokens and sentences. Then, we
incorporate the relations between two entities for uncertainty propagation to
enhance sentence-level hallucination detection. Given that hallucination occurs
due to the conflict between sentences, we further present a graph-based
uncertainty calibration method that integrates the contradiction probability of
the sentence with its neighbors in the semantic graph for uncertainty
calculation. Extensive experiments on two datasets show the great advantages of
our proposed approach. In particular, we obtain a substantial improvement of
19.78% in passage-level hallucination detection.
|
2501.02560 | Vasileios Papapanagiotou | Vasileios Papapanagiotou, Ioannis Sarafis, Leonidas Alagialoglou,
Vasileios Gkolemis, Christos Diou, Anastasios Delopoulos | A system for objectively measuring behavior and the environment to
support large-scale studies on childhood obesity | 15 pages, 4 figures, 6 tables, journal | null | 10.1109/JBHI.2025.3526794 | null | cs.CY | http://creativecommons.org/licenses/by/4.0/ | Advances in IoT technologies combined with new algorithms have enabled the
collection and processing of high-rate multi-source data streams that quantify
human behavior at a fine-grained level and can lead to deeper insights on
individual behaviors as well as on the interplay between behaviors and the
environment. In this paper, we present an integrated system that collects and
extracts multiple behavioral and environmental indicators, aiming at improving
public health policies for tackling obesity. Data collection takes place using
passive methods based on smartphone and smartwatch applications that require
minimal interaction with the user. Our goal is to present a detailed account of
the design principles, the implementation processes, and the evaluation of
integrated algorithms, especially given the challenges we faced, in particular
(a) integrating multiple technologies, algorithms, and components under a
single, unified system, and (b) large scale (big data) requirements. We also
present evaluation results of the algorithms on datasets (public for most
cases) such as an absolute error of 8-9 steps when counting steps, 0.86
F1-score for detecting visited locations, and an error of less than 12 mins for
gross sleep time. Finally, we also briefly present studies that have been
materialized using our system, thus demonstrating its potential value to public
authorities and individual researchers.
| [
{
"version": "v1",
"created": "Sun, 5 Jan 2025 14:27:09 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Papapanagiotou",
"Vasileios",
""
],
[
"Sarafis",
"Ioannis",
""
],
[
"Alagialoglou",
"Leonidas",
""
],
[
"Gkolemis",
"Vasileios",
""
],
[
"Diou",
"Christos",
""
],
[
"Delopoulos",
"Anastasios",
""
]
] | TITLE: A system for objectively measuring behavior and the environment to
support large-scale studies on childhood obesity
ABSTRACT: Advances in IoT technologies combined with new algorithms have enabled the
collection and processing of high-rate multi-source data streams that quantify
human behavior at a fine-grained level and can lead to deeper insights on
individual behaviors as well as on the interplay between behaviors and the
environment. In this paper, we present an integrated system that collects and
extracts multiple behavioral and environmental indicators, aiming at improving
public health policies for tackling obesity. Data collection takes place using
passive methods based on smartphone and smartwatch applications that require
minimal interaction with the user. Our goal is to present a detailed account of
the design principles, the implementation processes, and the evaluation of
integrated algorithms, especially given the challenges we faced, in particular
(a) integrating multiple technologies, algorithms, and components under a
single, unified system, and (b) large scale (big data) requirements. We also
present evaluation results of the algorithms on datasets (public for most
cases) such as an absolute error of 8-9 steps when counting steps, 0.86
F1-score for detecting visited locations, and an error of less than 12 mins for
gross sleep time. Finally, we also briefly present studies that have been
materialized using our system, thus demonstrating its potential value to public
authorities and individual researchers.
|
2501.11218 | Anurag Awasthi | Anurag Awasthi | Leveraging GANs For Active Appearance Models Optimized Model Fitting | The full text of this preprint has been withdrawn, as it was
submitted in error at a much earlier stage, with work still needing
substantial refinement and validation. Therefore, the authors do not wish
this work to be cited as a reference | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Active Appearance Models (AAMs) are a well-established technique for fitting
deformable models to images, but they are limited by linear appearance
assumptions and can struggle with complex variations. In this paper, we explore
whether the AAM fitting process can benefit from a Generative Adversarial Network
(GAN). We use a U-Net-based generator and a PatchGAN discriminator in a
GAN-augmented framework in an attempt to refine the appearance model during
fitting. This approach attempts to address challenges such as non-linear
appearance variations and occlusions that traditional AAM optimization methods
may fail to handle. Limited experiments on face alignment datasets demonstrate
that the GAN-enhanced AAM can achieve higher accuracy and faster convergence
than classic approaches with some manual interventions. These results establish
the feasibility of GANs as a tool for improving deformable model fitting in
challenging conditions while maintaining efficient performance, and establish
the need for further work to evaluate this approach at scale.
| [
{
"version": "v1",
"created": "Mon, 20 Jan 2025 01:49:37 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 20:12:07 GMT"
},
{
"version": "v3",
"created": "Mon, 7 Apr 2025 04:07:08 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Awasthi",
"Anurag",
""
]
] | TITLE: Leveraging GANs For Active Appearance Models Optimized Model Fitting
ABSTRACT: Active Appearance Models (AAMs) are a well-established technique for fitting
deformable models to images, but they are limited by linear appearance
assumptions and can struggle with complex variations. In this paper, we explore
whether the AAM fitting process can benefit from a Generative Adversarial Network
(GAN). We use a U-Net-based generator and a PatchGAN discriminator in a
GAN-augmented framework in an attempt to refine the appearance model during
fitting. This approach attempts to address challenges such as non-linear
appearance variations and occlusions that traditional AAM optimization methods
may fail to handle. Limited experiments on face alignment datasets demonstrate
that the GAN-enhanced AAM can achieve higher accuracy and faster convergence
than classic approaches with some manual interventions. These results establish
the feasibility of GANs as a tool for improving deformable model fitting in
challenging conditions while maintaining efficient performance, and establish
the need for further work to evaluate this approach at scale.
|
2501.15262 | Qianxi Mi | Qianxi Mi, Pengcheng Yuan, Chunlei Ma, Jiedan Chen, Mingzhe Yao | TflosYOLO+TFSC: An Accurate and Robust Model for Estimating Flower Count
and Flowering Period | null | null | null | null | cs.CV q-bio.QM | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tea flowers play a crucial role in taxonomic research and hybrid breeding for
the tea plant. As traditional methods of observing tea flower traits are
labor-intensive and inaccurate, we propose the TflosYOLO and TFSC models for
quantifying tea flowering, which enable estimation of flower count and flowering
period. In this study, a highly representative and diverse dataset was
constructed by collecting flower images from 29 tea accessions over 2 years.
Based on this dataset, the TflosYOLO model was built on the YOLOv5 architecture
and enhanced with the Squeeze-and-Excitation (SE) network, which is the first
model to offer a viable solution for detecting and counting tea flowers. The
TflosYOLO model achieved an mAP50 of 0.874, outperforming YOLOv5, YOLOv7 and
YOLOv8. Furthermore, the TflosYOLO model was tested on 34 datasets encompassing 26
tea accessions, five flowering stages, various lighting conditions, and pruned
/ unpruned plants, demonstrating high generalization and robustness. The
correlation coefficient (R^2) between the predicted and actual flower counts
was 0.974. Additionally, the TFSC (Tea Flowering Stage Classification) model, a
7-layer neural network, was designed for automatic classification of the
flowering period. The TFSC model was evaluated on data from the 2 years and achieved
accuracies of 0.738 and 0.899, respectively. Using the TflosYOLO+TFSC model, we monitored
the tea flowering dynamics and tracked the changes in flowering stages across
various tea accessions. The framework provides crucial support for tea plant
breeding programs and phenotypic analysis of germplasm resources.
| [
{
"version": "v1",
"created": "Sat, 25 Jan 2025 16:11:40 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Feb 2025 07:26:13 GMT"
},
{
"version": "v3",
"created": "Mon, 7 Apr 2025 16:57:33 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Mi",
"Qianxi",
""
],
[
"Yuan",
"Pengcheng",
""
],
[
"Ma",
"Chunlei",
""
],
[
"Chen",
"Jiedan",
""
],
[
"Yao",
"Mingzhe",
""
]
] | TITLE: TflosYOLO+TFSC: An Accurate and Robust Model for Estimating Flower Count
and Flowering Period
ABSTRACT: Tea flowers play a crucial role in taxonomic research and hybrid breeding for
the tea plant. As traditional methods of observing tea flower traits are
labor-intensive and inaccurate, we propose the TflosYOLO and TFSC models for
quantifying tea flowering, which enable estimation of flower count and flowering
period. In this study, a highly representative and diverse dataset was
constructed by collecting flower images from 29 tea accessions over 2 years.
Based on this dataset, the TflosYOLO model was built on the YOLOv5 architecture
and enhanced with the Squeeze-and-Excitation (SE) network, which is the first
model to offer a viable solution for detecting and counting tea flowers. The
TflosYOLO model achieved an mAP50 of 0.874, outperforming YOLOv5, YOLOv7 and
YOLOv8. Furthermore, the TflosYOLO model was tested on 34 datasets encompassing 26
tea accessions, five flowering stages, various lighting conditions, and pruned
/ unpruned plants, demonstrating high generalization and robustness. The
correlation coefficient (R^2) between the predicted and actual flower counts
was 0.974. Additionally, the TFSC (Tea Flowering Stage Classification) model, a
7-layer neural network, was designed for automatic classification of the
flowering period. The TFSC model was evaluated on data from the 2 years and achieved
accuracies of 0.738 and 0.899, respectively. Using the TflosYOLO+TFSC model, we monitored
the tea flowering dynamics and tracked the changes in flowering stages across
various tea accessions. The framework provides crucial support for tea plant
breeding programs and phenotypic analysis of germplasm resources.
|
2501.16608 | Xiaolei Liu | Xiaolei Liu and Yan Sun and Zhiliang Wang and Mark Nixon | Unsupervised Domain Adaptation with Dynamic Clustering and Contrastive
Refinement for Gait Recognition | 21 pages, 8 figures | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Gait recognition is an emerging identification technology that distinguishes
individuals at long distances by analyzing individual walking patterns.
Traditional techniques rely heavily on large-scale labeled datasets, which
incur high costs and pose significant labeling challenges. Recently, researchers
have explored unsupervised gait recognition with clustering-based unsupervised
domain adaptation methods and achieved notable success. However, these methods
directly use pseudo-labels generated by clustering and neglect pseudo-label noise
caused by domain differences, which degrades the model training
process. To mitigate these issues, we propose a novel model called GaitDCCR,
which aims to reduce the influence of noisy pseudo-labels on clustering and
model training. Our approach can be divided into two main stages: a clustering
stage and a training stage. In the clustering stage, we propose Dynamic Cluster
Parameters (DCP) and Dynamic Weight Centroids (DWC) to improve the efficiency
of clustering and obtain reliable cluster centroids. In the training stage, we
employ the classical teacher-student structure and propose Confidence-based
Pseudo-label Refinement (CPR) and Contrastive Teacher Module (CTM) to encourage
noisy samples to converge towards clusters containing their true identities.
Extensive experiments on public gait datasets have demonstrated that our simple
and effective method significantly enhances the performance of unsupervised
gait recognition, laying the foundation for its application in the real world.
We will release the code at https://github.com/YanSun-github/GaitDCCR upon
acceptance.
| [
{
"version": "v1",
"created": "Tue, 28 Jan 2025 00:55:07 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Apr 2025 12:37:04 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Liu",
"Xiaolei",
""
],
[
"Sun",
"Yan",
""
],
[
"Wang",
"Zhiliang",
""
],
[
"Nixon",
"Mark",
""
]
] | TITLE: Unsupervised Domain Adaptation with Dynamic Clustering and Contrastive
Refinement for Gait Recognition
ABSTRACT: Gait recognition is an emerging identification technology that distinguishes
individuals at long distances by analyzing individual walking patterns.
Traditional techniques rely heavily on large-scale labeled datasets, which
incur high costs and pose significant labeling challenges. Recently, researchers
have explored unsupervised gait recognition with clustering-based unsupervised
domain adaptation methods and achieved notable success. However, these methods
directly use pseudo-labels generated by clustering and neglect pseudo-label noise
caused by domain differences, which degrades the model training
process. To mitigate these issues, we propose a novel model called GaitDCCR,
which aims to reduce the influence of noisy pseudo-labels on clustering and
model training. Our approach can be divided into two main stages: a clustering
stage and a training stage. In the clustering stage, we propose Dynamic Cluster
Parameters (DCP) and Dynamic Weight Centroids (DWC) to improve the efficiency
of clustering and obtain reliable cluster centroids. In the training stage, we
employ the classical teacher-student structure and propose Confidence-based
Pseudo-label Refinement (CPR) and Contrastive Teacher Module (CTM) to encourage
noisy samples to converge towards clusters containing their true identities.
Extensive experiments on public gait datasets have demonstrated that our simple
and effective method significantly enhances the performance of unsupervised
gait recognition, laying the foundation for its application in the real world.
We will release the code at https://github.com/YanSun-github/GaitDCCR upon
acceptance.
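To make the clustering-then-training idea concrete, here is a minimal sketch of clustering-based pseudo-label generation on synthetic embeddings; it is illustrative only and does not implement GaitDCCR's dynamic parameters, weighted centroids, or teacher-student refinement. The identity count, noise level, and DBSCAN settings are assumptions.

import numpy as np
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
# Synthetic stand-in for gait embeddings: 10 identities, ~30 samples each.
centers = rng.normal(size=(10, 64))
ids = rng.integers(0, 10, size=300)
features = centers[ids] + 0.05 * rng.normal(size=(300, 64))

# Cluster embeddings to obtain pseudo-labels; eps/min_samples are illustrative.
pseudo = DBSCAN(eps=1.0, min_samples=4).fit_predict(features)

# Samples labelled -1 are treated as noise and excluded from training.
keep = pseudo != -1
centroids = np.stack([features[pseudo == c].mean(axis=0)
                      for c in np.unique(pseudo[keep])])
print(f"{len(centroids)} clusters from {keep.sum()} pseudo-labelled samples "
      f"(of {len(features)} total)")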
|
2501.16948 | Alfusainey Jallow | Alfusainey Jallow, Sven Bugiel | Stack Overflow Meets Replication: Security Research Amid Evolving Code
Snippets (Extended Version) | null | null | null | null | cs.CR | http://creativecommons.org/licenses/by-sa/4.0/ | We study the impact of Stack Overflow code evolution on the stability of
prior research findings derived from Stack Overflow data and provide
recommendations for future studies. We systematically reviewed papers published
between 2005--2023 to identify key aspects of Stack Overflow that can affect
study results, such as the language or context of code snippets. Our analysis
reveals that certain aspects are non-stationary over time, which could lead to
different conclusions if experiments are repeated at different times. We
replicated six studies using a more recent dataset to demonstrate this risk.
Our findings show that four papers produced significantly different results
than the original findings, preventing the same conclusions from being drawn
with a newer dataset version. Consequently, we recommend treating Stack
Overflow as a time series data source to provide context for interpreting
cross-sectional research conclusions.
| [
{
"version": "v1",
"created": "Tue, 28 Jan 2025 13:46:11 GMT"
},
{
"version": "v2",
"created": "Thu, 30 Jan 2025 10:22:59 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Apr 2025 22:31:06 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Jallow",
"Alfusainey",
""
],
[
"Bugiel",
"Sven",
""
]
] | TITLE: Stack Overflow Meets Replication: Security Research Amid Evolving Code
Snippets (Extended Version)
ABSTRACT: We study the impact of Stack Overflow code evolution on the stability of
prior research findings derived from Stack Overflow data and provide
recommendations for future studies. We systematically reviewed papers published
between 2005--2023 to identify key aspects of Stack Overflow that can affect
study results, such as the language or context of code snippets. Our analysis
reveals that certain aspects are non-stationary over time, which could lead to
different conclusions if experiments are repeated at different times. We
replicated six studies using a more recent dataset to demonstrate this risk.
Our findings show that four papers produced significantly different results
than the original findings, preventing the same conclusions from being drawn
with a newer dataset version. Consequently, we recommend treating Stack
Overflow as a time series data source to provide context for interpreting
cross-sectional research conclusions.
|
2501.17131 | Esteban Rivera | Esteban Rivera, Jannik L\"ubberstedt, Nico Uhlemann, Markus Lienkamp | Scenario Understanding of Traffic Scenes Through Large Visual Language
Models | Accepted at WACV2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Deep learning models for autonomous driving, encompassing perception,
planning, and control, depend on vast datasets to achieve their high
performance. However, their generalization often suffers due to domain-specific
data distributions, making an effective scene-based categorization of samples
necessary to improve their reliability across diverse domains. Manual
captioning, though valuable, is both labor-intensive and time-consuming,
creating a bottleneck in the data annotation process. Large Visual Language
Models (LVLMs) present a compelling solution by automating image analysis and
categorization through contextual queries, often without requiring retraining
for new categories. In this study, we evaluate the capabilities of LVLMs,
including GPT-4 and LLaVA, to understand and classify urban traffic scenes on
both an in-house dataset and the BDD100K. We propose a scalable captioning
pipeline that integrates state-of-the-art models, enabling a flexible
deployment on new datasets. Our analysis, combining quantitative metrics with
qualitative insights, demonstrates the effectiveness of LVLMs in understanding
urban traffic scenarios and highlights their potential as an efficient tool for
data-driven advancements in autonomous driving.
| [
{
"version": "v1",
"created": "Tue, 28 Jan 2025 18:23:12 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Apr 2025 18:21:08 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Rivera",
"Esteban",
""
],
[
"Lübberstedt",
"Jannik",
""
],
[
"Uhlemann",
"Nico",
""
],
[
"Lienkamp",
"Markus",
""
]
] | TITLE: Scenario Understanding of Traffic Scenes Through Large Visual Language
Models
ABSTRACT: Deep learning models for autonomous driving, encompassing perception,
planning, and control, depend on vast datasets to achieve their high
performance. However, their generalization often suffers due to domain-specific
data distributions, making an effective scene-based categorization of samples
necessary to improve their reliability across diverse domains. Manual
captioning, though valuable, is both labor-intensive and time-consuming,
creating a bottleneck in the data annotation process. Large Visual Language
Models (LVLMs) present a compelling solution by automating image analysis and
categorization through contextual queries, often without requiring retraining
for new categories. In this study, we evaluate the capabilities of LVLMs,
including GPT-4 and LLaVA, to understand and classify urban traffic scenes on
both an in-house dataset and the BDD100K. We propose a scalable captioning
pipeline that integrates state-of-the-art models, enabling a flexible
deployment on new datasets. Our analysis, combining quantitative metrics with
qualitative insights, demonstrates the effectiveness of LVLMs to understand
urban traffic scenarios and highlights their potential as an efficient tool for
data-driven advancements in autonomous driving.
|
2502.00356 | Sameh Abdulah | Zipei Geng, Sameh Abdulah, Ying Sun, Hatem Ltaief, David E. Keyes,
Marc G. Genton | GPU-Accelerated Modified Bessel Function of the Second Kind for Gaussian
Processes | null | null | null | null | cs.DC | http://creativecommons.org/licenses/by/4.0/ | Modified Bessel functions of the second kind are widely used in physics,
engineering, spatial statistics, and machine learning. Since contemporary
scientific applications, including machine learning, rely on GPUs for
acceleration, providing robust GPU-hosted implementations of special functions,
such as the modified Bessel function, is crucial for performance. Existing
implementations of the modified Bessel function of the second kind rely on CPUs
and have limited coverage of the full range of values needed in some
applications. In this work, we present a robust implementation of the modified
Bessel function of the second kind on GPUs, eliminating the dependence on the
CPU host. We cover a range of values commonly used in real applications,
providing high accuracy compared to common libraries like the GNU Scientific
Library (GSL) when referenced to Mathematica as the authority. Our
GPU-accelerated approach also demonstrates a 2.68X performance improvement
using a single A100 GPU compared to the GSL on 40-core Intel Cascade Lake CPUs.
Our implementation is integrated into ExaGeoStat, the HPC framework for
Gaussian process modeling, where the modified Bessel function of the second
kind is required by the Matern covariance function in generating covariance
matrices. We accelerate the matrix generation process in ExaGeoStat by up to
12.62X with four A100 GPUs while maintaining almost the same accuracy for
modeling and prediction operations using synthetic and real datasets.
| [
{
"version": "v1",
"created": "Sat, 1 Feb 2025 07:27:30 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Apr 2025 03:10:49 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Geng",
"Zipei",
""
],
[
"Abdulah",
"Sameh",
""
],
[
"Sun",
"Ying",
""
],
[
"Ltaief",
"Hatem",
""
],
[
"Keyes",
"David E.",
""
],
[
"Genton",
"Marc G.",
""
]
] | TITLE: GPU-Accelerated Modified Bessel Function of the Second Kind for Gaussian
Processes
ABSTRACT: Modified Bessel functions of the second kind are widely used in physics,
engineering, spatial statistics, and machine learning. Since contemporary
scientific applications, including machine learning, rely on GPUs for
acceleration, providing robust GPU-hosted implementations of special functions,
such as the modified Bessel function, is crucial for performance. Existing
implementations of the modified Bessel function of the second kind rely on CPUs
and have limited coverage of the full range of values needed in some
applications. In this work, we present a robust implementation of the modified
Bessel function of the second kind on GPUs, eliminating the dependence on the
CPU host. We cover a range of values commonly used in real applications,
providing high accuracy compared to common libraries like the GNU Scientific
Library (GSL) when referenced to Mathematica as the authority. Our
GPU-accelerated approach also demonstrates a 2.68X performance improvement
using a single A100 GPU compared to the GSL on 40-core Intel Cascade Lake CPUs.
Our implementation is integrated into ExaGeoStat, the HPC framework for
Gaussian process modeling, where the modified Bessel function of the second
kind is required by the Matern covariance function in generating covariance
matrices. We accelerate the matrix generation process in ExaGeoStat by up to
12.62X with four A100 GPUs while maintaining almost the same accuracy for
modeling and prediction operations using synthetic and real datasets.
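As a small illustration of the kind of accuracy check described, the sketch below evaluates K_nu(x) with SciPy and compares it against a leading-order asymptotic approximation; the orders, the argument range, and the use of SciPy as the reference are assumptions made here (the paper benchmarks a GPU implementation against GSL with Mathematica as the authority).

import numpy as np
from scipy.special import kv  # modified Bessel function of the second kind, K_nu(x)

def kv_asymptotic(nu, x):
    # Leading-order large-x expansion: K_nu(x) ~ sqrt(pi / (2x)) * exp(-x).
    return np.sqrt(np.pi / (2.0 * x)) * np.exp(-x)

nus = [0.5, 1.5, 2.5]            # orders typical of Matern smoothness settings (illustrative)
xs = np.linspace(10.0, 30.0, 5)
for nu in nus:
    ref = kv(nu, xs)
    rel_err = np.abs(kv_asymptotic(nu, xs) - ref) / ref
    print(f"nu={nu}: max relative error vs. SciPy reference = {rel_err.max():.2e}")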
|
2502.01267 | Jose M. Alvarez | Jose M. Alvarez and Salvatore Ruggieri | Counterfactual Situation Testing: From Single to Multidimensional
Discrimination | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | We present counterfactual situation testing (CST), a causal data mining
framework for detecting individual discrimination in a dataset of classifier
decisions. CST answers the question ``what would have been the model outcome
had the individual, or complainant, been of a different protected status?'' It
extends the legally-grounded situation testing (ST) of Thanh et al. (2011) by
operationalizing the notion of "fairness given the difference" via
counterfactual reasoning. ST finds for each complainant similar protected and
non-protected instances in the dataset; constructs, respectively, a control and
test group; and compares the groups such that a difference in model outcomes
implies a potential case of individual discrimination. CST, instead, avoids
this idealized comparison by establishing the test group on the complainant's
generated counterfactual, which reflects how the protected attribute when
changed influences other seemingly neutral attributes of the complainant. Under
CST we test for discrimination for each complainant by comparing similar
individuals within the control and test group but dissimilar individuals across
these groups. We consider single (e.g.,~gender) and multidimensional
(e.g.,~gender and race) discrimination testing. For multidimensional
discrimination we study multiple and intersectional discrimination and, as
feared by legal scholars, find evidence that the former fails to account for
the latter kind. Using a k-nearest neighbor implementation, we showcase CST on
synthetic and real data. Experimental results show that CST uncovers a higher
number of cases than ST, even when the model is counterfactually fair. CST, in
fact, extends counterfactual fairness (CF) of Kusner et al. (2017) by equipping
CF with confidence intervals, which we report for all experiments.
| [
{
"version": "v1",
"created": "Mon, 3 Feb 2025 11:38:48 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 08:09:21 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Alvarez",
"Jose M.",
""
],
[
"Ruggieri",
"Salvatore",
""
]
] | TITLE: Counterfactual Situation Testing: From Single to Multidimensional
Discrimination
ABSTRACT: We present counterfactual situation testing (CST), a causal data mining
framework for detecting individual discrimination in a dataset of classifier
decisions. CST answers the question ``what would have been the model outcome
had the individual, or complainant, been of a different protected status?'' It
extends the legally-grounded situation testing (ST) of Thanh et al. (2011) by
operationalizing the notion of "fairness given the difference" via
counterfactual reasoning. ST finds for each complainant similar protected and
non-protected instances in the dataset; constructs, respectively, a control and
test group; and compares the groups such that a difference in model outcomes
implies a potential case of individual discrimination. CST, instead, avoids
this idealized comparison by establishing the test group on the complainant's
generated counterfactual, which reflects how the protected attribute when
changed influences other seemingly neutral attributes of the complainant. Under
CST we test for discrimination for each complainant by comparing similar
individuals within the control and test group but dissimilar individuals across
these groups. We consider single (e.g.,~gender) and multidimensional
(e.g.,~gender and race) discrimination testing. For multidimensional
discrimination we study multiple and intersectional discrimination and, as
feared by legal scholars, find evidence that the former fails to account for
the latter kind. Using a k-nearest neighbor implementation, we showcase CST on
synthetic and real data. Experimental results show that CST uncovers a higher
number of cases than ST, even when the model is counterfactually fair. CST, in
fact, extends counterfactual fairness (CF) of Kusner et al. (2017) by equipping
CF with confidence intervals, which we report for all experiments.
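The k-nearest-neighbor comparison at the heart of situation testing can be sketched as follows on synthetic data; this is a simplified illustration, not the CST implementation: CST would center the test group on a generated counterfactual of the complainant rather than on the complainant itself, and the feature model, decision rule, and k are assumptions.

import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 4))                 # seemingly neutral attributes
protected = rng.integers(0, 2, size=500)      # 1 = protected status
# Synthetic decisions with a built-in penalty for the protected group.
decision = (X[:, 0] - 0.8 * protected + rng.normal(scale=0.5, size=500) > 0.0).astype(int)

def knn_positive_rate(query, mask, k=10):
    # Positive-decision rate among the k nearest individuals within the given group.
    nn = NearestNeighbors(n_neighbors=k).fit(X[mask])
    _, idx = nn.kneighbors(query.reshape(1, -1))
    return decision[np.flatnonzero(mask)[idx[0]]].mean()

complainant = X[0]
control = knn_positive_rate(complainant, protected == 1)   # similar protected individuals
test = knn_positive_rate(complainant, protected == 0)      # similar non-protected individuals
print(f"positive-decision rate difference (test - control): {test - control:+.2f}")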
|
2502.01976 | Wenhao Zheng | Wenhao Zheng, Yixiao Chen, Weitong Zhang, Souvik Kundu, Yun Li,
Zhengzhong Liu, Eric P. Xing, Hongyi Wang, Huaxiu Yao | CITER: Collaborative Inference for Efficient Large Language Model
Decoding with Token-Level Routing | null | null | null | null | cs.CL cs.AI cs.LG cs.PF | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models have achieved remarkable success in various tasks but
suffer from high computational costs during inference, limiting their
deployment in resource-constrained applications. To address this issue, we
propose a novel Collaborative Inference with Token-lEvel Routing (CITER)
framework that enables efficient collaboration between small and large language
models (SLMs \& LLMs) through a token-level routing strategy. Specifically,
CITER routes non-critical tokens to an SLM for efficiency and routes critical
tokens to an LLM for generalization quality. We formulate router training as a
policy optimization, where the router receives rewards based on both the
quality of predictions and the inference costs of generation. This allows the
router to learn to predict token-level routing scores and make routing
decisions based on both the current token and the future impact of its
decisions. To further accelerate the reward evaluation process, we introduce a
shortcut that significantly reduces the cost of reward estimation and
improves the practicality of our approach. Extensive experiments on five
benchmark datasets demonstrate that CITER reduces the inference costs while
preserving high-quality generation, offering a promising solution for real-time
and resource-constrained applications. Our data and code are available at
https://github.com/aiming-lab/CITER.
| [
{
"version": "v1",
"created": "Tue, 4 Feb 2025 03:36:44 GMT"
},
{
"version": "v2",
"created": "Wed, 5 Feb 2025 17:26:35 GMT"
},
{
"version": "v3",
"created": "Sun, 9 Feb 2025 17:47:41 GMT"
},
{
"version": "v4",
"created": "Mon, 7 Apr 2025 03:22:31 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Zheng",
"Wenhao",
""
],
[
"Chen",
"Yixiao",
""
],
[
"Zhang",
"Weitong",
""
],
[
"Kundu",
"Souvik",
""
],
[
"Li",
"Yun",
""
],
[
"Liu",
"Zhengzhong",
""
],
[
"Xing",
"Eric P.",
""
],
[
"Wang",
"Hongyi",
""
],
[
"Yao",
"Huaxiu",
""
]
] | TITLE: CITER: Collaborative Inference for Efficient Large Language Model
Decoding with Token-Level Routing
ABSTRACT: Large language models have achieved remarkable success in various tasks but
suffer from high computational costs during inference, limiting their
deployment in resource-constrained applications. To address this issue, we
propose a novel Collaborative Inference with Token-lEvel Routing (CITER)
framework that enables efficient collaboration between small and large language
models (SLMs \& LLMs) through a token-level routing strategy. Specifically,
CITER routes non-critical tokens to an SLM for efficiency and routes critical
tokens to an LLM for generalization quality. We formulate router training as a
policy optimization, where the router receives rewards based on both the
quality of predictions and the inference costs of generation. This allows the
router to learn to predict token-level routing scores and make routing
decisions based on both the current token and the future impact of its
decisions. To further accelerate the reward evaluation process, we introduce a
shortcut that significantly reduces the cost of reward estimation and
improves the practicality of our approach. Extensive experiments on five
benchmark datasets demonstrate that CITER reduces the inference costs while
preserving high-quality generation, offering a promising solution for real-time
and resource-constrained applications. Our data and code are available at
https://github.com/aiming-lab/CITER.
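To illustrate the token-level routing idea in isolation, here is a toy decoding loop that sends each next-token prediction either to a cheap model or an expensive one based on a router score; the stand-in models, the 0.5 threshold, and the relative per-token costs are assumptions, and the real framework trains the router with policy optimization rather than using random scores.

import numpy as np

rng = np.random.default_rng(0)
VOCAB = ["the", "cat", "sat", "on", "a", "mat", "."]

def small_model(context):
    # Cheap, lower-quality next-token distribution (stand-in).
    return rng.dirichlet(np.ones(len(VOCAB)))

def large_model(context):
    # Expensive, higher-quality next-token distribution (stand-in).
    p = np.ones(len(VOCAB))
    p[len(context) % len(VOCAB)] = 5.0
    return p / p.sum()

def router_score(context):
    # Probability that the small model suffices for this token (random stand-in).
    return rng.uniform()

tokens, relative_cost = [], 0
for _ in range(8):
    use_small = router_score(tokens) > 0.5
    probs = small_model(tokens) if use_small else large_model(tokens)
    relative_cost += 1 if use_small else 10   # assumed per-token cost ratio
    tokens.append(VOCAB[int(np.argmax(probs))])
print(" ".join(tokens), "| relative decoding cost:", relative_cost)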
|
2502.04093 | Kai Zhao | Zhuoxun Yang, Sheng Di, Longtao Zhang, Ruoyu Li, Ximiao Li, Jiajun
Huang, Jinyang Liu, Franck Cappello, Kai Zhao | IPComp: Interpolation Based Progressive Lossy Compression for Scientific
Applications | accepted by HPDC'25 | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Compression is a crucial solution for data reduction in modern scientific
applications due to the exponential growth of data from simulations,
experiments, and observations. Compression with progressive retrieval
capability allows users to access coarse approximations of data quickly and
then incrementally refine these approximations to higher fidelity. Existing
progressive compression solutions suffer from low reduction ratios or high
operation costs, effectively undermining the approach's benefits. In this
paper, we propose the first-ever interpolation-based progressive lossy
compression solution that has both high reduction ratios and low operation
costs. The interpolation-based algorithm has been verified as one of the best
for scientific data reduction, but no previous effort has made it
support progressive retrieval. Our contributions are three-fold: (1) We
thoroughly analyze the error characteristics of the interpolation algorithm and
propose our solution IPComp with multi-level bitplane and predictive coding.
(2) We derive optimized strategies toward minimum data retrieval under
different fidelity levels indicated by users through error bounds and bitrates.
(3) We evaluate the proposed solution using six real-world datasets from four
diverse domains. Experimental results demonstrate our solution achieves up to
$487\%$ higher compression ratios and $698\%$ faster speed than other
state-of-the-art progressive compressors, and reduces the data volume for
retrieval by up to $83\%$ compared to baselines under the same error bound, and
reduces the error by up to $99\%$ under the same bitrate.
| [
{
"version": "v1",
"created": "Thu, 6 Feb 2025 14:07:26 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Feb 2025 12:50:50 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Apr 2025 21:58:30 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Yang",
"Zhuoxun",
""
],
[
"Di",
"Sheng",
""
],
[
"Zhang",
"Longtao",
""
],
[
"Li",
"Ruoyu",
""
],
[
"Li",
"Ximiao",
""
],
[
"Huang",
"Jiajun",
""
],
[
"Liu",
"Jinyang",
""
],
[
"Cappello",
"Franck",
""
],
[
"Zhao",
"Kai",
""
]
] | TITLE: IPComp: Interpolation Based Progressive Lossy Compression for Scientific
Applications
ABSTRACT: Compression is a crucial solution for data reduction in modern scientific
applications due to the exponential growth of data from simulations,
experiments, and observations. Compression with progressive retrieval
capability allows users to access coarse approximations of data quickly and
then incrementally refine these approximations to higher fidelity. Existing
progressive compression solutions suffer from low reduction ratios or high
operation costs, effectively undermining the approach's benefits. In this
paper, we propose the first-ever interpolation-based progressive lossy
compression solution that has both high reduction ratios and low operation
costs. The interpolation-based algorithm has been verified as one of the best
for scientific data reduction, but no previous effort has made it
support progressive retrieval. Our contributions are three-fold: (1) We
thoroughly analyze the error characteristics of the interpolation algorithm and
propose our solution IPComp with multi-level bitplane and predictive coding.
(2) We derive optimized strategies toward minimum data retrieval under
different fidelity levels indicated by users through error bounds and bitrates.
(3) We evaluate the proposed solution using six real-world datasets from four
diverse domains. Experimental results demonstrate our solution achieves up to
$487\%$ higher compression ratios and $698\%$ faster speed than other
state-of-the-art progressive compressors, and reduces the data volume for
retrieval by up to $83\%$ compared to baselines under the same error bound, and
reduces the error by up to $99\%$ under the same bitrate.
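A minimal sketch of the bitplane idea behind progressive retrieval is shown below: values are quantized to 16-bit codes and reconstructed from only the most significant bitplanes, with fidelity improving as more planes are retrieved. It is illustrative only; IPComp combines this with interpolation-based prediction, predictive coding, and optimized retrieval strategies, and the 16-bit quantization width is an assumption.

import numpy as np

rng = np.random.default_rng(0)
data = rng.normal(size=1000)

# Uniform quantization to 16-bit codes.
lo, hi = data.min(), data.max()
codes = np.round((data - lo) / (hi - lo) * (2**16 - 1)).astype(np.uint32)

def reconstruct(codes, n_planes):
    # Keep only the n most significant of the 16 bitplanes.
    mask = ((2**n_planes - 1) << (16 - n_planes)) & 0xFFFF
    return (codes & mask) / (2**16 - 1) * (hi - lo) + lo

for n in (2, 4, 8, 16):
    err = np.abs(reconstruct(codes, n) - data).max()
    print(f"{n:2d} bitplanes retrieved -> max abs error {err:.4f}")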
|
2502.07045 | John Hastings | Haywood Gelman, John D. Hastings | Scalable and Ethical Insider Threat Detection through Data Synthesis and
Analysis by LLMs | 6 pages, 0 figures, 8 tables | null | null | null | cs.CR cs.AI cs.CL cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Insider threats wield an outsized influence on organizations,
disproportionate to their small numbers. This is due to the internal access
insiders have to systems, information, and infrastructure. Signals for such risks may be
found in anonymous submissions to public web-based job search site reviews.
This research studies the potential for large language models (LLMs) to analyze
and detect insider threat sentiment within job site reviews. Addressing ethical
data collection concerns, this research utilizes synthetic data generation
using LLMs alongside existing job review datasets. A comparative analysis of
sentiment scores generated by LLMs is benchmarked against expert human scoring.
Findings reveal that LLMs demonstrate alignment with human evaluations in most
cases, thus effectively identifying nuanced indicators of threat sentiment. The
performance is lower on human-generated data than synthetic data, suggesting
areas for improvement in evaluating real-world data. Text diversity analysis
found differences between human-generated and LLM-generated datasets, with
synthetic data exhibiting somewhat lower diversity. Overall, the results
demonstrate the applicability of LLMs to insider threat detection, and a
scalable solution for insider sentiment testing by overcoming ethical and
logistical barriers tied to data acquisition.
| [
{
"version": "v1",
"created": "Mon, 10 Feb 2025 21:27:06 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 16:01:47 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Gelman",
"Haywood",
""
],
[
"Hastings",
"John D.",
""
]
] | TITLE: Scalable and Ethical Insider Threat Detection through Data Synthesis and
Analysis by LLMs
ABSTRACT: Insider threats wield an outsized influence on organizations,
disproportionate to their small numbers. This is due to the internal access
insiders have to systems, information, and infrastructure. Signals for such risks may be
found in anonymous submissions to public web-based job search site reviews.
This research studies the potential for large language models (LLMs) to analyze
and detect insider threat sentiment within job site reviews. Addressing ethical
data collection concerns, this research utilizes synthetic data generation
using LLMs alongside existing job review datasets. A comparative analysis of
sentiment scores generated by LLMs is benchmarked against expert human scoring.
Findings reveal that LLMs demonstrate alignment with human evaluations in most
cases, thus effectively identifying nuanced indicators of threat sentiment. The
performance is lower on human-generated data than synthetic data, suggesting
areas for improvement in evaluating real-world data. Text diversity analysis
found differences between human-generated and LLM-generated datasets, with
synthetic data exhibiting somewhat lower diversity. Overall, the results
demonstrate the applicability of LLMs to insider threat detection, and a
scalable solution for insider sentiment testing by overcoming ethical and
logistical barriers tied to data acquisition.
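The benchmarking of model-generated sentiment scores against expert scoring can be sketched as a simple rank-correlation and agreement check; the ratings below are made-up placeholders rather than data from the study, and Spearman correlation plus a within-one-point agreement rate are just one reasonable choice of alignment measures.

import numpy as np
from scipy.stats import spearmanr

human = np.array([1, 2, 5, 4, 3, 1, 5, 2, 4, 3])   # hypothetical expert scores (1-5)
model = np.array([1, 3, 5, 4, 3, 2, 4, 2, 5, 3])   # hypothetical LLM scores (1-5)

rho, p_value = spearmanr(human, model)
within_one = np.mean(np.abs(human - model) <= 1)   # within-one-point agreement
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f}), within-1 agreement = {within_one:.0%}")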
|
2502.10603 | Youssef Shoeb | Youssef Shoeb, Azarm Nowzad, Hanno Gottschalk | Adaptive Neural Networks for Intelligent Data-Driven Development | Accepted to 2025 IEEE Intelligent Vehicles Symposium (IV) | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Advances in machine learning methods for computer vision tasks have led to
their consideration for safety-critical applications like autonomous driving.
However, effectively integrating these methods into the automotive development
lifecycle remains challenging. Since the performance of machine learning
algorithms relies heavily on the training data provided, the data and model
development lifecycle play a key role in successfully integrating these
components into the product development lifecycle. Existing models frequently
encounter difficulties recognizing or adapting to novel instances not present
in the original training dataset. This poses a significant risk for reliable
deployment in dynamic environments. To address this challenge, we propose an
adaptive neural network architecture and an iterative development framework
that enables users to efficiently incorporate previously unknown objects into
the current perception system. Our approach builds on continuous learning,
emphasizing the necessity of dynamic updates to reflect real-world deployment
conditions. Specifically, we introduce a pipeline with three key components:
(1) a scalable network extension strategy to integrate new classes while
preserving existing performance, (2) a dynamic OoD detection component that
requires no additional retraining for newly added classes, and (3) a
retrieval-based data augmentation process tailored for safety-critical
deployments. The integration of these components establishes a pragmatic and
adaptive pipeline for the continuous evolution of perception systems in the
context of autonomous driving.
| [
{
"version": "v1",
"created": "Fri, 14 Feb 2025 23:18:54 GMT"
},
{
"version": "v2",
"created": "Sun, 2 Mar 2025 01:50:22 GMT"
},
{
"version": "v3",
"created": "Sat, 5 Apr 2025 02:18:00 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Shoeb",
"Youssef",
""
],
[
"Nowzad",
"Azarm",
""
],
[
"Gottschalk",
"Hanno",
""
]
] | TITLE: Adaptive Neural Networks for Intelligent Data-Driven Development
ABSTRACT: Advances in machine learning methods for computer vision tasks have led to
their consideration for safety-critical applications like autonomous driving.
However, effectively integrating these methods into the automotive development
lifecycle remains challenging. Since the performance of machine learning
algorithms relies heavily on the training data provided, the data and model
development lifecycle play a key role in successfully integrating these
components into the product development lifecycle. Existing models frequently
encounter difficulties recognizing or adapting to novel instances not present
in the original training dataset. This poses a significant risk for reliable
deployment in dynamic environments. To address this challenge, we propose an
adaptive neural network architecture and an iterative development framework
that enables users to efficiently incorporate previously unknown objects into
the current perception system. Our approach builds on continuous learning,
emphasizing the necessity of dynamic updates to reflect real-world deployment
conditions. Specifically, we introduce a pipeline with three key components:
(1) a scalable network extension strategy to integrate new classes while
preserving existing performance, (2) a dynamic OoD detection component that
requires no additional retraining for newly added classes, and (3) a
retrieval-based data augmentation process tailored for safety-critical
deployments. The integration of these components establishes a pragmatic and
adaptive pipeline for the continuous evolution of perception systems in the
context of autonomous driving.
|
2502.11464 | Xintong Ling Dr. | Zixiang Cui, Xintong Ling, Xingyu Zhou, Jiaheng Wang, Zhi Ding, Xiqi
Gao | BagChain: A Dual-functional Blockchain Leveraging Bagging-based
Distributed Learning | null | null | null | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This work proposes a dual-functional blockchain framework named BagChain for
bagging-based decentralized learning. BagChain integrates blockchain with
distributed machine learning by replacing the computationally costly hash
operations in proof-of-work with machine-learning model training. BagChain
utilizes individual miners' private data samples and limited computing
resources to train base models, which may be very weak, and
further aggregates them into strong ensemble models. Specifically, we design a
three-layer blockchain structure associated with the corresponding generation
and validation mechanisms to enable distributed machine learning among
uncoordinated miners in a permissionless and open setting. To reduce
computational waste due to blockchain forking, we further propose the cross
fork sharing mechanism for practical networks with lengthy delays. Extensive
experiments illustrate the superiority and efficacy of BagChain when handling
various machine learning tasks on both independently and identically
distributed (IID) and non-IID datasets. BagChain remains robust and effective
even when facing constrained local computing capability, heterogeneous private
user data, and sparse network connectivity.
| [
{
"version": "v1",
"created": "Mon, 17 Feb 2025 05:49:45 GMT"
},
{
"version": "v2",
"created": "Wed, 19 Mar 2025 08:48:31 GMT"
},
{
"version": "v3",
"created": "Sun, 6 Apr 2025 14:54:00 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Cui",
"Zixiang",
""
],
[
"Ling",
"Xintong",
""
],
[
"Zhou",
"Xingyu",
""
],
[
"Wang",
"Jiaheng",
""
],
[
"Ding",
"Zhi",
""
],
[
"Gao",
"Xiqi",
""
]
] | TITLE: BagChain: A Dual-functional Blockchain Leveraging Bagging-based
Distributed Learning
ABSTRACT: This work proposes a dual-functional blockchain framework named BagChain for
bagging-based decentralized learning. BagChain integrates blockchain with
distributed machine learning by replacing the computationally costly hash
operations in proof-of-work with machine-learning model training. BagChain
utilizes individual miners' private data samples and limited computing
resources to train base models, which may be very weak, and
further aggregates them into strong ensemble models. Specifically, we design a
three-layer blockchain structure associated with the corresponding generation
and validation mechanisms to enable distributed machine learning among
uncoordinated miners in a permissionless and open setting. To reduce
computational waste due to blockchain forking, we further propose the cross
fork sharing mechanism for practical networks with lengthy delays. Extensive
experiments illustrate the superiority and efficacy of BagChain when handling
various machine learning tasks on both independently and identically
distributed (IID) and non-IID datasets. BagChain remains robust and effective
even when facing constrained local computing capability, heterogeneous private
user data, and sparse network connectivity.
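The bagging step (weak base models trained on individual miners' private shards, then aggregated into a stronger ensemble) can be sketched with scikit-learn as below; this shows only the plain bagging idea on synthetic data under assumed shard counts and tree depth, without any of the blockchain structure, validation mechanisms, or fork handling.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1200, n_features=20, random_state=0)
X_train, y_train, X_test, y_test = X[:1000], y[:1000], X[1000:], y[1000:]

# Each "miner" trains a weak base model on its own private shard.
shards = np.array_split(np.arange(len(X_train)), 10)
base_models = [DecisionTreeClassifier(max_depth=2, random_state=i).fit(X_train[idx], y_train[idx])
               for i, idx in enumerate(shards)]

# Aggregate by majority vote into a stronger ensemble.
votes = np.stack([m.predict(X_test) for m in base_models])
ensemble = (votes.mean(axis=0) > 0.5).astype(int)

single = base_models[0].score(X_test, y_test)
print(f"single weak model accuracy: {single:.2f}, ensemble accuracy: {(ensemble == y_test).mean():.2f}")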
|
2502.14314 | Tianyou Jiang | Tianyou Jiang, Yang Zhong | ODverse33: Is the New YOLO Version Always Better? A Multi Domain
benchmark from YOLO v5 to v11 | 19 pages, 4 figures, 7 tables | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | You Look Only Once (YOLO) models have been widely used for building real-time
object detectors across various domains. With the increasing frequency of new
YOLO versions being released, key questions arise. Are the newer versions
always better than their previous versions? What are the core innovations in
each YOLO version and how do these changes translate into real-world
performance gains? In this paper, we summarize the key innovations from YOLOv1
to YOLOv11, introduce a comprehensive benchmark called ODverse33, which
includes 33 datasets spanning 11 diverse domains (Autonomous driving,
Agricultural, Underwater, Medical, Videogame, Industrial, Aerial, Wildlife,
Retail, Microscopic, and Security), and explore the practical impact of model
improvements in real-world, multi-domain applications through extensive
experimental results. We hope this study can provide some guidance to the
extensive users of object detection models and give some references for future
real-time object detector development.
| [
{
"version": "v1",
"created": "Thu, 20 Feb 2025 06:57:58 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Apr 2025 14:21:38 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Jiang",
"Tianyou",
""
],
[
"Zhong",
"Yang",
""
]
] | TITLE: ODverse33: Is the New YOLO Version Always Better? A Multi Domain
benchmark from YOLO v5 to v11
ABSTRACT: You Only Look Once (YOLO) models have been widely used for building real-time
object detectors across various domains. With the increasing frequency of new
YOLO versions being released, key questions arise. Are the newer versions
always better than their previous versions? What are the core innovations in
each YOLO version and how do these changes translate into real-world
performance gains? In this paper, we summarize the key innovations from YOLOv1
to YOLOv11, introduce a comprehensive benchmark called ODverse33, which
includes 33 datasets spanning 11 diverse domains (Autonomous driving,
Agricultural, Underwater, Medical, Videogame, Industrial, Aerial, Wildlife,
Retail, Microscopic, and Security), and explore the practical impact of model
improvements in real-world, multi-domain applications through extensive
experimental results. We hope this study can provide some guidance to the
extensive users of object detection models and give some references for future
real-time object detector development.
|
2502.14807 | Fadillah Maani | Fadillah Maani, Numan Saeed, Tausifa Saleem, Zaid Farooq, Hussain
Alasmawi, Werner Diehl, Ameera Mohammad, Gareth Waring, Saudabi Valappi,
Leanne Bricker, Mohammad Yaqub | FetalCLIP: A Visual-Language Foundation Model for Fetal Ultrasound Image
Analysis | null | null | null | null | eess.IV cs.AI cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Foundation models are becoming increasingly effective in the medical domain,
offering pre-trained models on large datasets that can be readily adapted for
downstream tasks. Despite progress, fetal ultrasound images remain a
challenging domain for foundation models due to their inherent complexity,
often requiring substantial additional training and facing limitations due to
the scarcity of paired multimodal data. To overcome these challenges, here we
introduce FetalCLIP, a vision-language foundation model capable of generating
universal representations of fetal ultrasound images. FetalCLIP was pre-trained
using a multimodal learning approach on a diverse dataset of 210,035 fetal
ultrasound images paired with text. This represents the largest paired dataset
of its kind used for foundation model development to date. This unique training
approach allows FetalCLIP to effectively learn the intricate anatomical
features present in fetal ultrasound images, resulting in robust
representations that can be used for a variety of downstream applications. In
extensive benchmarking across a range of key fetal ultrasound applications,
including classification, gestational age estimation, congenital heart defect
(CHD) detection, and fetal structure segmentation, FetalCLIP outperformed all
baselines while demonstrating remarkable generalizability and strong
performance even with limited labeled data. We plan to release the FetalCLIP
model publicly for the benefit of the broader scientific community.
| [
{
"version": "v1",
"created": "Thu, 20 Feb 2025 18:30:34 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 17:03:03 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Maani",
"Fadillah",
""
],
[
"Saeed",
"Numan",
""
],
[
"Saleem",
"Tausifa",
""
],
[
"Farooq",
"Zaid",
""
],
[
"Alasmawi",
"Hussain",
""
],
[
"Diehl",
"Werner",
""
],
[
"Mohammad",
"Ameera",
""
],
[
"Waring",
"Gareth",
""
],
[
"Valappi",
"Saudabi",
""
],
[
"Bricker",
"Leanne",
""
],
[
"Yaqub",
"Mohammad",
""
]
] | TITLE: FetalCLIP: A Visual-Language Foundation Model for Fetal Ultrasound Image
Analysis
ABSTRACT: Foundation models are becoming increasingly effective in the medical domain,
offering pre-trained models on large datasets that can be readily adapted for
downstream tasks. Despite progress, fetal ultrasound images remain a
challenging domain for foundation models due to their inherent complexity,
often requiring substantial additional training and facing limitations due to
the scarcity of paired multimodal data. To overcome these challenges, here we
introduce FetalCLIP, a vision-language foundation model capable of generating
universal representations of fetal ultrasound images. FetalCLIP was pre-trained
using a multimodal learning approach on a diverse dataset of 210,035 fetal
ultrasound images paired with text. This represents the largest paired dataset
of its kind used for foundation model development to date. This unique training
approach allows FetalCLIP to effectively learn the intricate anatomical
features present in fetal ultrasound images, resulting in robust
representations that can be used for a variety of downstream applications. In
extensive benchmarking across a range of key fetal ultrasound applications,
including classification, gestational age estimation, congenital heart defect
(CHD) detection, and fetal structure segmentation, FetalCLIP outperformed all
baselines while demonstrating remarkable generalizability and strong
performance even with limited labeled data. We plan to release the FetalCLIP
model publicly for the benefit of the broader scientific community.
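The multimodal pretraining described here builds on the CLIP-style image-text contrastive objective, sketched below in plain NumPy on random embeddings; the batch size, embedding dimension, and temperature are illustrative assumptions, and this is not the FetalCLIP training code.

import numpy as np

rng = np.random.default_rng(0)
B, D, temperature = 8, 32, 0.07
image_emb = rng.normal(size=(B, D))
text_emb = rng.normal(size=(B, D))
image_emb /= np.linalg.norm(image_emb, axis=1, keepdims=True)
text_emb /= np.linalg.norm(text_emb, axis=1, keepdims=True)

# Pairwise cosine similarities scaled by temperature; diagonal entries are matched pairs.
logits = image_emb @ text_emb.T / temperature

def cross_entropy_on_diagonal(logits):
    # Cross-entropy where the correct "class" for row i is column i.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

loss = 0.5 * (cross_entropy_on_diagonal(logits) + cross_entropy_on_diagonal(logits.T))
print(f"symmetric contrastive loss: {loss:.3f}")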
|
2502.15011 | Sayan Deb Sarkar | Sayan Deb Sarkar, Ondrej Miksik, Marc Pollefeys, Daniel Barath, Iro
Armeni | CrossOver: 3D Scene Cross-Modal Alignment | Project Page: https://sayands.github.io/crossover/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multi-modal 3D object understanding has gained significant attention, yet
current approaches often assume complete data availability and rigid alignment
across all modalities. We present CrossOver, a novel framework for cross-modal
3D scene understanding via flexible, scene-level modality alignment. Unlike
traditional methods that require aligned modality data for every object
instance, CrossOver learns a unified, modality-agnostic embedding space for
scenes by aligning modalities -- RGB images, point clouds, CAD models,
floorplans, and text descriptions -- with relaxed constraints and without
explicit object semantics. Leveraging dimensionality-specific encoders, a
multi-stage training pipeline, and emergent cross-modal behaviors, CrossOver
supports robust scene retrieval and object localization, even with missing
modalities. Evaluations on ScanNet and 3RScan datasets show its superior
performance across diverse metrics, highlighting the adaptability for
real-world applications in 3D scene understanding.
| [
{
"version": "v1",
"created": "Thu, 20 Feb 2025 20:05:30 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 18:15:59 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Sarkar",
"Sayan Deb",
""
],
[
"Miksik",
"Ondrej",
""
],
[
"Pollefeys",
"Marc",
""
],
[
"Barath",
"Daniel",
""
],
[
"Armeni",
"Iro",
""
]
] | TITLE: CrossOver: 3D Scene Cross-Modal Alignment
ABSTRACT: Multi-modal 3D object understanding has gained significant attention, yet
current approaches often assume complete data availability and rigid alignment
across all modalities. We present CrossOver, a novel framework for cross-modal
3D scene understanding via flexible, scene-level modality alignment. Unlike
traditional methods that require aligned modality data for every object
instance, CrossOver learns a unified, modality-agnostic embedding space for
scenes by aligning modalities -- RGB images, point clouds, CAD models,
floorplans, and text descriptions -- with relaxed constraints and without
explicit object semantics. Leveraging dimensionality-specific encoders, a
multi-stage training pipeline, and emergent cross-modal behaviors, CrossOver
supports robust scene retrieval and object localization, even with missing
modalities. Evaluations on ScanNet and 3RScan datasets show its superior
performance across diverse metrics, highlighting the adaptability for
real-world applications in 3D scene understanding.
|
2502.15860 | Arefeh Kazemi | Arefeh Kazemi and Sri Balaaji Natarajan Kalaivendan and Joachim Wagner
and Hamza Qadeer and Brian Davis | Synthetic vs. Gold: The Role of LLM-Generated Labels and Data in
Cyberbullying Detection | null | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Cyberbullying (CB) presents a pressing threat, especially to children,
underscoring the urgent need for robust detection systems to ensure online
safety. However, progress in developing such systems is hindered by the
scarcity of large, labeled datasets that are specifically tailored for
specialized tasks and the target age groups. Creating these datasets relies
heavily on human annotation, which not only strains resources but also raises
significant ethical and legal concerns due to annotators' exposure to harmful
content, particularly when such data is acquired from vulnerable
populations such as children. In this paper, we address these challenges by
leveraging Large Language Models (LLMs) to generate synthetic data and labels.
Our experiments demonstrate that synthetic data enables BERT-based CB
classifiers to achieve performance close to that of those trained on fully
authentic datasets (75.8% vs. 81.5% accuracy). Additionally, LLMs can
effectively label authentic yet unlabeled data, allowing BERT classifiers to
attain a comparable performance level (79.1% vs. 81.5% accuracy). These results
highlight the potential of LLMs as a scalable, ethical, and cost-effective
solution for generating data for CB detection.
| [
{
"version": "v1",
"created": "Fri, 21 Feb 2025 10:17:29 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Apr 2025 09:42:07 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Kazemi",
"Arefeh",
""
],
[
"Kalaivendan",
"Sri Balaaji Natarajan",
""
],
[
"Wagner",
"Joachim",
""
],
[
"Qadeer",
"Hamza",
""
],
[
"Davis",
"Brian",
""
]
] | TITLE: Synthetic vs. Gold: The Role of LLM-Generated Labels and Data in
Cyberbullying Detection
ABSTRACT: Cyberbullying (CB) presents a pressing threat, especially to children,
underscoring the urgent need for robust detection systems to ensure online
safety. However, progress in developing such systems is hindered by the
scarcity of large, labeled datasets that are specifically tailored for
specialized tasks and the target age groups. Creating these datasets relies
heavily on human annotation, which not only strains resources but also raises
significant ethical and legal concerns due to annotators' exposure to harmful
content, particularly when such data is acquired from vulnerable
populations such as children. In this paper, we address these challenges by
leveraging Large Language Models (LLMs) to generate synthetic data and labels.
Our experiments demonstrate that synthetic data enables BERT-based CB
classifiers to achieve performance close to that of those trained on fully
authentic datasets (75.8% vs. 81.5% accuracy). Additionally, LLMs can
effectively label authentic yet unlabeled data, allowing BERT classifiers to
attain a comparable performance level (79.1% vs. 81.5% accuracy). These results
highlight the potential of LLMs as a scalable, ethical, and cost-effective
solution for generating data for CB detection.
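The train-on-synthetic, evaluate-on-gold comparison can be sketched as below, with a TF-IDF plus logistic-regression classifier standing in for BERT and a handful of made-up example texts; the texts, labels, and classifier choice are all assumptions for illustration.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Hypothetical synthetic (LLM-generated) training examples: (text, label), 1 = bullying.
synthetic = [("you are so stupid nobody likes you", 1),
             ("great game last night, well played", 0),
             ("go away loser, everyone hates you", 1),
             ("thanks for sharing this article", 0)]
# Hypothetical human-labelled (gold) test examples.
gold = [("nobody wants you here, just leave", 1),
        ("congrats on the new job!", 0)]

X_train, y_train = zip(*synthetic)
X_test, y_test = zip(*gold)

vectorizer = TfidfVectorizer()
classifier = LogisticRegression().fit(vectorizer.fit_transform(X_train), y_train)
predictions = classifier.predict(vectorizer.transform(X_test))
print("accuracy on gold test data:", accuracy_score(y_test, predictions))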
|
2502.16748 | Anand Kumar | Anand Kumar, Kavinder Roghit Kanthen, Josna John | GS-TransUNet: Integrated 2D Gaussian Splatting and Transformer UNet for
Accurate Skin Lesion Analysis | 12 pages, 7 figures, SPIE Medical Imaging 2025 | SPIE Medical Imaging 2025 | 10.1117/12.3046869 | 13407-1340736 | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We can achieve fast and consistent early skin cancer detection with recent
developments in computer vision and deep learning techniques. However, the
existing skin lesion segmentation and classification prediction models run
independently, thus missing potential efficiencies from their integrated
execution. To unify skin lesion analysis, our paper presents the Gaussian
Splatting - Transformer UNet (GS-TransUNet), a novel approach that
synergistically combines 2D Gaussian splatting with the Transformer UNet
architecture for automated skin cancer diagnosis. Our unified deep learning
model efficiently delivers dual-function skin lesion classification and
segmentation for clinical diagnosis. Evaluated on ISIC-2017 and PH2 datasets,
our network demonstrates superior performance compared to existing
state-of-the-art models across multiple metrics through 5-fold
cross-validation. Our findings illustrate significant advancements in the
precision of segmentation and classification. This integration sets new
benchmarks in the field and highlights the potential for further research into
multi-task medical image analysis methodologies, promising enhancements in
automated diagnostic systems.
| [
{
"version": "v1",
"created": "Sun, 23 Feb 2025 23:28:47 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Kumar",
"Anand",
""
],
[
"Kanthen",
"Kavinder Roghit",
""
],
[
"John",
"Josna",
""
]
] | TITLE: GS-TransUNet: Integrated 2D Gaussian Splatting and Transformer UNet for
Accurate Skin Lesion Analysis
ABSTRACT: We can achieve fast and consistent early skin cancer detection with recent
developments in computer vision and deep learning techniques. However, the
existing skin lesion segmentation and classification prediction models run
independently, thus missing potential efficiencies from their integrated
execution. To unify skin lesion analysis, our paper presents the Gaussian
Splatting - Transformer UNet (GS-TransUNet), a novel approach that
synergistically combines 2D Gaussian splatting with the Transformer UNet
architecture for automated skin cancer diagnosis. Our unified deep learning
model efficiently delivers dual-function skin lesion classification and
segmentation for clinical diagnosis. Evaluated on ISIC-2017 and PH2 datasets,
our network demonstrates superior performance compared to existing
state-of-the-art models across multiple metrics through 5-fold
cross-validation. Our findings illustrate significant advancements in the
precision of segmentation and classification. This integration sets new
benchmarks in the field and highlights the potential for further research into
multi-task medical image analysis methodologies, promising enhancements in
automated diagnostic systems.
|
2502.17177 | Utsav Akhaury | Utsav Akhaury, Pascale Jablonka, Fr\'ed\'eric Courbin, Jean-Luc Starck | Joint multiband deconvolution for Euclid and Vera C. Rubin images | 12 pages, 12 figures | null | null | null | astro-ph.IM cs.CV | http://creativecommons.org/licenses/by/4.0/ | With the advent of surveys like Euclid and Vera C. Rubin, astrophysicists
will have access to both deep, high-resolution images and multiband images.
However, these two types are not simultaneously available in any single
dataset. It is therefore vital to devise image deconvolution algorithms that
exploit the best of both worlds and that can jointly analyze datasets spanning
a range of resolutions and wavelengths. In this work we introduce a novel
multiband deconvolution technique aimed at improving the resolution of
ground-based astronomical images by leveraging higher-resolution space-based
observations. The method capitalizes on the fortunate fact that the Rubin $r$,
$i$, and $z$ bands lie within the Euclid VIS band. The algorithm jointly
de-convolves all the data to convert the $r$-, $i$-, and $z$-band Rubin images
to the resolution of Euclid by leveraging the correlations between the
different bands. We also investigate the performance of deep-learning-based
denoising with DRUNet to further improve the results. We illustrate the
effectiveness of our method in terms of resolution and morphology recovery,
flux preservation, and generalization to different noise levels. This approach
extends beyond the specific Euclid-Rubin combination, offering a versatile
solution to improving the resolution of ground-based images in multiple
photometric bands by jointly using any space-based images with overlapping
filters.
| [
{
"version": "v1",
"created": "Mon, 24 Feb 2025 14:13:38 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 15:39:06 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Akhaury",
"Utsav",
""
],
[
"Jablonka",
"Pascale",
""
],
[
"Courbin",
"Frédéric",
""
],
[
"Starck",
"Jean-Luc",
""
]
] | TITLE: Joint multiband deconvolution for Euclid and Vera C. Rubin images
ABSTRACT: With the advent of surveys like Euclid and Vera C. Rubin, astrophysicists
will have access to both deep, high-resolution images and multiband images.
However, these two types are not simultaneously available in any single
dataset. It is therefore vital to devise image deconvolution algorithms that
exploit the best of both worlds and that can jointly analyze datasets spanning
a range of resolutions and wavelengths. In this work we introduce a novel
multiband deconvolution technique aimed at improving the resolution of
ground-based astronomical images by leveraging higher-resolution space-based
observations. The method capitalizes on the fortunate fact that the Rubin $r$,
$i$, and $z$ bands lie within the Euclid VIS band. The algorithm jointly
deconvolves all the data to convert the $r$-, $i$-, and $z$-band Rubin images
to the resolution of Euclid by leveraging the correlations between the
different bands. We also investigate the performance of deep-learning-based
denoising with DRUNet to further improve the results. We illustrate the
effectiveness of our method in terms of resolution and morphology recovery,
flux preservation, and generalization to different noise levels. This approach
extends beyond the specific Euclid-Rubin combination, offering a versatile
solution to improving the resolution of ground-based images in multiple
photometric bands by jointly using any space-based images with overlapping
filters.
|
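As a rough illustration of the kind of joint problem the record above describes (an assumed formulation, not necessarily the authors' exact objective), the three Rubin bands can be deconvolved together at Euclid resolution, with a coupling term tying their weighted sum to the Euclid VIS image and a regulariser encoding cross-band correlations:

% Assumed notation: y_b and H_b are the observed Rubin image and PSF in band b,
% D downsamples from Euclid to Rubin sampling, x_b is the latent band image at
% Euclid resolution, y_VIS and H_VIS are the Euclid VIS image and PSF, w_b are
% relative band throughputs, and R is a regulariser coupling the bands.
\begin{equation}
\{\hat{x}_b\} = \arg\min_{\{x_b\}} \sum_{b \in \{r,i,z\}}
\frac{1}{2\sigma_b^{2}} \left\lVert y_b - D H_b x_b \right\rVert_2^{2}
+ \frac{1}{2\sigma_{\mathrm{VIS}}^{2}} \Bigl\lVert y_{\mathrm{VIS}} - H_{\mathrm{VIS}} \sum_{b} w_b x_b \Bigr\rVert_2^{2}
+ \lambda\, R(x_r, x_i, x_z)
\end{equation}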
2502.17475 | Xu Wang | Xu Wang, Jiaju Kang, Puyu Han, Yubao Zhao, Qian Liu, Liwenfei He,
Lingqiong Zhang, Lingyun Dai, Yongcheng Wang, Jie Tao | ECG-Expert-QA: A Benchmark for Evaluating Medical Large Language Models
in Heart Disease Diagnosis | null | null | null | null | eess.SP cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We present ECG-Expert-QA, a comprehensive multimodal dataset for evaluating
diagnostic capabilities in electrocardiogram (ECG) interpretation. It combines
real-world clinical ECG data with systematically generated synthetic cases,
covering 12 essential diagnostic tasks and totaling 47,211 expert-validated QA
pairs. These encompass diverse clinical scenarios, from basic rhythm
recognition to complex diagnoses involving rare conditions and temporal
changes. A key innovation is the support for multi-turn dialogues, enabling the
development of conversational medical AI systems that emulate clinician-patient
or interprofessional interactions. This allows for more realistic assessment of
AI models' clinical reasoning, diagnostic accuracy, and knowledge integration.
Constructed through a knowledge-guided framework with strict quality control,
ECG-Expert-QA ensures linguistic and clinical consistency, making it a
high-quality resource for advancing AI-assisted ECG interpretation. It
challenges models with tasks like identifying subtle ischemic changes and
interpreting complex arrhythmias in context-rich scenarios. To promote research
transparency and collaboration, the dataset, accompanying code, and prompts are
publicly released at https://github.com/Zaozzz/ECG-Expert-QA
| [
{
"version": "v1",
"created": "Sun, 16 Feb 2025 13:28:55 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Feb 2025 12:57:16 GMT"
},
{
"version": "v3",
"created": "Mon, 7 Apr 2025 09:59:44 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wang",
"Xu",
""
],
[
"Kang",
"Jiaju",
""
],
[
"Han",
"Puyu",
""
],
[
"Zhao",
"Yubao",
""
],
[
"Liu",
"Qian",
""
],
[
"He",
"Liwenfei",
""
],
[
"Zhang",
"Lingqiong",
""
],
[
"Dai",
"Lingyun",
""
],
[
"Wang",
"Yongcheng",
""
],
[
"Tao",
"Jie",
""
]
] | TITLE: ECG-Expert-QA: A Benchmark for Evaluating Medical Large Language Models
in Heart Disease Diagnosis
ABSTRACT: We present ECG-Expert-QA, a comprehensive multimodal dataset for evaluating
diagnostic capabilities in electrocardiogram (ECG) interpretation. It combines
real-world clinical ECG data with systematically generated synthetic cases,
covering 12 essential diagnostic tasks and totaling 47,211 expert-validated QA
pairs. These encompass diverse clinical scenarios, from basic rhythm
recognition to complex diagnoses involving rare conditions and temporal
changes. A key innovation is the support for multi-turn dialogues, enabling the
development of conversational medical AI systems that emulate clinician-patient
or interprofessional interactions. This allows for more realistic assessment of
AI models' clinical reasoning, diagnostic accuracy, and knowledge integration.
Constructed through a knowledge-guided framework with strict quality control,
ECG-Expert-QA ensures linguistic and clinical consistency, making it a
high-quality resource for advancing AI-assisted ECG interpretation. It
challenges models with tasks like identifying subtle ischemic changes and
interpreting complex arrhythmias in context-rich scenarios. To promote research
transparency and collaboration, the dataset, accompanying code, and prompts are
publicly released at https://github.com/Zaozzz/ECG-Expert-QA
|
2502.17648 | Lei Cheng | Lei Cheng, Lihao Guo, Tianya Zhang, Tam Bang, Austin Harris, Mustafa
Hajij, Mina Sartipi and Siyang Cao | CalibRefine: Deep Learning-Based Online Automatic Targetless
LiDAR-Camera Calibration with Iterative and Attention-Driven Post-Refinement | null | null | null | null | cs.CV cs.SY eess.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate multi-sensor calibration is essential for deploying robust
perception systems in applications such as autonomous driving, robotics, and
intelligent transportation. Existing LiDAR-camera calibration methods often
rely on manually placed targets, preliminary parameter estimates, or intensive
data preprocessing, limiting their scalability and adaptability in real-world
settings. In this work, we propose a fully automatic, targetless, and online
calibration framework, CalibRefine, which directly processes raw LiDAR point
clouds and camera images. Our approach is divided into four stages: (1) a
Common Feature Discriminator that trains on automatically detected
objects--using relative positions, appearance embeddings, and semantic
classes--to generate reliable LiDAR-camera correspondences, (2) a coarse
homography-based calibration, (3) an iterative refinement to incrementally
improve alignment as additional data frames become available, and (4) an
attention-based refinement that addresses non-planar distortions by leveraging
a Vision Transformer and cross-attention mechanisms. Through extensive
experiments on two urban traffic datasets, we show that CalibRefine delivers
high-precision calibration results with minimal human involvement,
outperforming state-of-the-art targetless methods and remaining competitive
with, or surpassing, manually tuned baselines. Our findings highlight how
robust object-level feature matching, together with iterative and
self-supervised attention-based adjustments, enables consistent sensor fusion
in complex, real-world conditions without requiring ground-truth calibration
matrices or elaborate data preprocessing. Code is available at
\href{https://github.com/radar-lab/Lidar\_Camera\_Automatic\_Calibration}{https://github.com/radar-lab/Lidar\_Camera\_Automatic\_Calibration}
| [
{
"version": "v1",
"created": "Mon, 24 Feb 2025 20:53:42 GMT"
},
{
"version": "v2",
"created": "Wed, 26 Feb 2025 06:35:56 GMT"
},
{
"version": "v3",
"created": "Tue, 4 Mar 2025 17:54:37 GMT"
},
{
"version": "v4",
"created": "Sat, 5 Apr 2025 15:05:48 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Cheng",
"Lei",
""
],
[
"Guo",
"Lihao",
""
],
[
"Zhang",
"Tianya",
""
],
[
"Bang",
"Tam",
""
],
[
"Harris",
"Austin",
""
],
[
"Hajij",
"Mustafa",
""
],
[
"Sartipi",
"Mina",
""
],
[
"Cao",
"Siyang",
""
]
] | TITLE: CalibRefine: Deep Learning-Based Online Automatic Targetless
LiDAR-Camera Calibration with Iterative and Attention-Driven Post-Refinement
ABSTRACT: Accurate multi-sensor calibration is essential for deploying robust
perception systems in applications such as autonomous driving, robotics, and
intelligent transportation. Existing LiDAR-camera calibration methods often
rely on manually placed targets, preliminary parameter estimates, or intensive
data preprocessing, limiting their scalability and adaptability in real-world
settings. In this work, we propose a fully automatic, targetless, and online
calibration framework, CalibRefine, which directly processes raw LiDAR point
clouds and camera images. Our approach is divided into four stages: (1) a
Common Feature Discriminator that trains on automatically detected
objects--using relative positions, appearance embeddings, and semantic
classes--to generate reliable LiDAR-camera correspondences, (2) a coarse
homography-based calibration, (3) an iterative refinement to incrementally
improve alignment as additional data frames become available, and (4) an
attention-based refinement that addresses non-planar distortions by leveraging
a Vision Transformer and cross-attention mechanisms. Through extensive
experiments on two urban traffic datasets, we show that CalibRefine delivers
high-precision calibration results with minimal human involvement,
outperforming state-of-the-art targetless methods and remaining competitive
with, or surpassing, manually tuned baselines. Our findings highlight how
robust object-level feature matching, together with iterative and
self-supervised attention-based adjustments, enables consistent sensor fusion
in complex, real-world conditions without requiring ground-truth calibration
matrices or elaborate data preprocessing. Code is available at
\href{https://github.com/radar-lab/Lidar\_Camera\_Automatic\_Calibration}{https://github.com/radar-lab/Lidar\_Camera\_Automatic\_Calibration}
|
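For the coarse, homography-based stage mentioned in the record above, a minimal OpenCV sketch looks like the following; the source of the matched object centroids and the RANSAC threshold are assumptions for illustration, not details of the CalibRefine code.

# Hedged sketch of a coarse LiDAR-to-image homography fit; the correspondence
# source and thresholds are assumptions for illustration only.
import cv2
import numpy as np

def coarse_calibration(lidar_pts_2d: np.ndarray, cam_pts_2d: np.ndarray) -> np.ndarray:
    """lidar_pts_2d: (N, 2) object centroids from the LiDAR (e.g. ground-plane x, y).
    cam_pts_2d: (N, 2) matched object centroids detected in the image, N >= 4."""
    H, inliers = cv2.findHomography(
        lidar_pts_2d.astype(np.float32),
        cam_pts_2d.astype(np.float32),
        method=cv2.RANSAC,
        ransacReprojThreshold=5.0,
    )
    return H  # 3x3 matrix mapping LiDAR ground-plane points into the image

# Later frames can refine this estimate iteratively as more matches accumulate.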
2502.20678 | Aaryan Garg | Aaryan Garg, Akash Kumar, Yogesh S Rawat | STPro: Spatial and Temporal Progressive Learning for Weakly Supervised
Spatio-Temporal Grounding | CVPR'25 Conference | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | In this work we study Weakly Supervised Spatio-Temporal Video Grounding
(WSTVG), a challenging task of localizing subjects spatio-temporally in videos
using only textual queries and no bounding box supervision. Inspired by recent
advances in vision-language foundation models, we investigate their utility for
WSTVG, leveraging their zero-shot grounding capabilities. However, we find that
a simple adaptation lacks essential spatio-temporal grounding abilities. To
bridge this gap, we introduce Tubelet Referral Grounding (TRG), which connects
textual queries to tubelets to enable spatio-temporal predictions. Despite its
promise, TRG struggles with compositional action understanding and dense scene
scenarios. To address these limitations, we propose STPro, a novel progressive
learning framework with two key modules: (1) Sub-Action Temporal Curriculum
Learning (SA-TCL), which incrementally builds compositional action
understanding, and (2) Congestion-Guided Spatial Curriculum Learning (CG-SCL),
which adapts the model to complex scenes by spatially increasing task
difficulty. STPro achieves state-of-the-art results on three benchmark
datasets, with improvements of 1.0% on VidSTG-Declarative and 3.0% on
HCSTVG-v1.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 03:06:23 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Apr 2025 08:57:56 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Garg",
"Aaryan",
""
],
[
"Kumar",
"Akash",
""
],
[
"Rawat",
"Yogesh S",
""
]
] | TITLE: STPro: Spatial and Temporal Progressive Learning for Weakly Supervised
Spatio-Temporal Grounding
ABSTRACT: In this work we study Weakly Supervised Spatio-Temporal Video Grounding
(WSTVG), a challenging task of localizing subjects spatio-temporally in videos
using only textual queries and no bounding box supervision. Inspired by recent
advances in vision-language foundation models, we investigate their utility for
WSTVG, leveraging their zero-shot grounding capabilities. However, we find that
a simple adaptation lacks essential spatio-temporal grounding abilities. To
bridge this gap, we introduce Tubelet Referral Grounding (TRG), which connects
textual queries to tubelets to enable spatio-temporal predictions. Despite its
promise, TRG struggles with compositional action understanding and dense scene
scenarios. To address these limitations, we propose STPro, a novel progressive
learning framework with two key modules: (1) Sub-Action Temporal Curriculum
Learning (SA-TCL), which incrementally builds compositional action
understanding, and (2) Congestion-Guided Spatial Curriculum Learning (CG-SCL),
which adapts the model to complex scenes by spatially increasing task
difficulty. STPro achieves state-of-the-art results on three benchmark
datasets, with improvements of 1.0% on VidSTG-Declarative and 3.0% on
HCSTVG-v1.
|
2503.01190 | Jonathan Fhima | Jonathan Fhima, Jan Van Eijgen, Lennert Beeckmans, Thomas Jacobs, Moti
Freiman, Luis Filipe Nakayama, Ingeborg Stalmans, Chaim Baskin and Joachim A.
Behar | Enhancing Retinal Vessel Segmentation Generalization via Layout-Aware
Generative Modelling | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Generalization in medical segmentation models is challenging due to limited
annotated datasets and imaging variability. To address this, we propose Retinal
Layout-Aware Diffusion (RLAD), a novel diffusion-based framework for generating
controllable layout-aware images. RLAD conditions image generation on multiple
key layout components extracted from real images, ensuring high structural
fidelity while enabling diversity in other components. Applied to retinal
fundus imaging, we augmented the training datasets by synthesizing paired
retinal images and vessel segmentations conditioned on extracted blood vessels
from real images, while varying other layout components such as lesions and the
optic disc. Experiments demonstrated that RLAD-generated data improved
generalization in retinal vessel segmentation by up to 8.1%. Furthermore, we
present REYIA, a comprehensive dataset comprising 586 manually segmented
retinal images. To foster reproducibility and drive innovation, both our code
and dataset will be made publicly accessible.
| [
{
"version": "v1",
"created": "Mon, 3 Mar 2025 05:31:52 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Apr 2025 10:17:40 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Fhima",
"Jonathan",
""
],
[
"Van Eijgen",
"Jan",
""
],
[
"Beeckmans",
"Lennert",
""
],
[
"Jacobs",
"Thomas",
""
],
[
"Freiman",
"Moti",
""
],
[
"Nakayama",
"Luis Filipe",
""
],
[
"Stalmans",
"Ingeborg",
""
],
[
"Baskin",
"Chaim",
""
],
[
"Behar",
"Joachim A.",
""
]
] | TITLE: Enhancing Retinal Vessel Segmentation Generalization via Layout-Aware
Generative Modelling
ABSTRACT: Generalization in medical segmentation models is challenging due to limited
annotated datasets and imaging variability. To address this, we propose Retinal
Layout-Aware Diffusion (RLAD), a novel diffusion-based framework for generating
controllable layout-aware images. RLAD conditions image generation on multiple
key layout components extracted from real images, ensuring high structural
fidelity while enabling diversity in other components. Applied to retinal
fundus imaging, we augmented the training datasets by synthesizing paired
retinal images and vessel segmentations conditioned on extracted blood vessels
from real images, while varying other layout components such as lesions and the
optic disc. Experiments demonstrated that RLAD-generated data improved
generalization in retinal vessel segmentation by up to 8.1%. Furthermore, we
present REYIA, a comprehensive dataset comprising 586 manually segmented
retinal images. To foster reproducibility and drive innovation, both our code
and dataset will be made publicly accessible.
|
2503.02876 | Dmitry Nechaev | Dmitry Nechaev, Alexey Pchelnikov, Ekaterina Ivanova | SPIDER: A Comprehensive Multi-Organ Supervised Pathology Dataset and
Baseline Models | null | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Advancing AI in computational pathology requires large, high-quality, and
diverse datasets, yet existing public datasets are often limited in organ
diversity, class coverage, or annotation quality. To bridge this gap, we
introduce SPIDER (Supervised Pathology Image-DEscription Repository), the
largest publicly available patch-level dataset covering multiple organ types,
including Skin, Colorectal, Thorax, and Breast with comprehensive class
coverage for each organ. SPIDER provides high-quality annotations verified by
expert pathologists and includes surrounding context patches, which enhance
classification performance by providing spatial context.
Alongside the dataset, we present baseline models trained on SPIDER using the
Hibou-L foundation model as a feature extractor combined with an
attention-based classification head. The models achieve state-of-the-art
performance across multiple tissue categories and serve as strong benchmarks
for future digital pathology research. Beyond patch classification, the model
enables rapid identification of significant areas, quantitative tissue metrics,
and establishes a foundation for multimodal approaches.
Both the dataset and trained models are publicly available to advance
research, reproducibility, and AI-driven pathology development. Access them at:
https://github.com/HistAI/SPIDER
| [
{
"version": "v1",
"created": "Tue, 4 Mar 2025 18:57:12 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 12:20:28 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Nechaev",
"Dmitry",
""
],
[
"Pchelnikov",
"Alexey",
""
],
[
"Ivanova",
"Ekaterina",
""
]
] | TITLE: SPIDER: A Comprehensive Multi-Organ Supervised Pathology Dataset and
Baseline Models
ABSTRACT: Advancing AI in computational pathology requires large, high-quality, and
diverse datasets, yet existing public datasets are often limited in organ
diversity, class coverage, or annotation quality. To bridge this gap, we
introduce SPIDER (Supervised Pathology Image-DEscription Repository), the
largest publicly available patch-level dataset covering multiple organ types,
including Skin, Colorectal, Thorax, and Breast with comprehensive class
coverage for each organ. SPIDER provides high-quality annotations verified by
expert pathologists and includes surrounding context patches, which enhance
classification performance by providing spatial context.
Alongside the dataset, we present baseline models trained on SPIDER using the
Hibou-L foundation model as a feature extractor combined with an
attention-based classification head. The models achieve state-of-the-art
performance across multiple tissue categories and serve as strong benchmarks
for future digital pathology research. Beyond patch classification, the model
enables rapid identification of significant areas, quantitative tissue metrics,
and establishes a foundation for multimodal approaches.
Both the dataset and trained models are publicly available to advance
research, reproducibility, and AI-driven pathology development. Access them at:
https://github.com/HistAI/SPIDER
|
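To make the "frozen foundation-model features plus attention-based classification head" recipe in the record above concrete, here is a minimal PyTorch sketch; the embedding size, context-patch count, and pooling details are assumptions, not the released SPIDER baseline.

# Hedged sketch: attention pooling over frozen foundation-model embeddings of a
# centre patch and its surrounding context patches; sizes are illustrative only.
import torch
import torch.nn as nn

class AttentionHead(nn.Module):
    def __init__(self, embed_dim: int = 1024, num_classes: int = 10):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(embed_dim, 256), nn.Tanh(), nn.Linear(256, 1))
        self.classifier = nn.Linear(embed_dim, num_classes)

    def forward(self, patch_embeddings: torch.Tensor) -> torch.Tensor:
        # patch_embeddings: (batch, n_patches, embed_dim); index 0 is the centre
        # patch, the rest are the surrounding spatial-context patches.
        weights = torch.softmax(self.score(patch_embeddings), dim=1)   # (B, N, 1)
        pooled = (weights * patch_embeddings).sum(dim=1)               # (B, D)
        return self.classifier(pooled)

# Usage with pre-extracted (frozen) features, e.g. from a pathology foundation model:
feats = torch.randn(4, 9, 1024)     # 4 samples, centre patch + 8 context patches
logits = AttentionHead()(feats)     # (4, num_classes)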
2503.03222 | Zhumei Wang | Zhumei Wang, Zechen Hu, Ruoxi Guo, Huaijin Pi, Ziyong Feng, Sida Peng,
Xiaowei Zhou | Mocap-2-to-3: Lifting 2D Diffusion-Based Pretrained Models for 3D Motion
Capture | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recovering absolute poses in the world coordinate system from monocular views
presents significant challenges. Two primary issues arise in this context.
Firstly, existing methods rely on 3D motion data for training, which requires
collection in limited environments. Acquiring such 3D labels for new actions in
a timely manner is impractical, severely restricting the model's generalization
capabilities. In contrast, 2D poses are far more accessible and easier to
obtain. Secondly, estimating a person's absolute position in metric space from
a single viewpoint is inherently more complex. To address these challenges, we
introduce Mocap-2-to-3, a novel framework that decomposes intricate 3D motions
into 2D poses, leveraging 2D data to enhance 3D motion reconstruction in
diverse scenarios and accurately predict absolute positions in the world
coordinate system. We initially pretrain a single-view diffusion model with
extensive 2D data, followed by fine-tuning a multi-view diffusion model for
view consistency using publicly available 3D data. This strategy facilitates
the effective use of large-scale 2D data. Additionally, we propose an
innovative human motion representation that decouples local actions from global
movements and encodes geometric priors of the ground, ensuring the generative
model learns accurate motion priors from 2D data. During inference, this allows
for the gradual recovery of global movements, resulting in more plausible
positioning. We evaluate our model's performance on real-world datasets,
demonstrating superior accuracy in motion and absolute human positioning
compared to state-of-the-art methods, along with enhanced generalization and
scalability. Our code will be made publicly available.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 06:32:49 GMT"
},
{
"version": "v2",
"created": "Thu, 6 Mar 2025 14:32:49 GMT"
},
{
"version": "v3",
"created": "Sun, 6 Apr 2025 13:54:00 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wang",
"Zhumei",
""
],
[
"Hu",
"Zechen",
""
],
[
"Guo",
"Ruoxi",
""
],
[
"Pi",
"Huaijin",
""
],
[
"Feng",
"Ziyong",
""
],
[
"Peng",
"Sida",
""
],
[
"Zhou",
"Xiaowei",
""
]
] | TITLE: Mocap-2-to-3: Lifting 2D Diffusion-Based Pretrained Models for 3D Motion
Capture
ABSTRACT: Recovering absolute poses in the world coordinate system from monocular views
presents significant challenges. Two primary issues arise in this context.
Firstly, existing methods rely on 3D motion data for training, which requires
collection in limited environments. Acquiring such 3D labels for new actions in
a timely manner is impractical, severely restricting the model's generalization
capabilities. In contrast, 2D poses are far more accessible and easier to
obtain. Secondly, estimating a person's absolute position in metric space from
a single viewpoint is inherently more complex. To address these challenges, we
introduce Mocap-2-to-3, a novel framework that decomposes intricate 3D motions
into 2D poses, leveraging 2D data to enhance 3D motion reconstruction in
diverse scenarios and accurately predict absolute positions in the world
coordinate system. We initially pretrain a single-view diffusion model with
extensive 2D data, followed by fine-tuning a multi-view diffusion model for
view consistency using publicly available 3D data. This strategy facilitates
the effective use of large-scale 2D data. Additionally, we propose an
innovative human motion representation that decouples local actions from global
movements and encodes geometric priors of the ground, ensuring the generative
model learns accurate motion priors from 2D data. During inference, this allows
for the gradual recovery of global movements, resulting in more plausible
positioning. We evaluate our model's performance on real-world datasets,
demonstrating superior accuracy in motion and absolute human positioning
compared to state-of-the-art methods, along with enhanced generalization and
scalability. Our code will be made publicly available.
|
2503.03883 | Jingyun Chen | Jingyun Chen, Yading Yuan | Decentralized Personalization for Federated Medical Image Segmentation
via Gossip Contrastive Mutual Learning | Accepted by IEEE Transactions on Medical Imaging, Open-source code
at: https://github.com/CUMC-Yuan-Lab/GCML | null | 10.1109/TMI.2025.3549292 | null | cs.DC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Federated Learning (FL) presents a promising avenue for collaborative model
training among medical centers, facilitating knowledge exchange without
compromising data privacy. However, vanilla FL is prone to server failures and
rarely achieves optimal performance on all participating sites due to
heterogeneous data distributions among them. To overcome these challenges, we
propose Gossip Contrastive Mutual Learning (GCML), a unified framework to
optimize personalized models in a decentralized environment, where Gossip
Protocol is employed for flexible and robust peer-to-peer communication. To
make efficient and reliable knowledge exchange in each communication without
the global knowledge across all the sites, we introduce deep contrast mutual
learning (DCML), a simple yet effective scheme to encourage knowledge transfer
between the incoming and local models through collaborative training on local
data. By integrating DCML with other efforts to optimize site-specific models
by leveraging useful information from peers, we evaluated the performance and
efficiency of the proposed method on three publicly available datasets with
different segmentation tasks. Our extensive experimental results show that the
proposed GCML framework outperformed both centralized and decentralized FL
methods with significantly reduced communication overhead, indicating its
potential for real-world deployment. Upon the acceptance of manuscript, the
code will be available at: https://github.com/CUMC-Yuan-Lab/GCML.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 20:30:03 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 00:57:37 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Chen",
"Jingyun",
""
],
[
"Yuan",
"Yading",
""
]
] | TITLE: Decentralized Personalization for Federated Medical Image Segmentation
via Gossip Contrastive Mutual Learning
ABSTRACT: Federated Learning (FL) presents a promising avenue for collaborative model
training among medical centers, facilitating knowledge exchange without
compromising data privacy. However, vanilla FL is prone to server failures and
rarely achieves optimal performance on all participating sites due to
heterogeneous data distributions among them. To overcome these challenges, we
propose Gossip Contrastive Mutual Learning (GCML), a unified framework to
optimize personalized models in a decentralized environment, where Gossip
Protocol is employed for flexible and robust peer-to-peer communication. To
make efficient and reliable knowledge exchange in each communication without
the global knowledge across all the sites, we introduce deep contrast mutual
learning (DCML), a simple yet effective scheme to encourage knowledge transfer
between the incoming and local models through collaborative training on local
data. By integrating DCML with other efforts to optimize site-specific models
by leveraging useful information from peers, we evaluated the performance and
efficiency of the proposed method on three publicly available datasets with
different segmentation tasks. Our extensive experimental results show that the
proposed GCML framework outperformed both centralized and decentralized FL
methods with significantly reduced communication overhead, indicating its
potential for real-world deployment. Upon the acceptance of manuscript, the
code will be available at: https://github.com/CUMC-Yuan-Lab/GCML.
|
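A minimal sketch of one decentralized round in the spirit of the record above: a peer trains its local model on local data with a task loss plus a mutual-learning term toward the model received via gossip. A generic KL-based term stands in here for the paper's DCML scheme.

# Hedged sketch of one local round after a gossip exchange; not the GCML code.
import torch
import torch.nn.functional as F

def local_round(local_model, incoming_model, loader, optimizer,
                temperature: float = 2.0, alpha: float = 0.5):
    """incoming_model: the peer model received via the gossip protocol."""
    incoming_model.eval()
    local_model.train()
    for x, y in loader:
        optimizer.zero_grad()
        logits = local_model(x)
        with torch.no_grad():
            peer_logits = incoming_model(x)
        task_loss = F.cross_entropy(logits, y)
        # Mutual-learning term: pull local predictions toward the incoming peer's.
        kd_loss = F.kl_div(
            F.log_softmax(logits / temperature, dim=1),
            F.softmax(peer_logits / temperature, dim=1),
            reduction="batchmean",
        ) * temperature ** 2
        (task_loss + alpha * kd_loss).backward()
        optimizer.step()
    return local_model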
2503.04839 | Yanshu Li | Yanshu Li | Advancing Multimodal In-Context Learning in Large Vision-Language Models
with Task-aware Demonstrations | Accepted by ICLR 2025 Workshop on Reasoning and Planning for LLMs, 25
pages, 13 tables | null | null | null | cs.CV cs.AI cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal in-context learning (ICL) has emerged as a key capability of Large
Vision-Language Models (LVLMs), driven by their increasing scale and
applicability. Despite its promise, effective ICL in the multimodal setting
remains challenging due to the inherent complexity of image-text inputs and the
high sensitivity of ICL performance to input configurations. In this work, we
shed light on the core mechanism underlying multimodal ICL, identifying task
mapping as a crucial factor in configuring robust in-context demonstration
(ICD) sequences. Building on these insights, we propose \textit{SabER}, a
lightweight yet powerful decoder-only transformer equipped with task-aware
attention, which intelligently selects and arranges ICDs from a demonstration
library in an autoregressive fashion. This design enables fine-grained feature
extraction and cross-modal reasoning, iteratively refining task mapping to
generate high-quality ICD sequences. Through extensive experiments covering
five LVLMs and nine benchmark datasets, SabER not only demonstrates strong
empirical performance, but also provides deeper understanding of how task
semantics interact with multimodal ICDs. Our findings highlight the importance
of principled ICD sequence configuration and open new avenues to enhance
multimodal ICL in a wide range of real-world scenarios.
| [
{
"version": "v1",
"created": "Wed, 5 Mar 2025 16:33:10 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Apr 2025 20:41:41 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Li",
"Yanshu",
""
]
] | TITLE: Advancing Multimodal In-Context Learning in Large Vision-Language Models
with Task-aware Demonstrations
ABSTRACT: Multimodal in-context learning (ICL) has emerged as a key capability of Large
Vision-Language Models (LVLMs), driven by their increasing scale and
applicability. Despite its promise, effective ICL in the multimodal setting
remains challenging due to the inherent complexity of image-text inputs and the
high sensitivity of ICL performance to input configurations. In this work, we
shed light on the core mechanism underlying multimodal ICL, identifying task
mapping as a crucial factor in configuring robust in-context demonstration
(ICD) sequences. Building on these insights, we propose \textit{SabER}, a
lightweight yet powerful decoder-only transformer equipped with task-aware
attention, which intelligently selects and arranges ICDs from a demonstration
library in an autoregressive fashion. This design enables fine-grained feature
extraction and cross-modal reasoning, iteratively refining task mapping to
generate high-quality ICD sequences. Through extensive experiments covering
five LVLMs and nine benchmark datasets, SabER not only demonstrates strong
empirical performance, but also provides deeper understanding of how task
semantics interact with multimodal ICDs. Our findings highlight the importance
of principled ICD sequence configuration and open new avenues to enhance
multimodal ICL in a wide range of real-world scenarios.
|
2503.04918 | Aysegul Ucar | Aysegul Ucar, Soumyadeep Ro, Sanapala Satwika, Pamarthi Yasoda
Gayathri, and Mohmmad Ghaith Balsha | Fine-Tuning Transformer-Based Vision-Language Models for Robust Object
Detection in Unstructured Environments | 22 pages, 13 Figures, 6 Tables | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Vision-Language Models (VLMs) have emerged as powerful tools in artificial
intelligence, capable of integrating textual and visual data for a unified
understanding of complex scenes. While models such as Florence2, built on
transformer architectures, have shown promise across general tasks, their
performance in object detection within unstructured or cluttered environments
remains underexplored. In this study, we fine-tuned the Florence2 model for
object detection tasks in non-constructed, complex environments. A
comprehensive experimental framework was established involving multiple
hardware configurations (NVIDIA T4, L4, and A100 GPUs), optimizers (AdamW,
SGD), and varied hyperparameters including learning rates and LoRA (Low-Rank
Adaptation) setups. Model training and evaluation were conducted on challenging
datasets representative of real-world, disordered settings. The optimized
Florence2 models exhibited significant improvements in object detection
accuracy, with Mean Average Precision (mAP) metrics approaching or matching
those of established models such as YOLOv8, YOLOv9, and YOLOv10. The
integration of LoRA and careful fine-tuning of transformer layers contributed
notably to these gains. Our findings highlight the adaptability of
transformer-based VLMs like Florence2 for domain-specific tasks, particularly
in visually complex environments. The study underscores the potential of
fine-tuned VLMs to rival traditional convolution-based detectors, offering a
flexible and scalable approach for advanced vision applications in real-world,
unstructured settings.
| [
{
"version": "v1",
"created": "Thu, 6 Mar 2025 19:31:51 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 19:00:17 GMT"
},
{
"version": "v3",
"created": "Sun, 6 Apr 2025 17:48:14 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Ucar",
"Aysegul",
""
],
[
"Ro",
"Soumyadeep",
""
],
[
"Satwika",
"Sanapala",
""
],
[
"Gayathri",
"Pamarthi Yasoda",
""
],
[
"Balsha",
"Mohmmad Ghaith",
""
]
] | TITLE: Fine-Tuning Transformer-Based Vision-Language Models for Robust Object
Detection in Unstructured Environments
ABSTRACT: Vision-Language Models (VLMs) have emerged as powerful tools in artificial
intelligence, capable of integrating textual and visual data for a unified
understanding of complex scenes. While models such as Florence2, built on
transformer architectures, have shown promise across general tasks, their
performance in object detection within unstructured or cluttered environments
remains underexplored. In this study, we fine-tuned the Florence2 model for
object detection tasks in non-constructed, complex environments. A
comprehensive experimental framework was established involving multiple
hardware configurations (NVIDIA T4, L4, and A100 GPUs), optimizers (AdamW,
SGD), and varied hyperparameters including learning rates and LoRA (Low-Rank
Adaptation) setups. Model training and evaluation were conducted on challenging
datasets representative of real-world, disordered settings. The optimized
Florence2 models exhibited significant improvements in object detection
accuracy, with Mean Average Precision (mAP) metrics approaching or matching
those of established models such as YOLOv8, YOLOv9, and YOLOv10. The
integration of LoRA and careful fine-tuning of transformer layers contributed
notably to these gains. Our findings highlight the adaptability of
transformer-based VLMs like Florence2 for domain-specific tasks, particularly
in visually complex environments. The study underscores the potential of
fine-tuned VLMs to rival traditional convolution-based detectors, offering a
flexible and scalable approach for advanced vision applications in real-world,
unstructured settings.
|
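A minimal LoRA setup of the kind the record above evaluates can be written with the Hugging Face peft library; the checkpoint identifier and target module names below are assumptions and would need to match the actual Florence2 implementation.

# Hedged sketch of a LoRA fine-tuning setup; checkpoint id and target_modules
# are assumptions that must be checked against the real model.
import torch
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained(
    "microsoft/Florence-2-base",   # assumed Hugging Face checkpoint name
    trust_remote_code=True,
)

lora_config = LoraConfig(
    r=8,                           # low-rank adapter dimension
    lora_alpha=16,                 # adapter scaling factor
    lora_dropout=0.05,
    bias="none",
    target_modules=["q_proj", "v_proj"],  # assumed attention projection names
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()          # only the LoRA adapters are trainable

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)  # AdamW, as in the study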
2503.05794 | Yiming Li | Yiming Li, Kaiying Yan, Shuo Shao, Tongqing Zhai, Shu-Tao Xia, Zhan
Qin, Dacheng Tao | CBW: Towards Dataset Ownership Verification for Speaker Verification via
Clustering-based Backdoor Watermarking | 14 pages. The journal extension of our ICASSP'21 paper
(arXiv:2010.11607) | null | null | null | cs.CR cs.AI cs.LG cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | With the increasing adoption of deep learning in speaker verification,
large-scale speech datasets have become valuable intellectual property. To
audit and prevent the unauthorized usage of these valuable released datasets,
especially in commercial or open-source scenarios, we propose a novel dataset
ownership verification method. Our approach introduces a clustering-based
backdoor watermark (CBW), enabling dataset owners to determine whether a
suspicious third-party model has been trained on a protected dataset under a
black-box setting. The CBW method consists of two key stages: dataset
watermarking and ownership verification. During watermarking, we implant
multiple trigger patterns in the dataset to make similar samples (measured by
their feature similarities) close to the same trigger while dissimilar samples
are near different triggers. This ensures that any model trained on the
watermarked dataset exhibits specific misclassification behaviors when exposed
to trigger-embedded inputs. To verify dataset ownership, we design a
hypothesis-test-based framework that statistically evaluates whether a
suspicious model exhibits the expected backdoor behavior. We conduct extensive
experiments on benchmark datasets, verifying the effectiveness and robustness
of our method against potential adaptive attacks. The code for reproducing main
experiments is available at https://github.com/Radiant0726/CBW
| [
{
"version": "v1",
"created": "Sun, 2 Mar 2025 02:02:57 GMT"
},
{
"version": "v2",
"created": "Tue, 11 Mar 2025 00:44:01 GMT"
},
{
"version": "v3",
"created": "Sat, 5 Apr 2025 15:05:33 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Li",
"Yiming",
""
],
[
"Yan",
"Kaiying",
""
],
[
"Shao",
"Shuo",
""
],
[
"Zhai",
"Tongqing",
""
],
[
"Xia",
"Shu-Tao",
""
],
[
"Qin",
"Zhan",
""
],
[
"Tao",
"Dacheng",
""
]
] | TITLE: CBW: Towards Dataset Ownership Verification for Speaker Verification via
Clustering-based Backdoor Watermarking
ABSTRACT: With the increasing adoption of deep learning in speaker verification,
large-scale speech datasets have become valuable intellectual property. To
audit and prevent the unauthorized usage of these valuable released datasets,
especially in commercial or open-source scenarios, we propose a novel dataset
ownership verification method. Our approach introduces a clustering-based
backdoor watermark (CBW), enabling dataset owners to determine whether a
suspicious third-party model has been trained on a protected dataset under a
black-box setting. The CBW method consists of two key stages: dataset
watermarking and ownership verification. During watermarking, we implant
multiple trigger patterns in the dataset to make similar samples (measured by
their feature similarities) close to the same trigger while dissimilar samples
are near different triggers. This ensures that any model trained on the
watermarked dataset exhibits specific misclassification behaviors when exposed
to trigger-embedded inputs. To verify dataset ownership, we design a
hypothesis-test-based framework that statistically evaluates whether a
suspicious model exhibits the expected backdoor behavior. We conduct extensive
experiments on benchmark datasets, verifying the effectiveness and robustness
of our method against potential adaptive attacks. The code for reproducing main
experiments is available at https://github.com/Radiant0726/CBW
|
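The ownership-verification step in the record above can be illustrated with a simple one-sided hypothesis test: if trigger-embedded probes elicit the watermark-specific behaviour far more often than a benign false-trigger rate would explain, dataset use is claimed. This is a hedged stand-in, not the CBW test itself.

# Hedged stand-in for a hypothesis-test-based verification step.
import numpy as np
from scipy.stats import binomtest

def verify_ownership(probe_hits: np.ndarray, p0: float = 0.05, alpha: float = 0.01) -> bool:
    """probe_hits: boolean array, True where a trigger-embedded probe produced the
    watermark-specific behaviour expected from a model trained on the protected data."""
    k, n = int(probe_hits.sum()), len(probe_hits)
    result = binomtest(k, n, p=p0, alternative="greater")
    return result.pvalue < alpha   # True => claim that the protected dataset was used

# Example: 37 of 50 probes behave as the watermark predicts.
print(verify_ownership(np.array([True] * 37 + [False] * 13)))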
2503.07202 | Bingchen Liu | Bingchen Liu, Jingchen Li, Yuanyuan Fang, Xin Li | A Zero-shot Learning Method Based on Large Language Models for
Multi-modal Knowledge Graph Embedding | null | null | null | null | cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Zero-shot learning (ZL) is crucial for tasks involving unseen categories,
such as natural language processing, image classification, and cross-lingual
transfer. Current applications often fail to accurately infer and handle new
relations or entities involving unseen categories, severely limiting their
scalability and practicality in open-domain scenarios. ZL faces the challenge
of effectively transferring semantic information of unseen categories in
multi-modal knowledge graph (MMKG) embedding representation learning. In this
paper, we propose ZSLLM, a framework for zero-shot embedding learning of MMKGs
using large language models (LLMs). We leverage textual modality information of
unseen categories as prompts to fully utilize the reasoning capabilities of
LLMs, enabling semantic information transfer across different modalities for
unseen categories. Through model-based learning, the embedding representation
of unseen categories in MMKG is enhanced. Extensive experiments conducted on
multiple real-world datasets demonstrate the superiority of our approach
compared to state-of-the-art methods.
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 11:38:21 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 07:22:25 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Liu",
"Bingchen",
""
],
[
"Li",
"Jingchen",
""
],
[
"Fang",
"Yuanyuan",
""
],
[
"Li",
"Xin",
""
]
] | TITLE: A Zero-shot Learning Method Based on Large Language Models for
Multi-modal Knowledge Graph Embedding
ABSTRACT: Zero-shot learning (ZL) is crucial for tasks involving unseen categories,
such as natural language processing, image classification, and cross-lingual
transfer. Current applications often fail to accurately infer and handle new
relations or entities involving unseen categories, severely limiting their
scalability and practicality in open-domain scenarios. ZL faces the challenge
of effectively transferring semantic information of unseen categories in
multi-modal knowledge graph (MMKG) embedding representation learning. In this
paper, we propose ZSLLM, a framework for zero-shot embedding learning of MMKGs
using large language models (LLMs). We leverage textual modality information of
unseen categories as prompts to fully utilize the reasoning capabilities of
LLMs, enabling semantic information transfer across different modalities for
unseen categories. Through model-based learning, the embedding representation
of unseen categories in MMKG is enhanced. Extensive experiments conducted on
multiple real-world datasets demonstrate the superiority of our approach
compared to state-of-the-art methods.
|
2503.07591 | Bardia Safaei | Bardia Safaei, Faizan Siddiqui, Jiacong Xu, Vishal M. Patel, Shao-Yuan
Lo | Filter Images First, Generate Instructions Later: Pre-Instruction Data
Selection for Visual Instruction Tuning | Accepted at CVPR 2025 (Highlight) | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual instruction tuning (VIT) for large vision-language models (LVLMs)
requires training on expansive datasets of image-instruction pairs, which can
be costly. Recent efforts in VIT data selection aim to select a small subset of
high-quality image-instruction pairs, reducing VIT runtime while maintaining
performance comparable to full-scale training. However, a major challenge often
overlooked is that generating instructions from unlabeled images for VIT is
highly expensive. Most existing VIT datasets rely heavily on human annotations
or paid services like the GPT API, which limits users with constrained
resources from creating VIT datasets for custom applications. To address this,
we introduce Pre-Instruction Data Selection (PreSel), a more practical data
selection paradigm that directly selects the most beneficial unlabeled images
and generates instructions only for the selected images. PreSel first estimates
the relative importance of each vision task within VIT datasets to derive
task-wise sampling budgets. It then clusters image features within each task,
selecting the most representative images with the budget. This approach reduces
computational overhead for both instruction generation during VIT data
formation and LVLM fine-tuning. By generating instructions for only 15% of the
images, PreSel achieves performance comparable to full-data VIT on the
LLaVA-1.5 and Vision-Flan datasets. The link to our project page:
https://bardisafa.github.io/PreSel
| [
{
"version": "v1",
"created": "Mon, 10 Mar 2025 17:55:11 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Apr 2025 15:13:01 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Safaei",
"Bardia",
""
],
[
"Siddiqui",
"Faizan",
""
],
[
"Xu",
"Jiacong",
""
],
[
"Patel",
"Vishal M.",
""
],
[
"Lo",
"Shao-Yuan",
""
]
] | TITLE: Filter Images First, Generate Instructions Later: Pre-Instruction Data
Selection for Visual Instruction Tuning
ABSTRACT: Visual instruction tuning (VIT) for large vision-language models (LVLMs)
requires training on expansive datasets of image-instruction pairs, which can
be costly. Recent efforts in VIT data selection aim to select a small subset of
high-quality image-instruction pairs, reducing VIT runtime while maintaining
performance comparable to full-scale training. However, a major challenge often
overlooked is that generating instructions from unlabeled images for VIT is
highly expensive. Most existing VIT datasets rely heavily on human annotations
or paid services like the GPT API, which limits users with constrained
resources from creating VIT datasets for custom applications. To address this,
we introduce Pre-Instruction Data Selection (PreSel), a more practical data
selection paradigm that directly selects the most beneficial unlabeled images
and generates instructions only for the selected images. PreSel first estimates
the relative importance of each vision task within VIT datasets to derive
task-wise sampling budgets. It then clusters image features within each task,
selecting the most representative images with the budget. This approach reduces
computational overhead for both instruction generation during VIT data
formation and LVLM fine-tuning. By generating instructions for only 15% of the
images, PreSel achieves performance comparable to full-data VIT on the
LLaVA-1.5 and Vision-Flan datasets. The link to our project page:
https://bardisafa.github.io/PreSel
|
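The budgeted, cluster-based selection idea in the record above can be sketched with plain k-means per task, keeping the image nearest each cluster centre; the budgets and feature source are assumptions, and this is not the released PreSel implementation.

# Hedged sketch of budgeted, cluster-based selection of unlabeled images.
import numpy as np
from sklearn.cluster import KMeans

def select_images(features_per_task: dict, task_budgets: dict, seed: int = 0) -> dict:
    """features_per_task: task -> (n_images, dim) array of unlabeled-image features.
    task_budgets: task -> number of images to keep for that task."""
    selected = {}
    for task, feats in features_per_task.items():
        budget = min(task_budgets[task], len(feats))
        km = KMeans(n_clusters=budget, n_init=10, random_state=seed).fit(feats)
        # Keep the image closest to each cluster centre as its representative.
        dists = np.linalg.norm(feats[:, None, :] - km.cluster_centers_[None], axis=-1)
        selected[task] = np.unique(dists.argmin(axis=0))
    return selected

budgets = {"vqa": 3, "captioning": 2}            # e.g. derived from task importance
feats = {t: np.random.rand(100, 512) for t in budgets}
print({t: len(idx) for t, idx in select_images(feats, budgets).items()})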
2503.09334 | Adel ElZemity | Adel ElZemity, Budi Arief and Shujun Li | CyberLLMInstruct: A New Dataset for Analysing Safety of Fine-Tuned LLMs
Using Cyber Security Data | null | null | null | null | cs.CR cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The integration of large language models (LLMs) into cyber security
applications presents significant opportunities, such as enhancing threat
analysis and malware detection, but can also introduce critical risks and
safety concerns, including personal data leakage and automated generation of
new malware. To address these challenges, we developed CyberLLMInstruct, a
dataset of 54,928 instruction-response pairs spanning cyber security tasks such
as malware analysis, phishing simulations, and zero-day vulnerabilities. The
dataset was constructed through a multi-stage process. This involved sourcing
data from multiple resources, filtering and structuring it into
instruction-response pairs, and aligning it with real-world scenarios to
enhance its applicability. Seven open-source LLMs were chosen to test the
usefulness of CyberLLMInstruct: Phi 3 Mini 3.8B, Mistral 7B, Qwen 2.5 7B, Llama
3 8B, Llama 3.1 8B, Gemma 2 9B, and Llama 2 70B. In our primary example, we
rigorously assess the safety of fine-tuned models using the OWASP top 10
framework, finding that fine-tuning reduces safety resilience across all tested
LLMs and every adversarial attack (e.g., the security score of Llama 3.1 8B
against prompt injection drops from 0.95 to 0.15). In our second example, we
show that these same fine-tuned models can also achieve up to 92.50 percent
accuracy on the CyberMetric benchmark. These findings highlight a trade-off
between performance and safety, showing the importance of adversarial testing
and further research into fine-tuning methodologies that can mitigate safety
risks while still improving performance across diverse datasets and domains.
The dataset creation pipeline, along with comprehensive documentation,
examples, and resources for reproducing our results, is publicly available at
https://github.com/Adelsamir01/CyberLLMInstruct.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 12:29:27 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Apr 2025 14:29:49 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"ElZemity",
"Adel",
""
],
[
"Arief",
"Budi",
""
],
[
"Li",
"Shujun",
""
]
] | TITLE: CyberLLMInstruct: A New Dataset for Analysing Safety of Fine-Tuned LLMs
Using Cyber Security Data
ABSTRACT: The integration of large language models (LLMs) into cyber security
applications presents significant opportunities, such as enhancing threat
analysis and malware detection, but can also introduce critical risks and
safety concerns, including personal data leakage and automated generation of
new malware. To address these challenges, we developed CyberLLMInstruct, a
dataset of 54,928 instruction-response pairs spanning cyber security tasks such
as malware analysis, phishing simulations, and zero-day vulnerabilities. The
dataset was constructed through a multi-stage process. This involved sourcing
data from multiple resources, filtering and structuring it into
instruction-response pairs, and aligning it with real-world scenarios to
enhance its applicability. Seven open-source LLMs were chosen to test the
usefulness of CyberLLMInstruct: Phi 3 Mini 3.8B, Mistral 7B, Qwen 2.5 7B, Llama
3 8B, Llama 3.1 8B, Gemma 2 9B, and Llama 2 70B. In our primary example, we
rigorously assess the safety of fine-tuned models using the OWASP top 10
framework, finding that fine-tuning reduces safety resilience across all tested
LLMs and every adversarial attack (e.g., the security score of Llama 3.1 8B
against prompt injection drops from 0.95 to 0.15). In our second example, we
show that these same fine-tuned models can also achieve up to 92.50 percent
accuracy on the CyberMetric benchmark. These findings highlight a trade-off
between performance and safety, showing the importance of adversarial testing
and further research into fine-tuning methodologies that can mitigate safety
risks while still improving performance across diverse datasets and domains.
The dataset creation pipeline, along with comprehensive documentation,
examples, and resources for reproducing our results, is publicly available at
https://github.com/Adelsamir01/CyberLLMInstruct.
|
2503.09906 | Pablo Peso Parada | Haaris Mehmood, Karthikeyan Saravanan, Pablo Peso Parada, David
Tuckey, Mete Ozay, Gil Ho Lee, Jungin Lee, Seokyeong Jung | ValSub: Subsampling Validation Data to Mitigate Forgetting during ASR
Personalization | Accepted at ICASSP 2025 | null | null | null | eess.AS cs.SD | http://creativecommons.org/licenses/by/4.0/ | Automatic Speech Recognition (ASR) is widely used within consumer devices
such as mobile phones. Recently, personalization or on-device model fine-tuning
has shown that adaptation of ASR models towards target user speech improves
their performance over rare words or accented speech. Despite these gains,
fine-tuning on user data (target domain) risks the personalized model to forget
knowledge about its original training distribution (source domain) i.e.
catastrophic forgetting, leading to subpar general ASR performance. A simple
and efficient approach to combat catastrophic forgetting is to measure
forgetting via a validation set that represents the source domain distribution.
However, such validation sets are large and impractical for mobile devices.
Towards this, we propose a novel method to subsample a substantially large
validation set into a smaller one while maintaining the ability to estimate
forgetting. We demonstrate the efficacy of such a dataset in mitigating
forgetting by utilizing it to dynamically determine the number of ideal
fine-tuning epochs. When measuring the deviations in per user fine-tuning
epochs against a 50x larger validation set (oracle), our method achieves a
lower mean-absolute-error (3.39) compared to randomly selected subsets of the
same size (3.78-8.65). Unlike random baselines, our method consistently tracks
the oracle's behaviour across three different forgetting thresholds.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 23:53:53 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 13:08:04 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Mehmood",
"Haaris",
""
],
[
"Saravanan",
"Karthikeyan",
""
],
[
"Parada",
"Pablo Peso",
""
],
[
"Tuckey",
"David",
""
],
[
"Ozay",
"Mete",
""
],
[
"Lee",
"Gil Ho",
""
],
[
"Lee",
"Jungin",
""
],
[
"Jung",
"Seokyeong",
""
]
] | TITLE: ValSub: Subsampling Validation Data to Mitigate Forgetting during ASR
Personalization
ABSTRACT: Automatic Speech Recognition (ASR) is widely used within consumer devices
such as mobile phones. Recently, personalization or on-device model fine-tuning
has shown that adaptation of ASR models towards target user speech improves
their performance over rare words or accented speech. Despite these gains,
fine-tuning on user data (target domain) risks the personalized model to forget
knowledge about its original training distribution (source domain) i.e.
catastrophic forgetting, leading to subpar general ASR performance. A simple
and efficient approach to combat catastrophic forgetting is to measure
forgetting via a validation set that represents the source domain distribution.
However, such validation sets are large and impractical for mobile devices.
Towards this, we propose a novel method to subsample a substantially large
validation set into a smaller one while maintaining the ability to estimate
forgetting. We demonstrate the efficacy of such a dataset in mitigating
forgetting by utilizing it to dynamically determine the number of ideal
fine-tuning epochs. When measuring the deviations in per user fine-tuning
epochs against a 50x larger validation set (oracle), our method achieves a
lower mean-absolute-error (3.39) compared to randomly selected subsets of the
same size (3.78-8.65). Unlike random baselines, our method consistently tracks
the oracle's behaviour across three different forgetting thresholds.
|
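The usage pattern described in the record above, tracking source-domain forgetting on a small proxy validation set to decide when to stop personalization, can be sketched as follows; the random subsample here merely stands in for the paper's subsampling method.

# Hedged sketch: early-stop on-device personalization when estimated forgetting
# (WER increase on a small proxy validation set) exceeds a budget.
import random

def personalize_with_early_stop(model, user_batches, full_val, train_step, wer,
                                proxy_size=200, max_epochs=10, threshold=0.02, seed=0):
    """full_val: list of source-domain validation utterances; wer(model, data) -> float."""
    rng = random.Random(seed)
    proxy_val = rng.sample(full_val, min(proxy_size, len(full_val)))
    base_wer = wer(model, proxy_val)       # source-domain reference before tuning
    for epoch in range(1, max_epochs + 1):
        for batch in user_batches:
            train_step(model, batch)       # fine-tune on the target user's speech
        if wer(model, proxy_val) - base_wer > threshold:
            return epoch - 1               # last epoch before forgetting exceeded budget
    return max_epochs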
2503.11963 | Zeng Zhihao | Zhihao Zeng, Ziquan Fang, Yuting Huang, Lu Chen, Yunjun Gao | A Cross-Domain Traffic Prediction Based on Federated Learning | null | null | null | null | cs.LG cs.CR | http://creativecommons.org/licenses/by/4.0/ | In this paper, we propose an effective, efficient, and privacy-aware
cross-domain traffic prediction framework, along with a novel federated
transfer paradigm, to overcome the limitations of privacy leakage risk,
cross-city data discrepancy, low data quality, and inefficient knowledge
transfer. Experiments using four datasets on three mainstream traffic
prediction tasks demonstrate the framework's superiority.
| [
{
"version": "v1",
"created": "Sat, 15 Mar 2025 02:26:24 GMT"
},
{
"version": "v2",
"created": "Sat, 5 Apr 2025 13:21:23 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Zeng",
"Zhihao",
""
],
[
"Fang",
"Ziquan",
""
],
[
"Huang",
"Yuting",
""
],
[
"Chen",
"Lu",
""
],
[
"Gao",
"Yunjun",
""
]
] | TITLE: A Cross-Domain Traffic Prediction Based on Federated Learning
ABSTRACT: In this paper, we propose an effective, efficient, and privacy-aware
cross-domain traffic prediction framework, along with a novel federated
transfer paradigm, to overcome the limitations of privacy leakage risk,
cross-city data discrepancy, low data quality, and inefficient knowledge
transfer. Experiments using four datasets on three mainstream traffic
prediction tasks demonstrate the framework's superiority.
|
2503.13983 | Jiankang Wang | Jiankang Wang, Zhihan zhang, Zhihang Liu, Yang Li, Jiannan Ge, Hongtao
Xie, Yongdong Zhang | SpaceVLLM: Endowing Multimodal Large Language Model with Spatio-Temporal
Video Grounding Capability | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Multimodal large language models (MLLMs) have made remarkable progress in
either temporal or spatial localization. However, they struggle to perform
spatio-temporal video grounding. This limitation stems from two major
challenges. Firstly, it is difficult to extract accurate spatio-temporal
information of each frame in the video. Secondly, the substantial number of
visual tokens makes it challenging to precisely map visual tokens of each frame
to their corresponding spatial coordinates. To address these issues, we
introduce SpaceVLLM, an MLLM endowed with spatio-temporal video grounding
capability. Specifically, we adopt a set of interleaved Spatio-Temporal Aware
Queries to capture temporal perception and dynamic spatial information.
Moreover, we propose a Query-Guided Space Decoder to establish a corresponding
connection between the queries and spatial coordinates. Additionally, due to
the lack of spatio-temporal datasets, we construct the Unified Spatio-Temporal
Grounding (Uni-STG) dataset, comprising 480K instances across three tasks. This
dataset fully exploits the potential of MLLM to simultaneously facilitate
localization in both temporal and spatial dimensions. Extensive experiments
demonstrate that SpaceVLLM achieves state-of-the-art performance across 11
benchmarks covering temporal, spatial, spatio-temporal and video understanding
tasks, highlighting the effectiveness of our approach. Our code, datasets and
model will be released.
| [
{
"version": "v1",
"created": "Tue, 18 Mar 2025 07:40:36 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 11:47:42 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wang",
"Jiankang",
""
],
[
"zhang",
"Zhihan",
""
],
[
"Liu",
"Zhihang",
""
],
[
"Li",
"Yang",
""
],
[
"Ge",
"Jiannan",
""
],
[
"Xie",
"Hongtao",
""
],
[
"Zhang",
"Yongdong",
""
]
] | TITLE: SpaceVLLM: Endowing Multimodal Large Language Model with Spatio-Temporal
Video Grounding Capability
ABSTRACT: Multimodal large language models (MLLMs) have made remarkable progress in
either temporal or spatial localization. However, they struggle to perform
spatio-temporal video grounding. This limitation stems from two major
challenges. Firstly, it is difficult to extract accurate spatio-temporal
information of each frame in the video. Secondly, the substantial number of
visual tokens makes it challenging to precisely map visual tokens of each frame
to their corresponding spatial coordinates. To address these issues, we
introduce SpaceVLLM, an MLLM endowed with spatio-temporal video grounding
capability. Specifically, we adopt a set of interleaved Spatio-Temporal Aware
Queries to capture temporal perception and dynamic spatial information.
Moreover, we propose a Query-Guided Space Decoder to establish a corresponding
connection between the queries and spatial coordinates. Additionally, due to
the lack of spatio-temporal datasets, we construct the Unified Spatio-Temporal
Grounding (Uni-STG) dataset, comprising 480K instances across three tasks. This
dataset fully exploits the potential of MLLM to simultaneously facilitate
localization in both temporal and spatial dimensions. Extensive experiments
demonstrate that SpaceVLLM achieves state-of-the-art performance across 11
benchmarks covering temporal, spatial, spatio-temporal and video understanding
tasks, highlighting the effectiveness of our approach. Our code, datasets and
model will be released.
|
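The SpaceVLLM abstract above pairs per-frame spatio-temporal queries with a query-guided decoder that maps queries to spatial coordinates. The PyTorch snippet below is only a schematic of that idea, not the released model; the layer sizes, the one-query-per-frame simplification, and the box head are assumptions made for illustration.

```python
import torch
import torch.nn as nn

class QueryGuidedSpaceDecoder(nn.Module):
    """Schematic decoder: one learnable query per frame attends to that
    frame's visual tokens and is regressed to a normalized box (cx, cy, w, h)."""

    def __init__(self, dim: int = 256, num_heads: int = 8, num_frames: int = 16):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_frames, dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.box_head = nn.Sequential(
            nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, 4), nn.Sigmoid()
        )

    def forward(self, frame_tokens: torch.Tensor) -> torch.Tensor:
        # frame_tokens: (batch, num_frames, tokens_per_frame, dim)
        b, t, n, d = frame_tokens.shape
        q = self.queries[:t].unsqueeze(0).expand(b, -1, -1)   # (b, t, d)
        q = q.reshape(b * t, 1, d)                             # one query per frame
        kv = frame_tokens.reshape(b * t, n, d)                 # that frame's tokens
        attended, _ = self.cross_attn(q, kv, kv)               # (b*t, 1, d)
        boxes = self.box_head(attended.squeeze(1))             # (b*t, 4)
        return boxes.reshape(b, t, 4)

# Example: 2 clips, 16 frames, 196 visual tokens of width 256 per frame.
decoder = QueryGuidedSpaceDecoder()
boxes = decoder(torch.randn(2, 16, 196, 256))
print(boxes.shape)  # torch.Size([2, 16, 4])
```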
2503.15514 | Jaymari Chua | Jaymari Chua, Chen Wang, Lina Yao | Superhuman Game AI Disclosure: Expertise and Context Moderate Effects on
Trust and Fairness | null | null | null | null | cs.HC cs.AI cs.CL cs.CY cs.ET | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As artificial intelligence surpasses human performance in select tasks,
disclosing superhuman capabilities poses distinct challenges for fairness,
accountability, and trust. However, the impact of such disclosures on diverse
user attitudes and behaviors remains unclear, particularly concerning potential
negative reactions like discouragement or overreliance. This paper investigates
these effects by utilizing Persona Cards: a validated, standardized set of
synthetic personas designed to simulate diverse user reactions and fairness
perspectives. We conducted an ethics board-approved study (N=32), utilizing
these personas to investigate how capability disclosure influenced behaviors
with a superhuman game AI in competitive StarCraft II scenarios. Our results
reveal transparency is double-edged: while disclosure could alleviate
suspicion, it also provoked frustration and strategic defeatism among novices
in cooperative scenarios, as well as overreliance in competitive contexts.
Experienced and competitive players interpreted disclosure as confirmation of
an unbeatable opponent, shifting to suboptimal goals. We release the Persona
Cards Dataset, including profiles, prompts, interaction logs, and protocols, to
foster reproducible research into human alignment AI design. This work
demonstrates that transparency is not a cure-all; successfully leveraging
disclosure to enhance trust and accountability requires careful tailoring to
user characteristics, domain norms, and specific fairness objectives.
| [
{
"version": "v1",
"created": "Fri, 31 Jan 2025 05:50:50 GMT"
},
{
"version": "v2",
"created": "Mon, 7 Apr 2025 17:39:10 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Chua",
"Jaymari",
""
],
[
"Wang",
"Chen",
""
],
[
"Yao",
"Lina",
""
]
] | TITLE: Superhuman Game AI Disclosure: Expertise and Context Moderate Effects on
Trust and Fairness
ABSTRACT: As artificial intelligence surpasses human performance in select tasks,
disclosing superhuman capabilities poses distinct challenges for fairness,
accountability, and trust. However, the impact of such disclosures on diverse
user attitudes and behaviors remains unclear, particularly concerning potential
negative reactions like discouragement or overreliance. This paper investigates
these effects by utilizing Persona Cards: a validated, standardized set of
synthetic personas designed to simulate diverse user reactions and fairness
perspectives. We conducted an ethics board-approved study (N=32), utilizing
these personas to investigate how capability disclosure influenced behaviors
with a superhuman game AI in competitive StarCraft II scenarios. Our results
reveal transparency is double-edged: while disclosure could alleviate
suspicion, it also provoked frustration and strategic defeatism among novices
in cooperative scenarios, as well as overreliance in competitive contexts.
Experienced and competitive players interpreted disclosure as confirmation of
an unbeatable opponent, shifting to suboptimal goals. We release the Persona
Cards Dataset, including profiles, prompts, interaction logs, and protocols, to
foster reproducible research into human alignment AI design. This work
demonstrates that transparency is not a cure-all; successfully leveraging
disclosure to enhance trust and accountability requires careful tailoring to
user characteristics, domain norms, and specific fairness objectives.
|
2503.17830 | Tushin Mallick | Tushin Mallick, Ramana Kompella, Ashish Kundu, Cristina Nita-Rotaru | Fingerprinting Implementations of Cryptographic Primitives and Protocols
that Use Post-Quantum Algorithms | null | null | null | null | cs.CR | http://creativecommons.org/publicdomain/zero/1.0/ | Fingerprinting is a technique used to create behavioral profiles of systems
to identify threats and weaknesses. When applied to cryptographic primitives
and network protocols, it can be exploited by attackers for denial-of-service,
key recovery, or downgrade attacks. In this paper, we evaluate the feasibility
of fingerprinting post-quantum (PQ) algorithms by analyzing key exchange and
digital signature primitives, their integration into protocols like TLS, SSH,
QUIC, OpenVPN, and OIDC, and their usage in SNARK libraries (pysnark and
lattice_zksnark). PQ algorithms differ from classical ones in memory and
computation demands. We examine implementations across liboqs and CIRCL
libraries on Windows, Ubuntu, and MacOS. Our experiments show that we can
distinguish classical from PQ key exchange and signatures with 98% and 100%
accuracy, respectively; identify the specific PQ algorithm used with 97% and
86% accuracy; distinguish between liboqs and CIRCL implementations with up to
100% accuracy; and identify PQ vs. hybrid implementations within CIRCL with 97%
accuracy. In protocol-level analysis, we can detect the presence and type of PQ
key exchange. SNARK libraries are distinguishable with 100% accuracy. To
demonstrate real-world applicability, we apply our fingerprinting methods to
the Tranco dataset to detect domains using PQ TLS and integrate our methods
into QUARTZ, an open-source threat analysis tool developed by Cisco.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 18:00:21 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 16:27:35 GMT"
},
{
"version": "v3",
"created": "Sun, 6 Apr 2025 20:17:18 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Mallick",
"Tushin",
""
],
[
"Kompella",
"Ramana",
""
],
[
"Kundu",
"Ashish",
""
],
[
"Nita-Rotaru",
"Cristina",
""
]
] | TITLE: Fingerprinting Implementations of Cryptographic Primitives and Protocols
that Use Post-Quantum Algorithms
ABSTRACT: Fingerprinting is a technique used to create behavioral profiles of systems
to identify threats and weaknesses. When applied to cryptographic primitives
and network protocols, it can be exploited by attackers for denial-of-service,
key recovery, or downgrade attacks. In this paper, we evaluate the feasibility
of fingerprinting post-quantum (PQ) algorithms by analyzing key exchange and
digital signature primitives, their integration into protocols like TLS, SSH,
QUIC, OpenVPN, and OIDC, and their usage in SNARK libraries (pysnark and
lattice_zksnark). PQ algorithms differ from classical ones in memory and
computation demands. We examine implementations across liboqs and CIRCL
libraries on Windows, Ubuntu, and MacOS. Our experiments show that we can
distinguish classical from PQ key exchange and signatures with 98% and 100%
accuracy, respectively; identify the specific PQ algorithm used with 97% and
86% accuracy; distinguish between liboqs and CIRCL implementations with up to
100% accuracy; and identify PQ vs. hybrid implementations within CIRCL with 97%
accuracy. In protocol-level analysis, we can detect the presence and type of PQ
key exchange. SNARK libraries are distinguishable with 100% accuracy. To
demonstrate real-world applicability, we apply our fingerprinting methods to
the Tranco dataset to detect domains using PQ TLS and integrate our methods
into QUARTZ, an open-source threat analysis tool developed by Cisco.
|
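The fingerprinting study above builds behavioral profiles from observable differences between classical and post-quantum primitives, notably the much larger key and ciphertext sizes of PQ KEMs. The snippet below is a toy illustration of that general recipe on synthetic features; the feature choices, the size/timing distributions, and the use of a random forest are our assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic stand-in features per handshake: [client_hello_bytes,
# server_reply_bytes, handshake_time_ms]. PQ key exchanges send far more
# bytes than ECDH, which is the kind of signal a fingerprinter exploits.
n = 500
classical = np.column_stack([
    rng.normal(350, 30, n), rng.normal(1200, 100, n), rng.normal(12, 2, n)])
post_quantum = np.column_stack([
    rng.normal(1550, 60, n), rng.normal(2400, 150, n), rng.normal(15, 2, n)])

X = np.vstack([classical, post_quantum])
y = np.array([0] * n + [1] * n)  # 0 = classical, 1 = post-quantum

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```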
2503.20093 | Nimesha Wickramasinghe | Nimesha Wickramasinghe, Arash Shaghaghi, Gene Tsudik, Sanjay Jha | SoK: Decoding the Enigma of Encrypted Network Traffic Classifiers | Accepted to IEEE Symposium on Security and Privacy (S&P) - 2025 | null | null | null | cs.CR cs.NI | http://creativecommons.org/licenses/by/4.0/ | The adoption of modern encryption protocols such as TLS 1.3 has significantly
challenged traditional network traffic classification (NTC) methods. As a
consequence, researchers are increasingly turning to machine learning (ML)
approaches to overcome these obstacles. In this paper, we comprehensively
analyze ML-based NTC studies, developing a taxonomy of their design choices,
benchmarking suites, and prevalent assumptions impacting classifier
performance. Through this systematization, we demonstrate widespread reliance
on outdated datasets, oversights in design choices, and the consequences of
unsubstantiated assumptions. Our evaluation reveals that the majority of
proposed encrypted traffic classifiers have mistakenly utilized unencrypted
traffic due to the use of legacy datasets. Furthermore, by conducting 348
feature occlusion experiments on state-of-the-art classifiers, we show how
oversights in NTC design choices lead to overfitting, and validate or refute
prevailing assumptions with empirical evidence. By highlighting lessons
learned, we offer strategic insights, identify emerging research directions,
and recommend best practices to support the development of real-world
applicable NTC methodologies.
| [
{
"version": "v1",
"created": "Tue, 25 Mar 2025 22:15:50 GMT"
},
{
"version": "v2",
"created": "Sun, 6 Apr 2025 14:04:52 GMT"
}
] | 2025-04-08T00:00:00 | [
[
"Wickramasinghe",
"Nimesha",
""
],
[
"Shaghaghi",
"Arash",
""
],
[
"Tsudik",
"Gene",
""
],
[
"Jha",
"Sanjay",
""
]
] | TITLE: SoK: Decoding the Enigma of Encrypted Network Traffic Classifiers
ABSTRACT: The adoption of modern encryption protocols such as TLS 1.3 has significantly
challenged traditional network traffic classification (NTC) methods. As a
consequence, researchers are increasingly turning to machine learning (ML)
approaches to overcome these obstacles. In this paper, we comprehensively
analyze ML-based NTC studies, developing a taxonomy of their design choices,
benchmarking suites, and prevalent assumptions impacting classifier
performance. Through this systematization, we demonstrate widespread reliance
on outdated datasets, oversights in design choices, and the consequences of
unsubstantiated assumptions. Our evaluation reveals that the majority of
proposed encrypted traffic classifiers have mistakenly utilized unencrypted
traffic due to the use of legacy datasets. Furthermore, by conducting 348
feature occlusion experiments on state-of-the-art classifiers, we show how
oversights in NTC design choices lead to overfitting, and validate or refute
prevailing assumptions with empirical evidence. By highlighting lessons
learned, we offer strategic insights, identify emerging research directions,
and recommend best practices to support the development of real-world
applicable NTC methodologies.
|
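The SoK above reports 348 feature occlusion experiments on state-of-the-art classifiers. One common way to run such an experiment is to mask one feature at a time at test time and record the accuracy drop; the sketch below follows that assumption on synthetic data (the mean-imputation masking and the gradient-boosting model are illustrative choices, not necessarily the paper's setup).

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Synthetic stand-in for flow features (packet sizes, inter-arrival times, ...).
X, y = make_classification(n_samples=2000, n_features=10, n_informative=4,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
base_acc = accuracy_score(y_te, clf.predict(X_te))

# Occlude one feature at a time: replace it with its training mean and
# record how much test accuracy drops relative to the unmodified baseline.
for j in range(X.shape[1]):
    X_occ = X_te.copy()
    X_occ[:, j] = X_tr[:, j].mean()
    drop = base_acc - accuracy_score(y_te, clf.predict(X_occ))
    print(f"feature {j}: accuracy drop {drop:+.3f}")
```

A large drop flags a feature the classifier leans on heavily, which is how such experiments expose overfitting to dataset-specific artifacts.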