id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2210.02672 | Salar Basiri | Salar Basiri, Alisina Bayati and Srinivasa Salapaka | Orthogonal Nonnegative Matrix Factorization with Sparsity Constraints | This revision includes: (1) a shortened title; (2) replacing the l0
equality constraint with an inequality for broader applicability; (3)
addition of Alisina Bayati as co-author for his work on the CBF-based
solution; and (4) minor edits to the paper's flow and proofs for clarity | null | null | null | cs.DS cs.IT cs.LG math.IT math.PR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This article presents a novel approach to solving the sparsity-constrained
Orthogonal Nonnegative Matrix Factorization (SCONMF) problem, which requires
decomposing a non-negative data matrix into the product of two lower-rank
non-negative matrices, X=WH, where the mixing matrix H has orthogonal rows
HH^T=I, while also satisfying an upper bound on the number of nonzero elements
in each row. By reformulating SCONMF as a capacity-constrained
facility-location problem (CCFLP), the proposed method naturally integrates
non-negativity, orthogonality, and sparsity constraints. Specifically, our
approach integrates a control-barrier-function (CBF) based framework, used for
dynamic optimal control design problems, with a maximum-entropy-principle-based
framework, used for facility location problems, to enforce these constraints
while ensuring robust factorization. Additionally, this work introduces a
quantitative approach for determining the "true" rank of W or H, equivalent to
the number of "true" features, a critical aspect in ONMF applications where
the number of features is unknown. Simulations on various datasets demonstrate
significantly improved factorizations with reconstruction errors up to 150
times lower, while strictly satisfying all constraints, outperforming
existing methods that struggle with balancing accuracy and constraint
adherence.
| [
{
"version": "v1",
"created": "Thu, 6 Oct 2022 04:30:59 GMT"
},
{
"version": "v2",
"created": "Wed, 22 Mar 2023 04:59:05 GMT"
},
{
"version": "v3",
"created": "Fri, 19 Jan 2024 00:57:05 GMT"
},
{
"version": "v4",
"created": "Fri, 4 Apr 2025 05:59:30 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Basiri",
"Salar",
""
],
[
"Bayati",
"Alisina",
""
],
[
"Salapaka",
"Srinivasa",
""
]
] | TITLE: Orthogonal Nonnegative Matrix Factorization with Sparsity Constraints
ABSTRACT: This article presents a novel approach to solving the sparsity-constrained
Orthogonal Nonnegative Matrix Factorization (SCONMF) problem, which requires
decomposing a non-negative data matrix into the product of two lower-rank
non-negative matrices, X=WH, where the mixing matrix H has orthogonal rows
HH^T=I, while also satisfying an upper bound on the number of nonzero elements
in each row. By reformulating SCONMF as a capacity-constrained
facility-location problem (CCFLP), the proposed method naturally integrates
non-negativity, orthogonality, and sparsity constraints. Specifically, our
approach integrates a control-barrier-function (CBF) based framework, used for
dynamic optimal control design problems, with a maximum-entropy-principle-based
framework, used for facility location problems, to enforce these constraints
while ensuring robust factorization. Additionally, this work introduces a
quantitative approach for determining the "true" rank of W or H, equivalent to
the number of "true" features, a critical aspect in ONMF applications where
the number of features is unknown. Simulations on various datasets demonstrate
significantly improved factorizations with reconstruction errors up to 150
times lower, while strictly satisfying all constraints, outperforming
existing methods that struggle with balancing accuracy and constraint
adherence.
|
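As an aside on the SCONMF formulation in the abstract above, the sketch below is a minimal, hypothetical NumPy illustration (not the authors' CBF/maximum-entropy method): it constructs a row-sparse, row-orthogonal H by hand and checks the HH^T = I and per-row sparsity conditions that the factorization X = WH must satisfy.

```python
import numpy as np

# Toy check of the SCONMF constraints on a factorization X ~= W H:
# H has orthogonal rows (H H^T = I) and at most s nonzeros per row.
rng = np.random.default_rng(0)
k, n, s = 3, 12, 4                       # rank, number of columns, sparsity budget per row

# Build a row-sparse, row-orthogonal H by giving each row a disjoint support.
H = np.zeros((k, n))
for i in range(k):
    support = np.arange(i * s, (i + 1) * s)
    H[i, support] = rng.random(s) + 0.1  # strictly positive entries
H /= np.linalg.norm(H, axis=1, keepdims=True)

W = rng.random((8, k))                   # nonnegative mixing weights
X = W @ H                                # data matrix consistent with the model

assert np.allclose(H @ H.T, np.eye(k), atol=1e-8)    # row orthogonality
assert np.all((H != 0).sum(axis=1) <= s)             # per-row sparsity bound
print("reconstruction error:", np.linalg.norm(X - W @ H))
```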
2305.09907 | Vivek Yelleti Mr. | Vivek Yelleti and Ch Priyanka | Incremental Outlier Detection Modelling Using Streaming Analytics in
Finance & Health Care | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | In the era of real-time data, traditional methods often struggle to keep pace
with the dynamic nature of streaming environments. In this paper, we proposed a
hybrid framework wherein (i) stage-I follows a traditional approach where the
model is built once and evaluated in a real-time environment, and (ii) stage-II
employs an incremental learning approach where the model is continuously
retrained as new data arrives, enabling it to adapt and stay up to date. To
implement these frameworks, we employed 8 distinct state-of-the-art outlier
detection models, including one-class support vector machine (OCSVM), isolation
forest adaptive sliding window approach (IForest ASD), exact storm (ES),
angle-based outlier detection (ABOD), local outlier factor (LOF), Kitsune's
online algorithm (KitNet), and K-nearest neighbour conformal density and
distance based (KNN CAD). We evaluated the performance of these models across
seven financial and healthcare prediction tasks, including credit card fraud
detection, churn prediction, Ethereum fraud detection, heart stroke prediction,
and diabetes prediction. The results indicate that our proposed incremental
learning framework significantly improves performance, particularly on highly
imbalanced datasets. Among all models, the IForest ASD model consistently
ranked among the top three best-performing models, demonstrating superior
effectiveness across various datasets.
| [
{
"version": "v1",
"created": "Wed, 17 May 2023 02:30:28 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 09:52:35 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Yelleti",
"Vivek",
""
],
[
"Priyanka",
"Ch",
""
]
] | TITLE: Incremental Outlier Detection Modelling Using Streaming Analytics in
Finance & Health Care
ABSTRACT: In the era of real-time data, traditional methods often struggle to keep pace
with the dynamic nature of streaming environments. In this paper, we proposed a
hybrid framework wherein (i) stage-I follows a traditional approach where the
model is built once and evaluated in a real-time environment, and (ii) stage-II
employs an incremental learning approach where the model is continuously
retrained as new data arrives, enabling it to adapt and stay up to date. To
implement these frameworks, we employed 8 distinct state-of-the-art outlier
detection models, including one-class support vector machine (OCSVM), isolation
forest adaptive sliding window approach (IForest ASD), exact storm (ES),
angle-based outlier detection (ABOD), local outlier factor (LOF), Kitsune's
online algorithm (KitNet), and K-nearest neighbour conformal density and
distance based (KNN CAD). We evaluated the performance of these models across
seven financial and healthcare prediction tasks, including credit card fraud
detection, churn prediction, Ethereum fraud detection, heart stroke prediction,
and diabetes prediction. The results indicate that our proposed incremental
learning framework significantly improves performance, particularly on highly
imbalanced datasets. Among all models, the IForest ASD model consistently
ranked among the top three best-performing models, demonstrating superior
effectiveness across various datasets.
|
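As an aside on the two-stage design described in the abstract above, here is a minimal, hypothetical sketch (not the authors' code) contrasting stage-I (train once, score forever) with stage-II (retrain on a sliding window of recent batches), using scikit-learn's IsolationForest on a synthetic drifting stream.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(1)
stream = [rng.normal(loc=0.1 * t, size=(200, 5)) for t in range(10)]  # drifting batches

# Stage I: build the detector once and score every incoming batch with it.
static_model = IsolationForest(random_state=0).fit(stream[0])

# Stage II: retrain on a sliding window of the most recent batches.
window, window_size = [], 3
for t, batch in enumerate(stream):
    static_scores = static_model.decision_function(batch)

    window = (window + [batch])[-window_size:]
    incremental_model = IsolationForest(random_state=0).fit(np.vstack(window))
    incremental_scores = incremental_model.decision_function(batch)

    print(f"batch {t}: static {static_scores.mean():+.3f}, "
          f"incremental {incremental_scores.mean():+.3f}")
```

On this toy stream, the frozen model's scores drift increasingly negative as the data shifts, while the window-retrained model stays calibrated, which is the behaviour the incremental framework is meant to provide.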
2310.06906 | Anbang Yang | Anbang Yang, Ge Jin, Junjie Huang, Yao Wang, John-Ross Rizzo, Chen
Feng | Distillation Improves Visual Place Recognition for Low Quality Images | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Real-time visual localization often utilizes online computing, for which
query images or videos are transmitted to remote servers for visual place
recognition (VPR). However, limited network bandwidth necessitates
image-quality reduction and thus the degradation of global image descriptors,
reducing VPR accuracy. We address this issue at the descriptor extraction level
with a knowledge-distillation methodology that learns feature representations
from high-quality images to extract more discriminative descriptors from
low-quality images. Our approach includes the Inter-channel Correlation
Knowledge Distillation (ICKD) loss, Mean Squared Error (MSE) loss, and Triplet
loss. We validate the proposed losses on multiple VPR methods and datasets
subjected to JPEG compression, resolution reduction, and video quantization. We
obtain significant improvements in VPR recall rates under all three tested
modalities of lowered image quality. Furthermore, we fill a gap in VPR
literature on video-based data and its influence on VPR performance. This work
contributes to more reliable place recognition in resource-constrained
environments.
| [
{
"version": "v1",
"created": "Tue, 10 Oct 2023 18:03:29 GMT"
},
{
"version": "v2",
"created": "Sun, 27 Oct 2024 22:09:27 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Apr 2025 19:47:09 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Yang",
"Anbang",
""
],
[
"Jin",
"Ge",
""
],
[
"Huang",
"Junjie",
""
],
[
"Wang",
"Yao",
""
],
[
"Rizzo",
"John-Ross",
""
],
[
"Feng",
"Chen",
""
]
] | TITLE: Distillation Improves Visual Place Recognition for Low Quality Images
ABSTRACT: Real-time visual localization often utilizes online computing, for which
query images or videos are transmitted to remote servers for visual place
recognition (VPR). However, limited network bandwidth necessitates
image-quality reduction and thus the degradation of global image descriptors,
reducing VPR accuracy. We address this issue at the descriptor extraction level
with a knowledge-distillation methodology that learns feature representations
from high-quality images to extract more discriminative descriptors from
low-quality images. Our approach includes the Inter-channel Correlation
Knowledge Distillation (ICKD) loss, Mean Squared Error (MSE) loss, and Triplet
loss. We validate the proposed losses on multiple VPR methods and datasets
subjected to JPEG compression, resolution reduction, and video quantization. We
obtain significant improvements in VPR recall rates under all three tested
modalities of lowered image quality. Furthermore, we fill a gap in VPR
literature on video-based data and its influence on VPR performance. This work
contributes to more reliable place recognition in resource-constrained
environments.
|
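To make the three training signals named in the abstract above concrete, here is a minimal, hypothetical PyTorch sketch (not the authors' implementation) combining an MSE term between student and teacher descriptors, a triplet term on the student, and an ICKD-style term approximated here by matching inter-channel correlation matrices of the descriptors.

```python
import torch
import torch.nn.functional as F

def channel_correlation(feats: torch.Tensor) -> torch.Tensor:
    """Inter-channel correlation of a batch of descriptors (B x D)."""
    feats = F.normalize(feats - feats.mean(dim=0, keepdim=True), dim=0)
    return feats.T @ feats                     # D x D correlation-like matrix

def distill_loss(student, teacher, positive, negative,
                 w_mse=1.0, w_ickd=1.0, w_triplet=1.0, margin=0.1):
    # MSE between descriptors of the low-quality (student) and high-quality
    # (teacher) versions of the same places.
    mse = F.mse_loss(student, teacher)
    # ICKD-style term: match the inter-channel correlation structure.
    ickd = F.mse_loss(channel_correlation(student), channel_correlation(teacher))
    # Triplet term: keep matching places close and non-matching places apart.
    triplet = F.triplet_margin_loss(student, positive, negative, margin=margin)
    return w_mse * mse + w_ickd * ickd + w_triplet * triplet

# Toy usage with random descriptors standing in for network outputs.
B, D = 16, 256
student = torch.randn(B, D, requires_grad=True)
loss = distill_loss(student, torch.randn(B, D),
                    positive=torch.randn(B, D), negative=torch.randn(B, D))
loss.backward()
print(float(loss))
```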
2311.02757 | Yushun Dong | Yushun Dong, Binchi Zhang, Hanghang Tong, Jundong Li | Certified Defense on the Fairness of Graph Neural Networks | null | null | null | null | cs.LG cs.CR stat.ML | http://creativecommons.org/licenses/by/4.0/ | Graph Neural Networks (GNNs) have emerged as a prominent graph learning model
in various graph-based tasks over the years. Nevertheless, due to the
vulnerabilities of GNNs, it has been empirically proved that malicious
attackers could easily corrupt the fairness level of their predictions by
adding perturbations to the input graph data. In this paper, we take crucial
steps to study a novel problem of certifiable defense on the fairness level of
GNNs. Specifically, we propose a principled framework named ELEGANT and present
a detailed theoretical certification analysis for the fairness of GNNs. ELEGANT
takes any GNNs as its backbone, and the fairness level of such a backbone is
theoretically impossible to be corrupted under certain perturbation budgets for
attackers. Notably, ELEGANT makes no assumptions about the GNN structure
or parameters, and does not require re-training the GNNs to realize
certification. Hence it can serve as a plug-and-play framework for any
optimized GNNs ready to be deployed. We verify the satisfactory effectiveness
of ELEGANT in practice through extensive experiments on real-world datasets
across different backbones of GNNs, where ELEGANT is also demonstrated to be
beneficial for GNN debiasing. Open-source code can be found at
https://github.com/yushundong/ELEGANT.
| [
{
"version": "v1",
"created": "Sun, 5 Nov 2023 20:29:40 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 05:00:42 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Dong",
"Yushun",
""
],
[
"Zhang",
"Binchi",
""
],
[
"Tong",
"Hanghang",
""
],
[
"Li",
"Jundong",
""
]
] | TITLE: Certified Defense on the Fairness of Graph Neural Networks
ABSTRACT: Graph Neural Networks (GNNs) have emerged as a prominent graph learning model
in various graph-based tasks over the years. Nevertheless, due to the
vulnerabilities of GNNs, it has been empirically proved that malicious
attackers could easily corrupt the fairness level of their predictions by
adding perturbations to the input graph data. In this paper, we take crucial
steps to study a novel problem of certifiable defense on the fairness level of
GNNs. Specifically, we propose a principled framework named ELEGANT and present
a detailed theoretical certification analysis for the fairness of GNNs. ELEGANT
takes any GNNs as its backbone, and the fairness level of such a backbone is
theoretically impossible to be corrupted under certain perturbation budgets for
attackers. Notably, ELEGANT makes no assumptions about the GNN structure
or parameters, and does not require re-training the GNNs to realize
certification. Hence it can serve as a plug-and-play framework for any
optimized GNNs ready to be deployed. We verify the satisfactory effectiveness
of ELEGANT in practice through extensive experiments on real-world datasets
across different backbones of GNNs, where ELEGANT is also demonstrated to be
beneficial for GNN debiasing. Open-source code can be found at
https://github.com/yushundong/ELEGANT.
|
2311.08870 | Mingzhao Yang | Mingzhao Yang, Shangchao Su, Bin Li, Xiangyang Xue | One-Shot Heterogeneous Federated Learning with Local Model-Guided
Diffusion Models | null | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In recent years, One-shot Federated Learning methods based on Diffusion
Models have garnered increasing attention due to their remarkable performance.
However, most of these methods require the deployment of foundation models on
client devices, which significantly raises the computational requirements and
reduces their adaptability to heterogeneous client models compared to
traditional FL methods. In this paper, we propose FedLMG, a heterogeneous
one-shot Federated learning method with Local Model-Guided diffusion models.
Briefly speaking, in FedLMG, clients do not need access to any foundation
models but only train and upload their local models, which is consistent with
traditional FL methods. On the clients, we employ classification loss and BN
loss to capture the broad category features and detailed contextual features of
the client distributions. On the server, based on the uploaded client models,
we utilize backpropagation to guide the server's DM in generating synthetic
datasets that comply with the client distributions, which are then used to
train the aggregated model. By using the locally trained client models as a
medium to transfer client knowledge, our method significantly reduces the
computational requirements on client devices and effectively adapts to
scenarios with heterogeneous clients. Extensive quantitative and visualization
experiments on three large-scale real-world datasets, along with theoretical
analysis, demonstrate that the synthetic datasets generated by FedLMG exhibit
comparable quality and diversity to the client datasets, which leads to an
aggregated model that outperforms all compared methods and even the performance
ceiling, further elucidating the significant potential of utilizing DMs in FL.
| [
{
"version": "v1",
"created": "Wed, 15 Nov 2023 11:11:25 GMT"
},
{
"version": "v2",
"created": "Thu, 16 Nov 2023 15:43:52 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Apr 2025 03:46:28 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Yang",
"Mingzhao",
""
],
[
"Su",
"Shangchao",
""
],
[
"Li",
"Bin",
""
],
[
"Xue",
"Xiangyang",
""
]
] | TITLE: One-Shot Heterogeneous Federated Learning with Local Model-Guided
Diffusion Models
ABSTRACT: In recent years, One-shot Federated Learning methods based on Diffusion
Models have garnered increasing attention due to their remarkable performance.
However, most of these methods require the deployment of foundation models on
client devices, which significantly raises the computational requirements and
reduces their adaptability to heterogeneous client models compared to
traditional FL methods. In this paper, we propose FedLMG, a heterogeneous
one-shot Federated learning method with Local Model-Guided diffusion models.
Briefly speaking, in FedLMG, clients do not need access to any foundation
models but only train and upload their local models, which is consistent with
traditional FL methods. On the clients, we employ classification loss and BN
loss to capture the broad category features and detailed contextual features of
the client distributions. On the server, based on the uploaded client models,
we utilize backpropagation to guide the server's DM in generating synthetic
datasets that comply with the client distributions, which are then used to
train the aggregated model. By using the locally trained client models as a
medium to transfer client knowledge, our method significantly reduces the
computational requirements on client devices and effectively adapts to
scenarios with heterogeneous clients. Extensive quantitative and visualization
experiments on three large-scale real-world datasets, along with theoretical
analysis, demonstrate that the synthetic datasets generated by FedLMG exhibit
comparable quality and diversity to the client datasets, which leads to an
aggregated model that outperforms all compared methods and even the performance
ceiling, further elucidating the significant potential of utilizing DMs in FL.
|
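As an aside on the classification-plus-BN guidance mentioned in the abstract above, here is a minimal, hypothetical PyTorch sketch (not the authors' FedLMG code) of a BN-statistics loss of the kind commonly used to steer synthetic images toward a client model's feature distribution: the batch statistics at each BatchNorm input are pushed toward that layer's running statistics, alongside a classification loss on the client model's logits.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet18

client_model = resnet18(num_classes=10).eval()   # stands in for an uploaded client model

def bn_statistics_loss(model: nn.Module, images: torch.Tensor):
    """Penalize deviation of batch statistics from each BN layer's running statistics."""
    losses, hooks = [], []

    def make_hook(bn):
        def hook(module, inputs, output):
            x = inputs[0]
            mean = x.mean(dim=(0, 2, 3))
            var = x.var(dim=(0, 2, 3), unbiased=False)
            losses.append(F.mse_loss(mean, bn.running_mean) +
                          F.mse_loss(var, bn.running_var))
        return hook

    for m in model.modules():
        if isinstance(m, nn.BatchNorm2d):
            hooks.append(m.register_forward_hook(make_hook(m)))
    logits = model(images)
    for h in hooks:
        h.remove()
    return torch.stack(losses).sum(), logits

# Toy usage: optimize random "synthetic" images against BN loss + classification loss.
images = torch.randn(8, 3, 64, 64, requires_grad=True)
targets = torch.randint(0, 10, (8,))
bn_loss, logits = bn_statistics_loss(client_model, images)
(bn_loss + F.cross_entropy(logits, targets)).backward()
print(float(bn_loss))
```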
2311.18773 | Zitian Tang | Zitian Tang, Rohan Myer Krishnan, Zhiqiu Yu and Chen Sun | Spacewalk-18: A Benchmark for Multimodal and Long-form Procedural Video
Understanding in Novel Domains | Under submission. Code and models will be released at
https://brown-palm.github.io/Spacewalk-18/ | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Learning from (procedural) videos has increasingly served as a pathway for
embodied agents to acquire skills from human demonstrations. To do this, video
understanding models must be able to obtain structured understandings, such as
the temporal segmentation of a demonstration into sequences of actions and
skills, and to generalize the understandings to novel environments, tasks, and
problem domains. In pursuit of this goal, we introduce Spacewalk-18, a
benchmark containing two tasks: (1) step recognition and (2) video question
answering, over a dataset of temporally segmented and labeled tasks in
International Space Station spacewalk recordings. In tandem, the two tasks
quantify a model's ability to: (1) generalize to novel domains; (2) utilize
long temporal context and multimodal (e.g. visual and speech) information. Our
extensive experimental analysis highlights the challenges of Spacewalk-18, but
also suggests best practices for domain generalization and long-form
understanding. Notably, we discover a promising adaptation via summarization
technique that leads to significant performance improvement without model
fine-tuning. The Spacewalk-18 benchmark is released at
https://brown-palm.github.io/Spacewalk-18/.
| [
{
"version": "v1",
"created": "Thu, 30 Nov 2023 18:19:23 GMT"
},
{
"version": "v2",
"created": "Fri, 22 Mar 2024 01:21:14 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Apr 2025 21:40:28 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Tang",
"Zitian",
""
],
[
"Krishnan",
"Rohan Myer",
""
],
[
"Yu",
"Zhiqiu",
""
],
[
"Sun",
"Chen",
""
]
] | TITLE: Spacewalk-18: A Benchmark for Multimodal and Long-form Procedural Video
Understanding in Novel Domains
ABSTRACT: Learning from (procedural) videos has increasingly served as a pathway for
embodied agents to acquire skills from human demonstrations. To do this, video
understanding models must be able to obtain structured understandings, such as
the temporal segmentation of a demonstration into sequences of actions and
skills, and to generalize the understandings to novel environments, tasks, and
problem domains. In pursuit of this goal, we introduce Spacewalk-18, a
benchmark containing two tasks: (1) step recognition and (2) video question
answering, over a dataset of temporally segmented and labeled tasks in
International Space Station spacewalk recordings. In tandem, the two tasks
quantify a model's ability to: (1) generalize to novel domains; (2) utilize
long temporal context and multimodal (e.g. visual and speech) information. Our
extensive experimental analysis highlights the challenges of Spacewalk-18, but
also suggests best practices for domain generalization and long-form
understanding. Notably, we discover a promising adaptation via summarization
technique that leads to significant performance improvement without model
fine-tuning. The Spacewalk-18 benchmark is released at
https://brown-palm.github.io/Spacewalk-18/.
|
2402.12309 | Siheng Xiong | Siheng Xiong, Yuan Yang, Faramarz Fekri, James Clayton Kerce | TILP: Differentiable Learning of Temporal Logical Rules on Knowledge
Graphs | ICLR 2023 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Compared with static knowledge graphs, temporal knowledge graphs (tKG), which
can capture the evolution and change of information over time, are more
realistic and general. However, due to the complexity that the notion of time
introduces to the learning of the rules, an accurate graph reasoning, e.g.,
predicting new links between entities, is still a difficult problem. In this
paper, we propose TILP, a differentiable framework for temporal logical rule
learning. By designing a constrained random walk mechanism and introducing
temporal operators, we ensure the efficiency of our model. We present
temporal features modeling in tKG, e.g., recurrence, temporal order, interval
between pair of relations, and duration, and incorporate it into our learning
process. We compare TILP with state-of-the-art methods on two benchmark
datasets. We show that our proposed framework can improve upon the performance
of baseline methods while providing interpretable results. In particular, we
consider various scenarios in which training samples are limited, data is
biased, and the time range between training and inference is different. In all
these cases, TILP works much better than the state-of-the-art methods.
| [
{
"version": "v1",
"created": "Mon, 19 Feb 2024 17:30:44 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 20:08:28 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Xiong",
"Siheng",
""
],
[
"Yang",
"Yuan",
""
],
[
"Fekri",
"Faramarz",
""
],
[
"Kerce",
"James Clayton",
""
]
] | TITLE: TILP: Differentiable Learning of Temporal Logical Rules on Knowledge
Graphs
ABSTRACT: Compared with static knowledge graphs, temporal knowledge graphs (tKG), which
can capture the evolution and change of information over time, are more
realistic and general. However, due to the complexity that the notion of time
introduces to the learning of the rules, an accurate graph reasoning, e.g.,
predicting new links between entities, is still a difficult problem. In this
paper, we propose TILP, a differentiable framework for temporal logical rule
learning. By designing a constrained random walk mechanism and introducing
temporal operators, we ensure the efficiency of our model. We present
temporal features modeling in tKG, e.g., recurrence, temporal order, interval
between pair of relations, and duration, and incorporate it into our learning
process. We compare TILP with state-of-the-art methods on two benchmark
datasets. We show that our proposed framework can improve upon the performance
of baseline methods while providing interpretable results. In particular, we
consider various scenarios in which training samples are limited, data is
biased, and the time range between training and inference is different. In all
these cases, TILP works much better than the state-of-the-art methods.
|
2403.10070 | Daiying Yin | Jianyu Hu, Juan-Pablo Ortega, Daiying Yin | A Structure-Preserving Kernel Method for Learning Hamiltonian Systems | null | null | null | null | stat.ML cs.LG math.DS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | A structure-preserving kernel ridge regression method is presented that
allows the recovery of nonlinear Hamiltonian functions out of datasets made of
noisy observations of Hamiltonian vector fields. The method proposes a
closed-form solution that yields excellent numerical performance, surpassing
other techniques proposed in the literature in this setup. From the
methodological point of view, the paper extends kernel regression methods to
problems in which loss functions involving linear functions of gradients are
required and, in particular, a differential reproducing property and a
Representer Theorem are proved in this context. The relation between the
structure-preserving kernel estimator and the Gaussian posterior mean estimator
is analyzed. A full error analysis is conducted that provides convergence rates
using fixed and adaptive regularization parameters. The good performance of the
proposed estimator together with the convergence rate is illustrated with
various numerical experiments.
| [
{
"version": "v1",
"created": "Fri, 15 Mar 2024 07:20:21 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 04:28:27 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Hu",
"Jianyu",
""
],
[
"Ortega",
"Juan-Pablo",
""
],
[
"Yin",
"Daiying",
""
]
] | TITLE: A Structure-Preserving Kernel Method for Learning Hamiltonian Systems
ABSTRACT: A structure-preserving kernel ridge regression method is presented that
allows the recovery of nonlinear Hamiltonian functions out of datasets made of
noisy observations of Hamiltonian vector fields. The method proposes a
closed-form solution that yields excellent numerical performance, surpassing
other techniques proposed in the literature in this setup. From the
methodological point of view, the paper extends kernel regression methods to
problems in which loss functions involving linear functions of gradients are
required and, in particular, a differential reproducing property and a
Representer Theorem are proved in this context. The relation between the
structure-preserving kernel estimator and the Gaussian posterior mean estimator
is analyzed. A full error analysis is conducted that provides convergence rates
using fixed and adaptive regularization parameters. The good performance of the
proposed estimator together with the convergence rate is illustrated with
various numerical experiments.
|
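For context on the closed-form estimator described in the abstract above, here is a minimal, hypothetical NumPy sketch of plain Gaussian-kernel ridge regression; the paper's structure-preserving method extends this classical setup to losses involving gradients of the learned Hamiltonian, which this toy example deliberately omits.

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(100, 2))                          # phase-space sample points
y = 0.5 * (X ** 2).sum(axis=1) + 0.05 * rng.normal(size=100)   # noisy toy "Hamiltonian"

lam = 1e-3                                            # ridge regularization parameter
K = gaussian_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)  # closed-form KRR coefficients

X_test = rng.uniform(-2, 2, size=(10, 2))
y_hat = gaussian_kernel(X_test, X) @ alpha
print("max abs error:", np.abs(y_hat - 0.5 * (X_test ** 2).sum(axis=1)).max())
```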
2403.17834 | Ibrahim Hamamci Mr. | Ibrahim Ethem Hamamci, Sezgin Er, Chenyu Wang, Furkan Almas, Ayse
Gulnihan Simsek, Sevval Nil Esirgun, Irem Doga, Omer Faruk Durugol, Weicheng
Dai, Murong Xu, Muhammed Furkan Dasdelen, Bastian Wittmann, Tamaz
Amiranashvili, Enis Simsar, Mehmet Simsar, Emine Bensu Erdemir, Abdullah
Alanbay, Anjany Sekuboyina, Berkan Lafci, Christian Bluethgen, Kayhan
Batmanghelich, Mehmet Kemal Ozdemir, Bjoern Menze | Developing Generalist Foundation Models from a Multimodal Dataset for 3D
Computed Tomography | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | While computer vision has achieved tremendous success with multimodal
encoding and direct textual interaction with images via chat-based large
language models, similar advancements in medical imaging AI, particularly in 3D
imaging, have been limited due to the scarcity of comprehensive datasets. To
address this critical gap, we introduce CT-RATE, the first dataset that pairs
3D medical images with corresponding textual reports. CT-RATE comprises 25,692
non-contrast 3D chest CT scans from 21,304 unique patients. Through various
reconstructions, these scans are expanded to 50,188 volumes, totaling over 14.3
million 2D slices. Each scan is accompanied by its corresponding radiology
report. Leveraging CT-RATE, we develop CT-CLIP, a CT-focused contrastive
language-image pretraining framework designed for broad applications without
the need for task-specific training. We demonstrate how CT-CLIP can be used in
two tasks: multi-abnormality detection and case retrieval. Remarkably, in
multi-abnormality detection, CT-CLIP outperforms state-of-the-art fully
supervised models across all key metrics, effectively eliminating the need for
manual annotation. In case retrieval, it efficiently retrieves relevant cases
using either image or textual queries, thereby enhancing knowledge
dissemination. By combining CT-CLIP's vision encoder with a pretrained large
language model, we create CT-CHAT, a vision-language foundational chat model
for 3D chest CT volumes. Finetuned on over 2.7 million question-answer pairs
derived from the CT-RATE dataset, CT-CHAT surpasses other multimodal AI
assistants, underscoring the necessity for specialized methods in 3D medical
imaging. Collectively, the open-source release of CT-RATE, CT-CLIP, and CT-CHAT
not only addresses critical challenges in 3D medical imaging, but also lays the
groundwork for future innovations in medical AI and improved patient care.
| [
{
"version": "v1",
"created": "Tue, 26 Mar 2024 16:19:56 GMT"
},
{
"version": "v2",
"created": "Wed, 16 Oct 2024 12:49:19 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Apr 2025 13:02:12 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Hamamci",
"Ibrahim Ethem",
""
],
[
"Er",
"Sezgin",
""
],
[
"Wang",
"Chenyu",
""
],
[
"Almas",
"Furkan",
""
],
[
"Simsek",
"Ayse Gulnihan",
""
],
[
"Esirgun",
"Sevval Nil",
""
],
[
"Doga",
"Irem",
""
],
[
"Durugol",
"Omer Faruk",
""
],
[
"Dai",
"Weicheng",
""
],
[
"Xu",
"Murong",
""
],
[
"Dasdelen",
"Muhammed Furkan",
""
],
[
"Wittmann",
"Bastian",
""
],
[
"Amiranashvili",
"Tamaz",
""
],
[
"Simsar",
"Enis",
""
],
[
"Simsar",
"Mehmet",
""
],
[
"Erdemir",
"Emine Bensu",
""
],
[
"Alanbay",
"Abdullah",
""
],
[
"Sekuboyina",
"Anjany",
""
],
[
"Lafci",
"Berkan",
""
],
[
"Bluethgen",
"Christian",
""
],
[
"Batmanghelich",
"Kayhan",
""
],
[
"Ozdemir",
"Mehmet Kemal",
""
],
[
"Menze",
"Bjoern",
""
]
] | TITLE: Developing Generalist Foundation Models from a Multimodal Dataset for 3D
Computed Tomography
ABSTRACT: While computer vision has achieved tremendous success with multimodal
encoding and direct textual interaction with images via chat-based large
language models, similar advancements in medical imaging AI, particularly in 3D
imaging, have been limited due to the scarcity of comprehensive datasets. To
address this critical gap, we introduce CT-RATE, the first dataset that pairs
3D medical images with corresponding textual reports. CT-RATE comprises 25,692
non-contrast 3D chest CT scans from 21,304 unique patients. Through various
reconstructions, these scans are expanded to 50,188 volumes, totaling over 14.3
million 2D slices. Each scan is accompanied by its corresponding radiology
report. Leveraging CT-RATE, we develop CT-CLIP, a CT-focused contrastive
language-image pretraining framework designed for broad applications without
the need for task-specific training. We demonstrate how CT-CLIP can be used in
two tasks: multi-abnormality detection and case retrieval. Remarkably, in
multi-abnormality detection, CT-CLIP outperforms state-of-the-art fully
supervised models across all key metrics, effectively eliminating the need for
manual annotation. In case retrieval, it efficiently retrieves relevant cases
using either image or textual queries, thereby enhancing knowledge
dissemination. By combining CT-CLIP's vision encoder with a pretrained large
language model, we create CT-CHAT, a vision-language foundational chat model
for 3D chest CT volumes. Finetuned on over 2.7 million question-answer pairs
derived from the CT-RATE dataset, CT-CHAT surpasses other multimodal AI
assistants, underscoring the necessity for specialized methods in 3D medical
imaging. Collectively, the open-source release of CT-RATE, CT-CLIP, and CT-CHAT
not only addresses critical challenges in 3D medical imaging, but also lays the
groundwork for future innovations in medical AI and improved patient care.
|
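As an aside on the contrastive pretraining behind CT-CLIP described in the abstract above, here is a minimal, hypothetical PyTorch sketch (not the authors' code) of a symmetric CLIP-style InfoNCE loss between a batch of CT-volume embeddings and report embeddings.

```python
import torch
import torch.nn.functional as F

def clip_loss(image_emb: torch.Tensor, text_emb: torch.Tensor,
              temperature: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE loss: matched (volume, report) pairs share the same index."""
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = image_emb @ text_emb.T / temperature        # B x B similarity matrix
    targets = torch.arange(len(image_emb), device=logits.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.T, targets))

# Toy usage with random embeddings standing in for encoder outputs.
volumes = torch.randn(32, 512, requires_grad=True)   # 3D CT encoder output
reports = torch.randn(32, 512, requires_grad=True)   # radiology-report encoder output
loss = clip_loss(volumes, reports)
loss.backward()
print(float(loss))
```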
2404.00916 | Heemin Yang | Heemin Yang, Jaesung Rim, Seungyong Lee, Seung-Hwan Baek, Sunghyun Cho | Gyro-based Neural Single Image Deblurring | 10 pages, 10 figures, CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present GyroDeblurNet, a novel single-image deblurring
method that utilizes a gyro sensor to resolve the ill-posedness of image
deblurring. The gyro sensor provides valuable information about camera motion
that can improve deblurring quality. However, exploiting real-world gyro data
is challenging due to errors from various sources. To handle these errors,
GyroDeblurNet is equipped with two novel neural network blocks: a gyro
refinement block and a gyro deblurring block. The gyro refinement block refines
the erroneous gyro data using the blur information from the input image. The
gyro deblurring block removes blur from the input image using the refined gyro
data and further compensates for gyro error by leveraging the blur information
from the input image. For training a neural network with erroneous gyro data,
we propose a training strategy based on curriculum learning. We also
introduce a novel gyro data embedding scheme to represent real-world intricate
camera shakes. Finally, we present both synthetic and real-world datasets for
training and evaluating gyro-based single image deblurring. Our experiments
demonstrate that our approach achieves state-of-the-art deblurring quality by
effectively utilizing erroneous gyro data.
| [
{
"version": "v1",
"created": "Mon, 1 Apr 2024 04:43:45 GMT"
},
{
"version": "v2",
"created": "Mon, 8 Apr 2024 08:08:43 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Apr 2025 02:10:10 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Yang",
"Heemin",
""
],
[
"Rim",
"Jaesung",
""
],
[
"Lee",
"Seungyong",
""
],
[
"Baek",
"Seung-Hwan",
""
],
[
"Cho",
"Sunghyun",
""
]
] | TITLE: Gyro-based Neural Single Image Deblurring
ABSTRACT: In this paper, we present GyroDeblurNet, a novel single-image deblurring
method that utilizes a gyro sensor to resolve the ill-posedness of image
deblurring. The gyro sensor provides valuable information about camera motion
that can improve deblurring quality. However, exploiting real-world gyro data
is challenging due to errors from various sources. To handle these errors,
GyroDeblurNet is equipped with two novel neural network blocks: a gyro
refinement block and a gyro deblurring block. The gyro refinement block refines
the erroneous gyro data using the blur information from the input image. The
gyro deblurring block removes blur from the input image using the refined gyro
data and further compensates for gyro error by leveraging the blur information
from the input image. For training a neural network with erroneous gyro data,
we propose a training strategy based on curriculum learning. We also
introduce a novel gyro data embedding scheme to represent real-world intricate
camera shakes. Finally, we present both synthetic and real-world datasets for
training and evaluating gyro-based single image deblurring. Our experiments
demonstrate that our approach achieves state-of-the-art deblurring quality by
effectively utilizing erroneous gyro data.
|
2404.05659 | Khai Le-Duc | Khai Le-Duc | VietMed: A Dataset and Benchmark for Automatic Speech Recognition of
Vietnamese in the Medical Domain | LREC-COLING 2024 (Oral), 24 pages | null | null | null | cs.CL cs.AI eess.AS | http://creativecommons.org/licenses/by/4.0/ | Due to privacy restrictions, there's a shortage of publicly available speech
recognition datasets in the medical domain. In this work, we present VietMed -
a Vietnamese speech recognition dataset in the medical domain comprising 16h of
labeled medical speech, 1000h of unlabeled medical speech and 1200h of
unlabeled general-domain speech. To the best of our knowledge, VietMed is by far
world's largest public medical speech recognition dataset in 7 aspects: total
duration, number of speakers, diseases, recording conditions, speaker roles,
unique medical terms and accents. VietMed is also by far the largest public
Vietnamese speech dataset in terms of total duration. Additionally, we are the
first to present a medical ASR dataset covering all ICD-10 disease groups and
all accents within a country. Moreover, we release the first public large-scale
pre-trained models for Vietnamese ASR, w2v2-Viet and XLSR-53-Viet, along with
the first public large-scale fine-tuned models for medical ASR. Even without
any medical data in unsupervised pre-training, our best pre-trained model
XLSR-53-Viet generalizes very well to the medical domain by outperforming
state-of-the-art XLSR-53, from 51.8% to 29.6% WER on test set (a relative
reduction of more than 40%). All code, data and models are made publicly
available: https://github.com/leduckhai/MultiMed/tree/master/VietMed.
| [
{
"version": "v1",
"created": "Mon, 8 Apr 2024 16:43:52 GMT"
},
{
"version": "v2",
"created": "Tue, 28 May 2024 05:27:48 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Apr 2025 15:06:21 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Le-Duc",
"Khai",
""
]
] | TITLE: VietMed: A Dataset and Benchmark for Automatic Speech Recognition of
Vietnamese in the Medical Domain
ABSTRACT: Due to privacy restrictions, there's a shortage of publicly available speech
recognition datasets in the medical domain. In this work, we present VietMed -
a Vietnamese speech recognition dataset in the medical domain comprising 16h of
labeled medical speech, 1000h of unlabeled medical speech and 1200h of
unlabeled general-domain speech. To the best of our knowledge, VietMed is by far
world's largest public medical speech recognition dataset in 7 aspects: total
duration, number of speakers, diseases, recording conditions, speaker roles,
unique medical terms and accents. VietMed is also by far the largest public
Vietnamese speech dataset in terms of total duration. Additionally, we are the
first to present a medical ASR dataset covering all ICD-10 disease groups and
all accents within a country. Moreover, we release the first public large-scale
pre-trained models for Vietnamese ASR, w2v2-Viet and XLSR-53-Viet, along with
the first public large-scale fine-tuned models for medical ASR. Even without
any medical data in unsupervised pre-training, our best pre-trained model
XLSR-53-Viet generalizes very well to the medical domain by outperforming
state-of-the-art XLSR-53, from 51.8% to 29.6% WER on test set (a relative
reduction of more than 40%). All code, data and models are made publicly
available: https://github.com/leduckhai/MultiMed/tree/master/VietMed.
|
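As a quick sanity check of the WER figures quoted above, the drop from 51.8% to 29.6% is a relative reduction of roughly 43%, consistent with the "more than 40%" claim; the snippet below just reproduces the arithmetic.

```python
baseline_wer, improved_wer = 51.8, 29.6
relative_reduction = (baseline_wer - improved_wer) / baseline_wer
print(f"relative WER reduction: {relative_reduction:.1%}")   # ~42.9%
```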
2404.10620 | Sinisa Stekovic | Sinisa Stekovic, Arslan Artykov, Stefan Ainetter, Mattia D'Urso,
Friedrich Fraundorfer | PyTorchGeoNodes: Enabling Differentiable Shape Programs for 3D Shape
Reconstruction | Accepted at CVPR | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | We propose PyTorchGeoNodes, a differentiable module for reconstructing 3D
objects and their parameters from images using interpretable shape programs.
Unlike traditional CAD model retrieval, shape programs allow reasoning about
semantic parameters, editing, and a low memory footprint. Despite their
potential, shape programs for 3D scene understanding have been largely
overlooked. Our key contribution is enabling gradient-based optimization by
parsing shape programs, or more precisely procedural models designed in
Blender, into efficient PyTorch code. While there are many possible
applications of our PyTochGeoNodes, we show that a combination of
PyTorchGeoNodes with genetic algorithm is a method of choice to optimize both
discrete and continuous shape program parameters for 3D reconstruction and
understanding of 3D object parameters. Our modular framework can be further
integrated with other reconstruction algorithms, and we demonstrate one such
integration to enable procedural Gaussian splatting. Our experiments on the
ScanNet dataset show that our method achieves accurate reconstructions while
enabling a previously unseen level of 3D scene understanding.
| [
{
"version": "v1",
"created": "Tue, 16 Apr 2024 14:43:33 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 10:54:29 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Stekovic",
"Sinisa",
""
],
[
"Artykov",
"Arslan",
""
],
[
"Ainetter",
"Stefan",
""
],
[
"D'Urso",
"Mattia",
""
],
[
"Fraundorfer",
"Friedrich",
""
]
] | TITLE: PyTorchGeoNodes: Enabling Differentiable Shape Programs for 3D Shape
Reconstruction
ABSTRACT: We propose PyTorchGeoNodes, a differentiable module for reconstructing 3D
objects and their parameters from images using interpretable shape programs.
Unlike traditional CAD model retrieval, shape programs allow reasoning about
semantic parameters, editing, and a low memory footprint. Despite their
potential, shape programs for 3D scene understanding have been largely
overlooked. Our key contribution is enabling gradient-based optimization by
parsing shape programs, or more precisely procedural models designed in
Blender, into efficient PyTorch code. While there are many possible
applications of our PyTochGeoNodes, we show that a combination of
PyTorchGeoNodes with genetic algorithm is a method of choice to optimize both
discrete and continuous shape program parameters for 3D reconstruction and
understanding of 3D object parameters. Our modular framework can be further
integrated with other reconstruction algorithms, and we demonstrate one such
integration to enable procedural Gaussian splatting. Our experiments on the
ScanNet dataset show that our method achieves accurate reconstructions while
enabling a previously unseen level of 3D scene understanding.
|
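As an aside on optimizing mixed discrete and continuous shape-program parameters, as described in the abstract above, here is a minimal, hypothetical genetic-algorithm sketch in plain Python (not the authors' pipeline): each genome holds one discrete and two continuous genes, and a toy fitness function stands in for the differentiable reconstruction objective.

```python
import random

random.seed(0)
SHAPE_TYPES = ["box", "cylinder", "shelf"]        # hypothetical discrete parameter

def fitness(genome):
    """Toy objective standing in for a rendering/reconstruction loss (higher is better)."""
    shape, width, height = genome
    target = {"shape": "shelf", "width": 0.8, "height": 1.9}
    score = -abs(width - target["width"]) - abs(height - target["height"])
    return score + (1.0 if shape == target["shape"] else 0.0)

def mutate(genome, rate=0.3):
    shape, width, height = genome
    if random.random() < rate:
        shape = random.choice(SHAPE_TYPES)                    # discrete mutation
    width += random.gauss(0, 0.1) if random.random() < rate else 0.0
    height += random.gauss(0, 0.1) if random.random() < rate else 0.0
    return (shape, width, height)

population = [(random.choice(SHAPE_TYPES), random.uniform(0.2, 2.0),
               random.uniform(0.2, 2.5)) for _ in range(30)]

for generation in range(40):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]                                 # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(20)]

print("best genome:", max(population, key=fitness))
```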
2405.16444 | Jiayi Yao | Jiayi Yao, Hanchen Li, Yuhan Liu, Siddhant Ray, Yihua Cheng, Qizheng
Zhang, Kuntai Du, Shan Lu, Junchen Jiang | CacheBlend: Fast Large Language Model Serving for RAG with Cached
Knowledge Fusion | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Large language models (LLMs) often incorporate multiple text chunks in their
inputs to provide the necessary contexts. To speed up the prefill of the long
LLM inputs, one can pre-compute the KV cache of a text and re-use the KV cache
when the context is reused as the prefix of another LLM input. However, the
reused text chunks are not always the input prefix, which makes precomputed KV
caches not directly usable since they ignore the text's cross-attention with
the preceding texts. Thus, the benefits of reusing KV caches remain largely
unrealized.
This paper tackles just one challenge: when an LLM input contains multiple
text chunks, how to quickly combine their precomputed KV caches in order to
achieve the same generation quality as the expensive full prefill (i.e.,
without reusing KV cache)? This challenge naturally arises in
retrieval-augmented generation (RAG) where the input is supplemented with
multiple retrieved texts as the context. We present CacheBlend, a scheme that
reuses the precomputed KV caches, regardless prefix or not, and selectively
recomputes the KV values of a small subset of tokens to partially update each
reused KV cache. In the meantime, the small extra delay for recomputing some
tokens can be pipelined with the retrieval of KV caches within the same job,
allowing CacheBlend to store KV caches in slower devices with more storage
capacity while retrieving them without increasing the inference delay. By
comparing CacheBlend with the state-of-the-art KV cache reusing schemes on
three open-source LLMs of various sizes and four popular benchmark datasets of
different tasks, we show that CacheBlend reduces time-to-first-token (TTFT) by
2.2-3.3x and increases the inference throughput by 2.8-5x from full KV
recompute without compromising generation quality. The code is available at
https://github.com/LMCache/LMCache.
| [
{
"version": "v1",
"created": "Sun, 26 May 2024 06:00:17 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Jun 2024 10:57:57 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Apr 2025 22:49:22 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Yao",
"Jiayi",
""
],
[
"Li",
"Hanchen",
""
],
[
"Liu",
"Yuhan",
""
],
[
"Ray",
"Siddhant",
""
],
[
"Cheng",
"Yihua",
""
],
[
"Zhang",
"Qizheng",
""
],
[
"Du",
"Kuntai",
""
],
[
"Lu",
"Shan",
""
],
[
"Jiang",
"Junchen",
""
]
] | TITLE: CacheBlend: Fast Large Language Model Serving for RAG with Cached
Knowledge Fusion
ABSTRACT: Large language models (LLMs) often incorporate multiple text chunks in their
inputs to provide the necessary contexts. To speed up the prefill of the long
LLM inputs, one can pre-compute the KV cache of a text and re-use the KV cache
when the context is reused as the prefix of another LLM input. However, the
reused text chunks are not always the input prefix, which makes precomputed KV
caches not directly usable since they ignore the text's cross-attention with
the preceding texts. Thus, the benefits of reusing KV caches remain largely
unrealized.
This paper tackles just one challenge: when an LLM input contains multiple
text chunks, how to quickly combine their precomputed KV caches in order to
achieve the same generation quality as the expensive full prefill (i.e.,
without reusing KV cache)? This challenge naturally arises in
retrieval-augmented generation (RAG) where the input is supplemented with
multiple retrieved texts as the context. We present CacheBlend, a scheme that
reuses the precomputed KV caches, regardless of whether they are the prefix or not, and selectively
recomputes the KV values of a small subset of tokens to partially update each
reused KV cache. In the meantime, the small extra delay for recomputing some
tokens can be pipelined with the retrieval of KV caches within the same job,
allowing CacheBlend to store KV caches in slower devices with more storage
capacity while retrieving them without increasing the inference delay. By
comparing CacheBlend with the state-of-the-art KV cache reusing schemes on
three open-source LLMs of various sizes and four popular benchmark datasets of
different tasks, we show that CacheBlend reduces time-to-first-token (TTFT) by
2.2-3.3x and increases the inference throughput by 2.8-5x from full KV
recompute without compromising generation quality. The code is available at
https://github.com/LMCache/LMCache.
|
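As an aside on the selective-recomputation idea described in the abstract above, here is a minimal, hypothetical sketch in plain Python (not the CacheBlend implementation): per-chunk "KV caches" are precomputed independently of one another, concatenated, and a small fraction of token positions is then marked for recomputation so the combined cache can account for cross-chunk attention; the real system chooses those positions by attention deviation, which this toy selection does not model.

```python
# Hypothetical illustration of combining per-chunk KV caches with selective
# recomputation; real KV entries would be tensors from a transformer forward pass.

def precompute_chunk_cache(chunk_id, tokens):
    # Each chunk's cache is computed as if the chunk started the input,
    # so it ignores cross-attention with preceding chunks ("stale" entries).
    return [{"chunk": chunk_id, "token": t, "stale": True} for t in tokens]

def blend_caches(chunk_caches, recompute_ratio=0.15):
    combined = [entry for cache in chunk_caches for entry in cache]
    # Recompute only a small, evenly spaced subset of positions with full context.
    step = max(1, int(1 / recompute_ratio))
    for pos in range(0, len(combined), step):
        combined[pos]["stale"] = False           # recomputed with the full prefix
    return combined

caches = [precompute_chunk_cache(i, range(20)) for i in range(3)]
blended = blend_caches(caches)
recomputed = sum(not e["stale"] for e in blended)
print(f"recomputed {recomputed}/{len(blended)} token positions")
```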
2405.17743 | Zhengyang Tang | Chenyu Huang, Zhengyang Tang, Shixi Hu, Ruoqing Jiang, Xin Zheng,
Dongdong Ge, Benyou Wang, Zizhuo Wang | ORLM: A Customizable Framework in Training Large Models for Automated
Optimization Modeling | accepted by Operations Research | null | null | null | cs.CL cs.AI cs.CE cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Optimization modeling plays a critical role in the application of Operations
Research (OR) tools to address real-world problems, yet it poses challenges
and requires extensive expertise from OR experts. With the advent of large
language models (LLMs), new opportunities have emerged to streamline and
automate such tasks. However, current research predominantly relies on
closed-source LLMs such as GPT-4, along with extensive prompt engineering
techniques. This reliance stems from the scarcity of high-quality training
datasets for optimization modeling, resulting in elevated costs, prolonged
processing times, and privacy concerns. To address these challenges, our work
is the first to propose a viable path for training open-source LLMs that are
capable of optimization modeling and developing solver codes, eventually
leading to a superior ability for automating optimization modeling and solving.
In particular, we design OR-Instruct, a semi-automated data synthesis
framework for optimization modeling that enables customizable enhancements for
specific scenarios or model types. This work also introduces IndustryOR, the
first industrial benchmark for evaluating LLMs in solving practical OR
problems. We train several 7B-scale open-source LLMs using synthesized data
(dubbed ORLMs, https://github.com/Cardinal-Operations/ORLM), which exhibit
significantly enhanced optimization modeling capabilities, achieving
competitive performance across the NL4OPT, MAMO, and IndustryOR benchmarks.
Additionally, our experiments highlight the potential of scaling law and
reinforcement learning to further enhance the performance of ORLMs. The
workflows and human-machine interaction paradigms of ORLMs in practical
industrial applications are also discussed in the paper.
| [
{
"version": "v1",
"created": "Tue, 28 May 2024 01:55:35 GMT"
},
{
"version": "v2",
"created": "Thu, 30 May 2024 02:12:05 GMT"
},
{
"version": "v3",
"created": "Fri, 15 Nov 2024 03:25:40 GMT"
},
{
"version": "v4",
"created": "Sun, 5 Jan 2025 14:35:49 GMT"
},
{
"version": "v5",
"created": "Fri, 4 Apr 2025 13:31:38 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Huang",
"Chenyu",
""
],
[
"Tang",
"Zhengyang",
""
],
[
"Hu",
"Shixi",
""
],
[
"Jiang",
"Ruoqing",
""
],
[
"Zheng",
"Xin",
""
],
[
"Ge",
"Dongdong",
""
],
[
"Wang",
"Benyou",
""
],
[
"Wang",
"Zizhuo",
""
]
] | TITLE: ORLM: A Customizable Framework in Training Large Models for Automated
Optimization Modeling
ABSTRACT: Optimization modeling plays a critical role in the application of Operations
Research (OR) tools to address real-world problems, yet it poses challenges
and requires extensive expertise from OR experts. With the advent of large
language models (LLMs), new opportunities have emerged to streamline and
automate such tasks. However, current research predominantly relies on
closed-source LLMs such as GPT-4, along with extensive prompt engineering
techniques. This reliance stems from the scarcity of high-quality training
datasets for optimization modeling, resulting in elevated costs, prolonged
processing times, and privacy concerns. To address these challenges, our work
is the first to propose a viable path for training open-source LLMs that are
capable of optimization modeling and developing solver codes, eventually
leading to a superior ability for automating optimization modeling and solving.
In particular, we design OR-Instruct, a semi-automated data synthesis
framework for optimization modeling that enables customizable enhancements for
specific scenarios or model types. This work also introduces IndustryOR, the
first industrial benchmark for evaluating LLMs in solving practical OR
problems. We train several 7B-scale open-source LLMs using synthesized data
(dubbed ORLMs, https://github.com/Cardinal-Operations/ORLM), which exhibit
significantly enhanced optimization modeling capabilities, achieving
competitive performance across the NL4OPT, MAMO, and IndustryOR benchmarks.
Additionally, our experiments highlight the potential of scaling law and
reinforcement learning to further enhance the performance of ORLMs. The
workflows and human-machine interaction paradigms of ORLMs in practical
industrial applications are also discussed in the paper.
|
2406.13221 | John Chiang | John Chiang | Privacy-Preserving Logistic Regression Training on Large Datasets | null | null | null | null | cs.CR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Privacy-preserving machine learning is one class of cryptographic methods
that aim to analyze private and sensitive data while keeping privacy, such as
homomorphic logistic regression training over large encrypted data. In this
paper, we propose an efficient algorithm for logistic regression training on
large encrypted data using Homomorphic Encryption (HE), which is the mini-batch
version of recent methods using a faster gradient variant called
$\texttt{quadratic gradient}$. It is claimed that $\texttt{quadratic gradient}$
can integrate curve information (Hessian matrix) into the gradient and
therefore can effectively accelerate the first-order gradient (descent)
algorithms. We also implement the full-batch version of their method when the
encrypted dataset is so large that it has to be encrypted in the mini-batch
manner. We compare our mini-batch algorithm with our full-batch implementation
method on real financial data consisting of 422,108 samples with 200 features.
Given the inefficiency of HEs, our results are inspiring and demonstrate that
logistic regression training on large encrypted datasets is practically
feasible, marking a significant milestone in our understanding.
| [
{
"version": "v1",
"created": "Wed, 19 Jun 2024 05:19:20 GMT"
},
{
"version": "v2",
"created": "Wed, 14 Aug 2024 09:07:59 GMT"
},
{
"version": "v3",
"created": "Thu, 24 Oct 2024 10:08:02 GMT"
},
{
"version": "v4",
"created": "Fri, 4 Apr 2025 08:57:16 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Chiang",
"John",
""
]
] | TITLE: Privacy-Preserving Logistic Regression Training on Large Datasets
ABSTRACT: Privacy-preserving machine learning is one class of cryptographic methods
that aim to analyze private and sensitive data while keeping privacy, such as
homomorphic logistic regression training over large encrypted data. In this
paper, we propose an efficient algorithm for logistic regression training on
large encrypted data using Homomorphic Encryption (HE), which is the mini-batch
version of recent methods using a faster gradient variant called
$\texttt{quadratic gradient}$. It is claimed that $\texttt{quadratic gradient}$
can integrate curve information (Hessian matrix) into the gradient and
therefore can effectively accelerate the first-order gradient (descent)
algorithms. We also implement the full-batch version of their method when the
encrypted dataset is so large that it has to be encrypted in the mini-batch
manner. We compare our mini-batch algorithm with our full-batch implementation
method on real financial data consisting of 422,108 samples with 200 features.
Given the inefficiency of HEs, our results are inspiring and demonstrate that
logistic regression training on large encrypted datasets is practically
feasible, marking a significant milestone in our understanding.
|
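As an aside on the mini-batch training loop described above, here is a minimal, hypothetical NumPy sketch (plaintext only, no homomorphic encryption, and not the paper's quadratic-gradient method) of mini-batch logistic regression with Nesterov-style acceleration, the baseline that the faster gradient variant builds on.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 2000, 20
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true > 0).astype(float)       # toy binary labels

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

w = np.zeros(d)
v = np.zeros(d)                          # velocity for Nesterov acceleration
lr, momentum, batch_size = 0.1, 0.9, 128

for epoch in range(30):
    order = rng.permutation(n)
    for start in range(0, n, batch_size):
        idx = order[start:start + batch_size]
        lookahead = w + momentum * v                      # Nesterov look-ahead point
        grad = X[idx].T @ (sigmoid(X[idx] @ lookahead) - y[idx]) / len(idx)
        v = momentum * v - lr * grad
        w = w + v

accuracy = ((sigmoid(X @ w) > 0.5) == y).mean()
print(f"training accuracy: {accuracy:.3f}")
```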
2406.13363 | Ryoma Kumon | Ryoma Kumon, Daiki Matsuoka, Hitomi Yanaka | Evaluating Structural Generalization in Neural Machine Translation | To appear at ACL 2024 findings | null | 10.18653/v1/2024.findings-acl.783 | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Compositional generalization refers to the ability to generalize to novel
combinations of previously observed words and syntactic structures. Since it is
regarded as a desired property of neural models, recent work has assessed
compositional generalization in machine translation as well as semantic
parsing. However, previous evaluations with machine translation have focused
mostly on lexical generalization (i.e., generalization to unseen combinations
of known words). Thus, it remains unclear to what extent models can translate
sentences that require structural generalization (i.e., generalization to
different sorts of syntactic structures). To address this question, we
construct SGET, a machine translation dataset covering various types of
compositional generalization with control of words and sentence structures. We
evaluate neural machine translation models on SGET and show that they struggle
more in structural generalization than in lexical generalization. We also find
different performance trends in semantic parsing and machine translation, which
indicates the importance of evaluations across various tasks.
| [
{
"version": "v1",
"created": "Wed, 19 Jun 2024 09:09:11 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Kumon",
"Ryoma",
""
],
[
"Matsuoka",
"Daiki",
""
],
[
"Yanaka",
"Hitomi",
""
]
] | TITLE: Evaluating Structural Generalization in Neural Machine Translation
ABSTRACT: Compositional generalization refers to the ability to generalize to novel
combinations of previously observed words and syntactic structures. Since it is
regarded as a desired property of neural models, recent work has assessed
compositional generalization in machine translation as well as semantic
parsing. However, previous evaluations with machine translation have focused
mostly on lexical generalization (i.e., generalization to unseen combinations
of known words). Thus, it remains unclear to what extent models can translate
sentences that require structural generalization (i.e., generalization to
different sorts of syntactic structures). To address this question, we
construct SGET, a machine translation dataset covering various types of
compositional generalization with control of words and sentence structures. We
evaluate neural machine translation models on SGET and show that they struggle
more in structural generalization than in lexical generalization. We also find
different performance trends in semantic parsing and machine translation, which
indicates the importance of evaluations across various tasks.
|
2406.15523 | Yili Wang | Yili Wang, Yixin Liu, Xu Shen, Chenyu Li, Kaize Ding, Rui Miao, Ying
Wang, Shirui Pan, Xin Wang | Unifying Unsupervised Graph-Level Anomaly Detection and
Out-of-Distribution Detection: A Benchmark | null | null | null | null | cs.LG stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | To build safe and reliable graph machine learning systems, unsupervised
graph-level anomaly detection (GLAD) and unsupervised graph-level
out-of-distribution (OOD) detection (GLOD) have received significant attention
in recent years. Though those two lines of research indeed share the same
objective, they have been studied independently in the community due to
distinct evaluation setups, creating a gap that hinders the application and
evaluation of methods from one to the other. To bridge the gap, in this work,
we present a \underline{\textbf{U}}nified \underline{\textbf{B}}enchmark for
unsupervised \underline{\textbf{G}}raph-level \underline{\textbf{O}}OD and
anoma\underline{\textbf{L}}y \underline{\textbf{D}}etection (UB-GOLD), a
comprehensive evaluation framework that unifies GLAD and GLOD under the concept
of generalized graph-level OOD detection. Our benchmark encompasses 35 datasets
spanning four practical anomaly and OOD detection scenarios, facilitating the
comparison of 18 representative GLAD/GLOD methods. We conduct multi-dimensional
analyses to explore the effectiveness, OOD sensitivity spectrum, robustness,
and efficiency of existing methods, shedding light on their strengths and
limitations. Furthermore, we provide an open-source codebase
(https://github.com/UB-GOLD/UB-GOLD) of UB-GOLD to foster reproducible
research and outline potential directions for future investigations based on
our insights.
| [
{
"version": "v1",
"created": "Fri, 21 Jun 2024 04:07:43 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 12:19:21 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Wang",
"Yili",
""
],
[
"Liu",
"Yixin",
""
],
[
"Shen",
"Xu",
""
],
[
"Li",
"Chenyu",
""
],
[
"Ding",
"Kaize",
""
],
[
"Miao",
"Rui",
""
],
[
"Wang",
"Ying",
""
],
[
"Pan",
"Shirui",
""
],
[
"Wang",
"Xin",
""
]
] | TITLE: Unifying Unsupervised Graph-Level Anomaly Detection and
Out-of-Distribution Detection: A Benchmark
ABSTRACT: To build safe and reliable graph machine learning systems, unsupervised
graph-level anomaly detection (GLAD) and unsupervised graph-level
out-of-distribution (OOD) detection (GLOD) have received significant attention
in recent years. Though those two lines of research indeed share the same
objective, they have been studied independently in the community due to
distinct evaluation setups, creating a gap that hinders the application and
evaluation of methods from one to the other. To bridge the gap, in this work,
we present a \underline{\textbf{U}}nified \underline{\textbf{B}}enchmark for
unsupervised \underline{\textbf{G}}raph-level \underline{\textbf{O}}OD and
anoma\underline{\textbf{L}}y \underline{\textbf{D}}etection (UB-GOLD), a
comprehensive evaluation framework that unifies GLAD and GLOD under the concept
of generalized graph-level OOD detection. Our benchmark encompasses 35 datasets
spanning four practical anomaly and OOD detection scenarios, facilitating the
comparison of 18 representative GLAD/GLOD methods. We conduct multi-dimensional
analyses to explore the effectiveness, OOD sensitivity spectrum, robustness,
and efficiency of existing methods, shedding light on their strengths and
limitations. Furthermore, we provide an open-source codebase
(https://github.com/UB-GOLD/UB-GOLD) of UB-GOLD to foster reproducible
research and outline potential directions for future investigations based on
our insights.
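As a small illustration of the shared evaluation protocol that the benchmark unifies (scoring graphs and computing AUROC against ID-vs-OOD labels), a generic sketch follows; the scores are synthetic and the snippet does not use UB-GOLD's own API.

```python
# Generic GLAD/GLOD evaluation: each graph gets a score (higher = more
# anomalous / more OOD) and AUROC is computed against the ID/OOD labels.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
scores_id = rng.normal(loc=0.0, scale=1.0, size=500)    # in-distribution graphs
scores_ood = rng.normal(loc=1.5, scale=1.0, size=100)   # anomalous / OOD graphs

y_true = np.concatenate([np.zeros_like(scores_id), np.ones_like(scores_ood)])
y_score = np.concatenate([scores_id, scores_ood])
print("AUROC:", roc_auc_score(y_true, y_score))
```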
|
2406.15888 | Khai Le-Duc | Khai Le-Duc, Khai-Nguyen Nguyen, Long Vo-Dang, Truong-Son Hy | Real-time Speech Summarization for Medical Conversations | Interspeech 2024 (Oral) | null | null | null | cs.CL cs.AI cs.LG cs.SD eess.AS | http://creativecommons.org/licenses/by/4.0/ | In doctor-patient conversations, identifying medically relevant information
is crucial, posing the need for conversation summarization. In this work, we
propose the first deployable real-time speech summarization system for
real-world applications in industry, which generates a local summary after
every N speech utterances within a conversation and a global summary after the
end of a conversation. Our system could enhance user experience from a business
standpoint, while also reducing computational costs from a technical
perspective. Secondly, we present VietMed-Sum which, to our knowledge, is the
first speech summarization dataset for medical conversations. Thirdly, we are
the first to utilize an LLM and human annotators collaboratively to create
gold-standard and synthetic summaries for medical conversation summarization.
Finally, we present baseline results of state-of-the-art models on VietMed-Sum.
All code, data (English-translated and Vietnamese) and models are available
online: https://github.com/leduckhai/MultiMed/tree/master/VietMed-Sum
| [
{
"version": "v1",
"created": "Sat, 22 Jun 2024 16:37:51 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 14:12:54 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Le-Duc",
"Khai",
""
],
[
"Nguyen",
"Khai-Nguyen",
""
],
[
"Vo-Dang",
"Long",
""
],
[
"Hy",
"Truong-Son",
""
]
] | TITLE: Real-time Speech Summarization for Medical Conversations
ABSTRACT: In doctor-patient conversations, identifying medically relevant information
is crucial, posing the need for conversation summarization. In this work, we
propose the first deployable real-time speech summarization system for
real-world applications in industry, which generates a local summary after
every N speech utterances within a conversation and a global summary after the
end of a conversation. Our system could enhance user experience from a business
standpoint, while also reducing computational costs from a technical
perspective. Secondly, we present VietMed-Sum which, to our knowledge, is the
first speech summarization dataset for medical conversations. Thirdly, we are
the first to utilize an LLM and human annotators collaboratively to create
gold-standard and synthetic summaries for medical conversation summarization.
Finally, we present baseline results of state-of-the-art models on VietMed-Sum.
All code, data (English-translated and Vietnamese) and models are available
online: https://github.com/leduckhai/MultiMed/tree/master/VietMed-Sum
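A minimal sketch of the control flow described above (a local summary after every N utterances and a global summary at the end of the conversation); the summarize() callable is a hypothetical placeholder, not the paper's model.

```python
# Local summaries every N utterances, plus a global summary at the end.
from typing import Callable, List, Tuple

def run_realtime_summarizer(utterances: List[str],
                            summarize: Callable[[str], str],
                            n: int = 5) -> Tuple[List[str], str]:
    buffer, local_summaries = [], []
    for utt in utterances:
        buffer.append(utt)
        if len(buffer) == n:                  # local summary every N utterances
            local_summaries.append(summarize(" ".join(buffer)))
            buffer = []
    if buffer:                                # summarize any leftover utterances
        local_summaries.append(summarize(" ".join(buffer)))
    global_summary = summarize(" ".join(local_summaries))  # end of conversation
    return local_summaries, global_summary

# Toy usage with a trivial stand-in summarizer (truncate to 10 words).
demo_summarize = lambda text: " ".join(text.split()[:10])
local, global_ = run_realtime_summarizer(
    [f"utterance {i}" for i in range(12)], demo_summarize, n=5)
```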
|
2408.03745 | Dimitris Iakovidis | Georgia Sovatzidi, Michael D. Vasilakakis, and Dimitris K. Iakovidis | Intuitionistic Fuzzy Cognitive Maps for Interpretable Image
Classification | This work has been submitted for possible journal publication | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Several deep learning (DL) approaches have been proposed to deal with image
classification tasks. However, despite their effectiveness, they lack
interpretability, as they are unable to explain or justify their results. To
address the challenge of interpretable image classification, this paper
introduces a novel framework, named Interpretable Intuitionistic Fuzzy
Cognitive Maps (I2FCMs).Intuitionistic FCMs (iFCMs) have been proposed as an
extension of FCMs offering a natural mechanism to assess the quality of their
output through the estimation of hesitancy, a concept resembling human
hesitation in decision making. In the context of image classification,
hesitancy is considered as a degree of unconfidence with which an image is
categorized to a class. To the best of our knowledge this is the first time
iFCMs are applied for image classification. Further novel contributions of the
introduced framework include the following: a) a feature extraction process
focusing on the most informative image regions; b) a learning algorithm for
automatic data-driven determination of the intuitionistic fuzzy
interconnections of the iFCM, thereby reducing human intervention in the
definition of the graph structure; c) an inherently interpretable
classification approach based on image contents, providing understandable
explanations of its predictions, using linguistic terms. Furthermore, the
proposed I2FCM framework can be applied to DL models, including Convolutional
Neural Networks (CNNs), rendering them interpretable. The effectiveness of I2FCM
is evaluated on publicly available datasets, and the results confirm that it
can provide enhanced classification performance, while providing interpretable
inferences.
| [
{
"version": "v1",
"created": "Wed, 7 Aug 2024 12:58:39 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 16:28:33 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Sovatzidi",
"Georgia",
""
],
[
"Vasilakakis",
"Michael D.",
""
],
[
"Iakovidis",
"Dimitris K.",
""
]
] | TITLE: Intuitionistic Fuzzy Cognitive Maps for Interpretable Image
Classification
ABSTRACT: Several deep learning (DL) approaches have been proposed to deal with image
classification tasks. However, despite their effectiveness, they lack
interpretability, as they are unable to explain or justify their results. To
address the challenge of interpretable image classification, this paper
introduces a novel framework, named Interpretable Intuitionistic Fuzzy
Cognitive Maps (I2FCMs). Intuitionistic FCMs (iFCMs) have been proposed as an
extension of FCMs, offering a natural mechanism to assess the quality of their
output through the estimation of hesitancy, a concept resembling human
hesitation in decision making. In the context of image classification,
hesitancy is regarded as the degree of unconfidence with which an image is
categorized into a class. To the best of our knowledge, this is the first time
iFCMs have been applied to image classification. Further novel contributions of the
introduced framework include the following: a) a feature extraction process
focusing on the most informative image regions; b) a learning algorithm for
automatic data-driven determination of the intuitionistic fuzzy
interconnections of the iFCM, thereby reducing human intervention in the
definition of the graph structure; c) an inherently interpretable
classification approach based on image contents, providing understandable
explanations of its predictions, using linguistic terms. Furthermore, the
proposed I2FCM framework can be applied to DL models, including Convolutional
Neural Networks (CNNs), rendering them interpretable. The effectiveness of I2FCM
is evaluated on publicly available datasets, and the results confirm that it
can provide enhanced classification performance, while providing interpretable
inferences.
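For readers unfamiliar with the hesitancy notion referenced above, a generic illustration of the standard intuitionistic-fuzzy-set definition follows (hesitancy = 1 - membership - non-membership); this is background material only, not the paper's classification pipeline.

```python
# Standard intuitionistic-fuzzy-set hesitancy degree: pi = 1 - mu - nu.
def hesitancy(mu: float, nu: float) -> float:
    if mu < 0 or nu < 0 or mu + nu > 1:
        raise ValueError("require mu, nu >= 0 and mu + nu <= 1")
    return 1.0 - mu - nu

# An image assigned to a class with membership 0.7 and non-membership 0.2
# carries a hesitancy (unconfidence) of about 0.1.
print(hesitancy(0.7, 0.2))
```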
|
2409.11223 | Abu Saleh Musa Miah Dr. | Yuta Kaneko, Abu Saleh Musa Miah, Najmul Hassan, Hyoun-Sup Lee,
Si-Woong Jang, Jungpil Shin | Multimodal Attention-Enhanced Feature Fusion-based Weakly Supervised
Anomaly Violence Detection | null | IEEE Open Journal of the Computer Society, vol. 6, pp. 129-140,
2025 | 10.1109/OJCS.2024.3517154 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Weakly supervised video anomaly detection (WS-VAD) is a crucial area in
computer vision for developing intelligent surveillance systems. This system
uses three feature streams: RGB video, optical flow, and audio signals, where
each stream extracts complementary spatial and temporal features using an
enhanced attention module to improve detection accuracy and robustness. In the
first stream, we employed an attention-based, multi-stage feature enhancement
approach to improve spatial and temporal features from the RGB video where the
first stage consists of a ViT-based CLIP module, with top-k features
concatenated in parallel with I3D and Temporal Contextual Aggregation (TCA)
based rich spatiotemporal features. The second stage effectively captures
temporal dependencies using the Uncertainty-Regulated Dual Memory Units
(UR-DMU) model, which learns representations of normal and abnormal data
simultaneously, and the third stage is employed to select the most relevant
spatiotemporal features. The second stream extracts enhanced attention-based
spatiotemporal features from the optical flow modality by integrating deep
learning features with an attention module. The
audio stream captures auditory cues using an attention module integrated with
the VGGish model, aiming to detect anomalies based on sound patterns. These
streams enrich the model by incorporating motion and audio signals often
indicative of abnormal events undetectable through visual analysis alone. The
concatenation of the multimodal fusion leverages the strengths of each
modality, resulting in a comprehensive feature set that significantly improves
anomaly detection accuracy and robustness across three datasets. Extensive
experiments and strong results on the three benchmark datasets demonstrate the
effectiveness of the proposed system over existing state-of-the-art systems.
| [
{
"version": "v1",
"created": "Tue, 17 Sep 2024 14:17:52 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Kaneko",
"Yuta",
""
],
[
"Miah",
"Abu Saleh Musa",
""
],
[
"Hassan",
"Najmul",
""
],
[
"Lee",
"Hyoun-Sup",
""
],
[
"Jang",
"Si-Woong",
""
],
[
"Shin",
"Jungpil",
""
]
] | TITLE: Multimodal Attention-Enhanced Feature Fusion-based Weakly Supervised
Anomaly Violence Detection
ABSTRACT: Weakly supervised video anomaly detection (WS-VAD) is a crucial area in
computer vision for developing intelligent surveillance systems. This system
uses three feature streams: RGB video, optical flow, and audio signals, where
each stream extracts complementary spatial and temporal features using an
enhanced attention module to improve detection accuracy and robustness. In the
first stream, we employed an attention-based, multi-stage feature enhancement
approach to improve spatial and temporal features from the RGB video where the
first stage consists of a ViT-based CLIP module, with top-k features
concatenated in parallel with I3D and Temporal Contextual Aggregation (TCA)
based rich spatiotemporal features. The second stage effectively captures
temporal dependencies using the Uncertainty-Regulated Dual Memory Units
(UR-DMU) model, which learns representations of normal and abnormal data
simultaneously, and the third stage is employed to select the most relevant
spatiotemporal features. The second stream extracts enhanced attention-based
spatiotemporal features from the optical flow modality by integrating deep
learning features with an attention module. The
audio stream captures auditory cues using an attention module integrated with
the VGGish model, aiming to detect anomalies based on sound patterns. These
streams enrich the model by incorporating motion and audio signals often
indicative of abnormal events undetectable through visual analysis alone. The
concatenation of the multimodal fusion leverages the strengths of each
modality, resulting in a comprehensive feature set that significantly improves
anomaly detection accuracy and robustness across three datasets. Extensive
experiments and strong results on the three benchmark datasets demonstrate the
effectiveness of the proposed system over existing state-of-the-art systems.
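A minimal sketch of concatenation-based fusion of the three feature streams (RGB, flow, audio) into a per-snippet anomaly score; the feature dimensions are assumed for illustration, and the paper's attention modules and UR-DMU stage are not reproduced here.

```python
# Concatenate per-stream feature vectors and map them to an anomaly score.
import torch
import torch.nn as nn

class ConcatFusionHead(nn.Module):
    def __init__(self, d_rgb=512, d_flow=512, d_audio=128):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(d_rgb + d_flow + d_audio, 256),
            nn.ReLU(),
            nn.Linear(256, 1),
            nn.Sigmoid(),            # per-snippet anomaly score in [0, 1]
        )

    def forward(self, f_rgb, f_flow, f_audio):
        fused = torch.cat([f_rgb, f_flow, f_audio], dim=-1)
        return self.classifier(fused)

# Example: a batch of 4 video snippets with random features.
head = ConcatFusionHead()
score = head(torch.randn(4, 512), torch.randn(4, 512), torch.randn(4, 128))
```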
|
2409.13521 | Andrea Tagarelli | Lorenzo Zangari, Candida M. Greco, Davide Picca, Andrea Tagarelli | A Survey on Moral Foundation Theory and Pre-Trained Language Models:
Current Advances and Challenges | Accepted for publication with AI & Society, March 2025 | AI & Society, March 2025 | 10.1007/s00146-025-02225-w | null | cs.CL cs.AI cs.CY cs.DL cs.HC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Moral values have deep roots in early civilizations, codified within norms
and laws that regulated societal order and the common good. They play a crucial
role in understanding the psychological basis of human behavior and cultural
orientation. The Moral Foundation Theory (MFT) is a well-established framework
that identifies the core moral foundations underlying the manner in which
different cultures shape individual and social lives. Recent advancements in
natural language processing, particularly Pre-trained Language Models (PLMs),
have enabled the extraction and analysis of moral dimensions from textual data.
This survey presents a comprehensive review of MFT-informed PLMs, providing an
analysis of moral tendencies in PLMs and their application in the context of
the MFT. We also review relevant datasets and lexicons and discuss trends,
limitations, and future directions. By providing a structured overview of the
intersection between PLMs and MFT, this work bridges moral psychology insights
within the realm of PLMs, paving the way for further research and development
in creating morally aware AI systems.
| [
{
"version": "v1",
"created": "Fri, 20 Sep 2024 14:03:06 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 11:52:55 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Zangari",
"Lorenzo",
""
],
[
"Greco",
"Candida M.",
""
],
[
"Picca",
"Davide",
""
],
[
"Tagarelli",
"Andrea",
""
]
] | TITLE: A Survey on Moral Foundation Theory and Pre-Trained Language Models:
Current Advances and Challenges
ABSTRACT: Moral values have deep roots in early civilizations, codified within norms
and laws that regulated societal order and the common good. They play a crucial
role in understanding the psychological basis of human behavior and cultural
orientation. The Moral Foundation Theory (MFT) is a well-established framework
that identifies the core moral foundations underlying the manner in which
different cultures shape individual and social lives. Recent advancements in
natural language processing, particularly Pre-trained Language Models (PLMs),
have enabled the extraction and analysis of moral dimensions from textual data.
This survey presents a comprehensive review of MFT-informed PLMs, providing an
analysis of moral tendencies in PLMs and their application in the context of
the MFT. We also review relevant datasets and lexicons and discuss trends,
limitations, and future directions. By providing a structured overview of the
intersection between PLMs and MFT, this work bridges moral psychology insights
within the realm of PLMs, paving the way for further research and development
in creating morally aware AI systems.
|
2409.14729 | Jiahao Yu | Jiahao Yu, Yangguang Shao, Hanwen Miao, Junzheng Shi | PROMPTFUZZ: Harnessing Fuzzing Techniques for Robust Testing of Prompt
Injection in LLMs | null | null | null | null | cs.CR cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have gained widespread use in various
applications due to their powerful capability to generate human-like text.
However, prompt injection attacks, which involve overwriting a model's original
instructions with malicious prompts to manipulate the generated text, have
raised significant concerns about the security and reliability of LLMs.
Ensuring that LLMs are robust against such attacks is crucial for their
deployment in real-world applications, particularly in critical tasks.
In this paper, we propose PROMPTFUZZ, a novel testing framework that
leverages fuzzing techniques to systematically assess the robustness of LLMs
against prompt injection attacks. Inspired by software fuzzing, PROMPTFUZZ
selects promising seed prompts and generates a diverse set of prompt injections
to evaluate the target LLM's resilience. PROMPTFUZZ operates in two stages: the
prepare phase, which involves selecting promising initial seeds and collecting
few-shot examples, and the focus phase, which uses the collected examples to
generate diverse, high-quality prompt injections. Using PROMPTFUZZ, we can
uncover more vulnerabilities in LLMs, even those with strong defense prompts.
By deploying the generated attack prompts from PROMPTFUZZ in a real-world
competition, we achieved the 7th ranking out of over 4000 participants (top
0.14%) within 2 hours. Additionally, we construct a dataset to fine-tune LLMs
for enhanced robustness against prompt injection attacks. While the fine-tuned
model shows improved robustness, PROMPTFUZZ continues to identify
vulnerabilities, highlighting the importance of robust testing for LLMs. Our
work emphasizes the critical need for effective testing tools and provides a
practical framework for evaluating and improving the robustness of LLMs against
prompt injection attacks.
| [
{
"version": "v1",
"created": "Mon, 23 Sep 2024 06:08:32 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 23:03:17 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Yu",
"Jiahao",
""
],
[
"Shao",
"Yangguang",
""
],
[
"Miao",
"Hanwen",
""
],
[
"Shi",
"Junzheng",
""
]
] | TITLE: PROMPTFUZZ: Harnessing Fuzzing Techniques for Robust Testing of Prompt
Injection in LLMs
ABSTRACT: Large Language Models (LLMs) have gained widespread use in various
applications due to their powerful capability to generate human-like text.
However, prompt injection attacks, which involve overwriting a model's original
instructions with malicious prompts to manipulate the generated text, have
raised significant concerns about the security and reliability of LLMs.
Ensuring that LLMs are robust against such attacks is crucial for their
deployment in real-world applications, particularly in critical tasks.
In this paper, we propose PROMPTFUZZ, a novel testing framework that
leverages fuzzing techniques to systematically assess the robustness of LLMs
against prompt injection attacks. Inspired by software fuzzing, PROMPTFUZZ
selects promising seed prompts and generates a diverse set of prompt injections
to evaluate the target LLM's resilience. PROMPTFUZZ operates in two stages: the
prepare phase, which involves selecting promising initial seeds and collecting
few-shot examples, and the focus phase, which uses the collected examples to
generate diverse, high-quality prompt injections. Using PROMPTFUZZ, we can
uncover more vulnerabilities in LLMs, even those with strong defense prompts.
By deploying the generated attack prompts from PROMPTFUZZ in a real-world
competition, we achieved the 7th ranking out of over 4000 participants (top
0.14%) within 2 hours. Additionally, we construct a dataset to fine-tune LLMs
for enhanced robustness against prompt injection attacks. While the fine-tuned
model shows improved robustness, PROMPTFUZZ continues to identify
vulnerabilities, highlighting the importance of robust testing for LLMs. Our
work emphasizes the critical need for effective testing tools and provides a
practical framework for evaluating and improving the robustness of LLMs against
prompt injection attacks.
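An illustrative skeleton of the two-stage fuzzing loop described above (seed selection followed by mutation and testing); mutate(), target_llm(), and is_successful_injection() are hypothetical placeholders rather than PROMPTFUZZ's actual interfaces.

```python
# Seed-driven prompt-injection fuzzing loop (illustrative skeleton only).
import random
from typing import Callable, List

def fuzz(seeds: List[str],
         mutate: Callable[[str], str],
         target_llm: Callable[[str], str],
         is_successful_injection: Callable[[str], bool],
         budget: int = 100) -> List[str]:
    successes = []
    scores = {s: 1.0 for s in seeds}                # "promise" of each seed
    for _ in range(budget):
        seed = random.choices(list(scores), weights=list(scores.values()))[0]
        candidate = mutate(seed)                    # focus phase: new injection
        if is_successful_injection(target_llm(candidate)):
            successes.append(candidate)
            scores[seed] += 1.0                     # favour productive seeds
    return successes
```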
|
2409.18932 | Wenfeng Huang | Wenfeng Huang, Guoan Xu, Wenjing Jia, Stuart Perry and Guangwei Gao | ReviveDiff: A Universal Diffusion Model for Restoring Images in Adverse
Weather Conditions | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Images captured in challenging environments--such as nighttime, smoke, rainy
weather, and underwater--often suffer from significant degradation, resulting
in a substantial loss of visual quality. The effective restoration of these
degraded images is critical for the subsequent vision tasks. While many
existing approaches have successfully incorporated specific priors for
individual tasks, these tailored solutions limit their applicability to other
degradations. In this work, we propose a universal network architecture, dubbed
``ReviveDiff'', which can address various degradations and bring images back to
life by enhancing and restoring their quality. Our approach is inspired by the
observation that, unlike degradation caused by movement or electronic issues,
quality degradation under adverse conditions primarily stems from natural media
(such as fog, water, and low luminance), which generally preserves the original
structures of objects. To restore the quality of such images, we leveraged the
latest advancements in diffusion models and developed ReviveDiff to restore
image quality at both the macro and micro levels across several key factors
that determine image quality, such as sharpness, distortion, noise level, dynamic
range, and color accuracy. We rigorously evaluated ReviveDiff on seven
benchmark datasets covering five types of degrading conditions: Rainy,
Underwater, Low-light, Smoke, and Nighttime Hazy. Our experimental results
demonstrate that ReviveDiff outperforms the state-of-the-art methods both
quantitatively and visually.
| [
{
"version": "v1",
"created": "Fri, 27 Sep 2024 17:29:23 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 06:09:49 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Huang",
"Wenfeng",
""
],
[
"Xu",
"Guoan",
""
],
[
"Jia",
"Wenjing",
""
],
[
"Perry",
"Stuart",
""
],
[
"Gao",
"Guangwei",
""
]
] | TITLE: ReviveDiff: A Universal Diffusion Model for Restoring Images in Adverse
Weather Conditions
ABSTRACT: Images captured in challenging environments--such as nighttime, smoke, rainy
weather, and underwater--often suffer from significant degradation, resulting
in a substantial loss of visual quality. The effective restoration of these
degraded images is critical for the subsequent vision tasks. While many
existing approaches have successfully incorporated specific priors for
individual tasks, these tailored solutions limit their applicability to other
degradations. In this work, we propose a universal network architecture, dubbed
``ReviveDiff'', which can address various degradations and bring images back to
life by enhancing and restoring their quality. Our approach is inspired by the
observation that, unlike degradation caused by movement or electronic issues,
quality degradation under adverse conditions primarily stems from natural media
(such as fog, water, and low luminance), which generally preserves the original
structures of objects. To restore the quality of such images, we leveraged the
latest advancements in diffusion models and developed ReviveDiff to restore
image quality at both the macro and micro levels across several key factors
that determine image quality, such as sharpness, distortion, noise level, dynamic
range, and color accuracy. We rigorously evaluated ReviveDiff on seven
benchmark datasets covering five types of degrading conditions: Rainy,
Underwater, Low-light, Smoke, and Nighttime Hazy. Our experimental results
demonstrate that ReviveDiff outperforms the state-of-the-art methods both
quantitatively and visually.
|
2410.09893 | Enyu Zhou | Enyu Zhou, Guodong Zheng, Binghai Wang, Zhiheng Xi, Shihan Dou, Rong
Bao, Wei Shen, Limao Xiong, Jessica Fan, Yurong Mou, Rui Zheng, Tao Gui, Qi
Zhang, Xuanjing Huang | RMB: Comprehensively Benchmarking Reward Models in LLM Alignment | Accepted by ICLR2025 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Reward models (RMs) guide the alignment of large language models (LLMs),
steering them toward behaviors preferred by humans. Evaluating RMs is the key
to better aligning LLMs. However, the current evaluation of RMs may not
directly correspond to their alignment performance due to the limited
distribution of evaluation data and evaluation methods that are not closely
related to alignment objectives. To address these limitations, we propose RMB,
a comprehensive RM benchmark that covers over 49 real-world scenarios and
includes both pairwise and Best-of-N (BoN) evaluations to better reflect the
effectiveness of RMs in guiding alignment optimization. We demonstrate a
positive correlation between our benchmark and the downstream alignment task
performance. Based on our benchmark, we conduct extensive analysis on the
state-of-the-art RMs, revealing their generalization defects that were not
discovered by previous benchmarks, and highlighting the potential of generative
RMs. Furthermore, we delve into open questions in reward models, specifically
examining the effectiveness of majority voting for the evaluation of reward
models and analyzing the impact factors of generative RMs, including the
influence of evaluation criteria and instructing methods. Our evaluation code
and datasets are available at
https://github.com/Zhou-Zoey/RMB-Reward-Model-Benchmark.
| [
{
"version": "v1",
"created": "Sun, 13 Oct 2024 16:06:54 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 11:45:02 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Zhou",
"Enyu",
""
],
[
"Zheng",
"Guodong",
""
],
[
"Wang",
"Binghai",
""
],
[
"Xi",
"Zhiheng",
""
],
[
"Dou",
"Shihan",
""
],
[
"Bao",
"Rong",
""
],
[
"Shen",
"Wei",
""
],
[
"Xiong",
"Limao",
""
],
[
"Fan",
"Jessica",
""
],
[
"Mou",
"Yurong",
""
],
[
"Zheng",
"Rui",
""
],
[
"Gui",
"Tao",
""
],
[
"Zhang",
"Qi",
""
],
[
"Huang",
"Xuanjing",
""
]
] | TITLE: RMB: Comprehensively Benchmarking Reward Models in LLM Alignment
ABSTRACT: Reward models (RMs) guide the alignment of large language models (LLMs),
steering them toward behaviors preferred by humans. Evaluating RMs is the key
to better aligning LLMs. However, the current evaluation of RMs may not
directly correspond to their alignment performance due to the limited
distribution of evaluation data and evaluation methods that are not closely
related to alignment objectives. To address these limitations, we propose RMB,
a comprehensive RM benchmark that covers over 49 real-world scenarios and
includes both pairwise and Best-of-N (BoN) evaluations to better reflect the
effectiveness of RMs in guiding alignment optimization. We demonstrate a
positive correlation between our benchmark and the downstream alignment task
performance. Based on our benchmark, we conduct extensive analysis on the
state-of-the-art RMs, revealing their generalization defects that were not
discovered by previous benchmarks, and highlighting the potential of generative
RMs. Furthermore, we delve into open questions in reward models, specifically
examining the effectiveness of majority voting for the evaluation of reward
models and analyzing the impact factors of generative RMs, including the
influence of evaluation criteria and instructing methods. Our evaluation code
and datasets are available at
https://github.com/Zhou-Zoey/RMB-Reward-Model-Benchmark.
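A minimal sketch of the two evaluation modes mentioned above, pairwise accuracy and Best-of-N accuracy, for a generic reward-model scoring function; the data layout is an assumption for illustration and may differ from the benchmark's actual format.

```python
# Pairwise and Best-of-N (BoN) accuracy for a generic reward model rm(prompt, response).
from typing import Callable, List, Tuple

def pairwise_accuracy(rm: Callable[[str, str], float],
                      pairs: List[Tuple[str, str, str]]) -> float:
    # Each item: (prompt, chosen_response, rejected_response).
    hits = sum(rm(p, chosen) > rm(p, rejected) for p, chosen, rejected in pairs)
    return hits / len(pairs)

def bon_accuracy(rm: Callable[[str, str], float],
                 items: List[Tuple[str, List[str], int]]) -> float:
    # Each item: (prompt, N candidate responses, index of the best response).
    hits = 0
    for prompt, candidates, best_idx in items:
        scores = [rm(prompt, c) for c in candidates]
        hits += int(scores.index(max(scores)) == best_idx)
    return hits / len(items)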
|
2410.15316 | Huy Hoang Ha | Alan Dao (Gia Tuan Dao), Dinh Bach Vu, Huy Hoang Ha | Ichigo: Mixed-Modal Early-Fusion Realtime Voice Assistant | null | null | null | null | cs.CL cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) have revolutionized natural language processing,
but their application to speech-based tasks remains challenging due to the
complexities of integrating audio and text modalities. This paper introduces
Ichigo, a mixed-modal model that seamlessly processes interleaved sequences of
speech and text. Utilizing a tokenized early-fusion approach, Ichigo quantizes
speech into discrete tokens and employs a uniform transformer-based
architecture for both speech and text modalities. This method enables joint
reasoning and generation across modalities without the need for separate
adapters. We present a comprehensive training methodology, including
pre-training on multilingual speech recognition datasets and fine-tuning on a
curated instruction dataset. Ichigo demonstrates state-of-the-art performance
on speech question-answering benchmarks, outperforming existing open-source
speech language models and achieving comparable results to cascaded systems.
Notably, Ichigo exhibits a latency of just 111 ms to first token generation,
significantly lower than current models. Our approach not only advances the
field of multimodal AI but also provides a framework for smaller research teams
to contribute effectively to open-source speech-language models.
| [
{
"version": "v1",
"created": "Sun, 20 Oct 2024 07:03:49 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 13:57:22 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Apr 2025 08:29:19 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Dao",
"Alan",
"",
"Gia Tuan Dao"
],
[
"Vu",
"Dinh Bach",
""
],
[
"Ha",
"Huy Hoang",
""
]
] | TITLE: Ichigo: Mixed-Modal Early-Fusion Realtime Voice Assistant
ABSTRACT: Large Language Models (LLMs) have revolutionized natural language processing,
but their application to speech-based tasks remains challenging due to the
complexities of integrating audio and text modalities. This paper introduces
Ichigo, a mixed-modal model that seamlessly processes interleaved sequences of
speech and text. Utilizing a tokenized early-fusion approach, Ichigo quantizes
speech into discrete tokens and employs a uniform transformer-based
architecture for both speech and text modalities. This method enables joint
reasoning and generation across modalities without the need for separate
adapters. We present a comprehensive training methodology, including
pre-training on multilingual speech recognition datasets and fine-tuning on a
curated instruction dataset. Ichigo demonstrates state-of-the-art performance
on speech question-answering benchmarks, outperforming existing open-source
speech language models and achieving comparable results to cascaded systems.
Notably, Ichigo exhibits a latency of just 111 ms to first token generation,
significantly lower than current models. Our approach not only advances the
field of multimodal AI but also provides a framework for smaller research teams
to contribute effectively to open-source speech-language models.
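A toy sketch of the tokenized early-fusion idea: discrete speech-codec tokens are shifted into a reserved ID range and interleaved with text tokens in a single sequence; the vocabulary size, offset, and interleaving below are illustrative assumptions, not Ichigo's exact scheme.

```python
# Map speech-codec tokens into a reserved ID range and interleave with text.
TEXT_VOCAB_SIZE = 32000            # assumed text vocabulary size
SPEECH_OFFSET = TEXT_VOCAB_SIZE    # speech tokens live in [offset, offset + K)

def build_mixed_sequence(text_ids, speech_codes):
    speech_ids = [SPEECH_OFFSET + c for c in speech_codes]
    # A real system would likely add boundary/special tokens; here we simply
    # place a speech segment before the text prompt.
    return speech_ids + text_ids

print(build_mixed_sequence([5, 17, 42], [3, 1, 7]))  # [32003, 32001, 32007, 5, 17, 42]
```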
|
2410.22314 | Minghao Ning | Minghao Ning, Ahmad Reza Alghooneh, Chen Sun, Ruihe Zhang, Pouya
Panahandeh, Steven Tuer, Ehsan Hashemi and Amir Khajepour | An Efficient Approach to Generate Safe Drivable Space by
LiDAR-Camera-HDmap Fusion | null | 2024 IEEE 27th International Conference on Intelligent
Transportation Systems (ITSC) | 10.1109/ITSC58415.2024.10919608 | null | cs.RO cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this paper, we propose an accurate and robust perception module for
Autonomous Vehicles (AVs) for drivable space extraction. Perception is crucial
in autonomous driving, where many deep learning-based methods, while accurate
on benchmark datasets, fail to generalize effectively, especially in diverse
and unpredictable environments. Our work introduces a robust easy-to-generalize
perception module that leverages LiDAR, camera, and HD map data fusion to
deliver a safe and reliable drivable space in all weather conditions. We
present an adaptive ground removal and curb detection method integrated with HD
map data for enhanced obstacle detection reliability. Additionally, we propose
an adaptive DBSCAN clustering algorithm optimized for precipitation noise, and
a cost-effective LiDAR-camera frustum association that is resilient to
calibration discrepancies. Our comprehensive drivable space representation
incorporates all perception data, ensuring compatibility with vehicle
dimensions and road regulations. This approach not only improves generalization
and efficiency, but also significantly enhances safety in autonomous vehicle
operations. Our approach is tested on a real dataset and its reliability is
verified during the daily (including harsh snowy weather) operation of our
autonomous shuttle, WATonoBus.
| [
{
"version": "v1",
"created": "Tue, 29 Oct 2024 17:54:02 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Ning",
"Minghao",
""
],
[
"Alghooneh",
"Ahmad Reza",
""
],
[
"Sun",
"Chen",
""
],
[
"Zhang",
"Ruihe",
""
],
[
"Panahandeh",
"Pouya",
""
],
[
"Tuer",
"Steven",
""
],
[
"Hashemi",
"Ehsan",
""
],
[
"Khajepour",
"Amir",
""
]
] | TITLE: An Efficient Approach to Generate Safe Drivable Space by
LiDAR-Camera-HDmap Fusion
ABSTRACT: In this paper, we propose an accurate and robust perception module for
Autonomous Vehicles (AVs) for drivable space extraction. Perception is crucial
in autonomous driving, where many deep learning-based methods, while accurate
on benchmark datasets, fail to generalize effectively, especially in diverse
and unpredictable environments. Our work introduces a robust easy-to-generalize
perception module that leverages LiDAR, camera, and HD map data fusion to
deliver a safe and reliable drivable space in all weather conditions. We
present an adaptive ground removal and curb detection method integrated with HD
map data for enhanced obstacle detection reliability. Additionally, we propose
an adaptive DBSCAN clustering algorithm optimized for precipitation noise, and
a cost-effective LiDAR-camera frustum association that is resilient to
calibration discrepancies. Our comprehensive drivable space representation
incorporates all perception data, ensuring compatibility with vehicle
dimensions and road regulations. This approach not only improves generalization
and efficiency, but also significantly enhances safety in autonomous vehicle
operations. Our approach is tested on a real dataset and its reliability is
verified during the daily (including harsh snowy weather) operation of our
autonomous shuttle, WATonoBus.
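A rough sketch of range-adaptive DBSCAN clustering of LiDAR points (eps grows with range because returns get sparser with distance); the banded adaptation rule and parameters are assumptions for illustration and do not reproduce the paper's precipitation-noise handling.

```python
# Range-banded DBSCAN: each 10 m band gets its own eps, growing with range.
import numpy as np
from sklearn.cluster import DBSCAN

def adaptive_dbscan(points_xy, base_eps=0.3, eps_per_meter=0.02, min_samples=5):
    ranges = np.linalg.norm(points_xy, axis=1)
    labels = np.full(len(points_xy), -1)
    next_label = 0
    for lo in range(0, int(ranges.max()) + 1, 10):      # 10 m range bands
        mask = (ranges >= lo) & (ranges < lo + 10)
        if mask.sum() < min_samples:
            continue
        eps = base_eps + eps_per_meter * (lo + 5)       # larger eps farther out
        band = DBSCAN(eps=eps, min_samples=min_samples).fit_predict(points_xy[mask])
        band[band >= 0] += next_label
        labels[mask] = band
        next_label = max(next_label, labels.max() + 1)
    return labels

pts = np.random.default_rng(0).uniform(-40.0, 40.0, size=(2000, 2))
print(np.unique(adaptive_dbscan(pts)))                  # -1 is noise
```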
|
2410.23132 | Tassilo Wald | Tassilo Wald, Constantin Ulrich, Stanislav Lukyanenko, Andrei
Goncharov, Alberto Paderno, Maximilian Miller, Leander Maerkisch, Paul F.
J\"ager, Klaus Maier-Hein | Revisiting MAE pre-training for 3D medical image segmentation | CVPR 2025. Update to Camera-Ready | null | null | null | cs.CV cs.AI cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | Self-Supervised Learning (SSL) presents an exciting opportunity to unlock the
potential of vast, untapped clinical datasets, for various downstream
applications that suffer from the scarcity of labeled data. While SSL has
revolutionized fields like natural language processing and computer vision, its
adoption in 3D medical image computing has been limited by three key pitfalls:
Small pre-training dataset sizes, architectures inadequate for 3D medical image
analysis, and insufficient evaluation practices. In this paper, we address
these issues by i) leveraging a large-scale dataset of 39k 3D brain MRI volumes
and ii) using a Residual Encoder U-Net architecture within the state-of-the-art
nnU-Net framework. iii) A robust development framework, incorporating 5
development and 8 testing brain MRI segmentation datasets, allowed
performance-driven design decisions to optimize the simple concept of Masked
Auto Encoders (MAEs) for 3D CNNs. The resulting model not only surpasses
previous SSL methods but also outperforms the strong nnU-Net baseline by an
average of approximately 3 Dice points, setting a new state of the art. Our code
and models are made available here.
| [
{
"version": "v1",
"created": "Wed, 30 Oct 2024 15:42:59 GMT"
},
{
"version": "v2",
"created": "Mon, 2 Dec 2024 12:05:29 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Apr 2025 15:51:37 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Wald",
"Tassilo",
""
],
[
"Ulrich",
"Constantin",
""
],
[
"Lukyanenko",
"Stanislav",
""
],
[
"Goncharov",
"Andrei",
""
],
[
"Paderno",
"Alberto",
""
],
[
"Miller",
"Maximilian",
""
],
[
"Maerkisch",
"Leander",
""
],
[
"Jäger",
"Paul F.",
""
],
[
"Maier-Hein",
"Klaus",
""
]
] | TITLE: Revisiting MAE pre-training for 3D medical image segmentation
ABSTRACT: Self-Supervised Learning (SSL) presents an exciting opportunity to unlock the
potential of vast, untapped clinical datasets, for various downstream
applications that suffer from the scarcity of labeled data. While SSL has
revolutionized fields like natural language processing and computer vision, its
adoption in 3D medical image computing has been limited by three key pitfalls:
Small pre-training dataset sizes, architectures inadequate for 3D medical image
analysis, and insufficient evaluation practices. In this paper, we address
these issues by i) leveraging a large-scale dataset of 39k 3D brain MRI volumes
and ii) using a Residual Encoder U-Net architecture within the state-of-the-art
nnU-Net framework. iii) A robust development framework, incorporating 5
development and 8 testing brain MRI segmentation datasets, allowed
performance-driven design decisions to optimize the simple concept of Masked
Auto Encoders (MAEs) for 3D CNNs. The resulting model not only surpasses
previous SSL methods but also outperforms the strong nnU-Net baseline by an
average of approximately 3 Dice points, setting a new state of the art. Our code
and models are made available here.
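A minimal sketch of MAE-style random masking of non-overlapping 3D patches, as a generic illustration of the masked-autoencoder concept being optimized; the patch size and mask ratio are generic choices, not the configuration used in the paper.

```python
# Split a 3D volume into non-overlapping patches and zero out a random subset.
import numpy as np

def mask_3d_patches(volume, patch=16, mask_ratio=0.75, seed=0):
    d, h, w = volume.shape
    gd, gh, gw = d // patch, h // patch, w // patch
    n_patches = gd * gh * gw
    rng = np.random.default_rng(seed)
    masked_idx = rng.choice(n_patches, size=int(mask_ratio * n_patches), replace=False)
    masked = volume.copy()
    for idx in masked_idx:
        z, y, x = np.unravel_index(idx, (gd, gh, gw))
        masked[z*patch:(z+1)*patch, y*patch:(y+1)*patch, x*patch:(x+1)*patch] = 0.0
    return masked, masked_idx

vol = np.random.default_rng(1).normal(size=(64, 64, 64)).astype(np.float32)
masked_vol, idx = mask_3d_patches(vol)
print(len(idx), "of", (64 // 16) ** 3, "patches masked")
```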
|
2411.02624 | Minghao Ning | Minghao Ning, Yaodong Cui, Yufeng Yang, Shucheng Huang, Zhenan Liu,
Ahmad Reza Alghooneh, Ehsan Hashemi and Amir Khajepour | Enhancing Indoor Mobility with Connected Sensor Nodes: A Real-Time,
Delay-Aware Cooperative Perception Approach | null | 2024 IEEE 27th International Conference on Intelligent
Transportation Systems (ITSC) | 10.1109/ITSC58415.2024.10919722 | null | cs.CV cs.AI cs.RO | http://creativecommons.org/licenses/by/4.0/ | This paper presents a novel real-time, delay-aware cooperative perception
system designed for intelligent mobility platforms operating in dynamic indoor
environments. The system contains a network of multi-modal sensor nodes and a
central node that collectively provide perception services to mobility
platforms. The proposed Hierarchical Clustering Considering the Scanning
Pattern and the Ground Contacting Feature based Lidar-Camera Fusion improve
intra-node perception in crowded environments. The system also features
delay-aware global perception to synchronize and aggregate data across nodes.
To validate our approach, we introduced the Indoor Pedestrian Tracking dataset,
compiled from data captured by two indoor sensor nodes. Our experiments,
compared to baselines, demonstrate significant improvements in detection
accuracy and robustness against delays. The dataset is available in the
repository: https://github.com/NingMingHao/MVSLab-IndoorCooperativePerception
| [
{
"version": "v1",
"created": "Mon, 4 Nov 2024 21:31:45 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Ning",
"Minghao",
""
],
[
"Cui",
"Yaodong",
""
],
[
"Yang",
"Yufeng",
""
],
[
"Huang",
"Shucheng",
""
],
[
"Liu",
"Zhenan",
""
],
[
"Alghooneh",
"Ahmad Reza",
""
],
[
"Hashemi",
"Ehsan",
""
],
[
"Khajepour",
"Amir",
""
]
] | TITLE: Enhancing Indoor Mobility with Connected Sensor Nodes: A Real-Time,
Delay-Aware Cooperative Perception Approach
ABSTRACT: This paper presents a novel real-time, delay-aware cooperative perception
system designed for intelligent mobility platforms operating in dynamic indoor
environments. The system contains a network of multi-modal sensor nodes and a
central node that collectively provide perception services to mobility
platforms. The proposed Hierarchical Clustering Considering the Scanning
Pattern and the Ground Contacting Feature based Lidar-Camera Fusion improve
intra-node perception in crowded environments. The system also features
delay-aware global perception to synchronize and aggregate data across nodes.
To validate our approach, we introduced the Indoor Pedestrian Tracking dataset,
compiled from data captured by two indoor sensor nodes. Our experiments,
compared to baselines, demonstrate significant improvements in detection
accuracy and robustness against delays. The dataset is available in the
repository: https://github.com/NingMingHao/MVSLab-IndoorCooperativePerception
|
2411.03321 | Yue Zhao | Chenxiao Yu, Zhaotian Weng, Yuangang Li, Zheng Li, Xiyang Hu, Yue Zhao | Towards More Accurate US Presidential Election via Multi-step Reasoning
with Large Language Models | This research is ongoing work. Xiyang Hu and Yue Zhao are the
corresponding authors | null | null | null | cs.AI cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Can Large Language Models (LLMs) accurately predict election outcomes? While
LLMs have demonstrated impressive performance in various domains, including
healthcare, legal analysis, and creative tasks, their ability to forecast
elections remains unknown. Election prediction poses unique challenges, such as
limited voter-level data, rapidly changing political landscapes, and the need
to model complex human behavior. To address these challenges, we introduce a
multi-step reasoning framework designed for political analysis. Our approach is
validated on real-world data from the American National Election Studies (ANES)
2016 and 2020, as well as synthetic personas generated by the leading machine
learning framework, offering scalable datasets for voter behavior modeling. To
capture temporal dynamics, we incorporate candidates' policy positions and
biographical details, ensuring that the model adapts to evolving political
contexts. Drawing on Chain of Thought prompting, our multi-step reasoning
pipeline systematically integrates demographic, ideological, and time-dependent
factors, enhancing the model's predictive power.
| [
{
"version": "v1",
"created": "Mon, 21 Oct 2024 06:18:53 GMT"
},
{
"version": "v2",
"created": "Wed, 27 Nov 2024 07:05:31 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Apr 2025 01:33:20 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Yu",
"Chenxiao",
""
],
[
"Weng",
"Zhaotian",
""
],
[
"Li",
"Yuangang",
""
],
[
"Li",
"Zheng",
""
],
[
"Hu",
"Xiyang",
""
],
[
"Zhao",
"Yue",
""
]
] | TITLE: Towards More Accurate US Presidential Election via Multi-step Reasoning
with Large Language Models
ABSTRACT: Can Large Language Models (LLMs) accurately predict election outcomes? While
LLMs have demonstrated impressive performance in various domains, including
healthcare, legal analysis, and creative tasks, their ability to forecast
elections remains unknown. Election prediction poses unique challenges, such as
limited voter-level data, rapidly changing political landscapes, and the need
to model complex human behavior. To address these challenges, we introduce a
multi-step reasoning framework designed for political analysis. Our approach is
validated on real-world data from the American National Election Studies (ANES)
2016 and 2020, as well as synthetic personas generated by the leading machine
learning framework, offering scalable datasets for voter behavior modeling. To
capture temporal dynamics, we incorporate candidates' policy positions and
biographical details, ensuring that the model adapts to evolving political
contexts. Drawing on Chain of Thought prompting, our multi-step reasoning
pipeline systematically integrates demographic, ideological, and time-dependent
factors, enhancing the model's predictive power.
|
2411.06789 | Shenghai Yuan | Yizhuo Yang and Shenghai Yuan and Muqing Cao and Jianfei Yang and
Lihua Xie | AV-PedAware: Self-Supervised Audio-Visual Fusion for Dynamic Pedestrian
Awareness | This work has been accepted for publication at the 2023 IEEE/RSJ
International Conference on Intelligent Robots and Systems (IROS). Personal
use is permitted. For other uses, permission from IEEE is required | null | 10.1109/IROS55552.2023.10342257 | null | cs.RO | http://creativecommons.org/licenses/by-nc-nd/4.0/ | In this study, we introduce AV-PedAware, a self-supervised audio-visual
fusion system designed to improve dynamic pedestrian awareness for robotics
applications. Pedestrian awareness is a critical requirement in many robotics
applications. However, traditional approaches that rely on cameras and LIDARs
to cover multiple views can be expensive and susceptible to issues such as
changes in illumination, occlusion, and weather conditions. Our proposed
solution replicates human perception for 3D pedestrian detection using low-cost
audio and visual fusion. This study represents the first attempt to employ
audio-visual fusion to monitor footstep sounds for the purpose of predicting
the movements of pedestrians in the vicinity. The system is trained through
self-supervised learning based on LIDAR-generated labels, making it a
cost-effective alternative to LIDAR-based pedestrian awareness. AV-PedAware
achieves comparable results to LIDAR-based systems at a fraction of the cost.
By utilizing an attention mechanism, it can handle dynamic lighting and
occlusions, overcoming the limitations of traditional LIDAR and camera-based
systems. To evaluate our approach's effectiveness, we collected a new
multimodal pedestrian detection dataset and conducted experiments that
demonstrate the system's ability to provide reliable 3D detection results using
only audio and visual data, even in extreme visual conditions. We will make our
collected dataset and source code available online for the community to
encourage further development in the field of robotics perception systems.
| [
{
"version": "v1",
"created": "Mon, 11 Nov 2024 08:36:17 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 10:55:28 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Yang",
"Yizhuo",
""
],
[
"Yuan",
"Shenghai",
""
],
[
"Cao",
"Muqing",
""
],
[
"Yang",
"Jianfei",
""
],
[
"Xie",
"Lihua",
""
]
] | TITLE: AV-PedAware: Self-Supervised Audio-Visual Fusion for Dynamic Pedestrian
Awareness
ABSTRACT: In this study, we introduce AV-PedAware, a self-supervised audio-visual
fusion system designed to improve dynamic pedestrian awareness for robotics
applications. Pedestrian awareness is a critical requirement in many robotics
applications. However, traditional approaches that rely on cameras and LIDARs
to cover multiple views can be expensive and susceptible to issues such as
changes in illumination, occlusion, and weather conditions. Our proposed
solution replicates human perception for 3D pedestrian detection using low-cost
audio and visual fusion. This study represents the first attempt to employ
audio-visual fusion to monitor footstep sounds for the purpose of predicting
the movements of pedestrians in the vicinity. The system is trained through
self-supervised learning based on LIDAR-generated labels, making it a
cost-effective alternative to LIDAR-based pedestrian awareness. AV-PedAware
achieves comparable results to LIDAR-based systems at a fraction of the cost.
By utilizing an attention mechanism, it can handle dynamic lighting and
occlusions, overcoming the limitations of traditional LIDAR and camera-based
systems. To evaluate our approach's effectiveness, we collected a new
multimodal pedestrian detection dataset and conducted experiments that
demonstrate the system's ability to provide reliable 3D detection results using
only audio and visual data, even in extreme visual conditions. We will make our
collected dataset and source code available online for the community to
encourage further development in the field of robotics perception systems.
|
2411.07462 | Li Niu | Jiaxuan Chen, Bo Zhang, Qingdong He, Jinlong Peng, Li Niu | MureObjectStitch: Multi-reference Image Composition | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Generative image composition aims to regenerate the given foreground object
in the background image to produce a realistic composite image. Existing
methods struggle to preserve the foreground details and adjust the foreground
pose/viewpoint at the same time. In this work, we propose an
effective finetuning strategy for generative image composition model, in which
we finetune a pretrained model using one or more images containing the same
foreground object. Moreover, we propose a multi-reference strategy, which
allows the model to take in multiple reference images of the foreground object.
The experiments on MureCOM dataset verify the effectiveness of our method. The
code and model have been released at
https://github.com/bcmi/MureObjectStitch-Image-Composition.
| [
{
"version": "v1",
"created": "Tue, 12 Nov 2024 00:53:20 GMT"
},
{
"version": "v2",
"created": "Sat, 11 Jan 2025 01:08:26 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Apr 2025 02:49:47 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Chen",
"Jiaxuan",
""
],
[
"Zhang",
"Bo",
""
],
[
"He",
"Qingdong",
""
],
[
"Peng",
"Jinlong",
""
],
[
"Niu",
"Li",
""
]
] | TITLE: MureObjectStitch: Multi-reference Image Composition
ABSTRACT: Generative image composition aims to regenerate the given foreground object
in the background image to produce a realistic composite image. Existing
methods struggle to preserve the foreground details and adjust the foreground
pose/viewpoint at the same time. In this work, we propose an
effective finetuning strategy for generative image composition model, in which
we finetune a pretrained model using one or more images containing the same
foreground object. Moreover, we propose a multi-reference strategy, which
allows the model to take in multiple reference images of the foreground object.
The experiments on MureCOM dataset verify the effectiveness of our method. The
code and model have been released at
https://github.com/bcmi/MureObjectStitch-Image-Composition.
|
2411.07660 | Cheng Jin | Cheng Jin, Luyang Luo, Huangjing Lin, Jun Hou, Hao Chen | HMIL: Hierarchical Multi-Instance Learning for Fine-Grained Whole Slide
Image Classification | Accepted by TMI 2025 | null | 10.1109/TMI.2024.3520602 | null | cs.CV | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Fine-grained classification of whole slide images (WSIs) is essential in
precision oncology, enabling precise cancer diagnosis and personalized
treatment strategies. The core of this task involves distinguishing subtle
morphological variations within the same broad category of gigapixel-resolution
images, which presents a significant challenge. While the multi-instance
learning (MIL) paradigm alleviates the computational burden of WSIs, existing
MIL methods often overlook hierarchical label correlations, treating
fine-grained classification as a flat multi-class classification task. To
overcome these limitations, we introduce a novel hierarchical multi-instance
learning (HMIL) framework. By facilitating the hierarchical alignment of
inherent relationships between different levels of the label hierarchy at both
the instance and bag levels, our approach provides a more structured and informative learning
process. Specifically, HMIL incorporates a class-wise attention mechanism that
aligns hierarchical information at both the instance and bag levels.
Furthermore, we introduce supervised contrastive learning to enhance the
discriminative capability for fine-grained classification and a
curriculum-based dynamic weighting module to adaptively balance the
hierarchical feature during training. Extensive experiments on our large-scale
cytology cervical cancer (CCC) dataset and two public histology datasets, BRACS
and PANDA, demonstrate the state-of-the-art class-wise and overall performance
of our HMIL framework. Our source code is available at
https://github.com/ChengJin-git/HMIL.
| [
{
"version": "v1",
"created": "Tue, 12 Nov 2024 09:22:00 GMT"
},
{
"version": "v2",
"created": "Sun, 15 Dec 2024 11:52:45 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Apr 2025 12:47:34 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Jin",
"Cheng",
""
],
[
"Luo",
"Luyang",
""
],
[
"Lin",
"Huangjing",
""
],
[
"Hou",
"Jun",
""
],
[
"Chen",
"Hao",
""
]
] | TITLE: HMIL: Hierarchical Multi-Instance Learning for Fine-Grained Whole Slide
Image Classification
ABSTRACT: Fine-grained classification of whole slide images (WSIs) is essential in
precision oncology, enabling precise cancer diagnosis and personalized
treatment strategies. The core of this task involves distinguishing subtle
morphological variations within the same broad category of gigapixel-resolution
images, which presents a significant challenge. While the multi-instance
learning (MIL) paradigm alleviates the computational burden of WSIs, existing
MIL methods often overlook hierarchical label correlations, treating
fine-grained classification as a flat multi-class classification task. To
overcome these limitations, we introduce a novel hierarchical multi-instance
learning (HMIL) framework. By facilitating the hierarchical alignment of
inherent relationships between different levels of the label hierarchy at both
the instance and bag levels, our approach provides a more structured and informative learning
process. Specifically, HMIL incorporates a class-wise attention mechanism that
aligns hierarchical information at both the instance and bag levels.
Furthermore, we introduce supervised contrastive learning to enhance the
discriminative capability for fine-grained classification and a
curriculum-based dynamic weighting module to adaptively balance the
hierarchical feature during training. Extensive experiments on our large-scale
cytology cervical cancer (CCC) dataset and two public histology datasets, BRACS
and PANDA, demonstrate the state-of-the-art class-wise and overall performance
of our HMIL framework. Our source code is available at
https://github.com/ChengJin-git/HMIL.
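A minimal sketch of the generic attention-based multi-instance pooling that MIL frameworks of this kind build on (patch features weighted and aggregated into a slide-level representation); the hierarchical, class-wise attention and the contrastive/curriculum components of HMIL are not reproduced, and the dimensions are illustrative.

```python
# Attention-weighted aggregation of patch (instance) features into a bag feature.
import torch
import torch.nn as nn

class AttentionMILPooling(nn.Module):
    def __init__(self, dim=512, hidden=128):
        super().__init__()
        self.attn = nn.Sequential(nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, 1))

    def forward(self, instances):                              # (num_patches, dim)
        weights = torch.softmax(self.attn(instances), dim=0)   # (num_patches, 1)
        return (weights * instances).sum(dim=0)                # (dim,) bag feature

pool = AttentionMILPooling()
bag_feature = pool(torch.randn(1000, 512))   # 1000 patches from one slide
print(bag_feature.shape)                     # torch.Size([512])
```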
|
2411.09921 | Andong Deng | Andong Deng, Tongjia Chen, Shoubin Yu, Taojiannan Yang, Lincoln
Spencer, Yapeng Tian, Ajmal Saeed Mian, Mohit Bansal, Chen Chen | Motion-Grounded Video Reasoning: Understanding and Perceiving Motion at
Pixel Level | CVPR 2025 | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | In this paper, we introduce Motion-Grounded Video Reasoning, a new motion
understanding task that requires generating visual answers (video segmentation
masks) according to the input question, and hence needs implicit spatiotemporal
reasoning and grounding. This task extends existing spatiotemporal grounding
work focusing on explicit action/motion grounding, to a more general format by
enabling implicit reasoning via questions. To facilitate the development of the
new task, we collect a large-scale dataset called GROUNDMORE, which comprises
1,715 video clips and 249K object masks, deliberately designed with 4
question types (Causal, Sequential, Counterfactual, and Descriptive) for
benchmarking deep and comprehensive motion reasoning abilities. GROUNDMORE
uniquely requires models to generate visual answers, providing a more concrete
and visually interpretable response than plain texts. It evaluates models on
both spatiotemporal grounding and reasoning, fostering progress on complex
challenges in motion-related video reasoning, temporal perception, and
pixel-level understanding. Furthermore, we introduce a novel baseline model
named Motion-Grounded Video Reasoning Assistant (MORA). MORA incorporates the
multimodal reasoning ability from the Multimodal LLM, the pixel-level
perception capability from the grounding model (SAM), and the temporal
perception ability from a lightweight localization head. MORA achieves
respectable performance on GROUNDMORE, outperforming the best existing visual
grounding baseline model by an average of 21.5% (relative). We hope this novel
and challenging task will pave the way for future advancements in robust and
general motion understanding via video reasoning segmentation.
| [
{
"version": "v1",
"created": "Fri, 15 Nov 2024 03:45:09 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 03:20:03 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Deng",
"Andong",
""
],
[
"Chen",
"Tongjia",
""
],
[
"Yu",
"Shoubin",
""
],
[
"Yang",
"Taojiannan",
""
],
[
"Spencer",
"Lincoln",
""
],
[
"Tian",
"Yapeng",
""
],
[
"Mian",
"Ajmal Saeed",
""
],
[
"Bansal",
"Mohit",
""
],
[
"Chen",
"Chen",
""
]
] | TITLE: Motion-Grounded Video Reasoning: Understanding and Perceiving Motion at
Pixel Level
ABSTRACT: In this paper, we introduce Motion-Grounded Video Reasoning, a new motion
understanding task that requires generating visual answers (video segmentation
masks) according to the input question, and hence needs implicit spatiotemporal
reasoning and grounding. This task extends existing spatiotemporal grounding
work focusing on explicit action/motion grounding, to a more general format by
enabling implicit reasoning via questions. To facilitate the development of the
new task, we collect a large-scale dataset called GROUNDMORE, which comprises
1,715 video clips and 249K object masks, deliberately designed with 4
question types (Causal, Sequential, Counterfactual, and Descriptive) for
benchmarking deep and comprehensive motion reasoning abilities. GROUNDMORE
uniquely requires models to generate visual answers, providing a more concrete
and visually interpretable response than plain texts. It evaluates models on
both spatiotemporal grounding and reasoning, fostering progress on complex
challenges in motion-related video reasoning, temporal perception, and
pixel-level understanding. Furthermore, we introduce a novel baseline model
named Motion-Grounded Video Reasoning Assistant (MORA). MORA incorporates the
multimodal reasoning ability from the Multimodal LLM, the pixel-level
perception capability from the grounding model (SAM), and the temporal
perception ability from a lightweight localization head. MORA achieves
respectable performance on GROUNDMORE, outperforming the best existing visual
grounding baseline model by an average of 21.5% (relative). We hope this novel
and challenging task will pave the way for future advancements in robust and
general motion understanding via video reasoning segmentation.
|
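Benchmarks that score visual answers given as segmentation masks typically rely on region overlap; the sketch below computes per-frame mask IoU averaged over a clip. This is a generic evaluation sketch under that assumption, not the specific metric suite used for GROUNDMORE.

```python
import numpy as np

def mask_iou(pred, gt, eps=1e-7):
    """Region IoU between two binary masks of the same shape."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return float(inter) / (union + eps)

def clip_iou(pred_masks, gt_masks):
    """Mean per-frame IoU over a clip; masks are (T, H, W) arrays."""
    return float(np.mean([mask_iou(p, g) for p, g in zip(pred_masks, gt_masks)]))

# Toy usage: a 4-frame clip of 8x8 masks with a partly corrupted prediction.
rng = np.random.default_rng(1)
gt = rng.integers(0, 2, size=(4, 8, 8))
pred = gt.copy()
pred[:, :2, :] = 0
print(round(clip_iou(pred, gt), 3))
```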
2411.11896 | Saedeh Tahery | Saedeh Tahery, Fatemeh Hamid Akhlaghi, Termeh Amirsoleimani | HeartBERT: A Self-Supervised ECG Embedding Model for Efficient and
Effective Medical Signal Analysis | 23 pages, 8 Figures, 7 Tables | null | null | null | eess.SP cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The HeartBert model is introduced with three primary objectives: reducing the
need for labeled data, minimizing computational resources, and simultaneously
improving performance in machine learning systems that analyze
Electrocardiogram (ECG) signals. Inspired by Bidirectional Encoder
Representations from Transformers (BERT) in natural language processing and
enhanced with a self-supervised learning approach, the HeartBert model, built on
the RoBERTa architecture, generates sophisticated embeddings tailored for
ECG-based projects in the medical domain. To demonstrate the versatility,
generalizability, and efficiency of the proposed model, two key downstream
tasks have been selected: sleep stage detection and heartbeat classification.
HeartBERT-based systems, utilizing bidirectional LSTM heads, are designed to
address complex challenges. A series of practical experiments have been
conducted to demonstrate the superiority and advancements of HeartBERT,
particularly in terms of its ability to perform well with smaller training
datasets, reduced learning parameters, and effective performance compared to
rival models. The code and data are publicly available at
https://github.com/ecgResearch/HeartBert.
| [
{
"version": "v1",
"created": "Fri, 8 Nov 2024 14:25:00 GMT"
},
{
"version": "v2",
"created": "Wed, 18 Dec 2024 14:54:33 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Apr 2025 13:53:30 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Tahery",
"Saedeh",
""
],
[
"Akhlaghi",
"Fatemeh Hamid",
""
],
[
"Amirsoleimani",
"Termeh",
""
]
] | TITLE: HeartBERT: A Self-Supervised ECG Embedding Model for Efficient and
Effective Medical Signal Analysis
ABSTRACT: The HeartBert model is introduced with three primary objectives: reducing the
need for labeled data, minimizing computational resources, and simultaneously
improving performance in machine learning systems that analyze
Electrocardiogram (ECG) signals. Inspired by Bidirectional Encoder
Representations from Transformers (BERT) in natural language processing and
enhanced with a self-supervised learning approach, the HeartBert model, built on
the RoBERTa architecture, generates sophisticated embeddings tailored for
ECG-based projects in the medical domain. To demonstrate the versatility,
generalizability, and efficiency of the proposed model, two key downstream
tasks have been selected: sleep stage detection and heartbeat classification.
HeartBERT-based systems, utilizing bidirectional LSTM heads, are designed to
address complex challenges. A series of practical experiments have been
conducted to demonstrate the superiority and advancements of HeartBERT,
particularly in terms of its ability to perform well with smaller training
datasets, reduced learning parameters, and effective performance compared to
rival models. The code and data are publicly available at
https://github.com/ecgResearch/HeartBert.
|
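The abstract pairs the pretrained ECG encoder with bidirectional LSTM heads for the downstream tasks; the PyTorch sketch below shows one plausible such head operating on precomputed ECG-token embeddings. The embedding size, hidden width, and five-class output are assumptions for illustration, not the paper's configuration.

```python
import torch
import torch.nn as nn

class BiLSTMHead(nn.Module):
    """Bidirectional-LSTM classification head over a sequence of embeddings."""
    def __init__(self, embed_dim=768, hidden=128, num_classes=5):
        super().__init__()
        self.lstm = nn.LSTM(embed_dim, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(2 * hidden, num_classes)

    def forward(self, x):                        # x: (batch, seq_len, embed_dim)
        _, (h_n, _) = self.lstm(x)               # h_n: (2, batch, hidden)
        h = torch.cat([h_n[0], h_n[1]], dim=-1)  # last forward + backward states
        return self.fc(h)                        # (batch, num_classes) logits

# Toy usage: 8 sequences of 32 ECG-token embeddings.
head = BiLSTMHead()
logits = head(torch.randn(8, 32, 768))
print(logits.shape)  # torch.Size([8, 5])
```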
2411.12593 | Yuanbin Man | Yuanbin Man, Ying Huang, Chengming Zhang, Bingzhe Li, Wei Niu, Miao
Yin | AdaCM$^2$: On Understanding Extremely Long-Term Video with Adaptive
Cross-Modality Memory Reduction | CVPR 2025 Highlight | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The advancements in large language models (LLMs) have propelled the
improvement of video understanding tasks by incorporating LLMs with visual
models. However, most existing LLM-based models (e.g., VideoLLaMA, VideoChat)
are constrained to processing short-duration videos. Recent attempts understand
long-term videos by extracting and compressing visual features into a
fixed-size memory. Nevertheless, those methods leverage only the visual modality
to merge video tokens and overlook the correlation between visual and textual
queries, leading to difficulties in effectively handling complex
question-answering tasks. To address the challenges of long videos and complex
prompts, we propose AdaCM$^2$, which, for the first time, introduces an
adaptive cross-modality memory reduction approach to video-text alignment in an
auto-regressive manner on video streams. Our extensive experiments on various
video understanding tasks, such as video captioning, video question answering,
and video classification, demonstrate that AdaCM$^2$ achieves state-of-the-art
performance across multiple datasets while significantly reducing memory usage.
Notably, it achieves a 4.5% improvement across multiple tasks in the LVU
dataset with a GPU memory consumption reduction of up to 65%.
| [
{
"version": "v1",
"created": "Tue, 19 Nov 2024 18:04:13 GMT"
},
{
"version": "v2",
"created": "Tue, 18 Mar 2025 02:28:48 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Apr 2025 17:58:08 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Man",
"Yuanbin",
""
],
[
"Huang",
"Ying",
""
],
[
"Zhang",
"Chengming",
""
],
[
"Li",
"Bingzhe",
""
],
[
"Niu",
"Wei",
""
],
[
"Yin",
"Miao",
""
]
] | TITLE: AdaCM$^2$: On Understanding Extremely Long-Term Video with Adaptive
Cross-Modality Memory Reduction
ABSTRACT: The advancements in large language models (LLMs) have propelled the
improvement of video understanding tasks by incorporating LLMs with visual
models. However, most existing LLM-based models (e.g., VideoLLaMA, VideoChat)
are constrained to processing short-duration videos. Recent attempts understand
long-term videos by extracting and compressing visual features into a
fixed-size memory. Nevertheless, those methods leverage only the visual modality
to merge video tokens and overlook the correlation between visual and textual
queries, leading to difficulties in effectively handling complex
question-answering tasks. To address the challenges of long videos and complex
prompts, we propose AdaCM$^2$, which, for the first time, introduces an
adaptive cross-modality memory reduction approach to video-text alignment in an
auto-regressive manner on video streams. Our extensive experiments on various
video understanding tasks, such as video captioning, video question answering,
and video classification, demonstrate that AdaCM$^2$ achieves state-of-the-art
performance across multiple datasets while significantly reducing memory usage.
Notably, it achieves a 4.5% improvement across multiple tasks in the LVU
dataset with a GPU memory consumption reduction of up to 65%.
|
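A minimal sketch of query-conditioned token reduction, keeping only the visual tokens most similar to the text-query embedding; it illustrates the general idea of cross-modal memory reduction rather than the specific AdaCM$^2$ mechanism. The keep ratio and tensor shapes are assumptions.

```python
import numpy as np

def reduce_visual_tokens(visual_tokens, text_query, keep_ratio=0.25):
    """Keep the visual tokens most similar to the text-query embedding.

    visual_tokens : (N, d) memory of video tokens
    text_query    : (d,)   pooled embedding of the question
    Returns the retained (k, d) tokens and their indices.
    """
    v = visual_tokens / np.linalg.norm(visual_tokens, axis=1, keepdims=True)
    q = text_query / np.linalg.norm(text_query)
    scores = v @ q                                 # cosine similarity per token
    k = max(1, int(len(visual_tokens) * keep_ratio))
    idx = np.argsort(-scores)[:k]                  # indices of the top-k tokens
    return visual_tokens[idx], idx

# Toy usage: keep a quarter of 512 video tokens for one question embedding.
rng = np.random.default_rng(0)
tokens, idx = reduce_visual_tokens(rng.normal(size=(512, 256)),
                                   rng.normal(size=256), keep_ratio=0.25)
print(tokens.shape, idx[:5])  # (128, 256) and the first few kept indices
```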
2412.05900 | Woojin Kim | Mathieu Carri\`ere, Seunghyun Kim, Woojin Kim | Sparsification of the Generalized Persistence Diagrams for Scalability
through Gradient Descent | Full version of the paper in the Proceedings of the 41st
International Symposium on Computational Geometry (SoCG 2025); Simplified the
formulation of the sparse erosion distance without altering its definition.
20 pages, 5 figures, 3 tables | null | null | null | math.AT cs.CG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The generalized persistence diagram (GPD) is a natural extension of the
classical persistence barcode to the setting of multi-parameter persistence and
beyond. The GPD is defined as an integer-valued function whose domain is the
set of intervals in the indexing poset of a persistence module, and is known to
be able to capture richer topological information than its single-parameter
counterpart. However, computing the GPD is computationally prohibitive due to
the sheer size of the interval set. Restricting the GPD to a subset of
intervals provides a way to manage this complexity, compromising discriminating
power to some extent. However, identifying and computing an effective
restriction of the domain that minimizes the loss of discriminating power
remains an open challenge.
In this work, we introduce a novel method for optimizing the domain of the
GPD through gradient descent optimization. To achieve this, we introduce a loss
function tailored to optimize the selection of intervals, balancing
computational efficiency and discriminative accuracy. The design of the loss
function is based on the known erosion stability property of the GPD. We
showcase the efficiency of our sparsification method for dataset classification
in supervised machine learning. Experimental results demonstrate that our
sparsification method significantly reduces the time required for computing the
GPDs associated to several datasets, while maintaining classification
accuracies comparable to those achieved using full GPDs. Our method thus opens
the way for the use of GPD-based methods to applications at an unprecedented
scale.
| [
{
"version": "v1",
"created": "Sun, 8 Dec 2024 11:36:53 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 08:21:22 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Carrière",
"Mathieu",
""
],
[
"Kim",
"Seunghyun",
""
],
[
"Kim",
"Woojin",
""
]
] | TITLE: Sparsification of the Generalized Persistence Diagrams for Scalability
through Gradient Descent
ABSTRACT: The generalized persistence diagram (GPD) is a natural extension of the
classical persistence barcode to the setting of multi-parameter persistence and
beyond. The GPD is defined as an integer-valued function whose domain is the
set of intervals in the indexing poset of a persistence module, and is known to
be able to capture richer topological information than its single-parameter
counterpart. However, computing the GPD is computationally prohibitive due to
the sheer size of the interval set. Restricting the GPD to a subset of
intervals provides a way to manage this complexity, compromising discriminating
power to some extent. However, identifying and computing an effective
restriction of the domain that minimizes the loss of discriminating power
remains an open challenge.
In this work, we introduce a novel method for optimizing the domain of the
GPD through gradient descent optimization. To achieve this, we introduce a loss
function tailored to optimize the selection of intervals, balancing
computational efficiency and discriminative accuracy. The design of the loss
function is based on the known erosion stability property of the GPD. We
showcase the efficiency of our sparsification method for dataset classification
in supervised machine learning. Experimental results demonstrate that our
sparsification method significantly reduces the time required for computing the
GPDs associated to several datasets, while maintaining classification
accuracies comparable to those achieved using full GPDs. Our method thus opens
the way for the use of GPD-based methods to applications at an unprecedented
scale.
|
2412.07030 | Amirhossein Abaskohi | Amirhossein Abaskohi, Spandana Gella, Giuseppe Carenini, Issam H.
Laradji | FM2DS: Few-Shot Multimodal Multihop Data Synthesis with Knowledge
Distillation for Question Answering | null | null | null | null | cs.CL cs.AI cs.CV cs.IR cs.LG | http://creativecommons.org/licenses/by/4.0/ | Multimodal multihop question answering (MMQA) requires reasoning over images
and text from multiple sources. Despite advances in visual question answering,
this multihop setting remains underexplored due to a lack of quality datasets.
Existing methods focus on single-hop, single-modality, or short texts, limiting
real-world applications like interpreting educational documents with long,
multimodal content. To fill this gap, we introduce FM2DS, the first framework
for creating a high-quality dataset for MMQA. Our approach consists of a
5-stage pipeline that involves acquiring relevant multimodal documents from
Wikipedia, synthetically generating high-level questions and answers, and
validating them through rigorous criteria to ensure data quality. We evaluate
our methodology by training models on our synthesized dataset and testing on
two benchmarks: MultimodalQA and WebQA. Our results demonstrate that, with an
equal sample size, models trained on our synthesized data outperform those
trained on human-collected data by 1.9 points in exact match (EM) score on average.
Additionally, we introduce M2QA-Bench with 1k samples, the first benchmark for
MMQA on long documents, generated using FM2DS and refined by human annotators.
We believe our data synthesis method will serve as a strong foundation for
training and evaluating MMQA models.
| [
{
"version": "v1",
"created": "Mon, 9 Dec 2024 22:35:44 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Dec 2024 20:38:21 GMT"
},
{
"version": "v3",
"created": "Sat, 1 Feb 2025 10:26:39 GMT"
},
{
"version": "v4",
"created": "Thu, 3 Apr 2025 22:39:17 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Abaskohi",
"Amirhossein",
""
],
[
"Gella",
"Spandana",
""
],
[
"Carenini",
"Giuseppe",
""
],
[
"Laradji",
"Issam H.",
""
]
] | TITLE: FM2DS: Few-Shot Multimodal Multihop Data Synthesis with Knowledge
Distillation for Question Answering
ABSTRACT: Multimodal multihop question answering (MMQA) requires reasoning over images
and text from multiple sources. Despite advances in visual question answering,
this multihop setting remains underexplored due to a lack of quality datasets.
Existing methods focus on single-hop, single-modality, or short texts, limiting
real-world applications like interpreting educational documents with long,
multimodal content. To fill this gap, we introduce FM2DS, the first framework
for creating a high-quality dataset for MMQA. Our approach consists of a
5-stage pipeline that involves acquiring relevant multimodal documents from
Wikipedia, synthetically generating high-level questions and answers, and
validating them through rigorous criteria to ensure data quality. We evaluate
our methodology by training models on our synthesized dataset and testing on
two benchmarks: MultimodalQA and WebQA. Our results demonstrate that, with an
equal sample size, models trained on our synthesized data outperform those
trained on human-collected data by 1.9 points in exact match (EM) score on average.
Additionally, we introduce M2QA-Bench with 1k samples, the first benchmark for
MMQA on long documents, generated using FM2DS and refined by human annotators.
We believe our data synthesis method will serve as a strong foundation for
training and evaluating MMQA models.
|
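The reported gains are measured in exact match (EM); below is a standard SQuAD-style EM computation (lowercasing, stripping punctuation and articles) as a sketch of how such a score is typically obtained. The normalization details are assumptions, not the benchmarks' official scorers.

```python
import re
import string

def normalize(text):
    """Lowercase, drop punctuation and articles, collapse whitespace."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())

def exact_match(prediction, references):
    """1.0 if the normalized prediction equals any normalized reference."""
    return float(any(normalize(prediction) == normalize(r) for r in references))

def em_score(predictions, reference_lists):
    """Corpus-level exact match in percent."""
    scores = [exact_match(p, refs) for p, refs in zip(predictions, reference_lists)]
    return 100.0 * sum(scores) / len(scores)

print(em_score(["The Eiffel Tower", "1889"], [["Eiffel Tower"], ["1887"]]))  # 50.0
```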
2412.12997 | Umer Butt | Umer Butt, Stalin Varanasi and G\"unter Neumann | Enabling Low-Resource Language Retrieval: Establishing Baselines for
Urdu MS MARCO | 7 pages, ECIR 2025, conference camera-ready version | null | null | null | cs.CL cs.AI cs.IR | http://creativecommons.org/licenses/by/4.0/ | As the Information Retrieval (IR) field increasingly recognizes the
importance of inclusivity, addressing the needs of low-resource languages
remains a significant challenge. This paper introduces the first large-scale
Urdu IR dataset, created by translating the MS MARCO dataset through machine
translation. We establish baseline results through zero-shot learning for IR in
Urdu and subsequently apply the mMARCO multilingual IR methodology to this
newly translated dataset. Our findings demonstrate that the fine-tuned model
(Urdu-mT5-mMARCO) achieves a Mean Reciprocal Rank (MRR@10) of 0.247 and a
Recall@10 of 0.439, representing significant improvements over zero-shot
results and showing the potential for expanding IR access for Urdu speakers. By
bridging access gaps for speakers of low-resource languages, this work not only
advances multilingual IR research but also emphasizes the ethical and societal
importance of inclusive IR technologies. This work provides valuable insights
into the challenges and solutions for improving language representation and
lays the groundwork for future research, especially in South Asian languages,
which can benefit from the adaptable methods used in this study.
| [
{
"version": "v1",
"created": "Tue, 17 Dec 2024 15:21:28 GMT"
},
{
"version": "v2",
"created": "Fri, 17 Jan 2025 10:02:38 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Apr 2025 10:07:23 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Butt",
"Umer",
""
],
[
"Varanasi",
"Stalin",
""
],
[
"Neumann",
"Günter",
""
]
] | TITLE: Enabling Low-Resource Language Retrieval: Establishing Baselines for
Urdu MS MARCO
ABSTRACT: As the Information Retrieval (IR) field increasingly recognizes the
importance of inclusivity, addressing the needs of low-resource languages
remains a significant challenge. This paper introduces the first large-scale
Urdu IR dataset, created by translating the MS MARCO dataset through machine
translation. We establish baseline results through zero-shot learning for IR in
Urdu and subsequently apply the mMARCO multilingual IR methodology to this
newly translated dataset. Our findings demonstrate that the fine-tuned model
(Urdu-mT5-mMARCO) achieves a Mean Reciprocal Rank (MRR@10) of 0.247 and a
Recall@10 of 0.439, representing significant improvements over zero-shot
results and showing the potential for expanding IR access for Urdu speakers. By
bridging access gaps for speakers of low-resource languages, this work not only
advances multilingual IR research but also emphasizes the ethical and societal
importance of inclusive IR technologies. This work provides valuable insights
into the challenges and solutions for improving language representation and
lays the groundwork for future research, especially in South Asian languages,
which can benefit from the adaptable methods used in this study.
|
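The baselines above are reported as MRR@10 and Recall@10; the sketch below shows how these ranking metrics are commonly computed for a single query. The toy document IDs are hypothetical.

```python
def mrr_at_k(ranked_ids, relevant_ids, k=10):
    """Reciprocal rank of the first relevant passage within the top k."""
    for rank, pid in enumerate(ranked_ids[:k], start=1):
        if pid in relevant_ids:
            return 1.0 / rank
    return 0.0

def recall_at_k(ranked_ids, relevant_ids, k=10):
    """Fraction of relevant passages retrieved in the top k."""
    if not relevant_ids:
        return 0.0
    return len(set(ranked_ids[:k]) & set(relevant_ids)) / len(relevant_ids)

# Toy usage: one query whose single relevant passage is ranked fourth.
ranking = ["d7", "d2", "d9", "d1", "d5"]
relevant = {"d1"}
print(mrr_at_k(ranking, relevant), recall_at_k(ranking, relevant))  # 0.25 1.0
```

Corpus-level scores are the mean of these per-query values over all queries.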
2412.16915 | Tianyun Zhong | Tianyun Zhong, Chao Liang, Jianwen Jiang, Gaojie Lin, Jiaqi Yang, Zhou
Zhao | FADA: Fast Diffusion Avatar Synthesis with Mixed-Supervised Multi-CFG
Distillation | CVPR 2025, Homepage https://fadavatar.github.io/ | null | null | null | cs.CV cs.AI cs.GR cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Diffusion-based audio-driven talking avatar methods have recently gained
attention for their high-fidelity, vivid, and expressive results. However,
their slow inference speed limits practical applications. Despite the
development of various distillation techniques for diffusion models, we found
that naive diffusion distillation methods do not yield satisfactory results.
Distilled models exhibit reduced robustness with open-set input images and a
decreased correlation between audio and video compared to teacher models,
undermining the advantages of diffusion models. To address this, we propose
FADA (Fast Diffusion Avatar Synthesis with Mixed-Supervised Multi-CFG
Distillation). We first designed a mixed-supervised loss to leverage data of
varying quality and enhance the overall model capability as well as robustness.
Additionally, we propose a multi-CFG distillation with learnable tokens to
utilize the correlation between audio and reference image conditions, reducing
the threefold inference runs caused by multi-CFG with acceptable quality
degradation. Extensive experiments across multiple datasets show that FADA
generates vivid videos comparable to recent diffusion model-based methods while
achieving an NFE speedup of 4.17-12.5 times. Demos are available at our webpage
http://fadavatar.github.io.
| [
{
"version": "v1",
"created": "Sun, 22 Dec 2024 08:19:22 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 06:07:56 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Zhong",
"Tianyun",
""
],
[
"Liang",
"Chao",
""
],
[
"Jiang",
"Jianwen",
""
],
[
"Lin",
"Gaojie",
""
],
[
"Yang",
"Jiaqi",
""
],
[
"Zhao",
"Zhou",
""
]
] | TITLE: FADA: Fast Diffusion Avatar Synthesis with Mixed-Supervised Multi-CFG
Distillation
ABSTRACT: Diffusion-based audio-driven talking avatar methods have recently gained
attention for their high-fidelity, vivid, and expressive results. However,
their slow inference speed limits practical applications. Despite the
development of various distillation techniques for diffusion models, we found
that naive diffusion distillation methods do not yield satisfactory results.
Distilled models exhibit reduced robustness with open-set input images and a
decreased correlation between audio and video compared to teacher models,
undermining the advantages of diffusion models. To address this, we propose
FADA (Fast Diffusion Avatar Synthesis with Mixed-Supervised Multi-CFG
Distillation). We first designed a mixed-supervised loss to leverage data of
varying quality and enhance the overall model capability as well as robustness.
Additionally, we propose a multi-CFG distillation with learnable tokens to
utilize the correlation between audio and reference image conditions, reducing
the threefold inference runs caused by multi-CFG with acceptable quality
degradation. Extensive experiments across multiple datasets show that FADA
generates vivid videos comparable to recent diffusion model-based methods while
achieving an NFE speedup of 4.17-12.5 times. Demos are available at our webpage
http://fadavatar.github.io.
|
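Multi-CFG combines several conditional denoiser outputs, which is why naive guidance needs threefold inference runs per step; the sketch below shows one commonly used composition of unconditional, reference-only, and audio-plus-reference predictions. The exact formula and guidance scales are assumptions, not FADA's distilled formulation.

```python
import numpy as np

def multi_cfg(eps_uncond, eps_ref, eps_audio_ref, s_ref=2.0, s_audio=4.0):
    """Compose three denoiser outputs into one guided noise prediction.

    eps_uncond    : prediction with neither condition
    eps_ref       : prediction with the reference image only
    eps_audio_ref : prediction with both audio and reference image
    """
    return (eps_uncond
            + s_ref * (eps_ref - eps_uncond)        # steer toward identity
            + s_audio * (eps_audio_ref - eps_ref))  # steer toward audio sync

# Toy usage on dummy 4x4 "noise" maps.
rng = np.random.default_rng(0)
e0, e1, e2 = (rng.normal(size=(4, 4)) for _ in range(3))
print(multi_cfg(e0, e1, e2).shape)  # (4, 4)
```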
2412.18773 | Seth Nabat | Seth Nabat, Aishik Ghosh, Edmund Witkowski, Gregor Kasieczka, Daniel
Whiteson | Learning Broken Symmetries with Approximate Invariance | 7 pages, 8 figures | Phys. Rev. D 111 (2025) 072002 | 10.1103/PhysRevD.111.072002 | null | hep-ph cs.LG hep-ex | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recognizing symmetries in data allows for significant boosts in neural
network training, which is especially important where training data are
limited. In many cases, however, the exact underlying symmetry is present only
in an idealized dataset, and is broken in actual data, due to asymmetries in
the detector, or varying response resolution as a function of particle
momentum. Standard approaches, such as data augmentation or equivariant
networks fail to represent the nature of the full, broken symmetry, effectively
overconstraining the response of the neural network. We propose a learning
model which balances the generality and asymptotic performance of unconstrained
networks with the rapid learning of constrained networks. This is achieved
through a dual-subnet structure, where one network is constrained by the
symmetry and the other is not, along with a learned symmetry factor. In a
simplified toy example that demonstrates violation of Lorentz invariance, our
model learns as rapidly as symmetry-constrained networks but escapes its
performance limitations.
| [
{
"version": "v1",
"created": "Wed, 25 Dec 2024 04:29:04 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 00:58:59 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Nabat",
"Seth",
""
],
[
"Ghosh",
"Aishik",
""
],
[
"Witkowski",
"Edmund",
""
],
[
"Kasieczka",
"Gregor",
""
],
[
"Whiteson",
"Daniel",
""
]
] | TITLE: Learning Broken Symmetries with Approximate Invariance
ABSTRACT: Recognizing symmetries in data allows for significant boosts in neural
network training, which is especially important where training data are
limited. In many cases, however, the exact underlying symmetry is present only
in an idealized dataset, and is broken in actual data, due to asymmetries in
the detector, or varying response resolution as a function of particle
momentum. Standard approaches, such as data augmentation or equivariant
networks, fail to represent the nature of the full, broken symmetry, effectively
overconstraining the response of the neural network. We propose a learning
model which balances the generality and asymptotic performance of unconstrained
networks with the rapid learning of constrained networks. This is achieved
through a dual-subnet structure, where one network is constrained by the
symmetry and the other is not, along with a learned symmetry factor. In a
simplified toy example that demonstrates violation of Lorentz invariance, our
model learns as rapidly as symmetry-constrained networks but escapes its
performance limitations.
|
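One plausible reading of the dual-subnet design with a learned symmetry factor is a sigmoid-weighted blend of a symmetry-constrained branch (fed symmetry-canonicalized inputs) and an unconstrained branch; the PyTorch sketch below illustrates that reading. Layer sizes and the blending form are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class ApproxInvariantNet(nn.Module):
    """Blend a symmetry-constrained subnet with an unconstrained one."""
    def __init__(self, dim=8, hidden=32):
        super().__init__()
        self.sym = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.free = nn.Sequential(nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.alpha = nn.Parameter(torch.tensor(0.0))  # learned symmetry factor

    def forward(self, x, x_sym):
        # x_sym: inputs mapped to symmetry-canonical coordinates; x: raw features.
        w = torch.sigmoid(self.alpha)                 # blending weight in (0, 1)
        return w * self.sym(x_sym) + (1.0 - w) * self.free(x)

net = ApproxInvariantNet()
print(net(torch.randn(16, 8), torch.randn(16, 8)).shape)  # torch.Size([16, 1])
```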
2412.19331 | Muntasir Wahed | Kiet A. Nguyen, Adheesh Juvekar, Tianjiao Yu, Muntasir Wahed, Ismini
Lourentzou | CALICO: Part-Focused Semantic Co-Segmentation with Large Vision-Language
Models | Accepted to CVPR 2025. Project page:
https://plan-lab.github.io/calico/ | null | null | null | cs.CV cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in Large Vision-Language Models (LVLMs) have enabled
general-purpose vision tasks through visual instruction tuning. While existing
LVLMs can generate segmentation masks from text prompts for single images, they
struggle with segmentation-grounded reasoning across images, especially at
finer granularities such as object parts. In this paper, we introduce the new
task of part-focused semantic co-segmentation, which involves identifying and
segmenting common objects, as well as common and unique object parts across
images. To address this task, we present CALICO, the first LVLM designed for
multi-image part-level reasoning segmentation. CALICO features two key
components, a novel Correspondence Extraction Module that identifies semantic
part-level correspondences, and Correspondence Adaptation Modules that embed
this information into the LVLM to facilitate multi-image understanding in a
parameter-efficient manner. To support training and evaluation, we curate
MixedParts, a large-scale multi-image segmentation dataset containing
$\sim$2.4M samples across $\sim$44K images spanning diverse object and part
categories. Experimental results demonstrate that CALICO, with just 0.3% of its
parameters finetuned, achieves strong performance on this challenging task.
| [
{
"version": "v1",
"created": "Thu, 26 Dec 2024 18:59:37 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 17:59:25 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Nguyen",
"Kiet A.",
""
],
[
"Juvekar",
"Adheesh",
""
],
[
"Yu",
"Tianjiao",
""
],
[
"Wahed",
"Muntasir",
""
],
[
"Lourentzou",
"Ismini",
""
]
] | TITLE: CALICO: Part-Focused Semantic Co-Segmentation with Large Vision-Language
Models
ABSTRACT: Recent advances in Large Vision-Language Models (LVLMs) have enabled
general-purpose vision tasks through visual instruction tuning. While existing
LVLMs can generate segmentation masks from text prompts for single images, they
struggle with segmentation-grounded reasoning across images, especially at
finer granularities such as object parts. In this paper, we introduce the new
task of part-focused semantic co-segmentation, which involves identifying and
segmenting common objects, as well as common and unique object parts across
images. To address this task, we present CALICO, the first LVLM designed for
multi-image part-level reasoning segmentation. CALICO features two key
components, a novel Correspondence Extraction Module that identifies semantic
part-level correspondences, and Correspondence Adaptation Modules that embed
this information into the LVLM to facilitate multi-image understanding in a
parameter-efficient manner. To support training and evaluation, we curate
MixedParts, a large-scale multi-image segmentation dataset containing
$\sim$2.4M samples across $\sim$44K images spanning diverse object and part
categories. Experimental results demonstrate that CALICO, with just 0.3% of its
parameters finetuned, achieves strong performance on this challenging task.
|
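The abstract highlights that only 0.3% of the parameters are finetuned; the sketch below shows how such a trainable-parameter fraction is typically computed after freezing a backbone. The toy two-layer model is hypothetical and unrelated to CALICO's actual modules.

```python
import torch.nn as nn

def trainable_fraction(model: nn.Module) -> float:
    """Share of parameters that will receive gradient updates."""
    total = sum(p.numel() for p in model.parameters())
    trainable = sum(p.numel() for p in model.parameters() if p.requires_grad)
    return trainable / total if total else 0.0

# Toy usage: freeze a large "backbone" layer and train only a small head.
model = nn.Sequential(nn.Linear(1024, 1024), nn.Linear(1024, 8))
for p in model[0].parameters():
    p.requires_grad = False
print(f"{100 * trainable_fraction(model):.2f}% trainable")  # about 0.78% here
```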
2501.02014 | Abu Saleh Musa Miah Dr. | Masahiro Matsumoto, Abu Saleh Musa Miah, Nobuyoshi Asai, Jungpil Shin | Machine Learning-Based Differential Diagnosis of Parkinson's Disease
Using Kinematic Feature Extraction and Selection | null | IEEE Access, vol. 13, pp. 54090-54104, 2025 | 10.1109/ACCESS.2025.3553528 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Parkinson's disease (PD), the second most common neurodegenerative disorder,
is characterized by dopaminergic neuron loss and the accumulation of abnormal
synuclein. PD presents both motor and non-motor symptoms that progressively
impair daily functioning. The severity of these symptoms is typically assessed
using the MDS-UPDRS rating scale, which is subjective and dependent on the
physician's experience. Additionally, PD shares symptoms with other
neurodegenerative diseases, such as progressive supranuclear palsy (PSP) and
multiple system atrophy (MSA), complicating accurate diagnosis. To address
these diagnostic challenges, we propose a machine learning-based system for
differential diagnosis of PD, PSP, MSA, and healthy controls (HC). This system
utilizes a kinematic feature-based hierarchical feature extraction and
selection approach. Initially, 18 kinematic features are extracted, including
two newly proposed features: Thumb-to-index vector velocity and acceleration,
which provide insights into motor control patterns. In addition, 41 statistical
features were extracted from each kinematic feature, including some new
approaches such as Average Absolute Change, Rhythm, Amplitude, Frequency,
Standard Deviation of Frequency, and Slope. Feature selection is performed
using One-way ANOVA to rank features, followed by Sequential Forward Floating
Selection (SFFS) to identify the most relevant ones, aiming to reduce the
computational complexity. The final feature set is used for classification,
achieving a classification accuracy of 66.67% for each dataset and 88.89% for
each patient, with particularly high performance for the MSA and HC groups
using the SVM algorithm. This system shows potential as a rapid and accurate
diagnostic tool in clinical practice, though further data collection and
refinement are needed to enhance its reliability.
| [
{
"version": "v1",
"created": "Thu, 2 Jan 2025 14:43:39 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Matsumoto",
"Masahiro",
""
],
[
"Miah",
"Abu Saleh Musa",
""
],
[
"Asai",
"Nobuyoshi",
""
],
[
"Shin",
"Jungpil",
""
]
] | TITLE: Machine Learning-Based Differential Diagnosis of Parkinson's Disease
Using Kinematic Feature Extraction and Selection
ABSTRACT: Parkinson's disease (PD), the second most common neurodegenerative disorder,
is characterized by dopaminergic neuron loss and the accumulation of abnormal
synuclein. PD presents both motor and non-motor symptoms that progressively
impair daily functioning. The severity of these symptoms is typically assessed
using the MDS-UPDRS rating scale, which is subjective and dependent on the
physician's experience. Additionally, PD shares symptoms with other
neurodegenerative diseases, such as progressive supranuclear palsy (PSP) and
multiple system atrophy (MSA), complicating accurate diagnosis. To address
these diagnostic challenges, we propose a machine learning-based system for
differential diagnosis of PD, PSP, MSA, and healthy controls (HC). This system
utilizes a kinematic feature-based hierarchical feature extraction and
selection approach. Initially, 18 kinematic features are extracted, including
two newly proposed features: Thumb-to-index vector velocity and acceleration,
which provide insights into motor control patterns. In addition, 41 statistical
features were extracted from each kinematic feature, including some new
approaches such as Average Absolute Change, Rhythm, Amplitude, Frequency,
Standard Deviation of Frequency, and Slope. Feature selection is performed
using One-way ANOVA to rank features, followed by Sequential Forward Floating
Selection (SFFS) to identify the most relevant ones, aiming to reduce the
computational complexity. The final feature set is used for classification,
achieving a classification accuracy of 66.67% for each dataset and 88.89% for
each patient, with particularly high performance for the MSA and HC groups
using the SVM algorithm. This system shows potential as a rapid and accurate
diagnostic tool in clinical practice, though further data collection and
refinement are needed to enhance its reliability.
|
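A numpy sketch of how thumb-to-index kinematic quantities of the kind described above can be derived from fingertip trajectories, together with a few illustrative statistics such as the average absolute change. The sampling rate, units, and exact feature definitions are assumptions and do not reproduce the paper's 18 kinematic and 41 statistical features.

```python
import numpy as np

def thumb_index_features(thumb_xyz, index_xyz, fs=60.0):
    """Kinematic features from thumb and index fingertip trajectories.

    thumb_xyz, index_xyz : (T, 3) positions in metres
    fs                   : sampling rate in Hz
    """
    vec = index_xyz - thumb_xyz              # thumb-to-index vector per frame
    dist = np.linalg.norm(vec, axis=1)       # aperture over time
    vel = np.gradient(dist, 1.0 / fs)        # first derivative (velocity)
    acc = np.gradient(vel, 1.0 / fs)         # second derivative (acceleration)
    return {
        "mean_aperture": dist.mean(),
        "amplitude": dist.max() - dist.min(),
        "mean_abs_velocity": np.abs(vel).mean(),
        "mean_abs_acceleration": np.abs(acc).mean(),
        "average_absolute_change": np.abs(np.diff(dist)).mean(),
    }

# Toy usage: a 2-second finger-tapping bout sampled at 60 Hz.
t = np.arange(0, 2, 1 / 60.0)
thumb = np.zeros((len(t), 3))
index = np.stack([0.05 + 0.04 * np.sin(2 * np.pi * 3 * t),
                  np.zeros_like(t), np.zeros_like(t)], axis=1)
print({k: round(v, 4) for k, v in thumb_index_features(thumb, index).items()})
```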
2501.03544 | Xinfeng Li | Lingzhi Yuan, Xiaojun Jia, Yihao Huang, Wei Dong, Yang Liu | PromptGuard: Soft Prompt-Guided Unsafe Content Moderation for
Text-to-Image Models | 16 pages, 8 figures, 10 tables | null | null | null | cs.CV cs.AI cs.CR | http://creativecommons.org/licenses/by-sa/4.0/ | Text-to-image (T2I) models have been shown to be vulnerable to misuse,
particularly in generating not-safe-for-work (NSFW) content, raising serious
ethical concerns. In this work, we present PromptGuard, a novel content
moderation technique that draws inspiration from the system prompt mechanism in
large language models (LLMs) for safety alignment. Unlike LLMs, T2I models lack
a direct interface for enforcing behavioral guidelines. Our key idea is to
optimize a safety soft prompt that functions as an implicit system prompt
within the T2I model's textual embedding space. This universal soft prompt (P*)
directly moderates NSFW inputs, enabling safe yet realistic image generation
without altering the inference efficiency or requiring proxy models. Extensive
experiments across three datasets demonstrate that PromptGuard effectively
mitigates NSFW content generation while preserving high-quality benign outputs.
PromptGuard runs 7.8 times faster than prior content moderation methods and
surpasses eight state-of-the-art defenses, reducing the optimal unsafe ratio to
as low as 5.84%.
| [
{
"version": "v1",
"created": "Tue, 7 Jan 2025 05:39:21 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 05:56:04 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Yuan",
"Lingzhi",
""
],
[
"Jia",
"Xiaojun",
""
],
[
"Huang",
"Yihao",
""
],
[
"Dong",
"Wei",
""
],
[
"Liu",
"Yang",
""
]
] | TITLE: PromptGuard: Soft Prompt-Guided Unsafe Content Moderation for
Text-to-Image Models
ABSTRACT: Text-to-image (T2I) models have been shown to be vulnerable to misuse,
particularly in generating not-safe-for-work (NSFW) content, raising serious
ethical concerns. In this work, we present PromptGuard, a novel content
moderation technique that draws inspiration from the system prompt mechanism in
large language models (LLMs) for safety alignment. Unlike LLMs, T2I models lack
a direct interface for enforcing behavioral guidelines. Our key idea is to
optimize a safety soft prompt that functions as an implicit system prompt
within the T2I model's textual embedding space. This universal soft prompt (P*)
directly moderates NSFW inputs, enabling safe yet realistic image generation
without altering the inference efficiency or requiring proxy models. Extensive
experiments across three datasets demonstrate that PromptGuard effectively
mitigates NSFW content generation while preserving high-quality benign outputs.
PromptGuard runs 7.8 times faster than prior content moderation methods and
surpasses eight state-of-the-art defenses, reducing the optimal unsafe ratio to
as low as 5.84%.
|
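A minimal PyTorch sketch of a safety soft prompt prepended to the text encoder's token embeddings, which is the general mechanism the abstract describes; the prompt length, embedding width, and the omission of the optimization objective for P* are simplifications and assumptions, not PromptGuard's implementation.

```python
import torch
import torch.nn as nn

class SafetySoftPrompt(nn.Module):
    """Learnable embeddings prepended to the text encoder's token embeddings."""
    def __init__(self, n_tokens=8, embed_dim=768):
        super().__init__()
        self.prompt = nn.Parameter(0.02 * torch.randn(n_tokens, embed_dim))

    def forward(self, token_embeds):                # (batch, seq_len, embed_dim)
        batch = token_embeds.shape[0]
        p = self.prompt.unsqueeze(0).expand(batch, -1, -1)
        return torch.cat([p, token_embeds], dim=1)  # (batch, n_tokens + seq_len, dim)

# Toy usage: an 8-token safety prefix for a 77-token CLIP-style sequence.
guard = SafetySoftPrompt()
print(guard(torch.randn(2, 77, 768)).shape)  # torch.Size([2, 85, 768])
```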
2501.08598 | Myeongsoo Kim | Myeongsoo Kim, Saurabh Sinha, and Alessandro Orso | LlamaRestTest: Effective REST API Testing with Small Language Models | To be published in the ACM International Conference on the
Foundations of Software Engineering (FSE 2025) | null | null | null | cs.SE cs.AI | http://creativecommons.org/licenses/by/4.0/ | Modern web services rely heavily on REST APIs, typically documented using the
OpenAPI specification. The widespread adoption of this standard has resulted in
the development of many black-box testing tools that generate tests based on
OpenAPI specifications. Although Large Language Models (LLMs) have shown
promising test-generation abilities, their application to REST API testing
remains mostly unexplored. We present LlamaRestTest, a novel approach that
employs two custom LLMs-created by fine-tuning and quantizing the Llama3-8B
model using mined datasets of REST API example values and inter-parameter
dependencies-to generate realistic test inputs and uncover inter-parameter
dependencies during the testing process by analyzing server responses. We
evaluated LlamaRestTest on 12 real-world services (including popular services
such as Spotify), comparing it against RESTGPT, a GPT-powered
specification-enhancement tool, as well as several state-of-the-art REST API
testing tools, including RESTler, MoRest, EvoMaster, and ARAT-RL. Our results
demonstrate that fine-tuning enables smaller models to outperform much larger
models in detecting actionable parameter-dependency rules and generating valid
inputs for REST API testing. We also evaluated different tool configurations,
ranging from the base Llama3-8B model to fine-tuned versions, and explored
multiple quantization techniques, including 2-bit, 4-bit, and 8-bit integer
formats. Our study shows that small language models can perform as well as, or
better than, large language models in REST API testing, balancing effectiveness
and efficiency. Furthermore, LlamaRestTest outperforms state-of-the-art REST
API testing tools in code coverage achieved and internal server errors
identified, even when those tools use RESTGPT-enhanced specifications.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2025 05:51:20 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 19:42:32 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Kim",
"Myeongsoo",
""
],
[
"Sinha",
"Saurabh",
""
],
[
"Orso",
"Alessandro",
""
]
] | TITLE: LlamaRestTest: Effective REST API Testing with Small Language Models
ABSTRACT: Modern web services rely heavily on REST APIs, typically documented using the
OpenAPI specification. The widespread adoption of this standard has resulted in
the development of many black-box testing tools that generate tests based on
OpenAPI specifications. Although Large Language Models (LLMs) have shown
promising test-generation abilities, their application to REST API testing
remains mostly unexplored. We present LlamaRestTest, a novel approach that
employs two custom LLMs-created by fine-tuning and quantizing the Llama3-8B
model using mined datasets of REST API example values and inter-parameter
dependencies-to generate realistic test inputs and uncover inter-parameter
dependencies during the testing process by analyzing server responses. We
evaluated LlamaRestTest on 12 real-world services (including popular services
such as Spotify), comparing it against RESTGPT, a GPT-powered
specification-enhancement tool, as well as several state-of-the-art REST API
testing tools, including RESTler, MoRest, EvoMaster, and ARAT-RL. Our results
demonstrate that fine-tuning enables smaller models to outperform much larger
models in detecting actionable parameter-dependency rules and generating valid
inputs for REST API testing. We also evaluated different tool configurations,
ranging from the base Llama3-8B model to fine-tuned versions, and explored
multiple quantization techniques, including 2-bit, 4-bit, and 8-bit integer
formats. Our study shows that small language models can perform as well as, or
better than, large language models in REST API testing, balancing effectiveness
and efficiency. Furthermore, LlamaRestTest outperforms state-of-the-art REST
API testing tools in code coverage achieved and internal server errors
identified, even when those tools use RESTGPT-enhanced specifications.
|
2501.09898 | Bowen Wen | Bowen Wen, Matthew Trepte, Joseph Aribido, Jan Kautz, Orazio Gallo,
Stan Birchfield | FoundationStereo: Zero-Shot Stereo Matching | CVPR 2025 | null | null | null | cs.CV cs.LG cs.RO | http://creativecommons.org/licenses/by/4.0/ | Tremendous progress has been made in deep stereo matching to excel on
benchmark datasets through per-domain fine-tuning. However, achieving strong
zero-shot generalization - a hallmark of foundation models in other computer
vision tasks - remains challenging for stereo matching. We introduce
FoundationStereo, a foundation model for stereo depth estimation designed to
achieve strong zero-shot generalization. To this end, we first construct a
large-scale (1M stereo pairs) synthetic training dataset featuring large
diversity and high photorealism, followed by an automatic self-curation
pipeline to remove ambiguous samples. We then design a number of network
architecture components to enhance scalability, including a side-tuning feature
backbone that adapts rich monocular priors from vision foundation models to
mitigate the sim-to-real gap, and long-range context reasoning for effective
cost volume filtering. Together, these components lead to strong robustness and
accuracy across domains, establishing a new standard in zero-shot stereo depth
estimation. Project page: https://nvlabs.github.io/FoundationStereo/
| [
{
"version": "v1",
"created": "Fri, 17 Jan 2025 01:01:44 GMT"
},
{
"version": "v2",
"created": "Tue, 21 Jan 2025 18:46:52 GMT"
},
{
"version": "v3",
"created": "Fri, 7 Mar 2025 04:45:23 GMT"
},
{
"version": "v4",
"created": "Fri, 4 Apr 2025 00:51:17 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Wen",
"Bowen",
""
],
[
"Trepte",
"Matthew",
""
],
[
"Aribido",
"Joseph",
""
],
[
"Kautz",
"Jan",
""
],
[
"Gallo",
"Orazio",
""
],
[
"Birchfield",
"Stan",
""
]
] | TITLE: FoundationStereo: Zero-Shot Stereo Matching
ABSTRACT: Tremendous progress has been made in deep stereo matching to excel on
benchmark datasets through per-domain fine-tuning. However, achieving strong
zero-shot generalization - a hallmark of foundation models in other computer
vision tasks - remains challenging for stereo matching. We introduce
FoundationStereo, a foundation model for stereo depth estimation designed to
achieve strong zero-shot generalization. To this end, we first construct a
large-scale (1M stereo pairs) synthetic training dataset featuring large
diversity and high photorealism, followed by an automatic self-curation
pipeline to remove ambiguous samples. We then design a number of network
architecture components to enhance scalability, including a side-tuning feature
backbone that adapts rich monocular priors from vision foundation models to
mitigate the sim-to-real gap, and long-range context reasoning for effective
cost volume filtering. Together, these components lead to strong robustness and
accuracy across domains, establishing a new standard in zero-shot stereo depth
estimation. Project page: https://nvlabs.github.io/FoundationStereo/
|
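Stereo matching predicts disparity, which converts to metric depth via the standard relation Z = f * B / d for a rectified pair; the sketch below shows that conversion. The focal length and baseline are made-up example values, not properties of FoundationStereo.

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, eps=1e-6):
    """Convert a disparity map (in pixels) to metric depth: Z = f * B / d."""
    return focal_px * baseline_m / np.maximum(disparity, eps)

# Toy usage: a rectified pair with a 700-pixel focal length and 12 cm baseline.
disp = np.array([[35.0, 70.0], [7.0, 140.0]])
print(disparity_to_depth(disp, focal_px=700.0, baseline_m=0.12))
# [[ 2.4  1.2]
#  [12.   0.6]]
```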
2502.03771 | Luis Gaspar Schroeder | Luis Gaspar Schroeder, Shu Liu, Alejandro Cuadron, Mark Zhao, Stephan
Krusche, Alfons Kemper, Matei Zaharia, Joseph E. Gonzalez | Adaptive Semantic Prompt Caching with VectorQ | null | null | null | null | cs.LG cs.CL | http://creativecommons.org/licenses/by/4.0/ | Semantic prompt caches reduce the latency and cost of large language model
(LLM) inference by reusing cached LLM-generated responses for semantically
similar prompts. Vector similarity metrics assign a numerical score to quantify
the similarity between an embedded prompt and its nearest neighbor in the
cache. Existing systems rely on a static threshold to classify whether the
similarity score is sufficiently high to result in a cache hit. We show that
this one-size-fits-all threshold is insufficient across different embeddings.
We propose VectorQ, an online framework with a threshold convergence guarantee
to learn embedding-specific threshold regions that adapt to the uncertainty of
an embedding. Through evaluations on a combination of three diverse datasets,
we show that VectorQ consistently outperforms state-of-the-art systems across
all static thresholds, achieving up to 26x increases in cache hit rate and
error rate reductions up to 74%.
| [
{
"version": "v1",
"created": "Thu, 6 Feb 2025 04:16:20 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 16:51:15 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Schroeder",
"Luis Gaspar",
""
],
[
"Liu",
"Shu",
""
],
[
"Cuadron",
"Alejandro",
""
],
[
"Zhao",
"Mark",
""
],
[
"Krusche",
"Stephan",
""
],
[
"Kemper",
"Alfons",
""
],
[
"Zaharia",
"Matei",
""
],
[
"Gonzalez",
"Joseph E.",
""
]
] | TITLE: Adaptive Semantic Prompt Caching with VectorQ
ABSTRACT: Semantic prompt caches reduce the latency and cost of large language model
(LLM) inference by reusing cached LLM-generated responses for semantically
similar prompts. Vector similarity metrics assign a numerical score to quantify
the similarity between an embedded prompt and its nearest neighbor in the
cache. Existing systems rely on a static threshold to classify whether the
similarity score is sufficiently high to result in a cache hit. We show that
this one-size-fits-all threshold is insufficient across different embeddings.
We propose VectorQ, an online framework with a threshold convergence guarantee
to learn embedding-specific threshold regions that adapt to the uncertainty of
an embedding. Through evaluations on a combination of three diverse datasets,
we show that VectorQ consistently outperforms state-of-the-art systems across
all static thresholds, achieving up to 26x increases in cache hit rate and
error rate reductions of up to 74%.
|
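The static-threshold baseline that the abstract argues against can be sketched as a cache keyed by prompt embeddings with one global cosine-similarity cutoff; VectorQ instead learns embedding-specific threshold regions online. The code below implements only that static baseline, with an assumed threshold of 0.9 and toy 2-D embeddings.

```python
import numpy as np

class StaticThresholdCache:
    """Semantic prompt cache with a single global similarity threshold."""
    def __init__(self, threshold=0.9):
        self.threshold = threshold
        self.keys, self.values = [], []

    @staticmethod
    def _cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

    def get(self, embedding):
        """Return the nearest neighbor's cached response, or None on a miss."""
        if not self.keys:
            return None
        embedding = np.asarray(embedding, dtype=float)
        sims = [self._cosine(embedding, k) for k in self.keys]
        best = int(np.argmax(sims))
        return self.values[best] if sims[best] >= self.threshold else None

    def put(self, embedding, response):
        self.keys.append(np.asarray(embedding, dtype=float))
        self.values.append(response)

cache = StaticThresholdCache(threshold=0.9)
cache.put([1.0, 0.0], "cached LLM answer")
print(cache.get([0.98, 0.10]))  # hit: cosine similarity ~0.995
print(cache.get([0.20, 0.98]))  # miss: similarity ~0.20 is below the threshold
```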
2502.09563 | Youming Deng | Youming Deng, Wenqi Xian, Guandao Yang, Leonidas Guibas, Gordon
Wetzstein, Steve Marschner, Paul Debevec | Self-Calibrating Gaussian Splatting for Large Field of View
Reconstruction | Project Page: https://denghilbert.github.io/self-cali/ | null | null | null | cs.CV cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this paper, we present a self-calibrating framework that jointly optimizes
camera parameters, lens distortion and 3D Gaussian representations, enabling
accurate and efficient scene reconstruction. In particular, our technique
enables high-quality scene reconstruction from Large field-of-view (FOV)
imagery taken with wide-angle lenses, allowing the scene to be modeled from a
smaller number of images. Our approach introduces a novel method for modeling
complex lens distortions using a hybrid network that combines invertible
residual networks with explicit grids. This design effectively regularizes the
optimization process, achieving greater accuracy than conventional camera
models. Additionally, we propose a cubemap-based resampling strategy to support
large FOV images without sacrificing resolution or introducing distortion
artifacts. Our method is compatible with the fast rasterization of Gaussian
Splatting, adaptable to a wide variety of camera lens distortions, and
demonstrates state-of-the-art performance on both synthetic and real-world
datasets.
| [
{
"version": "v1",
"created": "Thu, 13 Feb 2025 18:15:10 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 20:24:51 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Deng",
"Youming",
""
],
[
"Xian",
"Wenqi",
""
],
[
"Yang",
"Guandao",
""
],
[
"Guibas",
"Leonidas",
""
],
[
"Wetzstein",
"Gordon",
""
],
[
"Marschner",
"Steve",
""
],
[
"Debevec",
"Paul",
""
]
] | TITLE: Self-Calibrating Gaussian Splatting for Large Field of View
Reconstruction
ABSTRACT: In this paper, we present a self-calibrating framework that jointly optimizes
camera parameters, lens distortion and 3D Gaussian representations, enabling
accurate and efficient scene reconstruction. In particular, our technique
enables high-quality scene reconstruction from Large field-of-view (FOV)
imagery taken with wide-angle lenses, allowing the scene to be modeled from a
smaller number of images. Our approach introduces a novel method for modeling
complex lens distortions using a hybrid network that combines invertible
residual networks with explicit grids. This design effectively regularizes the
optimization process, achieving greater accuracy than conventional camera
models. Additionally, we propose a cubemap-based resampling strategy to support
large FOV images without sacrificing resolution or introducing distortion
artifacts. Our method is compatible with the fast rasterization of Gaussian
Splatting, adaptable to a wide variety of camera lens distortions, and
demonstrates state-of-the-art performance on both synthetic and real-world
datasets.
|
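The conventional camera models that the abstract compares against typically describe lens distortion with low-order radial polynomials; the sketch below shows such a Brown-Conrady-style model on normalized coordinates. The coefficients are arbitrary example values, and the paper's hybrid invertible-network model is not reproduced here.

```python
import numpy as np

def radial_distort(xy, k1=-0.2, k2=0.05):
    """Apply Brown-Conrady radial distortion to normalized image coordinates.

    xy : (N, 2) points on the normalized image plane (x/z, y/z).
    """
    r2 = np.sum(xy ** 2, axis=1, keepdims=True)
    return xy * (1.0 + k1 * r2 + k2 * r2 ** 2)

pts = np.array([[0.0, 0.0], [0.5, 0.0], [0.5, 0.5]])
print(radial_distort(pts))  # points pulled toward the center for negative k1
```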
2502.14202 | Amirali Sajadi | Amirali Sajadi, Binh Le, Anh Nguyen, Kostadin Damevski, Preetha
Chatterjee | Do LLMs Consider Security? An Empirical Study on Responses to
Programming Questions | Accepted to EMSE | null | null | null | cs.SE cs.AI cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | The widespread adoption of conversational LLMs for software development has
raised new security concerns regarding the safety of LLM-generated content. Our
motivational study outlines ChatGPT's potential to volunteer
context-specific information to developers, promoting safe coding
practices. Motivated by this finding, we conduct a study to evaluate the degree
of security awareness exhibited by three prominent LLMs: Claude 3, GPT-4, and
Llama 3. We prompt these LLMs with Stack Overflow questions that contain
vulnerable code to evaluate whether they merely provide answers to the
questions or if they also warn users about the insecure code, thereby
demonstrating a degree of security awareness. Further, we assess whether LLM
responses provide information about the causes, exploits, and the potential
fixes of the vulnerability, to help raise users' awareness. Our findings show
that all three models struggle to accurately detect and warn users about
vulnerabilities, achieving a detection rate of only 12.6% to 40% across our
datasets. We also observe that the LLMs tend to identify certain types of
vulnerabilities related to sensitive information exposure and improper input
neutralization much more frequently than other types, such as those involving
external control of file names or paths. Furthermore, when LLMs do issue
security warnings, they often provide more information on the causes, exploits,
and fixes of vulnerabilities compared to Stack Overflow responses. Finally, we
provide an in-depth discussion on the implications of our findings and present
a CLI-based prompting tool that can be used to generate significantly more
secure LLM responses.
| [
{
"version": "v1",
"created": "Thu, 20 Feb 2025 02:20:06 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 22:13:44 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Sajadi",
"Amirali",
""
],
[
"Le",
"Binh",
""
],
[
"Nguyen",
"Anh",
""
],
[
"Damevski",
"Kostadin",
""
],
[
"Chatterjee",
"Preetha",
""
]
] | TITLE: Do LLMs Consider Security? An Empirical Study on Responses to
Programming Questions
ABSTRACT: The widespread adoption of conversational LLMs for software development has
raised new security concerns regarding the safety of LLM-generated content. Our
motivational study outlines ChatGPT's potential to volunteer
context-specific information to developers, promoting safe coding
practices. Motivated by this finding, we conduct a study to evaluate the degree
of security awareness exhibited by three prominent LLMs: Claude 3, GPT-4, and
Llama 3. We prompt these LLMs with Stack Overflow questions that contain
vulnerable code to evaluate whether they merely provide answers to the
questions or if they also warn users about the insecure code, thereby
demonstrating a degree of security awareness. Further, we assess whether LLM
responses provide information about the causes, exploits, and the potential
fixes of the vulnerability, to help raise users' awareness. Our findings show
that all three models struggle to accurately detect and warn users about
vulnerabilities, achieving a detection rate of only 12.6% to 40% across our
datasets. We also observe that the LLMs tend to identify certain types of
vulnerabilities related to sensitive information exposure and improper input
neutralization much more frequently than other types, such as those involving
external control of file names or paths. Furthermore, when LLMs do issue
security warnings, they often provide more information on the causes, exploits,
and fixes of vulnerabilities compared to Stack Overflow responses. Finally, we
provide an in-depth discussion on the implications of our findings and present
a CLI-based prompting tool that can be used to generate significantly more
secure LLM responses.
|
2502.16587 | Sicheng Xie | Sicheng Xie, Haidong Cao, Zejia Weng, Zhen Xing, Shiwei Shen, Jiaqi
Leng, Xipeng Qiu, Yanwei Fu, Zuxuan Wu, Yu-Gang Jiang | Human2Robot: Learning Robot Actions from Paired Human-Robot Videos | null | null | null | null | cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Distilling knowledge from human demonstrations is a promising way for robots
to learn and act. Existing work often overlooks the differences between humans
and robots, producing unsatisfactory results. In this paper, we study how
perfectly aligned human-robot pairs benefit robot learning. Capitalizing on
VR-based teleoperation, we introduce H\&R, a third-person dataset with 2,600
episodes, each of which captures the fine-grained correspondence between human
hand and robot gripper. Inspired by the recent success of diffusion models, we
introduce Human2Robot, an end-to-end diffusion framework that formulates
learning from human demonstration as a generative task. Human2Robot fully
explores temporal dynamics in human videos to generate robot videos and predict
actions at the same time. Through comprehensive evaluations of 4 carefully
selected tasks in real-world settings, we demonstrate that Human2Robot can not
only generate high-quality robot videos but also excels in seen tasks and
generalizing to different positions, unseen appearances, novel instances, and
even new backgrounds and task types.
| [
{
"version": "v1",
"created": "Sun, 23 Feb 2025 14:29:28 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 15:25:00 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Xie",
"Sicheng",
""
],
[
"Cao",
"Haidong",
""
],
[
"Weng",
"Zejia",
""
],
[
"Xing",
"Zhen",
""
],
[
"Shen",
"Shiwei",
""
],
[
"Leng",
"Jiaqi",
""
],
[
"Qiu",
"Xipeng",
""
],
[
"Fu",
"Yanwei",
""
],
[
"Wu",
"Zuxuan",
""
],
[
"Jiang",
"Yu-Gang",
""
]
] | TITLE: Human2Robot: Learning Robot Actions from Paired Human-Robot Videos
ABSTRACT: Distilling knowledge from human demonstrations is a promising way for robots
to learn and act. Existing work often overlooks the differences between humans
and robots, producing unsatisfactory results. In this paper, we study how
perfectly aligned human-robot pairs benefit robot learning. Capitalizing on
VR-based teleoperation, we introduce H\&R, a third-person dataset with 2,600
episodes, each of which captures the fine-grained correspondence between human
hand and robot gripper. Inspired by the recent success of diffusion models, we
introduce Human2Robot, an end-to-end diffusion framework that formulates
learning from human demonstration as a generative task. Human2Robot fully
explores temporal dynamics in human videos to generate robot videos and predict
actions at the same time. Through comprehensive evaluations of 4 carefully
selected tasks in real-world settings, we demonstrate that Human2Robot can not
only generate high-quality robot videos but also excels in seen tasks and
generalizing to different positions, unseen appearances, novel instances, and
even new backgrounds and task types.
|
2502.20837 | Xianchao Xiu | Long Chen, Xianchao Xiu | Tuning-Free Structured Sparse PCA via Deep Unfolding Networks | CCC 2025 | null | null | null | cs.LG math.OC | http://creativecommons.org/licenses/by/4.0/ | Sparse principal component analysis (PCA) is a well-established
dimensionality reduction technique that is often used for unsupervised feature
selection (UFS). However, determining the regularization parameters is rather
challenging, and conventional approaches, including grid search and Bayesian
optimization, not only bring great computational costs but also exhibit high
sensitivity. To address these limitations, we first establish a structured
sparse PCA formulation by integrating $\ell_1$-norm and $\ell_{2,1}$-norm to
capture the local and global structures, respectively. Building upon the
off-the-shelf alternating direction method of multipliers (ADMM) optimization
framework, we then design an interpretable deep unfolding network that
translates iterative optimization steps into trainable neural architectures.
This innovation enables automatic learning of the regularization parameters,
effectively bypassing the empirical tuning requirements of conventional
methods. Numerical experiments on benchmark datasets validate the advantages of
our proposed method over the existing state-of-the-art methods. Our code will
be accessible at https://github.com/xianchaoxiu/SPCA-Net.
| [
{
"version": "v1",
"created": "Fri, 28 Feb 2025 08:32:51 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 07:47:35 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Chen",
"Long",
""
],
[
"Xiu",
"Xianchao",
""
]
] | TITLE: Tuning-Free Structured Sparse PCA via Deep Unfolding Networks
ABSTRACT: Sparse principal component analysis (PCA) is a well-established
dimensionality reduction technique that is often used for unsupervised feature
selection (UFS). However, determining the regularization parameters is rather
challenging, and conventional approaches, including grid search and Bayesian
optimization, not only bring great computational costs but also exhibit high
sensitivity. To address these limitations, we first establish a structured
sparse PCA formulation by integrating $\ell_1$-norm and $\ell_{2,1}$-norm to
capture the local and global structures, respectively. Building upon the
off-the-shelf alternating direction method of multipliers (ADMM) optimization
framework, we then design an interpretable deep unfolding network that
translates iterative optimization steps into trainable neural architectures.
This innovation enables automatic learning of the regularization parameters,
effectively bypassing the empirical tuning requirements of conventional
methods. Numerical experiments on benchmark datasets validate the advantages of
our proposed method over the existing state-of-the-art methods. Our code will
be accessible at https://github.com/xianchaoxiu/SPCA-Net.
|
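For readers unfamiliar with the penalties mentioned in the abstract above, one common way to combine element-wise and row-wise sparsity in a structured sparse PCA objective is the following; this is a generic formulation written for illustration, not necessarily the paper's exact objective:
\min_{W \in \mathbb{R}^{d \times k}} \; -\operatorname{tr}\!\left(W^{\top} X^{\top} X W\right) \;+\; \lambda_{1}\,\lVert W \rVert_{1} \;+\; \lambda_{2}\,\lVert W \rVert_{2,1}, \qquad \text{s.t. } W^{\top} W = I,
where $\lVert W \rVert_{1} = \sum_{ij} |w_{ij}|$ induces local (element-wise) sparsity, $\lVert W \rVert_{2,1} = \sum_{i} \lVert w_{i\cdot} \rVert_{2}$ induces global (row-wise) sparsity useful for feature selection, and $\lambda_{1}, \lambda_{2}$ are the regularization parameters that the deep unfolding network learns instead of tuning by grid search.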
2503.00808 | KaShun Shum | Kashun Shum, Yuzhen Huang, Hongjian Zou, Qi Ding, Yixuan Liao, Xiaoxin
Chen, Qian Liu, Junxian He | Predictive Data Selection: The Data That Predicts Is the Data That
Teaches | 22 pages | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Language model pretraining involves training on extensive corpora, where data
quality plays a pivotal role. In this work, we aim to directly estimate the
contribution of data during pretraining and select pretraining data in an
efficient manner. Specifically, we draw inspiration from recent findings
showing that compression efficiency (i.e., the normalized loss) of diverse
models on certain text correlates strongly with their downstream performance,
when the text domain aligns with the downstream benchmarks (Huang et al., 2024).
Building on this observation, we hypothesize that data on which model losses
are predictive of downstream abilities also contribute effectively to learning.
To leverage this insight, we introduce predictive data selection (PreSelect), a
lightweight and efficient data selection method that requires training and
deploying only a fastText-based scorer. Through comprehensive experiments with
1B and 3B parameter models, we demonstrate that models trained on 30B tokens
selected with PreSelect surpass the performance of the vanilla baseline trained
on 300B tokens, achieving a 10x reduction in compute requirements. Furthermore,
PreSelect significantly outperforms other competitive data selection baselines,
such as DCLM and FineWeb-Edu on a scale of 3B models trained on 100B tokens. We
open-source our trained data selection scorer along with the curated datasets
at https://github.com/hkust-nlp/PreSelect.
| [
{
"version": "v1",
"created": "Sun, 2 Mar 2025 09:21:28 GMT"
},
{
"version": "v2",
"created": "Tue, 4 Mar 2025 06:15:27 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Apr 2025 10:59:54 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Shum",
"Kashun",
""
],
[
"Huang",
"Yuzhen",
""
],
[
"Zou",
"Hongjian",
""
],
[
"Ding",
"Qi",
""
],
[
"Liao",
"Yixuan",
""
],
[
"Chen",
"Xiaoxin",
""
],
[
"Liu",
"Qian",
""
],
[
"He",
"Junxian",
""
]
] | TITLE: Predictive Data Selection: The Data That Predicts Is the Data That
Teaches
ABSTRACT: Language model pretraining involves training on extensive corpora, where data
quality plays a pivotal role. In this work, we aim to directly estimate the
contribution of data during pretraining and select pretraining data in an
efficient manner. Specifically, we draw inspiration from recent findings
showing that compression efficiency (i.e., the normalized loss) of diverse
models on certain text correlates strongly with their downstream performance,
when the text domain aligns with the downstream benchmarks (Huang et al., 2024).
Building on this observation, we hypothesize that data on which model losses
are predictive of downstream abilities also contribute effectively to learning.
To leverage this insight, we introduce predictive data selection (PreSelect), a
lightweight and efficient data selection method that requires training and
deploying only a fastText-based scorer. Through comprehensive experiments with
1B and 3B parameter models, we demonstrate that models trained on 30B tokens
selected with PreSelect surpass the performance of the vanilla baseline trained
on 300B tokens, achieving a 10x reduction in compute requirements. Furthermore,
PreSelect significantly outperforms other competitive data selection baselines,
such as DCLM and FineWeb-Edu on a scale of 3B models trained on 100B tokens. We
open-source our trained data selection scorer along with the curated datasets
at https://github.com/hkust-nlp/PreSelect.
|
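As a minimal sketch of the selection idea described above (score documents by a normalized loss, then keep the best-scoring fraction), the Python below is illustrative only: the function names and the bits-per-character normalization are assumptions, and the actual PreSelect scorer is a trained fastText classifier that is not reproduced here.
import math

def bits_per_character(total_nll_nats: float, num_chars: int) -> float:
    # Normalized loss: summed negative log-likelihood (in nats) converted to
    # bits and divided by text length, so scores are comparable across texts.
    return total_nll_nats / (math.log(2) * num_chars)

def select_top_fraction(documents, score_fn, keep_ratio=0.1):
    # Keep the fraction of documents with the best (lowest) quality score.
    ranked = sorted(documents, key=score_fn)
    return ranked[: max(1, int(len(ranked) * keep_ratio))]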
2503.12507 | Guangqian Guo | Guangqian Guo, Yong Guo, Xuehui Yu, Wenbo Li, Yaoxing Wang, Shan Gao | Segment Any-Quality Images with Generative Latent Space Enhancement | Accepted by CVPR2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Despite their success, Segment Anything Models (SAMs) experience significant
performance drops on severely degraded, low-quality images, limiting their
effectiveness in real-world scenarios. To address this, we propose GleSAM,
which utilizes Generative Latent space Enhancement to boost robustness on
low-quality images, thus enabling generalization across various image
qualities. Specifically, we adapt the concept of latent diffusion to SAM-based
segmentation frameworks and perform the generative diffusion process in the
latent space of SAM to reconstruct high-quality representation, thereby
improving segmentation. Additionally, we introduce two techniques to improve
compatibility between the pre-trained diffusion model and the segmentation
framework. Our method can be applied to pre-trained SAM and SAM2 with only
minimal additional learnable parameters, allowing for efficient optimization.
We also construct the LQSeg dataset with a greater diversity of degradation
types and levels for training and evaluating the model. Extensive experiments
demonstrate that GleSAM significantly improves segmentation robustness on
complex degradations while maintaining generalization to clear images.
Furthermore, GleSAM also performs well on unseen degradations, underscoring the
versatility of our approach and dataset.
| [
{
"version": "v1",
"created": "Sun, 16 Mar 2025 13:58:13 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 04:47:08 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Guo",
"Guangqian",
""
],
[
"Guo",
"Yong",
""
],
[
"Yu",
"Xuehui",
""
],
[
"Li",
"Wenbo",
""
],
[
"Wang",
"Yaoxing",
""
],
[
"Gao",
"Shan",
""
]
] | TITLE: Segment Any-Quality Images with Generative Latent Space Enhancement
ABSTRACT: Despite their success, Segment Anything Models (SAMs) experience significant
performance drops on severely degraded, low-quality images, limiting their
effectiveness in real-world scenarios. To address this, we propose GleSAM,
which utilizes Generative Latent space Enhancement to boost robustness on
low-quality images, thus enabling generalization across various image
qualities. Specifically, we adapt the concept of latent diffusion to SAM-based
segmentation frameworks and perform the generative diffusion process in the
latent space of SAM to reconstruct high-quality representation, thereby
improving segmentation. Additionally, we introduce two techniques to improve
compatibility between the pre-trained diffusion model and the segmentation
framework. Our method can be applied to pre-trained SAM and SAM2 with only
minimal additional learnable parameters, allowing for efficient optimization.
We also construct the LQSeg dataset with a greater diversity of degradation
types and levels for training and evaluating the model. Extensive experiments
demonstrate that GleSAM significantly improves segmentation robustness on
complex degradations while maintaining generalization to clear images.
Furthermore, GleSAM also performs well on unseen degradations, underscoring the
versatility of our approach and dataset.
|
2503.13558 | Jianfei Zhang | Jingyuan Xue, Longfei Wei, Fang Sheng, Jianfei Zhang | Survival Analysis with Machine Learning for Predicting Li-ion Battery
Remaining Useful Life | null | null | null | null | eess.SP cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Battery degradation significantly impacts the reliability and efficiency of
energy storage systems, particularly in electric vehicles (EVs) and industrial
applications. Predicting the remaining useful life (RUL) of lithium-ion
(Li-ion) batteries is crucial for optimizing maintenance schedules, reducing
costs, and improving safety. Traditional RUL prediction methods often struggle
with nonlinear degradation patterns and uncertainty quantification. To address
these challenges, we propose a hybrid survival analysis framework integrating
both statistical and machine-learning-based models for RUL estimation. Our
approach transforms time-series battery data into time-to-failure data using
path signatures, enabling effective survival modeling. We apply five models,
including Cox-based survival models and machine-learning-based methods such as
DeepHit and MTLR, to estimate failure-free probabilities over time. Experiments
conducted on 362 Toyota battery datasets demonstrate the effectiveness of our
approach, achieving high time-dependent AUC and concordance index while
maintaining a low integrated Brier score. The proposed methodology provides
actionable insights for battery manufacturers and engineers, supporting dynamic
maintenance strategies and optimized lifecycle management.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 02:49:34 GMT"
},
{
"version": "v2",
"created": "Fri, 21 Mar 2025 09:53:22 GMT"
},
{
"version": "v3",
"created": "Mon, 24 Mar 2025 10:57:18 GMT"
},
{
"version": "v4",
"created": "Thu, 3 Apr 2025 21:38:07 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Xue",
"Jingyuan",
""
],
[
"Wei",
"Longfei",
""
],
[
"Sheng",
"Fang",
""
],
[
"Zhang",
"Jianfei",
""
]
] | TITLE: Survival Analysis with Machine Learning for Predicting Li-ion Battery
Remaining Useful Life
ABSTRACT: Battery degradation significantly impacts the reliability and efficiency of
energy storage systems, particularly in electric vehicles (EVs) and industrial
applications. Predicting the remaining useful life (RUL) of lithium-ion
(Li-ion) batteries is crucial for optimizing maintenance schedules, reducing
costs, and improving safety. Traditional RUL prediction methods often struggle
with nonlinear degradation patterns and uncertainty quantification. To address
these challenges, we propose a hybrid survival analysis framework integrating
both statistical and machine-learning-based models for RUL estimation. Our
approach transforms time-series battery data into time-to-failure data using
path signatures, enabling effective survival modeling. We apply five models,
including Cox-based survival models and machine-learning-based methods such as
DeepHit and MTLR, to estimate failure-free probabilities over time. Experiments
conducted on 362 Toyota battery datasets demonstrate the effectiveness of our
approach, achieving high time-dependent AUC and concordance index while
maintaining a low integrated Brier score. The proposed methodology provides
actionable insights for battery manufacturers and engineers, supporting dynamic
maintenance strategies and optimized lifecycle management.
|
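The survival-analysis step described above can be illustrated with a small Cox model on hypothetical time-to-failure data; the covariate names and values below are invented, the paper's path-signature features and DeepHit/MTLR models are not reproduced, and lifelines is simply one library that provides a Cox proportional-hazards fitter.
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical table: one row per battery cell, with summary covariates,
# cycles until end-of-life (duration) and an event flag (1 = failure observed,
# 0 = right-censored).
df = pd.DataFrame({
    "feat_a": [0.12, 0.55, 0.31, 0.77, 0.40],
    "feat_b": [1.40, 0.90, 1.10, 0.60, 1.00],
    "cycles_to_eol": [820, 410, 655, 300, 510],
    "failed": [1, 1, 0, 1, 1],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="cycles_to_eol", event_col="failed")
# Failure-free (survival) probability over time for the first cell.
surv = cph.predict_survival_function(df.iloc[:1])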
2503.15683 | Yanis Benidir | Yanis Benidir, Nicolas Gonthier, Clement Mallet | The Change You Want To Detect: Semantic Change Detection In Earth
Observation With Hybrid Data Generation | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Bi-temporal change detection at scale based on Very High Resolution (VHR)
images is crucial for Earth monitoring. This remains poorly addressed so far:
methods either require large volumes of annotated data (semantic case), or are
limited to restricted datasets (binary set-ups). Most approaches do not exhibit
the versatility required for temporal and spatial adaptation: simplicity in
architecture design and pretraining on realistic and comprehensive datasets.
Synthetic datasets are the key solution but still fail to handle complex and
diverse scenes. In this paper, we present HySCDG, a generative pipeline for
creating a large hybrid semantic change detection dataset that contains both
real VHR images and inpainted ones, along with land cover semantic map at both
dates and the change map. Being semantically and spatially guided, HySCDG
generates realistic images, leading to a comprehensive and hybrid
transfer-proof dataset FSC-180k. We evaluate FSC-180k on five change detection
cases (binary and semantic), from zero-shot to mixed and sequential training,
and also under low data regime training. Experiments demonstrate that
pretraining on our hybrid dataset leads to a significant performance boost,
outperforming SyntheWorld, a fully synthetic dataset, in every configuration.
All codes, models, and data are available here:
https://yb23.github.io/projects/cywd/
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 20:32:37 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 14:49:37 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Benidir",
"Yanis",
""
],
[
"Gonthier",
"Nicolas",
""
],
[
"Mallet",
"Clement",
""
]
] | TITLE: The Change You Want To Detect: Semantic Change Detection In Earth
Observation With Hybrid Data Generation
ABSTRACT: Bi-temporal change detection at scale based on Very High Resolution (VHR)
images is crucial for Earth monitoring. This remains poorly addressed so far:
methods either require large volumes of annotated data (semantic case), or are
limited to restricted datasets (binary set-ups). Most approaches do not exhibit
the versatility required for temporal and spatial adaptation: simplicity in
architecture design and pretraining on realistic and comprehensive datasets.
Synthetic datasets are the key solution but still fail to handle complex and
diverse scenes. In this paper, we present HySCDG, a generative pipeline for
creating a large hybrid semantic change detection dataset that contains both
real VHR images and inpainted ones, along with land cover semantic map at both
dates and the change map. Being semantically and spatially guided, HySCDG
generates realistic images, leading to a comprehensive and hybrid
transfer-proof dataset FSC-180k. We evaluate FSC-180k on five change detection
cases (binary and semantic), from zero-shot to mixed and sequential training,
and also under low data regime training. Experiments demonstrate that
pretraining on our hybrid dataset leads to a significant performance boost,
outperforming SyntheWorld, a fully synthetic dataset, in every configuration.
All codes, models, and data are available here:
https://yb23.github.io/projects/cywd/
|
2503.19207 | Rong Wang | Rong Wang, Fabian Prada, Ziyan Wang, Zhongshi Jiang, Chengxiang Yin,
Junxuan Li, Shunsuke Saito, Igor Santesteban, Javier Romero, Rohan Joshi,
Hongdong Li, Jason Saragih, Yaser Sheikh | FRESA: Feedforward Reconstruction of Personalized Skinned Avatars from
Few Images | Published in CVPR 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | We present a novel method for reconstructing personalized 3D human avatars
with realistic animation from only a few images. Due to the large variations in
body shapes, poses, and cloth types, existing methods mostly require hours of
per-subject optimization during inference, which limits their practical
applications. In contrast, we learn a universal prior from over a thousand
clothed humans to achieve instant feedforward generation and zero-shot
generalization. Specifically, instead of rigging the avatar with shared
skinning weights, we jointly infer personalized avatar shape, skinning weights,
and pose-dependent deformations, which effectively improves overall geometric
fidelity and reduces deformation artifacts. Moreover, to normalize pose
variations and resolve coupled ambiguity between canonical shapes and skinning
weights, we design a 3D canonicalization process to produce pixel-aligned
initial conditions, which helps to reconstruct fine-grained geometric details.
We then propose a multi-frame feature aggregation to robustly reduce artifacts
introduced in canonicalization and fuse a plausible avatar preserving
person-specific identities. Finally, we train the model in an end-to-end
framework on a large-scale capture dataset, which contains diverse human
subjects paired with high-quality 3D scans. Extensive experiments show that our
method generates more authentic reconstruction and animation than
state-of-the-arts, and can be directly generalized to inputs from casually
taken phone photos. Project page and code is available at
https://github.com/rongakowang/FRESA.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 23:20:47 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 08:17:08 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Wang",
"Rong",
""
],
[
"Prada",
"Fabian",
""
],
[
"Wang",
"Ziyan",
""
],
[
"Jiang",
"Zhongshi",
""
],
[
"Yin",
"Chengxiang",
""
],
[
"Li",
"Junxuan",
""
],
[
"Saito",
"Shunsuke",
""
],
[
"Santesteban",
"Igor",
""
],
[
"Romero",
"Javier",
""
],
[
"Joshi",
"Rohan",
""
],
[
"Li",
"Hongdong",
""
],
[
"Saragih",
"Jason",
""
],
[
"Sheikh",
"Yaser",
""
]
] | TITLE: FRESA: Feedforward Reconstruction of Personalized Skinned Avatars from
Few Images
ABSTRACT: We present a novel method for reconstructing personalized 3D human avatars
with realistic animation from only a few images. Due to the large variations in
body shapes, poses, and cloth types, existing methods mostly require hours of
per-subject optimization during inference, which limits their practical
applications. In contrast, we learn a universal prior from over a thousand
clothed humans to achieve instant feedforward generation and zero-shot
generalization. Specifically, instead of rigging the avatar with shared
skinning weights, we jointly infer personalized avatar shape, skinning weights,
and pose-dependent deformations, which effectively improves overall geometric
fidelity and reduces deformation artifacts. Moreover, to normalize pose
variations and resolve coupled ambiguity between canonical shapes and skinning
weights, we design a 3D canonicalization process to produce pixel-aligned
initial conditions, which helps to reconstruct fine-grained geometric details.
We then propose a multi-frame feature aggregation to robustly reduce artifacts
introduced in canonicalization and fuse a plausible avatar preserving
person-specific identities. Finally, we train the model in an end-to-end
framework on a large-scale capture dataset, which contains diverse human
subjects paired with high-quality 3D scans. Extensive experiments show that our
method generates more authentic reconstruction and animation than
state-of-the-arts, and can be directly generalized to inputs from casually
taken phone photos. Project page and code is available at
https://github.com/rongakowang/FRESA.
|
2503.20880 | Amaya Gallagher-Syed | Amaya Gallagher-Syed, Henry Senior, Omnia Alwazzan, Elena Pontarini,
Michele Bombardieri, Costantino Pitzalis, Myles J. Lewis, Michael R. Barnes,
Luca Rossi, Gregory Slabaugh | BioX-CPath: Biologically-driven Explainable Diagnostics for Multistain
IHC Computational Pathology | Accepted for publication at CVPR 2025 | null | null | null | cs.CV q-bio.CB q-bio.QM q-bio.TO | http://creativecommons.org/licenses/by/4.0/ | The development of biologically interpretable and explainable models remains
a key challenge in computational pathology, particularly for multistain
immunohistochemistry (IHC) analysis. We present BioX-CPath, an explainable
graph neural network architecture for whole slide image (WSI) classification
that leverages both spatial and semantic features across multiple stains. At
its core, BioX-CPath introduces a novel Stain-Aware Attention Pooling (SAAP)
module that generates biologically meaningful, stain-aware patient embeddings.
Our approach achieves state-of-the-art performance on both Rheumatoid Arthritis
and Sjogren's Disease multistain datasets. Beyond performance metrics,
BioX-CPath provides interpretable insights through stain attention scores,
entropy measures, and stain interaction scores that permit measuring model
alignment with known pathological mechanisms. This biological grounding,
combined with strong classification performance, makes BioX-CPath particularly
suitable for clinical applications where interpretability is key. Source code
and documentation can be found at: https://github.com/AmayaGS/BioX-CPath.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 18:00:22 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 17:47:49 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Gallagher-Syed",
"Amaya",
""
],
[
"Senior",
"Henry",
""
],
[
"Alwazzan",
"Omnia",
""
],
[
"Pontarini",
"Elena",
""
],
[
"Bombardieri",
"Michele",
""
],
[
"Pitzalis",
"Costantino",
""
],
[
"Lewis",
"Myles J.",
""
],
[
"Barnes",
"Michael R.",
""
],
[
"Rossi",
"Luca",
""
],
[
"Slabaugh",
"Gregory",
""
]
] | TITLE: BioX-CPath: Biologically-driven Explainable Diagnostics for Multistain
IHC Computational Pathology
ABSTRACT: The development of biologically interpretable and explainable models remains
a key challenge in computational pathology, particularly for multistain
immunohistochemistry (IHC) analysis. We present BioX-CPath, an explainable
graph neural network architecture for whole slide image (WSI) classification
that leverages both spatial and semantic features across multiple stains. At
its core, BioX-CPath introduces a novel Stain-Aware Attention Pooling (SAAP)
module that generates biologically meaningful, stain-aware patient embeddings.
Our approach achieves state-of-the-art performance on both Rheumatoid Arthritis
and Sjogren's Disease multistain datasets. Beyond performance metrics,
BioX-CPath provides interpretable insights through stain attention scores,
entropy measures, and stain interaction scores that permit measuring model
alignment with known pathological mechanisms. This biological grounding,
combined with strong classification performance, makes BioX-CPath particularly
suitable for clinical applications where interpretability is key. Source code
and documentation can be found at: https://github.com/AmayaGS/BioX-CPath.
|
2503.21530 | Umer Butt | Umer Butt, Stalin Veranasi, G\"unter Neumann | Low-Resource Transliteration for Roman-Urdu and Urdu Using
Transformer-Based Models | null | null | null | null | cs.CL cs.AI | http://creativecommons.org/licenses/by/4.0/ | As the Information Retrieval (IR) field increasingly recognizes the
importance of inclusivity, addressing the needs of low-resource languages
remains a significant challenge. Transliteration between Urdu and its Romanized
form, Roman Urdu, remains underexplored despite the widespread use of both
scripts in South Asia. Prior work using RNNs on the Roman-Urdu-Parl dataset
showed promising results but suffered from poor domain adaptability and limited
evaluation. We propose a transformer-based approach using the m2m100
multilingual translation model, enhanced with masked language modeling (MLM)
pretraining and fine-tuning on both Roman-Urdu-Parl and the domain-diverse
Dakshina dataset. To address previous evaluation flaws, we introduce rigorous
dataset splits and assess performance using BLEU, character-level BLEU, and
CHRF. Our model achieves strong transliteration performance, with Char-BLEU
scores of 96.37 for Urdu->Roman-Urdu and 97.44 for Roman-Urdu->Urdu. These
results outperform both RNN baselines and GPT-4o Mini and demonstrate the
effectiveness of multilingual transfer learning for low-resource
transliteration tasks.
| [
{
"version": "v1",
"created": "Thu, 27 Mar 2025 14:18:50 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 09:55:38 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Butt",
"Umer",
""
],
[
"Veranasi",
"Stalin",
""
],
[
"Neumann",
"Günter",
""
]
] | TITLE: Low-Resource Transliteration for Roman-Urdu and Urdu Using
Transformer-Based Models
ABSTRACT: As the Information Retrieval (IR) field increasingly recognizes the
importance of inclusivity, addressing the needs of low-resource languages
remains a significant challenge. Transliteration between Urdu and its Romanized
form, Roman Urdu, remains underexplored despite the widespread use of both
scripts in South Asia. Prior work using RNNs on the Roman-Urdu-Parl dataset
showed promising results but suffered from poor domain adaptability and limited
evaluation. We propose a transformer-based approach using the m2m100
multilingual translation model, enhanced with masked language modeling (MLM)
pretraining and fine-tuning on both Roman-Urdu-Parl and the domain-diverse
Dakshina dataset. To address previous evaluation flaws, we introduce rigorous
dataset splits and assess performance using BLEU, character-level BLEU, and
CHRF. Our model achieves strong transliteration performance, with Char-BLEU
scores of 96.37 for Urdu->Roman-Urdu and 97.44 for Roman-Urdu->Urdu. These
results outperform both RNN baselines and GPT-4o Mini and demonstrate the
effectiveness of multilingual transfer learning for low-resource
transliteration tasks.
|
2503.22925 | Yanliang Huang | Yanliang Huang, Sebastian Mair, Zhuoqi Zeng, Matthias Althoff | Predictive Traffic Rule Compliance using Reinforcement Learning | 12 pages, 7 figures. Preprint intended for submission to IEEE ITSC
2025 | null | null | null | cs.RO cs.AI | http://creativecommons.org/licenses/by/4.0/ | Autonomous vehicle path planning has reached a stage where safety and
regulatory compliance are crucial. This paper presents an approach that
integrates a motion planner with a deep reinforcement learning model to predict
potential traffic rule violations. Our main innovation is replacing the
standard actor network in an actor-critic method with a motion planning module,
which ensures both stable and interpretable trajectory generation. In this
setup, we use traffic rule robustness as the reward to train a reinforcement
learning agent's critic, and the output of the critic is directly used as the
cost function of the motion planner, which guides the choices of the
trajectory. We incorporate some key interstate rules from the German Road
Traffic Regulation into a rule book and use a graph-based state representation
to handle complex traffic information. Experiments on an open German highway
dataset show that the model can predict and prevent traffic rule violations
beyond the planning horizon, increasing safety and rule compliance in
challenging traffic scenarios.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 01:04:08 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 14:28:47 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Huang",
"Yanliang",
""
],
[
"Mair",
"Sebastian",
""
],
[
"Zeng",
"Zhuoqi",
""
],
[
"Althoff",
"Matthias",
""
]
] | TITLE: Predictive Traffic Rule Compliance using Reinforcement Learning
ABSTRACT: Autonomous vehicle path planning has reached a stage where safety and
regulatory compliance are crucial. This paper presents an approach that
integrates a motion planner with a deep reinforcement learning model to predict
potential traffic rule violations. Our main innovation is replacing the
standard actor network in an actor-critic method with a motion planning module,
which ensures both stable and interpretable trajectory generation. In this
setup, we use traffic rule robustness as the reward to train a reinforcement
learning agent's critic, and the output of the critic is directly used as the
cost function of the motion planner, which guides the choices of the
trajectory. We incorporate some key interstate rules from the German Road
Traffic Regulation into a rule book and use a graph-based state representation
to handle complex traffic information. Experiments on an open German highway
dataset show that the model can predict and prevent traffic rule violations
beyond the planning horizon, increasing safety and rule compliance in
challenging traffic scenarios.
|
2503.23056 | Arjun Roy | Arjun Roy and Stavroula Rizou and Symeon Papadopoulos and Eirini
Ntoutsi | Achieving Socio-Economic Parity through the Lens of EU AI Act | null | null | null | null | cs.CY | http://creativecommons.org/licenses/by/4.0/ | Unfair treatment and discrimination are critical ethical concerns in AI
systems, particularly as their adoption expands across diverse domains.
Addressing these challenges, the recent introduction of the EU AI Act
establishes a unified legal framework to ensure legal certainty for AI
innovation and investment while safeguarding public interests, such as health,
safety, fundamental rights, democracy, and the rule of law (Recital 8). The Act
encourages stakeholders to initiate dialogue on existing AI fairness notions to
address discriminatory outcomes of AI systems. However, these notions often
overlook the critical role of Socio-Economic Status (SES), inadvertently
perpetuating biases that favour the economically advantaged. This is
concerning, given that principles of equalization advocate for equalizing
resources or opportunities to mitigate disadvantages beyond an individual's
control. While provisions for discrimination are laid down in the AI Act,
specialized directions should be broadened, particularly in addressing economic
disparities perpetuated by AI systems. In this work, we explore the limitations
of popular AI fairness notions using a real-world dataset (Adult), highlighting
their inability to address SES-driven disparities. To fill this gap, we propose
a novel fairness notion, Socio-Economic Parity (SEP), which incorporates SES
and promotes positive actions for underprivileged groups while accounting for
factors within an individual's control, such as working hours, which can serve
as a proxy for effort. We define a corresponding fairness measure and optimize
a model constrained by SEP to demonstrate practical utility. Our results show
the effectiveness of SEP in mitigating SES-driven biases. By analyzing the AI
Act alongside our method, we lay a foundation for aligning AI fairness with SES
factors while ensuring legal compliance.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 12:27:27 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 11:39:22 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Roy",
"Arjun",
""
],
[
"Rizou",
"Stavroula",
""
],
[
"Papadopoulos",
"Symeon",
""
],
[
"Ntoutsi",
"Eirini",
""
]
] | TITLE: Achieving Socio-Economic Parity through the Lens of EU AI Act
ABSTRACT: Unfair treatment and discrimination are critical ethical concerns in AI
systems, particularly as their adoption expands across diverse domains.
Addressing these challenges, the recent introduction of the EU AI Act
establishes a unified legal framework to ensure legal certainty for AI
innovation and investment while safeguarding public interests, such as health,
safety, fundamental rights, democracy, and the rule of law (Recital 8). The Act
encourages stakeholders to initiate dialogue on existing AI fairness notions to
address discriminatory outcomes of AI systems. However, these notions often
overlook the critical role of Socio-Economic Status (SES), inadvertently
perpetuating biases that favour the economically advantaged. This is
concerning, given that principles of equalization advocate for equalizing
resources or opportunities to mitigate disadvantages beyond an individual's
control. While provisions for discrimination are laid down in the AI Act,
specialized directions should be broadened, particularly in addressing economic
disparities perpetuated by AI systems. In this work, we explore the limitations
of popular AI fairness notions using a real-world dataset (Adult), highlighting
their inability to address SES-driven disparities. To fill this gap, we propose
a novel fairness notion, Socio-Economic Parity (SEP), which incorporates SES
and promotes positive actions for underprivileged groups while accounting for
factors within an individual's control, such as working hours, which can serve
as a proxy for effort. We define a corresponding fairness measure and optimize
a model constrained by SEP to demonstrate practical utility. Our results show
the effectiveness of SEP in mitigating SES-driven biases. By analyzing the AI
Act alongside our method, we lay a foundation for aligning AI fairness with SES
factors while ensuring legal compliance.
|
2503.23130 | Long Bai | Boyi Ma, Yanguang Zhao, Jie Wang, Guankun Wang, Kun Yuan, Tong Chen,
Long Bai, Hongliang Ren | Can DeepSeek Reason Like a Surgeon? An Empirical Evaluation for
Vision-Language Understanding in Robotic-Assisted Surgery | Technical Report | null | null | null | cs.CV cs.CL cs.RO | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The DeepSeek models have shown exceptional performance in general scene
understanding, question-answering (QA), and text generation tasks, owing to
their efficient training paradigm and strong reasoning capabilities. In this
study, we investigate the dialogue capabilities of the DeepSeek model in
robotic surgery scenarios, focusing on tasks such as Single Phrase QA, Visual
QA, and Detailed Description. The Single Phrase QA tasks further include
sub-tasks such as surgical instrument recognition, action understanding, and
spatial position analysis. We conduct extensive evaluations using publicly
available datasets, including EndoVis18 and CholecT50, along with their
corresponding dialogue data. Our empirical study shows that, compared to
existing general-purpose multimodal large language models, DeepSeek-VL2
performs better on complex understanding tasks in surgical scenes.
Additionally, although DeepSeek-V3 is purely a language model, we find that
when image tokens are directly inputted, the model demonstrates better
performance on single-sentence QA tasks. However, overall, the DeepSeek models
still fall short of meeting the clinical requirements for understanding
surgical scenes. Under general prompts, DeepSeek models lack the ability to
effectively analyze global surgical concepts and fail to provide detailed
insights into surgical scenarios. Based on our observations, we argue that the
DeepSeek models are not ready for vision-language tasks in surgical contexts
without fine-tuning on surgery-specific datasets.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 15:48:46 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 07:14:07 GMT"
},
{
"version": "v3",
"created": "Fri, 4 Apr 2025 02:45:12 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Ma",
"Boyi",
""
],
[
"Zhao",
"Yanguang",
""
],
[
"Wang",
"Jie",
""
],
[
"Wang",
"Guankun",
""
],
[
"Yuan",
"Kun",
""
],
[
"Chen",
"Tong",
""
],
[
"Bai",
"Long",
""
],
[
"Ren",
"Hongliang",
""
]
] | TITLE: Can DeepSeek Reason Like a Surgeon? An Empirical Evaluation for
Vision-Language Understanding in Robotic-Assisted Surgery
ABSTRACT: The DeepSeek models have shown exceptional performance in general scene
understanding, question-answering (QA), and text generation tasks, owing to
their efficient training paradigm and strong reasoning capabilities. In this
study, we investigate the dialogue capabilities of the DeepSeek model in
robotic surgery scenarios, focusing on tasks such as Single Phrase QA, Visual
QA, and Detailed Description. The Single Phrase QA tasks further include
sub-tasks such as surgical instrument recognition, action understanding, and
spatial position analysis. We conduct extensive evaluations using publicly
available datasets, including EndoVis18 and CholecT50, along with their
corresponding dialogue data. Our empirical study shows that, compared to
existing general-purpose multimodal large language models, DeepSeek-VL2
performs better on complex understanding tasks in surgical scenes.
Additionally, although DeepSeek-V3 is purely a language model, we find that
when image tokens are directly inputted, the model demonstrates better
performance on single-sentence QA tasks. However, overall, the DeepSeek models
still fall short of meeting the clinical requirements for understanding
surgical scenes. Under general prompts, DeepSeek models lack the ability to
effectively analyze global surgical concepts and fail to provide detailed
insights into surgical scenarios. Based on our observations, we argue that the
DeepSeek models are not ready for vision-language tasks in surgical contexts
without fine-tuning on surgery-specific datasets.
|
2504.00059 | Vitor Cerqueira | Vitor Cerqueira, Luis Roque, Carlos Soares | ModelRadar: Aspect-based Forecast Evaluation | null | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Accurate evaluation of forecasting models is essential for ensuring reliable
predictions. Current practices for evaluating and comparing forecasting models
focus on summarising performance into a single score, using metrics such as
SMAPE. While convenient, averaging performance over all samples dilutes
relevant information about model behavior under varying conditions. This
limitation is especially problematic for time series forecasting, where
multiple layers of averaging--across time steps, horizons, and multiple time
series in a dataset--can mask relevant performance variations. We address this
limitation by proposing ModelRadar, a framework for evaluating univariate time
series forecasting models across multiple aspects, such as stationarity,
presence of anomalies, or forecasting horizons. We demonstrate the advantages
of this framework by comparing 24 forecasting methods, including classical
approaches and different machine learning algorithms. NHITS, a state-of-the-art
neural network architecture, performs best overall but its superiority varies
with forecasting conditions. For instance, concerning the forecasting horizon,
we found that NHITS (and also other neural networks) only outperforms classical
approaches for multi-step ahead forecasting. Another relevant insight is that
classical approaches such as ETS or Theta are notably more robust in the
presence of anomalies. These and other findings highlight the importance of
aspect-based model evaluation for both practitioners and researchers.
ModelRadar is available as a Python package.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 11:50:45 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Cerqueira",
"Vitor",
""
],
[
"Roque",
"Luis",
""
],
[
"Soares",
"Carlos",
""
]
] | TITLE: ModelRadar: Aspect-based Forecast Evaluation
ABSTRACT: Accurate evaluation of forecasting models is essential for ensuring reliable
predictions. Current practices for evaluating and comparing forecasting models
focus on summarising performance into a single score, using metrics such as
SMAPE. While convenient, averaging performance over all samples dilutes
relevant information about model behavior under varying conditions. This
limitation is especially problematic for time series forecasting, where
multiple layers of averaging--across time steps, horizons, and multiple time
series in a dataset--can mask relevant performance variations. We address this
limitation by proposing ModelRadar, a framework for evaluating univariate time
series forecasting models across multiple aspects, such as stationarity,
presence of anomalies, or forecasting horizons. We demonstrate the advantages
of this framework by comparing 24 forecasting methods, including classical
approaches and different machine learning algorithms. NHITS, a state-of-the-art
neural network architecture, performs best overall but its superiority varies
with forecasting conditions. For instance, concerning the forecasting horizon,
we found that NHITS (and also other neural networks) only outperforms classical
approaches for multi-step ahead forecasting. Another relevant insight is that
classical approaches such as ETS or Theta are notably more robust in the
presence of anomalies. These and other findings highlight the importance of
aspect-based model evaluation for both practitioners and researchers.
ModelRadar is available as a Python package.
|
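The aspect-based idea described above (reporting error per condition rather than one global average) can be sketched in a few lines; the function names are illustrative and this is not the API of the ModelRadar package.
import numpy as np

def smape(y_true, y_pred):
    # Symmetric MAPE (%) for one series, with a small epsilon to avoid
    # division by zero.
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    denom = (np.abs(y_true) + np.abs(y_pred)) / 2.0 + 1e-8
    return 100.0 * np.mean(np.abs(y_true - y_pred) / denom)

def scores_by_aspect(per_series_errors, aspect_tags):
    # Group per-series errors by an aspect label (e.g. "stationary",
    # "has_anomalies", or a horizon bucket) and average within each group,
    # instead of collapsing everything into a single number.
    grouped = {}
    for err, tag in zip(per_series_errors, aspect_tags):
        grouped.setdefault(tag, []).append(err)
    return {tag: float(np.mean(v)) for tag, v in grouped.items()}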
2504.00396 | Xian Xiaole | Xiaole Xian, Zhichao Liao, Qingyu Li, Wenyu Qin, Pengfei Wan, Weicheng
Xie, Long Zeng, Linlin Shen, Pingfa Feng | SPF-Portrait: Towards Pure Portrait Customization with Semantic
Pollution-Free Fine-tuning | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Fine-tuning a pre-trained Text-to-Image (T2I) model on a tailored portrait
dataset is the mainstream method for text-driven customization of portrait
attributes. Due to Semantic Pollution during fine-tuning, existing methods
struggle to maintain the original model's behavior and achieve incremental
learning while customizing target attributes. To address this issue, we propose
SPF-Portrait, a pioneering work to purely understand customized semantics while
eliminating semantic pollution in text-driven portrait customization. In our
SPF-Portrait, we propose a dual-path pipeline that introduces the original
model as a reference for the conventional fine-tuning path. Through contrastive
learning, we ensure adaptation to target attributes and purposefully align
other unrelated attributes with the original portrait. We introduce a novel
Semantic-Aware Fine Control Map, which represents the precise response regions
of the target semantics, to spatially guide the alignment process between the
contrastive paths. This alignment process not only effectively preserves the
performance of the original model but also avoids over-alignment. Furthermore,
we propose a novel response enhancement mechanism to reinforce the performance
of target attributes, while mitigating representation discrepancy inherent in
direct cross-modal supervision. Extensive experiments demonstrate that
SPF-Portrait achieves state-of-the-art performance. Project webpage:
https://spf-portrait.github.io/SPF-Portrait/
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 03:37:30 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 07:56:33 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Xian",
"Xiaole",
""
],
[
"Liao",
"Zhichao",
""
],
[
"Li",
"Qingyu",
""
],
[
"Qin",
"Wenyu",
""
],
[
"Wan",
"Pengfei",
""
],
[
"Xie",
"Weicheng",
""
],
[
"Zeng",
"Long",
""
],
[
"Shen",
"Linlin",
""
],
[
"Feng",
"Pingfa",
""
]
] | TITLE: SPF-Portrait: Towards Pure Portrait Customization with Semantic
Pollution-Free Fine-tuning
ABSTRACT: Fine-tuning a pre-trained Text-to-Image (T2I) model on a tailored portrait
dataset is the mainstream method for text-driven customization of portrait
attributes. Due to Semantic Pollution during fine-tuning, existing methods
struggle to maintain the original model's behavior and achieve incremental
learning while customizing target attributes. To address this issue, we propose
SPF-Portrait, a pioneering work to purely understand customized semantics while
eliminating semantic pollution in text-driven portrait customization. In our
SPF-Portrait, we propose a dual-path pipeline that introduces the original
model as a reference for the conventional fine-tuning path. Through contrastive
learning, we ensure adaptation to target attributes and purposefully align
other unrelated attributes with the original portrait. We introduce a novel
Semantic-Aware Fine Control Map, which represents the precise response regions
of the target semantics, to spatially guide the alignment process between the
contrastive paths. This alignment process not only effectively preserves the
performance of the original model but also avoids over-alignment. Furthermore,
we propose a novel response enhancement mechanism to reinforce the performance
of target attributes, while mitigating representation discrepancy inherent in
direct cross-modal supervision. Extensive experiments demonstrate that
SPF-Portrait achieves state-of-the-art performance. Project webpage:
https://spf-portrait.github.io/SPF-Portrait/
|
2504.00589 | Owen Cook | Owen Cook, Jake Vasilakes, Ian Roberts and Xingyi Song | Efficient Annotator Reliability Assessment with EffiARA | null | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Data annotation is an essential component of the machine learning pipeline;
it is also a costly and time-consuming process. With the introduction of
transformer-based models, annotation at the document level is increasingly
popular; however, there is no standard framework for structuring such tasks.
The EffiARA annotation framework is, to our knowledge, the first project to
support the whole annotation pipeline, from understanding the resources
required for an annotation task to compiling the annotated dataset and gaining
insights into the reliability of individual annotators as well as the dataset
as a whole. The framework's efficacy is supported by two previous studies: one
improving classification performance through annotator-reliability-based soft
label aggregation and sample weighting, and the other increasing the overall
agreement among annotators through identifying and replacing an
unreliable annotator. This work introduces the EffiARA Python package and its
accompanying webtool, which provides an accessible graphical user interface for
the system. We open-source the EffiARA Python package at
https://github.com/MiniEggz/EffiARA and the webtool is publicly accessible at
https://effiara.gate.ac.uk.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 09:48:09 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 22:24:47 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Cook",
"Owen",
""
],
[
"Vasilakes",
"Jake",
""
],
[
"Roberts",
"Ian",
""
],
[
"Song",
"Xingyi",
""
]
] | TITLE: Efficient Annotator Reliability Assessment with EffiARA
ABSTRACT: Data annotation is an essential component of the machine learning pipeline;
it is also a costly and time-consuming process. With the introduction of
transformer-based models, annotation at the document level is increasingly
popular; however, there is no standard framework for structuring such tasks.
The EffiARA annotation framework is, to our knowledge, the first project to
support the whole annotation pipeline, from understanding the resources
required for an annotation task to compiling the annotated dataset and gaining
insights into the reliability of individual annotators as well as the dataset
as a whole. The framework's efficacy is supported by two previous studies: one
improving classification performance through annotator-reliability-based soft
label aggregation and sample weighting, and the other increasing the overall
agreement among annotators through removing identifying and replacing an
unreliable annotator. This work introduces the EffiARA Python package and its
accompanying webtool, which provides an accessible graphical user interface for
the system. We open-source the EffiARA Python package at
https://github.com/MiniEggz/EffiARA and the webtool is publicly accessible at
https://effiara.gate.ac.uk.
|
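The reliability-based soft-label aggregation mentioned above can be illustrated as follows; this is a sketch of the general idea under assumed inputs, not the EffiARA package's actual API.
import numpy as np

def reliability_weighted_soft_label(votes, reliability, num_classes):
    # votes: {annotator_id: class_index}; reliability: {annotator_id: weight}.
    # Each annotator contributes a one-hot vote scaled by their reliability
    # score; the result is normalised into a soft label distribution that a
    # classifier can be trained against.
    dist = np.zeros(num_classes, dtype=float)
    for annotator, cls in votes.items():
        dist[cls] += reliability.get(annotator, 1.0)
    return dist / dist.sum()

# Example: two reliable annotators agree, one less reliable annotator dissents.
print(reliability_weighted_soft_label(
    {"a": 1, "b": 1, "c": 0},
    {"a": 0.9, "b": 0.8, "c": 0.4},
    num_classes=3,
))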
2504.02178 | Shanilka Haturusinghe | Shanilka Haturusinghe, Tharindu Cyril Weerasooriya, Marcos Zampieri,
Christopher M. Homan, S.R. Liyanage | Subasa - Adapting Language Models for Low-resourced Offensive Language
Detection in Sinhala | Accepted to appear at NAACL SRW 2025 | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Accurate detection of offensive language is essential for a number of
applications related to social media safety. There is a sharp contrast in
performance in this task between low and high-resource languages. In this
paper, we adapt fine-tuning strategies that have not been previously explored
for Sinhala in the downstream task of offensive language detection. Using this
approach, we introduce four models: "Subasa-XLM-R", which incorporates an
intermediate Pre-Finetuning step using Masked Rationale Prediction. Two
variants of "Subasa-Llama" and "Subasa-Mistral", are fine-tuned versions of
Llama (3.2) and Mistral (v0.3), respectively, with a task-specific strategy. We
evaluate our models on the SOLD benchmark dataset for Sinhala offensive
language detection. All our models outperform existing baselines. Subasa-XLM-R
achieves the highest Macro F1 score (0.84) surpassing state-of-the-art large
language models like GPT-4o when evaluated on the same SOLD benchmark dataset
under zero-shot settings. The models and code are publicly available.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 23:46:49 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Haturusinghe",
"Shanilka",
""
],
[
"Weerasooriya",
"Tharindu Cyril",
""
],
[
"Zampieri",
"Marcos",
""
],
[
"Homan",
"Christopher M.",
""
],
[
"Liyanage",
"S. R.",
""
]
] | TITLE: Subasa - Adapting Language Models for Low-resourced Offensive Language
Detection in Sinhala
ABSTRACT: Accurate detection of offensive language is essential for a number of
applications related to social media safety. There is a sharp contrast in
performance in this task between low and high-resource languages. In this
paper, we adapt fine-tuning strategies that have not been previously explored
for Sinhala in the downstream task of offensive language detection. Using this
approach, we introduce four models: "Subasa-XLM-R", which incorporates an
intermediate Pre-Finetuning step using Masked Rationale Prediction. Two
variants of "Subasa-Llama" and "Subasa-Mistral", are fine-tuned versions of
Llama (3.2) and Mistral (v0.3), respectively, with a task-specific strategy. We
evaluate our models on the SOLD benchmark dataset for Sinhala offensive
language detection. All our models outperform existing baselines. Subasa-XLM-R
achieves the highest Macro F1 score (0.84) surpassing state-of-the-art large
language models like GPT-4o when evaluated on the same SOLD benchmark dataset
under zero-shot settings. The models and code are publicly available.
|
2504.02249 | Sungwoo Kang | Sungwoo Kang | Stock Price Prediction Using Triple Barrier Labeling and Raw OHLCV Data:
Evidence from Korean Markets | 7 pages, 2 figures | null | null | null | cs.CE | http://creativecommons.org/licenses/by/4.0/ | This paper demonstrates that deep learning models trained on raw OHLCV
(open-high-low-close-volume) data can achieve comparable performance to
traditional machine learning (ML) models using technical indicators for stock
price prediction in Korean markets. While previous studies have emphasized the
importance of technical indicators and feature engineering, we show that a
simple LSTM network trained on raw OHLCV data alone can match the performance
of sophisticated ML models that incorporate technical indicators. Using a
dataset of Korean stocks from 2006 to 2024, we optimize the triple barrier
labeling parameters to achieve balanced label proportions with a 29-day window
and 9\% barriers. Our experiments reveal that LSTM networks achieve similar
performance to traditional machine learning models like XGBoost, despite using
only raw OHLCV data without any technical indicators. Furthermore, we identify
that the optimal window size varies with model hidden size, with a
configuration of window size 100 and hidden size 8 yielding the best
performance. Additionally, our results confirm that using full OHLCV data
provides better predictive accuracy compared to using only close price or close
price with volume. These findings challenge conventional approaches to feature
engineering in financial forecasting and suggest that simpler approaches
focusing on raw data and appropriate model selection may be more effective than
complex feature engineering strategies.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 03:30:50 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 10:51:24 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Kang",
"Sungwoo",
""
]
] | TITLE: Stock Price Prediction Using Triple Barrier Labeling and Raw OHLCV Data:
Evidence from Korean Markets
ABSTRACT: This paper demonstrates that deep learning models trained on raw OHLCV
(open-high-low-close-volume) data can achieve comparable performance to
traditional machine learning (ML) models using technical indicators for stock
price prediction in Korean markets. While previous studies have emphasized the
importance of technical indicators and feature engineering, we show that a
simple LSTM network trained on raw OHLCV data alone can match the performance
of sophisticated ML models that incorporate technical indicators. Using a
dataset of Korean stocks from 2006 to 2024, we optimize the triple barrier
labeling parameters to achieve balanced label proportions with a 29-day window
and 9\% barriers. Our experiments reveal that LSTM networks achieve similar
performance to traditional machine learning models like XGBoost, despite using
only raw OHLCV data without any technical indicators. Furthermore, we identify
that the optimal window size varies with model hidden size, with a
configuration of window size 100 and hidden size 8 yielding the best
performance. Additionally, our results confirm that using full OHLCV data
provides better predictive accuracy compared to using only close price or close
price with volume. These findings challenge conventional approaches to feature
engineering in financial forecasting and suggest that simpler approaches
focusing on raw data and appropriate model selection may be more effective than
complex feature engineering strategies.
|
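A rough sketch of the triple barrier labeling described above, using the 29-day window and 9% barriers quoted in the abstract; this is an illustration rather than the paper's code, and it ignores refinements such as volatility-scaled barriers.
import numpy as np

def triple_barrier_label(close, t0, window=29, barrier=0.09):
    # Label observation t0 by whichever barrier the forward price path hits
    # first: a +barrier return -> 1, a -barrier return -> -1, neither within
    # `window` steps (the vertical barrier) -> 0.
    p0 = float(close[t0])
    path = np.asarray(close[t0 + 1 : t0 + 1 + window], dtype=float)
    for price in path:
        ret = price / p0 - 1.0
        if ret >= barrier:
            return 1
        if ret <= -barrier:
            return -1
    return 0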
2504.02587 | Yan Ma | Yan Ma and Steffi Chern and Xuyang Shen and Yiran Zhong and Pengfei
Liu | Rethinking RL Scaling for Vision Language Models: A Transparent,
From-Scratch Framework and Comprehensive Evaluation Scheme | Code is public and available at: https://github.com/GAIR-NLP/MAYE | null | null | null | cs.LG cs.CL cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reinforcement learning (RL) has recently shown strong potential in improving
the reasoning capabilities of large language models and is now being actively
extended to vision-language models (VLMs). However, existing RL applications in
VLMs often rely on heavily engineered frameworks that hinder reproducibility
and accessibility, while lacking standardized evaluation protocols, making it
difficult to compare results or interpret training dynamics. This work
introduces a transparent, from-scratch framework for RL in VLMs, offering a
minimal yet functional four-step pipeline validated across multiple models and
datasets. In addition, a standardized evaluation scheme is proposed to assess
training dynamics and reflective behaviors. Extensive experiments on visual
reasoning tasks uncover key empirical findings: response length is sensitive to
random seeds, reflection correlates with output length, and RL consistently
outperforms supervised fine-tuning (SFT) in generalization, even with
high-quality data. These findings, together with the proposed framework, aim to
establish a reproducible baseline and support broader engagement in RL-based
VLM research.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 13:53:28 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 01:07:06 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Ma",
"Yan",
""
],
[
"Chern",
"Steffi",
""
],
[
"Shen",
"Xuyang",
""
],
[
"Zhong",
"Yiran",
""
],
[
"Liu",
"Pengfei",
""
]
] | TITLE: Rethinking RL Scaling for Vision Language Models: A Transparent,
From-Scratch Framework and Comprehensive Evaluation Scheme
ABSTRACT: Reinforcement learning (RL) has recently shown strong potential in improving
the reasoning capabilities of large language models and is now being actively
extended to vision-language models (VLMs). However, existing RL applications in
VLMs often rely on heavily engineered frameworks that hinder reproducibility
and accessibility, while lacking standardized evaluation protocols, making it
difficult to compare results or interpret training dynamics. This work
introduces a transparent, from-scratch framework for RL in VLMs, offering a
minimal yet functional four-step pipeline validated across multiple models and
datasets. In addition, a standardized evaluation scheme is proposed to assess
training dynamics and reflective behaviors. Extensive experiments on visual
reasoning tasks uncover key empirical findings: response length is sensitive to
random seeds, reflection correlates with output length, and RL consistently
outperforms supervised fine-tuning (SFT) in generalization, even with
high-quality data. These findings, together with the proposed framework, aim to
establish a reproducible baseline and support broader engagement in RL-based
VLM research.
|
2504.02598 | Bharani Jayakumar | Bharani Jayakumar and Orkun \"Ozo\u{g}lu | Graphs are everywhere -- Psst! In Music Recommendation too | 5 pages, 4 figures, 2 tables, and a few equations | null | null | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | In recent years, graphs have gained prominence across various domains,
especially in recommendation systems. Within the realm of music recommendation,
graphs play a crucial role in enhancing genre-based recommendations by
integrating Mel-Frequency Cepstral Coefficients (MFCC) with advanced graph
embeddings. This study explores the efficacy of Graph Convolutional Networks
(GCN), GraphSAGE, and Graph Transformer (GT) models in learning embeddings that
effectively capture intricate relationships between music items and genres
represented within graph structures. Through comprehensive empirical
evaluations on diverse real-world music datasets, our findings consistently
demonstrate that these graph-based approaches outperform traditional methods
that rely solely on MFCC features or collaborative filtering techniques.
Specifically, the graph-enhanced models achieve notably higher accuracy in
predicting genre-specific preferences and offering relevant music suggestions
to users. These results underscore the effectiveness of utilizing graph
embeddings to enrich feature representations and exploit latent associations
within music data, thereby illustrating their potential to advance the
capabilities of personalized and context-aware music recommendation systems.
Keywords: graphs, recommendation systems, neural networks, MFCC
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 14:00:52 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 07:51:18 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Jayakumar",
"Bharani",
""
],
[
"Özoğlu",
"Orkun",
""
]
] | TITLE: Graphs are everywhere -- Psst! In Music Recommendation too
ABSTRACT: In recent years, graphs have gained prominence across various domains,
especially in recommendation systems. Within the realm of music recommendation,
graphs play a crucial role in enhancing genre-based recommendations by
integrating Mel-Frequency Cepstral Coefficients (MFCC) with advanced graph
embeddings. This study explores the efficacy of Graph Convolutional Networks
(GCN), GraphSAGE, and Graph Transformer (GT) models in learning embeddings that
effectively capture intricate relationships between music items and genres
represented within graph structures. Through comprehensive empirical
evaluations on diverse real-world music datasets, our findings consistently
demonstrate that these graph-based approaches outperform traditional methods
that rely solely on MFCC features or collaborative filtering techniques.
Specifically, the graph-enhanced models achieve notably higher accuracy in
predicting genre-specific preferences and offering relevant music suggestions
to users. These results underscore the effectiveness of utilizing graph
embeddings to enrich feature representations and exploit latent associations
within music data, thereby illustrating their potential to advance the
capabilities of personalized and context-aware music recommendation systems.
Keywords: graphs, recommendation systems, neural networks, MFCC
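As a rough illustration of the kind of pipeline described above, the sketch below extracts mean-pooled MFCC features per track with librosa and propagates them through a single symmetric-normalized GCN step in plain PyTorch. The waveforms, adjacency, and layer sizes are synthetic placeholders, and the layer is the standard GCN formulation rather than the exact models evaluated in the paper.
```python
import numpy as np
import torch
import librosa

# 1) Per-track features: mean-pooled MFCCs from (here, synthetic) audio.
def track_mfcc(y: np.ndarray, sr: int = 22050, n_mfcc: int = 13) -> np.ndarray:
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)  # (n_mfcc, frames)
    return mfcc.mean(axis=1)                                # (n_mfcc,)

n_tracks = 8
features = torch.tensor(
    np.stack([track_mfcc(np.random.randn(22050)) for _ in range(n_tracks)]),
    dtype=torch.float32,
)

# 2) One symmetric-normalized GCN propagation step over a toy track graph:
#    H' = relu( D^{-1/2} (A + I) D^{-1/2} H W )
adj = torch.eye(n_tracks) + (torch.rand(n_tracks, n_tracks) > 0.7).float()
adj = ((adj + adj.T) > 0).float()                 # undirected placeholder adjacency
deg_inv_sqrt = adj.sum(1).pow(-0.5)
norm_adj = deg_inv_sqrt[:, None] * adj * deg_inv_sqrt[None, :]

W = torch.nn.Linear(features.shape[1], 16, bias=False)
embeddings = torch.relu(norm_adj @ W(features))   # (n_tracks, 16) graph embeddings
print(embeddings.shape)
```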
|
2504.02737 | Nusrat Jahan Mozumder | Nusrat Jahan Mozumder, Felipe Toledo, Swaroopa Dola and Matthew B.
Dwyer | RBT4DNN: Requirements-based Testing of Neural Networks | null | null | null | null | cs.SE cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Deep neural network (DNN) testing is crucial for the reliability and safety
of critical systems, where failures can have severe consequences. Although
various techniques have been developed to create robustness test suites,
requirements-based testing for DNNs remains largely unexplored - yet such tests
are recognized as an essential component of software validation of critical
systems. In this work, we propose a requirements-based test suite generation
method that uses structured natural language requirements formulated in a
semantic feature space to create test suites by prompting text-conditional
latent diffusion models with the requirement precondition and then using the
associated postcondition to define a test oracle to judge outputs of the DNN
under test. We investigate the approach using fine-tuned variants of
pre-trained generative models. Our experiments on the MNIST, CelebA-HQ,
ImageNet, and autonomous car driving datasets demonstrate that the generated
test suites are realistic, diverse, consistent with preconditions, and capable
of revealing faults.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 16:24:49 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 01:24:07 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Mozumder",
"Nusrat Jahan",
""
],
[
"Toledo",
"Felipe",
""
],
[
"Dola",
"Swaroopa",
""
],
[
"Dwyer",
"Matthew B.",
""
]
] | TITLE: RBT4DNN: Requirements-based Testing of Neural Networks
ABSTRACT: Deep neural network (DNN) testing is crucial for the reliability and safety
of critical systems, where failures can have severe consequences. Although
various techniques have been developed to create robustness test suites,
requirements-based testing for DNNs remains largely unexplored - yet such tests
are recognized as an essential component of software validation of critical
systems. In this work, we propose a requirements-based test suite generation
method that uses structured natural language requirements formulated in a
semantic feature space to create test suites by prompting text-conditional
latent diffusion models with the requirement precondition and then using the
associated postcondition to define a test oracle to judge outputs of the DNN
under test. We investigate the approach using fine-tuned variants of
pre-trained generative models. Our experiments on the MNIST, CelebA-HQ,
ImageNet, and autonomous car driving datasets demonstrate that the generated
test suites are realistic, diverse, consistent with preconditions, and capable
of revealing faults.
|
2504.02800 | Zhuohan Ge | Zhuohan Ge, Nicole Hu, Darian Li, Yubo Wang, Shihao Qi, Yuming Xu, Han
Shi, Jason Zhang | A Survey of Large Language Models in Mental Health Disorder Detection on
Social Media | 13 pages, 4 figures | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The detection and intervention of mental health issues represent a critical
global research focus, and social media data has been recognized as an
important resource for mental health research. However, how to utilize Large
Language Models (LLMs) for mental health problem detection on social media
poses significant challenges. Hence, this paper explores the potential of LLM
applications in social media data analysis, covering not only the most common
psychological disorders, such as depression and anxiety, but also psychotic and
externalizing disorders. It summarizes the application methods of LLMs along
different dimensions, such as text data analysis and the detection of mental
disorders, and reveals the major challenges and shortcomings of current
research. In addition, the paper provides an overview of popular datasets and
evaluation metrics. The survey in this paper
provides a comprehensive frame of reference for researchers in the field of
mental health, while demonstrating the great potential of LLMs in mental health
detection to facilitate the further application of LLMs in future mental health
interventions.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 17:43:14 GMT"
},
{
"version": "v2",
"created": "Fri, 4 Apr 2025 02:07:59 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Ge",
"Zhuohan",
""
],
[
"Hu",
"Nicole",
""
],
[
"Li",
"Darian",
""
],
[
"Wang",
"Yubo",
""
],
[
"Qi",
"Shihao",
""
],
[
"Xu",
"Yuming",
""
],
[
"Shi",
"Han",
""
],
[
"Zhang",
"Jason",
""
]
] | TITLE: A Survey of Large Language Models in Mental Health Disorder Detection on
Social Media
ABSTRACT: The detection and intervention of mental health issues represent a critical
global research focus, and social media data has been recognized as an
important resource for mental health research. However, how to utilize Large
Language Models (LLMs) for mental health problem detection on social media
poses significant challenges. Hence, this paper explores the potential of LLM
applications in social media data analysis, covering not only the most common
psychological disorders, such as depression and anxiety, but also psychotic and
externalizing disorders. It summarizes the application methods of LLMs along
different dimensions, such as text data analysis and the detection of mental
disorders, and reveals the major challenges and shortcomings of current
research. In addition, the paper provides an overview of popular datasets and
evaluation metrics. The survey in this paper
provides a comprehensive frame of reference for researchers in the field of
mental health, while demonstrating the great potential of LLMs in mental health
detection to facilitate the further application of LLMs in future mental health
interventions.
|
2504.02842 | Ting Tan | Baozhuo Su, Qingli Dou, Kang Liu, Zhengxian Qu, Jerry Deng, Ting Tan,
and Yanan Gu | Enhanced ECG Arrhythmia Detection Accuracy by Optimizing
Divergence-Based Data Fusion | 13 pages, 8 figures, 6 tables | null | null | null | eess.SP cs.LG stat.AP stat.ME | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | AI computation in healthcare faces significant challenges when clinical
datasets are limited and heterogeneous. Integrating datasets from multiple
sources and different equipment is critical for effective AI computation but is
complicated by their diversity, complexity, and lack of representativeness, so
we often need to join multiple datasets for analysis. The method currently in
use is fusion after normalization, but it can introduce redundant information,
decreasing the signal-to-noise ratio and reducing classification accuracy. To
tackle this issue, we propose a feature-based fusion algorithm utilizing Kernel
Density Estimation (KDE) and Kullback-Leibler (KL) divergence. Our approach
first preprocesses the extracted features and performs continuous estimation on
them, then employs gradient descent to identify the optimal linear parameters
that minimize the KL divergence between the feature distributions. We use our
in-house datasets, consisting of ECG signals collected with different equipment
from 2000 healthy and 2000 diseased individuals, and verify our method on the
publicly available PTB-XL dataset, which contains 21,837 ECG recordings from
18,885 patients. We employ a Light Gradient Boosting Machine (LGBM) model to
perform the binary classification. The results demonstrate that the proposed
fusion method significantly enhances feature-based classification accuracy for
abnormal ECG cases in the merged datasets, compared to the normalization
method. This data fusion strategy provides a new approach to processing
heterogeneous datasets for optimal AI computation results.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 12:16:48 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Su",
"Baozhuo",
""
],
[
"Dou",
"Qingli",
""
],
[
"Liu",
"Kang",
""
],
[
"Qu",
"Zhengxian",
""
],
[
"Deng",
"Jerry",
""
],
[
"Tan",
"Ting",
""
],
[
"Gu",
"Yanan",
""
]
] | TITLE: Enhanced ECG Arrhythmia Detection Accuracy by Optimizing
Divergence-Based Data Fusion
ABSTRACT: AI computation in healthcare faces significant challenges when clinical
datasets are limited and heterogeneous. Integrating datasets from multiple
sources and different equipment is critical for effective AI computation but is
complicated by their diversity, complexity, and lack of representativeness, so
we often need to join multiple datasets for analysis. The method currently in
use is fusion after normalization, but it can introduce redundant information,
decreasing the signal-to-noise ratio and reducing classification accuracy. To
tackle this issue, we propose a feature-based fusion algorithm utilizing Kernel
Density Estimation (KDE) and Kullback-Leibler (KL) divergence. Our approach
first preprocesses the extracted features and performs continuous estimation on
them, then employs gradient descent to identify the optimal linear parameters
that minimize the KL divergence between the feature distributions. We use our
in-house datasets, consisting of ECG signals collected with different equipment
from 2000 healthy and 2000 diseased individuals, and verify our method on the
publicly available PTB-XL dataset, which contains 21,837 ECG recordings from
18,885 patients. We employ a Light Gradient Boosting Machine (LGBM) model to
perform the binary classification. The results demonstrate that the proposed
fusion method significantly enhances feature-based classification accuracy for
abnormal ECG cases in the merged datasets, compared to the normalization
method. This data fusion strategy provides a new approach to processing
heterogeneous datasets for optimal AI computation results.
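The following is a small, self-contained sketch of the core idea: estimate each source's feature distribution with a Gaussian KDE and use gradient descent to learn a linear map for one source that minimizes the KL divergence to the reference source. The synthetic features, bandwidth, grid, and optimizer settings are illustrative assumptions, not the paper's configuration.
```python
import torch

def kde_pdf(x_eval, samples, bandwidth=0.3):
    """Gaussian kernel density estimate of `samples`, evaluated at `x_eval`."""
    diff = (x_eval[:, None] - samples[None, :]) / bandwidth
    return torch.exp(-0.5 * diff**2).mean(dim=1) / (bandwidth * (2 * torch.pi) ** 0.5)

# Two sources measuring the same ECG-derived feature with different scales/offsets.
torch.manual_seed(0)
feat_a = torch.randn(500)                 # reference source
feat_b = 1.8 * torch.randn(500) + 2.0     # source to be aligned

# Learn a linear map w * x + b for source B that minimizes KL(p_A || p_{wB+b}).
w = torch.tensor(1.0, requires_grad=True)
b = torch.tensor(0.0, requires_grad=True)
grid = torch.linspace(-6.0, 6.0, 200)
opt = torch.optim.Adam([w, b], lr=0.05)

for step in range(300):
    opt.zero_grad()
    p = kde_pdf(grid, feat_a) + 1e-8
    q = kde_pdf(grid, w * feat_b + b) + 1e-8
    kl = torch.sum(p * torch.log(p / q)) * (grid[1] - grid[0])  # Riemann-sum KL
    kl.backward()
    opt.step()

print(f"learned map: x -> {w.item():.2f} * x + {b.item():.2f}, KL = {kl.item():.4f}")
```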
|
2504.02863 | Girme Yohannis Bade | Girma Yohannis Bade, Zahra Ahani, Olga Kolesnikova, Jos\'e Luis
Oropeza, Grigori Sidorov | GS_DravidianLangTech@2025: Women Targeted Abusive Texts Detection on
Social Media | null | null | null | null | cs.CL cs.SI | http://creativecommons.org/licenses/by/4.0/ | The increasing misuse of social media has become a concern; however,
technological solutions are being developed to moderate its content
effectively. This paper focuses on detecting abusive texts targeting women on
social media platforms. Abusive speech refers to communication intended to harm
or incite hatred against vulnerable individuals or groups. Specifically, this
study aims to identify abusive language directed toward women. To achieve this,
we utilized logistic regression and BERT as base models, trained on datasets
sourced from DravidianLangTech@2025 for the Tamil and Malayalam languages. The
models were evaluated on test datasets, resulting in a 0.729 macro F1 score for
BERT and 0.6279 for logistic regression in Tamil and Malayalam, respectively.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 00:00:07 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Bade",
"Girma Yohannis",
""
],
[
"Ahani",
"Zahra",
""
],
[
"Kolesnikova",
"Olga",
""
],
[
"Oropeza",
"José Luis",
""
],
[
"Sidorov",
"Grigori",
""
]
] | TITLE: GS_DravidianLangTech@2025: Women Targeted Abusive Texts Detection on
Social Media
ABSTRACT: The increasing misuse of social media has become a concern; however,
technological solutions are being developed to moderate its content
effectively. This paper focuses on detecting abusive texts targeting women on
social media platforms. Abusive speech refers to communication intended to harm
or incite hatred against vulnerable individuals or groups. Specifically, this
study aims to identify abusive language directed toward women. To achieve this,
we utilized logistic regression and BERT as base models, trained on datasets
sourced from DravidianLangTech@2025 for the Tamil and Malayalam languages. The
models were evaluated on test datasets, resulting in a 0.729 macro F1 score for
BERT and 0.6279 for logistic regression in Tamil and Malayalam, respectively.
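For the logistic regression baseline mentioned above, a minimal scikit-learn sketch might look as follows; the tiny placeholder corpus and the character n-gram TF-IDF configuration are assumptions for illustration, not the shared task data or the authors' exact setup.
```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import f1_score
from sklearn.pipeline import make_pipeline

# Tiny placeholder corpus; the real task uses the DravidianLangTech@2025
# Tamil/Malayalam training and test splits.
train_texts = ["example abusive sentence", "neutral comment about music",
               "another hostile remark", "friendly greeting text"]
train_labels = [1, 0, 1, 0]
test_texts = ["hostile sentence example", "polite neutral message"]
test_labels = [1, 0]

# Character n-grams are a common choice for Dravidian-script text, since word
# tokenization is harder; this is an assumption, not the authors' exact setup.
model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(max_iter=1000),
)
model.fit(train_texts, train_labels)
preds = model.predict(test_texts)
print("macro F1:", f1_score(test_labels, preds, average="macro"))
```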
|
2504.02864 | Peter Adelson | Peter Adelson and Julian Nyarko | The Material Contracts Corpus | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | This paper introduces the Material Contracts Corpus (MCC), a publicly
available dataset comprising over one million contracts filed by public
companies with the U.S. Securities and Exchange Commission (SEC) between 2000
and 2023. The MCC facilitates empirical research on contract design and legal
language, and supports the development of AI-based legal tools. Contracts in
the corpus are categorized by agreement type and linked to specific parties
using machine learning and natural language processing techniques, including a
fine-tuned LLaMA-2 model for contract classification. The MCC further provides
metadata such as filing form, document format, and amendment status. We
document trends in contractual language, length, and complexity over time, and
highlight the dominance of employment and security agreements in SEC filings.
This resource is available for bulk download and online access at
https://mcc.law.stanford.edu.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 00:06:04 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Adelson",
"Peter",
""
],
[
"Nyarko",
"Julian",
""
]
] | TITLE: The Material Contracts Corpus
ABSTRACT: This paper introduces the Material Contracts Corpus (MCC), a publicly
available dataset comprising over one million contracts filed by public
companies with the U.S. Securities and Exchange Commission (SEC) between 2000
and 2023. The MCC facilitates empirical research on contract design and legal
language, and supports the development of AI-based legal tools. Contracts in
the corpus are categorized by agreement type and linked to specific parties
using machine learning and natural language processing techniques, including a
fine-tuned LLaMA-2 model for contract classification. The MCC further provides
metadata such as filing form, document format, and amendment status. We
document trends in contractual language, length, and complexity over time, and
highlight the dominance of employment and security agreements in SEC filings.
This resource is available for bulk download and online access at
https://mcc.law.stanford.edu.
|
2504.02866 | Filip Biljecki | Xiucheng Liang, Jinheng Xie, Tianhong Zhao, Rudi Stouffs, Filip
Biljecki | OpenFACADES: An Open Framework for Architectural Caption and Attribute
Data Enrichment via Street View Imagery | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Building properties, such as height, usage, and material composition, play a
crucial role in spatial data infrastructures, supporting applications such as
energy simulation, risk assessment, and environmental modeling. Despite their
importance, comprehensive and high-quality building attribute data remain
scarce in many urban areas. Recent advances have enabled the extraction and
tagging of objective building attributes using remote sensing and street-level
imagery. However, establishing a method and pipeline that integrates diverse
open datasets, acquires holistic building imagery at scale, and infers
comprehensive building attributes remains a significant challenge. As one of
the first efforts in this direction, this study bridges these gaps by
introducing OpenFACADES, an open
framework that leverages multimodal crowdsourced data to enrich building
profiles with both objective attributes and semantic descriptors through
multimodal large language models. Our methodology proceeds in three major
steps. First, we integrate street-level image metadata from Mapillary with
OpenStreetMap geometries via isovist analysis, effectively identifying images
that provide suitable vantage points for observing target buildings. Second, we
automate the detection of building facades in panoramic imagery and tailor a
reprojection approach to convert objects into holistic perspective views that
approximate real-world observation. Third, we introduce an innovative approach
that harnesses and systematically investigates the capabilities of open-source
large vision-language models (VLMs) for multi-attribute prediction and
open-vocabulary captioning in building-level analytics, leveraging a globally
sourced dataset of 30,180 labeled images from seven cities. Evaluation shows
that fine-tuned VLMs excel in multi-attribute inference, outperforming
single-attribute computer vision models and zero-shot ChatGPT-4o.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 08:20:13 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Liang",
"Xiucheng",
""
],
[
"Xie",
"Jinheng",
""
],
[
"Zhao",
"Tianhong",
""
],
[
"Stouffs",
"Rudi",
""
],
[
"Biljecki",
"Filip",
""
]
] | TITLE: OpenFACADES: An Open Framework for Architectural Caption and Attribute
Data Enrichment via Street View Imagery
ABSTRACT: Building properties, such as height, usage, and material composition, play a
crucial role in spatial data infrastructures, supporting applications such as
energy simulation, risk assessment, and environmental modeling. Despite their
importance, comprehensive and high-quality building attribute data remain
scarce in many urban areas. Recent advances have enabled the extraction and
tagging of objective building attributes using remote sensing and street-level
imagery. However, establishing a method and pipeline that integrates diverse
open datasets, acquires holistic building imagery at scale, and infers
comprehensive building attributes remains a significant challenge. As one of
the first efforts in this direction, this study bridges these gaps by
introducing OpenFACADES, an open
framework that leverages multimodal crowdsourced data to enrich building
profiles with both objective attributes and semantic descriptors through
multimodal large language models. Our methodology proceeds in three major
steps. First, we integrate street-level image metadata from Mapillary with
OpenStreetMap geometries via isovist analysis, effectively identifying images
that provide suitable vantage points for observing target buildings. Second, we
automate the detection of building facades in panoramic imagery and tailor a
reprojection approach to convert objects into holistic perspective views that
approximate real-world observation. Third, we introduce an innovative approach
that harnesses and systematically investigates the capabilities of open-source
large vision-language models (VLMs) for multi-attribute prediction and
open-vocabulary captioning in building-level analytics, leveraging a globally
sourced dataset of 30,180 labeled images from seven cities. Evaluation shows
that fine-tuned VLMs excel in multi-attribute inference, outperforming
single-attribute computer vision models and zero-shot ChatGPT-4o.
|
2504.02868 | Ariadna Toh\`a-Dalmau | Ariadna Toh\`a-Dalmau (1), Josep Rosin\'es-Fonoll (2), Enrique Romero
(1 and 3), Ferran Mazzanti (4), Ruben Martin-Pinardel (5), Sonia Marias-Perez
(2), Carolina Bernal-Morales (2, 5 and 6), Rafael Castro-Dominguez (2),
Andrea Mendez (2), Emilio Ortega (5, 6 and 7), Irene Vinagre (5, 6 and 7),
Marga Gimenez (5, 6 and 7), Alfredo Vellido (1 and 3) and Javier
Zarranz-Ventura (2, 5, 6 and 7) ((1) Department of Computer Science,
Universitat Polit\`ecnica de Catalunya (2) Institut Cl\'inic
d'Oftalmolog\'ia, Hospital Cl\'inic de Barcelona (3) Intelligent Data Science
and Artificial Intelligence Research Center (4) Department of Physics,
Universitat Polit\`ecnica de Catalunya (5) August Pi i Sunyer Biomedical
Research Institute (6) Diabetes Unit, Hospital Cl\'inic de Barcelona (7)
School of Medicine, Universitat de Barcelona) | Machine Learning Prediction of Cardiovascular Risk in Type 1 Diabetes
Mellitus Using Radiomics Features from Multimodal Retinal Images | 19 pages, 7 figures. Submitted to Ophthalmology Science, under second
review | null | null | null | eess.IV cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | This study aimed to develop a machine learning (ML) algorithm capable of
determining cardiovascular risk in multimodal retinal images from patients with
type 1 diabetes mellitus, distinguishing between moderate, high, and very
high-risk levels. Radiomic features were extracted from fundus retinography,
optical coherence tomography (OCT), and OCT angiography (OCTA) images. ML
models were trained using these features either individually or combined with
clinical data. A dataset of 597 eyes (359 individuals) was analyzed, and models
trained only with radiomic features achieved AUC values of (0.79 $\pm$ 0.03)
for identifying moderate risk cases from high and very high-risk cases, and
(0.73 $\pm$ 0.07) for distinguishing between high and very high-risk cases. The
addition of clinical variables improved all AUC values, reaching (0.99 $\pm$
0.01) for identifying moderate risk cases and (0.95 $\pm$ 0.02) for
differentiating between high and very high-risk cases. For very high CV risk,
radiomics combined with OCT+OCTA metrics and ocular data achieved an AUC of
(0.89 $\pm$ 0.02) without systemic data input. These results demonstrate that
radiomic features obtained from multimodal retinal images are useful for
discriminating and classifying CV risk labels, highlighting the potential of
this oculomics approach for CV risk assessment.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 10:25:38 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Tohà-Dalmau",
"Ariadna",
"",
"1 and 3"
],
[
"Rosinés-Fonoll",
"Josep",
"",
"1 and 3"
],
[
"Romero",
"Enrique",
"",
"1 and 3"
],
[
"Mazzanti",
"Ferran",
"",
"2, 5 and 6"
],
[
"Martin-Pinardel",
"Ruben",
"",
"2, 5 and 6"
],
[
"Marias-Perez",
"Sonia",
"",
"2, 5 and 6"
],
[
"Bernal-Morales",
"Carolina",
"",
"2, 5 and 6"
],
[
"Castro-Dominguez",
"Rafael",
"",
"5, 6 and 7"
],
[
"Mendez",
"Andrea",
"",
"5, 6 and 7"
],
[
"Ortega",
"Emilio",
"",
"5, 6 and 7"
],
[
"Vinagre",
"Irene",
"",
"5, 6 and 7"
],
[
"Gimenez",
"Marga",
"",
"5, 6 and 7"
],
[
"Vellido",
"Alfredo",
"",
"1 and 3"
],
[
"Zarranz-Ventura",
"Javier",
"",
"2, 5, 6 and 7"
]
] | TITLE: Machine Learning Prediction of Cardiovascular Risk in Type 1 Diabetes
Mellitus Using Radiomics Features from Multimodal Retinal Images
ABSTRACT: This study aimed to develop a machine learning (ML) algorithm capable of
determining cardiovascular risk in multimodal retinal images from patients with
type 1 diabetes mellitus, distinguishing between moderate, high, and very
high-risk levels. Radiomic features were extracted from fundus retinography,
optical coherence tomography (OCT), and OCT angiography (OCTA) images. ML
models were trained using these features either individually or combined with
clinical data. A dataset of 597 eyes (359 individuals) was analyzed, and models
trained only with radiomic features achieved AUC values of (0.79 $\pm$ 0.03)
for identifying moderate risk cases from high and very high-risk cases, and
(0.73 $\pm$ 0.07) for distinguishing between high and very high-risk cases. The
addition of clinical variables improved all AUC values, reaching (0.99 $\pm$
0.01) for identifying moderate risk cases and (0.95 $\pm$ 0.02) for
differentiating between high and very high-risk cases. For very high CV risk,
radiomics combined with OCT+OCTA metrics and ocular data achieved an AUC of
(0.89 $\pm$ 0.02) without systemic data input. These results demonstrate that
radiomic features obtained from multimodal retinal images are useful for
discriminating and classifying CV risk labels, highlighting the potential of
this oculomics approach for CV risk assessment.
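A schematic way to reproduce the kind of comparison reported above (radiomic features alone versus radiomic plus clinical features, scored by AUC) is sketched below with synthetic data; the feature counts, classifier choice, and split are placeholder assumptions rather than the study's protocol.
```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 597                                   # eyes, as in the abstract
radiomic = rng.normal(size=(n, 40))       # placeholder radiomic features
clinical = rng.normal(size=(n, 6))        # placeholder clinical variables
y = (radiomic[:, 0] + clinical[:, 0] + rng.normal(scale=1.0, size=n) > 0).astype(int)

def auc_for(features):
    X_tr, X_te, y_tr, y_te = train_test_split(features, y, test_size=0.3, random_state=0)
    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
    return roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])

print("radiomics only      :", round(auc_for(radiomic), 3))
print("radiomics + clinical:", round(auc_for(np.hstack([radiomic, clinical])), 3))
```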
|
2504.02870 | Frank Po Wen Lo | Frank P.-W. Lo, Jianing Qiu, Zeyu Wang, Haibao Yu, Yeming Chen, Gao
Zhang, Benny Lo | AI Hiring with LLMs: A Context-Aware and Explainable Multi-Agent
Framework for Resume Screening | Accepted by CVPR 2025 Workshop | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Resume screening is a critical yet time-intensive process in talent
acquisition, requiring recruiters to analyze vast volumes of job applications
while remaining objective, accurate, and fair. With the advancements in Large
Language Models (LLMs), their reasoning capabilities and extensive knowledge
bases demonstrate new opportunities to streamline and automate recruitment
workflows. In this work, we propose a multi-agent framework for resume
screening using LLMs to systematically process and evaluate resumes. The
framework consists of four core agents, including a resume extractor, an
evaluator, a summarizer, and a score formatter. To enhance the contextual
relevance of candidate assessments, we integrate Retrieval-Augmented Generation
(RAG) within the resume evaluator, allowing incorporation of external knowledge
sources, such as industry-specific expertise, professional certifications,
university rankings, and company-specific hiring criteria. This dynamic
adaptation enables personalized recruitment, bridging the gap between AI
automation and talent acquisition. We assess the effectiveness of our approach
by comparing AI-generated scores with ratings provided by HR professionals on a
dataset of anonymized online resumes. The findings highlight the potential of
multi-agent RAG-LLM systems in automating resume screening, enabling more
efficient and scalable hiring workflows.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 12:56:39 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Lo",
"Frank P. -W.",
""
],
[
"Qiu",
"Jianing",
""
],
[
"Wang",
"Zeyu",
""
],
[
"Yu",
"Haibao",
""
],
[
"Chen",
"Yeming",
""
],
[
"Zhang",
"Gao",
""
],
[
"Lo",
"Benny",
""
]
] | TITLE: AI Hiring with LLMs: A Context-Aware and Explainable Multi-Agent
Framework for Resume Screening
ABSTRACT: Resume screening is a critical yet time-intensive process in talent
acquisition, requiring recruiters to analyze vast volumes of job applications
while remaining objective, accurate, and fair. With the advancements in Large
Language Models (LLMs), their reasoning capabilities and extensive knowledge
bases demonstrate new opportunities to streamline and automate recruitment
workflows. In this work, we propose a multi-agent framework for resume
screening using LLMs to systematically process and evaluate resumes. The
framework consists of four core agents, including a resume extractor, an
evaluator, a summarizer, and a score formatter. To enhance the contextual
relevance of candidate assessments, we integrate Retrieval-Augmented Generation
(RAG) within the resume evaluator, allowing incorporation of external knowledge
sources, such as industry-specific expertise, professional certifications,
university rankings, and company-specific hiring criteria. This dynamic
adaptation enables personalized recruitment, bridging the gap between AI
automation and talent acquisition. We assess the effectiveness of our approach
by comparing AI-generated scores with ratings provided by HR professionals on a
dataset of anonymized online resumes. The findings highlight the potential of
multi-agent RAG-LLM systems in automating resume screening, enabling more
efficient and scalable hiring workflows.
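The four-agent structure described above can be sketched as a simple pipeline; in the stub below, llm() and retrieve() stand in for a real chat-completion call and a real retriever, and all prompts, names, and the toy knowledge base are illustrative assumptions rather than the authors' implementation.
```python
from dataclasses import dataclass
from typing import Dict, List

def llm(prompt: str) -> str:
    # Stub for any chat-completion call, so the pipeline structure stays runnable.
    return f"[LLM output for: {prompt[:40]}...]"

def retrieve(query: str, knowledge_base: List[str], k: int = 2) -> List[str]:
    """Toy RAG retriever: rank snippets by shared-word overlap with the query."""
    overlap = lambda doc: len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(knowledge_base, key=overlap, reverse=True)[:k]

@dataclass
class ResumeScreeningPipeline:
    knowledge_base: List[str]  # e.g. hiring criteria, certifications, rankings

    def extract(self, resume: str) -> str:
        return llm(f"Extract structured fields from this resume:\n{resume}")

    def evaluate(self, extracted: str, job: str) -> str:
        context = "\n".join(retrieve(job, self.knowledge_base))
        return llm(f"Evaluate the candidate for '{job}' given:\n{context}\n{extracted}")

    def summarize(self, evaluation: str) -> str:
        return llm(f"Summarize this evaluation:\n{evaluation}")

    def format_score(self, summary: str) -> Dict[str, str]:
        return {"summary": summary, "score": llm(f"Score 1-10:\n{summary}")}

    def run(self, resume: str, job: str) -> Dict[str, str]:
        return self.format_score(self.summarize(self.evaluate(self.extract(resume), job)))

pipeline = ResumeScreeningPipeline(knowledge_base=[
    "Hiring criteria: backend roles emphasize Python and cloud experience",
    "AWS certification is treated as a strong plus for engineering candidates",
])
print(pipeline.run("Jane Doe. 5 years Python, AWS certified.", "Backend engineer, Python, cloud"))
```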
|
2504.02872 | Giuseppe Cascavilla | Ingmar Bakermans, Daniel De Pascale, Gon\c{c}alo Marcelino, Giuseppe
Cascavilla, and Zeno Geradts | Scraping the Shadows: Deep Learning Breakthroughs in Dark Web
Intelligence | 17 pages, 17 images | null | null | null | cs.CL cs.AI cs.CY cs.IR | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Darknet markets (DNMs) facilitate the trade of illegal goods on a global
scale. Gathering data on DNMs is critical to ensuring law enforcement agencies
can effectively combat crime. Manually extracting data from DNMs is an
error-prone and time-consuming task. Aiming to automate this process, we develop
a framework for extracting data from DNMs and evaluate the application of three
state-of-the-art Named Entity Recognition (NER) models, ELMo-BiLSTM
\citep{ShahEtAl2022}, UniversalNER \citep{ZhouEtAl2024}, and GLiNER
\citep{ZaratianaEtAl2023}, at the task of extracting complex entities from DNM
product listing pages. We propose a new annotated dataset, which we use to
train, fine-tune, and evaluate the models. Our findings show that
state-of-the-art NER models perform well in information extraction from DNMs,
achieving 91% Precision, 96% Recall, and an F1 score of 94%. In addition,
fine-tuning enhances model performance, with UniversalNER achieving the best
performance.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 16:12:19 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Bakermans",
"Ingmar",
""
],
[
"De Pascale",
"Daniel",
""
],
[
"Marcelino",
"Gonçalo",
""
],
[
"Cascavilla",
"Giuseppe",
""
],
[
"Geradts",
"Zeno",
""
]
] | TITLE: Scraping the Shadows: Deep Learning Breakthroughs in Dark Web
Intelligence
ABSTRACT: Darknet markets (DNMs) facilitate the trade of illegal goods on a global
scale. Gathering data on DNMs is critical to ensuring law enforcement agencies
can effectively combat crime. Manually extracting data from DNMs is an
error-prone and time-consuming task. Aiming to automate this process, we develop
a framework for extracting data from DNMs and evaluate the application of three
state-of-the-art Named Entity Recognition (NER) models, ELMo-BiLSTM
\citep{ShahEtAl2022}, UniversalNER \citep{ZhouEtAl2024}, and GLiNER
\citep{ZaratianaEtAl2023}, at the task of extracting complex entities from DNM
product listing pages. We propose a new annotated dataset, which we use to
train, fine-tune, and evaluate the models. Our findings show that
state-of-the-art NER models perform well in information extraction from DNMs,
achieving 91% Precision, 96% Recall, and an F1 score of 94%. In addition,
fine-tuning enhances model performance, with UniversalNER achieving the best
performance.
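As a generic illustration of the extraction step (not the authors' fine-tuned ELMo-BiLSTM, UniversalNER, or GLiNER checkpoints), a Hugging Face token-classification pipeline applied to a mock listing might look like this; the model name and example text are placeholders.
```python
from transformers import pipeline

# A standard public NER checkpoint used purely as a stand-in; DNM-specific
# entities (vendor, quantity, price) would require the fine-tuned models
# described in the paper.
ner = pipeline("token-classification", model="dslim/bert-base-NER",
               aggregation_strategy="simple")

listing = "Shipped from Germany, 5g sample, vendor CryptoKing, escrow accepted, price 0.012 BTC"
for entity in ner(listing):
    print(entity["entity_group"], "->", entity["word"], f"({entity['score']:.2f})")
```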
|
2504.02873 | Minjia Mao | Dongjun Wei, Minjia Mao, Xiao Fang, Michael Chau | Short-PHD: Detecting Short LLM-generated Text with Topological Data
Analysis After Off-topic Content Insertion | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | The malicious usage of large language models (LLMs) has motivated the
detection of LLM-generated texts. Previous work in topological data analysis
shows that the persistent homology dimension (PHD) of text embeddings can serve
as a more robust and promising score than other zero-shot methods. However,
effectively detecting short LLM-generated texts remains a challenge. This paper
presents Short-PHD, a zero-shot LLM-generated text detection method tailored
for short texts. Short-PHD stabilizes the estimation of the previous PHD method
for short texts by inserting off-topic content before the given input text and
identifies LLM-generated text based on an established detection threshold.
Experimental results on both public and generated datasets demonstrate that
Short-PHD outperforms existing zero-shot methods in short LLM-generated text
detection. Implementation codes are available online.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 21:26:49 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Wei",
"Dongjun",
""
],
[
"Mao",
"Minjia",
""
],
[
"Fang",
"Xiao",
""
],
[
"Chau",
"Michael",
""
]
] | TITLE: Short-PHD: Detecting Short LLM-generated Text with Topological Data
Analysis After Off-topic Content Insertion
ABSTRACT: The malicious usage of large language models (LLMs) has motivated the
detection of LLM-generated texts. Previous work in topological data analysis
shows that the persistent homology dimension (PHD) of text embeddings can serve
as a more robust and promising score than other zero-shot methods. However,
effectively detecting short LLM-generated texts remains a challenge. This paper
presents Short-PHD, a zero-shot LLM-generated text detection method tailored
for short texts. Short-PHD stabilizes the estimation of the previous PHD method
for short texts by inserting off-topic content before the given input text and
identifies LLM-generated text based on an established detection threshold.
Experimental results on both public and generated datasets demonstrate that
Short-PHD outperforms existing zero-shot methods in short LLM-generated text
detection. Implementation codes are available online.
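A rough sketch of the detection flow is given below. Since the paper's persistent homology dimension estimator is not reproduced here, a simple TwoNN intrinsic-dimension estimate stands in for the PHD score, and the random per-token embedder, threshold, and decision direction (lower dimension flagged as LLM-generated, as in earlier PHD work) are illustrative assumptions.
```python
import numpy as np

def two_nn_dimension(points: np.ndarray) -> float:
    """TwoNN intrinsic-dimension estimate (a stand-in for the PHD score)."""
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    nearest_two = np.sort(d, axis=1)[:, :2]           # r1, r2 per point
    mu = nearest_two[:, 1] / nearest_two[:, 0]
    return len(points) / np.sum(np.log(mu))

def detect_llm_text(text: str, off_topic: str, embed, threshold: float) -> bool:
    """Prepend off-topic content to stabilize the estimate, then threshold the score."""
    token_embeddings = embed(off_topic + " " + text)  # (num_tokens, dim) array
    return two_nn_dimension(token_embeddings) < threshold

# Toy embedder: random per-token vectors, just to keep the sketch executable.
rng = np.random.default_rng(0)
toy_embed = lambda s: rng.normal(size=(len(s.split()), 32))
print(detect_llm_text("a short suspicious sentence", "The weather in spring is mild.",
                      toy_embed, threshold=9.0))
```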
|
2504.02874 | Luis Felipe | Luis Felipe, Carlos Garcia, Issam El Naqa, Monique Shotande, Aakash
Tripathi, Vivek Rudrapatna, Ghulam Rasool, Danielle Bitterman, Gilmer Valdes | TheBlueScrubs-v1, a comprehensive curated medical dataset derived from
the internet | 22 pages, 8 figures, 10 tables | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | The need for robust and diverse data sets to train clinical large language
models (cLLMs) is critical given that currently available public repositories
often prove too limited in size or scope for comprehensive medical use. While
resources like PubMed provide foundational medical literature, they capture
only a narrow range of formal publications and omit the broader medical
discourse on the internet. To address these deficits, we introduce
TheBlueScrubs-v1, a curated dataset of over 25 billion medical tokens - nearly
three times larger than PubMed - drawn from a broad-scale internet corpus. Our
two-stage filtering pipeline employs a Logistic Regression model for document
screening (achieving an AUC of approximately 0.95 on external validation),
followed by verification via a 70B-parameter Llama 3.1 instruct model. Each
text is assigned three LLM-based quality scores encompassing medical relevance,
precision and factual detail, and safety and ethical standards. Clinician
reviews confirm high concordance with these automated evaluations, and a
specialized cancer classifier further labels approximately 11 billion oncology
tokens. Two demonstration tasks highlight the dataset's practical value: first,
we distill the safety evaluations to a smaller BERT-style model that reaches an
AUC near 0.96 on unseen data; second, we fine-tune a compact LLM on a filtered
subset, showing measurable improvements over standard baselines in medical
benchmarks as well as private ones. This Data Descriptor details the dataset's
creation and validation, underscoring its potential utility for medical AI
research.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 22:25:19 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Felipe",
"Luis",
""
],
[
"Garcia",
"Carlos",
""
],
[
"Naqa",
"Issam El",
""
],
[
"Shotande",
"Monique",
""
],
[
"Tripathi",
"Aakash",
""
],
[
"Rudrapatna",
"Vivek",
""
],
[
"Rasool",
"Ghulam",
""
],
[
"Bitterman",
"Danielle",
""
],
[
"Valdes",
"Gilmer",
""
]
] | TITLE: TheBlueScrubs-v1, a comprehensive curated medical dataset derived from
the internet
ABSTRACT: The need for robust and diverse data sets to train clinical large language
models (cLLMs) is critical given that currently available public repositories
often prove too limited in size or scope for comprehensive medical use. While
resources like PubMed provide foundational medical literature, they capture
only a narrow range of formal publications and omit the broader medical
discourse on the internet. To address these deficits, we introduce
TheBlueScrubs-v1, a curated dataset of over 25 billion medical tokens - nearly
three times larger than PubMed - drawn from a broad-scale internet corpus. Our
two-stage filtering pipeline employs a Logistic Regression model for document
screening (achieving an AUC of approximately 0.95 on external validation),
followed by verification via a 70B-parameter Llama 3.1 instruct model. Each
text is assigned three LLM-based quality scores encompassing medical relevance,
precision and factual detail, and safety and ethical standards. Clinician
reviews confirm high concordance with these automated evaluations, and a
specialized cancer classifier further labels approximately 11 billion oncology
tokens. Two demonstration tasks highlight the dataset's practical value: first,
we distill the safety evaluations to a smaller BERT-style model that reaches an
AUC near 0.96 on unseen data; second, we fine-tune a compact LLM on a filtered
subset, showing measurable improvements over standard baselines in medical
benchmarks as well as private ones. This Data Descriptor details the dataset's
creation and validation, underscoring its potential utility for medical AI
research.
|
2504.02875 | Priyanka Ladha | Liuxin Yang and Priyanka Ladha | Real Time Animator: High-Quality Cartoon Style Transfer in 6 Animation
Styles on Images and Videos | 9 pages, images and videos with link | null | null | null | cs.GR | http://creativecommons.org/licenses/by/4.0/ | This paper presents a comprehensive pipeline that integrates state-of-the-art
techniques to achieve high-quality cartoon style transfer for educational
images and videos. The proposed approach combines the Inversion-based Style
Transfer (InST) framework for both image and video stylization, the
Pre-Trained Image Processing Transformer (IPT) for post-denoising, and the
Domain-Calibrated Translation Network (DCT-Net) for more consistent video style
transfer. By fine-tuning InST with specific cartoon styles, applying IPT for
artifact reduction, and leveraging DCT-Net for temporal consistency, the
pipeline generates visually appealing and educationally effective stylized
content. Extensive experiments and evaluations using the scenery and monuments
dataset demonstrate the superiority of the proposed approach in terms of style
transfer accuracy, content preservation, and visual quality compared to the
baseline method, AdaAttN. The CLIP similarity scores further validate the
effectiveness of InST in capturing style attributes while maintaining semantic
content. The proposed pipeline streamlines the creation of engaging educational
content, empowering educators and content creators to produce visually
captivating and informative materials efficiently.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 23:56:11 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Yang",
"Liuxin",
""
],
[
"Ladha",
"Priyanka",
""
]
] | TITLE: Real Time Animator: High-Quality Cartoon Style Transfer in 6 Animation
Styles on Images and Videos
ABSTRACT: This paper presents a comprehensive pipeline that integrates state-of-the-art
techniques to achieve high-quality cartoon style transfer for educational
images and videos. The proposed approach combines the Inversion-based Style
Transfer (InST) framework for both image and video stylization, the
Pre-Trained Image Processing Transformer (IPT) for post-denoising, and the
Domain-Calibrated Translation Network (DCT-Net) for more consistent video style
transfer. By fine-tuning InST with specific cartoon styles, applying IPT for
artifact reduction, and leveraging DCT-Net for temporal consistency, the
pipeline generates visually appealing and educationally effective stylized
content. Extensive experiments and evaluations using the scenery and monuments
dataset demonstrate the superiority of the proposed approach in terms of style
transfer accuracy, content preservation, and visual quality compared to the
baseline method, AdaAttN. The CLIP similarity scores further validate the
effectiveness of InST in capturing style attributes while maintaining semantic
content. The proposed pipeline streamlines the creation of engaging educational
content, empowering educators and content creators to produce visually
captivating and informative materials efficiently.
|
2504.02876 | Yangxiao Lu | Yangxiao Lu, Ruosen Li, Liqiang Jing, Jikai Wang, Xinya Du, Yunhui
Guo, Nicholas Ruozzi, Yu Xiang | Multimodal Reference Visual Grounding | Project page with our code and dataset:
https://irvlutd.github.io/MultiGrounding | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Visual grounding focuses on detecting objects from images based on language
expressions. Recent Large Vision-Language Models (LVLMs) have significantly
advanced visual grounding performance by training large models with large-scale
datasets. However, the problem remains challenging, especially when similar
objects appear in the input image. For example, an LVLM may not be able to
differentiate Diet Coke and regular Coke in an image. In this case, if
additional reference images of Diet Coke and regular Coke are available, it can
help the visual grounding of similar objects.
In this work, we introduce a new task named Multimodal Reference Visual
Grounding (MRVG). In this task, a model has access to a set of reference images
of objects in a database. Based on these reference images and a language
expression, the model is required to detect a target object from a query image.
We first introduce a new dataset to study the MRVG problem. Then we introduce a
novel method, named MRVG-Net, to solve this visual grounding problem. We show
that by efficiently using reference images with few-shot object detection and
using Large Language Models (LLMs) for object matching, our method achieves
superior visual grounding performance compared to the state-of-the-art LVLMs
such as Qwen2.5-VL-7B. Our approach bridges the gap between few-shot detection
and visual grounding, unlocking new capabilities for visual understanding.
Project page with our code and dataset:
https://irvlutd.github.io/MultiGrounding
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 00:19:05 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Lu",
"Yangxiao",
""
],
[
"Li",
"Ruosen",
""
],
[
"Jing",
"Liqiang",
""
],
[
"Wang",
"Jikai",
""
],
[
"Du",
"Xinya",
""
],
[
"Guo",
"Yunhui",
""
],
[
"Ruozzi",
"Nicholas",
""
],
[
"Xiang",
"Yu",
""
]
] | TITLE: Multimodal Reference Visual Grounding
ABSTRACT: Visual grounding focuses on detecting objects from images based on language
expressions. Recent Large Vision-Language Models (LVLMs) have significantly
advanced visual grounding performance by training large models with large-scale
datasets. However, the problem remains challenging, especially when similar
objects appear in the input image. For example, an LVLM may not be able to
differentiate Diet Coke and regular Coke in an image. In this case, if
additional reference images of Diet Coke and regular Coke are available, it can
help the visual grounding of similar objects.
In this work, we introduce a new task named Multimodal Reference Visual
Grounding (MRVG). In this task, a model has access to a set of reference images
of objects in a database. Based on these reference images and a language
expression, the model is required to detect a target object from a query image.
We first introduce a new dataset to study the MRVG problem. Then we introduce a
novel method, named MRVG-Net, to solve this visual grounding problem. We show
that by efficiently using reference images with few-shot object detection and
using Large Language Models (LLMs) for object matching, our method achieves
superior visual grounding performance compared to the state-of-the-art LVLMs
such as Qwen2.5-VL-7B. Our approach bridges the gap between few-shot detection
and visual grounding, unlocking new capabilities for visual understanding.
Project page with our code and dataset:
https://irvlutd.github.io/MultiGrounding
|
2504.02878 | Lilin Xu | Lilin Xu and Kaiyuan Hou and Xiaofan Jiang | Exploring the Capabilities of LLMs for IMU-based Fine-grained Human
Activity Understanding | Accepted to The 2nd International Workshop on Foundation Models for
Cyber-Physical Systems & Internet of Things (FMSys 2025) | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Human activity recognition (HAR) using inertial measurement units (IMUs)
increasingly leverages large language models (LLMs), yet existing approaches
focus on coarse activities like walking or running. Our preliminary study
indicates that pretrained LLMs fail catastrophically on fine-grained HAR tasks
such as air-written letter recognition, achieving only near-random guessing
accuracy. In this work, we first bridge this gap for flat-surface writing
scenarios: by fine-tuning LLMs with a self-collected dataset and few-shot
learning, we achieved up to a 129x improvement on 2D data. To extend this to 3D
scenarios, we designed an encoder-based pipeline that maps 3D data into 2D
equivalents, preserving the spatiotemporal information for robust letter
prediction. Our end-to-end pipeline achieves 78% accuracy on word recognition
with up to 5 letters in mid-air writing scenarios, establishing LLMs as viable
tools for fine-grained HAR.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 03:42:58 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Xu",
"Lilin",
""
],
[
"Hou",
"Kaiyuan",
""
],
[
"Jiang",
"Xiaofan",
""
]
] | TITLE: Exploring the Capabilities of LLMs for IMU-based Fine-grained Human
Activity Understanding
ABSTRACT: Human activity recognition (HAR) using inertial measurement units (IMUs)
increasingly leverages large language models (LLMs), yet existing approaches
focus on coarse activities like walking or running. Our preliminary study
indicates that pretrained LLMs fail catastrophically on fine-grained HAR tasks
such as air-written letter recognition, achieving only near-random guessing
accuracy. In this work, we first bridge this gap for flat-surface writing
scenarios: by fine-tuning LLMs with a self-collected dataset and few-shot
learning, we achieved up to a 129x improvement on 2D data. To extend this to 3D
scenarios, we designed an encoder-based pipeline that maps 3D data into 2D
equivalents, preserving the spatiotemporal information for robust letter
prediction. Our end-to-end pipeline achieves 78% accuracy on word recognition
with up to 5 letters in mid-air writing scenarios, establishing LLMs as viable
tools for fine-grained HAR.
|
2504.02880 | Junchi Zhou | Junchi Zhou, Haozhou Wang, Yoichiro Kato, Tejasri Nampally, P.
Rajalakshmi, M. Balram, Keisuke Katsura, Hao Lu, Yue Mu, Wanneng Yang,
Yangmingrui Gao, Feng Xiao, Hongtao Chen, Yuhao Chen, Wenjuan Li, Jingwen
Wang, Fenghua Yu, Jian Zhou, Wensheng Wang, Xiaochun Hu, Yuanzhu Yang,
Yanfeng Ding, Wei Guo, Shouyang Liu | Global Rice Multi-Class Segmentation Dataset (RiceSEG): A Comprehensive
and Diverse High-Resolution RGB-Annotated Images for the Development and
Benchmarking of Rice Segmentation Algorithms | null | null | null | null | eess.IV cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | Developing computer vision-based rice phenotyping techniques is crucial for
precision field management and accelerating breeding, thereby continuously
advancing rice production. Among phenotyping tasks, distinguishing image
components is a key prerequisite for characterizing plant growth and
development at the organ scale, enabling deeper insights into eco-physiological
processes. However, due to the fine structure of rice organs and complex
illumination within the canopy, this task remains highly challenging,
underscoring the need for a high-quality training dataset. Such datasets are
scarce, both due to a lack of large, representative collections of rice field
images and the time-intensive nature of annotation. To address this gap, we
established the first comprehensive multi-class rice semantic segmentation
dataset, RiceSEG. We gathered nearly 50,000 high-resolution, ground-based
images from five major rice-growing countries (China, Japan, India, the
Philippines, and Tanzania), encompassing over 6,000 genotypes across all growth
stages. From these original images, 3,078 representative samples were selected
and annotated with six classes (background, green vegetation, senescent
vegetation, panicle, weeds, and duckweed) to form the RiceSEG dataset. Notably,
the sub-dataset from China spans all major genotypes and rice-growing
environments from the northeast to the south. Both state-of-the-art
convolutional neural networks and transformer-based semantic segmentation
models were used as baselines. While these models perform reasonably well in
segmenting background and green vegetation, they face difficulties during the
reproductive stage, when canopy structures are more complex and multiple
classes are involved. These findings highlight the importance of our dataset
for developing specialized segmentation models for rice and other crops.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 04:03:23 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Zhou",
"Junchi",
""
],
[
"Wang",
"Haozhou",
""
],
[
"Kato",
"Yoichiro",
""
],
[
"Nampally",
"Tejasri",
""
],
[
"Rajalakshmi",
"P.",
""
],
[
"Balram",
"M.",
""
],
[
"Katsura",
"Keisuke",
""
],
[
"Lu",
"Hao",
""
],
[
"Mu",
"Yue",
""
],
[
"Yang",
"Wanneng",
""
],
[
"Gao",
"Yangmingrui",
""
],
[
"Xiao",
"Feng",
""
],
[
"Chen",
"Hongtao",
""
],
[
"Chen",
"Yuhao",
""
],
[
"Li",
"Wenjuan",
""
],
[
"Wang",
"Jingwen",
""
],
[
"Yu",
"Fenghua",
""
],
[
"Zhou",
"Jian",
""
],
[
"Wang",
"Wensheng",
""
],
[
"Hu",
"Xiaochun",
""
],
[
"Yang",
"Yuanzhu",
""
],
[
"Ding",
"Yanfeng",
""
],
[
"Guo",
"Wei",
""
],
[
"Liu",
"Shouyang",
""
]
] | TITLE: Global Rice Multi-Class Segmentation Dataset (RiceSEG): A Comprehensive
and Diverse High-Resolution RGB-Annotated Images for the Development and
Benchmarking of Rice Segmentation Algorithms
ABSTRACT: Developing computer vision-based rice phenotyping techniques is crucial for
precision field management and accelerating breeding, thereby continuously
advancing rice production. Among phenotyping tasks, distinguishing image
components is a key prerequisite for characterizing plant growth and
development at the organ scale, enabling deeper insights into eco-physiological
processes. However, due to the fine structure of rice organs and complex
illumination within the canopy, this task remains highly challenging,
underscoring the need for a high-quality training dataset. Such datasets are
scarce, both due to a lack of large, representative collections of rice field
images and the time-intensive nature of annotation. To address this gap, we
established the first comprehensive multi-class rice semantic segmentation
dataset, RiceSEG. We gathered nearly 50,000 high-resolution, ground-based
images from five major rice-growing countries (China, Japan, India, the
Philippines, and Tanzania), encompassing over 6,000 genotypes across all growth
stages. From these original images, 3,078 representative samples were selected
and annotated with six classes (background, green vegetation, senescent
vegetation, panicle, weeds, and duckweed) to form the RiceSEG dataset. Notably,
the sub-dataset from China spans all major genotypes and rice-growing
environments from the northeast to the south. Both state-of-the-art
convolutional neural networks and transformer-based semantic segmentation
models were used as baselines. While these models perform reasonably well in
segmenting background and green vegetation, they face difficulties during the
reproductive stage, when canopy structures are more complex and multiple
classes are involved. These findings highlight the importance of our dataset
for developing specialized segmentation models for rice and other crops.
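As a starting-point baseline of the kind mentioned above, the snippet below instantiates a standard torchvision segmentation model with the six RiceSEG classes and runs a forward pass on a dummy image; the architecture choice and input size are placeholder assumptions, not the paper's benchmarked configuration.
```python
import torch
from torchvision.models.segmentation import deeplabv3_resnet50

NUM_CLASSES = 6  # background, green vegetation, senescent vegetation, panicle, weeds, duckweed

# Randomly initialized baseline; real experiments would train on the RiceSEG splits.
model = deeplabv3_resnet50(weights=None, weights_backbone=None,
                           num_classes=NUM_CLASSES).eval()

dummy_image = torch.rand(1, 3, 512, 512)          # placeholder for an annotated RGB sample
with torch.no_grad():
    logits = model(dummy_image)["out"]            # (1, 6, 512, 512) per-pixel class scores
predicted_mask = logits.argmax(dim=1)             # (1, 512, 512) class index per pixel
print(predicted_mask.shape, predicted_mask.unique())
```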
|
2504.02882 | Sunghee Jung | Sunghee Jung, Donghun Lee, Shinbok Lee, Gaeun Seo, Daniel Lee,
Byeongil Ko, Junrae Cho, Kihyun Kim, Eunggyun Kim, and Myeongcheol Shin | DiaTool-DPO: Multi-Turn Direct Preference Optimization for
Tool-Augmented Large Language Models | null | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Tool-Augmented Large Language Models (TA-LLMs) have shown promise in
real-world applications, but face challenges in handling incomplete queries and
out-of-scope requests. While existing approaches rely mainly on Supervised
Fine-Tuning with expert trajectories, we propose DiaTool-DPO, a novel method
that enhances TA-LLM's dialogue capabilities through Direct Preference
Optimization. We model TA-LLM interactions as a Markov Decision Process with 5
distinct dialogue states and categorize user queries into 3 types based on
their state transition trajectories. We automatically construct paired
trajectory datasets of correct and incorrect dialogue flows and introduce a
specialized objective loss for dialogue control. Our comprehensive evaluation
demonstrates that DiaTool-DPO approaches GPT-4o's performance (94.8% in
information gathering, 91% in tool call rejection) with substantial
improvements over baseline (44% and 9.6% respectively) while maintaining core
functionality. Our approach opens new possibilities for developing TA-LLMs that
can handle diverse real-world scenarios without requiring additional expert
demonstrations or human labeling.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 05:47:28 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Jung",
"Sunghee",
""
],
[
"Lee",
"Donghun",
""
],
[
"Lee",
"Shinbok",
""
],
[
"Seo",
"Gaeun",
""
],
[
"Lee",
"Daniel",
""
],
[
"Ko",
"Byeongil",
""
],
[
"Cho",
"Junrae",
""
],
[
"Kim",
"Kihyun",
""
],
[
"Kim",
"Eunggyun",
""
],
[
"Shin",
"Myeongcheol",
""
]
] | TITLE: DiaTool-DPO: Multi-Turn Direct Preference Optimization for
Tool-Augmented Large Language Models
ABSTRACT: Tool-Augmented Large Language Models (TA-LLMs) have shown promise in
real-world applications, but face challenges in handling incomplete queries and
out-of-scope requests. While existing approaches rely mainly on Supervised
Fine-Tuning with expert trajectories, we propose DiaTool-DPO, a novel method
that enhances TA-LLM's dialogue capabilities through Direct Preference
Optimization. We model TA-LLM interactions as a Markov Decision Process with 5
distinct dialogue states and categorize user queries into 3 types based on
their state transition trajectories. We automatically construct paired
trajectory datasets of correct and incorrect dialogue flows and introduce a
specialized objective loss for dialogue control. Our comprehensive evaluation
demonstrates that DiaTool-DPO approaches GPT-4o's performance (94.8% in
information gathering, 91% in tool call rejection) with substantial
improvements over baseline (44% and 9.6% respectively) while maintaining core
functionality. Our approach opens new possibilities for developing TA-LLMs that
can handle diverse real-world scenarios without requiring additional expert
demonstrations or human labeling.
|
2504.02883 | Anil Ramakrishna | Anil Ramakrishna, Yixin Wan, Xiaomeng Jin, Kai-Wei Chang, Zhiqi Bu,
Bhanukiran Vinzamuri, Volkan Cevher, Mingyi Hong, Rahul Gupta | SemEval-2025 Task 4: Unlearning sensitive content from Large Language
Models | null | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | We introduce SemEval-2025 Task 4: unlearning sensitive content from Large
Language Models (LLMs). The task features 3 subtasks for LLM unlearning
spanning different use cases: (1) unlearn long form synthetic creative
documents spanning different genres; (2) unlearn short form synthetic
biographies containing personally identifiable information (PII), including
fake names, phone numbers, SSNs, and email and home addresses, and (3) unlearn real
documents sampled from the target model's training dataset. We received over
100 submissions from over 30 institutions and we summarize the key techniques
and lessons in this paper.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 07:24:59 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Ramakrishna",
"Anil",
""
],
[
"Wan",
"Yixin",
""
],
[
"Jin",
"Xiaomeng",
""
],
[
"Chang",
"Kai-Wei",
""
],
[
"Bu",
"Zhiqi",
""
],
[
"Vinzamuri",
"Bhanukiran",
""
],
[
"Cevher",
"Volkan",
""
],
[
"Hong",
"Mingyi",
""
],
[
"Gupta",
"Rahul",
""
]
] | TITLE: SemEval-2025 Task 4: Unlearning sensitive content from Large Language
Models
ABSTRACT: We introduce SemEval-2025 Task 4: unlearning sensitive content from Large
Language Models (LLMs). The task features 3 subtasks for LLM unlearning
spanning different use cases: (1) unlearn long form synthetic creative
documents spanning different genres; (2) unlearn short form synthetic
biographies containing personally identifiable information (PII), including
fake names, phone numbers, SSNs, and email and home addresses, and (3) unlearn real
documents sampled from the target model's training dataset. We received over
100 submissions from over 30 institutions and we summarize the key techniques
and lessons in this paper.
|
2504.02884 | Baba Ibrahim | Baba Ibrahim and Zhou Kui (Hubei University of Automotive Technology
and Hubei University of Automotive Technology) | Enhancing Traffic Sign Recognition On The Performance Based On Yolov8 | 27 Pages, 6 Figures, 10 Tables and 20 References | null | null | null | cs.CV cs.PF | http://creativecommons.org/licenses/by-sa/4.0/ | Traffic sign recognition plays a crucial role in the development
of autonomous vehicles and advanced driver-assistance systems (ADAS). Despite
significant advances in deep learning and object detection, accurately
detecting and classifying traffic signs remains challenging due to their small
sizes, variable environmental conditions, occlusion, and class imbalance. This
thesis presents an enhanced YOLOv8-based detection system that integrates
advanced data augmentation techniques, novel architectural enhancements
including Coordinate Attention (CA), Bidirectional Feature Pyramid Network
(BiFPN), and dynamic modules such as ODConv and LSKA, along with refined loss
functions (EIoU and WIoU combined with Focal Loss). Extensive experiments
conducted on datasets including GTSRB, TT100K, and GTSDB demonstrate marked
improvements in detection accuracy, robustness under adverse conditions, and
real-time inference on edge devices. The findings contribute actionable
insights for deploying reliable traffic sign recognition systems in real-world
autonomous driving scenarios.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 07:28:05 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Ibrahim",
"Baba",
"",
"Hubei University of Automotive Technology\n and Hubei University of Automotive Technology"
],
[
"Kui",
"Zhou",
"",
"Hubei University of Automotive Technology\n and Hubei University of Automotive Technology"
]
] | TITLE: Enhancing Traffic Sign Recognition On The Performance Based On Yolov8
ABSTRACT: Traffic sign recognition plays a crucial role in the development
of autonomous vehicles and advanced driver-assistance systems (ADAS). Despite
significant advances in deep learning and object detection, accurately
detecting and classifying traffic signs remains challenging due to their small
sizes, variable environmental conditions, occlusion, and class imbalance. This
thesis presents an enhanced YOLOv8-based detection system that integrates
advanced data augmentation techniques, novel architectural enhancements
including Coordinate Attention (CA), Bidirectional Feature Pyramid Network
(BiFPN), and dynamic modules such as ODConv and LSKA, along with refined loss
functions (EIoU and WIoU combined with Focal Loss). Extensive experiments
conducted on datasets including GTSRB, TT100K, and GTSDB demonstrate marked
improvements in detection accuracy, robustness under adverse conditions, and
real-time inference on edge devices. The findings contribute actionable
insights for deploying reliable traffic sign recognition systems in real-world
autonomous driving scenarios.
|
2504.02885 | Hao Wang | Hao Wang, Shuchang Ye, Jinghao Lin, Usman Naseem, Jinman Kim | LVMed-R2: Perception and Reflection-driven Complex Reasoning for Medical
Report Generation | 10 pages, 3 figures, 1 table | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large vision-language models (LVMs) hold great promise for automating
medical report generation, potentially reducing the burden of manual reporting.
State-of-the-art (SOTA) research fine-tunes general LVMs with medical data to
align radiology images to corresponding medical reports. However, there are two
key factors that limit these LVM's performance. Firstly, LVMs lack complex
reasoning capability that leads to logical inconsistencies and potential
diagnostic errors in generated reports. Secondly, LVMs lack reflection
mechanism that leads to an inability to discover errors in the thinking
process. To address these gaps, we propose LVMed-R2, a new fine-tuning strategy
that introduces complex reasoning and reflection mechanisms for LVMs to enhance
medical report generation. To the best of our knowledge, this is the first work
to introduce complex reasoning to the medical report generation (MRG) task. Our
proposed complex reasoning contains medical knowledge injection and
perception-enhancing modules which improve the accuracy of LVMs diagnosis,
coupled with a perception tree to provide guidance to limit the perception
range. Further, the reflection mechanism forces self-verification for outputs
to correct for potential errors. We experimented by fine-tuning LVMs with our
proposed LVMed-R2 strategy, using IU-Xray and MIMIC-CXR datasets. Our results,
measured on natural language generation (NLG) metrics and clinical efficacy
(CE) metrics, demonstrate that LVMs fine-tuned with the proposed reflection
mechanism possess the ability to correct outputs and complex reasoning
effectively and improve LVMs performance for MRG.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 08:18:54 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Wang",
"Hao",
""
],
[
"Ye",
"Shuchang",
""
],
[
"Lin",
"Jinghao",
""
],
[
"Naseem",
"Usman",
""
],
[
"Kim",
"Jinman",
""
]
] | TITLE: LVMed-R2: Perception and Reflection-driven Complex Reasoning for Medical
Report Generation
ABSTRACT: Large vision-language models (LVMs) hold great promise for automating
medical report generation, potentially reducing the burden of manual reporting.
State-of-the-art (SOTA) research fine-tunes general LVMs with medical data to
align radiology images to corresponding medical reports. However, two key
factors limit these LVMs' performance. Firstly, LVMs lack complex reasoning
capability, which leads to logical inconsistencies and potential diagnostic
errors in generated reports. Secondly, LVMs lack a reflection mechanism, which
leads to an inability to discover errors in the thinking
process. To address these gaps, we propose LVMed-R2, a new fine-tuning strategy
that introduces complex reasoning and reflection mechanisms for LVMs to enhance
medical report generation. To the best of our knowledge, this is the first work
to introduce complex reasoning to the medical report generation (MRG) task. Our
proposed complex reasoning contains medical knowledge injection and
perception-enhancing modules which improve the accuracy of LVMs diagnosis,
coupled with a perception tree to provide guidance to limit the perception
range. Further, the reflection mechanism forces self-verification for outputs
to correct for potential errors. We experimented by fine-tuning LVMs with our
proposed LVMed-R2 strategy, using IU-Xray and MIMIC-CXR datasets. Our results,
measured on natural language generation (NLG) metrics and clinical efficacy
(CE) metrics, demonstrate that LVMs fine-tuned with the proposed reflection
mechanism possess the ability to correct outputs and complex reasoning
effectively and improve LVMs performance for MRG.
|
2504.02887 | John Chen | John Chen, Alexandros Lotsos, Grace Wang, Lexie Zhao, Bruce Sherin,
Uri Wilensky, Michael Horn | Processes Matter: How ML/GAI Approaches Could Support Open Qualitative
Coding of Online Discourse Datasets | This paper was recommended for acceptance as a long paper by CSCL
reviewers, but ends up as a short paper. The arXiv version here is its longer
form, revised with reviewers' comments | null | null | null | cs.CL cs.HC cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Open coding, a key inductive step in qualitative research, discovers and
constructs concepts from human datasets. However, capturing extensive and
nuanced aspects or "coding moments" can be challenging, especially with large
discourse datasets. While some studies explore machine learning (ML)/Generative
AI (GAI)'s potential for open coding, few evaluation studies exist. We compare
open coding results by five recently published ML/GAI approaches and four human
coders, using a dataset of online chat messages around mobile learning
software. Our systematic analysis reveals ML/GAI approaches' strengths and
weaknesses, uncovering the complementary potential between humans and AI.
Line-by-line AI approaches effectively identify content-based codes, while
humans excel in interpreting conversational dynamics. We discussed how embedded
analytical processes could shape the results of ML/GAI approaches. Instead of
replacing humans in open coding, researchers should integrate AI with and
according to their analytical processes, e.g., as parallel co-coders.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 13:43:54 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Chen",
"John",
""
],
[
"Lotsos",
"Alexandros",
""
],
[
"Wang",
"Grace",
""
],
[
"Zhao",
"Lexie",
""
],
[
"Sherin",
"Bruce",
""
],
[
"Wilensky",
"Uri",
""
],
[
"Horn",
"Michael",
""
]
] | TITLE: Processes Matter: How ML/GAI Approaches Could Support Open Qualitative
Coding of Online Discourse Datasets
ABSTRACT: Open coding, a key inductive step in qualitative research, discovers and
constructs concepts from human datasets. However, capturing extensive and
nuanced aspects or "coding moments" can be challenging, especially with large
discourse datasets. While some studies explore machine learning (ML)/Generative
AI (GAI)'s potential for open coding, few evaluation studies exist. We compare
open coding results by five recently published ML/GAI approaches and four human
coders, using a dataset of online chat messages around mobile learning
software. Our systematic analysis reveals ML/GAI approaches' strengths and
weaknesses, uncovering the complementary potential between humans and AI.
Line-by-line AI approaches effectively identify content-based codes, while
humans excel in interpreting conversational dynamics. We discussed how embedded
analytical processes could shape the results of ML/GAI approaches. Instead of
replacing humans in open coding, researchers should integrate AI with and
according to their analytical processes, e.g., as parallel co-coders.
|
2504.02889 | Takanori Ugai | Takanori Ugai | Embedding Method for Knowledge Graph with Densely Defined Ontology | 6 pages, 4 figures | null | null | null | cs.SI cs.AI | http://creativecommons.org/licenses/by/4.0/ | Knowledge graph embedding (KGE) is a technique that enhances knowledge graphs
by addressing incompleteness and improving knowledge retrieval. A limitation of
the existing KGE models is their underutilization of ontologies, specifically
the relationships between properties. This study proposes a KGE model, TransU,
designed for knowledge graphs with well-defined ontologies that incorporate
relationships between properties. The model treats properties as a subset of
entities, enabling a unified representation. We present experimental results
using a standard dataset and a practical dataset.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 14:43:47 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Ugai",
"Takanori",
""
]
] | TITLE: Embedding Method for Knowledge Graph with Densely Defined Ontology
ABSTRACT: Knowledge graph embedding (KGE) is a technique that enhances knowledge graphs
by addressing incompleteness and improving knowledge retrieval. A limitation of
the existing KGE models is their underutilization of ontologies, specifically
the relationships between properties. This study proposes a KGE model, TransU,
designed for knowledge graphs with well-defined ontologies that incorporate
relationships between properties. The model treats properties as a subset of
entities, enabling a unified representation. We present experimental results
using a standard dataset and a practical dataset.
|
2504.02894 | Ahsan Bilal | Ahsan Bilal, Beiyu Lin, Mehdi Zaeifi | OnRL-RAG: Real-Time Personalized Mental Health Dialogue System | null | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large language models (LLMs) have been widely used for various tasks and
applications. However, LLMs and fine-tuning are limited to the pre-trained
data. For example, ChatGPT's world knowledge until 2021 can be outdated or
inaccurate. To enhance the capabilities of LLMs, Retrieval-Augmented Generation
(RAG) has been proposed to augment LLMs with additional, up-to-date details and
information. While RAG offers the correct information, it may not best present
it, especially to different population groups that require personalization.
Reinforcement Learning from Human Feedback (RLHF) adapts to user needs by
aligning model responses with human preference through feedback loops. In
real-life applications, such as mental health problems, a dynamic and
feedback-based model would continuously adapt to new information and offer
personalized assistance due to complex factors fluctuating in a daily
environment. Thus, we propose an Online Reinforcement Learning-based
Retrieval-Augmented Generation (OnRL-RAG) system to detect and personalize the
responding systems to mental health problems, such as stress, anxiety, and
depression. We use an open-source dataset collected from 2,028 college students
with 28 survey questions for each student to demonstrate the performance of our
proposed system with the existing systems. Our system achieves superior
performance compared to standard RAG and simple LLM via GPT-4o, GPT-4o-mini,
Gemini-1.5, and GPT-3.5. This work would open up the possibilities of real-life
applications of LLMs for personalized services in the everyday environment. The
results will also help researchers in the fields of sociology, psychology, and
neuroscience to align their theories more closely with the actual human daily
environment.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 18:44:53 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Bilal",
"Ahsan",
""
],
[
"Lin",
"Beiyu",
""
],
[
"Zaeifi",
"Mehdi",
""
]
] | TITLE: OnRL-RAG: Real-Time Personalized Mental Health Dialogue System
ABSTRACT: Large language models (LLMs) have been widely used for various tasks and
applications. However, LLMs and fine-tuning are limited to the pre-trained
data. For example, ChatGPT's world knowledge until 2021 can be outdated or
inaccurate. To enhance the capabilities of LLMs, Retrieval-Augmented Generation
(RAG) has been proposed to augment LLMs with additional, up-to-date details and
information. While RAG offers the correct information, it may not best present
it, especially to different population groups that require personalization.
Reinforcement Learning from Human Feedback (RLHF) adapts to user needs by
aligning model responses with human preference through feedback loops. In
real-life applications, such as mental health problems, a dynamic and
feedback-based model would continuously adapt to new information and offer
personalized assistance due to complex factors fluctuating in a daily
environment. Thus, we propose an Online Reinforcement Learning-based
Retrieval-Augmented Generation (OnRL-RAG) system to detect and personalize the
responding systems to mental health problems, such as stress, anxiety, and
depression. We use an open-source dataset collected from 2,028 college students
with 28 survey questions for each student to demonstrate the performance of our
proposed system with the existing systems. Our system achieves superior
performance compared to standard RAG and simple LLM via GPT-4o, GPT-4o-mini,
Gemini-1.5, and GPT-3.5. This work would open up the possibilities of real-life
applications of LLMs for personalized services in the everyday environment. The
results will also help researchers in the fields of sociology, psychology, and
neuroscience to align their theories more closely with the actual human daily
environment.
|
2504.02895 | Malcolm Mielle Dr | Farida Al Haddad, Yuxin Wang, Malcolm Mielle | UAC: Uncertainty-Aware Calibration of Neural Networks for Gesture
Detection | 12 pages, 2 figures | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Artificial intelligence has the potential to impact safety and efficiency in
safety-critical domains such as construction, manufacturing, and healthcare.
For example, using sensor data from wearable devices, such as inertial
measurement units (IMUs), human gestures can be detected while maintaining
privacy, thereby ensuring that safety protocols are followed. However, strict
safety requirements in these domains have limited the adoption of AI, since
accurate calibration of predicted probabilities and robustness against
out-of-distribution (OOD) data is necessary.
This paper proposes UAC (Uncertainty-Aware Calibration), a novel two-step
method to address these challenges in IMU-based gesture recognition. First, we
present an uncertainty-aware gesture network architecture that predicts both
gesture probabilities and their associated uncertainties from IMU data. This
uncertainty is then used to calibrate the probabilities of each potential
gesture. Second, an entropy-weighted expectation of predictions over multiple
IMU data windows is used to improve accuracy while maintaining correct
calibration.
Our method is evaluated using three publicly available IMU datasets for
gesture detection and is compared to three state-of-the-art calibration methods
for neural networks: temperature scaling, entropy maximization, and Laplace
approximation. UAC outperforms existing methods, achieving improved accuracy
and calibration in both OOD and in-distribution scenarios. Moreover, we find
that, unlike our method, none of the state-of-the-art methods significantly
improve the calibration of IMU-based gesture recognition models. In conclusion,
our work highlights the advantages of uncertainty-aware calibration of neural
networks, demonstrating improvements in both calibration and accuracy for
gesture detection using IMU data.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 21:40:01 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Haddad",
"Farida Al",
""
],
[
"Wang",
"Yuxin",
""
],
[
"Mielle",
"Malcolm",
""
]
] | TITLE: UAC: Uncertainty-Aware Calibration of Neural Networks for Gesture
Detection
ABSTRACT: Artificial intelligence has the potential to impact safety and efficiency in
safety-critical domains such as construction, manufacturing, and healthcare.
For example, using sensor data from wearable devices, such as inertial
measurement units (IMUs), human gestures can be detected while maintaining
privacy, thereby ensuring that safety protocols are followed. However, strict
safety requirements in these domains have limited the adoption of AI, since
accurate calibration of predicted probabilities and robustness against
out-of-distribution (OOD) data is necessary.
This paper proposes UAC (Uncertainty-Aware Calibration), a novel two-step
method to address these challenges in IMU-based gesture recognition. First, we
present an uncertainty-aware gesture network architecture that predicts both
gesture probabilities and their associated uncertainties from IMU data. This
uncertainty is then used to calibrate the probabilities of each potential
gesture. Second, an entropy-weighted expectation of predictions over multiple
IMU data windows is used to improve accuracy while maintaining correct
calibration.
Our method is evaluated using three publicly available IMU datasets for
gesture detection and is compared to three state-of-the-art calibration methods
for neural networks: temperature scaling, entropy maximization, and Laplace
approximation. UAC outperforms existing methods, achieving improved accuracy
and calibration in both OOD and in-distribution scenarios. Moreover, we find
that, unlike our method, none of the state-of-the-art methods significantly
improve the calibration of IMU-based gesture recognition models. In conclusion,
our work highlights the advantages of uncertainty-aware calibration of neural
networks, demonstrating improvements in both calibration and accuracy for
gesture detection using IMU data.
|
2504.02900 | Matheus Batista Martins | Matheus Martins Batista | Comparative Analysis of Deepfake Detection Models: New Approaches and
Perspectives | Bachelor's thesis | null | null | null | cs.CV cs.LG stat.CO stat.ML | http://creativecommons.org/licenses/by/4.0/ | The growing threat posed by deepfake videos, capable of manipulating
realities and disseminating misinformation, drives the urgent need for
effective detection methods. This work investigates and compares different
approaches for identifying deepfakes, focusing on the GenConViT model and its
performance relative to other architectures present in the DeepfakeBenchmark.
To contextualize the research, the social and legal impacts of deepfakes are
addressed, as well as the technical fundamentals of their creation and
detection, including digital image processing, machine learning, and artificial
neural networks, with emphasis on Convolutional Neural Networks (CNNs),
Generative Adversarial Networks (GANs), and Transformers. The performance
evaluation of the models was conducted using relevant metrics and new datasets
established in the literature, such as WildDeep-fake and DeepSpeak, aiming to
identify the most effective tools in the battle against misinformation and
media manipulation. The obtained results indicated that GenConViT, after
fine-tuning, exhibited superior performance in terms of accuracy (93.82%) and
generalization capacity, surpassing other architectures in the
DeepfakeBenchmark on the DeepSpeak dataset. This study contributes to the
advancement of deepfake detection techniques, offering contributions to the
development of more robust and effective solutions against the dissemination of
false information.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 02:10:27 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Batista",
"Matheus Martins",
""
]
] | TITLE: Comparative Analysis of Deepfake Detection Models: New Approaches and
Perspectives
ABSTRACT: The growing threat posed by deepfake videos, capable of manipulating
realities and disseminating misinformation, drives the urgent need for
effective detection methods. This work investigates and compares different
approaches for identifying deepfakes, focusing on the GenConViT model and its
performance relative to other architectures present in the DeepfakeBenchmark.
To contextualize the research, the social and legal impacts of deepfakes are
addressed, as well as the technical fundamentals of their creation and
detection, including digital image processing, machine learning, and artificial
neural networks, with emphasis on Convolutional Neural Networks (CNNs),
Generative Adversarial Networks (GANs), and Transformers. The performance
evaluation of the models was conducted using relevant metrics and new datasets
established in the literature, such as WildDeep-fake and DeepSpeak, aiming to
identify the most effective tools in the battle against misinformation and
media manipulation. The obtained results indicated that GenConViT, after
fine-tuning, exhibited superior performance in terms of accuracy (93.82%) and
generalization capacity, surpassing other architectures in the
DeepfakeBenchmark on the DeepSpeak dataset. This study contributes to the
advancement of deepfake detection techniques, offering contributions to the
development of more robust and effective solutions against the dissemination of
false information.
|
2504.02901 | Bo Yuan | Bo Yuan, Yulin Chen, Yin Zhang, Wei Jiang | Hide and Seek in Noise Labels: Noise-Robust Collaborative Active
Learning with LLM-Powered Assistance | null | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Learning from noisy labels (LNL) is a challenge that arises in many
real-world scenarios where collected training data can contain incorrect or
corrupted labels. Most existing solutions identify noisy labels and adopt
active learning to query human experts on them for denoising. In the era of
large language models (LLMs), although we can reduce the human effort to
improve these methods, their performances are still subject to accurately
separating the clean and noisy samples from noisy data. In this paper, we
propose an innovative collaborative learning framework NoiseAL based on active
learning to combine LLMs and small models (SMs) for learning from noisy labels.
During collaborative training, we first adopt two SMs to form a co-prediction
network and propose a dynamic-enhanced threshold strategy to divide the noisy
data into different subsets, then select the clean and noisy samples from these
subsets to feed the active annotator LLMs to rectify noisy samples. Finally, we
employ different optimization objectives to conquer subsets with different
degrees of label noise. Extensive experiments on synthetic and real-world
noise datasets further demonstrate the superiority of our framework over
state-of-the-art baselines.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 04:36:39 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Yuan",
"Bo",
""
],
[
"Chen",
"Yulin",
""
],
[
"Zhang",
"Yin",
""
],
[
"Jiang",
"Wei",
""
]
] | TITLE: Hide and Seek in Noise Labels: Noise-Robust Collaborative Active
Learning with LLM-Powered Assistance
ABSTRACT: Learning from noisy labels (LNL) is a challenge that arises in many
real-world scenarios where collected training data can contain incorrect or
corrupted labels. Most existing solutions identify noisy labels and adopt
active learning to query human experts on them for denoising. In the era of
large language models (LLMs), although we can reduce the human effort to
improve these methods, their performance still depends on accurately
separating the clean and noisy samples from noisy data. In this paper, we
propose an innovative collaborative learning framework NoiseAL based on active
learning to combine LLMs and small models (SMs) for learning from noisy labels.
During collaborative training, we first adopt two SMs to form a co-prediction
network and propose a dynamic-enhanced threshold strategy to divide the noisy
data into different subsets, then select the clean and noisy samples from these
subsets to feed the active annotator LLMs to rectify noisy samples. Finally, we
employ different optimization objectives to conquer subsets with different
degrees of label noise. Extensive experiments on synthetic and real-world
noise datasets further demonstrate the superiority of our framework over
state-of-the-art baselines.
|
2504.02904 | Weikai Li | Hongzhe Du, Weikai Li, Min Cai, Karim Saraipour, Zimin Zhang,
Himabindu Lakkaraju, Yizhou Sun, Shichang Zhang | How Post-Training Reshapes LLMs: A Mechanistic View on Knowledge,
Truthfulness, Refusal, and Confidence | null | null | null | null | cs.CL cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Post-training is essential for the success of large language models (LLMs),
transforming pre-trained base models into more useful and aligned post-trained
models. While plenty of works have studied post-training algorithms and
evaluated post-training models by their outputs, it remains understudied how
post-training reshapes LLMs internally. In this paper, we compare base and
post-trained LLMs mechanistically from four perspectives to better understand
post-training effects. Our findings across model families and datasets reveal
that: (1) Post-training does not change the factual knowledge storage
locations, and it adapts knowledge representations from the base model while
developing new knowledge representations; (2) Both truthfulness and refusal can
be represented by linear vectors in the hidden representation space. The
truthfulness direction is highly similar between the base and post-trained
model, and it is effectively transferable for interventions; (3) The refusal
direction is different between the base and post-trained models, and it shows
limited forward transferability; (4) Differences in confidence between the base
and post-trained models cannot be attributed to entropy neurons. Our study
provides insights into the fundamental mechanisms preserved and altered during
post-training, facilitates downstream tasks like model steering, and could
potentially benefit future research in interpretability and LLM post-training.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 06:30:55 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Du",
"Hongzhe",
""
],
[
"Li",
"Weikai",
""
],
[
"Cai",
"Min",
""
],
[
"Saraipour",
"Karim",
""
],
[
"Zhang",
"Zimin",
""
],
[
"Lakkaraju",
"Himabindu",
""
],
[
"Sun",
"Yizhou",
""
],
[
"Zhang",
"Shichang",
""
]
] | TITLE: How Post-Training Reshapes LLMs: A Mechanistic View on Knowledge,
Truthfulness, Refusal, and Confidence
ABSTRACT: Post-training is essential for the success of large language models (LLMs),
transforming pre-trained base models into more useful and aligned post-trained
models. While plenty of works have studied post-training algorithms and
evaluated post-training models by their outputs, it remains understudied how
post-training reshapes LLMs internally. In this paper, we compare base and
post-trained LLMs mechanistically from four perspectives to better understand
post-training effects. Our findings across model families and datasets reveal
that: (1) Post-training does not change the factual knowledge storage
locations, and it adapts knowledge representations from the base model while
developing new knowledge representations; (2) Both truthfulness and refusal can
be represented by linear vectors in the hidden representation space. The
truthfulness direction is highly similar between the base and post-trained
model, and it is effectively transferable for interventions; (3) The refusal
direction is different between the base and post-trained models, and it shows
limited forward transferability; (4) Differences in confidence between the base
and post-trained models cannot be attributed to entropy neurons. Our study
provides insights into the fundamental mechanisms preserved and altered during
post-training, facilitates downstream tasks like model steering, and could
potentially benefit future research in interpretability and LLM post-training.
|
2504.02906 | Zhihan Zhang | Zhihan Zhang, Yixin Cao, Lizi Liao | Enhancing Chart-to-Code Generation in Multimodal Large Language Models
via Iterative Dual Preference Learning | 21 pages, 5 figures | null | null | null | cs.CL cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Chart-to-code generation, the process of converting chart images into
executable plotting scripts, provides a lossless representation of chart
information, requiring models to accurately capture and summarize all visual
and structural elements. However, this remains a significant challenge for
multimodal large language models (MLLMs), which are not inherently well-aligned
with code generation tasks. To bridge this gap, we introduce Chart2Code, a
novel iterative dual preference learning framework designed to enhance MLLMs'
chart-to-code generation capabilities through structured code variant
generation and fine-grained dual reward signals. We validate Chart2Code across
three MLLMs and find that iterative preference learning consistently improves
out-of-distribution chart-to-code generation quality. Throughout this process,
our dual scoring method, which evaluates both the textual code structure and
its visual representation, leads to greater performance improvements, even with
a reduced preference dataset size. Further analysis explores the key components
of our framework and highlights the interplay between chart-to-code generation
and broader chart reasoning, paving the way for future advancements in chart
comprehension.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 07:51:20 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Zhang",
"Zhihan",
""
],
[
"Cao",
"Yixin",
""
],
[
"Liao",
"Lizi",
""
]
] | TITLE: Enhancing Chart-to-Code Generation in Multimodal Large Language Models
via Iterative Dual Preference Learning
ABSTRACT: Chart-to-code generation, the process of converting chart images into
executable plotting scripts, provides a lossless representation of chart
information, requiring models to accurately capture and summarize all visual
and structural elements. However, this remains a significant challenge for
multimodal large language models (MLLMs), which are not inherently well-aligned
with code generation tasks. To bridge this gap, we introduce Chart2Code, a
novel iterative dual preference learning framework designed to enhance MLLMs'
chart-to-code generation capabilities through structured code variant
generation and fine-grained dual reward signals. We validate Chart2Code across
three MLLMs and find that iterative preference learning consistently improves
out-of-distribution chart-to-code generation quality. Throughout this process,
our dual scoring method, which evaluates both the textual code structure and
its visual representation, leads to greater performance improvements, even with
a reduced preference dataset size. Further analysis explores the key components
of our framework and highlights the interplay between chart-to-code generation
and broader chart reasoning, paving the way for future advancements in chart
comprehension.
|
2504.02912 | Rohit Agarwal | Rohit Agarwal, Aryan Dessai, Arif Ahmed Sekh, Krishna Agarwal,
Alexander Horsch, Dilip K. Prasad | Haphazard Inputs as Images in Online Learning | Accepted at IJCNN 2025 | null | null | null | cs.CV cs.AI cs.ET cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The field of varying feature space in online learning settings, also known as
haphazard inputs, is very prominent nowadays due to its applicability in
various fields. However, the current solutions to haphazard inputs are
model-dependent and cannot benefit from the existing advanced deep-learning
methods, which necessitate inputs of fixed dimensions. Therefore, we propose to
transform the varying feature space in an online learning setting to a
fixed-dimension image representation on the fly. This simple yet novel approach
is model-agnostic, allowing any vision-based models to be applicable for
haphazard inputs, as demonstrated using ResNet and ViT. The image
representation handles the inconsistent input data seamlessly, making our
proposed approach scalable and robust. We show the efficacy of our method on
four publicly available datasets. The code is available at
https://github.com/Rohit102497/HaphazardInputsAsImages.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 11:14:05 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Agarwal",
"Rohit",
""
],
[
"Dessai",
"Aryan",
""
],
[
"Sekh",
"Arif Ahmed",
""
],
[
"Agarwal",
"Krishna",
""
],
[
"Horsch",
"Alexander",
""
],
[
"Prasad",
"Dilip K.",
""
]
] | TITLE: Haphazard Inputs as Images in Online Learning
ABSTRACT: The field of varying feature space in online learning settings, also known as
haphazard inputs, is very prominent nowadays due to its applicability in
various fields. However, the current solutions to haphazard inputs are
model-dependent and cannot benefit from the existing advanced deep-learning
methods, which necessitate inputs of fixed dimensions. Therefore, we propose to
transform the varying feature space in an online learning setting to a
fixed-dimension image representation on the fly. This simple yet novel approach
is model-agnostic, allowing any vision-based models to be applicable for
haphazard inputs, as demonstrated using ResNet and ViT. The image
representation handles the inconsistent input data seamlessly, making our
proposed approach scalable and robust. We show the efficacy of our method on
four publicly available datasets. The code is available at
https://github.com/Rohit102497/HaphazardInputsAsImages.
|
2504.02983 | Xiaoyu Tong | Xiaoyu Tong and Zhi Zhang and Martha Lewis and Ekaterina Shutova | Hummus: A Dataset of Humorous Multimodal Metaphor Use | null | null | null | null | cs.CL cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Metaphor and humor share a lot of common ground, and metaphor is one of the
most common humorous mechanisms. This study focuses on the humorous capacity of
multimodal metaphors, which has not received due attention in the community. We
take inspiration from the Incongruity Theory of humor, the Conceptual Metaphor
Theory, and the annotation scheme behind the VU Amsterdam Metaphor Corpus, and
develop a novel annotation scheme for humorous multimodal metaphor use in
image-caption pairs. We create the Hummus Dataset of Humorous Multimodal
Metaphor Use, providing expert annotation on 1k image-caption pairs sampled
from the New Yorker Caption Contest corpus. Using the dataset, we test
state-of-the-art multimodal large language models (MLLMs) on their ability to
detect and understand humorous multimodal metaphor use. Our experiments show
that current MLLMs still struggle with processing humorous multimodal
metaphors, particularly with regard to integrating visual and textual
information. We release our dataset and code at
github.com/xiaoyuisrain/humorous-multimodal-metaphor-use.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 19:15:01 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Tong",
"Xiaoyu",
""
],
[
"Zhang",
"Zhi",
""
],
[
"Lewis",
"Martha",
""
],
[
"Shutova",
"Ekaterina",
""
]
] | TITLE: Hummus: A Dataset of Humorous Multimodal Metaphor Use
ABSTRACT: Metaphor and humor share a lot of common ground, and metaphor is one of the
most common humorous mechanisms. This study focuses on the humorous capacity of
multimodal metaphors, which has not received due attention in the community. We
take inspiration from the Incongruity Theory of humor, the Conceptual Metaphor
Theory, and the annotation scheme behind the VU Amsterdam Metaphor Corpus, and
develop a novel annotation scheme for humorous multimodal metaphor use in
image-caption pairs. We create the Hummus Dataset of Humorous Multimodal
Metaphor Use, providing expert annotation on 1k image-caption pairs sampled
from the New Yorker Caption Contest corpus. Using the dataset, we test
state-of-the-art multimodal large language models (MLLMs) on their ability to
detect and understand humorous multimodal metaphor use. Our experiments show
that current MLLMs still struggle with processing humorous multimodal
metaphors, particularly with regard to integrating visual and textual
information. We release our dataset and code at
github.com/xiaoyuisrain/humorous-multimodal-metaphor-use.
|
2504.02994 | Yiyuan Xiong | Yiyuan Xiong, Shaofeng Cai | Improving log-based anomaly detection through learned adaptive filter | null | null | null | null | cs.LG cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Log messages record important system runtime information and are useful for
detecting anomalous behaviors and managing modern software systems. Many
supervised and unsupervised learning methods have been proposed recently for
log-based anomaly detection. State-of-the-art unsupervised methods predict the
next log event given a log sequence and apply fixed configurations that use the
same filter condition (i.e. k, the top k predicted log events will be regarded
as normal next events) which leads to inferior performance in the detection
stage because it sets one fixed k for all log sequences, which ignores the
dynamic nature and variance in different log sequences. Recently, deep
reinforcement learning (DRL) has been widely applied to make intelligent decisions
in a dynamic environment. In this work, we contend that it is necessary to
apply adaptive filters for different log sequences. To achieve this, we propose
a novel approach based on DRL to construct a learned adaptive filter and apply
different normal/abnormal filter thresholds for different log sequences. We
define the Markov Decision Process (MDP) and formulate the learned adaptive
filter as a problem that can be solved by DRL. We evaluate the learned adaptive
filter on two state-of-the-art log-based anomaly detection unsupervised
approaches DeepLog and LogAnomaly in two datasets HDFS and BGL. Extensive
experiments show that our approach outperforms the fixed configurations and
achieves significantly better performance in log-based anomaly detection.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 19:31:24 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Xiong",
"Yiyuan",
""
],
[
"Cai",
"Shaofeng",
""
]
] | TITLE: Improving log-based anomaly detection through learned adaptive filter
ABSTRACT: Log messages record important system runtime information and are useful for
detecting anomalous behaviors and managing modern software systems. Many
supervised and unsupervised learning methods have been proposed recently for
log-based anomaly detection. State-of-the-art unsupervised methods predict the
next log event given a log sequence and apply fixed configurations that use the
same filter condition (i.e. k, the top k predicted log events will be regarded
as normal next events) which leads to inferior performance in the detection
stage because it sets one fixed k for all log sequences, which ignores the
dynamic nature and variance in different log sequences. Recently, deep
reinforcement learning (DRL) has been widely applied to make intelligent decisions
in a dynamic environment. In this work, we contend that it is necessary to
apply adaptive filters for different log sequences. To achieve this, we propose
a novel approach based on DRL to construct a learned adaptive filter and apply
different normal/abnormal filter thresholds for different log sequences. We
define the Markov Decision Process (MDP) and formulate the learned adaptive
filter as a problem that can be solved by DRL. We evaluate the learned adaptive
filter on two state-of-the-art log-based anomaly detection unsupervised
approaches DeepLog and LogAnomaly in two datasets HDFS and BGL. Extensive
experiments show that our approach outperforms the fixed configurations and
achieves significantly better performance in log-based anomaly detection.
|
2504.02996 | Siqi Wang | Siqi Wang, Aoming Liu, Bryan A. Plummer | Noise-Aware Generalization: Robustness to In-Domain Noise and
Out-of-Domain Generalization | null | null | null | null | cs.LG cs.CV | http://creativecommons.org/licenses/by/4.0/ | Multi-source Domain Generalization (DG) aims to improve model robustness to
new distributions. However, DG methods often overlook the effect of label
noise, which can confuse a model during training, reducing performance. Limited
prior work has analyzed DG method's noise-robustness, typically focused on an
analysis of existing methods rather than new solutions. In this paper, we
investigate this underexplored space, where models are evaluated under both
distribution shifts and label noise, which we refer to as Noise-Aware
Generalization (NAG). A natural solution to address label noise would be to
combine a Learning with Noisy Labels (LNL) method with those from DG. Many LNL
methods aim to detect distribution shifts in a class's samples, i.e., they
assume that distribution shifts often correspond to label noise. However, in
NAG distribution shifts can be due to label noise or domain shifts, breaking
the assumptions used by LNL methods. A naive solution is to make a similar
assumption made by many DG methods, where we presume to have domain labels
during training, enabling us to isolate the two types of shifts. However, this
ignores valuable cross-domain information. Specifically, our proposed DL4ND
approach improves noise detection by taking advantage of the observation that
noisy samples that may appear indistinguishable within a single domain often
show greater variation when compared across domains. Experiments show that
DL4ND significantly improves performance across four diverse datasets, offering
a promising direction for tackling NAG.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 19:37:57 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Wang",
"Siqi",
""
],
[
"Liu",
"Aoming",
""
],
[
"Plummer",
"Bryan A.",
""
]
] | TITLE: Noise-Aware Generalization: Robustness to In-Domain Noise and
Out-of-Domain Generalization
ABSTRACT: Multi-source Domain Generalization (DG) aims to improve model robustness to
new distributions. However, DG methods often overlook the effect of label
noise, which can confuse a model during training, reducing performance. Limited
prior work has analyzed DG methods' noise-robustness, typically focusing on an
analysis of existing methods rather than new solutions. In this paper, we
investigate this underexplored space, where models are evaluated under both
distribution shifts and label noise, which we refer to as Noise-Aware
Generalization (NAG). A natural solution to address label noise would be to
combine a Learning with Noisy Labels (LNL) method with those from DG. Many LNL
methods aim to detect distribution shifts in a class's samples, i.e., they
assume that distribution shifts often correspond to label noise. However, in
NAG distribution shifts can be due to label noise or domain shifts, breaking
the assumptions used by LNL methods. A naive solution is to make a similar
assumption made by many DG methods, where we presume to have domain labels
during training, enabling us to isolate the two types of shifts. However, this
ignores valuable cross-domain information. Specifically, our proposed DL4ND
approach improves noise detection by taking advantage of the observation that
noisy samples that may appear indistinguishable within a single domain often
show greater variation when compared across domains. Experiments show that
DL4ND significantly improves performance across four diverse datasets, offering
a promising direction for tackling NAG.
|
2504.02999 | Bahareh Golchin | Bahareh Golchin, Banafsheh Rekabdar | Anomaly Detection in Time Series Data Using Reinforcement Learning,
Variational Autoencoder, and Active Learning | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | A novel approach to detecting anomalies in time series data is presented in
this paper. This approach is pivotal in domains such as data centers, sensor
networks, and finance. Traditional methods often struggle with manual parameter
tuning and cannot adapt to new anomaly types. Our method overcomes these
limitations by integrating Deep Reinforcement Learning (DRL) with a Variational
Autoencoder (VAE) and Active Learning. By incorporating a Long Short-Term
Memory (LSTM) network, our approach models sequential data and its dependencies
effectively, allowing for the detection of new anomaly classes with minimal
labeled data. Our innovative DRL-VAE and Active Learning combination
significantly improves existing methods, as shown by our evaluations on
real-world datasets, enhancing anomaly detection techniques and advancing time
series analysis.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 19:41:52 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Golchin",
"Bahareh",
""
],
[
"Rekabdar",
"Banafsheh",
""
]
] | TITLE: Anomaly Detection in Time Series Data Using Reinforcement Learning,
Variational Autoencoder, and Active Learning
ABSTRACT: A novel approach to detecting anomalies in time series data is presented in
this paper. This approach is pivotal in domains such as data centers, sensor
networks, and finance. Traditional methods often struggle with manual parameter
tuning and cannot adapt to new anomaly types. Our method overcomes these
limitations by integrating Deep Reinforcement Learning (DRL) with a Variational
Autoencoder (VAE) and Active Learning. By incorporating a Long Short-Term
Memory (LSTM) network, our approach models sequential data and its dependencies
effectively, allowing for the detection of new anomaly classes with minimal
labeled data. Our innovative DRL-VAE and Active Learning combination
significantly improves existing methods, as shown by our evaluations on
real-world datasets, enhancing anomaly detection techniques and advancing time
series analysis.
|
2504.03000 | Raquel Fernandez-Peralta | Raquel Fernandez-Peralta | Fuzzy Implicative Rules: A Unified Approach | null | null | null | null | cs.LO | http://creativecommons.org/licenses/by/4.0/ | Rule mining algorithms are one of the fundamental techniques in data mining
for disclosing significant patterns in terms of linguistic rules expressed in
natural language. In this paper, we revisit the concept of fuzzy implicative
rule to provide a solid theoretical framework for any fuzzy rule mining
algorithm interested in capturing patterns in terms of logical conditionals
rather than the co-occurrence of antecedent and consequent. In particular, we
study which properties the fuzzy operators should satisfy to ensure a coherent
behavior of different quality measures. As a consequence of this study, we
introduce a new property of fuzzy implication functions related to a monotone
behavior of the generalized modus ponens for which we provide different valid
solutions. Also, we prove that our modeling generalizes others if an adequate
choice of the fuzzy implication function is made, so it can be seen as a
unifying framework. Further, we provide an open-source implementation in Python
for mining fuzzy implicative associative rules. We test the applicability and
relevance of our framework for different real datasets and fuzzy operators.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 19:44:31 GMT"
}
] | 2025-04-07T00:00:00 | [
[
"Fernandez-Peralta",
"Raquel",
""
]
] | TITLE: Fuzzy Implicative Rules: A Unified Approach
ABSTRACT: Rule mining algorithms are one of the fundamental techniques in data mining
for disclosing significant patterns in terms of linguistic rules expressed in
natural language. In this paper, we revisit the concept of fuzzy implicative
rule to provide a solid theoretical framework for any fuzzy rule mining
algorithm interested in capturing patterns in terms of logical conditionals
rather than the co-occurrence of antecedent and consequent. In particular, we
study which properties the fuzzy operators should satisfy to ensure a coherent
behavior of different quality measures. As a consequence of this study, we
introduce a new property of fuzzy implication functions related to a monotone
behavior of the generalized modus ponens for which we provide different valid
solutions. Also, we prove that our modeling generalizes others if an adequate
choice of the fuzzy implication function is made, so it can be seen as a
unifying framework. Further, we provide an open-source implementation in Python
for mining fuzzy implicative associative rules. We test the applicability and
relevance of our framework for different real datasets and fuzzy operators.
|