Dataset schema (each record below lists these 17 fields in this order; missing values appear as "null"):

column           type            range / distinct values
id               string          lengths 9 to 16
submitter        string          lengths 3 to 64
authors          string          lengths 5 to 6.63k
title            string          lengths 7 to 245
comments         string          lengths 1 to 482
journal-ref      string          lengths 4 to 382
doi              string          lengths 9 to 151
report-no        string          984 distinct values
categories       string          lengths 5 to 108
license          string          9 distinct values
abstract         string          lengths 83 to 3.41k
versions         list            lengths 1 to 20
update_date      timestamp[s]    2007-05-23 00:00:00 to 2025-04-11 00:00:00
authors_parsed   sequence        lengths 1 to 427
prompt           string          lengths 166 to 3.49k
label            string          2 classes (new_dataset / no_new_dataset)
prob             float64         0.5 to 0.98
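Before the records, a minimal sketch of how rows with this schema could be loaded and filtered; the pandas dependency and the local file name are assumptions, not part of the dataset card.

```python
# Minimal sketch: load rows with the schema above and inspect the labels.
# The file name is hypothetical; any JSON Lines export of the rows works.
import pandas as pd

df = pd.read_json("arxiv_new_dataset_rows.jsonl", lines=True)

print(df["label"].value_counts())   # two classes: new_dataset / no_new_dataset
print(df["prob"].describe())        # classifier confidence, 0.5 to 0.98

# Example: confidently labeled records that announce a new dataset.
new_ds = df[(df["label"] == "new_dataset") & (df["prob"] > 0.9)]
print(new_ds[["id", "title"]])
```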
2204.07661
Soumyajit Gupta
Soumyajit Gupta, Venelin Kovatchev, Anubrata Das, Maria De-Arteaga, Matthew Lease
Finding Pareto Trade-offs in Fair and Accurate Detection of Toxic Speech
null
null
null
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Optimizing NLP models for fairness poses many challenges. Lack of differentiable fairness measures prevents gradient-based loss training or requires surrogate losses that diverge from the true metric of interest. In addition, competing objectives (e.g., accuracy vs. fairness) often require making trade-offs based on stakeholder preferences, but stakeholders may not know their preferences before seeing system performance under different trade-off settings. To address these challenges, we begin by formulating a differentiable version of a popular fairness measure, Accuracy Parity, to provide balanced accuracy across demographic groups. Next, we show how model-agnostic, HyperNetwork optimization can efficiently train arbitrary NLP model architectures to learn Pareto-optimal trade-offs between competing metrics. Focusing on the task of toxic language detection, we show the generality and efficacy of our methods across two datasets, three neural architectures, and three fairness losses.
[ { "version": "v1", "created": "Fri, 15 Apr 2022 22:11:25 GMT" }, { "version": "v2", "created": "Tue, 10 May 2022 18:36:41 GMT" }, { "version": "v3", "created": "Thu, 10 Apr 2025 00:29:44 GMT" } ]
2025-04-11T00:00:00
[ [ "Gupta", "Soumyajit", "" ], [ "Kovatchev", "Venelin", "" ], [ "Das", "Anubrata", "" ], [ "De-Arteaga", "Maria", "" ], [ "Lease", "Matthew", "" ] ]
TITLE: Finding Pareto Trade-offs in Fair and Accurate Detection of Toxic Speech ABSTRACT: Optimizing NLP models for fairness poses many challenges. Lack of differentiable fairness measures prevents gradient-based loss training or requires surrogate losses that diverge from the true metric of interest. In addition, competing objectives (e.g., accuracy vs. fairness) often require making trade-offs based on stakeholder preferences, but stakeholders may not know their preferences before seeing system performance under different trade-off settings. To address these challenges, we begin by formulating a differentiable version of a popular fairness measure, Accuracy Parity, to provide balanced accuracy across demographic groups. Next, we show how model-agnostic, HyperNetwork optimization can efficiently train arbitrary NLP model architectures to learn Pareto-optimal trade-offs between competing metrics. Focusing on the task of toxic language detection, we show the generality and efficacy of our methods across two datasets, three neural architectures, and three fairness losses.
no_new_dataset
0.947235
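The record above (2204.07661) turns on making Accuracy Parity differentiable. As a hedged illustration, a minimal PyTorch sketch of one plausible surrogate, soft per-group accuracy computed from predicted probabilities with a penalty on cross-group deviation; this is an assumption, not the paper's exact formulation.

```python
import torch

def soft_group_accuracy(probs, labels, mask):
    # Soft accuracy: probability mass the model assigns to the true class.
    # probs and labels are float tensors in [0, 1] for binary classification.
    correct = probs * labels + (1.0 - probs) * (1.0 - labels)
    return (correct * mask).sum() / mask.sum().clamp(min=1.0)

def accuracy_parity_loss(probs, labels, groups):
    # Penalize each demographic group's deviation from the mean soft
    # accuracy; differentiable, so it can be added to the task loss.
    accs = torch.stack([
        soft_group_accuracy(probs, labels, (groups == g).float())
        for g in torch.unique(groups)
    ])
    return (accs - accs.mean()).abs().mean()
```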
2303.17408
Yucheng Ruan
Yucheng Ruan, Xiang Lan, Daniel J. Tan, Hairil Rizal Abdullah, Mengling Feng
P-Transformer: A Prompt-based Multimodal Transformer Architecture For Medical Tabular Data
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Medical tabular data, abundant in Electronic Health Records (EHRs), is a valuable resource for diverse medical tasks such as risk prediction. While deep learning approaches, particularly transformer-based models, have shown remarkable performance in tabular data prediction, existing work has yet to be effectively adapted to the medical domain: it ignores unstructured free text and underutilizes the textual information in structured data. To address these issues, we propose PTransformer, a \underline{P}rompt-based multimodal \underline{Transformer} architecture designed specifically for medical tabular data. This framework consists of two critical components: a tabular cell embedding generator and a tabular transformer. The former efficiently encodes diverse modalities from both structured and unstructured tabular data into a harmonized language semantic space with the help of a pre-trained sentence encoder and medical prompts. The latter integrates cell representations to generate patient embeddings for various medical tasks. In comprehensive experiments on two real-world datasets covering three medical tasks, PTransformer improved on state-of-the-art (SOTA) baselines in predictive performance by 10.9%/11.0% on RMSE/MAE, 0.5%/2.2% on RMSE/MAE, and 1.6%/0.8% on BACC/AUROC.
[ { "version": "v1", "created": "Thu, 30 Mar 2023 14:25:44 GMT" }, { "version": "v2", "created": "Wed, 9 Aug 2023 08:58:25 GMT" }, { "version": "v3", "created": "Tue, 9 Jan 2024 10:28:00 GMT" }, { "version": "v4", "created": "Thu, 10 Apr 2025 06:24:36 GMT" } ]
2025-04-11T00:00:00
[ [ "Ruan", "Yucheng", "" ], [ "Lan", "Xiang", "" ], [ "Tan", "Daniel J.", "" ], [ "Abdullah", "Hairil Rizal", "" ], [ "Feng", "Mengling", "" ] ]
TITLE: P-Transformer: A Prompt-based Multimodal Transformer Architecture For Medical Tabular Data ABSTRACT: Medical tabular data, abundant in Electronic Health Records (EHRs), is a valuable resource for diverse medical tasks such as risk prediction. While deep learning approaches, particularly transformer-based models, have shown remarkable performance in tabular data prediction, existing work has yet to be effectively adapted to the medical domain: it ignores unstructured free text and underutilizes the textual information in structured data. To address these issues, we propose PTransformer, a \underline{P}rompt-based multimodal \underline{Transformer} architecture designed specifically for medical tabular data. This framework consists of two critical components: a tabular cell embedding generator and a tabular transformer. The former efficiently encodes diverse modalities from both structured and unstructured tabular data into a harmonized language semantic space with the help of a pre-trained sentence encoder and medical prompts. The latter integrates cell representations to generate patient embeddings for various medical tasks. In comprehensive experiments on two real-world datasets covering three medical tasks, PTransformer improved on state-of-the-art (SOTA) baselines in predictive performance by 10.9%/11.0% on RMSE/MAE, 0.5%/2.2% on RMSE/MAE, and 1.6%/0.8% on BACC/AUROC.
no_new_dataset
0.944638
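One plausible reading of the "tabular cell embedding generator" described above: verbalize each structured cell with a medical prompt template and encode it with a pre-trained sentence encoder. The templates, field names, and encoder choice below are illustrative assumptions, not the paper's design.

```python
from sentence_transformers import SentenceTransformer

# Hypothetical prompt templates that verbalize structured cells as text.
TEMPLATES = {
    "age": "The patient's age is {value} years.",
    "creatinine": "The preoperative creatinine level is {value} mg/dL.",
}

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works

def embed_patient_row(row):
    # Structured cells and free-text notes land in one semantic space,
    # yielding one embedding per tabular cell.
    sentences = [TEMPLATES[k].format(value=v) for k, v in row.items() if k in TEMPLATES]
    if row.get("notes"):
        sentences.append(row["notes"])
    return encoder.encode(sentences)

cells = embed_patient_row({"age": 63, "creatinine": 1.2, "notes": "No known allergies."})
```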
2305.03535
Jonas Hein
Jonas Hein, Nicola Cavalcanti, Daniel Suter, Lukas Zingg, Fabio Carrillo, Lilian Calvet, Mazda Farshad, Marc Pollefeys, Nassir Navab, Philipp Fürnstahl
Next-generation Surgical Navigation: Marker-less Multi-view 6DoF Pose Estimation of Surgical Instruments
Accepted for publication in Medical Image Analysis. Project page: https://jonashein.github.io/mvpsp/
null
null
null
cs.CV
http://creativecommons.org/licenses/by-nc-nd/4.0/
State-of-the-art research of traditional computer vision is increasingly leveraged in the surgical domain. A particular focus in computer-assisted surgery is to replace marker-based tracking systems for instrument localization with pure image-based 6DoF pose estimation using deep-learning methods. However, state-of-the-art single-view pose estimation methods do not yet meet the accuracy required for surgical navigation. In this context, we investigate the benefits of multi-view setups for highly accurate and occlusion-robust 6DoF pose estimation of surgical instruments and derive recommendations for an ideal camera system that addresses the challenges in the operating room. The contributions of this work are threefold. First, we present a multi-camera capture setup consisting of static and head-mounted cameras, which allows us to study the performance of pose estimation methods under various camera configurations. Second, we publish a multi-view RGB-D video dataset of ex-vivo spine surgeries, captured in a surgical wet lab and a real operating theatre and including rich annotations for surgeon, instrument, and patient anatomy. Third, we evaluate three state-of-the-art single-view and multi-view methods for the task of 6DoF pose estimation of surgical instruments and analyze the influence of camera configurations, training data, and occlusions on the pose accuracy and generalization ability. The best method utilizes five cameras in a multi-view pose optimization and achieves an average position and orientation error of 1.01 mm and 0.89{\deg} for a surgical drill as well as 2.79 mm and 3.33{\deg} for a screwdriver under optimal conditions. Our results demonstrate that marker-less tracking of surgical instruments is becoming a feasible alternative to existing marker-based systems.
[ { "version": "v1", "created": "Fri, 5 May 2023 13:42:19 GMT" }, { "version": "v2", "created": "Fri, 22 Dec 2023 20:52:50 GMT" }, { "version": "v3", "created": "Thu, 10 Apr 2025 17:23:33 GMT" } ]
2025-04-11T00:00:00
[ [ "Hein", "Jonas", "" ], [ "Cavalcanti", "Nicola", "" ], [ "Suter", "Daniel", "" ], [ "Zingg", "Lukas", "" ], [ "Carrillo", "Fabio", "" ], [ "Calvet", "Lilian", "" ], [ "Farshad", "Mazda", "" ], [ "Pollefeys", "Marc", "" ], [ "Navab", "Nassir", "" ], [ "Fürnstahl", "Philipp", "" ] ]
TITLE: Next-generation Surgical Navigation: Marker-less Multi-view 6DoF Pose Estimation of Surgical Instruments ABSTRACT: State-of-the-art research of traditional computer vision is increasingly leveraged in the surgical domain. A particular focus in computer-assisted surgery is to replace marker-based tracking systems for instrument localization with pure image-based 6DoF pose estimation using deep-learning methods. However, state-of-the-art single-view pose estimation methods do not yet meet the accuracy required for surgical navigation. In this context, we investigate the benefits of multi-view setups for highly accurate and occlusion-robust 6DoF pose estimation of surgical instruments and derive recommendations for an ideal camera system that addresses the challenges in the operating room. The contributions of this work are threefold. First, we present a multi-camera capture setup consisting of static and head-mounted cameras, which allows us to study the performance of pose estimation methods under various camera configurations. Second, we publish a multi-view RGB-D video dataset of ex-vivo spine surgeries, captured in a surgical wet lab and a real operating theatre and including rich annotations for surgeon, instrument, and patient anatomy. Third, we evaluate three state-of-the-art single-view and multi-view methods for the task of 6DoF pose estimation of surgical instruments and analyze the influence of camera configurations, training data, and occlusions on the pose accuracy and generalization ability. The best method utilizes five cameras in a multi-view pose optimization and achieves an average position and orientation error of 1.01 mm and 0.89{\deg} for a surgical drill as well as 2.79 mm and 3.33{\deg} for a screwdriver under optimal conditions. Our results demonstrate that marker-less tracking of surgical instruments is becoming a feasible alternative to existing marker-based systems.
new_dataset
0.963161
2305.15932
Jie He
Jie He and Simon Chi Lok U and Víctor Gutiérrez-Basulto and Jeff Z. Pan
BUCA: A Binary Classification Approach to Unsupervised Commonsense Question Answering
There is a text error in Table 10
null
null
null
cs.CL
http://creativecommons.org/licenses/by-nc-nd/4.0/
Unsupervised commonsense reasoning (UCR) is becoming increasingly popular as the construction of commonsense reasoning datasets is expensive, and they are inevitably limited in their scope. A popular approach to UCR is to fine-tune language models with external knowledge (e.g., knowledge graphs), but this usually requires a large number of training examples. In this paper, we propose to transform the downstream multiple choice question answering task into a simpler binary classification task by ranking all candidate answers according to their reasonableness. To this end, for training the model, we convert the knowledge graph triples into reasonable and unreasonable texts. Extensive experimental results show the effectiveness of our approach on various multiple choice question answering benchmarks. Furthermore, compared with existing UCR approaches using KGs, ours is less data hungry. Our code is available at https://github.com/probe2/BUCA.
[ { "version": "v1", "created": "Thu, 25 May 2023 10:59:47 GMT" }, { "version": "v2", "created": "Wed, 7 Jun 2023 20:33:09 GMT" }, { "version": "v3", "created": "Mon, 10 Mar 2025 09:28:29 GMT" }, { "version": "v4", "created": "Wed, 9 Apr 2025 21:40:10 GMT" } ]
2025-04-11T00:00:00
[ [ "He", "Jie", "" ], [ "U", "Simon Chi Lok", "" ], [ "Gutiérrez-Basulto", "Víctor", "" ], [ "Pan", "Jeff Z.", "" ] ]
TITLE: BUCA: A Binary Classification Approach to Unsupervised Commonsense Question Answering ABSTRACT: Unsupervised commonsense reasoning (UCR) is becoming increasingly popular as the construction of commonsense reasoning datasets is expensive, and they are inevitably limited in their scope. A popular approach to UCR is to fine-tune language models with external knowledge (e.g., knowledge graphs), but this usually requires a large number of training examples. In this paper, we propose to transform the downstream multiple choice question answering task into a simpler binary classification task by ranking all candidate answers according to their reasonableness. To this end, for training the model, we convert the knowledge graph triples into reasonable and unreasonable texts. Extensive experimental results show the effectiveness of our approach on various multiple choice question answering benchmarks. Furthermore, compared with existing UCR approaches using KGs, ours is less data hungry. Our code is available at https://github.com/probe2/BUCA.
no_new_dataset
0.947672
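A sketch of the triple-to-text conversion the abstract above describes: KG triples are verbalized as "reasonable" positives, and corrupted triples as "unreasonable" negatives, for binary classification. The relation templates and the tail-corruption scheme are illustrative assumptions.

```python
import random

# Hypothetical templates for verbalizing knowledge-graph relations.
REL_TEMPLATES = {
    "CapableOf": "{head} is capable of {tail}.",
    "UsedFor": "{head} is used for {tail}.",
}

def make_training_pairs(triples, entities, seed=0):
    """Turn (head, relation, tail) triples into (text, label) pairs:
    1 for the original triple, 0 for a tail-corrupted version."""
    rng = random.Random(seed)
    pairs = []
    for head, rel, tail in triples:
        template = REL_TEMPLATES[rel]
        pairs.append((template.format(head=head, tail=tail), 1))
        fake_tail = rng.choice([e for e in entities if e != tail])
        pairs.append((template.format(head=head, tail=fake_tail), 0))
    return pairs

print(make_training_pairs([("a knife", "UsedFor", "cutting bread")],
                          ["cutting bread", "flying a plane"]))
```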
2308.07223
Melanie Roschewitz
Mélanie Roschewitz and Ben Glocker
Distance Matters For Improving Performance Estimation Under Covariate Shift
Accepted to ICCV Workshop on Uncertainty Quantification for Computer Vision 2023
Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV) Workshops, 2023, pp. 4549-4559
10.1109/ICCVW60793.2023.00489
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Performance estimation under covariate shift is a crucial component of safe AI model deployment, especially for sensitive use cases. Recently, several solutions were proposed to tackle this problem, most leveraging model predictions or softmax confidence to derive accuracy estimates. However, under dataset shifts, confidence scores may become ill-calibrated if samples are too far from the training distribution. In this work, we show that taking into account distances of test samples to their expected training distribution can significantly improve performance estimation under covariate shift. Precisely, we introduce a "distance-check" to flag samples that lie too far from the expected distribution, to avoid relying on their untrustworthy model outputs in the accuracy estimation step. We demonstrate the effectiveness of this method on 13 image classification tasks, across a wide range of natural and synthetic distribution shifts and hundreds of models, with a median relative MAE improvement of 27% over the best baseline across all tasks, and SOTA performance on 10 out of 13 tasks. Our code is publicly available at https://github.com/melanibe/distance_matters_performance_estimation.
[ { "version": "v1", "created": "Mon, 14 Aug 2023 15:49:19 GMT" } ]
2025-04-11T00:00:00
[ [ "Roschewitz", "Mélanie", "" ], [ "Glocker", "Ben", "" ] ]
TITLE: Distance Matters For Improving Performance Estimation Under Covariate Shift ABSTRACT: Performance estimation under covariate shift is a crucial component of safe AI model deployment, especially for sensitive use cases. Recently, several solutions were proposed to tackle this problem, most leveraging model predictions or softmax confidence to derive accuracy estimates. However, under dataset shifts, confidence scores may become ill-calibrated if samples are too far from the training distribution. In this work, we show that taking into account distances of test samples to their expected training distribution can significantly improve performance estimation under covariate shift. Precisely, we introduce a "distance-check" to flag samples that lie too far from the expected distribution, to avoid relying on their untrustworthy model outputs in the accuracy estimation step. We demonstrate the effectiveness of this method on 13 image classification tasks, across a wide range of natural and synthetic distribution shifts and hundreds of models, with a median relative MAE improvement of 27% over the best baseline across all tasks, and SOTA performance on 10 out of 13 tasks. Our code is publicly available at https://github.com/melanibe/distance_matters_performance_estimation.
no_new_dataset
0.946349
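A hedged sketch of the "distance-check" idea from the abstract above, using scikit-learn: flag test points whose nearest-training-neighbor distance exceeds an in-distribution quantile, and exclude their confidences from the accuracy estimate. Counting flagged points as errors is one possible choice, not necessarily the paper's exact procedure.

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def accuracy_estimate_with_distance_check(train_feats, test_feats, test_conf, q=0.99):
    nn = NearestNeighbors(n_neighbors=2).fit(train_feats)
    # Distance of each training point to its nearest *other* training point
    # (column 0 is the point itself, at distance zero).
    d_train, _ = nn.kneighbors(train_feats)
    tau = np.quantile(d_train[:, 1], q)
    d_test, _ = nn.kneighbors(test_feats, n_neighbors=1)
    trusted = d_test[:, 0] <= tau
    # Mean softmax confidence as the accuracy proxy, trusting only
    # in-distribution samples; far-away samples count as errors here.
    return float((test_conf * trusted).sum() / len(test_conf))
```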
2309.15408
Mario Beraha
Mario Beraha, Stefano Favaro, Matteo Sesia
A smoothed-Bayesian approach to frequency recovery from sketched data
null
null
null
null
stat.ME cs.DS cs.IR math.ST stat.TH
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We provide a novel statistical perspective on a classical problem at the intersection of computer science and information theory: recovering the empirical frequency of a symbol in a large discrete dataset using only a compressed representation, or sketch, obtained via random hashing. Departing from traditional algorithmic approaches, recent works have proposed Bayesian nonparametric (BNP) methods that can provide more informative frequency estimates by leveraging modeling assumptions about the distribution of the sketched data. In this paper, we propose a smoothed-Bayesian method, inspired by existing BNP approaches but designed in a frequentist framework to overcome the computational limitations of the BNP approaches when dealing with large-scale data from realistic distributions, including those with power-law tail behaviors. For sketches obtained with a single hash function, our approach is supported by rigorous frequentist properties, including unbiasedness and optimality under a squared error loss function within an intuitive class of linear estimators. For sketches with multiple hash functions, we introduce an approach based on multi-view learning to construct computationally efficient frequency estimators. We validate our method on synthetic and real data, comparing its performance to that of existing alternatives.
[ { "version": "v1", "created": "Wed, 27 Sep 2023 05:20:53 GMT" }, { "version": "v2", "created": "Wed, 12 Jun 2024 13:15:11 GMT" }, { "version": "v3", "created": "Thu, 28 Nov 2024 13:46:38 GMT" }, { "version": "v4", "created": "Thu, 10 Apr 2025 08:21:29 GMT" } ]
2025-04-11T00:00:00
[ [ "Beraha", "Mario", "" ], [ "Favaro", "Stefano", "" ], [ "Sesia", "Matteo", "" ] ]
TITLE: A smoothed-Bayesian approach to frequency recovery from sketched data ABSTRACT: We provide a novel statistical perspective on a classical problem at the intersection of computer science and information theory: recovering the empirical frequency of a symbol in a large discrete dataset using only a compressed representation, or sketch, obtained via random hashing. Departing from traditional algorithmic approaches, recent works have proposed Bayesian nonparametric (BNP) methods that can provide more informative frequency estimates by leveraging modeling assumptions about the distribution of the sketched data. In this paper, we propose a smoothed-Bayesian method, inspired by existing BNP approaches but designed in a frequentist framework to overcome the computational limitations of the BNP approaches when dealing with large-scale data from realistic distributions, including those with power-law tail behaviors. For sketches obtained with a single hash function, our approach is supported by rigorous frequentist properties, including unbiasedness and optimality under a squared error loss function within an intuitive class of linear estimators. For sketches with multiple hash functions, we introduce an approach based on multi-view learning to construct computationally efficient frequency estimators. We validate our method on synthetic and real data, comparing its performance to that of existing alternatives.
no_new_dataset
0.9434
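The random-hashing setup the abstract above builds on, in a few lines: a single-hash sketch with the classical bucket-count estimate and a simple collision-mass correction. The correction shown is a count-mean-min style debiasing, used here only as a stand-in for the paper's smoothed estimator.

```python
import numpy as np

class SingleHashSketch:
    """Compressed counts: bucket j stores the total frequency of all
    symbols that hash to j. A sketch of the setting, not the estimator
    proposed in the paper."""
    def __init__(self, width, seed=0):
        self.width = width
        self.seed = seed
        self.counts = np.zeros(width, dtype=np.int64)
        self.total = 0

    def _bucket(self, symbol):
        return hash((self.seed, symbol)) % self.width  # stand-in random hash

    def add(self, symbol, count=1):
        self.counts[self._bucket(symbol)] += count
        self.total += count

    def estimate(self, symbol):
        c = self.counts[self._bucket(symbol)]
        # The classical answer is c itself (an overestimate under collisions);
        # subtracting the expected collision mass is a count-mean-min style
        # correction, standing in for the smoothed estimator discussed above.
        noise = (self.total - c) / max(self.width - 1, 1)
        return max(c - noise, 0.0)
```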
2311.06293
Zeynab Kaseb
Zeynab Kaseb, Matthias Moller, Giorgio Tosti Balducci, Peter Palensky, Pedro P. Vergara
Quantum Neural Networks for Power Flow Analysis
8 pages, 13 figures
null
10.1016/j.epsr.2024.110677
null
quant-ph cs.LG cs.SY eess.SY
http://creativecommons.org/licenses/by/4.0/
This paper explores the potential application of quantum and hybrid quantum-classical neural networks in power flow analysis. Experiments are conducted using two datasets based on 4-bus and 33-bus test systems. A systematic performance comparison is also conducted among quantum, hybrid quantum-classical, and classical neural networks. The comparison is based on (i) generalization ability, (ii) robustness, (iii) training dataset size needed, (iv) training error, and (v) training process stability. The results show that the developed hybrid quantum-classical neural network outperforms both quantum and classical neural networks, and hence can improve deep learning-based power flow analysis in the noisy-intermediate-scale quantum (NISQ) and fault-tolerant quantum (FTQ) era.
[ { "version": "v1", "created": "Sat, 4 Nov 2023 11:25:31 GMT" }, { "version": "v2", "created": "Sun, 10 Mar 2024 15:49:57 GMT" } ]
2025-04-11T00:00:00
[ [ "Kaseb", "Zeynab", "" ], [ "Moller", "Matthias", "" ], [ "Balducci", "Giorgio Tosti", "" ], [ "Palensky", "Peter", "" ], [ "Vergara", "Pedro P.", "" ] ]
TITLE: Quantum Neural Networks for Power Flow Analysis ABSTRACT: This paper explores the potential application of quantum and hybrid quantum-classical neural networks in power flow analysis. Experiments are conducted using two datasets based on 4-bus and 33-bus test systems. A systematic performance comparison is also conducted among quantum, hybrid quantum-classical, and classical neural networks. The comparison is based on (i) generalization ability, (ii) robustness, (iii) training dataset size needed, (iv) training error, and (v) training process stability. The results show that the developed hybrid quantum-classical neural network outperforms both quantum and classical neural networks, and hence can improve deep learning-based power flow analysis in the noisy-intermediate-scale quantum (NISQ) and fault-tolerant quantum (FTQ) era.
no_new_dataset
0.951278
2311.13706
Nicolás Gaggion Ph.D.
Nicolás Gaggion, Benjamin A. Matheson, Yan Xia, Rodrigo Bonazzola, Nishant Ravikumar, Zeike A. Taylor, Diego H. Milone, Alejandro F. Frangi, Enzo Ferrante
Multi-view Hybrid Graph Convolutional Network for Volume-to-mesh Reconstruction in Cardiovascular MRI
null
null
null
null
eess.IV cs.CV
http://creativecommons.org/licenses/by-sa/4.0/
Cardiovascular magnetic resonance imaging is emerging as a crucial tool to examine cardiac morphology and function. Essential to this endeavour are anatomical 3D surface and volumetric meshes derived from CMR images, which facilitate computational anatomy studies, biomarker discovery, and in-silico simulations. Traditional approaches typically follow complex multi-step pipelines, first segmenting images and then reconstructing meshes, making them time-consuming and prone to error propagation. In response, we introduce HybridVNet, a novel architecture for direct image-to-mesh extraction seamlessly integrating standard convolutional neural networks with graph convolutions, which we prove can efficiently handle surface and volumetric meshes by encoding them as graph structures. To further enhance accuracy, we propose a multi-view HybridVNet architecture which processes both long axis and short axis CMR, showing that it can increase the performance of cardiac MR mesh generation. Our model combines traditional convolutional networks with variational graph generative models, deep supervision and mesh-specific regularisation. Experiments on a comprehensive dataset from the UK Biobank confirm the potential of HybridVNet to significantly advance cardiac imaging and computational cardiology by efficiently generating high-fidelity meshes from CMR images. Multi-view HybridVNet outperforms the state-of-the-art, achieving improvements of up to $\sim$27\% reduction in Mean Contour Distance (from 1.86 mm to 1.35 mm for the LV Myocardium), up to $\sim$18\% improvement in Hausdorff distance (from 4.74 mm to 3.89mm, for the LV Endocardium), and up to $\sim$8\% in Dice Coefficient (from 0.78 to 0.84, for the LV Myocardium), highlighting its superior accuracy.
[ { "version": "v1", "created": "Wed, 22 Nov 2023 21:51:29 GMT" }, { "version": "v2", "created": "Tue, 13 Aug 2024 19:18:41 GMT" }, { "version": "v3", "created": "Thu, 10 Apr 2025 16:25:45 GMT" } ]
2025-04-11T00:00:00
[ [ "Gaggion", "Nicolás", "" ], [ "Matheson", "Benjamin A.", "" ], [ "Xia", "Yan", "" ], [ "Bonazzola", "Rodrigo", "" ], [ "Ravikumar", "Nishant", "" ], [ "Taylor", "Zeike A.", "" ], [ "Milone", "Diego H.", "" ], [ "Frangi", "Alejandro F.", "" ], [ "Ferrante", "Enzo", "" ] ]
TITLE: Multi-view Hybrid Graph Convolutional Network for Volume-to-mesh Reconstruction in Cardiovascular MRI ABSTRACT: Cardiovascular magnetic resonance imaging is emerging as a crucial tool to examine cardiac morphology and function. Essential to this endeavour are anatomical 3D surface and volumetric meshes derived from CMR images, which facilitate computational anatomy studies, biomarker discovery, and in-silico simulations. Traditional approaches typically follow complex multi-step pipelines, first segmenting images and then reconstructing meshes, making them time-consuming and prone to error propagation. In response, we introduce HybridVNet, a novel architecture for direct image-to-mesh extraction seamlessly integrating standard convolutional neural networks with graph convolutions, which we prove can efficiently handle surface and volumetric meshes by encoding them as graph structures. To further enhance accuracy, we propose a multi-view HybridVNet architecture which processes both long axis and short axis CMR, showing that it can increase the performance of cardiac MR mesh generation. Our model combines traditional convolutional networks with variational graph generative models, deep supervision and mesh-specific regularisation. Experiments on a comprehensive dataset from the UK Biobank confirm the potential of HybridVNet to significantly advance cardiac imaging and computational cardiology by efficiently generating high-fidelity meshes from CMR images. Multi-view HybridVNet outperforms the state-of-the-art, achieving improvements of up to $\sim$27\% reduction in Mean Contour Distance (from 1.86 mm to 1.35 mm for the LV Myocardium), up to $\sim$18\% improvement in Hausdorff distance (from 4.74 mm to 3.89mm, for the LV Endocardium), and up to $\sim$8\% in Dice Coefficient (from 0.78 to 0.84, for the LV Myocardium), highlighting its superior accuracy.
no_new_dataset
0.955194
2401.03048
Xin Ma
Xin Ma, Yaohui Wang, Xinyuan Chen, Gengyun Jia, Ziwei Liu, Yuan-Fang Li, Cunjian Chen, Yu Qiao
Latte: Latent Diffusion Transformer for Video Generation
Accepted by Transactions on Machine Learning Research 2025; Project page: https://maxin-cn.github.io/latte_project
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We propose a novel Latent Diffusion Transformer, namely Latte, for video generation. Latte first extracts spatio-temporal tokens from input videos and then adopts a series of Transformer blocks to model video distribution in the latent space. In order to model a substantial number of tokens extracted from videos, four efficient variants are introduced from the perspective of decomposing the spatial and temporal dimensions of input videos. To improve the quality of generated videos, we determine the best practices of Latte through rigorous experimental analysis, including video clip patch embedding, model variants, timestep-class information injection, temporal positional embedding, and learning strategies. Our comprehensive evaluation demonstrates that Latte achieves state-of-the-art performance across four standard video generation datasets, i.e., FaceForensics, SkyTimelapse, UCF101, and Taichi-HD. In addition, we extend Latte to text-to-video generation (T2V) task, where Latte achieves comparable results compared to recent T2V models. We strongly believe that Latte provides valuable insights for future research on incorporating Transformers into diffusion models for video generation.
[ { "version": "v1", "created": "Fri, 5 Jan 2024 19:55:15 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 09:28:20 GMT" } ]
2025-04-11T00:00:00
[ [ "Ma", "Xin", "" ], [ "Wang", "Yaohui", "" ], [ "Chen", "Xinyuan", "" ], [ "Jia", "Gengyun", "" ], [ "Liu", "Ziwei", "" ], [ "Li", "Yuan-Fang", "" ], [ "Chen", "Cunjian", "" ], [ "Qiao", "Yu", "" ] ]
TITLE: Latte: Latent Diffusion Transformer for Video Generation ABSTRACT: We propose a novel Latent Diffusion Transformer, namely Latte, for video generation. Latte first extracts spatio-temporal tokens from input videos and then adopts a series of Transformer blocks to model video distribution in the latent space. In order to model a substantial number of tokens extracted from videos, four efficient variants are introduced from the perspective of decomposing the spatial and temporal dimensions of input videos. To improve the quality of generated videos, we determine the best practices of Latte through rigorous experimental analysis, including video clip patch embedding, model variants, timestep-class information injection, temporal positional embedding, and learning strategies. Our comprehensive evaluation demonstrates that Latte achieves state-of-the-art performance across four standard video generation datasets, i.e., FaceForensics, SkyTimelapse, UCF101, and Taichi-HD. In addition, we extend Latte to text-to-video generation (T2V) task, where Latte achieves comparable results compared to recent T2V models. We strongly believe that Latte provides valuable insights for future research on incorporating Transformers into diffusion models for video generation.
no_new_dataset
0.949106
2402.18206
Abhijnya Bhat
Rishubh Parihar, Abhijnya Bhat, Abhipsa Basu, Saswat Mallick, Jogendra Nath Kundu, R. Venkatesh Babu
Balancing Act: Distribution-Guided Debiasing in Diffusion Models
CVPR 2024. Project Page : https://ab-34.github.io/balancing_act/
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Diffusion Models (DMs) have emerged as powerful generative models with unprecedented image generation capability. These models are widely used for data augmentation and creative applications. However, DMs reflect the biases present in the training datasets. This is especially concerning in the context of faces, where the DM prefers one demographic subgroup over others (e.g., female vs. male). In this work, we present a method for debiasing DMs without relying on additional data or model retraining. Specifically, we propose Distribution Guidance, which enforces the generated images to follow the prescribed attribute distribution. To realize this, we build on the key insight that the latent features of the denoising UNet hold rich demographic semantics, which can be leveraged to guide debiased generation. We train an Attribute Distribution Predictor (ADP), a small MLP that maps the latent features to the distribution of attributes. ADP is trained with pseudo labels generated from existing attribute classifiers. The proposed Distribution Guidance with ADP enables fair generation. Our method reduces bias across single/multiple attributes and outperforms the baseline by a significant margin for unconditional and text-conditional diffusion models. Further, we present a downstream task of training a fair attribute classifier by rebalancing the training set with our generated data.
[ { "version": "v1", "created": "Wed, 28 Feb 2024 09:53:17 GMT" }, { "version": "v2", "created": "Wed, 22 May 2024 17:23:22 GMT" }, { "version": "v3", "created": "Wed, 29 May 2024 13:33:57 GMT" }, { "version": "v4", "created": "Thu, 10 Apr 2025 14:39:59 GMT" } ]
2025-04-11T00:00:00
[ [ "Parihar", "Rishubh", "" ], [ "Bhat", "Abhijnya", "" ], [ "Basu", "Abhipsa", "" ], [ "Mallick", "Saswat", "" ], [ "Kundu", "Jogendra Nath", "" ], [ "Babu", "R. Venkatesh", "" ] ]
TITLE: Balancing Act: Distribution-Guided Debiasing in Diffusion Models ABSTRACT: Diffusion Models (DMs) have emerged as powerful generative models with unprecedented image generation capability. These models are widely used for data augmentation and creative applications. However, DMs reflect the biases present in the training datasets. This is especially concerning in the context of faces, where the DM prefers one demographic subgroup over others (e.g., female vs. male). In this work, we present a method for debiasing DMs without relying on additional data or model retraining. Specifically, we propose Distribution Guidance, which enforces the generated images to follow the prescribed attribute distribution. To realize this, we build on the key insight that the latent features of the denoising UNet hold rich demographic semantics, which can be leveraged to guide debiased generation. We train an Attribute Distribution Predictor (ADP), a small MLP that maps the latent features to the distribution of attributes. ADP is trained with pseudo labels generated from existing attribute classifiers. The proposed Distribution Guidance with ADP enables fair generation. Our method reduces bias across single/multiple attributes and outperforms the baseline by a significant margin for unconditional and text-conditional diffusion models. Further, we present a downstream task of training a fair attribute classifier by rebalancing the training set with our generated data.
no_new_dataset
0.951684
2402.18307
Joanne Lin
Joanne Lin, Nantheera Anantrasirichai, David Bull
Multi-Scale Denoising in the Feature Space for Low-Light Instance Segmentation
Accepted by ICASSP 2025
2025 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), Hyderabad, India, 2025, pp. 1-5
10.1109/ICASSP49660.2025.10889336
ICASSP 2025
cs.CV
http://creativecommons.org/licenses/by/4.0/
Instance segmentation for low-light imagery remains largely unexplored due to the challenges imposed by such conditions, for example, shot noise due to low photon count, color distortions, and reduced contrast. In this paper, we propose an end-to-end solution to address this challenging task. Our proposed method implements weighted non-local blocks (wNLB) in the feature extractor. This integration enables an inherent denoising process at the feature level. As a result, our method eliminates the need for aligned ground truth images during training, thus supporting training on real-world low-light datasets. We introduce additional learnable weights at each layer in order to enhance the network's adaptability to real-world noise characteristics, which affect different feature scales in different ways. Experimental results on several object detectors show that the proposed method outperforms the pretrained networks with an Average Precision (AP) improvement of at least +7.6, with the introduction of wNLB further enhancing AP by up to +1.3.
[ { "version": "v1", "created": "Wed, 28 Feb 2024 13:07:16 GMT" }, { "version": "v2", "created": "Tue, 10 Sep 2024 08:50:11 GMT" }, { "version": "v3", "created": "Thu, 2 Jan 2025 23:06:37 GMT" } ]
2025-04-11T00:00:00
[ [ "Lin", "Joanne", "" ], [ "Anantrasirichai", "Nantheera", "" ], [ "Bull", "David", "" ] ]
TITLE: Multi-Scale Denoising in the Feature Space for Low-Light Instance Segmentation ABSTRACT: Instance segmentation for low-light imagery remains largely unexplored due to the challenges imposed by such conditions, for example, shot noise due to low photon count, color distortions, and reduced contrast. In this paper, we propose an end-to-end solution to address this challenging task. Our proposed method implements weighted non-local blocks (wNLB) in the feature extractor. This integration enables an inherent denoising process at the feature level. As a result, our method eliminates the need for aligned ground truth images during training, thus supporting training on real-world low-light datasets. We introduce additional learnable weights at each layer in order to enhance the network's adaptability to real-world noise characteristics, which affect different feature scales in different ways. Experimental results on several object detectors show that the proposed method outperforms the pretrained networks with an Average Precision (AP) improvement of at least +7.6, with the introduction of wNLB further enhancing AP by up to +1.3.
no_new_dataset
0.949106
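A compact PyTorch sketch of how a "weighted non-local block" could look: a standard embedded-Gaussian non-local block with an extra learnable scalar on the residual path, one per layer. The exact parameterization of the paper's wNLB may differ.

```python
import torch
import torch.nn as nn

class WeightedNonLocalBlock(nn.Module):
    def __init__(self, channels, reduction=2):
        super().__init__()
        inter = channels // reduction
        self.theta = nn.Conv2d(channels, inter, kernel_size=1)
        self.phi = nn.Conv2d(channels, inter, kernel_size=1)
        self.g = nn.Conv2d(channels, inter, kernel_size=1)
        self.out = nn.Conv2d(inter, channels, kernel_size=1)
        self.weight = nn.Parameter(torch.zeros(1))  # learnable per-layer weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.theta(x).flatten(2).transpose(1, 2)   # (b, hw, c')
        k = self.phi(x).flatten(2)                     # (b, c', hw)
        v = self.g(x).flatten(2).transpose(1, 2)       # (b, hw, c')
        attn = torch.softmax(q @ k, dim=-1)            # pairwise feature affinities
        y = (attn @ v).transpose(1, 2).reshape(b, -1, h, w)
        return x + self.weight * self.out(y)           # weighted residual denoising

feats = torch.randn(2, 64, 32, 32)
print(WeightedNonLocalBlock(64)(feats).shape)          # torch.Size([2, 64, 32, 32])
```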
2404.05297
Sujin Han
Sujin Han, Jinseo Kim, Sung-Ju Lee, Insu Yun
Automated Attack Synthesis for Constant Product Market Makers
22 pages, 16 figures, 8 tables. Accepted at ACM ISSTA 2025
null
10.1145/3728872
null
cs.CR cs.SE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Decentralized Finance (DeFi) enables many novel applications that were impossible in traditional finance. However, it also introduces new types of vulnerabilities. An example of such vulnerabilities is a composability bug between token contracts and Decentralized Exchanges (DEXs) that follow the Constant Product Market Maker (CPMM) model. This type of bug, which we refer to as a CPMM composability bug, originates from issues in token contracts that make them incompatible with CPMMs, thereby endangering other tokens within the CPMM ecosystem. Since 2022, 23 exploits of this kind have resulted in a total loss of 2.2M USD. BlockSec, a smart contract auditing company, reported that 138 exploits of this kind occurred just in February 2023. In this paper, we propose CPMMX, a tool that automatically detects CPMM composability bugs across entire blockchains. To achieve such scalability, we first formalized CPMM composability bugs and found that these bugs can be induced by breaking two safety invariants. Based on this finding, we designed CPMMX around a two-step approach, called shallow-then-deep search. In more detail, it first uses shallow search to find transactions that break the invariants. Then, it uses deep search to refine these transactions, making them profitable for the attacker. We evaluated CPMMX against five baselines on two public datasets and one synthetic dataset. In our evaluation, CPMMX detected 1.5x to 2.5x more vulnerabilities than baseline methods. It also analyzed contracts significantly faster, achieving higher F1 scores than the baselines. Additionally, we applied CPMMX to all contracts on the latest blocks of the Ethereum and Binance networks and discovered 26 new exploits that can result in 15.7K USD profit in total.
[ { "version": "v1", "created": "Mon, 8 Apr 2024 08:35:15 GMT" }, { "version": "v2", "created": "Thu, 25 Apr 2024 01:02:53 GMT" }, { "version": "v3", "created": "Thu, 10 Apr 2025 06:19:13 GMT" } ]
2025-04-11T00:00:00
[ [ "Han", "Sujin", "" ], [ "Kim", "Jinseo", "" ], [ "Lee", "Sung-Ju", "" ], [ "Yun", "Insu", "" ] ]
TITLE: Automated Attack Synthesis for Constant Product Market Makers ABSTRACT: Decentralized Finance (DeFi) enables many novel applications that were impossible in traditional finance. However, it also introduces new types of vulnerabilities. An example of such vulnerabilities is a composability bug between token contracts and Decentralized Exchanges (DEXs) that follow the Constant Product Market Maker (CPMM) model. This type of bug, which we refer to as a CPMM composability bug, originates from issues in token contracts that make them incompatible with CPMMs, thereby endangering other tokens within the CPMM ecosystem. Since 2022, 23 exploits of this kind have resulted in a total loss of 2.2M USD. BlockSec, a smart contract auditing company, reported that 138 exploits of this kind occurred just in February 2023. In this paper, we propose CPMMX, a tool that automatically detects CPMM composability bugs across entire blockchains. To achieve such scalability, we first formalized CPMM composability bugs and found that these bugs can be induced by breaking two safety invariants. Based on this finding, we designed CPMMX around a two-step approach, called shallow-then-deep search. In more detail, it first uses shallow search to find transactions that break the invariants. Then, it uses deep search to refine these transactions, making them profitable for the attacker. We evaluated CPMMX against five baselines on two public datasets and one synthetic dataset. In our evaluation, CPMMX detected 1.5x to 2.5x more vulnerabilities than baseline methods. It also analyzed contracts significantly faster, achieving higher F1 scores than the baselines. Additionally, we applied CPMMX to all contracts on the latest blocks of the Ethereum and Binance networks and discovered 26 new exploits that can result in 15.7K USD profit in total.
no_new_dataset
0.934783
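For readers unfamiliar with CPMMs: the constant-product pricing rule that the paper's safety invariants are built around, as a toy model. This illustrates the x * y = k mechanics only, not CPMMX itself.

```python
def cpmm_swap(x_reserve, y_reserve, dx, fee=0.003):
    """Swap dx of token X into a constant-product pool, Uniswap-v2 style:
    dy is chosen so that (x + dx_eff) * (y - dy) = x * y."""
    dx_eff = dx * (1.0 - fee)
    dy = y_reserve * dx_eff / (x_reserve + dx_eff)
    return x_reserve + dx, y_reserve - dy

x, y = 1_000.0, 1_000.0
k_before = x * y
x, y = cpmm_swap(x, y, 100.0)
# A healthy pool's product never decreases (fees make it grow); a token
# contract that silently burns or rebases balances can violate this kind
# of assumption, which is the flavor of invariant an attack search checks.
assert x * y >= k_before
```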
2404.10702
Arka Ujjal Dey
Arka Ujjal Dey, Artemis Llabrés, Ernest Valveny and Dimosthenis Karatzas
Retrieval Augmented Verification for Zero-Shot Detection of Multimodal Disinformation
null
null
null
null
cs.MM
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The rise of disinformation on social media, especially through the strategic manipulation or repurposing of images, paired with provocative text, presents a complex challenge for traditional fact-checking methods. In this paper, we introduce a novel zero-shot approach to identify and interpret such multimodal disinformation, leveraging real-time evidence from credible sources. Our framework goes beyond simple true-or-false classifications by analyzing both the textual and visual components of social media claims in a structured, interpretable manner. By constructing a graph-based representation of entities and relationships within the claim, combined with pretrained visual features, our system automatically retrieves and matches external evidence to identify inconsistencies. Unlike traditional models dependent on labeled datasets, our method empowers users with transparency, illuminating exactly which aspects of the claim hold up to scrutiny and which do not. Our framework achieves competitive performance with state-of-the-art methods while offering enhanced explainability.
[ { "version": "v1", "created": "Tue, 16 Apr 2024 16:19:22 GMT" }, { "version": "v2", "created": "Mon, 29 Apr 2024 17:19:53 GMT" }, { "version": "v3", "created": "Wed, 9 Apr 2025 22:23:08 GMT" } ]
2025-04-11T00:00:00
[ [ "Dey", "Arka Ujjal", "" ], [ "Llabrés", "Artemis", "" ], [ "Valveny", "Ernest", "" ], [ "Karatzas", "Dimosthenis", "" ] ]
TITLE: Retrieval Augmented Verification for Zero-Shot Detection of Multimodal Disinformation ABSTRACT: The rise of disinformation on social media, especially through the strategic manipulation or repurposing of images, paired with provocative text, presents a complex challenge for traditional fact-checking methods. In this paper, we introduce a novel zero-shot approach to identify and interpret such multimodal disinformation, leveraging real-time evidence from credible sources. Our framework goes beyond simple true-or-false classifications by analyzing both the textual and visual components of social media claims in a structured, interpretable manner. By constructing a graph-based representation of entities and relationships within the claim, combined with pretrained visual features, our system automatically retrieves and matches external evidence to identify inconsistencies. Unlike traditional models dependent on labeled datasets, our method empowers users with transparency, illuminating exactly which aspects of the claim hold up to scrutiny and which do not. Our framework achieves competitive performance with state-of-the-art methods while offering enhanced explainability.
no_new_dataset
0.9463
2405.16924
Francesco Montagna
Francesco Montagna, Max Cairney-Leeming, Dhanya Sridhar, Francesco Locatello
Demystifying amortized causal discovery with transformers
null
null
null
null
cs.LG stat.ML
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Supervised learning approaches for causal discovery from observational data often achieve competitive performance despite seemingly avoiding explicit assumptions that traditional methods make for identifiability. In this work, we investigate CSIvA (Ke et al., 2023), a transformer-based model promising to train on synthetic data and transfer to real data. First, we bridge the gap with existing identifiability theory and show that constraints on the training data distribution implicitly define a prior on the test observations. Consistent with classical approaches, good performance is achieved when we have a good prior on the test data, and the underlying model is identifiable. At the same time, we find new trade-offs. Training on datasets generated from different classes of causal models, unambiguously identifiable in isolation, improves the test generalization. Performance is still guaranteed, as the ambiguous cases resulting from the mixture of identifiable causal models are unlikely to occur (which we formally prove). Overall, our study finds that amortized causal discovery still needs to obey identifiability theory, but it also differs from classical methods in how the assumptions are formulated, trading more reliance on assumptions on the noise type for fewer hypotheses on the mechanisms.
[ { "version": "v1", "created": "Mon, 27 May 2024 08:17:49 GMT" }, { "version": "v2", "created": "Wed, 9 Apr 2025 20:30:46 GMT" } ]
2025-04-11T00:00:00
[ [ "Montagna", "Francesco", "" ], [ "Cairney-Leeming", "Max", "" ], [ "Sridhar", "Dhanya", "" ], [ "Locatello", "Francesco", "" ] ]
TITLE: Demystifying amortized causal discovery with transformers ABSTRACT: Supervised learning approaches for causal discovery from observational data often achieve competitive performance despite seemingly avoiding explicit assumptions that traditional methods make for identifiability. In this work, we investigate CSIvA (Ke et al., 2023), a transformer-based model promising to train on synthetic data and transfer to real data. First, we bridge the gap with existing identifiability theory and show that constraints on the training data distribution implicitly define a prior on the test observations. Consistent with classical approaches, good performance is achieved when we have a good prior on the test data, and the underlying model is identifiable. At the same time, we find new trade-offs. Training on datasets generated from different classes of causal models, unambiguously identifiable in isolation, improves the test generalization. Performance is still guaranteed, as the ambiguous cases resulting from the mixture of identifiable causal models are unlikely to occur (which we formally prove). Overall, our study finds that amortized causal discovery still needs to obey identifiability theory, but it also differs from classical methods in how the assumptions are formulated, trading more reliance on assumptions on the noise type for fewer hypotheses on the mechanisms.
no_new_dataset
0.943138
2405.18560
Shubhang Bhatnagar
Shubhang Bhatnagar, Narendra Ahuja
Potential Field Based Deep Metric Learning
Accepted to CVPR 2025
null
null
null
cs.CV cs.AI cs.IR cs.LG eess.IV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Deep metric learning (DML) involves training a network to learn a semantically meaningful representation space. Many current approaches mine n-tuples of examples and model interactions within each tuple. We present a novel, compositional DML model that, instead of using tuples, represents the influence of each example (embedding) by a continuous potential field, and superposes the fields to obtain their combined global potential field. We use attractive/repulsive potential fields to represent interactions among embeddings from images of the same/different classes. Contrary to typical learning methods, where the mutual influence of samples is proportional to their distance, we enforce a reduction in such influence with distance, leading to a decaying field. We show that such decay helps improve performance on real-world datasets with large intra-class variations and label noise. Like other proxy-based methods, we also use proxies to succinctly represent sub-populations of examples. We evaluate our method on three standard DML benchmarks (Cars-196, CUB-200-2011, and SOP), where it outperforms state-of-the-art baselines.
[ { "version": "v1", "created": "Tue, 28 May 2024 20:10:06 GMT" }, { "version": "v2", "created": "Sun, 1 Dec 2024 05:22:22 GMT" }, { "version": "v3", "created": "Thu, 10 Apr 2025 04:49:39 GMT" } ]
2025-04-11T00:00:00
[ [ "Bhatnagar", "Shubhang", "" ], [ "Ahuja", "Narendra", "" ] ]
TITLE: Potential Field Based Deep Metric Learning ABSTRACT: Deep metric learning (DML) involves training a network to learn a semantically meaningful representation space. Many current approaches mine n-tuples of examples and model interactions within each tuple. We present a novel, compositional DML model that, instead of using tuples, represents the influence of each example (embedding) by a continuous potential field, and superposes the fields to obtain their combined global potential field. We use attractive/repulsive potential fields to represent interactions among embeddings from images of the same/different classes. Contrary to typical learning methods, where the mutual influence of samples is proportional to their distance, we enforce a reduction in such influence with distance, leading to a decaying field. We show that such decay helps improve performance on real-world datasets with large intra-class variations and label noise. Like other proxy-based methods, we also use proxies to succinctly represent sub-populations of examples. We evaluate our method on three standard DML benchmarks (Cars-196, CUB-200-2011, and SOP), where it outperforms state-of-the-art baselines.
no_new_dataset
0.945851
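A hedged PyTorch sketch of a distance-decaying attractive/repulsive potential as described above: same-class pairs sit in an attractive potential, different-class pairs in a repulsive one, and both influences fade exponentially with distance. The exponential form is an assumption; the paper's exact field may differ.

```python
import torch

def potential_field_loss(embeddings, labels, sigma=1.0):
    d = torch.cdist(embeddings, embeddings)          # pairwise distances
    same = (labels.unsqueeze(0) == labels.unsqueeze(1)).float()
    off_diag = 1.0 - torch.eye(len(labels), device=d.device)
    field = torch.exp(-d / sigma)                    # influence decays with distance
    # -field attracts same-class pairs, +field repels different-class pairs.
    potential = (1.0 - 2.0 * same) * field
    return (potential * off_diag).mean()

emb = torch.randn(8, 16, requires_grad=True)
labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
potential_field_loss(emb, labels).backward()         # differentiable end to end
```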
2406.07409
Juntao You
HanQin Cai, Longxiu Huang, Xiliang Lu, Juntao You
Accelerating Ill-conditioned Hankel Matrix Recovery via Structured Newton-like Descent
null
null
null
null
stat.ML cs.IT cs.LG eess.SP math.IT math.OC
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This paper studies the robust Hankel recovery problem, which simultaneously removes sparse outliers and fills in missing entries from partial observations. We propose a novel non-convex algorithm, coined Hankel Structured Newton-Like Descent (HSNLD), to tackle the robust Hankel recovery problem. HSNLD is highly efficient with linear convergence, and its convergence rate is independent of the condition number of the underlying Hankel matrix. The recovery guarantee is established under mild conditions. Numerical experiments on both synthetic and real datasets show the superior performance of HSNLD against state-of-the-art algorithms.
[ { "version": "v1", "created": "Tue, 11 Jun 2024 16:14:30 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 07:55:52 GMT" } ]
2025-04-11T00:00:00
[ [ "Cai", "HanQin", "" ], [ "Huang", "Longxiu", "" ], [ "Lu", "Xiliang", "" ], [ "You", "Juntao", "" ] ]
TITLE: Accelerating Ill-conditioned Hankel Matrix Recovery via Structured Newton-like Descent ABSTRACT: This paper studies the robust Hankel recovery problem, which simultaneously removes sparse outliers and fills in missing entries from partial observations. We propose a novel non-convex algorithm, coined Hankel Structured Newton-Like Descent (HSNLD), to tackle the robust Hankel recovery problem. HSNLD is highly efficient with linear convergence, and its convergence rate is independent of the condition number of the underlying Hankel matrix. The recovery guarantee is established under mild conditions. Numerical experiments on both synthetic and real datasets show the superior performance of HSNLD against state-of-the-art algorithms.
no_new_dataset
0.946051
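The structure the recovery problem above exploits, in a few lines: a signal that is a sum of r complex exponentials yields a rank-r Hankel matrix, so corrupted or missing samples can be recovered by enforcing low rank plus Hankel structure. numpy/scipy assumed.

```python
import numpy as np
from scipy.linalg import hankel

n = 64
t = np.arange(n)
# Two spectral modes -> the Hankel matrix of the signal has rank 2.
x = np.exp(2j * np.pi * 0.11 * t) + 0.7 * np.exp(2j * np.pi * 0.31 * t)

m = n // 2 + 1
H = hankel(x[:m], x[m - 1:])                         # H[i, j] = x[i + j]
print(H.shape, np.linalg.matrix_rank(H, tol=1e-8))   # (33, 32) 2
```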
2406.12123
Jingxi Xu
Jingxi Xu, Runsheng Wang, Siqi Shang, Ava Chen, Lauren Winterbottom, To-Liang Hsu, Wenxi Chen, Khondoker Ahmed, Pedro Leandro La Rotta, Xinyue Zhu, Dawn M. Nilsen, Joel Stein, Matei Ciocarlie
ChatEMG: Synthetic Data Generation to Control a Robotic Hand Orthosis for Stroke
8 pages; accepted to RA-L in November 2024; published at RA-L in February 2025
null
null
null
cs.RO cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Intent inferral on a hand orthosis for stroke patients is challenging due to the difficulty of data collection. Additionally, EMG signals exhibit significant variations across different conditions, sessions, and subjects, making it hard for classifiers to generalize. Traditional approaches require a large labeled dataset from the new condition, session, or subject to train intent classifiers; however, this data collection process is burdensome and time-consuming. In this paper, we propose ChatEMG, an autoregressive generative model that can generate synthetic EMG signals conditioned on prompts (i.e., a given sequence of EMG signals). ChatEMG enables us to collect only a small dataset from the new condition, session, or subject and expand it with synthetic samples conditioned on prompts from this new context. ChatEMG leverages a vast repository of previous data via generative training while still remaining context-specific via prompting. Our experiments show that these synthetic samples are classifier-agnostic and can improve intent inferral accuracy for different types of classifiers. We demonstrate that our complete approach can be integrated into a single patient session, including the use of the classifier for functional orthosis-assisted tasks. To the best of our knowledge, this is the first time an intent classifier trained partially on synthetic data has been deployed for functional control of an orthosis by a stroke survivor. Videos, source code, and additional information can be found at https://jxu.ai/chatemg.
[ { "version": "v1", "created": "Mon, 17 Jun 2024 22:04:44 GMT" }, { "version": "v2", "created": "Fri, 22 Nov 2024 23:15:26 GMT" }, { "version": "v3", "created": "Wed, 9 Apr 2025 21:49:04 GMT" } ]
2025-04-11T00:00:00
[ [ "Xu", "Jingxi", "" ], [ "Wang", "Runsheng", "" ], [ "Shang", "Siqi", "" ], [ "Chen", "Ava", "" ], [ "Winterbottom", "Lauren", "" ], [ "Hsu", "To-Liang", "" ], [ "Chen", "Wenxi", "" ], [ "Ahmed", "Khondoker", "" ], [ "La Rotta", "Pedro Leandro", "" ], [ "Zhu", "Xinyue", "" ], [ "Nilsen", "Dawn M.", "" ], [ "Stein", "Joel", "" ], [ "Ciocarlie", "Matei", "" ] ]
TITLE: ChatEMG: Synthetic Data Generation to Control a Robotic Hand Orthosis for Stroke ABSTRACT: Intent inferral on a hand orthosis for stroke patients is challenging due to the difficulty of data collection. Additionally, EMG signals exhibit significant variations across different conditions, sessions, and subjects, making it hard for classifiers to generalize. Traditional approaches require a large labeled dataset from the new condition, session, or subject to train intent classifiers; however, this data collection process is burdensome and time-consuming. In this paper, we propose ChatEMG, an autoregressive generative model that can generate synthetic EMG signals conditioned on prompts (i.e., a given sequence of EMG signals). ChatEMG enables us to collect only a small dataset from the new condition, session, or subject and expand it with synthetic samples conditioned on prompts from this new context. ChatEMG leverages a vast repository of previous data via generative training while still remaining context-specific via prompting. Our experiments show that these synthetic samples are classifier-agnostic and can improve intent inferral accuracy for different types of classifiers. We demonstrate that our complete approach can be integrated into a single patient session, including the use of the classifier for functional orthosis-assisted tasks. To the best of our knowledge, this is the first time an intent classifier trained partially on synthetic data has been deployed for functional control of an orthosis by a stroke survivor. Videos, source code, and additional information can be found at https://jxu.ai/chatemg.
no_new_dataset
0.948822
2406.19388
Moayed Haji-Ali
Moayed Haji-Ali, Willi Menapace, Aliaksandr Siarohin, Guha Balakrishnan, Sergey Tulyakov, Vicente Ordonez
Taming Data and Transformers for Scalable Audio Generation
Project Webpage: https://snap-research.github.io/GenAU/
null
null
null
cs.SD cs.CL cs.CV cs.MM eess.AS
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The scalability of ambient sound generators is hindered by data scarcity, insufficient caption quality, and limited scalability in model architecture. This work addresses these challenges by advancing both data and model scaling. First, we propose an efficient and scalable dataset collection pipeline tailored for ambient audio generation, resulting in AutoReCap-XL, the largest ambient audio-text dataset with over 47 million clips. To provide high-quality textual annotations, we propose AutoCap, a high-quality automatic audio captioning model. By adopting a Q-Former module and leveraging audio metadata, AutoCap substantially enhances caption quality, reaching a CIDEr score of $83.2$, a $3.2\%$ improvement over previous captioning models. Finally, we propose GenAu, a scalable transformer-based audio generation architecture that we scale up to 1.25B parameters. We demonstrate its benefits from data scaling with synthetic captions as well as model size scaling. When compared to baseline audio generators trained at similar size and data scale, GenAu obtains significant improvements of $4.7\%$ in FAD score, $11.1\%$ in IS, and $13.5\%$ in CLAP score. Our code, model checkpoints, and dataset are publicly available.
[ { "version": "v1", "created": "Thu, 27 Jun 2024 17:58:54 GMT" }, { "version": "v2", "created": "Thu, 24 Oct 2024 17:56:21 GMT" }, { "version": "v3", "created": "Thu, 10 Apr 2025 17:55:02 GMT" } ]
2025-04-11T00:00:00
[ [ "Haji-Ali", "Moayed", "" ], [ "Menapace", "Willi", "" ], [ "Siarohin", "Aliaksandr", "" ], [ "Balakrishnan", "Guha", "" ], [ "Tulyakov", "Sergey", "" ], [ "Ordonez", "Vicente", "" ] ]
TITLE: Taming Data and Transformers for Scalable Audio Generation ABSTRACT: The scalability of ambient sound generators is hindered by data scarcity, insufficient caption quality, and limited scalability in model architecture. This work addresses these challenges by advancing both data and model scaling. First, we propose an efficient and scalable dataset collection pipeline tailored for ambient audio generation, resulting in AutoReCap-XL, the largest ambient audio-text dataset with over 47 million clips. To provide high-quality textual annotations, we propose AutoCap, a high-quality automatic audio captioning model. By adopting a Q-Former module and leveraging audio metadata, AutoCap substantially enhances caption quality, reaching a CIDEr score of $83.2$, a $3.2\%$ improvement over previous captioning models. Finally, we propose GenAu, a scalable transformer-based audio generation architecture that we scale up to 1.25B parameters. We demonstrate its benefits from data scaling with synthetic captions as well as model size scaling. When compared to baseline audio generators trained at similar size and data scale, GenAu obtains significant improvements of $4.7\%$ in FAD score, $11.1\%$ in IS, and $13.5\%$ in CLAP score. Our code, model checkpoints, and dataset are publicly available.
no_new_dataset
0.843895
2407.15817
Baudouin Denis de Senneville PhD
Florian Robert, Alexia Calovoulos, Laurent Facq, Fanny Decoeur, Etienne Gontier, Christophe F. Grosset, Baudouin Denis de Senneville
Enhancing Cell Instance Segmentation in Scanning Electron Microscopy Images via a Deep Contour Closing Operator
13 pages, 8 figures, 2 tables
null
10.1016/j.compbiomed.2025.109972
null
eess.IV cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurately segmenting and individualizing cells in SEM images is a highly promising technique for elucidating tissue architecture in oncology. While current AI-based methods are effective, errors persist, necessitating time-consuming manual corrections, particularly in areas where the quality of cell contours in the image is poor and requires gap filling. This study presents a novel AI-driven approach for refining cell boundary delineation to improve instance-based cell segmentation in SEM images, also reducing the necessity for residual manual correction. A CNN, COp-Net, is introduced to address gaps in cell contours, effectively filling in regions with deficient or absent information. The network takes as input cell contour probability maps with potentially inadequate or missing information and outputs corrected cell contour delineations. The lack of training data was addressed by generating low-integrity probability maps using a tailored PDE. We showcase the efficacy of our approach in augmenting cell boundary precision using both private SEM images from PDX hepatoblastoma tissues and publicly accessible image datasets. The proposed cell contour closing operator exhibits a notable improvement on the tested datasets, achieving increases of close to 50% (private data) and 10% (public data) in the proportion of accurately delineated cells compared to state-of-the-art methods. Additionally, the need for manual corrections was significantly reduced, thereby facilitating the overall digitalization process. Our results demonstrate a notable enhancement in the accuracy of cell instance segmentation, particularly in highly challenging regions where image quality compromises the integrity of cell boundaries, necessitating gap filling. Therefore, our work should ultimately facilitate the study of tumour tissue bioarchitecture in the onconanotomy field.
[ { "version": "v1", "created": "Mon, 22 Jul 2024 17:32:06 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 09:20:30 GMT" } ]
2025-04-11T00:00:00
[ [ "Robert", "Florian", "" ], [ "Calovoulos", "Alexia", "" ], [ "Facq", "Laurent", "" ], [ "Decoeur", "Fanny", "" ], [ "Gontier", "Etienne", "" ], [ "Grosset", "Christophe F.", "" ], [ "de Senneville", "Baudouin Denis", "" ] ]
TITLE: Enhancing Cell Instance Segmentation in Scanning Electron Microscopy Images via a Deep Contour Closing Operator ABSTRACT: Accurately segmenting and individualizing cells in SEM images is a highly promising technique for elucidating tissue architecture in oncology. While current AI-based methods are effective, errors persist, necessitating time-consuming manual corrections, particularly in areas where the quality of cell contours in the image is poor and requires gap filling. This study presents a novel AI-driven approach for refining cell boundary delineation to improve instance-based cell segmentation in SEM images, also reducing the necessity for residual manual correction. A CNN, COp-Net, is introduced to address gaps in cell contours, effectively filling in regions with deficient or absent information. The network takes as input cell contour probability maps with potentially inadequate or missing information and outputs corrected cell contour delineations. The lack of training data was addressed by generating low-integrity probability maps using a tailored PDE. We showcase the efficacy of our approach in augmenting cell boundary precision using both private SEM images from PDX hepatoblastoma tissues and publicly accessible image datasets. The proposed cell contour closing operator exhibits a notable improvement on the tested datasets, achieving increases of close to 50% (private data) and 10% (public data) in the proportion of accurately delineated cells compared to state-of-the-art methods. Additionally, the need for manual corrections was significantly reduced, thereby facilitating the overall digitalization process. Our results demonstrate a notable enhancement in the accuracy of cell instance segmentation, particularly in highly challenging regions where image quality compromises the integrity of cell boundaries, necessitating gap filling. Therefore, our work should ultimately facilitate the study of tumour tissue bioarchitecture in the onconanotomy field.
no_new_dataset
0.959875
2409.10365
Melanie Roschewitz
M\'elanie Roschewitz, Fabio De Sousa Ribeiro, Tian Xia, Galvin Khara, Ben Glocker
Robust image representations with counterfactual contrastive learning
Code available at https://github.com/biomedia-mira/counterfactual-contrastive/
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Contrastive pretraining can substantially increase model generalisation and downstream performance. However, the quality of the learned representations is highly dependent on the data augmentation strategy applied to generate positive pairs. Positive contrastive pairs should preserve semantic meaning while discarding unwanted variations related to the data acquisition domain. Traditional contrastive pipelines attempt to simulate domain shifts through pre-defined generic image transformations. However, these do not always mimic realistic and relevant domain variations for medical imaging, such as scanner differences. To tackle this issue, we herein introduce counterfactual contrastive learning, a novel framework leveraging recent advances in causal image synthesis to create contrastive positive pairs that faithfully capture relevant domain variations. Our method, evaluated across five datasets encompassing both chest radiography and mammography data, for two established contrastive objectives (SimCLR and DINO-v2), outperforms standard contrastive learning in terms of robustness to acquisition shift. Notably, counterfactual contrastive learning achieves superior downstream performance on both in-distribution and external datasets, especially for images acquired with scanners under-represented in the training set. Further experiments show that the proposed framework extends beyond acquisition shifts, with models trained with counterfactual contrastive learning reducing subgroup disparities across biological sex.
[ { "version": "v1", "created": "Mon, 16 Sep 2024 15:11:00 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 16:19:20 GMT" } ]
2025-04-11T00:00:00
[ [ "Roschewitz", "Mélanie", "" ], [ "Ribeiro", "Fabio De Sousa", "" ], [ "Xia", "Tian", "" ], [ "Khara", "Galvin", "" ], [ "Glocker", "Ben", "" ] ]
TITLE: Robust image representations with counterfactual contrastive learning ABSTRACT: Contrastive pretraining can substantially increase model generalisation and downstream performance. However, the quality of the learned representations is highly dependent on the data augmentation strategy applied to generate positive pairs. Positive contrastive pairs should preserve semantic meaning while discarding unwanted variations related to the data acquisition domain. Traditional contrastive pipelines attempt to simulate domain shifts through pre-defined generic image transformations. However, these do not always mimic realistic and relevant domain variations for medical imaging, such as scanner differences. To tackle this issue, we herein introduce counterfactual contrastive learning, a novel framework leveraging recent advances in causal image synthesis to create contrastive positive pairs that faithfully capture relevant domain variations. Our method, evaluated across five datasets encompassing both chest radiography and mammography data, for two established contrastive objectives (SimCLR and DINO-v2), outperforms standard contrastive learning in terms of robustness to acquisition shift. Notably, counterfactual contrastive learning achieves superior downstream performance on both in-distribution and external datasets, especially for images acquired with scanners under-represented in the training set. Further experiments show that the proposed framework extends beyond acquisition shifts, with models trained with counterfactual contrastive learning reducing subgroup disparities across biological sex.
no_new_dataset
0.949529
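For the counterfactual contrastive record above, the core change to a SimCLR-style objective is that each positive pair couples an image with its causally re-synthesised counterpart rather than a generic augmentation. A minimal InfoNCE sketch under that assumption:

```python
import torch
import torch.nn.functional as F

def counterfactual_info_nce(z_real, z_cf, temperature=0.1):
    """InfoNCE loss where each positive pair is (image, its counterfactual).

    z_real: (N, D) embeddings of real scans; z_cf: (N, D) embeddings of the
    same scans re-synthesised under a different acquisition domain (e.g. a
    different scanner). Row i of both tensors must refer to the same subject;
    all off-diagonal pairs serve as negatives.
    """
    z_real = F.normalize(z_real, dim=1)
    z_cf = F.normalize(z_cf, dim=1)
    logits = z_real @ z_cf.t() / temperature              # (N, N) similarities
    targets = torch.arange(z_real.size(0), device=z_real.device)
    return F.cross_entropy(logits, targets)               # positives on diagonal
```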
2409.19075
Jie He
Yu Fu, Jie He, Yifan Yang, Qun Liu, Deyi Xiong
Meta-RTL: Reinforcement-Based Meta-Transfer Learning for Low-Resource Commonsense Reasoning
There is a text error in table 6
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Meta learning has been widely used to exploit rich-resource source tasks to improve the performance of low-resource target tasks. Unfortunately, most existing meta learning approaches treat different source tasks equally, ignoring the relatedness of source tasks to the target task in knowledge transfer. To mitigate this issue, we propose a reinforcement-based multi-source meta-transfer learning framework (Meta-RTL) for low-resource commonsense reasoning. In this framework, we present a reinforcement-based approach to dynamically estimating source task weights that measure the contribution of the corresponding tasks to the target task in meta-transfer learning. The differences between the general loss of the meta model and task-specific losses of source-specific temporal meta models on sampled target data are fed into the policy network of the reinforcement learning module as rewards. The policy network is built upon LSTMs that capture long-term dependencies on source task weight estimation across meta learning iterations. We evaluate the proposed Meta-RTL using both BERT and ALBERT as the backbone of the meta model on three commonsense reasoning benchmark datasets. Experimental results demonstrate that Meta-RTL substantially outperforms strong baselines and previous task selection strategies and achieves larger improvements in extremely low-resource settings.
[ { "version": "v1", "created": "Fri, 27 Sep 2024 18:22:22 GMT" }, { "version": "v2", "created": "Tue, 11 Mar 2025 09:31:15 GMT" }, { "version": "v3", "created": "Wed, 9 Apr 2025 21:49:23 GMT" } ]
2025-04-11T00:00:00
[ [ "Fu", "Yu", "" ], [ "He", "Jie", "" ], [ "Yang", "Yifan", "" ], [ "Liu", "Qun", "" ], [ "Xiong", "Deyi", "" ] ]
TITLE: Meta-RTL: Reinforcement-Based Meta-Transfer Learning for Low-Resource Commonsense Reasoning ABSTRACT: Meta learning has been widely used to exploit rich-resource source tasks to improve the performance of low-resource target tasks. Unfortunately, most existing meta learning approaches treat different source tasks equally, ignoring the relatedness of source tasks to the target task in knowledge transfer. To mitigate this issue, we propose a reinforcement-based multi-source meta-transfer learning framework (Meta-RTL) for low-resource commonsense reasoning. In this framework, we present a reinforcement-based approach to dynamically estimating source task weights that measure the contribution of the corresponding tasks to the target task in meta-transfer learning. The differences between the general loss of the meta model and task-specific losses of source-specific temporal meta models on sampled target data are fed into the policy network of the reinforcement learning module as rewards. The policy network is built upon LSTMs that capture long-term dependencies on source task weight estimation across meta learning iterations. We evaluate the proposed Meta-RTL using both BERT and ALBERT as the backbone of the meta model on three commonsense reasoning benchmark datasets. Experimental results demonstrate that Meta-RTL substantially outperforms strong baselines and previous task selection strategies and achieves larger improvements in extremely low-resource settings.
no_new_dataset
0.944689
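The reward signal in the Meta-RTL record above is the gap between the general meta model's loss and each source-specific temporal meta model's loss on the same sampled target data. A small sketch of that computation, with the exact reward shaping left as an assumption:

```python
def source_task_rewards(meta_loss, source_losses):
    """Per-source rewards for the LSTM policy network.

    meta_loss: scalar loss of the general meta model on sampled target data.
    source_losses: dict task -> loss of that task's temporal meta model on
    the same sample. A source whose temporal model beats the general model
    earns a positive reward, nudging its estimated weight upward. Any
    normalisation or baseline subtraction used in the paper is omitted here.
    """
    return {task: meta_loss - loss for task, loss in source_losses.items()}
```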
2410.01595
Amin Karimi Monsefi
Pouyan Navard, Amin Karimi Monsefi, Mengxi Zhou, Wei-Lun Chao, Alper Yilmaz, Rajiv Ramnath
KnobGen: Controlling the Sophistication of Artwork in Sketch-Based Diffusion Models
Accepted to CVPR 2025 Workshop on CVEU
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by/4.0/
Recent advances in diffusion models have significantly improved text-to-image (T2I) generation, but they often struggle to balance fine-grained precision with high-level control. Methods like ControlNet and T2I-Adapter excel at following sketches by seasoned artists but tend to be overly rigid, replicating unintentional flaws in sketches from novice users. Meanwhile, coarse-grained methods, such as sketch-based abstraction frameworks, offer more accessible input handling but lack the precise control needed for detailed, professional use. To address these limitations, we propose KnobGen, a dual-pathway framework that democratizes sketch-based image generation by seamlessly adapting to varying levels of sketch complexity and user skill. KnobGen uses a Coarse-Grained Controller (CGC) module for high-level semantics and a Fine-Grained Controller (FGC) module for detailed refinement. The relative strength of these two modules can be adjusted through our knob inference mechanism to align with the user's specific needs. These mechanisms ensure that KnobGen can flexibly generate images from both novice sketches and those drawn by seasoned artists. This maintains control over the final output while preserving the natural appearance of the image, as evidenced on the MultiGen-20M dataset and a newly collected sketch dataset.
[ { "version": "v1", "created": "Wed, 2 Oct 2024 14:33:12 GMT" }, { "version": "v2", "created": "Fri, 11 Oct 2024 12:47:48 GMT" }, { "version": "v3", "created": "Wed, 9 Apr 2025 22:27:10 GMT" } ]
2025-04-11T00:00:00
[ [ "Navard", "Pouyan", "" ], [ "Monsefi", "Amin Karimi", "" ], [ "Zhou", "Mengxi", "" ], [ "Chao", "Wei-Lun", "" ], [ "Yilmaz", "Alper", "" ], [ "Ramnath", "Rajiv", "" ] ]
TITLE: KnobGen: Controlling the Sophistication of Artwork in Sketch-Based Diffusion Models ABSTRACT: Recent advances in diffusion models have significantly improved text-to-image (T2I) generation, but they often struggle to balance fine-grained precision with high-level control. Methods like ControlNet and T2I-Adapter excel at following sketches by seasoned artists but tend to be overly rigid, replicating unintentional flaws in sketches from novice users. Meanwhile, coarse-grained methods, such as sketch-based abstraction frameworks, offer more accessible input handling but lack the precise control needed for detailed, professional use. To address these limitations, we propose KnobGen, a dual-pathway framework that democratizes sketch-based image generation by seamlessly adapting to varying levels of sketch complexity and user skill. KnobGen uses a Coarse-Grained Controller (CGC) module for high-level semantics and a Fine-Grained Controller (FGC) module for detailed refinement. The relative strength of these two modules can be adjusted through our knob inference mechanism to align with the user's specific needs. These mechanisms ensure that KnobGen can flexibly generate images from both novice sketches and those drawn by seasoned artists. This maintains control over the final output while preserving the natural appearance of the image, as evidenced on the MultiGen-20M dataset and a newly collected sketch dataset.
new_dataset
0.967564
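The KnobGen record above adjusts the relative strength of the CGC and FGC modules through a knob. A minimal sketch of one plausible blending mechanism is below; linear interpolation of controller features is an assumption, and `cgc`/`fgc` are hypothetical callables standing in for the two modules.

```python
import torch

def blend_controllers(f_coarse, f_fine, knob):
    """Blend coarse- and fine-grained controller features with a scalar knob.

    knob in [0, 1]: values near 0 trust only high-level semantics (novice
    sketches), values near 1 trust detailed refinement (seasoned artists).
    """
    assert 0.0 <= knob <= 1.0
    return (1.0 - knob) * f_coarse + knob * f_fine

# e.g. features = blend_controllers(cgc(sketch), fgc(sketch), knob=0.3)
```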
2410.03749
Larry Liebovitch
K. Lian (1), L. S. Liebovitch (1), M. Wild (1), H. West (1), P. T. Coleman (1), F. Chen (2), E. Kimani (2), K. Sieck (2) ((1) Columbia University, (2) Toyota Research Institute)
Machine Learning Classification of Peaceful Countries: A Comparative Analysis and Dataset Optimization
5 pages, 5 figures
2025 59th Annual Conference on Information Sciences and Systems (CISS), Baltimore, MD, USA, 2025, pp. 1-5
10.1109/CISS64860.2025.10944706
null
cs.CL cs.LG
http://creativecommons.org/licenses/by/4.0/
This paper presents a machine learning approach to classify countries as peaceful or non-peaceful using linguistic patterns extracted from global media articles. We employ vector embeddings and cosine similarity to develop a supervised classification model that effectively identifies peaceful countries. Additionally, we explore the impact of dataset size on model performance, investigating how shrinking the dataset influences classification accuracy. Our results highlight the challenges and opportunities associated with using large-scale text data for peace studies.
[ { "version": "v1", "created": "Tue, 1 Oct 2024 19:28:03 GMT" } ]
2025-04-11T00:00:00
[ [ "Lian", "K.", "" ], [ "Liebovitch", "L. S.", "" ], [ "Wild", "M.", "" ], [ "West", "H.", "" ], [ "Coleman", "P. T.", "" ], [ "Chen", "F.", "" ], [ "Kimani", "E.", "" ], [ "Sieck", "K.", "" ] ]
TITLE: Machine Learning Classification of Peaceful Countries: A Comparative Analysis and Dataset Optimization ABSTRACT: This paper presents a machine learning approach to classify countries as peaceful or non-peaceful using linguistic patterns extracted from global media articles. We employ vector embeddings and cosine similarity to develop a supervised classification model that effectively identifies peaceful countries. Additionally, we explore the impact of dataset size on model performance, investigating how shrinking the dataset influences classification accuracy. Our results highlight the challenges and opportunities associated with using large-scale text data for peace studies.
no_new_dataset
0.950869
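The record above classifies countries from media-article embeddings via cosine similarity. One plausible reading of that setup is a nearest-centroid classifier, sketched here; the centroid scheme and binary labels are assumptions about the described pipeline.

```python
import numpy as np

def classify_by_centroid(doc_vecs, labels, query_vec):
    """Nearest-centroid classification with cosine similarity.

    doc_vecs: (N, D) embeddings of media articles; labels: length-N numpy
    array of 0/1 (non-peaceful/peaceful); query_vec: (D,) embedding of the
    text aggregated for the country to classify.
    """
    def unit(v):
        return v / np.linalg.norm(v, axis=-1, keepdims=True)

    centroids = np.stack([doc_vecs[labels == c].mean(axis=0) for c in (0, 1)])
    sims = unit(centroids) @ unit(query_vec)   # cosine similarity to each class
    return int(np.argmax(sims)), sims
```

Shrinking the dataset, as studied above, amounts to subsampling `doc_vecs` before computing the centroids and re-measuring accuracy.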
2410.08206
Idil Esen Zulfikar
Ilya Fradlin, Idil Esen Zulfikar, Kadir Yilmaz, Theodora Kontogianni, Bastian Leibe
Interactive4D: Interactive 4D LiDAR Segmentation
Accepted to ICRA2025!
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Interactive segmentation has an important role in facilitating the annotation process of future LiDAR datasets. Existing approaches sequentially segment individual objects at each LiDAR scan, repeating the process throughout the entire sequence, which is redundant and ineffective. In this work, we propose interactive 4D segmentation, a new paradigm that allows segmenting multiple objects on multiple LiDAR scans simultaneously, and Interactive4D, the first interactive 4D segmentation model that segments multiple objects on superimposed consecutive LiDAR scans in a single iteration by utilizing the sequential nature of LiDAR data. While performing interactive segmentation, our model leverages the entire space-time volume, leading to more efficient segmentation. Operating on the 4D volume, it directly provides consistent instance IDs over time and also simplifies tracking annotations. Moreover, we show that click simulations are crucial for successful model training on LiDAR point clouds. To this end, we design a click simulation strategy that is better suited for the characteristics of LiDAR data. To demonstrate its accuracy and effectiveness, we evaluate Interactive4D on multiple LiDAR datasets, where Interactive4D achieves a new state-of-the-art by a large margin. We publicly release the code and models at https://vision.rwth-aachen.de/Interactive4D.
[ { "version": "v1", "created": "Thu, 10 Oct 2024 17:59:45 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 17:59:53 GMT" } ]
2025-04-11T00:00:00
[ [ "Fradlin", "Ilya", "" ], [ "Zulfikar", "Idil Esen", "" ], [ "Yilmaz", "Kadir", "" ], [ "Kontogianni", "Theodora", "" ], [ "Leibe", "Bastian", "" ] ]
TITLE: Interactive4D: Interactive 4D LiDAR Segmentation ABSTRACT: Interactive segmentation has an important role in facilitating the annotation process of future LiDAR datasets. Existing approaches sequentially segment individual objects at each LiDAR scan, repeating the process throughout the entire sequence, which is redundant and ineffective. In this work, we propose interactive 4D segmentation, a new paradigm that allows segmenting multiple objects on multiple LiDAR scans simultaneously, and Interactive4D, the first interactive 4D segmentation model that segments multiple objects on superimposed consecutive LiDAR scans in a single iteration by utilizing the sequential nature of LiDAR data. While performing interactive segmentation, our model leverages the entire space-time volume, leading to more efficient segmentation. Operating on the 4D volume, it directly provides consistent instance IDs over time and also simplifies tracking annotations. Moreover, we show that click simulations are crucial for successful model training on LiDAR point clouds. To this end, we design a click simulation strategy that is better suited for the characteristics of LiDAR data. To demonstrate its accuracy and effectiveness, we evaluate Interactive4D on multiple LiDAR datasets, where Interactive4D achieves a new state-of-the-art by a large margin. We publicly release the code and models at https://vision.rwth-aachen.de/Interactive4D.
no_new_dataset
0.953622
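The Interactive4D record above stresses that click simulation is crucial for training on LiDAR point clouds. A rough sketch of one common simulation heuristic, adapted to unordered points, is given below; it is stated as an assumption, not the paper's exact LiDAR-specific strategy, and uses O(N^2) pairwise distances for brevity where a KD-tree would be used in practice.

```python
import numpy as np

def simulate_click(points, pred, gt):
    """Place the next simulated click on the worst-labelled region.

    points: (N, 3) coordinates of the superimposed scans; pred/gt: (N,)
    instance ids. The click lands on the mislabelled point farthest from
    any correctly labelled point.
    """
    wrong = np.flatnonzero(pred != gt)
    if wrong.size == 0:
        return None                                  # segmentation already correct
    right = np.flatnonzero(pred == gt)
    if right.size == 0:
        pick = wrong[0]
    else:
        d = np.linalg.norm(points[wrong, None] - points[None, right], axis=-1)
        pick = wrong[np.argmax(d.min(axis=1))]       # farthest-from-correct point
    return points[pick], gt[pick]                    # click position and its label
```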
2410.13453
Ant Duru
Ant Duru, Alptekin Temizel
Adaptive Augmentation Policy Optimization with LLM Feedback
15 pages, 4 tables, 3 figures submitted for consideration to 2025 Medical Image Understanding and Analysis Conference (MIUA)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Data augmentation is a critical component of deep learning pipelines, enhancing model generalization by increasing dataset diversity. Traditional augmentation strategies rely on manually designed transformations, stochastic sampling, or automated search-based approaches. Although automated methods improve performance, they often require extensive computational resources and are tailored to specific datasets. In this work, we propose a Large Language Model (LLM)-guided augmentation optimization strategy that refines augmentation policies based on model performance feedback. We introduce two approaches: (1) LLM-Guided Augmentation Policy Optimization, where augmentation policies are selected by an LLM prior to training and iteratively refined across multiple training cycles, and (2) Adaptive LLM-Guided Augmentation Policy Optimization, where policies adapt in real-time based on performance metrics. This in-training approach eliminates the need for full model retraining before receiving LLM feedback, thereby reducing computational costs while improving performance. Our methodology employs an LLM to dynamically select augmentation transformations based on dataset characteristics, model architecture, and prior training outcomes. Unlike traditional search-based methods, our approach leverages the contextual knowledge of LLMs, particularly in specialized domains like medical imaging, to recommend augmentation strategies tailored to domain-specific data. We evaluate our approach on multiple domain-specific image classification datasets where augmentation is key to model robustness. Results show that LLM-guided augmentation optimization outperforms traditional methods, improving model accuracy. These findings highlight the potential of LLMs in automating and adapting deep learning training workflows.
[ { "version": "v1", "created": "Thu, 17 Oct 2024 11:26:10 GMT" }, { "version": "v2", "created": "Tue, 8 Apr 2025 11:05:01 GMT" }, { "version": "v3", "created": "Wed, 9 Apr 2025 18:00:00 GMT" } ]
2025-04-11T00:00:00
[ [ "Duru", "Ant", "" ], [ "Temizel", "Alptekin", "" ] ]
TITLE: Adaptive Augmentation Policy Optimization with LLM Feedback ABSTRACT: Data augmentation is a critical component of deep learning pipelines, enhancing model generalization by increasing dataset diversity. Traditional augmentation strategies rely on manually designed transformations, stochastic sampling, or automated search-based approaches. Although automated methods improve performance, they often require extensive computational resources and are tailored to specific datasets. In this work, we propose a Large Language Model (LLM)-guided augmentation optimization strategy that refines augmentation policies based on model performance feedback. We introduce two approaches: (1) LLM-Guided Augmentation Policy Optimization, where augmentation policies are selected by an LLM prior to training and iteratively refined across multiple training cycles, and (2) Adaptive LLM-Guided Augmentation Policy Optimization, where policies adapt in real-time based on performance metrics. This in-training approach eliminates the need for full model retraining before receiving LLM feedback, thereby reducing computational costs while improving performance. Our methodology employs an LLM to dynamically select augmentation transformations based on dataset characteristics, model architecture, and prior training outcomes. Unlike traditional search-based methods, our approach leverages the contextual knowledge of LLMs, particularly in specialized domains like medical imaging, to recommend augmentation strategies tailored to domain-specific data. We evaluate our approach on multiple domain-specific image classification datasets where augmentation is key to model robustness. Results show that LLM-guided augmentation optimization outperforms traditional methods, improving model accuracy. These findings highlight the potential of LLMs in automating and adapting deep learning training workflows.
no_new_dataset
0.949295
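The adaptive variant in the record above refines augmentation policies in-flight, without full retraining between LLM proposals. A minimal sketch of that feedback loop follows; `llm_propose`, `train_epoch`, and `evaluate` are hypothetical callables standing in for the prompted LLM, one training cycle, and the validation metric.

```python
def adaptive_augmentation_loop(llm_propose, train_epoch, evaluate, n_rounds=5):
    """In-training augmentation refinement driven by LLM feedback.

    llm_propose(history) -> augmentation policy, where history is a list of
    (policy, score) pairs the LLM conditions on; train_epoch(policy) trains
    one cycle under that policy; evaluate() returns a validation metric.
    """
    history, policy = [], llm_propose([])
    for _ in range(n_rounds):
        train_epoch(policy)
        score = evaluate()
        history.append((policy, score))    # performance feedback for the LLM
        policy = llm_propose(history)      # LLM refines the policy in-flight
    return max(history, key=lambda h: h[1])[0]
```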
2410.22318
Can Chen
Can Chen, Jun-Kun Wang
Online Detecting LLM-Generated Texts via Sequential Hypothesis Testing by Betting
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Developing algorithms to differentiate between machine-generated texts and human-written texts has garnered substantial attention in recent years. Existing methods in this direction typically concern an offline setting where a dataset containing a mix of real and machine-generated texts is given upfront, and the task is to determine whether each sample in the dataset is from a large language model (LLM) or a human. However, in many practical scenarios, sources such as news websites, social media accounts, or other forums publish content in a streaming fashion. Therefore, in this online scenario, how to quickly and accurately determine whether the source is an LLM with strong statistical guarantees is crucial for these media or platforms to function effectively and prevent the spread of misinformation and other potential misuse of LLMs. To tackle the problem of online detection, we develop an algorithm based on the techniques of sequential hypothesis testing by betting that not only builds upon and complements existing offline detection techniques but also enjoys statistical guarantees, which include a controlled false positive rate and the expected time to correctly identify a source as an LLM. Experiments were conducted to demonstrate the effectiveness of our method.
[ { "version": "v1", "created": "Tue, 29 Oct 2024 17:55:14 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 00:51:20 GMT" } ]
2025-04-11T00:00:00
[ [ "Chen", "Can", "" ], [ "Wang", "Jun-Kun", "" ] ]
TITLE: Online Detecting LLM-Generated Texts via Sequential Hypothesis Testing by Betting ABSTRACT: Developing algorithms to differentiate between machine-generated texts and human-written texts has garnered substantial attention in recent years. Existing methods in this direction typically concern an offline setting where a dataset containing a mix of real and machine-generated texts is given upfront, and the task is to determine whether each sample in the dataset is from a large language model (LLM) or a human. However, in many practical scenarios, sources such as news websites, social media accounts, or other forums publish content in a streaming fashion. Therefore, in this online scenario, how to quickly and accurately determine whether the source is an LLM with strong statistical guarantees is crucial for these media or platforms to function effectively and prevent the spread of misinformation and other potential misuse of LLMs. To tackle the problem of online detection, we develop an algorithm based on the techniques of sequential hypothesis testing by betting that not only builds upon and complements existing offline detection techniques but also enjoys statistical guarantees, which include a controlled false positive rate and the expected time to correctly identify a source as an LLM. Experiments were conducted to demonstrate the effectiveness of our method.
no_new_dataset
0.940626
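Testing by betting, as in the record above, tracks a wealth process that grows only when incoming evidence favours the LLM hypothesis; stopping when wealth exceeds 1/alpha controls the false positive rate by Ville's inequality. A minimal sketch under the assumption that an offline detector yields per-text scores calibrated to have mean 0.5 under the human-written null:

```python
def sequential_llm_test(scores, alpha=0.05, bet=0.5):
    """Anytime-valid test that a text stream comes from an LLM, by betting.

    scores: iterable of per-text detector scores in [0, 1], assumed to have
    mean 0.5 under the human-written null. Each round wagers a fraction of
    current wealth on "score above 0.5"; the payoff is fair under the null,
    so wealth is a nonnegative martingale and exceeds 1/alpha with
    probability at most alpha. The concrete betting strategy is illustrative.
    """
    wealth = 1.0
    for t, s in enumerate(scores, start=1):
        wealth *= 1.0 + bet * (2.0 * s - 1.0)  # fair payoff under the null
        if wealth >= 1.0 / alpha:
            return t, wealth                   # declare "LLM" at time t
    return None, wealth                        # never rejected
```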
2410.23780
Maixuan Xue
Xinyuan Chang, Maixuan Xue, Xinran Liu, Zheng Pan, Xing Wei
Driving by the Rules: A Benchmark for Integrating Traffic Sign Regulations into Vectorized HD Map
26 pages, 16 figures. Accepted as a Highlight at CVPR 2025. Project page: https://miv-xjtu.github.io/MapDR/
null
null
null
cs.CV cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Ensuring adherence to traffic sign regulations is essential for both human and autonomous vehicle navigation. While current online mapping solutions often prioritize the construction of the geometric and connectivity layers of HD maps, they overlook the construction of the traffic regulation layer within those maps. Addressing this gap, we introduce MapDR, a novel dataset designed for the extraction of Driving Rules from traffic signs and their association with vectorized, locally perceived HD Maps. MapDR features over $10,000$ annotated video clips that capture the intricate correlation between traffic sign regulations and lanes. Built upon this benchmark and the newly defined task of integrating traffic regulations into online HD maps, we provide modular and end-to-end solutions: VLE-MEE and RuleVLM, offering a strong baseline for advancing autonomous driving technology. It fills a critical gap in the integration of traffic sign rules, contributing to the development of reliable autonomous driving systems. Code is available at https://github.com/MIV-XJTU/MapDR.
[ { "version": "v1", "created": "Thu, 31 Oct 2024 09:53:21 GMT" }, { "version": "v2", "created": "Mon, 6 Jan 2025 12:07:55 GMT" }, { "version": "v3", "created": "Thu, 10 Apr 2025 11:13:00 GMT" } ]
2025-04-11T00:00:00
[ [ "Chang", "Xinyuan", "" ], [ "Xue", "Maixuan", "" ], [ "Liu", "Xinran", "" ], [ "Pan", "Zheng", "" ], [ "Wei", "Xing", "" ] ]
TITLE: Driving by the Rules: A Benchmark for Integrating Traffic Sign Regulations into Vectorized HD Map ABSTRACT: Ensuring adherence to traffic sign regulations is essential for both human and autonomous vehicle navigation. While current online mapping solutions often prioritize the construction of the geometric and connectivity layers of HD maps, they overlook the construction of the traffic regulation layer within those maps. Addressing this gap, we introduce MapDR, a novel dataset designed for the extraction of Driving Rules from traffic signs and their association with vectorized, locally perceived HD Maps. MapDR features over $10,000$ annotated video clips that capture the intricate correlation between traffic sign regulations and lanes. Built upon this benchmark and the newly defined task of integrating traffic regulations into online HD maps, we provide modular and end-to-end solutions: VLE-MEE and RuleVLM, offering a strong baseline for advancing autonomous driving technology. It fills a critical gap in the integration of traffic sign rules, contributing to the development of reliable autonomous driving systems. Code is available at https://github.com/MIV-XJTU/MapDR.
new_dataset
0.958731
2411.08033
Yushi Lan
Yushi Lan, Shangchen Zhou, Zhaoyang Lyu, Fangzhou Hong, Shuai Yang, Bo Dai, Xingang Pan, Chen Change Loy
GaussianAnything: Interactive Point Cloud Flow Matching For 3D Object Generation
ICLR 2025 project page: https://nirvanalan.github.io/projects/GA/
null
null
null
cs.CV cs.AI cs.GR
http://creativecommons.org/licenses/by-nc-nd/4.0/
While 3D content generation has advanced significantly, existing methods still face challenges with input formats, latent space design, and output representations. This paper introduces a novel 3D generation framework that addresses these challenges, offering scalable, high-quality 3D generation with an interactive Point Cloud-structured Latent space. Our framework employs a Variational Autoencoder (VAE) with multi-view posed RGB-D(epth)-N(ormal) renderings as input, using a unique latent space design that preserves 3D shape information, and incorporates a cascaded latent flow-based model for improved shape-texture disentanglement. The proposed method, GaussianAnything, supports multi-modal conditional 3D generation, allowing for point cloud, caption, and single image inputs. Notably, the newly proposed latent space naturally enables geometry-texture disentanglement, thus allowing 3D-aware editing. Experimental results demonstrate the effectiveness of our approach on multiple datasets, outperforming existing native 3D methods in both text- and image-conditioned 3D generation.
[ { "version": "v1", "created": "Tue, 12 Nov 2024 18:59:32 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 12:24:52 GMT" } ]
2025-04-11T00:00:00
[ [ "Lan", "Yushi", "" ], [ "Zhou", "Shangchen", "" ], [ "Lyu", "Zhaoyang", "" ], [ "Hong", "Fangzhou", "" ], [ "Yang", "Shuai", "" ], [ "Dai", "Bo", "" ], [ "Pan", "Xingang", "" ], [ "Loy", "Chen Change", "" ] ]
TITLE: GaussianAnything: Interactive Point Cloud Flow Matching For 3D Object Generation ABSTRACT: While 3D content generation has advanced significantly, existing methods still face challenges with input formats, latent space design, and output representations. This paper introduces a novel 3D generation framework that addresses these challenges, offering scalable, high-quality 3D generation with an interactive Point Cloud-structured Latent space. Our framework employs a Variational Autoencoder (VAE) with multi-view posed RGB-D(epth)-N(ormal) renderings as input, using a unique latent space design that preserves 3D shape information, and incorporates a cascaded latent flow-based model for improved shape-texture disentanglement. The proposed method, GaussianAnything, supports multi-modal conditional 3D generation, allowing for point cloud, caption, and single image inputs. Notably, the newly proposed latent space naturally enables geometry-texture disentanglement, thus allowing 3D-aware editing. Experimental results demonstrate the effectiveness of our approach on multiple datasets, outperforming existing native 3D methods in both text- and image-conditioned 3D generation.
no_new_dataset
0.952662
2411.08753
Sagnik Majumder
Sagnik Majumder, Tushar Nagarajan, Ziad Al-Halah, Reina Pradhan, Kristen Grauman
Which Viewpoint Shows it Best? Language for Weakly Supervising View Selection in Multi-view Instructional Videos
Accepted to CVPR 2025 (Highlight)
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Given a multi-view video, which viewpoint is most informative for a human observer? Existing methods rely on heuristics or expensive "best-view" supervision to answer this question, limiting their applicability. We propose a weakly supervised approach that leverages language accompanying an instructional multi-view video as a means to recover its most informative viewpoint(s). Our key hypothesis is that the more accurately an individual view can predict a view-agnostic text summary, the more informative it is. To put this into action, we propose LangView, a framework that uses the relative accuracy of view-dependent caption predictions as a proxy for best view pseudo-labels. Then, those pseudo-labels are used to train a view selector, together with an auxiliary camera pose predictor that enhances view-sensitivity. During inference, our model takes as input only a multi-view video--no language or camera poses--and returns the best viewpoint to watch at each timestep. On two challenging datasets comprised of diverse multi-camera setups and how-to activities, our model consistently outperforms state-of-the-art baselines, both with quantitative metrics and human evaluation. Project page: https://vision.cs.utexas.edu/projects/which-view-shows-it-best.
[ { "version": "v1", "created": "Wed, 13 Nov 2024 16:31:08 GMT" }, { "version": "v2", "created": "Thu, 26 Dec 2024 15:49:20 GMT" }, { "version": "v3", "created": "Fri, 4 Apr 2025 20:45:11 GMT" }, { "version": "v4", "created": "Thu, 10 Apr 2025 02:02:49 GMT" } ]
2025-04-11T00:00:00
[ [ "Majumder", "Sagnik", "" ], [ "Nagarajan", "Tushar", "" ], [ "Al-Halah", "Ziad", "" ], [ "Pradhan", "Reina", "" ], [ "Grauman", "Kristen", "" ] ]
TITLE: Which Viewpoint Shows it Best? Language for Weakly Supervising View Selection in Multi-view Instructional Videos ABSTRACT: Given a multi-view video, which viewpoint is most informative for a human observer? Existing methods rely on heuristics or expensive "best-view" supervision to answer this question, limiting their applicability. We propose a weakly supervised approach that leverages language accompanying an instructional multi-view video as a means to recover its most informative viewpoint(s). Our key hypothesis is that the more accurately an individual view can predict a view-agnostic text summary, the more informative it is. To put this into action, we propose LangView, a framework that uses the relative accuracy of view-dependent caption predictions as a proxy for best view pseudo-labels. Then, those pseudo-labels are used to train a view selector, together with an auxiliary camera pose predictor that enhances view-sensitivity. During inference, our model takes as input only a multi-view video--no language or camera poses--and returns the best viewpoint to watch at each timestep. On two challenging datasets comprised of diverse multi-camera setups and how-to activities, our model consistently outperforms state-of-the-art baselines, both with quantitative metrics and human evaluation. Project page: https://vision.cs.utexas.edu/projects/which-view-shows-it-best.
no_new_dataset
0.951459
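The LangView record above derives best-view pseudo-labels from how accurately each view's caption predicts a view-agnostic summary. A compact sketch of that labelling step is below; the four callables are assumed interfaces standing in for the paper's captioners and captioning metric, not its exact API.

```python
def best_view_pseudo_labels(captioners, views, summary, score_fn):
    """Pseudo-label the best viewpoint for one timestep from caption accuracy.

    views: dict view_id -> clip at this timestep; summary: the view-agnostic
    text summary; captioners[view_id](clip) -> predicted caption;
    score_fn(pred, summary) -> similarity (e.g. a captioning metric). The
    view whose prediction best matches the summary becomes the pseudo-label
    used to train the view selector.
    """
    scores = {v: score_fn(captioners[v](clip), summary)
              for v, clip in views.items()}
    return max(scores, key=scores.get)
```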
2411.10739
Jiangang Chen
Jiangang Chen, Yung-Hong Sun, Kristen Pickett, Barbara King, Yu Hen Hu, Hongrui Jiang
A Wearable Gait Monitoring System for 17 Gait Parameters Based on Computer Vision
13 pages, 14 figures. This paper was submitted for publication to the IEEE Transactions on Instrumentation and Measurement
null
10.1109/TIM.2025.3557814
null
eess.SY cs.CV cs.SY eess.SP
http://creativecommons.org/licenses/by/4.0/
We developed a shoe-mounted gait monitoring system capable of tracking up to 17 gait parameters, including gait length, step time, stride velocity, and others. The system employs a stereo camera mounted on one shoe to track a marker placed on the opposite shoe, enabling the estimation of spatial gait parameters. Additionally, a Force Sensitive Resistor (FSR) affixed to the heel of the shoe, combined with a custom-designed algorithm, is utilized to measure temporal gait parameters. Through testing on multiple participants and comparison with the gait mat, the proposed gait monitoring system exhibited notable performance, with the accuracy of all measured gait parameters exceeding 93.61%. The system also demonstrated a low drift of 4.89% during long-distance walking. A gait identification task conducted on participants using a trained Transformer model achieved 95.7% accuracy on the dataset collected by the proposed system, demonstrating that our hardware has the potential to collect long-sequence gait data suitable for integration with current Large Language Models (LLMs). The system is cost-effective, user-friendly, and well-suited for real-life measurements.
[ { "version": "v1", "created": "Sat, 16 Nov 2024 08:25:22 GMT" } ]
2025-04-11T00:00:00
[ [ "Chen", "Jiangang", "" ], [ "Sun", "Yung-Hong", "" ], [ "Pickett", "Kristen", "" ], [ "King", "Barbara", "" ], [ "Hu", "Yu Hen", "" ], [ "Jiang", "Hongrui", "" ] ]
TITLE: A Wearable Gait Monitoring System for 17 Gait Parameters Based on Computer Vision ABSTRACT: We developed a shoe-mounted gait monitoring system capable of tracking up to 17 gait parameters, including gait length, step time, stride velocity, and others. The system employs a stereo camera mounted on one shoe to track a marker placed on the opposite shoe, enabling the estimation of spatial gait parameters. Additionally, a Force Sensitive Resistor (FSR) affixed to the heel of the shoe, combined with a custom-designed algorithm, is utilized to measure temporal gait parameters. Through testing on multiple participants and comparison with the gait mat, the proposed gait monitoring system exhibited notable performance, with the accuracy of all measured gait parameters exceeding 93.61%. The system also demonstrated a low drift of 4.89% during long-distance walking. A gait identification task conducted on participants using a trained Transformer model achieved 95.7% accuracy on the dataset collected by the proposed system, demonstrating that our hardware has the potential to collect long-sequence gait data suitable for integration with current Large Language Models (LLMs). The system is cost-effective, user-friendly, and well-suited for real-life measurements.
no_new_dataset
0.92421
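The gait record above derives temporal parameters from a heel-mounted FSR with a custom algorithm. A simplified sketch of the threshold-crossing idea follows; the fixed threshold and the parameter set shown are assumptions standing in for the paper's algorithm and its full set of 17 parameters.

```python
import numpy as np

def temporal_gait_params(fsr, fs, threshold=0.5):
    """Derive step times from a heel-mounted FSR trace.

    fsr: 1-D array of normalised force readings; fs: sampling rate in Hz.
    Heel strikes are taken as upward threshold crossings of the force signal.
    """
    above = fsr > threshold
    strikes = np.flatnonzero(above[1:] & ~above[:-1]) + 1  # rising edges
    step_times = np.diff(strikes) / fs                     # seconds per step
    return {"heel_strikes": strikes,
            "step_times": step_times,
            "cadence_spm": 60.0 / step_times.mean() if step_times.size else None}
```

Spatial parameters such as stride length would come from the stereo camera's marker track and be fused with these timestamps.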
2411.11260
Cameron Morin
Cameron Morin and Matti Marttinen Larsson
Large corpora and large language models: a replicable method for automating grammatical annotation
null
Linguistics Vanguard, 1-10 (2025)
10.1515/lingvan-2024-0228
null
cs.CL
http://creativecommons.org/licenses/by/4.0/
Much linguistic research relies on annotated datasets of features extracted from text corpora, but the rapid quantitative growth of these corpora has created practical difficulties for linguists to manually annotate large data samples. In this paper, we present a replicable, supervised method that leverages large language models for assisting the linguist in grammatical annotation through prompt engineering, training, and evaluation. We introduce a methodological pipeline applied to the case study of formal variation in the English evaluative verb construction 'consider X (as) (to be) Y', based on the large language model Claude 3.5 Sonnet and corpus data from Davies' NOW and EnTenTen21 (SketchEngine). Overall, we reach a model accuracy of over 90% on our held-out test samples with only a small amount of training data, validating the method for the annotation of very large quantities of tokens of the construction in the future. We discuss the generalisability of our results for a wider range of case studies of grammatical constructions and grammatical variation and change, underlining the value of AI copilots as tools for future linguistic research, notwithstanding some important caveats.
[ { "version": "v1", "created": "Mon, 18 Nov 2024 03:29:48 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 07:24:50 GMT" } ]
2025-04-11T00:00:00
[ [ "Morin", "Cameron", "" ], [ "Larsson", "Matti Marttinen", "" ] ]
TITLE: Large corpora and large language models: a replicable method for automating grammatical annotation ABSTRACT: Much linguistic research relies on annotated datasets of features extracted from text corpora, but the rapid quantitative growth of these corpora has created practical difficulties for linguists to manually annotate large data samples. In this paper, we present a replicable, supervised method that leverages large language models for assisting the linguist in grammatical annotation through prompt engineering, training, and evaluation. We introduce a methodological pipeline applied to the case study of formal variation in the English evaluative verb construction 'consider X (as) (to be) Y', based on the large language model Claude 3.5 Sonnet and corpus data from Davies' NOW and EnTenTen21 (SketchEngine). Overall, we reach a model accuracy of over 90% on our held-out test samples with only a small amount of training data, validating the method for the annotation of very large quantities of tokens of the construction in the future. We discuss the generalisability of our results for a wider range of case studies of grammatical constructions and grammatical variation and change, underlining the value of AI copilots as tools for future linguistic research, notwithstanding some important caveats.
no_new_dataset
0.941922
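The annotation record above describes a prompt-engineered LLM pipeline for labelling corpus hits of a grammatical construction. A minimal sketch of the annotation loop is below; the `llm` client call, the template, and the tag set are hypothetical placeholders for the study's actual prompts and variants.

```python
def annotate_tokens(llm, tokens, prompt_template, labels):
    """Prompted grammatical annotation of corpus concordance lines.

    tokens: corpus lines containing the target construction;
    prompt_template: task instructions plus a few gold examples, with an
    {example} slot; labels: the closed set of variant tags, e.g.
    ("as", "to_be", "zero"). llm(text) -> string is an assumed client call;
    outputs outside the tag set are flagged for manual review, preserving
    the human-in-the-loop evaluation step described above.
    """
    out = []
    for line in tokens:
        raw = llm(prompt_template.format(example=line)).strip()
        out.append(raw if raw in labels else "REVIEW")
    return out
```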
2411.15139
Bencheng Liao
Bencheng Liao, Shaoyu Chen, Haoran Yin, Bo Jiang, Cheng Wang, Sixu Yan, Xinbang Zhang, Xiangyu Li, Ying Zhang, Qian Zhang, Xinggang Wang
DiffusionDrive: Truncated Diffusion Model for End-to-End Autonomous Driving
Accepted to CVPR 2025 as Highlight. Code & demo & model are available at https://github.com/hustvl/DiffusionDrive
null
null
null
cs.CV cs.RO
http://creativecommons.org/licenses/by-nc-sa/4.0/
Recently, the diffusion model has emerged as a powerful generative technique for robotic policy learning, capable of modeling multi-mode action distributions. Leveraging its capability for end-to-end autonomous driving is a promising direction. However, the numerous denoising steps in the robotic diffusion policy and the more dynamic, open-world nature of traffic scenes pose substantial challenges for generating diverse driving actions at a real-time speed. To address these challenges, we propose a novel truncated diffusion policy that incorporates prior multi-mode anchors and truncates the diffusion schedule, enabling the model to learn denoising from anchored Gaussian distribution to the multi-mode driving action distribution. Additionally, we design an efficient cascade diffusion decoder for enhanced interaction with conditional scene context. The proposed model, DiffusionDrive, demonstrates a 10$\times$ reduction in denoising steps compared to vanilla diffusion policy, delivering superior diversity and quality in just 2 steps. On the planning-oriented NAVSIM dataset, with the aligned ResNet-34 backbone, DiffusionDrive achieves 88.1 PDMS without bells and whistles, setting a new record, while running at a real-time speed of 45 FPS on an NVIDIA 4090. Qualitative results on challenging scenarios further confirm that DiffusionDrive can robustly generate diverse plausible driving actions. Code and model will be available at https://github.com/hustvl/DiffusionDrive.
[ { "version": "v1", "created": "Fri, 22 Nov 2024 18:59:47 GMT" }, { "version": "v2", "created": "Mon, 24 Mar 2025 03:02:15 GMT" }, { "version": "v3", "created": "Thu, 10 Apr 2025 08:48:37 GMT" } ]
2025-04-11T00:00:00
[ [ "Liao", "Bencheng", "" ], [ "Chen", "Shaoyu", "" ], [ "Yin", "Haoran", "" ], [ "Jiang", "Bo", "" ], [ "Wang", "Cheng", "" ], [ "Yan", "Sixu", "" ], [ "Zhang", "Xinbang", "" ], [ "Li", "Xiangyu", "" ], [ "Zhang", "Ying", "" ], [ "Zhang", "Qian", "" ], [ "Wang", "Xinggang", "" ] ]
TITLE: DiffusionDrive: Truncated Diffusion Model for End-to-End Autonomous Driving ABSTRACT: Recently, the diffusion model has emerged as a powerful generative technique for robotic policy learning, capable of modeling multi-mode action distributions. Leveraging its capability for end-to-end autonomous driving is a promising direction. However, the numerous denoising steps in the robotic diffusion policy and the more dynamic, open-world nature of traffic scenes pose substantial challenges for generating diverse driving actions at a real-time speed. To address these challenges, we propose a novel truncated diffusion policy that incorporates prior multi-mode anchors and truncates the diffusion schedule, enabling the model to learn denoising from anchored Gaussian distribution to the multi-mode driving action distribution. Additionally, we design an efficient cascade diffusion decoder for enhanced interaction with conditional scene context. The proposed model, DiffusionDrive, demonstrates a 10$\times$ reduction in denoising steps compared to vanilla diffusion policy, delivering superior diversity and quality in just 2 steps. On the planning-oriented NAVSIM dataset, with the aligned ResNet-34 backbone, DiffusionDrive achieves 88.1 PDMS without bells and whistles, setting a new record, while running at a real-time speed of 45 FPS on an NVIDIA 4090. Qualitative results on challenging scenarios further confirm that DiffusionDrive can robustly generate diverse plausible driving actions. Code and model will be available at https://github.com/hustvl/DiffusionDrive.
no_new_dataset
0.945951
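The truncated diffusion policy in the record above starts sampling from anchored Gaussians around prior trajectory modes rather than pure noise, then runs only a couple of denoising steps. A minimal sketch of that inference loop is below; the `denoiser` signature, tensor shapes, and noise scale are assumptions for illustration.

```python
import torch

def truncated_denoise(denoiser, anchors, scene_ctx, steps=2, noise_scale=0.5):
    """Two-step truncated denoising from multi-mode trajectory anchors.

    anchors: (K, T, 2) prior trajectory modes; sampling starts from anchors
    perturbed by a small anchored Gaussian instead of pure noise, so only
    `steps` denoising passes are needed. denoiser(x, step, ctx) -> refined
    trajectories is an assumed interface for the cascade diffusion decoder.
    """
    x = anchors + noise_scale * torch.randn_like(anchors)  # anchored Gaussian init
    for step in reversed(range(steps)):                    # truncated schedule
        x = denoiser(x, step, scene_ctx)
    return x                                               # K diverse driving plans
```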
2411.17060
Mark Iskarous
Mark M. Iskarous, Zan Chaudhry, Fangjie Li, Samuel Bello, Sriramana Sankar, Ariel Slepyan, Natasha Chugh, Christopher L. Hunt, Rebecca J. Greene, Nitish V. Thakor
Invariant neuromorphic representations of tactile stimuli improve robustness of a real-time texture classification system
34 pages, 9 figures, 1 table
null
10.1002/aisy.202401078
null
cs.RO eess.SP
http://creativecommons.org/licenses/by-nc-nd/4.0/
Humans have an exquisite sense of touch, which robotic and prosthetic systems aim to recreate. We developed algorithms to create neuron-like (neuromorphic) spiking representations of texture that are invariant to the scanning speed and contact force applied in the sensing process. The spiking representations are based on mimicking activity from mechanoreceptors in human skin and further processing up to the brain. The neuromorphic encoding process transforms analog sensor readings into speed and force invariant spiking representations in three sequential stages: the force invariance module (in the analog domain), the spiking activity encoding module (transforms from analog to spiking domain), and the speed invariance module (in the spiking domain). The algorithms were tested on a tactile texture dataset collected in 15 speed-force conditions. An offline texture classification system built on the invariant representations has higher classification accuracy, improved computational efficiency, and increased capability to identify textures explored in novel speed-force conditions. The speed invariance algorithm was adapted to a real-time human-operated texture classification system. Similarly, the invariant representations improved classification accuracy, computational efficiency, and capability to identify textures explored in novel conditions. The invariant representation is even more crucial in this context due to human imprecision, which appears to the classification system as a novel condition. These results demonstrate that invariant neuromorphic representations enable better performing neurorobotic tactile sensing systems. Furthermore, because the neuromorphic representations are based on biological processing, this work can be used in the future as the basis for naturalistic sensory feedback for upper limb amputees.
[ { "version": "v1", "created": "Tue, 26 Nov 2024 02:57:37 GMT" } ]
2025-04-11T00:00:00
[ [ "Iskarous", "Mark M.", "" ], [ "Chaudhry", "Zan", "" ], [ "Li", "Fangjie", "" ], [ "Bello", "Samuel", "" ], [ "Sankar", "Sriramana", "" ], [ "Slepyan", "Ariel", "" ], [ "Chugh", "Natasha", "" ], [ "Hunt", "Christopher L.", "" ], [ "Greene", "Rebecca J.", "" ], [ "Thakor", "Nitish V.", "" ] ]
TITLE: Invariant neuromorphic representations of tactile stimuli improve robustness of a real-time texture classification system ABSTRACT: Humans have an exquisite sense of touch, which robotic and prosthetic systems aim to recreate. We developed algorithms to create neuron-like (neuromorphic) spiking representations of texture that are invariant to the scanning speed and contact force applied in the sensing process. The spiking representations are based on mimicking activity from mechanoreceptors in human skin and further processing up to the brain. The neuromorphic encoding process transforms analog sensor readings into speed and force invariant spiking representations in three sequential stages: the force invariance module (in the analog domain), the spiking activity encoding module (transforms from analog to spiking domain), and the speed invariance module (in the spiking domain). The algorithms were tested on a tactile texture dataset collected in 15 speed-force conditions. An offline texture classification system built on the invariant representations has higher classification accuracy, improved computational efficiency, and increased capability to identify textures explored in novel speed-force conditions. The speed invariance algorithm was adapted to a real-time human-operated texture classification system. Similarly, the invariant representations improved classification accuracy, computational efficiency, and capability to identify textures explored in novel conditions. The invariant representation is even more crucial in this context due to human imprecision, which appears to the classification system as a novel condition. These results demonstrate that invariant neuromorphic representations enable better performing neurorobotic tactile sensing systems. Furthermore, because the neuromorphic representations are based on biological processing, this work can be used in the future as the basis for naturalistic sensory feedback for upper limb amputees.
no_new_dataset
0.952662
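The record above converts analog tactile readings into spikes in its middle stage. A rough integrate-and-fire sketch of that analog-to-spiking step is shown below; it is one simple stand-in for the mechanoreceptor-inspired encoder, and the force- and speed-invariance modules described above are not reproduced.

```python
import numpy as np

def spike_encode(signal, fs, threshold=1.0):
    """Integrate-and-fire encoding of an analog tactile signal into spikes.

    signal: 1-D rectified sensor trace; fs: sampling rate in Hz. Charge
    integrates until it crosses `threshold`, emitting a spike time and
    resetting - so stronger stimulation yields denser spike trains.
    """
    spikes, charge = [], 0.0
    for i, x in enumerate(signal):
        charge += abs(x) / fs
        if charge >= threshold:
            spikes.append(i / fs)   # spike time in seconds
            charge = 0.0
    return np.asarray(spikes)
```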
2411.19050
Nicola Fanelli
Nicola Fanelli, Gennaro Vessio, Giovanna Castellano
I Dream My Painting: Connecting MLLMs and Diffusion Models via Prompt Generation for Text-Guided Multi-Mask Inpainting
Accepted at WACV 2025
null
10.1109/WACV61041.2025.00592
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Inpainting focuses on filling missing or corrupted regions of an image to blend seamlessly with its surrounding content and style. While conditional diffusion models have proven effective for text-guided inpainting, we introduce the novel task of multi-mask inpainting, where multiple regions are simultaneously inpainted using distinct prompts. Furthermore, we design a fine-tuning procedure for multimodal LLMs, such as LLaVA, to generate multi-mask prompts automatically using corrupted images as inputs. These models can generate helpful and detailed prompt suggestions for filling the masked regions. The generated prompts are then fed to Stable Diffusion, which is fine-tuned for the multi-mask inpainting problem using rectified cross-attention, enforcing prompts onto their designated regions for filling. Experiments on digitized paintings from WikiArt and the Densely Captioned Images dataset demonstrate that our pipeline delivers creative and accurate inpainting results. Our code, data, and trained models are available at https://cilabuniba.github.io/i-dream-my-painting.
[ { "version": "v1", "created": "Thu, 28 Nov 2024 10:55:09 GMT" }, { "version": "v2", "created": "Fri, 6 Dec 2024 10:58:53 GMT" } ]
2025-04-11T00:00:00
[ [ "Fanelli", "Nicola", "" ], [ "Vessio", "Gennaro", "" ], [ "Castellano", "Giovanna", "" ] ]
TITLE: I Dream My Painting: Connecting MLLMs and Diffusion Models via Prompt Generation for Text-Guided Multi-Mask Inpainting ABSTRACT: Inpainting focuses on filling missing or corrupted regions of an image to blend seamlessly with its surrounding content and style. While conditional diffusion models have proven effective for text-guided inpainting, we introduce the novel task of multi-mask inpainting, where multiple regions are simultaneously inpainted using distinct prompts. Furthermore, we design a fine-tuning procedure for multimodal LLMs, such as LLaVA, to generate multi-mask prompts automatically using corrupted images as inputs. These models can generate helpful and detailed prompt suggestions for filling the masked regions. The generated prompts are then fed to Stable Diffusion, which is fine-tuned for the multi-mask inpainting problem using rectified cross-attention, enforcing prompts onto their designated regions for filling. Experiments on digitized paintings from WikiArt and the Densely Captioned Images dataset demonstrate that our pipeline delivers creative and accurate inpainting results. Our code, data, and trained models are available at https://cilabuniba.github.io/i-dream-my-painting.
no_new_dataset
0.947817
2411.19346
Mohamed Fazli Imam
Mohamed Fazli Imam, Rufael Fedaku Marew, Jameel Hassan, Mustansar Fiaz, Alham Fikri Aji, Hisham Cholakkal
CLIP meets DINO for Tuning Zero-Shot Classifier using Unlabeled Image Collections
null
null
null
null
cs.CV cs.CL cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
In the era of foundation models, CLIP has emerged as a powerful tool for aligning text & visual modalities into a common embedding space. However, the alignment objective used to train CLIP often results in subpar visual features for fine-grained tasks. In contrast, SSL-pretrained models like DINO excel at extracting rich visual features due to their specialized training paradigm. Yet, these SSL models require an additional supervised linear probing step, which relies on fully labeled data that is often expensive and difficult to obtain at scale. In this paper, we propose a label-free prompt-tuning method that leverages the rich visual features of self-supervised learning models (DINO) and the broad textual knowledge of large language models (LLMs) to substantially enhance CLIP-based image classification performance using unlabeled images. Our approach unfolds in three key steps: (1) We generate robust textual feature embeddings that more accurately represent object classes by leveraging class-specific descriptions from LLMs, enabling more effective zero-shot classification compared to CLIP's default name-specific prompts. (2) These textual embeddings are then used to produce pseudo-labels to train an alignment module that integrates the complementary strengths of LLM description-based textual embeddings & DINO's visual features. (3) Finally, we prompt-tune CLIP's vision encoder through DINO-assisted supervision using the trained alignment module. This three-step process allows us to harness the best of visual & textual foundation models, resulting in a powerful and efficient approach that surpasses state-of-the-art label-free classification methods. Notably, our framework, NoLA (No Labels Attached), achieves an average absolute gain of 3.6% over the state-of-the-art LaFTer across 11 diverse image classification datasets. Our code & models can be found at https://github.com/fazliimam/NoLA.
[ { "version": "v1", "created": "Thu, 28 Nov 2024 19:48:54 GMT" }, { "version": "v2", "created": "Fri, 7 Mar 2025 08:08:18 GMT" }, { "version": "v3", "created": "Thu, 10 Apr 2025 11:09:41 GMT" } ]
2025-04-11T00:00:00
[ [ "Imam", "Mohamed Fazli", "" ], [ "Marew", "Rufael Fedaku", "" ], [ "Hassan", "Jameel", "" ], [ "Fiaz", "Mustansar", "" ], [ "Aji", "Alham Fikri", "" ], [ "Cholakkal", "Hisham", "" ] ]
TITLE: CLIP meets DINO for Tuning Zero-Shot Classifier using Unlabeled Image Collections ABSTRACT: In the era of foundation models, CLIP has emerged as a powerful tool for aligning text & visual modalities into a common embedding space. However, the alignment objective used to train CLIP often results in subpar visual features for fine-grained tasks. In contrast, SSL-pretrained models like DINO excel at extracting rich visual features due to their specialized training paradigm. Yet, these SSL models require an additional supervised linear probing step, which relies on fully labeled data that is often expensive and difficult to obtain at scale. In this paper, we propose a label-free prompt-tuning method that leverages the rich visual features of self-supervised learning models (DINO) and the broad textual knowledge of large language models (LLMs) to substantially enhance CLIP-based image classification performance using unlabeled images. Our approach unfolds in three key steps: (1) We generate robust textual feature embeddings that more accurately represent object classes by leveraging class-specific descriptions from LLMs, enabling more effective zero-shot classification compared to CLIP's default name-specific prompts. (2) These textual embeddings are then used to produce pseudo-labels to train an alignment module that integrates the complementary strengths of LLM description-based textual embeddings & DINO's visual features. (3) Finally, we prompt-tune CLIP's vision encoder through DINO-assisted supervision using the trained alignment module. This three-step process allows us to harness the best of visual & textual foundation models, resulting in a powerful and efficient approach that surpasses state-of-the-art label-free classification methods. Notably, our framework, NoLA (No Labels Attached), achieves an average absolute gain of 3.6% over the state-of-the-art LaFTer across 11 diverse image classification datasets. Our code & models can be found at https://github.com/fazliimam/NoLA.
no_new_dataset
0.949295
2412.01721
Marc Van Droogenbroeck
Floriane Magera and Thomas Hoyoux and Olivier Barnich and Marc Van Droogenbroeck
BroadTrack: Broadcast Camera Tracking for Soccer
12 pages, 4 figures, 3 tables, 60 references
null
10.1109/wacv61041.2025.00602
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Camera calibration and localization, sometimes simply named camera calibration, enables many applications in the context of soccer broadcasting, for instance regarding the interpretation and analysis of the game, or the insertion of augmented reality graphics for storytelling or refereeing purposes. To contribute to such applications, the research community has typically focused on single-view calibration methods, leveraging the near-omnipresence of soccer field markings in wide-angle broadcast views, but leaving all temporal aspects, if considered at all, to general-purpose tracking or filtering techniques. Only a few contributions have been made to leverage any domain-specific knowledge for this tracking task, and, as a result, a truly performant, off-the-shelf camera tracking system tailored for soccer broadcasting, specifically for elevated tripod-mounted cameras around the stadium, is still lacking. In this work, we present such a system capable of addressing the task of soccer broadcast camera tracking efficiently, robustly, and accurately, far outperforming the most precise state-of-the-art methods. By combining the available open-source soccer field detectors with carefully designed camera and tripod models, our tracking system, BroadTrack, halves the mean reprojection error rate and gains more than 15% in terms of Jaccard index for camera calibration on the SoccerNet dataset. Furthermore, as the SoccerNet dataset videos are relatively short (30 seconds), we also present qualitative results on a 20-minute broadcast clip to showcase the robustness and the soundness of our system.
[ { "version": "v1", "created": "Mon, 2 Dec 2024 17:10:52 GMT" } ]
2025-04-11T00:00:00
[ [ "Magera", "Floriane", "" ], [ "Hoyoux", "Thomas", "" ], [ "Barnich", "Olivier", "" ], [ "Van Droogenbroeck", "Marc", "" ] ]
TITLE: BroadTrack: Broadcast Camera Tracking for Soccer ABSTRACT: Camera calibration and localization, sometimes simply named camera calibration, enables many applications in the context of soccer broadcasting, for instance regarding the interpretation and analysis of the game, or the insertion of augmented reality graphics for storytelling or refereeing purposes. To contribute to such applications, the research community has typically focused on single-view calibration methods, leveraging the near-omnipresence of soccer field markings in wide-angle broadcast views, but leaving all temporal aspects, if considered at all, to general-purpose tracking or filtering techniques. Only a few contributions have been made to leverage any domain-specific knowledge for this tracking task, and, as a result, a truly performant, off-the-shelf camera tracking system tailored for soccer broadcasting, specifically for elevated tripod-mounted cameras around the stadium, is still lacking. In this work, we present such a system capable of addressing the task of soccer broadcast camera tracking efficiently, robustly, and accurately, far outperforming the most precise state-of-the-art methods. By combining the available open-source soccer field detectors with carefully designed camera and tripod models, our tracking system, BroadTrack, halves the mean reprojection error rate and gains more than 15% in terms of Jaccard index for camera calibration on the SoccerNet dataset. Furthermore, as the SoccerNet dataset videos are relatively short (30 seconds), we also present qualitative results on a 20-minute broadcast clip to showcase the robustness and the soundness of our system.
no_new_dataset
0.930268
2412.08864
Jiankang Wang
Jiankang Wang, Jianjun Xu, Xiaorui Wang, Yuxin Wang, Mengting Xing, Shancheng Fang, Zhineng Chen, Hongtao Xie, Yongdong Zhang
A Graph-Based Synthetic Data Pipeline for Scaling High-Quality Reasoning Instructions
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Synthesizing high-quality reasoning data for continual training has been proven to be effective in enhancing the performance of Large Language Models (LLMs). However, previous synthetic approaches struggle to easily scale up data and incur high costs in the pursuit of high quality. In this paper, we propose the Graph-based Synthetic Data Pipeline (GSDP), an economical and scalable framework for high-quality reasoning data synthesis. Inspired by knowledge graphs, we extract knowledge points from seed data and construct a knowledge-point relationship graph to explore their interconnections. By exploring the implicit relationships among knowledge, our method achieves $\times$255 data expansion. Furthermore, GSDP, led by open-source models, achieves synthesis quality comparable to GPT-4-0613 while maintaining $\times$100 lower costs. To tackle the most challenging mathematical reasoning task, we present the GSDP-MATH dataset comprising over 1.91 million pairs of math problems and answers. After fine-tuning on GSDP-MATH, GSDP-7B based on Mistral-7B achieves 37.7% accuracy on MATH and 78.4% on GSM8K, demonstrating the effectiveness of our method. The dataset and models will be released at https://github.com/Jayce1kk/GSDP.
[ { "version": "v1", "created": "Thu, 12 Dec 2024 01:52:25 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 10:47:53 GMT" } ]
2025-04-11T00:00:00
[ [ "Wang", "Jiankang", "" ], [ "Xu", "Jianjun", "" ], [ "Wang", "Xiaorui", "" ], [ "Wang", "Yuxin", "" ], [ "Xing", "Mengting", "" ], [ "Fang", "Shancheng", "" ], [ "Chen", "Zhineng", "" ], [ "Xie", "Hongtao", "" ], [ "Zhang", "Yongdong", "" ] ]
TITLE: A Graph-Based Synthetic Data Pipeline for Scaling High-Quality Reasoning Instructions ABSTRACT: Synthesizing high-quality reasoning data for continual training has been proven to be effective in enhancing the performance of Large Language Models (LLMs). However, previous synthetic approaches struggle to easily scale up data and incur high costs in the pursuit of high quality. In this paper, we propose the Graph-based Synthetic Data Pipeline (GSDP), an economical and scalable framework for high-quality reasoning data synthesis. Inspired by knowledge graphs, we extract knowledge points from seed data and construct a knowledge-point relationship graph to explore their interconnections. By exploring the implicit relationships among knowledge, our method achieves $\times$255 data expansion. Furthermore, GSDP, led by open-source models, achieves synthesis quality comparable to GPT-4-0613 while maintaining $\times$100 lower costs. To tackle the most challenging mathematical reasoning task, we present the GSDP-MATH dataset comprising over 1.91 million pairs of math problems and answers. After fine-tuning on GSDP-MATH, GSDP-7B based on Mistral-7B achieves 37.7% accuracy on MATH and 78.4% on GSM8K, demonstrating the effectiveness of our method. The dataset and models will be released at https://github.com/Jayce1kk/GSDP.
new_dataset
0.959154
2412.08912
Ali Mollaahmadi Dehaghi
Ali Mollaahmadi Dehaghi, Reza Razavi, Mohammad Moshirpour
Reversing the Damage: A QP-Aware Transformer-Diffusion Approach for 8K Video Restoration under Codec Compression
12 pages, 8 figures
null
10.1109/WACV61041.2025.00130
null
cs.CV cs.MM
http://creativecommons.org/licenses/by-nc-nd/4.0/
In this paper, we introduce DiQP, a novel Transformer-Diffusion model for restoring 8K video quality degraded by codec compression. To the best of our knowledge, our model is the first to tackle restoring the artifacts introduced by various codecs (AV1, HEVC) via Denoising Diffusion without considering additional noise. This approach allows us to model the complex, non-Gaussian nature of compression artifacts, effectively learning to reverse the degradation. Our architecture combines the power of Transformers to capture long-range dependencies with an enhanced windowed mechanism that preserves spatiotemporal context within groups of pixels across frames. To further enhance restoration, the model incorporates auxiliary "Look Ahead" and "Look Around" modules, providing both future and surrounding frame information to aid in reconstructing fine details and enhancing overall visual quality. Extensive experiments on different datasets demonstrate that our model outperforms state-of-the-art methods, particularly for high-resolution videos such as 4K and 8K, showcasing its effectiveness in restoring perceptually pleasing videos from highly compressed sources.
[ { "version": "v1", "created": "Thu, 12 Dec 2024 03:49:22 GMT" } ]
2025-04-11T00:00:00
[ [ "Dehaghi", "Ali Mollaahmadi", "" ], [ "Razavi", "Reza", "" ], [ "Moshirpour", "Mohammad", "" ] ]
TITLE: Reversing the Damage: A QP-Aware Transformer-Diffusion Approach for 8K Video Restoration under Codec Compression ABSTRACT: In this paper, we introduce DiQP, a novel Transformer-Diffusion model for restoring 8K video quality degraded by codec compression. To the best of our knowledge, our model is the first to tackle restoring the artifacts introduced by various codecs (AV1, HEVC) via Denoising Diffusion without considering additional noise. This approach allows us to model the complex, non-Gaussian nature of compression artifacts, effectively learning to reverse the degradation. Our architecture combines the power of Transformers to capture long-range dependencies with an enhanced windowed mechanism that preserves spatiotemporal context within groups of pixels across frames. To further enhance restoration, the model incorporates auxiliary "Look Ahead" and "Look Around" modules, providing both future and surrounding frame information to aid in reconstructing fine details and enhancing overall visual quality. Extensive experiments on different datasets demonstrate that our model outperforms state-of-the-art methods, particularly for high-resolution videos such as 4K and 8K, showcasing its effectiveness in restoring perceptually pleasing videos from highly compressed sources.
no_new_dataset
0.948442
2412.14602
Yuxuan Liang
Yuxuan Liang, Wentao Zhang, Zeang Sheng, Ling Yang, Quanqing Xu, Jiawei Jiang, Yunhai Tong, Bin Cui
Towards Scalable and Deep Graph Neural Networks via Noise Masking
null
null
null
null
cs.LG cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In recent years, Graph Neural Networks (GNNs) have achieved remarkable success in many graph mining tasks. However, scaling them to large graphs is challenging due to the high computational and storage costs of repeated feature propagation and non-linear transformation during training. One commonly employed approach to address this challenge is model-simplification, which executes Propagation (P) only once during pre-processing, Combines (C) the resulting receptive fields in different ways, and then feeds them into a simple model for better performance. Despite their high predictive performance and scalability, these methods still face two limitations. First, existing approaches mainly focus on exploring different C methods from the model perspective, neglecting the crucial problem of performance degradation with increasing P depth from the data-centric perspective, known as the over-smoothing problem. Second, pre-processing overhead takes up most of the end-to-end processing time, especially for large-scale graphs. To address these limitations, we present random walk with noise masking (RMask), a plug-and-play module compatible with the existing model-simplification works. This module enables the exploration of deeper GNNs while preserving their scalability. Unlike previous model-simplification works, we focus on continuous P, find that the noise existing inside each P is the cause of the over-smoothing issue, and use an efficient masking mechanism to eliminate it. Experimental results on six real-world datasets demonstrate that model-simplification works equipped with RMask yield superior performance compared to their original versions and can strike a good trade-off between accuracy and efficiency.
[ { "version": "v1", "created": "Thu, 19 Dec 2024 07:48:14 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 02:16:19 GMT" } ]
2025-04-11T00:00:00
[ [ "Liang", "Yuxuan", "" ], [ "Zhang", "Wentao", "" ], [ "Sheng", "Zeang", "" ], [ "Yang", "Ling", "" ], [ "Xu", "Quanqing", "" ], [ "Jiang", "Jiawei", "" ], [ "Tong", "Yunhai", "" ], [ "Cui", "Bin", "" ] ]
TITLE: Towards Scalable and Deep Graph Neural Networks via Noise Masking ABSTRACT: In recent years, Graph Neural Networks (GNNs) have achieved remarkable success in many graph mining tasks. However, scaling them to large graphs is challenging due to the high computational and storage costs of repeated feature propagation and non-linear transformation during training. One commonly employed approach to address this challenge is model-simplification, which executes Propagation (P) only once during pre-processing, Combines (C) the resulting receptive fields in different ways, and then feeds them into a simple model for better performance. Despite their high predictive performance and scalability, these methods still face two limitations. First, existing approaches mainly focus on exploring different C methods from the model perspective, neglecting the crucial problem of performance degradation with increasing P depth from the data-centric perspective, known as the over-smoothing problem. Second, pre-processing overhead takes up most of the end-to-end processing time, especially for large-scale graphs. To address these limitations, we present random walk with noise masking (RMask), a plug-and-play module compatible with the existing model-simplification works. This module enables the exploration of deeper GNNs while preserving their scalability. Unlike previous model-simplification works, we focus on continuous P, find that the noise existing inside each P is the cause of the over-smoothing issue, and use an efficient masking mechanism to eliminate it. Experimental results on six real-world datasets demonstrate that model-simplification works equipped with RMask yield superior performance compared to their original versions and can strike a good trade-off between accuracy and efficiency.
no_new_dataset
0.943764
2412.14719
Kun Li
Kun Li, Dan Guo, Guoliang Chen, Chunxiao Fan, Jingyuan Xu, Zhiliang Wu, Hehe Fan, Meng Wang
Prototypical Calibrating Ambiguous Samples for Micro-Action Recognition
Fix typos; Accepted by AAAI 2025
null
null
null
cs.CV cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Micro-Action Recognition (MAR) has gained increasing attention due to its crucial role as a form of non-verbal communication in social interactions, with promising potential for applications in human communication and emotion analysis. However, current approaches often overlook the inherent ambiguity in micro-actions, which arises from the wide category range and subtle visual differences between categories. This oversight hampers the accuracy of micro-action recognition. In this paper, we propose a novel Prototypical Calibrating Ambiguous Network (PCAN) to unleash and mitigate the ambiguity of MAR. Firstly, we employ a hierarchical action-tree to identify ambiguous samples, categorizing them into distinct sets of false negatives and false positives while considering both body- and action-level categories. Secondly, we implement an ambiguous contrastive refinement module to calibrate these ambiguous samples by regulating the distance between ambiguous samples and their corresponding prototypes. This calibration process aims to pull false negative (FN) samples closer to their respective prototypes and push false positive (FP) samples away from their affiliated prototypes. In addition, we propose a new prototypical diversity amplification loss to strengthen the model's capacity by amplifying the differences between different prototypes. Finally, we propose a prototype-guided rectification to rectify predictions by incorporating the representability of prototypes. Extensive experiments conducted on the benchmark dataset demonstrate the superior performance of our method compared to existing approaches. The code is available at https://github.com/kunli-cs/PCAN.
[ { "version": "v1", "created": "Thu, 19 Dec 2024 10:41:24 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 05:13:15 GMT" } ]
2025-04-11T00:00:00
[ [ "Li", "Kun", "" ], [ "Guo", "Dan", "" ], [ "Chen", "Guoliang", "" ], [ "Fan", "Chunxiao", "" ], [ "Xu", "Jingyuan", "" ], [ "Wu", "Zhiliang", "" ], [ "Fan", "Hehe", "" ], [ "Wang", "Meng", "" ] ]
TITLE: Prototypical Calibrating Ambiguous Samples for Micro-Action Recognition ABSTRACT: Micro-Action Recognition (MAR) has gained increasing attention due to its crucial role as a form of non-verbal communication in social interactions, with promising potential for applications in human communication and emotion analysis. However, current approaches often overlook the inherent ambiguity in micro-actions, which arises from the wide category range and subtle visual differences between categories. This oversight hampers the accuracy of micro-action recognition. In this paper, we propose a novel Prototypical Calibrating Ambiguous Network (PCAN) to unleash and mitigate the ambiguity of MAR. Firstly, we employ a hierarchical action-tree to identify ambiguous samples, categorizing them into distinct sets of false negatives and false positives while considering both body- and action-level categories. Secondly, we implement an ambiguous contrastive refinement module to calibrate these ambiguous samples by regulating the distance between ambiguous samples and their corresponding prototypes. This calibration process aims to pull false negative (FN) samples closer to their respective prototypes and push false positive (FP) samples away from their affiliated prototypes. In addition, we propose a new prototypical diversity amplification loss to strengthen the model's capacity by amplifying the differences between different prototypes. Finally, we propose a prototype-guided rectification to rectify predictions by incorporating the representability of prototypes. Extensive experiments conducted on the benchmark dataset demonstrate the superior performance of our method compared to existing approaches. The code is available at https://github.com/kunli-cs/PCAN.
no_new_dataset
0.948728
2412.17534
Ege Yiğit Çelik
Ege Yiğit Çelik and Selma Tekir
CiteBART: Learning to Generate Citations for Local Citation Recommendation
17 pages, 2 figures, 10 tables
null
null
null
cs.IR cs.AI cs.CL
http://creativecommons.org/licenses/by/4.0/
Local citation recommendation (LCR) suggests a set of papers for a citation placeholder within a given context. The task has evolved as generative approaches have become more promising than the traditional pre-fetch and re-rank-based state-of-the-art approaches. This paper introduces citation-specific pre-training within an encoder-decoder architecture, where author-date citation tokens are masked and the model learns to reconstruct them to fulfill LCR. There are two variants for this pre-training. In the local context-only base scheme (CiteBART-Base), the citation token in a local context is masked and the model learns to predict the citation. The global version (CiteBART-Global) extends the local context with the citing paper's title and abstract to enrich the learning signal. CiteBART-Global achieves state-of-the-art performance on LCR benchmarks except for the FullTextPeerRead dataset, which is too small to reveal the advantage of generative pre-training. The effect is significant in the larger benchmarks, e.g., Refseer and ArXiv, with the Refseer benchmark-trained model emerging as the best-performing model. We perform comprehensive experiments, including an ablation study, a qualitative analysis, and a taxonomy of hallucinations with detailed statistics. Our analyses confirm that CiteBART-Global has a cross-dataset generalization capability; the macro hallucination rate (MaHR) at the top-3 predictions is 4\%, and when the ground-truth is in the top-k prediction list, the hallucination tendency in the other predictions drops significantly.
[ { "version": "v1", "created": "Mon, 23 Dec 2024 12:58:30 GMT" }, { "version": "v2", "created": "Wed, 9 Apr 2025 20:23:16 GMT" } ]
2025-04-11T00:00:00
[ [ "Çelik", "Ege Yiğit", "" ], [ "Tekir", "Selma", "" ] ]
TITLE: CiteBART: Learning to Generate Citations for Local Citation Recommendation ABSTRACT: Local citation recommendation (LCR) suggests a set of papers for a citation placeholder within a given context. The task has evolved as generative approaches have become more promising than the traditional pre-fetch and re-rank-based state-of-the-art approaches. This paper introduces citation-specific pre-training within an encoder-decoder architecture, where author-date citation tokens are masked and the model learns to reconstruct them to fulfill LCR. There are two variants for this pre-training. In the local context-only base scheme (CiteBART-Base), the citation token in a local context is masked and the model learns to predict the citation. The global version (CiteBART-Global) extends the local context with the citing paper's title and abstract to enrich the learning signal. CiteBART-Global achieves state-of-the-art performance on LCR benchmarks except for the FullTextPeerRead dataset, which is too small to reveal the advantage of generative pre-training. The effect is significant in the larger benchmarks, e.g., Refseer and ArXiv, with the Refseer benchmark-trained model emerging as the best-performing model. We perform comprehensive experiments, including an ablation study, a qualitative analysis, and a taxonomy of hallucinations with detailed statistics. Our analyses confirm that CiteBART-Global has a cross-dataset generalization capability; the macro hallucination rate (MaHR) at the top-3 predictions is 4\%, and when the ground-truth is in the top-k prediction list, the hallucination tendency in the other predictions drops significantly.
no_new_dataset
0.953622
2412.20439
Wangyu Wu
Wangyu Wu, Xianglin Qiu, Siqi Song, Zhenhong Chen, Xiaowei Huang, Fei Ma, Jimin Xiao
Image Augmentation Agent for Weakly Supervised Semantic Segmentation
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Weakly-supervised semantic segmentation (WSSS) has achieved remarkable progress using only image-level labels. However, most existing WSSS methods focus on designing new network structures and loss functions to generate more accurate dense labels, overlooking the limitations imposed by fixed datasets, which can constrain performance improvements. We argue that more diverse training images provide WSSS with richer information and help the model understand more comprehensive semantic patterns. Therefore, in this paper, we introduce a novel approach called Image Augmentation Agent (IAA), which shows that it is possible to enhance WSSS from a data-generation perspective. IAA is built around an augmentation agent that leverages large language models (LLMs) and diffusion models to automatically generate additional images for WSSS. In practice, to address the instability in prompt generation by LLMs, we develop a prompt self-refinement mechanism. It allows LLMs to re-evaluate the rationality of generated prompts to produce more coherent prompts. Additionally, we insert an online filter into the diffusion generation process to dynamically ensure the quality and balance of generated images. Experimental results show that our method significantly surpasses state-of-the-art WSSS approaches on the PASCAL VOC 2012 and MS COCO 2014 datasets.
[ { "version": "v1", "created": "Sun, 29 Dec 2024 11:32:55 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 08:36:11 GMT" } ]
2025-04-11T00:00:00
[ [ "Wu", "Wangyu", "" ], [ "Qiu", "Xianglin", "" ], [ "Song", "Siqi", "" ], [ "Chen", "Zhenhong", "" ], [ "Huang", "Xiaowei", "" ], [ "Ma", "Fei", "" ], [ "Xiao", "Jimin", "" ] ]
TITLE: Image Augmentation Agent for Weakly Supervised Semantic Segmentation ABSTRACT: Weakly-supervised semantic segmentation (WSSS) has achieved remarkable progress using only image-level labels. However, most existing WSSS methods focus on designing new network structures and loss functions to generate more accurate dense labels, overlooking the limitations imposed by fixed datasets, which can constrain performance improvements. We argue that more diverse training images provide WSSS with richer information and help the model understand more comprehensive semantic patterns. Therefore, in this paper, we introduce a novel approach called Image Augmentation Agent (IAA), which shows that it is possible to enhance WSSS from a data-generation perspective. IAA is built around an augmentation agent that leverages large language models (LLMs) and diffusion models to automatically generate additional images for WSSS. In practice, to address the instability in prompt generation by LLMs, we develop a prompt self-refinement mechanism. It allows LLMs to re-evaluate the rationality of generated prompts to produce more coherent prompts. Additionally, we insert an online filter into the diffusion generation process to dynamically ensure the quality and balance of generated images. Experimental results show that our method significantly surpasses state-of-the-art WSSS approaches on the PASCAL VOC 2012 and MS COCO 2014 datasets.
no_new_dataset
0.94868
2412.21037
Soujanya Poria
Chia-Yu Hung, Navonil Majumder, Zhifeng Kong, Ambuj Mehrish, Amir Ali Bagherzadeh, Chuan Li, Rafael Valle, Bryan Catanzaro, Soujanya Poria
TangoFlux: Super Fast and Faithful Text to Audio Generation with Flow Matching and Clap-Ranked Preference Optimization
https://tangoflux.github.io/
null
null
null
cs.SD cs.AI cs.CL eess.AS
http://creativecommons.org/licenses/by-sa/4.0/
We introduce TangoFlux, an efficient Text-to-Audio (TTA) generative model with 515M parameters, capable of generating up to 30 seconds of 44.1kHz audio in just 3.7 seconds on a single A40 GPU. A key challenge in aligning TTA models lies in the difficulty of creating preference pairs, as TTA lacks structured mechanisms like verifiable rewards or gold-standard answers available for Large Language Models (LLMs). To address this, we propose CLAP-Ranked Preference Optimization (CRPO), a novel framework that iteratively generates and optimizes preference data to enhance TTA alignment. We demonstrate that the audio preference dataset generated using CRPO outperforms existing alternatives. With this framework, TangoFlux achieves state-of-the-art performance across both objective and subjective benchmarks. We open source all code and models to support further research in TTA generation.
[ { "version": "v1", "created": "Mon, 30 Dec 2024 16:02:44 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 05:01:32 GMT" } ]
2025-04-11T00:00:00
[ [ "Hung", "Chia-Yu", "" ], [ "Majumder", "Navonil", "" ], [ "Kong", "Zhifeng", "" ], [ "Mehrish", "Ambuj", "" ], [ "Bagherzadeh", "Amir Ali", "" ], [ "Li", "Chuan", "" ], [ "Valle", "Rafael", "" ], [ "Catanzaro", "Bryan", "" ], [ "Poria", "Soujanya", "" ] ]
TITLE: TangoFlux: Super Fast and Faithful Text to Audio Generation with Flow Matching and Clap-Ranked Preference Optimization ABSTRACT: We introduce TangoFlux, an efficient Text-to-Audio (TTA) generative model with 515M parameters, capable of generating up to 30 seconds of 44.1kHz audio in just 3.7 seconds on a single A40 GPU. A key challenge in aligning TTA models lies in the difficulty of creating preference pairs, as TTA lacks structured mechanisms like verifiable rewards or gold-standard answers available for Large Language Models (LLMs). To address this, we propose CLAP-Ranked Preference Optimization (CRPO), a novel framework that iteratively generates and optimizes preference data to enhance TTA alignment. We demonstrate that the audio preference dataset generated using CRPO outperforms existing alternatives. With this framework, TangoFlux achieves state-of-the-art performance across both objective and subjective benchmarks. We open source all code and models to support further research in TTA generation.
no_new_dataset
0.943815
2501.05449
MD Arafat Alam Khandaker Arafat
Md. Arafat Alam Khandaker, Ziyan Shirin Raha, Shifat Islam, Tashreef Muhammad
Explainable AI-Enhanced Deep Learning for Pumpkin Leaf Disease Detection: A Comparative Analysis of CNN Architectures
Accepted in 2024 27th International Conference on Computer and Information Technology (ICCIT)
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Pumpkin leaf diseases are significant threats to agricultural productivity, requiring a timely and precise diagnosis for effective management. Traditional identification methods are laborious and susceptible to human error, emphasizing the necessity for automated solutions. This study employs the "Pumpkin Leaf Disease Dataset", which comprises 2000 high-resolution images separated into five categories: downy mildew, powdery mildew, mosaic disease, bacterial leaf spot, and healthy leaves. The dataset was rigorously assembled from several agricultural fields to ensure a strong representation for model training. We explored many proficient deep learning architectures, including DenseNet201, DenseNet121, DenseNet169, Xception, ResNet50, ResNet101 and InceptionResNetV2, and observed that ResNet50 performed most effectively, with an accuracy of 90.5% and comparable precision, recall, and F1-Score. We used Explainable AI (XAI) approaches like Grad-CAM, Grad-CAM++, Score-CAM, and Layer-CAM to provide meaningful representations of model decision-making processes, which improved understanding and trust in automated disease diagnostics. These findings demonstrate ResNet50's potential to revolutionize pumpkin leaf disease detection, allowing for earlier and more accurate treatments.
[ { "version": "v1", "created": "Thu, 9 Jan 2025 18:59:35 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 17:35:24 GMT" } ]
2025-04-11T00:00:00
[ [ "Khandaker", "Md. Arafat Alam", "" ], [ "Raha", "Ziyan Shirin", "" ], [ "Islam", "Shifat", "" ], [ "Muhammad", "Tashreef", "" ] ]
TITLE: Explainable AI-Enhanced Deep Learning for Pumpkin Leaf Disease Detection: A Comparative Analysis of CNN Architectures ABSTRACT: Pumpkin leaf diseases are significant threats to agricultural productivity, requiring a timely and precise diagnosis for effective management. Traditional identification methods are laborious and susceptible to human error, emphasizing the necessity for automated solutions. This study employs the "Pumpkin Leaf Disease Dataset", which comprises 2000 high-resolution images separated into five categories: downy mildew, powdery mildew, mosaic disease, bacterial leaf spot, and healthy leaves. The dataset was rigorously assembled from several agricultural fields to ensure a strong representation for model training. We explored many proficient deep learning architectures, including DenseNet201, DenseNet121, DenseNet169, Xception, ResNet50, ResNet101 and InceptionResNetV2, and observed that ResNet50 performed most effectively, with an accuracy of 90.5% and comparable precision, recall, and F1-Score. We used Explainable AI (XAI) approaches like Grad-CAM, Grad-CAM++, Score-CAM, and Layer-CAM to provide meaningful representations of model decision-making processes, which improved understanding and trust in automated disease diagnostics. These findings demonstrate ResNet50's potential to revolutionize pumpkin leaf disease detection, allowing for earlier and more accurate treatments.
no_new_dataset
0.897471
2501.06465
Ye Chen
Ye Chen, Dongdong Huang, Haoyun Xu, Cong Fu, Lin Sheng, Qingli Zhou, Yuqiang Shen, Kai Wang
MedCT: A Clinical Terminology Graph for Generative AI Applications in Healthcare
Accepted into ICCS 2025 and published in Springer's LNCS Series
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
We introduce the world's first clinical terminology for the Chinese healthcare community, namely MedCT, accompanied by a clinical foundation model MedBERT and an entity linking model MedLink. The MedCT system enables standardized and programmable representation of Chinese clinical data, in turn stimulating the development of new medicines, treatment pathways, and better patient outcomes for the populous Chinese community. Moreover, the MedCT knowledge graph provides a principled mechanism to minimize the hallucination problem of large language models (LLMs), therefore achieving significant levels of accuracy and safety in LLM-based clinical applications. By leveraging the LLMs' emergent capabilities of generativeness and expressiveness, we were able to rapidly build a production-quality terminology system and deploy it to the real-world clinical field within three months, whereas classical terminologies like SNOMED CT have gone through more than twenty years of development. Our experiments show that the MedCT system achieves state-of-the-art (SOTA) performance in semantic matching and entity linking tasks, not only for Chinese but also for English. We also conducted a longitudinal field experiment by applying MedCT and LLMs in a representative spectrum of clinical tasks, including electronic health record (EHR) auto-generation and medical document search for diagnostic decision making. Our study shows a multitude of values of MedCT for clinical workflows and patient outcomes, especially in the new genre of clinical LLM applications. We present our approach in sufficient engineering detail, such that implementing a clinical terminology for other non-English societies should be readily reproducible. We openly release our terminology, models and algorithms, along with real-world clinical datasets, for further development.
[ { "version": "v1", "created": "Sat, 11 Jan 2025 07:35:51 GMT" }, { "version": "v2", "created": "Tue, 21 Jan 2025 01:56:11 GMT" }, { "version": "v3", "created": "Thu, 10 Apr 2025 07:29:10 GMT" } ]
2025-04-11T00:00:00
[ [ "Chen", "Ye", "" ], [ "Huang", "Dongdong", "" ], [ "Xu", "Haoyun", "" ], [ "Fu", "Cong", "" ], [ "Sheng", "Lin", "" ], [ "Zhou", "Qingli", "" ], [ "Shen", "Yuqiang", "" ], [ "Wang", "Kai", "" ] ]
TITLE: MedCT: A Clinical Terminology Graph for Generative AI Applications in Healthcare ABSTRACT: We introduce the world's first clinical terminology for the Chinese healthcare community, namely MedCT, accompanied by a clinical foundation model MedBERT and an entity linking model MedLink. The MedCT system enables standardized and programmable representation of Chinese clinical data, in turn stimulating the development of new medicines, treatment pathways, and better patient outcomes for the populous Chinese community. Moreover, the MedCT knowledge graph provides a principled mechanism to minimize the hallucination problem of large language models (LLMs), therefore achieving significant levels of accuracy and safety in LLM-based clinical applications. By leveraging the LLMs' emergent capabilities of generativeness and expressiveness, we were able to rapidly build a production-quality terminology system and deploy it to the real-world clinical field within three months, whereas classical terminologies like SNOMED CT have gone through more than twenty years of development. Our experiments show that the MedCT system achieves state-of-the-art (SOTA) performance in semantic matching and entity linking tasks, not only for Chinese but also for English. We also conducted a longitudinal field experiment by applying MedCT and LLMs in a representative spectrum of clinical tasks, including electronic health record (EHR) auto-generation and medical document search for diagnostic decision making. Our study shows a multitude of values of MedCT for clinical workflows and patient outcomes, especially in the new genre of clinical LLM applications. We present our approach in sufficient engineering detail, such that implementing a clinical terminology for other non-English societies should be readily reproducible. We openly release our terminology, models and algorithms, along with real-world clinical datasets, for further development.
no_new_dataset
0.947478
2501.07824
Joonho Ko
Joonho Ko, Jinheon Baek, Sung Ju Hwang
Real-time Verification and Refinement of Language Model Text Generation
null
null
null
null
cs.CL cs.AI cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Large language models (LLMs) have shown remarkable performance across a wide range of natural language tasks. However, a critical challenge remains in that they sometimes generate factually incorrect answers. To address this, while much previous work has focused on identifying errors in their generation and further refining them, these methods are slow in deployment since they are designed to verify the response from LLMs only after their entire generation (from the first to the last token) is done. Further, we observe that once LLMs generate incorrect tokens early on, there is a higher likelihood that subsequent tokens will also be factually incorrect. To this end, in this work, we propose Streaming-VR (Streaming Verification and Refinement), a novel approach designed to enhance the efficiency of verification and refinement of LLM outputs. Specifically, the proposed Streaming-VR enables on-the-fly verification and correction of tokens as they are being generated, similar to a streaming process, ensuring that each subset of tokens is checked and refined in real time by another LLM as the LLM constructs its response. Through comprehensive evaluations on multiple datasets, we demonstrate that our approach not only enhances the factual accuracy of LLMs, but also offers a more efficient solution compared to prior refinement methods.
[ { "version": "v1", "created": "Tue, 14 Jan 2025 03:59:48 GMT" }, { "version": "v2", "created": "Mon, 17 Feb 2025 13:26:52 GMT" }, { "version": "v3", "created": "Thu, 10 Apr 2025 06:39:35 GMT" } ]
2025-04-11T00:00:00
[ [ "Ko", "Joonho", "" ], [ "Baek", "Jinheon", "" ], [ "Hwang", "Sung Ju", "" ] ]
TITLE: Real-time Verification and Refinement of Language Model Text Generation ABSTRACT: Large language models (LLMs) have shown remarkable performance across a wide range of natural language tasks. However, a critical challenge remains in that they sometimes generate factually incorrect answers. To address this, while much previous work has focused on identifying errors in their generation and further refining them, these methods are slow in deployment since they are designed to verify the response from LLMs only after their entire generation (from the first to the last token) is done. Further, we observe that once LLMs generate incorrect tokens early on, there is a higher likelihood that subsequent tokens will also be factually incorrect. To this end, in this work, we propose Streaming-VR (Streaming Verification and Refinement), a novel approach designed to enhance the efficiency of verification and refinement of LLM outputs. Specifically, the proposed Streaming-VR enables on-the-fly verification and correction of tokens as they are being generated, similar to a streaming process, ensuring that each subset of tokens is checked and refined in real time by another LLM as the LLM constructs its response. Through comprehensive evaluations on multiple datasets, we demonstrate that our approach not only enhances the factual accuracy of LLMs, but also offers a more efficient solution compared to prior refinement methods.
no_new_dataset
0.948537
2501.14174
Junyeob Baek
Junyeob Baek, Yi-Fu Wu, Gautam Singh, Sungjin Ahn
Dreamweaver: Learning Compositional World Models from Pixels
null
null
null
null
cs.CV cs.AI cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Humans have an innate ability to decompose their perceptions of the world into objects and their attributes, such as colors, shapes, and movement patterns. This cognitive process enables us to imagine novel futures by recombining familiar concepts. However, replicating this ability in artificial intelligence systems has proven challenging, particularly when it comes to modeling videos into compositional concepts and generating unseen, recomposed futures without relying on auxiliary data, such as text, masks, or bounding boxes. In this paper, we propose Dreamweaver, a neural architecture designed to discover hierarchical and compositional representations from raw videos and generate compositional future simulations. Our approach leverages a novel Recurrent Block-Slot Unit (RBSU) to decompose videos into their constituent objects and attributes. In addition, Dreamweaver uses a multi-future-frame prediction objective to more effectively capture disentangled representations for both dynamic and static concepts. In experiments, we demonstrate that our model outperforms current state-of-the-art baselines for world modeling when evaluated under the DCI framework across multiple datasets. Furthermore, we show how the modularized concept representations of our model enable compositional imagination, allowing the generation of novel videos by recombining attributes from previously seen objects. cun-bjy.github.io/dreamweaver-website
[ { "version": "v1", "created": "Fri, 24 Jan 2025 01:50:19 GMT" }, { "version": "v2", "created": "Tue, 18 Feb 2025 08:16:03 GMT" }, { "version": "v3", "created": "Thu, 27 Feb 2025 16:09:15 GMT" }, { "version": "v4", "created": "Fri, 28 Feb 2025 08:12:21 GMT" }, { "version": "v5", "created": "Thu, 10 Apr 2025 13:12:34 GMT" } ]
2025-04-11T00:00:00
[ [ "Baek", "Junyeob", "" ], [ "Wu", "Yi-Fu", "" ], [ "Singh", "Gautam", "" ], [ "Ahn", "Sungjin", "" ] ]
TITLE: Dreamweaver: Learning Compositional World Models from Pixels ABSTRACT: Humans have an innate ability to decompose their perceptions of the world into objects and their attributes, such as colors, shapes, and movement patterns. This cognitive process enables us to imagine novel futures by recombining familiar concepts. However, replicating this ability in artificial intelligence systems has proven challenging, particularly when it comes to modeling videos into compositional concepts and generating unseen, recomposed futures without relying on auxiliary data, such as text, masks, or bounding boxes. In this paper, we propose Dreamweaver, a neural architecture designed to discover hierarchical and compositional representations from raw videos and generate compositional future simulations. Our approach leverages a novel Recurrent Block-Slot Unit (RBSU) to decompose videos into their constituent objects and attributes. In addition, Dreamweaver uses a multi-future-frame prediction objective to more effectively capture disentangled representations for both dynamic and static concepts. In experiments, we demonstrate that our model outperforms current state-of-the-art baselines for world modeling when evaluated under the DCI framework across multiple datasets. Furthermore, we show how the modularized concept representations of our model enable compositional imagination, allowing the generation of novel videos by recombining attributes from previously seen objects. cun-bjy.github.io/dreamweaver-website
no_new_dataset
0.942665
2501.15293
Sajjad Saleem
Rubab Hafeez, Sadia Waheed, Syeda Aleena Naqvi, Fahad Maqbool, Amna Sarwar, Sajjad Saleem, Muhammad Imran Sharif, Kamran Siddique, Zahid Akhtar
Deep Learning in Early Alzheimer's disease's Detection: A Comprehensive Survey of Classification, Segmentation, and Feature Extraction Methods
22 pages
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Alzheimer's disease is a deadly neurological condition, impairing important memory and brain functions. Alzheimer's disease promotes brain shrinkage, ultimately leading to dementia. Dementia diagnosis typically takes 2.8 to 4.4 years after the first clinical indication. Advancements in computing and information technology have led to many techniques for studying Alzheimer's disease. Early identification and therapy are crucial for preventing Alzheimer's disease, as early-onset dementia affects people before the age of 65, while late-onset dementia occurs after this age. According to the 2015 World Alzheimer's Disease Report, there are 46.8 million individuals worldwide suffering from dementia, with an anticipated 74.7 million more by 2030 and 131.5 million by 2050. Deep Learning has outperformed conventional Machine Learning techniques by identifying intricate structures in high-dimensional data. Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) have achieved an accuracy of up to 96.0% for Alzheimer's disease classification, and 84.2% for mild cognitive impairment (MCI) conversion prediction. Few literature surveys are available on applying ML to predict dementia, and they are lacking in congenital observations. However, this survey has focused on a specific data channel for dementia detection. This study evaluated Deep Learning algorithms for early Alzheimer's disease detection, using openly accessible datasets, feature segmentation, and classification methods. This article has also identified research gaps and limitations in detecting Alzheimer's disease, which can inform future research.
[ { "version": "v1", "created": "Sat, 25 Jan 2025 18:00:17 GMT" }, { "version": "v2", "created": "Wed, 29 Jan 2025 10:30:35 GMT" }, { "version": "v3", "created": "Wed, 9 Apr 2025 22:39:50 GMT" } ]
2025-04-11T00:00:00
[ [ "Hafeez", "Rubab", "" ], [ "Waheed", "Sadia", "" ], [ "Naqvi", "Syeda Aleena", "" ], [ "Maqbool", "Fahad", "" ], [ "Sarwar", "Amna", "" ], [ "Saleem", "Sajjad", "" ], [ "Sharif", "Muhammad Imran", "" ], [ "Siddique", "Kamran", "" ], [ "Akhtar", "Zahid", "" ] ]
TITLE: Deep Learning in Early Alzheimer's disease's Detection: A Comprehensive Survey of Classification, Segmentation, and Feature Extraction Methods ABSTRACT: Alzheimer's disease is a deadly neurological condition, impairing important memory and brain functions. Alzheimer's disease promotes brain shrinkage, ultimately leading to dementia. Dementia diagnosis typically takes 2.8 to 4.4 years after the first clinical indication. Advancements in computing and information technology have led to many techniques for studying Alzheimer's disease. Early identification and therapy are crucial for preventing Alzheimer's disease, as early-onset dementia affects people before the age of 65, while late-onset dementia occurs after this age. According to the 2015 World Alzheimer's Disease Report, there are 46.8 million individuals worldwide suffering from dementia, with an anticipated 74.7 million more by 2030 and 131.5 million by 2050. Deep Learning has outperformed conventional Machine Learning techniques by identifying intricate structures in high-dimensional data. Convolutional Neural Networks (CNNs) and Recurrent Neural Networks (RNNs) have achieved an accuracy of up to 96.0% for Alzheimer's disease classification, and 84.2% for mild cognitive impairment (MCI) conversion prediction. Few literature surveys are available on applying ML to predict dementia, and they are lacking in congenital observations. However, this survey has focused on a specific data channel for dementia detection. This study evaluated Deep Learning algorithms for early Alzheimer's disease detection, using openly accessible datasets, feature segmentation, and classification methods. This article has also identified research gaps and limitations in detecting Alzheimer's disease, which can inform future research.
no_new_dataset
0.940517
2502.04790
Yuting Zeng
Yuting Zeng, Weizhe Huang, Lei Jiang, Tongxuan Liu, Xitai Jin, Chen Tianying Tiana, Jing Li, Xiaohua Xu
S$^2$-MAD: Breaking the Token Barrier to Enhance Multi-Agent Debate Efficiency
Accepted to NAACL 2025 Main
null
null
null
cs.CL cs.AI
http://creativecommons.org/licenses/by/4.0/
Large language models (LLMs) have demonstrated remarkable capabilities across various natural language processing (NLP) scenarios, but they still face challenges when handling complex arithmetic and logical reasoning tasks. While Chain-Of-Thought (CoT) reasoning, self-consistency (SC) and self-correction strategies have attempted to guide models in sequential, multi-step reasoning, Multi-agent Debate (MAD) has emerged as a viable approach for enhancing the reasoning capabilities of LLMs. By increasing both the number of agents and the frequency of debates, the performance of LLMs improves significantly. However, this strategy results in a significant increase in token costs, presenting a barrier to scalability. To address this challenge, we introduce a novel sparsification strategy designed to reduce token costs within MAD. This approach minimizes ineffective exchanges of information and unproductive discussions among agents, thereby enhancing the overall efficiency of the debate process. We conduct comparative experiments on multiple datasets across various models, demonstrating that our approach significantly reduces the token costs in MAD to a considerable extent. Specifically, compared to MAD, our approach achieves an impressive reduction of up to 94.5\% in token costs while maintaining performance degradation below 2.0\%.
[ { "version": "v1", "created": "Fri, 7 Feb 2025 09:49:56 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 02:29:35 GMT" } ]
2025-04-11T00:00:00
[ [ "Zeng", "Yuting", "" ], [ "Huang", "Weizhe", "" ], [ "Jiang", "Lei", "" ], [ "Liu", "Tongxuan", "" ], [ "Jin", "Xitai", "" ], [ "Tiana", "Chen Tianying", "" ], [ "Li", "Jing", "" ], [ "Xu", "Xiaohua", "" ] ]
TITLE: S$^2$-MAD: Breaking the Token Barrier to Enhance Multi-Agent Debate Efficiency ABSTRACT: Large language models (LLMs) have demonstrated remarkable capabilities across various natural language processing (NLP) scenarios, but they still face challenges when handling complex arithmetic and logical reasoning tasks. While Chain-Of-Thought (CoT) reasoning, self-consistency (SC) and self-correction strategies have attempted to guide models in sequential, multi-step reasoning, Multi-agent Debate (MAD) has emerged as a viable approach for enhancing the reasoning capabilities of LLMs. By increasing both the number of agents and the frequency of debates, the performance of LLMs improves significantly. However, this strategy results in a significant increase in token costs, presenting a barrier to scalability. To address this challenge, we introduce a novel sparsification strategy designed to reduce token costs within MAD. This approach minimizes ineffective exchanges of information and unproductive discussions among agents, thereby enhancing the overall efficiency of the debate process. We conduct comparative experiments on multiple datasets across various models, demonstrating that our approach significantly reduces the token costs in MAD to a considerable extent. Specifically, compared to MAD, our approach achieves an impressive reduction of up to 94.5\% in token costs while maintaining performance degradation below 2.0\%.
no_new_dataset
0.948965
2502.05780
Danny Wang
Danny Wang, Ruihong Qiu, Guangdong Bai, Zi Huang
GOLD: Graph Out-of-Distribution Detection via Implicit Adversarial Latent Generation
ICLR25
null
null
null
cs.LG
http://creativecommons.org/licenses/by-sa/4.0/
Despite graph neural networks' (GNNs) great success in modelling graph-structured data, out-of-distribution (OOD) test instances still pose a great challenge for current GNNs. One of the most effective techniques to detect OOD nodes is to expose the detector model to an additional OOD node-set, yet the extra OOD instances are often difficult to obtain in practice. Recent methods for image data address this problem using OOD data synthesis, typically relying on pre-trained generative models like Stable Diffusion. However, these approaches require vast amounts of additional data, as well as one-for-all pre-trained generative models, which are not available for graph data. Therefore, we propose the GOLD framework for graph OOD detection, an implicit adversarial learning pipeline with synthetic OOD exposure without pre-trained models. The implicit adversarial training process employs a novel alternating optimisation framework by training: (1) a latent generative model to regularly imitate the in-distribution (ID) embeddings from an evolving GNN, and (2) a GNN encoder and an OOD detector to accurately classify ID data while increasing the energy divergence between the ID embeddings and the generative model's synthetic embeddings. This novel approach implicitly transforms the synthetic embeddings into pseudo-OOD instances relative to the ID data, effectively simulating exposure to OOD scenarios without auxiliary data. Extensive OOD detection experiments are conducted on five benchmark graph datasets, verifying the superior performance of GOLD without using real OOD data compared with the state-of-the-art OOD exposure and non-exposure baselines.
[ { "version": "v1", "created": "Sun, 9 Feb 2025 05:19:53 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 11:41:23 GMT" } ]
2025-04-11T00:00:00
[ [ "Wang", "Danny", "" ], [ "Qiu", "Ruihong", "" ], [ "Bai", "Guangdong", "" ], [ "Huang", "Zi", "" ] ]
TITLE: GOLD: Graph Out-of-Distribution Detection via Implicit Adversarial Latent Generation ABSTRACT: Despite graph neural networks' (GNNs) great success in modelling graph-structured data, out-of-distribution (OOD) test instances still pose a great challenge for current GNNs. One of the most effective techniques to detect OOD nodes is to expose the detector model to an additional OOD node-set, yet the extra OOD instances are often difficult to obtain in practice. Recent methods for image data address this problem using OOD data synthesis, typically relying on pre-trained generative models like Stable Diffusion. However, these approaches require vast amounts of additional data, as well as one-for-all pre-trained generative models, which are not available for graph data. Therefore, we propose the GOLD framework for graph OOD detection, an implicit adversarial learning pipeline with synthetic OOD exposure without pre-trained models. The implicit adversarial training process employs a novel alternating optimisation framework by training: (1) a latent generative model to regularly imitate the in-distribution (ID) embeddings from an evolving GNN, and (2) a GNN encoder and an OOD detector to accurately classify ID data while increasing the energy divergence between the ID embeddings and the generative model's synthetic embeddings. This novel approach implicitly transforms the synthetic embeddings into pseudo-OOD instances relative to the ID data, effectively simulating exposure to OOD scenarios without auxiliary data. Extensive OOD detection experiments are conducted on five benchmark graph datasets, verifying the superior performance of GOLD without using real OOD data compared with the state-of-the-art OOD exposure and non-exposure baselines.
no_new_dataset
0.949949
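For readers unfamiliar with the energy scoring that the GOLD record above builds on: a standard free-energy OOD score over classifier logits is E(x) = -log sum_c exp(logit_c), with higher energy indicating more OOD-like inputs. The sketch below shows only this generic primitive, not GOLD's adversarial training loop; all values are illustrative.

```python
import numpy as np

def energy_score(logits):
    """Free energy E(x) = -log sum_c exp(logit_c); higher => more OOD-like."""
    m = np.max(logits)                      # stabilise the log-sum-exp
    return -(m + np.log(np.sum(np.exp(logits - m))))

id_logits = np.array([8.0, 0.5, -1.0])     # confident in-distribution node
ood_logits = np.array([0.2, 0.1, 0.0])     # flat, uncertain logits
print(energy_score(id_logits))             # low energy (roughly -8.0)
print(energy_score(ood_logits))            # higher energy (roughly -1.2)
```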
2502.06193
Ruiqi Wang
Ruiqi Wang, Jiyu Guo, Cuiyun Gao, Guodong Fan, Chun Yong Chong, Xin Xia
Can LLMs Replace Human Evaluators? An Empirical Study of LLM-as-a-Judge in Software Engineering
Accepted by ISSTA 2025: https://conf.researchr.org/details/issta-2025/issta-2025-papers/85/Can-LLMs-replace-Human-Evaluators-An-Empirical-Study-of-LLM-as-a-Judge-in-Software-E
null
10.1145/3728963
null
cs.SE cs.AI
http://creativecommons.org/licenses/by/4.0/
Recently, large language models (LLMs) have been deployed to tackle various software engineering (SE) tasks like code generation, significantly advancing the automation of SE tasks. However, assessing the quality of LLM-generated code and text remains challenging. The commonly used Pass@k metric necessitates extensive unit tests and configured environments, demands a high labor cost, and is not suitable for evaluating LLM-generated text. Conventional metrics like BLEU, which measure only lexical rather than semantic similarity, have also come under scrutiny. In response, a new trend has emerged to employ LLMs for automated evaluation, known as LLM-as-a-judge. These LLM-as-a-judge methods are claimed to better mimic human assessment than conventional metrics without relying on high-quality reference answers. Nevertheless, their exact human alignment in SE tasks remains unexplored. In this paper, we empirically explore LLM-as-a-judge methods for evaluating SE tasks, focusing on their alignment with human judgments. We select seven LLM-as-a-judge methods that utilize general-purpose LLMs, alongside two LLMs specifically fine-tuned for evaluation. After generating and manually scoring LLM responses on three recent SE datasets of code translation, code generation, and code summarization, we prompt these methods to evaluate each response. Finally, we compare the scores generated by these methods with human evaluation. The results indicate that output-based methods reach the highest Pearson correlation of 81.32 and 68.51 with human scores in code translation and generation, achieving near-human evaluation, noticeably outperforming ChrF++, one of the best conventional metrics, at 34.23 and 64.92. Such output-based methods prompt LLMs to output judgments directly, and exhibit more balanced score distributions that resemble human score patterns. Finally, we provide...
[ { "version": "v1", "created": "Mon, 10 Feb 2025 06:49:29 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 07:33:55 GMT" } ]
2025-04-11T00:00:00
[ [ "Wang", "Ruiqi", "" ], [ "Guo", "Jiyu", "" ], [ "Gao", "Cuiyun", "" ], [ "Fan", "Guodong", "" ], [ "Chong", "Chun Yong", "" ], [ "Xia", "Xin", "" ] ]
TITLE: Can LLMs Replace Human Evaluators? An Empirical Study of LLM-as-a-Judge in Software Engineering ABSTRACT: Recently, large language models (LLMs) have been deployed to tackle various software engineering (SE) tasks like code generation, significantly advancing the automation of SE tasks. However, assessing the quality of LLM-generated code and text remains challenging. The commonly used Pass@k metric necessitates extensive unit tests and configured environments, demands a high labor cost, and is not suitable for evaluating LLM-generated text. Conventional metrics like BLEU, which measure only lexical rather than semantic similarity, have also come under scrutiny. In response, a new trend has emerged to employ LLMs for automated evaluation, known as LLM-as-a-judge. These LLM-as-a-judge methods are claimed to better mimic human assessment than conventional metrics without relying on high-quality reference answers. Nevertheless, their exact human alignment in SE tasks remains unexplored. In this paper, we empirically explore LLM-as-a-judge methods for evaluating SE tasks, focusing on their alignment with human judgments. We select seven LLM-as-a-judge methods that utilize general-purpose LLMs, alongside two LLMs specifically fine-tuned for evaluation. After generating and manually scoring LLM responses on three recent SE datasets of code translation, code generation, and code summarization, we prompt these methods to evaluate each response. Finally, we compare the scores generated by these methods with human evaluation. The results indicate that output-based methods reach the highest Pearson correlation of 81.32 and 68.51 with human scores in code translation and generation, achieving near-human evaluation, noticeably outperforming ChrF++, one of the best conventional metrics, at 34.23 and 64.92. Such output-based methods prompt LLMs to output judgments directly, and exhibit more balanced score distributions that resemble human score patterns. Finally, we provide...
no_new_dataset
0.941815
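The alignment numbers in the record above are Pearson correlations reported on a x100 scale. A minimal sketch of that human-vs-judge comparison; the score vectors here are invented for illustration, not the paper's data.

```python
import numpy as np

human = np.array([4, 2, 5, 3, 1, 4], dtype=float)  # manual scores (illustrative)
judge = np.array([5, 2, 4, 3, 2, 4], dtype=float)  # LLM-as-a-judge scores
r = np.corrcoef(human, judge)[0, 1]                # Pearson correlation
print(f"Pearson r = {r:.2f} ({100 * r:.2f} on the paper's x100 scale)")
```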
2502.07532
Erik Larsson
Erik Larsson, Joel Oskarsson, Tomas Landelius, Fredrik Lindsten
Diffusion-LAM: Probabilistic Limited Area Weather Forecasting with Diffusion
Accepted, camera ready version
null
null
null
cs.LG physics.ao-ph
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Machine learning methods have been shown to be effective for weather forecasting, owing to their speed and accuracy compared to traditional numerical models. While early efforts primarily concentrated on deterministic predictions, the field has increasingly shifted toward probabilistic forecasting to better capture the forecast uncertainty. Most machine learning-based models have been designed for global-scale predictions, with only limited work targeting regional or limited area forecasting, which allows more specialized and flexible modeling for specific locations. This work introduces Diffusion-LAM, a probabilistic limited area weather model leveraging conditional diffusion. By conditioning on boundary data from surrounding regions, our approach generates forecasts within a defined area. Experimental results on the MEPS limited area dataset demonstrate the potential of Diffusion-LAM to deliver accurate probabilistic forecasts, highlighting its promise for limited-area weather prediction.
[ { "version": "v1", "created": "Tue, 11 Feb 2025 13:15:16 GMT" }, { "version": "v2", "created": "Thu, 13 Feb 2025 13:55:08 GMT" }, { "version": "v3", "created": "Thu, 10 Apr 2025 12:10:33 GMT" } ]
2025-04-11T00:00:00
[ [ "Larsson", "Erik", "" ], [ "Oskarsson", "Joel", "" ], [ "Landelius", "Tomas", "" ], [ "Lindsten", "Fredrik", "" ] ]
TITLE: Diffusion-LAM: Probabilistic Limited Area Weather Forecasting with Diffusion ABSTRACT: Machine learning methods have been shown to be effective for weather forecasting, owing to their speed and accuracy compared to traditional numerical models. While early efforts primarily concentrated on deterministic predictions, the field has increasingly shifted toward probabilistic forecasting to better capture the forecast uncertainty. Most machine learning-based models have been designed for global-scale predictions, with only limited work targeting regional or limited area forecasting, which allows more specialized and flexible modeling for specific locations. This work introduces Diffusion-LAM, a probabilistic limited area weather model leveraging conditional diffusion. By conditioning on boundary data from surrounding regions, our approach generates forecasts within a defined area. Experimental results on the MEPS limited area dataset demonstrate the potential of Diffusion-LAM to deliver accurate probabilistic forecasts, highlighting its promise for limited-area weather prediction.
no_new_dataset
0.948585
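For context on the conditional-diffusion record above: the standard DDPM forward (noising) step that such models build on is x_t = sqrt(abar_t) x_0 + sqrt(1 - abar_t) eps. The sketch below shows only this generic primitive under an assumed linear beta schedule, not Diffusion-LAM's boundary-conditioned model.

```python
import numpy as np

def forward_noise(x0, t, alphas_bar, rng=None):
    """DDPM forward step: x_t = sqrt(abar_t)*x0 + sqrt(1-abar_t)*eps, eps ~ N(0, I)."""
    rng = rng or np.random.default_rng()
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps
    return xt, eps

betas = np.linspace(1e-4, 0.02, 1000)   # illustrative linear schedule
alphas_bar = np.cumprod(1.0 - betas)
x0 = np.zeros((8, 8))                   # a toy 8x8 "weather field"
xt, _ = forward_noise(x0, t=500, alphas_bar=alphas_bar)
```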
2502.09535
Carlton Shepherd
Carlton Shepherd and Elliot Hurley
Entropy Collapse in Mobile Sensors: The Hidden Risks of Sensor-Based Security
null
null
null
null
cs.CR
http://creativecommons.org/licenses/by/4.0/
Mobile sensor data has been proposed for security-critical applications such as device pairing, proximity detection, and continuous authentication. However, the foundational premise that these signals provide sufficient entropy remains under-explored. In this work, we systematically analyse the entropy of mobile sensor data across four diverse datasets spanning multiple application contexts. Our findings reveal pervasive biases, with single-sensor mean min-entropy values ranging from 3.408-4.483 bits (S.D.=1.018-1.574) despite Shannon entropy being several multiples higher, showing a significant collapse from average- to worst-case settings. We further demonstrate that correlations between sensor modalities reduce the worst-case entropy of using multiple sensors by up to ~75% compared to average-case Shannon entropy. This brings joint min-entropy well below 10 bits in many cases and, in the best case, yields only ~24 bits of min-entropy when combining 20 sensor modalities. These results raise the serious risk of attacks that exhaustively search the space of possible sensor measurements. Our work also calls into question the widely held assumption that adding more sensors inherently yields higher security, and we strongly urge caution when relying on mobile sensor data for security applications.
[ { "version": "v1", "created": "Thu, 13 Feb 2025 17:50:58 GMT" }, { "version": "v2", "created": "Fri, 14 Feb 2025 10:43:47 GMT" }, { "version": "v3", "created": "Mon, 17 Feb 2025 22:41:20 GMT" }, { "version": "v4", "created": "Tue, 25 Mar 2025 11:42:52 GMT" }, { "version": "v5", "created": "Wed, 26 Mar 2025 22:14:50 GMT" }, { "version": "v6", "created": "Thu, 10 Apr 2025 17:53:17 GMT" } ]
2025-04-11T00:00:00
[ [ "Shepherd", "Carlton", "" ], [ "Hurley", "Elliot", "" ] ]
TITLE: Entropy Collapse in Mobile Sensors: The Hidden Risks of Sensor-Based Security ABSTRACT: Mobile sensor data has been proposed for security-critical applications such as device pairing, proximity detection, and continuous authentication. However, the foundational premise that these signals provide sufficient entropy remains under-explored. In this work, we systematically analyse the entropy of mobile sensor data across four diverse datasets spanning multiple application contexts. Our findings reveal pervasive biases, with single-sensor mean min-entropy values ranging from 3.408-4.483 bits (S.D.=1.018-1.574) despite Shannon entropy being several multiples higher, showing a significant collapse from average- to worst-case settings. We further demonstrate that correlations between sensor modalities reduce the worst-case entropy of using multiple sensors by up to ~75% compared to average-case Shannon entropy. This brings joint min-entropy well below 10 bits in many cases and, in the best case, yields only ~24 bits of min-entropy when combining 20 sensor modalities. These results raise the serious risk of attacks that exhaustively search the space of possible sensor measurements. Our work also calls into question the widely held assumption that adding more sensors inherently yields higher security, and we strongly urge caution when relying on mobile sensor data for security applications.
no_new_dataset
0.947721
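To make the Shannon-vs-min-entropy gap in the record above concrete, here is a small sketch with a synthetic, heavily skewed distribution; the bit values come from this toy example, not from the paper's datasets.

```python
import numpy as np

def shannon_entropy(p):
    """Average-case entropy: H = -sum p_i log2 p_i."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def min_entropy(p):
    """Worst-case entropy: H_min = -log2 max_i p_i."""
    return -np.log2(np.max(p))

# A skewed "sensor reading" distribution: one value dominates.
p = np.array([0.5] + [0.5 / 99] * 99)
print(f"Shannon entropy: {shannon_entropy(p):.2f} bits")  # about 4.31 bits
print(f"Min-entropy:     {min_entropy(p):.2f} bits")      # 1.00 bit
```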
2502.10122
Saul Jos\'e Rodrigues Dos Santos
Saul Santos and Ant\'onio Farinhas and Daniel C. McNamee and Andr\'e F.T. Martins
Modern Hopfield Networks with Continuous-Time Memories
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recent research has established a connection between modern Hopfield networks (HNs) and transformer attention heads, with guarantees of exponential storage capacity. However, these models still face challenges scaling storage efficiently. Inspired by psychological theories of continuous neural resource allocation in working memory, we propose an approach that compresses large discrete Hopfield memories into smaller, continuous-time memories. Leveraging continuous attention, our new energy function modifies the update rule of HNs, replacing the traditional softmax-based probability mass function with a probability density over the continuous memory. This formulation aligns with modern perspectives on human executive function, offering a principled link between attractor dynamics in working memory and resource-efficient memory allocation. Our framework maintains competitive performance with HNs while leveraging a compressed memory, reducing computational costs across synthetic and video datasets.
[ { "version": "v1", "created": "Fri, 14 Feb 2025 12:41:05 GMT" }, { "version": "v2", "created": "Sun, 2 Mar 2025 10:06:12 GMT" }, { "version": "v3", "created": "Mon, 24 Mar 2025 17:57:09 GMT" }, { "version": "v4", "created": "Thu, 10 Apr 2025 14:32:13 GMT" } ]
2025-04-11T00:00:00
[ [ "Santos", "Saul", "" ], [ "Farinhas", "António", "" ], [ "McNamee", "Daniel C.", "" ], [ "Martins", "André F. T.", "" ] ]
TITLE: Modern Hopfield Networks with Continuous-Time Memories ABSTRACT: Recent research has established a connection between modern Hopfield networks (HNs) and transformer attention heads, with guarantees of exponential storage capacity. However, these models still face challenges scaling storage efficiently. Inspired by psychological theories of continuous neural resource allocation in working memory, we propose an approach that compresses large discrete Hopfield memories into smaller, continuous-time memories. Leveraging continuous attention, our new energy function modifies the update rule of HNs, replacing the traditional softmax-based probability mass function with a probability density over the continuous memory. This formulation aligns with modern perspectives on human executive function, offering a principled link between attractor dynamics in working memory and resource-efficient memory allocation. Our framework maintains competitive performance with HNs while leveraging a compressed memory, reducing computational costs across synthetic and video datasets.
no_new_dataset
0.947478
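For context on the record above: the discrete modern-Hopfield retrieval that the paper compresses into continuous memories is, in its widely used form, a softmax attention over stored patterns, xi_new = M softmax(beta M^T xi). A minimal sketch of that standard update; beta, the dimensions, and the patterns are illustrative.

```python
import numpy as np

def softmax(z):
    z = z - np.max(z)
    e = np.exp(z)
    return e / e.sum()

def hopfield_update(memory, query, beta=2.0):
    """One retrieval step: xi_new = M @ softmax(beta * M.T @ xi)."""
    p = softmax(beta * memory.T @ query)       # attention over stored patterns
    return memory @ p

rng = np.random.default_rng(0)
M = rng.standard_normal((16, 5))               # 5 stored 16-dim patterns (columns)
xi = M[:, 2] + 0.3 * rng.standard_normal(16)   # noisy probe of pattern 2
for _ in range(3):
    xi = hopfield_update(M, xi)
print(np.argmax(M.T @ xi))                     # should recover pattern index 2
```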
2502.14792
Henrique Pi\~neiro Monteagudo
Henrique Pi\~neiro Monteagudo, Leonardo Taccari, Aurel Pjetri, Francesco Sambo, Samuele Salti
RendBEV: Semantic Novel View Synthesis for Self-Supervised Bird's Eye View Segmentation
Accepted at WACV 2025
2025 IEEE/CVF Winter Conference on Applications of Computer Vision (WACV), Tucson, AZ, USA, 2025, pp. 535-544
10.1109/WACV61041.2025.00062
null
cs.CV
http://creativecommons.org/licenses/by-nc-sa/4.0/
Bird's Eye View (BEV) semantic maps have recently garnered a lot of attention as a useful representation of the environment to tackle assisted and autonomous driving tasks. However, most of the existing work focuses on the fully supervised setting, training networks on large annotated datasets. In this work, we present RendBEV, a new method for the self-supervised training of BEV semantic segmentation networks, leveraging differentiable volumetric rendering to receive supervision from semantic perspective views computed by a 2D semantic segmentation model. Our method enables zero-shot BEV semantic segmentation, and already delivers competitive results in this challenging setting. When used as pretraining to then fine-tune on labeled BEV ground-truth, our method significantly boosts performance in low-annotation regimes, and sets a new state of the art when fine-tuning on all available labels.
[ { "version": "v1", "created": "Thu, 20 Feb 2025 18:11:44 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 15:00:28 GMT" } ]
2025-04-11T00:00:00
[ [ "Monteagudo", "Henrique Piñeiro", "" ], [ "Taccari", "Leonardo", "" ], [ "Pjetri", "Aurel", "" ], [ "Sambo", "Francesco", "" ], [ "Salti", "Samuele", "" ] ]
TITLE: RendBEV: Semantic Novel View Synthesis for Self-Supervised Bird's Eye View Segmentation ABSTRACT: Bird's Eye View (BEV) semantic maps have recently garnered a lot of attention as a useful representation of the environment to tackle assisted and autonomous driving tasks. However, most of the existing work focuses on the fully supervised setting, training networks on large annotated datasets. In this work, we present RendBEV, a new method for the self-supervised training of BEV semantic segmentation networks, leveraging differentiable volumetric rendering to receive supervision from semantic perspective views computed by a 2D semantic segmentation model. Our method enables zero-shot BEV semantic segmentation, and already delivers competitive results in this challenging setting. When used as pretraining to then fine-tune on labeled BEV ground-truth, our method significantly boosts performance in low-annotation regimes, and sets a new state of the art when fine-tuning on all available labels.
no_new_dataset
0.951323
2503.00961
Md Abrar Jahin
Md Abrar Jahin, Shahriar Soudeep, M. F. Mridha, Raihan Kabir, Md Rashedul Islam, Yutaka Watanobe
CAGN-GAT Fusion: A Hybrid Contrastive Attentive Graph Neural Network for Network Intrusion Detection
Accepted in 38th International Conference on Industrial, Engineering & Other Applications of Applied Intelligent Systems (IEA/AIE 2025), Kitakyushu, Japan, Jul 2025
null
null
null
cs.LG
http://creativecommons.org/licenses/by/4.0/
Cybersecurity threats are growing, making network intrusion detection essential. Traditional machine learning models remain effective in resource-limited environments due to their efficiency, requiring fewer parameters and less computational time. However, handling short and highly imbalanced datasets remains challenging. In this study, we propose the fusion of a Contrastive Attentive Graph Network and Graph Attention Network (CAGN-GAT Fusion) and benchmark it against 15 other models, including both Graph Neural Networks (GNNs) and traditional ML models. Our evaluation is conducted on four benchmark datasets (KDD-CUP-1999, NSL-KDD, UNSW-NB15, and CICIDS2017) using a short and proportionally imbalanced dataset with a constant size of 5000 samples to ensure fairness in comparison. Results show that CAGN-GAT Fusion demonstrates stable and competitive accuracy, recall, and F1-score, even though it does not achieve the highest performance in every dataset. Our analysis also highlights the impact of adaptive graph construction techniques, including small changes in connections (edge perturbation) and selective hiding of features (feature masking), improving detection performance. The findings confirm that GNNs, particularly CAGN-GAT Fusion, are robust and computationally efficient, making them well-suited for resource-constrained environments. Future work will explore GraphSAGE layers and multiview graph construction techniques to further enhance adaptability and detection accuracy.
[ { "version": "v1", "created": "Sun, 2 Mar 2025 17:01:00 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 15:11:48 GMT" } ]
2025-04-11T00:00:00
[ [ "Jahin", "Md Abrar", "" ], [ "Soudeep", "Shahriar", "" ], [ "Mridha", "M. F.", "" ], [ "Kabir", "Raihan", "" ], [ "Islam", "Md Rashedul", "" ], [ "Watanobe", "Yutaka", "" ] ]
TITLE: CAGN-GAT Fusion: A Hybrid Contrastive Attentive Graph Neural Network for Network Intrusion Detection ABSTRACT: Cybersecurity threats are growing, making network intrusion detection essential. Traditional machine learning models remain effective in resource-limited environments due to their efficiency, requiring fewer parameters and less computational time. However, handling short and highly imbalanced datasets remains challenging. In this study, we propose the fusion of a Contrastive Attentive Graph Network and Graph Attention Network (CAGN-GAT Fusion) and benchmark it against 15 other models, including both Graph Neural Networks (GNNs) and traditional ML models. Our evaluation is conducted on four benchmark datasets (KDD-CUP-1999, NSL-KDD, UNSW-NB15, and CICIDS2017) using a short and proportionally imbalanced dataset with a constant size of 5000 samples to ensure fairness in comparison. Results show that CAGN-GAT Fusion demonstrates stable and competitive accuracy, recall, and F1-score, even though it does not achieve the highest performance in every dataset. Our analysis also highlights the impact of adaptive graph construction techniques, including small changes in connections (edge perturbation) and selective hiding of features (feature masking), improving detection performance. The findings confirm that GNNs, particularly CAGN-GAT Fusion, are robust and computationally efficient, making them well-suited for resource-constrained environments. Future work will explore GraphSAGE layers and multiview graph construction techniques to further enhance adaptability and detection accuracy.
no_new_dataset
0.946843
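The CAGN-GAT record above credits part of its detection gains to edge perturbation and feature masking. A generic sketch of those two graph augmentations on a dense adjacency matrix and a node-feature matrix; the rates and sizes are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def perturb_edges(adj, rate=0.05, rng=None):
    """Flip a small random fraction of off-diagonal entries (add/drop edges)."""
    rng = rng or np.random.default_rng()
    flips = rng.random(adj.shape) < rate
    np.fill_diagonal(flips, False)
    return np.where(flips, 1.0 - adj, adj)

def mask_features(x, rate=0.2, rng=None):
    """Zero out a random subset of feature columns (feature masking)."""
    rng = rng or np.random.default_rng()
    keep = (rng.random(x.shape[1]) >= rate).astype(x.dtype)
    return x * keep                            # broadcasting zeroes masked columns

rng = np.random.default_rng(0)
adj = (rng.random((6, 6)) < 0.3).astype(float) # toy dense adjacency
x = rng.standard_normal((6, 10))               # toy node features
adj_aug, x_aug = perturb_edges(adj, rng=rng), mask_features(x, rng=rng)
```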
2503.01284
Md Abrar Jahin
Md Abrar Jahin, Soudeep Shahriar, M. F. Mridha, Md. Jakir Hossen, Nilanjan Dey
Soybean Disease Detection via Interpretable Hybrid CNN-GNN: Integrating MobileNetV2 and GraphSAGE with Cross-Modal Attention
null
null
null
null
cs.CV cs.LG
http://creativecommons.org/licenses/by/4.0/
Soybean leaf disease detection is critical for agricultural productivity but faces challenges due to visually similar symptoms and limited interpretability in conventional methods. While Convolutional Neural Networks (CNNs) excel in spatial feature extraction, they often neglect inter-image relational dependencies, leading to misclassifications. This paper proposes an interpretable hybrid Sequential CNN-Graph Neural Network (GNN) framework that synergizes MobileNetV2 for localized feature extraction and GraphSAGE for relational modeling. The framework constructs a graph where nodes represent leaf images, with edges defined by cosine similarity-based adjacency matrices and adaptive neighborhood sampling. This design captures fine-grained lesion features and global symptom patterns, addressing inter-class similarity challenges. Cross-modal interpretability is achieved via Grad-CAM and Eigen-CAM visualizations, generating heatmaps to highlight disease-influential regions. Evaluated on a dataset of ten soybean leaf diseases, the model achieves $97.16\%$ accuracy, surpassing standalone CNNs ($\le95.04\%$) and traditional machine learning models ($\le77.05\%$). Ablation studies validate the sequential architecture's superiority over parallel or single-model configurations. With only 2.3 million parameters, the lightweight MobileNetV2-GraphSAGE combination ensures computational efficiency, enabling real-time deployment in resource-constrained environments. The proposed approach bridges the gap between accurate classification and practical applicability, offering a robust, interpretable tool for agricultural diagnostics while advancing CNN-GNN integration in plant pathology research.
[ { "version": "v1", "created": "Mon, 3 Mar 2025 08:12:09 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 15:14:17 GMT" } ]
2025-04-11T00:00:00
[ [ "Jahin", "Md Abrar", "" ], [ "Shahriar", "Soudeep", "" ], [ "Mridha", "M. F.", "" ], [ "Hossen", "Md. Jakir", "" ], [ "Dey", "Nilanjan", "" ] ]
TITLE: Soybean Disease Detection via Interpretable Hybrid CNN-GNN: Integrating MobileNetV2 and GraphSAGE with Cross-Modal Attention ABSTRACT: Soybean leaf disease detection is critical for agricultural productivity but faces challenges due to visually similar symptoms and limited interpretability in conventional methods. While Convolutional Neural Networks (CNNs) excel in spatial feature extraction, they often neglect inter-image relational dependencies, leading to misclassifications. This paper proposes an interpretable hybrid Sequential CNN-Graph Neural Network (GNN) framework that synergizes MobileNetV2 for localized feature extraction and GraphSAGE for relational modeling. The framework constructs a graph where nodes represent leaf images, with edges defined by cosine similarity-based adjacency matrices and adaptive neighborhood sampling. This design captures fine-grained lesion features and global symptom patterns, addressing inter-class similarity challenges. Cross-modal interpretability is achieved via Grad-CAM and Eigen-CAM visualizations, generating heatmaps to highlight disease-influential regions. Evaluated on a dataset of ten soybean leaf diseases, the model achieves $97.16\%$ accuracy, surpassing standalone CNNs ($\le95.04\%$) and traditional machine learning models ($\le77.05\%$). Ablation studies validate the sequential architecture's superiority over parallel or single-model configurations. With only 2.3 million parameters, the lightweight MobileNetV2-GraphSAGE combination ensures computational efficiency, enabling real-time deployment in resource-constrained environments. The proposed approach bridges the gap between accurate classification and practical applicability, offering a robust, interpretable tool for agricultural diagnostics while advancing CNN-GNN integration in plant pathology research.
no_new_dataset
0.950686
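The soybean record above defines graph edges via cosine similarity between image embeddings. A small sketch of that construction; the threshold, feature dimensions, and random embeddings are assumptions for illustration, not the paper's configuration.

```python
import numpy as np

def cosine_adjacency(feats, threshold=0.8):
    """Connect nodes whose embedding cosine similarity exceeds a threshold."""
    normed = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    sim = normed @ normed.T                    # pairwise cosine similarity
    adj = (sim >= threshold).astype(float)
    np.fill_diagonal(adj, 0.0)                 # no self-loops
    return adj

rng = np.random.default_rng(1)
feats = rng.standard_normal((8, 32))           # 8 leaf-image embeddings, dim 32
A = cosine_adjacency(feats, threshold=0.2)
print(A.sum(), "edges (directed count)")
```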
2503.02606
Adam Hartshorne
Adam Hartshorne, Allen Paul, Tony Shardlow, Neill D.F. Campbell
ARC-Flow : Articulated, Resolution-Agnostic, Correspondence-Free Matching and Interpolation of 3D Shapes Under Flow Fields
23 pages, 20 figures
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
This work presents a unified framework for the unsupervised prediction of physically plausible interpolations between two 3D articulated shapes and the automatic estimation of dense correspondence between them. Interpolation is modelled as a diffeomorphic transformation using a smooth, time-varying flow field governed by Neural Ordinary Differential Equations (ODEs). This ensures topological consistency and non-intersecting trajectories while accommodating hard constraints, such as volume preservation, and soft constraints, e.g., physical priors. Correspondence is recovered using an efficient Varifold formulation that is effective on high-fidelity surfaces with differing parameterisations. By providing a simple skeleton for the source shape only, we impose physically motivated constraints on the deformation field and resolve symmetric ambiguities. This is achieved without relying on skinning weights or any prior knowledge of the skeleton's target pose configuration. Qualitative and quantitative results demonstrate competitive or superior performance over existing state-of-the-art approaches in both shape correspondence and interpolation tasks across standard datasets.
[ { "version": "v1", "created": "Tue, 4 Mar 2025 13:28:05 GMT" }, { "version": "v2", "created": "Wed, 9 Apr 2025 22:08:51 GMT" } ]
2025-04-11T00:00:00
[ [ "Hartshorne", "Adam", "" ], [ "Paul", "Allen", "" ], [ "Shardlow", "Tony", "" ], [ "Campbell", "Neill D. F.", "" ] ]
TITLE: ARC-Flow : Articulated, Resolution-Agnostic, Correspondence-Free Matching and Interpolation of 3D Shapes Under Flow Fields ABSTRACT: This work presents a unified framework for the unsupervised prediction of physically plausible interpolations between two 3D articulated shapes and the automatic estimation of dense correspondence between them. Interpolation is modelled as a diffeomorphic transformation using a smooth, time-varying flow field governed by Neural Ordinary Differential Equations (ODEs). This ensures topological consistency and non-intersecting trajectories while accommodating hard constraints, such as volume preservation, and soft constraints, e.g., physical priors. Correspondence is recovered using an efficient Varifold formulation that is effective on high-fidelity surfaces with differing parameterisations. By providing a simple skeleton for the source shape only, we impose physically motivated constraints on the deformation field and resolve symmetric ambiguities. This is achieved without relying on skinning weights or any prior knowledge of the skeleton's target pose configuration. Qualitative and quantitative results demonstrate competitive or superior performance over existing state-of-the-art approaches in both shape correspondence and interpolation tasks across standard datasets.
no_new_dataset
0.945551
2503.06965
Shining Wang
Shining Wang, Yunlong Wang, Ruiqi Wu, Bingliang Jiao, Wenxuan Wang, Peng Wang
SeCap: Self-Calibrating and Adaptive Prompts for Cross-view Person Re-Identification in Aerial-Ground Networks
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
When discussing the Aerial-Ground Person Re-identification (AGPReID) task, we face the main challenge of the significant appearance variations caused by different viewpoints, making identity matching difficult. To address this issue, previous methods attempt to reduce the differences between viewpoints by extracting critical attributes and decoupling the viewpoints. While these methods can mitigate viewpoint differences to some extent, they still face two main issues: (1) difficulty in handling viewpoint diversity and (2) neglect of the contribution of local features. To effectively address these challenges, we design and implement the Self-Calibrating and Adaptive Prompt (SeCap) method for the AGPReID task. The core of this framework relies on the Prompt Re-calibration Module (PRM), which adaptively re-calibrates prompts based on the input. Combined with the Local Feature Refinement Module (LFRM), SeCap can extract view-invariant features from local features for AGPReID. Meanwhile, given the current scarcity of datasets in the AGPReID field, we further contribute two real-world Large-scale Aerial-Ground Person Re-Identification datasets, LAGPeR and G2APS-ReID. The former is collected and annotated by us independently, covering $4,231$ unique identities and containing $63,841$ high-quality images; the latter is reconstructed from the person search dataset G2APS. Through extensive experiments on AGPReID datasets, we demonstrate that SeCap is a feasible and effective solution for the AGPReID task. The datasets and source code are available at https://github.com/wangshining681/SeCap-AGPReID.
[ { "version": "v1", "created": "Mon, 10 Mar 2025 06:23:35 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 02:26:27 GMT" } ]
2025-04-11T00:00:00
[ [ "Wang", "Shining", "" ], [ "Wang", "Yunlong", "" ], [ "Wu", "Ruiqi", "" ], [ "Jiao", "Bingliang", "" ], [ "Wang", "Wenxuan", "" ], [ "Wang", "Peng", "" ] ]
TITLE: SeCap: Self-Calibrating and Adaptive Prompts for Cross-view Person Re-Identification in Aerial-Ground Networks ABSTRACT: When discussing the Aerial-Ground Person Re-identification (AGPReID) task, we face the main challenge of the significant appearance variations caused by different viewpoints, making identity matching difficult. To address this issue, previous methods attempt to reduce the differences between viewpoints by extracting critical attributes and decoupling the viewpoints. While these methods can mitigate viewpoint differences to some extent, they still face two main issues: (1) difficulty in handling viewpoint diversity and (2) neglect of the contribution of local features. To effectively address these challenges, we design and implement the Self-Calibrating and Adaptive Prompt (SeCap) method for the AGPReID task. The core of this framework relies on the Prompt Re-calibration Module (PRM), which adaptively re-calibrates prompts based on the input. Combined with the Local Feature Refinement Module (LFRM), SeCap can extract view-invariant features from local features for AGPReID. Meanwhile, given the current scarcity of datasets in the AGPReID field, we further contribute two real-world Large-scale Aerial-Ground Person Re-Identification datasets, LAGPeR and G2APS-ReID. The former is collected and annotated by us independently, covering $4,231$ unique identities and containing $63,841$ high-quality images; the latter is reconstructed from the person search dataset G2APS. Through extensive experiments on AGPReID datasets, we demonstrate that SeCap is a feasible and effective solution for the AGPReID task. The datasets and source code are available at https://github.com/wangshining681/SeCap-AGPReID.
no_new_dataset
0.942082
2503.08580
Justus Karlsson
Justus Karlsson, Yonghao Xu, Amanda Berg, Leif Haglund
Comparing Next-Day Wildfire Predictability of MODIS and VIIRS Satellite Data
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Multiple studies have performed next-day fire prediction using satellite imagery. Two main satellites are used to detect wildfires: MODIS and VIIRS. Both satellites provide fire mask products, called MOD14 and VNP14, respectively. Studies have used one or the other, but there has been no comparison between them to determine which might be more suitable for next-day fire prediction. In this paper, we first evaluate how well VIIRS and MODIS data can be used to forecast wildfire spread one day ahead. We find that the model using VIIRS as input and VNP14 as target achieves the best results. Interestingly, the model using MODIS as input and VNP14 as target performs significantly better than using VNP14 as input and MOD14 as target. Next, we discuss why MOD14 might be harder to use for predicting next-day fires. We find that the MOD14 fire mask is highly stochastic and does not correlate with reasonable fire spread patterns. This is detrimental for machine learning tasks, as the model learns irrational patterns. Therefore, we conclude that MOD14 is unsuitable for next-day fire prediction and that VNP14 is a much better option. However, using MODIS input and VNP14 as target, we achieve a significant improvement in predictability. This indicates that an improved fire detection model is possible for MODIS. The full code and dataset are available online: https://github.com/justuskarlsson/wildfire-mod14-vnp14
[ { "version": "v1", "created": "Tue, 11 Mar 2025 16:15:54 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 15:03:37 GMT" } ]
2025-04-11T00:00:00
[ [ "Karlsson", "Justus", "" ], [ "Xu", "Yonghao", "" ], [ "Berg", "Amanda", "" ], [ "Haglund", "Leif", "" ] ]
TITLE: Comparing Next-Day Wildfire Predictability of MODIS and VIIRS Satellite Data ABSTRACT: Multiple studies have performed next-day fire prediction using satellite imagery. Two main satellites are used to detect wildfires: MODIS and VIIRS. Both satellites provide fire mask products, called MOD14 and VNP14, respectively. Studies have used one or the other, but there has been no comparison between them to determine which might be more suitable for next-day fire prediction. In this paper, we first evaluate how well VIIRS and MODIS data can be used to forecast wildfire spread one day ahead. We find that the model using VIIRS as input and VNP14 as target achieves the best results. Interestingly, the model using MODIS as input and VNP14 as target performs significantly better than using VNP14 as input and MOD14 as target. Next, we discuss why MOD14 might be harder to use for predicting next-day fires. We find that the MOD14 fire mask is highly stochastic and does not correlate with reasonable fire spread patterns. This is detrimental for machine learning tasks, as the model learns irrational patterns. Therefore, we conclude that MOD14 is unsuitable for next-day fire prediction and that VNP14 is a much better option. However, using MODIS input and VNP14 as target, we achieve a significant improvement in predictability. This indicates that an improved fire detection model is possible for MODIS. The full code and dataset are available online: https://github.com/justuskarlsson/wildfire-mod14-vnp14
no_new_dataset
0.936634
2503.10111
Zecheng Zhao
Zecheng Zhao, Zhi Chen, Zi Huang, Shazia Sadiq, Tong Chen
Continual Text-to-Video Retrieval with Frame Fusion and Task-Aware Routing
Accepted at SIGIR 2025
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Text-to-Video Retrieval (TVR) aims to retrieve relevant videos based on textual queries. However, as video content evolves continuously, adapting TVR systems to new data remains a critical yet under-explored challenge. In this paper, we introduce the first benchmark for Continual Text-to-Video Retrieval (CTVR) to address the limitations of existing approaches. Current Pre-Trained Model (PTM)-based TVR methods struggle with maintaining model plasticity when adapting to new tasks, while existing Continual Learning (CL) methods suffer from catastrophic forgetting, leading to semantic misalignment between historical queries and stored video features. To address these two challenges, we propose FrameFusionMoE, a novel CTVR framework that comprises two key components: (1) the Frame Fusion Adapter (FFA), which captures temporal video dynamics while preserving model plasticity, and (2) the Task-Aware Mixture-of-Experts (TAME), which ensures consistent semantic alignment between queries across tasks and the stored video features. Thus, FrameFusionMoE enables effective adaptation to new video content while preserving historical text-video relevance to mitigate catastrophic forgetting. We comprehensively evaluate FrameFusionMoE on two benchmark datasets under various task settings. Results demonstrate that FrameFusionMoE outperforms existing CL and TVR methods, achieving superior retrieval performance with minimal degradation on earlier tasks when handling continuous video streams. Our code is available at: https://github.com/JasonCodeMaker/CTVR.
[ { "version": "v1", "created": "Thu, 13 Mar 2025 07:10:56 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 07:20:25 GMT" } ]
2025-04-11T00:00:00
[ [ "Zhao", "Zecheng", "" ], [ "Chen", "Zhi", "" ], [ "Huang", "Zi", "" ], [ "Sadiq", "Shazia", "" ], [ "Chen", "Tong", "" ] ]
TITLE: Continual Text-to-Video Retrieval with Frame Fusion and Task-Aware Routing ABSTRACT: Text-to-Video Retrieval (TVR) aims to retrieve relevant videos based on textual queries. However, as video content evolves continuously, adapting TVR systems to new data remains a critical yet under-explored challenge. In this paper, we introduce the first benchmark for Continual Text-to-Video Retrieval (CTVR) to address the limitations of existing approaches. Current Pre-Trained Model (PTM)-based TVR methods struggle with maintaining model plasticity when adapting to new tasks, while existing Continual Learning (CL) methods suffer from catastrophic forgetting, leading to semantic misalignment between historical queries and stored video features. To address these two challenges, we propose FrameFusionMoE, a novel CTVR framework that comprises two key components: (1) the Frame Fusion Adapter (FFA), which captures temporal video dynamics while preserving model plasticity, and (2) the Task-Aware Mixture-of-Experts (TAME), which ensures consistent semantic alignment between queries across tasks and the stored video features. Thus, FrameFusionMoE enables effective adaptation to new video content while preserving historical text-video relevance to mitigate catastrophic forgetting. We comprehensively evaluate FrameFusionMoE on two benchmark datasets under various task settings. Results demonstrate that FrameFusionMoE outperforms existing CL and TVR methods, achieving superior retrieval performance with minimal degradation on earlier tasks when handling continuous video streams. Our code is available at: https://github.com/JasonCodeMaker/CTVR.
no_new_dataset
0.947527
2503.13911
Yacong Li
Ruobing Jiang, Yacong Li, Haobing Liu, Yanwei Yu
Incorporating Attributes and Multi-Scale Structures for Heterogeneous Graph Contrastive Learning
null
null
null
null
cs.LG cs.SI
http://creativecommons.org/licenses/by/4.0/
Heterogeneous graphs (HGs) are composed of multiple types of nodes and edges, making them more effective at capturing the complex relational structures inherent in the real world. However, in real-world scenarios, labeled data is often difficult to obtain, which limits the applicability of semi-supervised approaches. Self-supervised learning aims to enable models to automatically learn useful features from data, effectively addressing the challenge of limited labeled data. In this paper, we propose a novel contrastive learning framework for heterogeneous graphs (ASHGCL), which incorporates three distinct views, each focusing on node attributes, high-order structural information, and low-order structural information, respectively, to effectively capture attribute information, high-order structures, and low-order structures for node representation learning. Furthermore, we introduce an attribute-enhanced positive sample selection strategy that combines both structural information and attribute information, effectively mitigating the issue of sampling bias. Extensive experiments on four real-world datasets show that ASHGCL outperforms state-of-the-art unsupervised baselines and even surpasses some supervised benchmarks.
[ { "version": "v1", "created": "Tue, 18 Mar 2025 05:15:21 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 09:07:02 GMT" } ]
2025-04-11T00:00:00
[ [ "Jiang", "Ruobing", "" ], [ "Li", "Yacong", "" ], [ "Liu", "Haobing", "" ], [ "Yu", "Yanwei", "" ] ]
TITLE: Incorporating Attributes and Multi-Scale Structures for Heterogeneous Graph Contrastive Learning ABSTRACT: Heterogeneous graphs (HGs) are composed of multiple types of nodes and edges, making them more effective at capturing the complex relational structures inherent in the real world. However, in real-world scenarios, labeled data is often difficult to obtain, which limits the applicability of semi-supervised approaches. Self-supervised learning aims to enable models to automatically learn useful features from data, effectively addressing the challenge of limited labeled data. In this paper, we propose a novel contrastive learning framework for heterogeneous graphs (ASHGCL), which incorporates three distinct views, each focusing on node attributes, high-order structural information, and low-order structural information, respectively, to effectively capture attribute information, high-order structures, and low-order structures for node representation learning. Furthermore, we introduce an attribute-enhanced positive sample selection strategy that combines both structural information and attribute information, effectively mitigating the issue of sampling bias. Extensive experiments on four real-world datasets show that ASHGCL outperforms state-of-the-art unsupervised baselines and even surpasses some supervised benchmarks.
no_new_dataset
0.948965
2503.16825
Xiyue Guo
Xiyue Guo, Jiarui Hu, Junjie Hu, Hujun Bao, Guofeng Zhang
SGFormer: Satellite-Ground Fusion for 3D Semantic Scene Completion
Project Page: https://zju3dv.github.io/sgformer/
null
null
null
cs.CV cs.RO
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Recently, camera-based solutions have been extensively explored for semantic scene completion (SSC). Despite their success in visible areas, existing methods struggle to capture complete scene semantics due to frequent visual occlusions. To address this limitation, this paper presents the first satellite-ground cooperative SSC framework, i.e., SGFormer, exploring the potential of satellite-ground image pairs in the SSC task. Specifically, we propose a dual-branch architecture that encodes orthogonal satellite and ground views in parallel, unifying them into a common domain. Additionally, we design a ground-view guidance strategy that corrects satellite image biases during feature encoding, addressing misalignment between satellite and ground views. Moreover, we develop an adaptive weighting strategy that balances contributions from satellite and ground views. Experiments demonstrate that SGFormer outperforms the state of the art on the SemanticKITTI and SSCBench-KITTI-360 datasets. Our code is available at https://github.com/gxytcrc/SGFormer.
[ { "version": "v1", "created": "Fri, 21 Mar 2025 03:37:08 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 08:47:41 GMT" } ]
2025-04-11T00:00:00
[ [ "Guo", "Xiyue", "" ], [ "Hu", "Jiarui", "" ], [ "Hu", "Junjie", "" ], [ "Bao", "Hujun", "" ], [ "Zhang", "Guofeng", "" ] ]
TITLE: SGFormer: Satellite-Ground Fusion for 3D Semantic Scene Completion ABSTRACT: Recently, camera-based solutions have been extensively explored for semantic scene completion (SSC). Despite their success in visible areas, existing methods struggle to capture complete scene semantics due to frequent visual occlusions. To address this limitation, this paper presents the first satellite-ground cooperative SSC framework, i.e., SGFormer, exploring the potential of satellite-ground image pairs in the SSC task. Specifically, we propose a dual-branch architecture that encodes orthogonal satellite and ground views in parallel, unifying them into a common domain. Additionally, we design a ground-view guidance strategy that corrects satellite image biases during feature encoding, addressing misalignment between satellite and ground views. Moreover, we develop an adaptive weighting strategy that balances contributions from satellite and ground views. Experiments demonstrate that SGFormer outperforms the state of the art on the SemanticKITTI and SSCBench-KITTI-360 datasets. Our code is available at https://github.com/gxytcrc/SGFormer.
no_new_dataset
0.947381
2503.17845
Matthias Herp
Matthias Herp, Johannes Brachem, Michael Altenbuchinger, Thomas Kneib
Graphical Transformation Models
36 pages, 10 Figures, presented at the DAGStat 2025 in Berlin
null
null
null
stat.ME cs.LG stat.ML
http://creativecommons.org/licenses/by/4.0/
Graphical Transformation Models (GTMs) are introduced as a novel approach to effectively model multivariate data with intricate marginals and complex dependency structures non-parametrically, while maintaining interpretability through the identification of varying conditional independencies. GTMs extend multivariate transformation models by replacing the Gaussian copula with a custom-designed multivariate transformation, offering two major advantages. Firstly, GTMs can capture more complex interdependencies using penalized splines, which also provide an efficient regularization scheme. Secondly, we demonstrate how to approximately regularize GTMs using a lasso penalty towards pairwise conditional independencies, akin to Gaussian graphical models. The model's robustness and effectiveness are validated through simulations, showcasing its ability to accurately learn parametric vine copulas and identify conditional independencies. Additionally, the model is applied to a benchmark astrophysics dataset, where the GTM demonstrates favorable performance compared to non-parametric vine copulas in learning complex multivariate distributions.
[ { "version": "v1", "created": "Sat, 22 Mar 2025 19:41:15 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 12:45:22 GMT" } ]
2025-04-11T00:00:00
[ [ "Herp", "Matthias", "" ], [ "Brachem", "Johannes", "" ], [ "Altenbuchinger", "Michael", "" ], [ "Kneib", "Thomas", "" ] ]
TITLE: Graphical Transformation Models ABSTRACT: Graphical Transformation Models (GTMs) are introduced as a novel approach to effectively model multivariate data with intricate marginals and complex dependency structures non-parametrically, while maintaining interpretability through the identification of varying conditional independencies. GTMs extend multivariate transformation models by replacing the Gaussian copula with a custom-designed multivariate transformation, offering two major advantages. Firstly, GTMs can capture more complex interdependencies using penalized splines, which also provide an efficient regularization scheme. Secondly, we demonstrate how to approximately regularize GTMs using a lasso penalty towards pairwise conditional independencies, akin to Gaussian graphical models. The model's robustness and effectiveness are validated through simulations, showcasing its ability to accurately learn parametric vine copulas and identify conditional independencies. Additionally, the model is applied to a benchmark astrophysics dataset, where the GTM demonstrates favorable performance compared to non-parametric vine copulas in learning complex multivariate distributions.
no_new_dataset
0.948251
2503.19949
Valerii Zuev
Valerii A. Zuev, Elena G. Salmagambetova, Stepan N. Djakov, Lev V. Utkin
Automated Video-EEG Analysis in Epilepsy Studies: Advances and Challenges
null
null
null
null
eess.IV cs.LG
http://creativecommons.org/licenses/by/4.0/
Epilepsy is typically diagnosed through electroencephalography (EEG) and long-term video-EEG (vEEG) monitoring. The manual analysis of vEEG recordings is time-consuming, necessitating automated tools for seizure detection. Recent advancements in machine learning have shown promise in real-time seizure detection and prediction using EEG and video data. However, diversity of seizure symptoms, markup ambiguities, and limited availability of multimodal datasets hinder progress. This paper reviews the latest developments in automated video-EEG analysis and discusses the integration of multimodal data. We also propose a novel pipeline for treatment effect estimation from vEEG data using concept-based learning, offering a pathway for future research in this domain.
[ { "version": "v1", "created": "Tue, 25 Mar 2025 15:37:02 GMT" }, { "version": "v2", "created": "Thu, 3 Apr 2025 17:13:16 GMT" }, { "version": "v3", "created": "Thu, 10 Apr 2025 08:16:31 GMT" } ]
2025-04-11T00:00:00
[ [ "Zuev", "Valerii A.", "" ], [ "Salmagambetova", "Elena G.", "" ], [ "Djakov", "Stepan N.", "" ], [ "Utkin", "Lev V.", "" ] ]
TITLE: Automated Video-EEG Analysis in Epilepsy Studies: Advances and Challenges ABSTRACT: Epilepsy is typically diagnosed through electroencephalography (EEG) and long-term video-EEG (vEEG) monitoring. The manual analysis of vEEG recordings is time-consuming, necessitating automated tools for seizure detection. Recent advancements in machine learning have shown promise in real-time seizure detection and prediction using EEG and video data. However, diversity of seizure symptoms, markup ambiguities, and limited availability of multimodal datasets hinder progress. This paper reviews the latest developments in automated video-EEG analysis and discusses the integration of multimodal data. We also propose a novel pipeline for treatment effect estimation from vEEG data using concept-based learning, offering a pathway for future research in this domain.
no_new_dataset
0.948442
2503.24305
Jakub Adamczyk
Jakub Adamczyk, Jakub Poziemski, Pawel Siedlecki
Evaluating machine learning models for predicting pesticides toxicity to honey bees
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Small molecules play a critical role in the biomedical, environmental, and agrochemical domains, each with distinct physicochemical requirements and success criteria. Although biomedical research benefits from extensive datasets and established benchmarks, agrochemical data remain scarce, particularly with respect to species-specific toxicity. This work focuses on ApisTox, the most comprehensive dataset of experimentally validated chemical toxicity to the honey bee (Apis mellifera), an ecologically vital pollinator. We evaluate ApisTox using a diverse suite of machine learning approaches, including molecular fingerprints, graph kernels, and graph neural networks, as well as pretrained models. Comparative analysis with medicinal datasets from the MoleculeNet benchmark reveals that ApisTox represents a distinct chemical space. Performance degradation on non-medicinal datasets, such as ApisTox, demonstrates the limited generalizability of current state-of-the-art algorithms trained solely on biomedical data. Our study highlights the need for more diverse datasets and for targeted model development geared toward the agrochemical domain.
[ { "version": "v1", "created": "Mon, 31 Mar 2025 16:51:12 GMT" }, { "version": "v2", "created": "Tue, 1 Apr 2025 07:57:14 GMT" }, { "version": "v3", "created": "Thu, 10 Apr 2025 09:38:53 GMT" } ]
2025-04-11T00:00:00
[ [ "Adamczyk", "Jakub", "" ], [ "Poziemski", "Jakub", "" ], [ "Siedlecki", "Pawel", "" ] ]
TITLE: Evaluating machine learning models for predicting pesticides toxicity to honey bees ABSTRACT: Small molecules play a critical role in the biomedical, environmental, and agrochemical domains, each with distinct physicochemical requirements and success criteria. Although biomedical research benefits from extensive datasets and established benchmarks, agrochemical data remain scarce, particularly with respect to species-specific toxicity. This work focuses on ApisTox, the most comprehensive dataset of experimentally validated chemical toxicity to the honey bee (Apis mellifera), an ecologically vital pollinator. We evaluate ApisTox using a diverse suite of machine learning approaches, including molecular fingerprints, graph kernels, and graph neural networks, as well as pretrained models. Comparative analysis with medicinal datasets from the MoleculeNet benchmark reveals that ApisTox represents a distinct chemical space. Performance degradation on non-medicinal datasets, such as ApisTox, demonstrates the limited generalizability of current state-of-the-art algorithms trained solely on biomedical data. Our study highlights the need for more diverse datasets and for targeted model development geared toward the agrochemical domain.
no_new_dataset
0.801548
2504.01541
Meng Yuan
Meng Yuan, Yutian Xiao, Wei Chen, Chu Zhao, Deqing Wang, Fuzhen Zhuang
Hyperbolic Diffusion Recommender Model
null
null
null
null
cs.IR cs.AI
http://creativecommons.org/licenses/by/4.0/
Diffusion models (DMs) have emerged as the new state-of-the-art family of deep generative models. To gain deeper insights into the limitations of diffusion models in recommender systems, we investigate the fundamental structural disparities between images and items. We find that items often exhibit distinct anisotropic and directional structures that are less prevalent in images. However, the traditional forward diffusion process continuously adds isotropic Gaussian noise, causing anisotropic signals to degrade into noise, which impairs the semantically meaningful representations in recommender systems. Inspired by the advancements in hyperbolic spaces, we propose a novel \textit{\textbf{H}yperbolic} \textit{\textbf{D}iffusion} \textit{\textbf{R}ecommender} \textit{\textbf{M}odel} (named HDRM). Unlike existing directional diffusion methods based on Euclidean space, the intrinsic non-Euclidean structure of hyperbolic space makes it particularly well-adapted for handling anisotropic diffusion processes. In particular, we begin by formulating concepts to characterize latent directed diffusion processes within a geometrically grounded hyperbolic space. Subsequently, we propose a novel hyperbolic latent diffusion process specifically tailored for users and items. Drawing upon the natural geometric attributes of hyperbolic spaces, we impose structural restrictions on the space to enhance hyperbolic diffusion propagation, thereby ensuring the preservation of the intrinsic topology of user-item graphs. Extensive experiments on three benchmark datasets demonstrate the effectiveness of HDRM.
[ { "version": "v1", "created": "Wed, 2 Apr 2025 09:27:40 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 08:02:56 GMT" } ]
2025-04-11T00:00:00
[ [ "Yuan", "Meng", "" ], [ "Xiao", "Yutian", "" ], [ "Chen", "Wei", "" ], [ "Zhao", "Chu", "" ], [ "Wang", "Deqing", "" ], [ "Zhuang", "Fuzhen", "" ] ]
TITLE: Hyperbolic Diffusion Recommender Model ABSTRACT: Diffusion models (DMs) have emerged as the new state-of-the-art family of deep generative models. To gain deeper insights into the limitations of diffusion models in recommender systems, we investigate the fundamental structural disparities between images and items. We find that items often exhibit distinct anisotropic and directional structures that are less prevalent in images. However, the traditional forward diffusion process continuously adds isotropic Gaussian noise, causing anisotropic signals to degrade into noise, which impairs the semantically meaningful representations in recommender systems. Inspired by the advancements in hyperbolic spaces, we propose a novel \textit{\textbf{H}yperbolic} \textit{\textbf{D}iffusion} \textit{\textbf{R}ecommender} \textit{\textbf{M}odel} (named HDRM). Unlike existing directional diffusion methods based on Euclidean space, the intrinsic non-Euclidean structure of hyperbolic space makes it particularly well-adapted for handling anisotropic diffusion processes. In particular, we begin by formulating concepts to characterize latent directed diffusion processes within a geometrically grounded hyperbolic space. Subsequently, we propose a novel hyperbolic latent diffusion process specifically tailored for users and items. Drawing upon the natural geometric attributes of hyperbolic spaces, we impose structural restrictions on the space to enhance hyperbolic diffusion propagation, thereby ensuring the preservation of the intrinsic topology of user-item graphs. Extensive experiments on three benchmark datasets demonstrate the effectiveness of HDRM.
no_new_dataset
0.949059
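The abstract does not spell out HDRM's operations, but any hyperbolic latent diffusion needs maps between the manifold and its tangent space. The sketch below shows exponential and logarithmic maps at the origin of the Poincare ball plus one assumed tangent-space noising step; the schedule, curvature, and noising choice are all illustrative assumptions, not the paper's method.

```python
# Poincare-ball exp/log maps plus an assumed tangent-space noising step.
import torch

EPS = 1e-6

def expmap0(v: torch.Tensor, c: float = 1.0) -> torch.Tensor:
    """Exponential map at the origin of the Poincare ball with curvature -c."""
    norm = v.norm(dim=-1, keepdim=True).clamp_min(EPS)
    return torch.tanh(c**0.5 * norm) * v / (c**0.5 * norm)

def logmap0(x: torch.Tensor, c: float = 1.0) -> torch.Tensor:
    """Logarithmic map at the origin (inverse of expmap0)."""
    norm = x.norm(dim=-1, keepdim=True).clamp_min(EPS)
    return torch.atanh((c**0.5 * norm).clamp(max=1 - EPS)) * x / (c**0.5 * norm)

def noising_step(x_ball: torch.Tensor, sigma: float) -> torch.Tensor:
    """One assumed forward-diffusion step: Gaussian noise in the tangent space."""
    v = logmap0(x_ball)
    return expmap0(v + sigma * torch.randn_like(v))

z = torch.randn(8, 16) * 0.1                 # Euclidean user/item embeddings
x = expmap0(z)                               # lift onto the ball
print(noising_step(x, sigma=0.05).norm(dim=-1).max())  # stays inside unit ball
```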
2504.03135
Junkai Zhang
Junkai Zhang, Bin Li, Shoujun Zhou, Yue Du
Hierarchical Modeling for Medical Visual Question Answering with Cross-Attention Fusion
null
null
null
null
cs.CV cs.AI
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Medical Visual Question Answering (Med-VQA) answers clinical questions using medical images, aiding diagnosis. Designing the MedVQA system holds profound importance in assisting clinical diagnosis and enhancing diagnostic accuracy. Building upon this foundation, Hierarchical Medical VQA extends Medical VQA by organizing medical questions into a hierarchical structure and making level-specific predictions to handle fine-grained distinctions. Recently, many studies have proposed hierarchical MedVQA tasks and established datasets. However, several issues still remain: (1) imperfect hierarchical modeling leads to poor differentiation between question levels, causing semantic fragmentation across hierarchies; (2) excessive reliance on implicit learning in Transformer-based cross-modal self-attention fusion methods obscures crucial local semantic correlations in medical scenarios. To address these issues, this study proposes HiCA-VQA, a method including two modules: Hierarchical Prompting for fine-grained medical questions and Hierarchical Answer Decoders. The hierarchical prompting module pre-aligns hierarchical text prompts with image features to guide the model in focusing on specific image regions according to question types, while the hierarchical decoder performs separate predictions for questions at different levels to improve accuracy across granularities. The framework also incorporates a cross-attention fusion module where images serve as queries and text as key-value pairs. Experiments on the Rad-Restruct benchmark demonstrate that the HiCA-VQA framework outperforms existing state-of-the-art methods in answering hierarchical fine-grained questions. This study provides an effective pathway for hierarchical visual question answering systems, advancing medical image understanding.
[ { "version": "v1", "created": "Fri, 4 Apr 2025 03:03:12 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 11:52:40 GMT" } ]
2025-04-11T00:00:00
[ [ "Zhang", "Junkai", "" ], [ "Li", "Bin", "" ], [ "Zhou", "Shoujun", "" ], [ "Du", "Yue", "" ] ]
TITLE: Hierarchical Modeling for Medical Visual Question Answering with Cross-Attention Fusion ABSTRACT: Medical Visual Question Answering (Med-VQA) answers clinical questions using medical images, aiding diagnosis. Designing the MedVQA system holds profound importance in assisting clinical diagnosis and enhancing diagnostic accuracy. Building upon this foundation, Hierarchical Medical VQA extends Medical VQA by organizing medical questions into a hierarchical structure and making level-specific predictions to handle fine-grained distinctions. Recently, many studies have proposed hierarchical MedVQA tasks and established datasets. However, several issues still remain: (1) imperfect hierarchical modeling leads to poor differentiation between question levels, causing semantic fragmentation across hierarchies; (2) excessive reliance on implicit learning in Transformer-based cross-modal self-attention fusion methods obscures crucial local semantic correlations in medical scenarios. To address these issues, this study proposes HiCA-VQA, a method including two modules: Hierarchical Prompting for fine-grained medical questions and Hierarchical Answer Decoders. The hierarchical prompting module pre-aligns hierarchical text prompts with image features to guide the model in focusing on specific image regions according to question types, while the hierarchical decoder performs separate predictions for questions at different levels to improve accuracy across granularities. The framework also incorporates a cross-attention fusion module where images serve as queries and text as key-value pairs. Experiments on the Rad-Restruct benchmark demonstrate that the HiCA-VQA framework outperforms existing state-of-the-art methods in answering hierarchical fine-grained questions. This study provides an effective pathway for hierarchical visual question answering systems, advancing medical image understanding.
no_new_dataset
0.945751
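The cross-attention fusion described above (images as queries, text as keys and values) can be sketched in a few lines of PyTorch. The dimensions, head count, and residual/norm arrangement below are assumptions, not the paper's exact module.

```python
# Cross-attention fusion sketch: image tokens query text tokens.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, img_tokens: torch.Tensor, txt_tokens: torch.Tensor):
        # Query = image, Key/Value = text, per the abstract's description.
        fused, attn_weights = self.attn(img_tokens, txt_tokens, txt_tokens)
        return self.norm(img_tokens + fused), attn_weights

img = torch.randn(2, 49, 256)   # e.g. a 7x7 patch grid of image features
txt = torch.randn(2, 20, 256)   # encoded question/prompt tokens
out, w = CrossAttentionFusion()(img, txt)
print(out.shape, w.shape)       # (2, 49, 256) (2, 49, 20)
```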
2504.03783
Haoyuan Li
Haoyuan Li, Mathias Funk, Jindong Wang, Aaqib Saeed
FAST: Federated Active Learning with Foundation Models for Communication-efficient Sampling and Training
null
null
null
null
cs.LG cs.AI cs.CV cs.DC
http://creativecommons.org/licenses/by/4.0/
Federated Active Learning (FAL) has emerged as a promising framework to leverage large quantities of unlabeled data across distributed clients while preserving data privacy. However, real-world deployments remain limited by high annotation costs and communication-intensive sampling processes, particularly in a cross-silo setting, when clients possess substantial local datasets. This paper addresses the crucial question: What is the best practice to reduce communication costs in human-in-the-loop learning with minimal annotator effort? Existing FAL methods typically rely on iterative annotation processes that separate active sampling from federated updates, leading to multiple rounds of expensive communication and annotation. In response, we introduce FAST, a two-pass FAL framework that harnesses foundation models for weak labeling in a preliminary pass, followed by a refinement pass focused exclusively on the most uncertain samples. By leveraging representation knowledge from foundation models and integrating refinement steps into a streamlined workflow, FAST substantially reduces the overhead incurred by iterative active sampling. Extensive experiments on diverse medical and natural image benchmarks demonstrate that FAST outperforms existing FAL methods by an average of 4.36% while reducing communication rounds eightfold under a limited 5% labeling budget.
[ { "version": "v1", "created": "Thu, 3 Apr 2025 16:12:03 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 14:42:57 GMT" } ]
2025-04-11T00:00:00
[ [ "Li", "Haoyuan", "" ], [ "Funk", "Mathias", "" ], [ "Wang", "Jindong", "" ], [ "Saeed", "Aaqib", "" ] ]
TITLE: FAST: Federated Active Learning with Foundation Models for Communication-efficient Sampling and Training ABSTRACT: Federated Active Learning (FAL) has emerged as a promising framework to leverage large quantities of unlabeled data across distributed clients while preserving data privacy. However, real-world deployments remain limited by high annotation costs and communication-intensive sampling processes, particularly in a cross-silo setting, when clients possess substantial local datasets. This paper addresses the crucial question: What is the best practice to reduce communication costs in human-in-the-loop learning with minimal annotator effort? Existing FAL methods typically rely on iterative annotation processes that separate active sampling from federated updates, leading to multiple rounds of expensive communication and annotation. In response, we introduce FAST, a two-pass FAL framework that harnesses foundation models for weak labeling in a preliminary pass, followed by a refinement pass focused exclusively on the most uncertain samples. By leveraging representation knowledge from foundation models and integrating refinement steps into a streamlined workflow, FAST substantially reduces the overhead incurred by iterative active sampling. Extensive experiments on diverse medical and natural image benchmarks demonstrate that FAST outperforms existing FAL methods by an average of 4.36% while reducing communication rounds eightfold under a limited 5% labeling budget.
no_new_dataset
0.947284
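A minimal sketch of the two-pass idea: accept confident weak labels from a foundation model in the first pass, then spend the annotation budget on the most uncertain remainder in the second. The confidence threshold, budget, and entropy criterion are assumptions standing in for the paper's actual sampling rules.

```python
# Two-pass selection sketch: weak-label the confident, annotate the uncertain.
import torch

def two_pass_split(probs: torch.Tensor, budget: int, conf_thresh: float = 0.9):
    """probs: (N, C) foundation-model class probabilities for unlabeled data."""
    conf, weak_labels = probs.max(dim=1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=1)

    confident = conf >= conf_thresh                   # pass 1: keep weak labels
    uncertain_pool = (~confident).nonzero(as_tuple=True)[0]
    # pass 2: spend the annotation budget on the highest-entropy samples
    order = entropy[uncertain_pool].argsort(descending=True)
    to_annotate = uncertain_pool[order[:budget]]
    return weak_labels, confident, to_annotate

probs = torch.softmax(torch.randn(1000, 10), dim=1)
weak, keep, ask = two_pass_split(probs, budget=50)
print(keep.sum().item(), "weakly labeled;", len(ask), "sent to annotators")
```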
2504.03970
Dahun Kim
Dahun Kim, AJ Piergiovanni, Ganesh Mallya, Anelia Angelova
VideoComp: Advancing Fine-Grained Compositional and Temporal Alignment in Video-Text Models
CVPR 2025, project page at https://github.com/google-deepmind/video_comp
null
null
null
cs.CV cs.AI cs.CL cs.IR
http://creativecommons.org/licenses/by/4.0/
We introduce VideoComp, a benchmark and learning framework for advancing video-text compositionality understanding, aimed at improving vision-language models (VLMs) in fine-grained temporal alignment. Unlike existing benchmarks focused on static image-text compositionality or isolated single-event videos, our benchmark targets alignment in continuous multi-event videos. Leveraging video-text datasets with temporally localized event captions (e.g. ActivityNet-Captions, YouCook2), we construct two compositional benchmarks, ActivityNet-Comp and YouCook2-Comp. We create challenging negative samples with subtle temporal disruptions such as reordering, action word replacement, partial captioning, and combined disruptions. These benchmarks comprehensively test models' compositional sensitivity across extended, cohesive video-text sequences. To improve model performance, we propose a hierarchical pairwise preference loss that strengthens alignment with temporally accurate pairs and gradually penalizes increasingly disrupted ones, encouraging fine-grained compositional learning. To mitigate the limited availability of densely annotated video data, we introduce a pretraining strategy that concatenates short video-caption pairs to simulate multi-event sequences. We evaluate video-text foundational models and large multimodal models (LMMs) on our benchmark, identifying both strengths and areas for improvement in compositionality. Overall, our work provides a comprehensive framework for evaluating and enhancing model capabilities in achieving fine-grained, temporally coherent video-text alignment.
[ { "version": "v1", "created": "Fri, 4 Apr 2025 22:24:30 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 10:41:20 GMT" } ]
2025-04-11T00:00:00
[ [ "Kim", "Dahun", "" ], [ "Piergiovanni", "AJ", "" ], [ "Mallya", "Ganesh", "" ], [ "Angelova", "Anelia", "" ] ]
TITLE: VideoComp: Advancing Fine-Grained Compositional and Temporal Alignment in Video-Text Models ABSTRACT: We introduce VideoComp, a benchmark and learning framework for advancing video-text compositionality understanding, aimed at improving vision-language models (VLMs) in fine-grained temporal alignment. Unlike existing benchmarks focused on static image-text compositionality or isolated single-event videos, our benchmark targets alignment in continuous multi-event videos. Leveraging video-text datasets with temporally localized event captions (e.g. ActivityNet-Captions, YouCook2), we construct two compositional benchmarks, ActivityNet-Comp and YouCook2-Comp. We create challenging negative samples with subtle temporal disruptions such as reordering, action word replacement, partial captioning, and combined disruptions. These benchmarks comprehensively test models' compositional sensitivity across extended, cohesive video-text sequences. To improve model performance, we propose a hierarchical pairwise preference loss that strengthens alignment with temporally accurate pairs and gradually penalizes increasingly disrupted ones, encouraging fine-grained compositional learning. To mitigate the limited availability of densely annotated video data, we introduce a pretraining strategy that concatenates short video-caption pairs to simulate multi-event sequences. We evaluate video-text foundational models and large multimodal models (LMMs) on our benchmark, identifying both strengths and areas for improvement in compositionality. Overall, our work provides a comprehensive framework for evaluating and enhancing model capabilities in achieving fine-grained, temporally coherent video-text alignment.
no_new_dataset
0.946745
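One plausible reading of the hierarchical pairwise preference loss is a hinge over negatives ranked by disruption severity, with margins that grow as the disruption strengthens. The sketch below implements that reading; the linear margin schedule is an assumption, not the paper's exact formulation.

```python
# Hierarchical pairwise preference loss sketch with severity-scaled margins.
import torch

def hierarchical_preference_loss(pos_score: torch.Tensor,
                                 neg_scores: torch.Tensor,
                                 base_margin: float = 0.1) -> torch.Tensor:
    """
    pos_score:  (B,)   similarity of the correct, temporally aligned pair
    neg_scores: (B, K) similarities of negatives ordered by severity
                       (column 0 = mildest disruption, K-1 = strongest)
    """
    K = neg_scores.size(1)
    margins = base_margin * torch.arange(1, K + 1, device=pos_score.device)
    # hinge: penalize any negative that comes within its margin of the positive
    viol = (neg_scores - pos_score.unsqueeze(1) + margins).clamp_min(0.0)
    return viol.mean()

pos = torch.randn(4)
negs = torch.randn(4, 3)
print(hierarchical_preference_loss(pos, negs))
```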
2504.04103
Jiaxun Zhang
Jiaxun Zhang, Yanchen Guan, Chengyue Wang, Haicheng Liao, Guohui Zhang, Zhenning Li
LATTE: Lightweight Attention-based Traffic Accident Anticipation Engine
Accept by Information Fusion (Elsevier)
null
null
null
cs.CE
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Accurately predicting traffic accidents in real time is a critical challenge in autonomous driving, particularly in resource-constrained environments. Existing solutions often suffer from high computational overhead or fail to adequately address the uncertainty of evolving traffic scenarios. This paper introduces LATTE, a Lightweight Attention-based Traffic Accident Anticipation Engine, which integrates computational efficiency with state-of-the-art performance. LATTE employs Efficient Multiscale Spatial Aggregation (EMSA) to capture spatial features across scales, Memory Attention Aggregation (MAA) to enhance temporal modeling, and Auxiliary Self-Attention Aggregation (AAA) to extract latent dependencies over extended sequences. Additionally, LATTE incorporates the Flamingo Alert-Assisted System (FAA), leveraging a vision-language model to provide real-time, cognitively accessible verbal hazard alerts, improving passenger situational awareness. Evaluations on benchmark datasets (DAD, CCD, A3D) demonstrate LATTE's superior predictive capabilities and computational efficiency. LATTE achieves a state-of-the-art 89.74% Average Precision (AP) on the DAD benchmark, with 5.4% higher mean Time-To-Accident (mTTA) than the second-best model, and maintains competitive mTTA at a Recall of 80% (TTA@R80, 4.04 s) while demonstrating robust accident anticipation across diverse driving conditions. Its lightweight design delivers a 93.14% reduction in floating-point operations (FLOPs) and a 31.58% decrease in parameter count (Params), enabling real-time operation on resource-limited hardware without compromising performance. Ablation studies confirm the effectiveness of LATTE's architectural components, while visualizations and failure case analyses highlight its practical applicability and areas for enhancement.
[ { "version": "v1", "created": "Sat, 5 Apr 2025 08:26:50 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 12:51:38 GMT" } ]
2025-04-11T00:00:00
[ [ "Zhang", "Jiaxun", "" ], [ "Guan", "Yanchen", "" ], [ "Wang", "Chengyue", "" ], [ "Liao", "Haicheng", "" ], [ "Zhang", "Guohui", "" ], [ "Li", "Zhenning", "" ] ]
TITLE: LATTE: Lightweight Attention-based Traffic Accident Anticipation Engine ABSTRACT: Accurately predicting traffic accidents in real time is a critical challenge in autonomous driving, particularly in resource-constrained environments. Existing solutions often suffer from high computational overhead or fail to adequately address the uncertainty of evolving traffic scenarios. This paper introduces LATTE, a Lightweight Attention-based Traffic Accident Anticipation Engine, which integrates computational efficiency with state-of-the-art performance. LATTE employs Efficient Multiscale Spatial Aggregation (EMSA) to capture spatial features across scales, Memory Attention Aggregation (MAA) to enhance temporal modeling, and Auxiliary Self-Attention Aggregation (AAA) to extract latent dependencies over extended sequences. Additionally, LATTE incorporates the Flamingo Alert-Assisted System (FAA), leveraging a vision-language model to provide real-time, cognitively accessible verbal hazard alerts, improving passenger situational awareness. Evaluations on benchmark datasets (DAD, CCD, A3D) demonstrate LATTE's superior predictive capabilities and computational efficiency. LATTE achieves a state-of-the-art 89.74% Average Precision (AP) on the DAD benchmark, with 5.4% higher mean Time-To-Accident (mTTA) than the second-best model, and maintains competitive mTTA at a Recall of 80% (TTA@R80, 4.04 s) while demonstrating robust accident anticipation across diverse driving conditions. Its lightweight design delivers a 93.14% reduction in floating-point operations (FLOPs) and a 31.58% decrease in parameter count (Params), enabling real-time operation on resource-limited hardware without compromising performance. Ablation studies confirm the effectiveness of LATTE's architectural components, while visualizations and failure case analyses highlight its practical applicability and areas for enhancement.
no_new_dataset
0.94887
2504.04753
Jiacheng Wei
Cheng Chen, Jiacheng Wei, Tianrun Chen, Chi Zhang, Xiaofeng Yang, Shangzhan Zhang, Bingchen Yang, Chuan-Sheng Foo, Guosheng Lin, Qixing Huang, Fayao Liu
CADCrafter: Generating Computer-Aided Design Models from Unconstrained Images
Accepted to CVPR2025
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Creating CAD digital twins from the physical world is crucial for manufacturing, design, and simulation. However, current methods typically rely on costly 3D scanning with labor-intensive post-processing. To provide a user-friendly design process, we explore the problem of reverse engineering from unconstrained real-world CAD images that can be easily captured by users of all experience levels. However, the scarcity of real-world CAD data poses challenges in directly training such models. To tackle these challenges, we propose CADCrafter, an image-to-parametric CAD model generation framework that trains solely on synthetic textureless CAD data while testing on real-world images. To bridge the significant representation disparity between images and parametric CAD models, we introduce a geometry encoder to accurately capture diverse geometric features. Moreover, the texture-invariant properties of the geometric features can also facilitate the generalization to real-world scenarios. Since compiling CAD parameter sequences into explicit CAD models is a non-differentiable process, the network training inherently lacks explicit geometric supervision. To impose geometric validity constraints, we employ direct preference optimization (DPO) to fine-tune our model with the automatic code checker feedback on CAD sequence quality. Furthermore, we collected a real-world dataset, comprised of multi-view images and corresponding CAD command sequence pairs, to evaluate our method. Experimental results demonstrate that our approach can robustly handle real unconstrained CAD images, and even generalize to unseen general objects.
[ { "version": "v1", "created": "Mon, 7 Apr 2025 06:01:35 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 14:54:12 GMT" } ]
2025-04-11T00:00:00
[ [ "Chen", "Cheng", "" ], [ "Wei", "Jiacheng", "" ], [ "Chen", "Tianrun", "" ], [ "Zhang", "Chi", "" ], [ "Yang", "Xiaofeng", "" ], [ "Zhang", "Shangzhan", "" ], [ "Yang", "Bingchen", "" ], [ "Foo", "Chuan-Sheng", "" ], [ "Lin", "Guosheng", "" ], [ "Huang", "Qixing", "" ], [ "Liu", "Fayao", "" ] ]
TITLE: CADCrafter: Generating Computer-Aided Design Models from Unconstrained Images ABSTRACT: Creating CAD digital twins from the physical world is crucial for manufacturing, design, and simulation. However, current methods typically rely on costly 3D scanning with labor-intensive post-processing. To provide a user-friendly design process, we explore the problem of reverse engineering from unconstrained real-world CAD images that can be easily captured by users of all experience levels. However, the scarcity of real-world CAD data poses challenges in directly training such models. To tackle these challenges, we propose CADCrafter, an image-to-parametric CAD model generation framework that trains solely on synthetic textureless CAD data while testing on real-world images. To bridge the significant representation disparity between images and parametric CAD models, we introduce a geometry encoder to accurately capture diverse geometric features. Moreover, the texture-invariant properties of the geometric features can also facilitate the generalization to real-world scenarios. Since compiling CAD parameter sequences into explicit CAD models is a non-differentiable process, the network training inherently lacks explicit geometric supervision. To impose geometric validity constraints, we employ direct preference optimization (DPO) to fine-tune our model with the automatic code checker feedback on CAD sequence quality. Furthermore, we collected a real-world dataset, comprised of multi-view images and corresponding CAD command sequence pairs, to evaluate our method. Experimental results demonstrate that our approach can robustly handle real unconstrained CAD images, and even generalize to unseen general objects.
new_dataset
0.968321
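The DPO fine-tuning step named above is a standard objective; the sketch below shows it over per-sequence log-probabilities, with "chosen" sequences assumed to be the ones passing the automatic CAD code checker and "rejected" the ones failing it. The pairing scheme and beta value are assumptions.

```python
# Standard DPO loss sketch over per-sequence log-probabilities.
import torch
import torch.nn.functional as F

def dpo_loss(logp_chosen: torch.Tensor, logp_rejected: torch.Tensor,
             ref_logp_chosen: torch.Tensor, ref_logp_rejected: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    """-log sigmoid(beta * (policy advantage of chosen over rejected))."""
    chosen_ratio = logp_chosen - ref_logp_chosen      # vs frozen reference model
    rejected_ratio = logp_rejected - ref_logp_rejected
    return -F.logsigmoid(beta * (chosen_ratio - rejected_ratio)).mean()

# Placeholder per-sequence log-probs from the policy and a frozen reference.
lp_c, lp_r = torch.randn(8), torch.randn(8)
ref_c, ref_r = torch.randn(8), torch.randn(8)
print(dpo_loss(lp_c, lp_r, ref_c, ref_r))
```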
2504.04834
Pengju Sun
Pengju Sun, Banglei Guan, Zhenbao Yu, Yang Shang, Qifeng Yu, Daniel Barath
Learning Affine Correspondences by Integrating Geometric Constraints
Accepted by IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2025
null
null
null
cs.CV
http://creativecommons.org/publicdomain/zero/1.0/
Affine correspondences have received significant attention due to their benefits in tasks like image matching and pose estimation. Existing methods for extracting affine correspondences still have many limitations in terms of performance; thus, exploring a new paradigm is crucial. In this paper, we present a new pipeline designed for extracting accurate affine correspondences by integrating dense matching and geometric constraints. Specifically, a novel extraction framework is introduced, with the aid of dense matching and a novel keypoint scale and orientation estimator. For this purpose, we propose loss functions based on geometric constraints, which can effectively improve accuracy by supervising neural networks to learn feature geometry. Experimental results show that our method outperforms existing ones in accuracy and robustness on image matching tasks. To further demonstrate the effectiveness of the proposed method, we applied it to relative pose estimation. Affine correspondences extracted by our method lead to more accurate poses than the baselines on a range of real-world datasets. The code is available at https://github.com/stilcrad/DenseAffine.
[ { "version": "v1", "created": "Mon, 7 Apr 2025 08:44:50 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 13:40:31 GMT" } ]
2025-04-11T00:00:00
[ [ "Sun", "Pengju", "" ], [ "Guan", "Banglei", "" ], [ "Yu", "Zhenbao", "" ], [ "Shang", "Yang", "" ], [ "Yu", "Qifeng", "" ], [ "Barath", "Daniel", "" ] ]
TITLE: Learning Affine Correspondences by Integrating Geometric Constraints ABSTRACT: Affine correspondences have received significant attention due to their benefits in tasks like image matching and pose estimation. Existing methods for extracting affine correspondences still have many limitations in terms of performance; thus, exploring a new paradigm is crucial. In this paper, we present a new pipeline designed for extracting accurate affine correspondences by integrating dense matching and geometric constraints. Specifically, a novel extraction framework is introduced, with the aid of dense matching and a novel keypoint scale and orientation estimator. For this purpose, we propose loss functions based on geometric constraints, which can effectively improve accuracy by supervising neural networks to learn feature geometry. Experimental results show that our method outperforms existing ones in accuracy and robustness on image matching tasks. To further demonstrate the effectiveness of the proposed method, we applied it to relative pose estimation. Affine correspondences extracted by our method lead to more accurate poses than the baselines on a range of real-world datasets. The code is available at https://github.com/stilcrad/DenseAffine.
no_new_dataset
0.949716
2504.05730
Teng Shi
Teng Shi, Jun Xu, Xiao Zhang, Xiaoxue Zang, Kai Zheng, Yang Song and Enyun Yu
Unified Generative Search and Recommendation
null
null
null
null
cs.IR
http://creativecommons.org/licenses/by/4.0/
Modern commercial platforms typically offer both search and recommendation functionalities to serve diverse user needs, making joint modeling of these tasks an appealing direction. While prior work has shown that integrating search and recommendation can be mutually beneficial, it also reveals a performance trade-off: enhancements in one task often come at the expense of the other. This challenge arises from their distinct information requirements: search emphasizes semantic relevance between queries and items, whereas recommendation depends more on collaborative signals among users and items. Effectively addressing this trade-off requires tackling two key problems: (1) integrating both semantic and collaborative signals into item representations, and (2) guiding the model to distinguish and adapt to the unique demands of search and recommendation. The emergence of generative retrieval with Large Language Models (LLMs) presents new possibilities. This paradigm encodes items as identifiers and frames both search and recommendation as sequential generation tasks, offering the flexibility to leverage multiple identifiers and task-specific prompts. In light of this, we introduce GenSAR, a unified generative framework for balanced search and recommendation. Our approach designs dual-purpose identifiers and tailored training strategies to incorporate complementary signals and align with task-specific objectives. Experiments on both public and commercial datasets demonstrate that GenSAR effectively reduces the trade-off and achieves state-of-the-art performance on both tasks.
[ { "version": "v1", "created": "Tue, 8 Apr 2025 07:03:08 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 06:34:28 GMT" } ]
2025-04-11T00:00:00
[ [ "Shi", "Teng", "" ], [ "Xu", "Jun", "" ], [ "Zhang", "Xiao", "" ], [ "Zang", "Xiaoxue", "" ], [ "Zheng", "Kai", "" ], [ "Song", "Yang", "" ], [ "Yu", "Enyun", "" ] ]
TITLE: Unified Generative Search and Recommendation ABSTRACT: Modern commercial platforms typically offer both search and recommendation functionalities to serve diverse user needs, making joint modeling of these tasks an appealing direction. While prior work has shown that integrating search and recommendation can be mutually beneficial, it also reveals a performance trade-off: enhancements in one task often come at the expense of the other. This challenge arises from their distinct information requirements: search emphasizes semantic relevance between queries and items, whereas recommendation depends more on collaborative signals among users and items. Effectively addressing this trade-off requires tackling two key problems: (1) integrating both semantic and collaborative signals into item representations, and (2) guiding the model to distinguish and adapt to the unique demands of search and recommendation. The emergence of generative retrieval with Large Language Models (LLMs) presents new possibilities. This paradigm encodes items as identifiers and frames both search and recommendation as sequential generation tasks, offering the flexibility to leverage multiple identifiers and task-specific prompts. In light of this, we introduce GenSAR, a unified generative framework for balanced search and recommendation. Our approach designs dual-purpose identifiers and tailored training strategies to incorporate complementary signals and align with task-specific objectives. Experiments on both public and commercial datasets demonstrate that GenSAR effectively reduces the trade-off and achieves state-of-the-art performance on both tasks.
no_new_dataset
0.940517
2504.06265
Bojana Rankovi\'c
Bojana Rankovi\'c, Philippe Schwaller
GOLLuM: Gaussian Process Optimized LLMs -- Reframing LLM Finetuning through Bayesian Optimization
null
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by/4.0/
Large Language Models (LLMs) can encode complex relationships in their latent spaces, yet harnessing them for optimization under uncertainty remains challenging. We address this gap with a novel architecture that reframes LLM finetuning as Gaussian process (GP) marginal likelihood optimization via deep kernel methods. We introduce LLM-based deep kernels, jointly optimized with GPs to preserve the benefits of both: LLMs provide a rich and flexible input space for Bayesian optimization, and GPs model this space with predictive uncertainty for more efficient sampling. Applied to Buchwald-Hartwig reaction optimization, our method nearly doubles the discovery rate of high-performing reactions compared to static LLM embeddings (from 24% to 43% coverage of the top 5% reactions in just 50 optimization iterations). We also observe a 14% improvement over domain-specific representations without requiring specialized features. Extensive empirical evaluation across 19 benchmarks - ranging from general chemistry to reaction and molecular property optimization - demonstrates our method's robustness, generality, and consistent improvements across: (1) tasks, (2) LLM architectures (encoder, decoder, encoder-decoder), (3) pretraining domains (chemistry-related or general-purpose) and (4) hyperparameter settings (tuned once on a single dataset). Finally, we explain these improvements: joint LLM-GP optimization through marginal likelihood implicitly performs contrastive learning, aligning representations to produce (1) better-structured embedding spaces, (2) improved uncertainty calibration, and (3) more efficient sampling - without requiring any external loss. This work provides both practical advances in sample-efficient optimization and insights into what makes effective Bayesian optimization.
[ { "version": "v1", "created": "Tue, 8 Apr 2025 17:59:57 GMT" }, { "version": "v2", "created": "Wed, 9 Apr 2025 23:45:44 GMT" } ]
2025-04-11T00:00:00
[ [ "Ranković", "Bojana", "" ], [ "Schwaller", "Philippe", "" ] ]
TITLE: GOLLuM: Gaussian Process Optimized LLMs -- Reframing LLM Finetuning through Bayesian Optimization ABSTRACT: Large Language Models (LLMs) can encode complex relationships in their latent spaces, yet harnessing them for optimization under uncertainty remains challenging. We address this gap with a novel architecture that reframes LLM finetuning as Gaussian process (GP) marginal likelihood optimization via deep kernel methods. We introduce LLM-based deep kernels, jointly optimized with GPs to preserve the benefits of both: LLMs provide a rich and flexible input space for Bayesian optimization, and GPs model this space with predictive uncertainty for more efficient sampling. Applied to Buchwald-Hartwig reaction optimization, our method nearly doubles the discovery rate of high-performing reactions compared to static LLM embeddings (from 24% to 43% coverage of the top 5% reactions in just 50 optimization iterations). We also observe a 14% improvement over domain-specific representations without requiring specialized features. Extensive empirical evaluation across 19 benchmarks - ranging from general chemistry to reaction and molecular property optimization - demonstrates our method's robustness, generality, and consistent improvements across: (1) tasks, (2) LLM architectures (encoder, decoder, encoder-decoder), (3) pretraining domains (chemistry-related or general-purpose) and (4) hyperparameter settings (tuned once on a single dataset). Finally, we explain these improvements: joint LLM-GP optimization through marginal likelihood implicitly performs contrastive learning, aligning representations to produce (1) better-structured embedding spaces, (2) improved uncertainty calibration, and (3) more efficient sampling - without requiring any external loss. This work provides both practical advances in sample-efficient optimization and insights into what makes effective Bayesian optimization.
no_new_dataset
0.947914
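The core mechanism, jointly optimizing a learned feature extractor and a GP through the exact marginal likelihood, is standard deep kernel learning and can be sketched with GPyTorch. A small MLP stands in for the finetuned LLM encoder; the data, dimensions, and hyperparameters are placeholders.

```python
# Deep-kernel GP sketch: feature extractor + GP trained jointly via the MLL.
import torch
import gpytorch

class DeepKernelGP(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood, feat_dim=16):
        super().__init__(train_x, train_y, likelihood)
        # Stand-in for an LLM embedding head; the paper finetunes the LLM itself.
        self.extractor = torch.nn.Sequential(
            torch.nn.Linear(train_x.size(-1), 64), torch.nn.ReLU(),
            torch.nn.Linear(64, feat_dim))
        self.mean_module = gpytorch.means.ConstantMean()
        self.covar_module = gpytorch.kernels.ScaleKernel(gpytorch.kernels.RBFKernel())

    def forward(self, x):
        z = self.extractor(x)                 # kernel operates on learned features
        return gpytorch.distributions.MultivariateNormal(
            self.mean_module(z), self.covar_module(z))

X, y = torch.randn(50, 32), torch.randn(50)
likelihood = gpytorch.likelihoods.GaussianLikelihood()
model = DeepKernelGP(X, y, likelihood)
mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
opt = torch.optim.Adam(model.parameters(), lr=0.01)

model.train(); likelihood.train()
for _ in range(100):                          # joint extractor + GP training
    opt.zero_grad()
    loss = -mll(model(X), y)
    loss.backward(); opt.step()
print("final negative MLL:", loss.item())
```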
2504.06293
Amin Haeri
Amin Haeri, Jonathan Vitrano, Mahdi Ghelichi
Generative AI Enhanced Financial Risk Management Information Retrieval
10 pages, 3 figures, 2 tables, 1 equation
null
null
null
q-fin.RM cs.LG
http://creativecommons.org/licenses/by/4.0/
Risk management in finance involves recognizing, evaluating, and addressing financial risks to maintain stability and ensure regulatory compliance. Extracting relevant insights from extensive regulatory documents is a complex challenge requiring advanced retrieval and language models. This paper introduces RiskData, a dataset specifically curated for finetuning embedding models in risk management, and RiskEmbed, a finetuned embedding model designed to improve retrieval accuracy in financial question-answering systems. The dataset is derived from 94 regulatory guidelines published by the Office of the Superintendent of Financial Institutions (OSFI) from 1991 to 2024. We finetune a state-of-the-art sentence BERT embedding model to enhance domain-specific retrieval performance, particularly for Retrieval-Augmented Generation (RAG) systems. Experimental results demonstrate that RiskEmbed significantly outperforms general-purpose and financial embedding models, achieving substantial improvements in ranking metrics. By open-sourcing both the dataset and the model, we provide a valuable resource for financial institutions and researchers aiming to develop more accurate and efficient risk management AI solutions.
[ { "version": "v1", "created": "Fri, 4 Apr 2025 20:42:38 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 03:08:59 GMT" } ]
2025-04-11T00:00:00
[ [ "Haeri", "Amin", "" ], [ "Vitrano", "Jonathan", "" ], [ "Ghelichi", "Mahdi", "" ] ]
TITLE: Generative AI Enhanced Financial Risk Management Information Retrieval ABSTRACT: Risk management in finance involves recognizing, evaluating, and addressing financial risks to maintain stability and ensure regulatory compliance. Extracting relevant insights from extensive regulatory documents is a complex challenge requiring advanced retrieval and language models. This paper introduces RiskData, a dataset specifically curated for finetuning embedding models in risk management, and RiskEmbed, a finetuned embedding model designed to improve retrieval accuracy in financial question-answering systems. The dataset is derived from 94 regulatory guidelines published by the Office of the Superintendent of Financial Institutions (OSFI) from 1991 to 2024. We finetune a state-of-the-art sentence BERT embedding model to enhance domain-specific retrieval performance, particularly for Retrieval-Augmented Generation (RAG) systems. Experimental results demonstrate that RiskEmbed significantly outperforms general-purpose and financial embedding models, achieving substantial improvements in ranking metrics. By open-sourcing both the dataset and the model, we provide a valuable resource for financial institutions and researchers aiming to develop more accurate and efficient risk management AI solutions.
new_dataset
0.963231
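Finetuning a sentence-BERT model on question/passage pairs with in-batch negatives is the standard recipe for this kind of retrieval finetuning; the sketch below uses the sentence-transformers training API. The base model, the toy pairs, and the loss choice are assumptions, and RiskData itself is not reproduced here.

```python
# Embedding-model finetuning sketch with in-batch negatives (toy pairs).
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

model = SentenceTransformer("all-MiniLM-L6-v2")   # assumed base encoder

pairs = [
    InputExample(texts=["What is the capital adequacy requirement?",
                        "Institutions must maintain a minimum capital ratio ..."]),
    InputExample(texts=["How is liquidity risk monitored?",
                        "Liquidity coverage is reported quarterly ..."]),
]
loader = DataLoader(pairs, shuffle=True, batch_size=2)
# In-batch negatives: each question should score highest with its own passage.
loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=10)
```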
2504.06301
Mohsen Jenadeleh
Mohsen Jenadeleh, Jon Sneyers, Panqi Jia, Shima Mohammadi, Joao Ascenso, Dietmar Saupe
Subjective Visual Quality Assessment for High-Fidelity Learning-Based Image Compression
7 pages, 5 figures, 3 tables, submitted to QoMEX 2025
null
null
null
eess.IV cs.CV
http://creativecommons.org/licenses/by-sa/4.0/
Learning-based image compression methods have recently emerged as promising alternatives to traditional codecs, offering improved rate-distortion performance and perceptual quality. JPEG AI represents the latest standardized framework in this domain, leveraging deep neural networks for high-fidelity image reconstruction. In this study, we present a comprehensive subjective visual quality assessment of JPEG AI-compressed images using the JPEG AIC-3 methodology, which quantifies perceptual differences in terms of Just Noticeable Difference (JND) units. We generated a dataset of 50 compressed images with fine-grained distortion levels from five diverse sources. A large-scale crowdsourced experiment collected 96,200 triplet responses from 459 participants. We reconstructed JND-based quality scales using a unified model based on boosted and plain triplet comparisons. Additionally, we evaluated the alignment of multiple objective image quality metrics with human perception in the high-fidelity range. The CVVDP metric achieved the overall highest performance; however, most metrics including CVVDP were overly optimistic in predicting the quality of JPEG AI-compressed images. These findings emphasize the necessity for rigorous subjective evaluations in the development and benchmarking of modern image codecs, particularly in the high-fidelity range. Another technical contribution is the introduction of the well-known Meng-Rosenthal-Rubin statistical test to the field of Quality of Experience research. This test can reliably assess the significance of differences in the performance of quality metrics, in terms of correlation between metric scores and ground truth. The complete dataset, including all subjective scores, is publicly available at https://github.com/jpeg-aic/dataset-JPEG-AI-SDR25.
[ { "version": "v1", "created": "Mon, 7 Apr 2025 15:16:58 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 11:37:08 GMT" } ]
2025-04-11T00:00:00
[ [ "Jenadeleh", "Mohsen", "" ], [ "Sneyers", "Jon", "" ], [ "Jia", "Panqi", "" ], [ "Mohammadi", "Shima", "" ], [ "Ascenso", "Joao", "" ], [ "Saupe", "Dietmar", "" ] ]
TITLE: Subjective Visual Quality Assessment for High-Fidelity Learning-Based Image Compression ABSTRACT: Learning-based image compression methods have recently emerged as promising alternatives to traditional codecs, offering improved rate-distortion performance and perceptual quality. JPEG AI represents the latest standardized framework in this domain, leveraging deep neural networks for high-fidelity image reconstruction. In this study, we present a comprehensive subjective visual quality assessment of JPEG AI-compressed images using the JPEG AIC-3 methodology, which quantifies perceptual differences in terms of Just Noticeable Difference (JND) units. We generated a dataset of 50 compressed images with fine-grained distortion levels from five diverse sources. A large-scale crowdsourced experiment collected 96,200 triplet responses from 459 participants. We reconstructed JND-based quality scales using a unified model based on boosted and plain triplet comparisons. Additionally, we evaluated the alignment of multiple objective image quality metrics with human perception in the high-fidelity range. The CVVDP metric achieved the overall highest performance; however, most metrics including CVVDP were overly optimistic in predicting the quality of JPEG AI-compressed images. These findings emphasize the necessity for rigorous subjective evaluations in the development and benchmarking of modern image codecs, particularly in the high-fidelity range. Another technical contribution is the introduction of the well-known Meng-Rosenthal-Rubin statistical test to the field of Quality of Experience research. This test can reliably assess the significance of differences in the performance of quality metrics, in terms of correlation between metric scores and ground truth. The complete dataset, including all subjective scores, is publicly available at https://github.com/jpeg-aic/dataset-JPEG-AI-SDR25.
new_dataset
0.963403
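The Meng-Rosenthal-Rubin test mentioned above compares two dependent correlations that share one variable, e.g. two quality metrics correlated with the same subjective scores over N images. A minimal implementation of the 1992 formula (the example numbers are placeholders):

```python
# Meng-Rosenthal-Rubin (1992) test for two dependent correlations.
import numpy as np
from scipy.stats import norm

def meng_rosenthal_rubin(r1: float, r2: float, r12: float, n: int):
    """
    r1, r2: correlations of metric 1 / metric 2 with the ground truth
    r12:    correlation between the two metrics' scores
    n:      number of samples
    Returns (z statistic, two-sided p-value).
    """
    z1, z2 = np.arctanh(r1), np.arctanh(r2)          # Fisher transform
    rbar = (r1**2 + r2**2) / 2.0
    f = min((1.0 - r12) / (2.0 * (1.0 - rbar)), 1.0)
    h = (1.0 - f * rbar) / (1.0 - rbar)
    z = (z1 - z2) * np.sqrt((n - 3) / (2.0 * (1.0 - r12) * h))
    return z, 2.0 * norm.sf(abs(z))

z, p = meng_rosenthal_rubin(r1=0.92, r2=0.88, r12=0.85, n=50)
print(f"z = {z:.3f}, p = {p:.4f}")
```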
2504.06643
Tiange Huang
Tiange Huang and Yongjun Li
AMAD: AutoMasked Attention for Unsupervised Multivariate Time Series Anomaly Detection
fix img issues
null
null
null
cs.LG cs.AI
http://creativecommons.org/licenses/by-nc-nd/4.0/
Unsupervised multivariate time series anomaly detection (UMTSAD) plays a critical role in various domains, including finance, networks, and sensor systems. In recent years, due to the outstanding performance of deep learning in general sequential tasks, many models have been specialized for deep UMTSAD tasks and have achieved impressive results, particularly those based on the Transformer and self-attention mechanisms. However, the sequence anomaly association assumptions underlying these models are often limited to specific predefined patterns and scenarios, such as concentrated or peak anomaly patterns. These limitations hinder their ability to generalize to diverse anomaly situations, especially where the lack of labels poses significant challenges. To address these issues, we propose AMAD, which integrates \textbf{A}uto\textbf{M}asked Attention for UMTS\textbf{AD} scenarios. AMAD introduces a novel structure based on the AutoMask mechanism and an attention mixup module, forming a simple yet generalized anomaly association representation framework. This framework is further enhanced by a Max-Min training strategy and a Local-Global contrastive learning approach. By combining multi-scale feature extraction with automatic relative association modeling, AMAD provides a robust and adaptable solution to UMTSAD challenges. Extensive experimental results demonstrate that the proposed model achieves competitive performance compared to SOTA methods across a variety of datasets.
[ { "version": "v1", "created": "Wed, 9 Apr 2025 07:32:59 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 02:37:53 GMT" } ]
2025-04-11T00:00:00
[ [ "Huang", "Tiange", "" ], [ "Li", "Yongjun", "" ] ]
TITLE: AMAD: AutoMasked Attention for Unsupervised Multivariate Time Series Anomaly Detection ABSTRACT: Unsupervised multivariate time series anomaly detection (UMTSAD) plays a critical role in various domains, including finance, networks, and sensor systems. In recent years, due to the outstanding performance of deep learning in general sequential tasks, many models have been specialized for deep UMTSAD tasks and have achieved impressive results, particularly those based on the Transformer and self-attention mechanisms. However, the sequence anomaly association assumptions underlying these models are often limited to specific predefined patterns and scenarios, such as concentrated or peak anomaly patterns. These limitations hinder their ability to generalize to diverse anomaly situations, especially where the lack of labels poses significant challenges. To address these issues, we propose AMAD, which integrates \textbf{A}uto\textbf{M}asked Attention for UMTS\textbf{AD} scenarios. AMAD introduces a novel structure based on the AutoMask mechanism and an attention mixup module, forming a simple yet generalized anomaly association representation framework. This framework is further enhanced by a Max-Min training strategy and a Local-Global contrastive learning approach. By combining multi-scale feature extraction with automatic relative association modeling, AMAD provides a robust and adaptable solution to UMTSAD challenges. Extensive experimental results demonstrate that the proposed model achieves competitive performance compared to SOTA methods across a variety of datasets.
no_new_dataset
0.946695
2504.06742
Alexandra Ertl
Alexandra Ertl, Shuhan Xiao, Stefan Denner, Robin Peretzke, David Zimmerer, Peter Neher, Fabian Isensee, Klaus Maier-Hein
nnLandmark: A Self-Configuring Method for 3D Medical Landmark Detection
null
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Landmark detection plays a crucial role in medical imaging tasks that rely on precise spatial localization, including specific applications in diagnosis, treatment planning, image registration, and surgical navigation. However, manual annotation is labor-intensive and requires expert knowledge. While deep learning shows promise in automating this task, progress is hindered by limited public datasets, inconsistent benchmarks, and non-standardized baselines, restricting reproducibility, fair comparisons, and model generalizability. This work introduces nnLandmark, a self-configuring deep learning framework for 3D medical landmark detection, adapting nnU-Net to perform heatmap-based regression. By leveraging nnU-Net's automated configuration, nnLandmark eliminates the need for manual parameter tuning, offering out-of-the-box usability. It achieves state-of-the-art accuracy across two public datasets, with a mean radial error (MRE) of 1.5 mm on the Mandibular Molar Landmark (MML) dental CT dataset and 1.2 mm for anatomical fiducials on a brain MRI dataset (AFIDs), where nnLandmark aligns with the inter-rater variability of 1.5 mm. With its strong generalization, reproducibility, and ease of deployment, nnLandmark establishes a reliable baseline for 3D landmark detection, supporting research in anatomical localization and clinical workflows that depend on precise landmark identification. The code will be available soon.
[ { "version": "v1", "created": "Wed, 9 Apr 2025 09:53:39 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 07:04:29 GMT" } ]
2025-04-11T00:00:00
[ [ "Ertl", "Alexandra", "" ], [ "Xiao", "Shuhan", "" ], [ "Denner", "Stefan", "" ], [ "Peretzke", "Robin", "" ], [ "Zimmerer", "David", "" ], [ "Neher", "Peter", "" ], [ "Isensee", "Fabian", "" ], [ "Maier-Hein", "Klaus", "" ] ]
TITLE: nnLandmark: A Self-Configuring Method for 3D Medical Landmark Detection ABSTRACT: Landmark detection plays a crucial role in medical imaging tasks that rely on precise spatial localization, including specific applications in diagnosis, treatment planning, image registration, and surgical navigation. However, manual annotation is labor-intensive and requires expert knowledge. While deep learning shows promise in automating this task, progress is hindered by limited public datasets, inconsistent benchmarks, and non-standardized baselines, restricting reproducibility, fair comparisons, and model generalizability. This work introduces nnLandmark, a self-configuring deep learning framework for 3D medical landmark detection, adapting nnU-Net to perform heatmap-based regression. By leveraging nnU-Net's automated configuration, nnLandmark eliminates the need for manual parameter tuning, offering out-of-the-box usability. It achieves state-of-the-art accuracy across two public datasets, with a mean radial error (MRE) of 1.5 mm on the Mandibular Molar Landmark (MML) dental CT dataset and 1.2 mm for anatomical fiducials on a brain MRI dataset (AFIDs), where nnLandmark aligns with the inter-rater variability of 1.5 mm. With its strong generalization, reproducibility, and ease of deployment, nnLandmark establishes a reliable baseline for 3D landmark detection, supporting research in anatomical localization and clinical workflows that depend on precise landmark identification. The code will be available soon.
no_new_dataset
0.947478
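Heatmap-based landmark regression, the formulation nnLandmark adapts nnU-Net to, reduces to rendering a Gaussian target around each landmark and decoding a predicted heatmap back to coordinates. The sketch below is illustrative only; the sigma, voxel spacing, and argmax decoder are assumptions.

```python
# 3D heatmap regression sketch: Gaussian target rendering + argmax decoding.
import numpy as np

def gaussian_heatmap_3d(shape, center, sigma=2.0):
    """Unit-peak Gaussian heatmap of `shape` (D, H, W) centered on a voxel."""
    zz, yy, xx = np.meshgrid(*[np.arange(s) for s in shape], indexing="ij")
    d2 = (zz - center[0])**2 + (yy - center[1])**2 + (xx - center[2])**2
    return np.exp(-d2 / (2.0 * sigma**2))

def decode_heatmap(hm):
    """Recover the landmark voxel as the heatmap argmax."""
    return np.unravel_index(np.argmax(hm), hm.shape)

target = gaussian_heatmap_3d((32, 64, 64), center=(10, 30, 40))
pred = target + 0.01 * np.random.randn(*target.shape)   # fake network output
voxel = decode_heatmap(pred)
mre_mm = np.linalg.norm((np.array(voxel) - (10, 30, 40)) * 0.5)  # assumed 0.5 mm spacing
print(voxel, f"radial error = {mre_mm:.2f} mm")
```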
2504.06752
Rishubh Parihar
Rishubh Parihar, Vaibhav Agrawal, Sachidanand VS, R. Venkatesh Babu
Compass Control: Multi Object Orientation Control for Text-to-Image Generation
CVPR 2025 Camera Ready. Project page: https://rishubhpar.github.io/compasscontrol
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Existing approaches for controlling text-to-image diffusion models, while powerful, do not allow for explicit 3D object-centric control, such as precise control of object orientation. In this work, we address the problem of multi-object orientation control in text-to-image diffusion models. This enables the generation of diverse multi-object scenes with precise orientation control for each object. The key idea is to condition the diffusion model with a set of orientation-aware \textbf{compass} tokens, one for each object, along with text tokens. A light-weight encoder network predicts these compass tokens taking object orientation as the input. The model is trained on a synthetic dataset of procedurally generated scenes, each containing one or two 3D assets on a plain background. However, directly training this framework results in poor orientation control and leads to entanglement among objects. To mitigate this, we intervene in the generation process and constrain the cross-attention maps of each compass token to its corresponding object regions. The trained model is able to achieve precise orientation control for a) complex objects not seen during training and b) multi-object scenes with more than two objects, indicating strong generalization capabilities. Further, when combined with personalization methods, our method precisely controls the orientation of the new object in diverse contexts. Our method achieves state-of-the-art orientation control and text alignment, quantified with extensive evaluations and a user study.
[ { "version": "v1", "created": "Wed, 9 Apr 2025 10:15:15 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 04:59:11 GMT" } ]
2025-04-11T00:00:00
[ [ "Parihar", "Rishubh", "" ], [ "Agrawal", "Vaibhav", "" ], [ "VS", "Sachidanand", "" ], [ "Babu", "R. Venkatesh", "" ] ]
TITLE: Compass Control: Multi Object Orientation Control for Text-to-Image Generation ABSTRACT: Existing approaches for controlling text-to-image diffusion models, while powerful, do not allow for explicit 3D object-centric control, such as precise control of object orientation. In this work, we address the problem of multi-object orientation control in text-to-image diffusion models. This enables the generation of diverse multi-object scenes with precise orientation control for each object. The key idea is to condition the diffusion model with a set of orientation-aware \textbf{compass} tokens, one for each object, along with text tokens. A light-weight encoder network predicts these compass tokens taking object orientation as the input. The model is trained on a synthetic dataset of procedurally generated scenes, each containing one or two 3D assets on a plain background. However, directly training this framework results in poor orientation control and leads to entanglement among objects. To mitigate this, we intervene in the generation process and constrain the cross-attention maps of each compass token to its corresponding object regions. The trained model is able to achieve precise orientation control for a) complex objects not seen during training and b) multi-object scenes with more than two objects, indicating strong generalization capabilities. Further, when combined with personalization methods, our method precisely controls the orientation of the new object in diverse contexts. Our method achieves state-of-the-art orientation control and text alignment, quantified with extensive evaluations and a user study.
no_new_dataset
0.822902
2504.06801
Rishubh Parihar
Rishubh Parihar, Srinjay Sarkar, Sarthak Vora, Jogendra Kundu, R. Venkatesh Babu
MonoPlace3D: Learning 3D-Aware Object Placement for 3D Monocular Detection
CVPR 2025 Camera Ready. Project page - https://rishubhpar.github.io/monoplace3D
null
null
null
cs.CV
http://creativecommons.org/licenses/by/4.0/
Current monocular 3D detectors are held back by the limited diversity and scale of real-world datasets. While data augmentation certainly helps, it's particularly difficult to generate realistic scene-aware augmented data for outdoor settings. Most current approaches to synthetic data generation focus on realistic object appearance through improved rendering techniques. However, we show that where and how objects are positioned is just as crucial for training effective 3D monocular detectors. The key obstacle lies in automatically determining realistic object placement parameters - including position, dimensions, and directional alignment when introducing synthetic objects into actual scenes. To address this, we introduce MonoPlace3D, a novel system that considers the 3D scene content to create realistic augmentations. Specifically, given a background scene, MonoPlace3D learns a distribution over plausible 3D bounding boxes. Subsequently, we render realistic objects and place them according to the locations sampled from the learned distribution. Our comprehensive evaluation on two standard datasets KITTI and NuScenes, demonstrates that MonoPlace3D significantly improves the accuracy of multiple existing monocular 3D detectors while being highly data efficient.
[ { "version": "v1", "created": "Wed, 9 Apr 2025 11:47:48 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 05:03:02 GMT" } ]
2025-04-11T00:00:00
[ [ "Parihar", "Rishubh", "" ], [ "Sarkar", "Srinjay", "" ], [ "Vora", "Sarthak", "" ], [ "Kundu", "Jogendra", "" ], [ "Babu", "R. Venkatesh", "" ] ]
TITLE: MonoPlace3D: Learning 3D-Aware Object Placement for 3D Monocular Detection ABSTRACT: Current monocular 3D detectors are held back by the limited diversity and scale of real-world datasets. While data augmentation certainly helps, it's particularly difficult to generate realistic scene-aware augmented data for outdoor settings. Most current approaches to synthetic data generation focus on realistic object appearance through improved rendering techniques. However, we show that where and how objects are positioned is just as crucial for training effective 3D monocular detectors. The key obstacle lies in automatically determining realistic object placement parameters - including position, dimensions, and directional alignment when introducing synthetic objects into actual scenes. To address this, we introduce MonoPlace3D, a novel system that considers the 3D scene content to create realistic augmentations. Specifically, given a background scene, MonoPlace3D learns a distribution over plausible 3D bounding boxes. Subsequently, we render realistic objects and place them according to the locations sampled from the learned distribution. Our comprehensive evaluation on two standard datasets KITTI and NuScenes, demonstrates that MonoPlace3D significantly improves the accuracy of multiple existing monocular 3D detectors while being highly data efficient.
no_new_dataset
0.94801
2504.07083
Mengchen Zhang
Mengchen Zhang, Tong Wu, Jing Tan, Ziwei Liu, Gordon Wetzstein, Dahua Lin
GenDoP: Auto-regressive Camera Trajectory Generation as a Director of Photography
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Camera trajectory design plays a crucial role in video production, serving as a fundamental tool for conveying directorial intent and enhancing visual storytelling. In cinematography, Directors of Photography meticulously craft camera movements to achieve expressive and intentional framing. However, existing methods for camera trajectory generation remain limited: Traditional approaches rely on geometric optimization or handcrafted procedural systems, while recent learning-based methods often inherit structural biases or lack textual alignment, constraining creative synthesis. In this work, we introduce an auto-regressive model inspired by the expertise of Directors of Photography to generate artistic and expressive camera trajectories. We first introduce DataDoP, a large-scale multi-modal dataset containing 29K real-world shots with free-moving camera trajectories, depth maps, and detailed captions covering specific movements, interaction with the scene, and directorial intent. Thanks to the comprehensive and diverse database, we further train an auto-regressive, decoder-only Transformer for high-quality, context-aware camera movement generation based on text guidance and RGBD inputs, named GenDoP. Extensive experiments demonstrate that compared to existing methods, GenDoP offers better controllability, finer-grained trajectory adjustments, and higher motion stability. We believe our approach establishes a new standard for learning-based cinematography, paving the way for future advancements in camera control and filmmaking. Our project website: https://kszpxxzmc.github.io/GenDoP/.
[ { "version": "v1", "created": "Wed, 9 Apr 2025 17:56:01 GMT" }, { "version": "v2", "created": "Thu, 10 Apr 2025 16:10:15 GMT" } ]
2025-04-11T00:00:00
[ [ "Zhang", "Mengchen", "" ], [ "Wu", "Tong", "" ], [ "Tan", "Jing", "" ], [ "Liu", "Ziwei", "" ], [ "Wetzstein", "Gordon", "" ], [ "Lin", "Dahua", "" ] ]
TITLE: GenDoP: Auto-regressive Camera Trajectory Generation as a Director of Photography ABSTRACT: Camera trajectory design plays a crucial role in video production, serving as a fundamental tool for conveying directorial intent and enhancing visual storytelling. In cinematography, Directors of Photography meticulously craft camera movements to achieve expressive and intentional framing. However, existing methods for camera trajectory generation remain limited: Traditional approaches rely on geometric optimization or handcrafted procedural systems, while recent learning-based methods often inherit structural biases or lack textual alignment, constraining creative synthesis. In this work, we introduce an auto-regressive model inspired by the expertise of Directors of Photography to generate artistic and expressive camera trajectories. We first introduce DataDoP, a large-scale multi-modal dataset containing 29K real-world shots with free-moving camera trajectories, depth maps, and detailed captions covering specific movements, interaction with the scene, and directorial intent. Thanks to the comprehensive and diverse database, we further train an auto-regressive, decoder-only Transformer for high-quality, context-aware camera movement generation based on text guidance and RGBD inputs, named GenDoP. Extensive experiments demonstrate that compared to existing methods, GenDoP offers better controllability, finer-grained trajectory adjustments, and higher motion stability. We believe our approach establishes a new standard for learning-based cinematography, paving the way for future advancements in camera control and filmmaking. Our project website: https://kszpxxzmc.github.io/GenDoP/.
new_dataset
0.958109
2504.07100
Sean O'Brien
Abhay Gupta, Jacob Cheung, Philip Meng, Shayan Sayyed, Austen Liao, Kevin Zhu, Sean O'Brien
EnDive: A Cross-Dialect Benchmark for Fairness and Performance in Large Language Models
null
null
null
null
cs.CL
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
The diversity of human language, shaped by social, cultural, and regional influences, presents significant challenges for natural language processing (NLP) systems. Existing benchmarks often overlook intra-language variations, leaving speakers of non-standard dialects underserved. To address this gap, we introduce EnDive (English Diversity), a benchmark that evaluates five widely-used large language models (LLMs) across tasks in language understanding, algorithmic reasoning, mathematics, and logic. Our framework translates Standard American English datasets into five underrepresented dialects using few-shot prompting with verified examples from native speakers, and compares these translations against rule-based methods via fluency assessments, preference tests, and semantic similarity metrics. Human evaluations confirm high translation quality, with average scores of at least 6.02/7 for faithfulness, fluency, and formality. By filtering out near-identical translations, we create a challenging dataset that reveals significant performance disparities - models consistently underperform on dialectal inputs compared to Standard American English. EnDive thus advances dialect-aware NLP by uncovering model biases and promoting more equitable language technologies.
[ { "version": "v1", "created": "Tue, 25 Feb 2025 23:48:33 GMT" } ]
2025-04-11T00:00:00
[ [ "Gupta", "Abhay", "" ], [ "Cheung", "Jacob", "" ], [ "Meng", "Philip", "" ], [ "Sayyed", "Shayan", "" ], [ "Liao", "Austen", "" ], [ "Zhu", "Kevin", "" ], [ "O'Brien", "Sean", "" ] ]
TITLE: EnDive: A Cross-Dialect Benchmark for Fairness and Performance in Large Language Models ABSTRACT: The diversity of human language, shaped by social, cultural, and regional influences, presents significant challenges for natural language processing (NLP) systems. Existing benchmarks often overlook intra-language variations, leaving speakers of non-standard dialects underserved. To address this gap, we introduce EnDive (English Diversity), a benchmark that evaluates five widely-used large language models (LLMs) across tasks in language understanding, algorithmic reasoning, mathematics, and logic. Our framework translates Standard American English datasets into five underrepresented dialects using few-shot prompting with verified examples from native speakers, and compares these translations against rule-based methods via fluency assessments, preference tests, and semantic similarity metrics. Human evaluations confirm high translation quality, with average scores of at least 6.02/7 for faithfulness, fluency, and formality. By filtering out near-identical translations, we create a challenging dataset that reveals significant performance disparities - models consistently underperform on dialectal inputs compared to Standard American English. EnDive thus advances dialect-aware NLP by uncovering model biases and promoting more equitable language technologies.
new_dataset
0.95803
2504.07102
Chendi Ge
Chendi Ge, Xin Wang, Ziwei Zhang, Yijian Qin, Hong Chen, Haiyang Wu, Yang Zhang, Yuekui Yang, Wenwu Zhu
Behavior Importance-Aware Graph Neural Architecture Search for Cross-Domain Recommendation
AAAI 2025 Oral
null
null
null
cs.IR cs.LG
http://creativecommons.org/licenses/by/4.0/
Cross-domain recommendation (CDR) mitigates data sparsity and cold-start issues in recommendation systems. While recent CDR approaches using graph neural networks (GNNs) capture complex user-item interactions, they rely on manually designed architectures that are often suboptimal and labor-intensive. Additionally, extracting valuable behavioral information from source domains to improve target domain recommendations remains challenging. To address these challenges, we propose Behavior importance-aware Graph Neural Architecture Search (BiGNAS), a framework that jointly optimizes GNN architecture and data importance for CDR. BiGNAS introduces two key components: a Cross-Domain Customized Supernetwork and a Graph-Based Behavior Importance Perceptron. The supernetwork, as a one-shot, retrain-free module, automatically searches the optimal GNN architecture for each domain without the need for retraining. The perceptron uses auxiliary learning to dynamically assess the importance of source domain behaviors, thereby improving target domain recommendations. Extensive experiments on benchmark CDR datasets and a large-scale industry advertising dataset demonstrate that BiGNAS consistently outperforms state-of-the-art baselines. To the best of our knowledge, this is the first work to jointly optimize GNN architecture and behavior data importance for cross-domain recommendation.
[ { "version": "v1", "created": "Tue, 11 Mar 2025 04:30:18 GMT" } ]
2025-04-11T00:00:00
[ [ "Ge", "Chendi", "" ], [ "Wang", "Xin", "" ], [ "Zhang", "Ziwei", "" ], [ "Qin", "Yijian", "" ], [ "Chen", "Hong", "" ], [ "Wu", "Haiyang", "" ], [ "Zhang", "Yang", "" ], [ "Yang", "Yuekui", "" ], [ "Zhu", "Wenwu", "" ] ]
TITLE: Behavior Importance-Aware Graph Neural Architecture Search for Cross-Domain Recommendation ABSTRACT: Cross-domain recommendation (CDR) mitigates data sparsity and cold-start issues in recommendation systems. While recent CDR approaches using graph neural networks (GNNs) capture complex user-item interactions, they rely on manually designed architectures that are often suboptimal and labor-intensive. Additionally, extracting valuable behavioral information from source domains to improve target domain recommendations remains challenging. To address these challenges, we propose Behavior importance-aware Graph Neural Architecture Search (BiGNAS), a framework that jointly optimizes GNN architecture and data importance for CDR. BiGNAS introduces two key components: a Cross-Domain Customized Supernetwork and a Graph-Based Behavior Importance Perceptron. The supernetwork, as a one-shot, retrain-free module, automatically searches the optimal GNN architecture for each domain without the need for retraining. The perceptron uses auxiliary learning to dynamically assess the importance of source domain behaviors, thereby improving target domain recommendations. Extensive experiments on benchmark CDR datasets and a large-scale industry advertising dataset demonstrate that BiGNAS consistently outperforms state-of-the-art baselines. To the best of our knowledge, this is the first work to jointly optimize GNN architecture and behavior data importance for cross-domain recommendation.
no_new_dataset
0.951006
2504.07108
Roan Schellingerhout
Roan Schellingerhout, Francesco Barile, and Nava Tintarev
OKRA: an Explainable, Heterogeneous, Multi-Stakeholder Job Recommender System
17 pages, 1 figure, 1 table, to be published in the proceedings of ECIR2025
null
null
null
cs.IR cs.AI cs.HC
http://creativecommons.org/licenses/by/4.0/
The use of recommender systems in the recruitment domain has been labeled as 'high-risk' in recent legislation. As a result, strict requirements regarding explainability and fairness have been put in place to ensure proper treatment of all involved stakeholders. To allow for stakeholder-specific explainability, while also handling highly heterogeneous recruitment data, we propose a novel explainable multi-stakeholder job recommender system using graph neural networks: the Occupational Knowledge-based Recommender using Attention (OKRA). The proposed method is capable of providing both candidate- and company-side recommendations and explanations. We find that OKRA performs substantially better than six baselines in terms of nDCG for two datasets. Furthermore, we find that the tested models show a bias toward candidates and vacancies located in urban areas. Overall, our findings suggest that OKRA provides a balance between accuracy, explainability, and fairness.
[ { "version": "v1", "created": "Mon, 17 Mar 2025 14:12:51 GMT" } ]
2025-04-11T00:00:00
[ [ "Schellingerhout", "Roan", "" ], [ "Barile", "Francesco", "" ], [ "Tintarev", "Nava", "" ] ]
TITLE: OKRA: an Explainable, Heterogeneous, Multi-Stakeholder Job Recommender System ABSTRACT: The use of recommender systems in the recruitment domain has been labeled as 'high-risk' in recent legislation. As a result, strict requirements regarding explainability and fairness have been put in place to ensure proper treatment of all involved stakeholders. To allow for stakeholder-specific explainability, while also handling highly heterogeneous recruitment data, we propose a novel explainable multi-stakeholder job recommender system using graph neural networks: the Occupational Knowledge-based Recommender using Attention (OKRA). The proposed method is capable of providing both candidate- and company-side recommendations and explanations. We find that OKRA performs substantially better than six baselines in terms of nDCG for two datasets. Furthermore, we find that the tested models show a bias toward candidates and vacancies located in urban areas. Overall, our findings suggest that OKRA provides a balance between accuracy, explainability, and fairness.
no_new_dataset
0.946051
2504.07110
Kin Sum Liu
Omkar Gurjar, Kin Sum Liu, Praveen Kolli, Utsaw Kumar, Mandar Rahurkar
DashCLIP: Leveraging multimodal models for generating semantic embeddings for DoorDash
null
null
null
null
cs.IR cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Despite the success of vision-language models in various generative tasks, obtaining high-quality semantic representations for products and user intents is still challenging due to the inability of off-the-shelf models to capture nuanced relationships between the entities. In this paper, we introduce a joint training framework for product and user queries by aligning uni-modal and multi-modal encoders through contrastive learning on image-text data. Our novel approach trains a query encoder with an LLM-curated relevance dataset, eliminating the reliance on engagement history. These embeddings demonstrate strong generalization capabilities and improve performance across applications, including product categorization and relevance prediction. For personalized ads recommendation, a significant uplift in the click-through rate and conversion rate after the deployment further confirms the impact on key business metrics. We believe that the flexibility of our framework makes it a promising solution toward enriching the user experience across the e-commerce landscape.
[ { "version": "v1", "created": "Tue, 18 Mar 2025 20:38:31 GMT" } ]
2025-04-11T00:00:00
[ [ "Gurjar", "Omkar", "" ], [ "Liu", "Kin Sum", "" ], [ "Kolli", "Praveen", "" ], [ "Kumar", "Utsaw", "" ], [ "Rahurkar", "Mandar", "" ] ]
TITLE: DashCLIP: Leveraging multimodal models for generating semantic embeddings for DoorDash ABSTRACT: Despite the success of vision-language models in various generative tasks, obtaining high-quality semantic representations for products and user intents is still challenging due to the inability of off-the-shelf models to capture nuanced relationships between the entities. In this paper, we introduce a joint training framework for product and user queries by aligning uni-modal and multi-modal encoders through contrastive learning on image-text data. Our novel approach trains a query encoder with an LLM-curated relevance dataset, eliminating the reliance on engagement history. These embeddings demonstrate strong generalization capabilities and improve performance across applications, including product categorization and relevance prediction. For personalized ads recommendation, a significant uplift in the click-through rate and conversion rate after the deployment further confirms the impact on key business metrics. We believe that the flexibility of our framework makes it a promising solution toward enriching the user experience across the e-commerce landscape.
no_new_dataset
0.859605
2504.07114
Serina Chang
Serina Chang, Ashton Anderson, Jake M. Hofman
ChatBench: From Static Benchmarks to Human-AI Evaluation
null
null
null
null
cs.CL cs.AI cs.CY cs.HC
http://creativecommons.org/licenses/by/4.0/
With the rapid adoption of LLM-based chatbots, there is a pressing need to evaluate what humans and LLMs can achieve together. However, standard benchmarks, such as MMLU, measure LLM capabilities in isolation (i.e., "AI-alone"). Here, we design and conduct a user study to convert MMLU questions into user-AI conversations, by seeding the user with the question and having them carry out a conversation with the LLM to answer their question. We release ChatBench, a new dataset with AI-alone, user-alone, and user-AI data for 396 questions and two LLMs, including 144K answers and 7,336 user-AI conversations. We find that AI-alone accuracy fails to predict user-AI accuracy, with significant differences across multiple subjects (math, physics, and moral reasoning), and we analyze the user-AI conversations to provide insight into how they diverge from AI-alone benchmarks. Finally, we show that fine-tuning a user simulator on a subset of ChatBench improves its ability to estimate user-AI accuracies, increasing correlation on held-out questions by more than 20 points, creating possibilities for scaling interactive evaluation.
[ { "version": "v1", "created": "Sat, 22 Mar 2025 01:21:40 GMT" } ]
2025-04-11T00:00:00
[ [ "Chang", "Serina", "" ], [ "Anderson", "Ashton", "" ], [ "Hofman", "Jake M.", "" ] ]
TITLE: ChatBench: From Static Benchmarks to Human-AI Evaluation ABSTRACT: With the rapid adoption of LLM-based chatbots, there is a pressing need to evaluate what humans and LLMs can achieve together. However, standard benchmarks, such as MMLU, measure LLM capabilities in isolation (i.e., "AI-alone"). Here, we design and conduct a user study to convert MMLU questions into user-AI conversations, by seeding the user with the question and having them carry out a conversation with the LLM to answer their question. We release ChatBench, a new dataset with AI-alone, user-alone, and user-AI data for 396 questions and two LLMs, including 144K answers and 7,336 user-AI conversations. We find that AI-alone accuracy fails to predict user-AI accuracy, with significant differences across multiple subjects (math, physics, and moral reasoning), and we analyze the user-AI conversations to provide insight into how they diverge from AI-alone benchmarks. Finally, we show that fine-tuning a user simulator on a subset of ChatBench improves its ability to estimate user-AI accuracies, increasing correlation on held-out questions by more than 20 points, creating possibilities for scaling interactive evaluation.
new_dataset
0.956756
2504.07115
Jiali Cheng
Jiali Cheng, Hadi Amiri
EqualizeIR: Mitigating Linguistic Biases in Retrieval Models
NAACL 2025
NAACL 2025
null
null
cs.CL cs.LG
http://creativecommons.org/licenses/by-nc-sa/4.0/
This study finds that existing information retrieval (IR) models show significant biases based on the linguistic complexity of input queries, performing well on linguistically simpler (or more complex) queries while underperforming on linguistically more complex (or simpler) queries. To address this issue, we propose EqualizeIR, a framework to mitigate linguistic biases in IR models. EqualizeIR uses a linguistically biased weak learner to capture linguistic biases in IR datasets and then trains a robust model by regularizing and refining its predictions using the biased weak learner. This approach effectively prevents the robust model from overfitting to specific linguistic patterns in data. We propose four approaches for developing linguistically-biased models. Extensive experiments on several datasets show that our method reduces performance disparities across linguistically simple and complex queries, while improving overall retrieval performance.
[ { "version": "v1", "created": "Sat, 22 Mar 2025 03:24:34 GMT" } ]
2025-04-11T00:00:00
[ [ "Cheng", "Jiali", "" ], [ "Amiri", "Hadi", "" ] ]
TITLE: EqualizeIR: Mitigating Linguistic Biases in Retrieval Models ABSTRACT: This study finds that existing information retrieval (IR) models show significant biases based on the linguistic complexity of input queries, performing well on linguistically simpler (or more complex) queries while underperforming on linguistically more complex (or simpler) queries. To address this issue, we propose EqualizeIR, a framework to mitigate linguistic biases in IR models. EqualizeIR uses a linguistically biased weak learner to capture linguistic biases in IR datasets and then trains a robust model by regularizing and refining its predictions using the biased weak learner. This approach effectively prevents the robust model from overfitting to specific linguistic patterns in data. We propose four approaches for developing linguistically-biased models. Extensive experiments on several datasets show that our method reduces performance disparities across linguistically simple and complex queries, while improving overall retrieval performance.
no_new_dataset
0.951323
2504.07117
Nuren Zhaksylyk
Nuren Zhaksylyk, Ibrahim Almakky, Jay Paranjape, S. Swaroop Vedula, Shameema Sikder, Vishal M. Patel, Mohammad Yaqub
RP-SAM2: Refining Point Prompts for Stable Surgical Instrument Segmentation
null
null
null
null
q-bio.TO cs.AI
http://creativecommons.org/licenses/by-nc-sa/4.0/
Accurate surgical instrument segmentation is essential in cataract surgery for tasks such as skill assessment and workflow optimization. However, limited annotated data makes it difficult to develop fully automatic models. Prompt-based methods like SAM2 offer flexibility yet remain highly sensitive to the point prompt placement, often leading to inconsistent segmentations. We address this issue by introducing RP-SAM2, which incorporates a novel shift block and a compound loss function to stabilize point prompts. Our approach reduces annotator reliance on precise point positioning while maintaining robust segmentation capabilities. Experiments on the Cataract1k dataset demonstrate that RP-SAM2 improves segmentation accuracy, with a 2% mDSC gain, a 21.36% reduction in mHD95, and decreased variance across random single-point prompt results compared to SAM2. Additionally, on the CaDIS dataset, pseudo masks generated by RP-SAM2 for fine-tuning SAM2's mask decoder outperformed those generated by SAM2. These results highlight RP-SAM2 as a practical, stable and reliable solution for semi-automatic instrument segmentation in data-constrained medical settings. The code is available at https://github.com/BioMedIA-MBZUAI/RP-SAM2.
[ { "version": "v1", "created": "Tue, 25 Mar 2025 18:59:23 GMT" } ]
2025-04-11T00:00:00
[ [ "Zhaksylyk", "Nuren", "" ], [ "Almakky", "Ibrahim", "" ], [ "Paranjape", "Jay", "" ], [ "Vedula", "S. Swaroop", "" ], [ "Sikder", "Shameema", "" ], [ "Patel", "Vishal M.", "" ], [ "Yaqub", "Mohammad", "" ] ]
TITLE: RP-SAM2: Refining Point Prompts for Stable Surgical Instrument Segmentation ABSTRACT: Accurate surgical instrument segmentation is essential in cataract surgery for tasks such as skill assessment and workflow optimization. However, limited annotated data makes it difficult to develop fully automatic models. Prompt-based methods like SAM2 offer flexibility yet remain highly sensitive to the point prompt placement, often leading to inconsistent segmentations. We address this issue by introducing RP-SAM2, which incorporates a novel shift block and a compound loss function to stabilize point prompts. Our approach reduces annotator reliance on precise point positioning while maintaining robust segmentation capabilities. Experiments on the Cataract1k dataset demonstrate that RP-SAM2 improves segmentation accuracy, with a 2% mDSC gain, a 21.36% reduction in mHD95, and decreased variance across random single-point prompt results compared to SAM2. Additionally, on the CaDIS dataset, pseudo masks generated by RP-SAM2 for fine-tuning SAM2's mask decoder outperformed those generated by SAM2. These results highlight RP-SAM2 as a practical, stable and reliable solution for semi-automatic instrument segmentation in data-constrained medical settings. The code is available at https://github.com/BioMedIA-MBZUAI/RP-SAM2.
no_new_dataset
0.951774
2504.07131
Peng Liu
Peng Liu, Lian Cheng, Benjamin P. Omell, Anthony P. Burgard
Embedding Reliability Verification Constraints into Generation Expansion Planning
5 pages, 3 figures. IEEE PES general meeting 2025
null
null
null
cs.AI stat.ML
http://creativecommons.org/licenses/by-nc-sa/4.0/
Generation planning approaches face challenges in managing the incompatible mathematical structures between stochastic production simulations for reliability assessment and optimization models for generation planning, which hinders the integration of reliability constraints. This study proposes an approach to embedding reliability verification constraints into generation expansion planning by leveraging a weighted oblique decision tree (WODT) technique. For each planning year, a generation mix dataset, labeled with reliability assessment simulations, is generated. A WODT model is trained using this dataset. Reliability-feasible regions are extracted via a depth-first search technique and formulated as disjunctive constraints. These constraints are then transformed into mixed-integer linear form using a convex hull modeling technique and embedded into a unit commitment-integrated generation expansion planning model. The proposed approach is validated through a long-term generation planning case study for the Electric Reliability Council of Texas (ERCOT) region, demonstrating its effectiveness in achieving reliable and optimal planning solutions.
[ { "version": "v1", "created": "Sun, 6 Apr 2025 04:58:45 GMT" } ]
2025-04-11T00:00:00
[ [ "Liu", "Peng", "" ], [ "Cheng", "Lian", "" ], [ "Omell", "Benjamin P.", "" ], [ "Burgard", "Anthony P.", "" ] ]
TITLE: Embedding Reliability Verification Constraints into Generation Expansion Planning ABSTRACT: Generation planning approaches face challenges in managing the incompatible mathematical structures between stochastic production simulations for reliability assessment and optimization models for generation planning, which hinders the integration of reliability constraints. This study proposes an approach to embedding reliability verification constraints into generation expansion planning by leveraging a weighted oblique decision tree (WODT) technique. For each planning year, a generation mix dataset, labeled with reliability assessment simulations, is generated. A WODT model is trained using this dataset. Reliability-feasible regions are extracted via a depth-first search technique and formulated as disjunctive constraints. These constraints are then transformed into mixed-integer linear form using a convex hull modeling technique and embedded into a unit commitment-integrated generation expansion planning model. The proposed approach is validated through a long-term generation planning case study for the Electric Reliability Council of Texas (ERCOT) region, demonstrating its effectiveness in achieving reliable and optimal planning solutions.
new_dataset
0.961461
2504.07132
Abdulrahman Alhaidari
Abdulrahman Alhaidari, Bhavani Kalal, Balaji Palanisamy, Shamik Sural
SolRPDS: A Dataset for Analyzing Rug Pulls in Solana Decentralized Finance
Accepted paper to appear in the 15th ACM Conference on Data and Application Security and Privacy (CODASPY 2025)
null
null
null
cs.CR cs.CE cs.LG
http://creativecommons.org/licenses/by/4.0/
Rug pulls in Solana have caused significant damage to users interacting with Decentralized Finance (DeFi). A rug pull occurs when developers exploit users' trust and drain liquidity from token pools on Decentralized Exchanges (DEXs), leaving users with worthless tokens. Although rug pulls in Ethereum and Binance Smart Chain (BSC) have gained attention recently, analysis of rug pulls in Solana remains largely under-explored. In this paper, we introduce SolRPDS (Solana Rug Pull Dataset), the first public rug pull dataset derived from Solana's transactions. We examine approximately four years of DeFi data (2021-2024) that covers suspected and confirmed tokens exhibiting rug pull patterns. The dataset, derived from 3.69 billion transactions, consists of 62,895 suspicious liquidity pools. The data is annotated for inactivity states, which is a key indicator, and includes several detailed liquidity activities such as additions, removals, and last interaction as well as other attributes such as inactivity periods and withdrawn token amounts, to help identify suspicious behavior. Our preliminary analysis reveals clear distinctions between legitimate and fraudulent liquidity pools and we found that 22,195 tokens in the dataset exhibit rug pull patterns during the examined period. SolRPDS can support a wide range of future research on rug pulls including the development of data-driven and heuristic-based solutions for real-time rug pull detection and mitigation.
[ { "version": "v1", "created": "Sun, 6 Apr 2025 11:36:48 GMT" } ]
2025-04-11T00:00:00
[ [ "Alhaidari", "Abdulrahman", "" ], [ "Kalal", "Bhavani", "" ], [ "Palanisamy", "Balaji", "" ], [ "Sural", "Shamik", "" ] ]
TITLE: SolRPDS: A Dataset for Analyzing Rug Pulls in Solana Decentralized Finance ABSTRACT: Rug pulls in Solana have caused significant damage to users interacting with Decentralized Finance (DeFi). A rug pull occurs when developers exploit users' trust and drain liquidity from token pools on Decentralized Exchanges (DEXs), leaving users with worthless tokens. Although rug pulls in Ethereum and Binance Smart Chain (BSC) have gained attention recently, analysis of rug pulls in Solana remains largely under-explored. In this paper, we introduce SolRPDS (Solana Rug Pull Dataset), the first public rug pull dataset derived from Solana's transactions. We examine approximately four years of DeFi data (2021-2024) that covers suspected and confirmed tokens exhibiting rug pull patterns. The dataset, derived from 3.69 billion transactions, consists of 62,895 suspicious liquidity pools. The data is annotated for inactivity states, which is a key indicator, and includes several detailed liquidity activities such as additions, removals, and last interaction as well as other attributes such as inactivity periods and withdrawn token amounts, to help identify suspicious behavior. Our preliminary analysis reveals clear distinctions between legitimate and fraudulent liquidity pools and we found that 22,195 tokens in the dataset exhibit rug pull patterns during the examined period. SolRPDS can support a wide range of future research on rug pulls including the development of data-driven and heuristic-based solutions for real-time rug pull detection and mitigation.
new_dataset
0.966663
2504.07135
Zhang Mingqing
Mingqing Zhang, Qiang Liu, Xiang Tao, Shu Wu, Liang Wang
SINCon: Mitigate LLM-Generated Malicious Message Injection Attack for Rumor Detection
null
null
null
null
cs.CR
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
In the era of rapidly evolving large language models (LLMs), state-of-the-art rumor detection systems, particularly those based on Message Propagation Trees (MPTs), which represent a conversation tree with the post as its root and the replies as its descendants, are facing increasing threats from adversarial attacks that leverage LLMs to generate and inject malicious messages. Existing methods are based on the assumption that different nodes exhibit varying degrees of influence on predictions. They define nodes with high predictive influence as important nodes and target them for attacks. If the model treats nodes' predictive influence more uniformly, attackers will find it harder to target high predictive influence nodes. In this paper, we propose Similarizing the predictive Influence of Nodes with Contrastive Learning (SINCon), a defense mechanism that encourages the model to learn graph representations where nodes with varying importance have a more uniform influence on predictions. Extensive experiments on the Twitter and Weibo datasets demonstrate that SINCon not only preserves high classification accuracy on clean data but also significantly enhances resistance against LLM-driven message injection attacks.
[ { "version": "v1", "created": "Mon, 7 Apr 2025 08:20:48 GMT" } ]
2025-04-11T00:00:00
[ [ "Zhang", "Mingqing", "" ], [ "Liu", "Qiang", "" ], [ "Tao", "Xiang", "" ], [ "Wu", "Shu", "" ], [ "Wang", "Liang", "" ] ]
TITLE: SINCon: Mitigate LLM-Generated Malicious Message Injection Attack for Rumor Detection ABSTRACT: In the era of rapidly evolving large language models (LLMs), state-of-the-art rumor detection systems, particularly those based on Message Propagation Trees (MPTs), which represent a conversation tree with the post as its root and the replies as its descendants, are facing increasing threats from adversarial attacks that leverage LLMs to generate and inject malicious messages. Existing methods are based on the assumption that different nodes exhibit varying degrees of influence on predictions. They define nodes with high predictive influence as important nodes and target them for attacks. If the model treats nodes' predictive influence more uniformly, attackers will find it harder to target high predictive influence nodes. In this paper, we propose Similarizing the predictive Influence of Nodes with Contrastive Learning (SINCon), a defense mechanism that encourages the model to learn graph representations where nodes with varying importance have a more uniform influence on predictions. Extensive experiments on the Twitter and Weibo datasets demonstrate that SINCon not only preserves high classification accuracy on clean data but also significantly enhances resistance against LLM-driven message injection attacks.
no_new_dataset
0.948298
2504.07137
Hamed Jelodar
Hamed Jelodar, Samita Bai, Parisa Hamedi, Hesamodin Mohammadian, Roozbeh Razavi-Far and Ali Ghorbani
Large Language Model (LLM) for Software Security: Code Analysis, Malware Analysis, Reverse Engineering
null
null
null
null
cs.CR cs.AI
http://creativecommons.org/licenses/by/4.0/
Large Language Models (LLMs) have recently emerged as powerful tools in cybersecurity, offering advanced capabilities in malware detection, generation, and real-time monitoring. Numerous studies have explored their application in cybersecurity, demonstrating their effectiveness in identifying novel malware variants, analyzing malicious code structures, and enhancing automated threat analysis. Several transformer-based architectures and LLM-driven models have been proposed to improve malware analysis, leveraging semantic and structural insights to recognize malicious intent more accurately. This study presents a comprehensive review of LLM-based approaches in malware code analysis, summarizing recent advancements, trends, and methodologies. We examine notable scholarly works to map the research landscape, identify key challenges, and highlight emerging innovations in LLM-driven cybersecurity. Additionally, we emphasize the role of static analysis in malware detection, introduce notable datasets and specialized LLM models, and discuss essential datasets supporting automated malware research. This study serves as a valuable resource for researchers and cybersecurity professionals, offering insights into LLM-powered malware detection and defence strategies while outlining future directions for strengthening cybersecurity resilience.
[ { "version": "v1", "created": "Mon, 7 Apr 2025 22:32:46 GMT" } ]
2025-04-11T00:00:00
[ [ "Jelodar", "Hamed", "" ], [ "Bai", "Samita", "" ], [ "Hamedi", "Parisa", "" ], [ "Mohammadian", "Hesamodin", "" ], [ "Razavi-Far", "Roozbeh", "" ], [ "Ghorbani", "Ali", "" ] ]
TITLE: Large Language Model (LLM) for Software Security: Code Analysis, Malware Analysis, Reverse Engineering ABSTRACT: Large Language Models (LLMs) have recently emerged as powerful tools in cybersecurity, offering advanced capabilities in malware detection, generation, and real-time monitoring. Numerous studies have explored their application in cybersecurity, demonstrating their effectiveness in identifying novel malware variants, analyzing malicious code structures, and enhancing automated threat analysis. Several transformer-based architectures and LLM-driven models have been proposed to improve malware analysis, leveraging semantic and structural insights to recognize malicious intent more accurately. This study presents a comprehensive review of LLM-based approaches in malware code analysis, summarizing recent advancements, trends, and methodologies. We examine notable scholarly works to map the research landscape, identify key challenges, and highlight emerging innovations in LLM-driven cybersecurity. Additionally, we emphasize the role of static analysis in malware detection, introduce notable datasets and specialized LLM models, and discuss essential datasets supporting automated malware research. This study serves as a valuable resource for researchers and cybersecurity professionals, offering insights into LLM-powered malware detection and defence strategies while outlining future directions for strengthening cybersecurity resilience.
no_new_dataset
0.856692
2504.07149
Matthew Nyaaba
Matthew Nyaaba
Glocalizing Generative AI in Education for the Global South: The Design Case of 21st Century Teacher Educator AI for Ghana
null
null
null
null
cs.CY
http://creativecommons.org/licenses/by/4.0/
This study presents the design and development of the 21st Century Teacher Educator for Ghana GPT, a customized Generative AI (GenAI) tool created using OpenAI's Retrieval-Augmented Generation (RAG) and Interactive Semi-Automated Prompting Strategy (ISA). Anchored in a Glocalized design approach, this tool supports pre-service teachers (PSTs) in Ghana by embedding localized linguistic, cultural, and curricular content within globally aligned principles of ethical and responsible AI use. The model utilizes structured, preloaded datasets-including Ghana's National Teacher Education Curriculum Framework (NTECF), UNESCO's (2023) AI guidelines, and culturally responsive pedagogies-to offer curriculum-aligned, linguistically adaptive, and pedagogically grounded learning support. The ISA enables users to input their institution, year, and semester, generating tailored academic content such as lecture notes, assessment practice, practicum resources, and action research guidance. The design incorporates the Culture and Context-Aware Framework, GenAI-CRSciA, and frameworks addressing GenAI neocolonialism to ensure equity, curriculum fidelity, and local relevance. Pilot implementation revealed notable strengths in language adaptation and localization, delivering bilingual support in English and Ghanaian languages like Twi, Dagbani, Mampruli, and Dagaare, with contextualized examples for deeper understanding. The GPT also generated practice assessments aligned with course objectives, reinforcing learner engagement. Challenges included occasional hallucinations due to limited corpora in some indigenous languages and access barriers tied to premium subscriptions. This design case contributes to discourse on Glocalized GenAI and calls for collaboration with OpenAI NextGen to expand access and empirically assess usage across diverse African educational contexts.
[ { "version": "v1", "created": "Wed, 9 Apr 2025 03:28:35 GMT" } ]
2025-04-11T00:00:00
[ [ "Nyaaba", "Matthew", "" ] ]
TITLE: Glocalizing Generative AI in Education for the Global South: The Design Case of 21st Century Teacher Educator AI for Ghana ABSTRACT: This study presents the design and development of the 21st Century Teacher Educator for Ghana GPT, a customized Generative AI (GenAI) tool created using OpenAI's Retrieval-Augmented Generation (RAG) and Interactive Semi-Automated Prompting Strategy (ISA). Anchored in a Glocalized design approach, this tool supports pre-service teachers (PSTs) in Ghana by embedding localized linguistic, cultural, and curricular content within globally aligned principles of ethical and responsible AI use. The model utilizes structured, preloaded datasets-including Ghana's National Teacher Education Curriculum Framework (NTECF), UNESCO's (2023) AI guidelines, and culturally responsive pedagogies-to offer curriculum-aligned, linguistically adaptive, and pedagogically grounded learning support. The ISA enables users to input their institution, year, and semester, generating tailored academic content such as lecture notes, assessment practice, practicum resources, and action research guidance. The design incorporates the Culture and Context-Aware Framework, GenAI-CRSciA, and frameworks addressing GenAI neocolonialism to ensure equity, curriculum fidelity, and local relevance. Pilot implementation revealed notable strengths in language adaptation and localization, delivering bilingual support in English and Ghanaian languages like Twi, Dagbani, Mampruli, and Dagaare, with contextualized examples for deeper understanding. The GPT also generated practice assessments aligned with course objectives, reinforcing learner engagement. Challenges included occasional hallucinations due to limited corpora in some indigenous languages and access barriers tied to premium subscriptions. This design case contributes to discourse on Glocalized GenAI and calls for collaboration with OpenAI NextGen to expand access and empirically assess usage across diverse African educational contexts.
no_new_dataset
0.952442
2504.07151
David Vigouroux
David Vigouroux, Joseba Dalmau, Louis Béthune (IRIT, IRIT-ADRIA, UT3), Victor Boutin
Deep Sturm--Liouville: From Sample-Based to 1D Regularization with Learnable Orthogonal Basis Functions
null
null
null
null
cs.LG
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
Although Artificial Neural Networks (ANNs) have achieved remarkable success across various tasks, they still suffer from limited generalization. We hypothesize that this limitation arises from the traditional sample-based (0-dimensional) regularization used in ANNs. To overcome this, we introduce \textit{Deep Sturm--Liouville} (DSL), a novel function approximator that enables continuous 1D regularization along field lines in the input space by integrating the Sturm--Liouville Theorem (SLT) into the deep learning framework. DSL defines field lines traversing the input space, along which a Sturm--Liouville problem is solved to generate orthogonal basis functions, enforcing implicit regularization thanks to the desirable properties of SLT. These basis functions are linearly combined to construct the DSL approximator. Both the vector field and basis functions are parameterized by neural networks and learned jointly. We demonstrate that the DSL formulation naturally arises when solving a Rank-1 Parabolic Eigenvalue Problem. DSL is trained efficiently using stochastic gradient descent via implicit differentiation. DSL achieves competitive performance and demonstrates improved sample efficiency on diverse multivariate datasets including high-dimensional image datasets such as MNIST and CIFAR-10.
[ { "version": "v1", "created": "Wed, 9 Apr 2025 07:21:13 GMT" } ]
2025-04-11T00:00:00
[ [ "Vigouroux", "David", "", "IRIT, IRIT-ADRIA,\n UT3" ], [ "Dalmau", "Joseba", "", "IRIT, IRIT-ADRIA,\n UT3" ], [ "Béthune", "Louis", "", "IRIT, IRIT-ADRIA,\n UT3" ], [ "Boutin", "Victor", "" ] ]
TITLE: Deep Sturm--Liouville: From Sample-Based to 1D Regularization with Learnable Orthogonal Basis Functions ABSTRACT: Although Artificial Neural Networks (ANNs) have achieved remarkable success across various tasks, they still suffer from limited generalization. We hypothesize that this limitation arises from the traditional sample-based (0-dimensional) regularization used in ANNs. To overcome this, we introduce \textit{Deep Sturm--Liouville} (DSL), a novel function approximator that enables continuous 1D regularization along field lines in the input space by integrating the Sturm--Liouville Theorem (SLT) into the deep learning framework. DSL defines field lines traversing the input space, along which a Sturm--Liouville problem is solved to generate orthogonal basis functions, enforcing implicit regularization thanks to the desirable properties of SLT. These basis functions are linearly combined to construct the DSL approximator. Both the vector field and basis functions are parameterized by neural networks and learned jointly. We demonstrate that the DSL formulation naturally arises when solving a Rank-1 Parabolic Eigenvalue Problem. DSL is trained efficiently using stochastic gradient descent via implicit differentiation. DSL achieves competitive performance and demonstrates improved sample efficiency on diverse multivariate datasets including high-dimensional image datasets such as MNIST and CIFAR-10.
no_new_dataset
0.949576
2504.07152
Matthew Roughan
Matthew Roughan
Evolutionary Generation of Random Surreal Numbers for Benchmarking
To appear in short form in Genetic and Evolutionary Computation Conference (GECCO '25), 2025
Genetic and Evolutionary Computation Conference (GECCO '25), July 14--18, 2025, Malaga
10.1145/3712255.3726584
null
cs.NE math.CO
http://creativecommons.org/licenses/by/4.0/
There are many areas of scientific endeavour where large, complex datasets are needed for benchmarking. Evolutionary computing provides a means towards creating such sets. As a case study, we consider Conway's Surreal numbers. They have largely been treated as a theoretical construct, with little effort towards empirical study, at least in part because of the difficulty of working with all but the smallest numbers. To advance this status, we need efficient algorithms, and in order to develop such we need benchmark data sets of surreal numbers. In this paper, we present a method for generating ensembles of random surreal numbers to benchmark algorithms. The approach uses an evolutionary algorithm to create the benchmark datasets where we can analyse and control features of the resulting test sets. Ultimately, the process is designed to generate networks with defined properties, and we expect this to be useful for other types of network data.
[ { "version": "v1", "created": "Wed, 9 Apr 2025 07:28:42 GMT" } ]
2025-04-11T00:00:00
[ [ "Roughan", "Matthew", "" ] ]
TITLE: Evolutionary Generation of Random Surreal Numbers for Benchmarking ABSTRACT: There are many areas of scientific endeavour where large, complex datasets are needed for benchmarking. Evolutionary computing provides a means towards creating such sets. As a case study, we consider Conway's Surreal numbers. They have largely been treated as a theoretical construct, with little effort towards empirical study, at least in part because of the difficulty of working with all but the smallest numbers. To advance this status, we need efficient algorithms, and in order to develop such we need benchmark data sets of surreal numbers. In this paper, we present a method for generating ensembles of random surreal numbers to benchmark algorithms. The approach uses an evolutionary algorithm to create the benchmark datasets where we can analyse and control features of the resulting test sets. Ultimately, the process is designed to generate networks with defined properties, and we expect this to be useful for other types of network data.
no_new_dataset
0.893959
2504.07155
Jonathan Adam Rico
Jonathan Adam Rico, Nagarajan Raghavan, and Senthilnath Jayavelu
Compound Fault Diagnosis for Train Transmission Systems Using Deep Learning with Fourier-enhanced Representation
Accepted for the 2025 IEEE Conference on Prognostics and Health Management (ICPHM 2025)
null
null
null
cs.LG
http://creativecommons.org/licenses/by-nc-nd/4.0/
Fault diagnosis prevents train disruptions by ensuring the stability and reliability of their transmission systems. Data-driven fault diagnosis models have several advantages over traditional methods in terms of dealing with non-linearity, adaptability, scalability, and automation. However, existing data-driven models are trained on separate transmission components and only consider single faults due to the limitations of existing datasets. These models will perform worse in scenarios where components operate with each other at the same time, affecting each component's vibration signals. To address some of these challenges, we propose a frequency domain representation and a 1-dimensional convolutional neural network for compound fault diagnosis and apply it to the PHM Beijing 2024 dataset, which includes 21 sensor channels, 17 single faults, and 42 compound faults from 4 interacting components, that is, motor, gearbox, left axle box, and right axle box. Our proposed model achieved 97.67% and 93.93% accuracies on the test set with 17 single faults and on the test set with 42 compound faults, respectively.
[ { "version": "v1", "created": "Wed, 9 Apr 2025 09:01:18 GMT" } ]
2025-04-11T00:00:00
[ [ "Rico", "Jonathan Adam", "" ], [ "Raghavan", "Nagarajan", "" ], [ "Jayavelu", "Senthilnath", "" ] ]
TITLE: Compound Fault Diagnosis for Train Transmission Systems Using Deep Learning with Fourier-enhanced Representation ABSTRACT: Fault diagnosis prevents train disruptions by ensuring the stability and reliability of their transmission systems. Data-driven fault diagnosis models have several advantages over traditional methods in terms of dealing with non-linearity, adaptability, scalability, and automation. However, existing data-driven models are trained on separate transmission components and only consider single faults due to the limitations of existing datasets. These models will perform worse in scenarios where components operate with each other at the same time, affecting each component's vibration signals. To address some of these challenges, we propose a frequency domain representation and a 1-dimensional convolutional neural network for compound fault diagnosis and apply it to the PHM Beijing 2024 dataset, which includes 21 sensor channels, 17 single faults, and 42 compound faults from 4 interacting components, that is, motor, gearbox, left axle box, and right axle box. Our proposed model achieved 97.67% and 93.93% accuracies on the test set with 17 single faults and on the test set with 42 compound faults, respectively.
no_new_dataset
0.94887
2504.07157
Xavier Secheresse
Xavier Sécheresse, Jacques-Yves Guilbert--Ly and Antoine Villedieu de Torcy
GAAPO: Genetic Algorithmic Applied to Prompt Optimization
26 pages, 9 figures
null
null
null
cs.NE cs.LG
http://creativecommons.org/licenses/by/4.0/
Large Language Models (LLMs) have demonstrated remarkable capabilities across various tasks, with their performance heavily dependent on the quality of input prompts \cite{schulhoff2025promptsurvey} \cite{sahoo2025promptengineering}. While prompt engineering has proven effective, it typically relies on manual adjustments, making it time-consuming and potentially suboptimal. This paper introduces GAAPO (Genetic Algorithm Applied to Prompt Optimization), a novel hybrid optimization framework that leverages genetic algorithm \cite{dejong1988gen} principles to evolve prompts through successive generations. Unlike traditional genetic approaches that rely solely on mutation and crossover operations, GAAPO integrates multiple specialized prompt generation strategies within its evolutionary framework. Through extensive experimentation on diverse datasets including ETHOS, MMLU-Pro, and GPQA, our analysis reveals several important points for the future development of automatic prompt optimization methods: importance of the tradeoff between the population size and the number of generations, effect of selection methods on stability results, capacity of different LLMs and especially reasoning models to automatically generate prompts from similar queries... Furthermore, we provide insights into the relative effectiveness of different prompt generation strategies and their evolution across optimization phases. These findings contribute to both the theoretical understanding of prompt optimization and practical applications in improving LLM performance.
[ { "version": "v1", "created": "Wed, 9 Apr 2025 11:19:42 GMT" } ]
2025-04-11T00:00:00
[ [ "Sécheresse", "Xavier", "" ], [ "Guilbert--Ly", "Jacques-Yves", "" ], [ "de Torcy", "Antoine Villedieu", "" ] ]
TITLE: GAAPO: Genetic Algorithmic Applied to Prompt Optimization ABSTRACT: Large Language Models (LLMs) have demonstrated remarkable capabilities across various tasks, with their performance heavily dependent on the quality of input prompts \cite{schulhoff2025promptsurvey} \cite{sahoo2025promptengineering}. While prompt engineering has proven effective, it typically relies on manual adjustments, making it time-consuming and potentially suboptimal. This paper introduces GAAPO (Genetic Algorithm Applied to Prompt Optimization), a novel hybrid optimization framework that leverages genetic algorithm \cite{dejong1988gen} principles to evolve prompts through successive generations. Unlike traditional genetic approaches that rely solely on mutation and crossover operations, GAAPO integrates multiple specialized prompt generation strategies within its evolutionary framework. Through extensive experimentation on diverse datasets including ETHOS, MMLU-Pro, and GPQA, our analysis reveals several important points for the future development of automatic prompt optimization methods: importance of the tradeoff between the population size and the number of generations, effect of selection methods on stability results, capacity of different LLMs and especially reasoning models to automatically generate prompts from similar queries... Furthermore, we provide insights into the relative effectiveness of different prompt generation strategies and their evolution across optimization phases. These findings contribute to both the theoretical understanding of prompt optimization and practical applications in improving LLM performance.
no_new_dataset
0.944228
2504.07165
Yana Wei
Yana Wei, Liang Zhao, Kangheng Lin, En Yu, Yuang Peng, Runpei Dong, Jianjian Sun, Haoran Wei, Zheng Ge, Xiangyu Zhang, Vishal M. Patel
Perception in Reflection
null
null
null
null
cs.CV
http://arxiv.org/licenses/nonexclusive-distrib/1.0/
We present a perception in reflection paradigm designed to transcend the limitations of current large vision-language models (LVLMs), which are expected yet often fail to achieve perfect perception initially. Specifically, we propose Reflective Perception (RePer), a dual-model reflection mechanism that systematically alternates between policy and critic models, enabling iterative refinement of visual perception. This framework is powered by Reflective Perceptual Learning (RPL), which reinforces intrinsic reflective capabilities through a methodically constructed visual reflection dataset and reflective unlikelihood training. Comprehensive experimental evaluation demonstrates RePer's quantifiable improvements in image understanding, captioning precision, and hallucination reduction. Notably, RePer achieves strong alignment between model attention patterns and human visual focus, while RPL optimizes fine-grained and free-form preference alignment. These advancements establish perception in reflection as a robust paradigm for future multimodal agents, particularly in tasks requiring complex reasoning and multi-step manipulation.
[ { "version": "v1", "created": "Wed, 9 Apr 2025 17:59:02 GMT" } ]
2025-04-11T00:00:00
[ [ "Wei", "Yana", "" ], [ "Zhao", "Liang", "" ], [ "Lin", "Kangheng", "" ], [ "Yu", "En", "" ], [ "Peng", "Yuang", "" ], [ "Dong", "Runpei", "" ], [ "Sun", "Jianjian", "" ], [ "Wei", "Haoran", "" ], [ "Ge", "Zheng", "" ], [ "Zhang", "Xiangyu", "" ], [ "Patel", "Vishal M.", "" ] ]
TITLE: Perception in Reflection ABSTRACT: We present a perception in reflection paradigm designed to transcend the limitations of current large vision-language models (LVLMs), which are expected yet often fail to achieve perfect perception initially. Specifically, we propose Reflective Perception (RePer), a dual-model reflection mechanism that systematically alternates between policy and critic models, enabling iterative refinement of visual perception. This framework is powered by Reflective Perceptual Learning (RPL), which reinforces intrinsic reflective capabilities through a methodically constructed visual reflection dataset and reflective unlikelihood training. Comprehensive experimental evaluation demonstrates RePer's quantifiable improvements in image understanding, captioning precision, and hallucination reduction. Notably, RePer achieves strong alignment between model attention patterns and human visual focus, while RPL optimizes fine-grained and free-form preference alignment. These advancements establish perception in reflection as a robust paradigm for future multimodal agents, particularly in tasks requiring complex reasoning and multi-step manipulation.
new_dataset
0.962143
2504.07198
Ashutosh Chaubey
Ashutosh Chaubey, Xulang Guan, Mohammad Soleymani
Face-LLaVA: Facial Expression and Attribute Understanding through Instruction Tuning
Project Page: https://face-llava.github.io
null
null
null
cs.CV cs.AI cs.HC
http://creativecommons.org/licenses/by-nc-nd/4.0/
The human face plays a central role in social communication, necessitating the use of performant computer vision tools for human-centered applications. We propose Face-LLaVA, a multimodal large language model for face-centered, in-context learning, including facial expression and attribute recognition. Additionally, Face-LLaVA is able to generate natural language descriptions that can be used for reasoning. Leveraging existing visual databases, we first developed FaceInstruct-1M, a face-centered database for instruction tuning MLLMs for face processing. We then developed a novel face-specific visual encoder powered by Face-Region Guided Cross-Attention that integrates face geometry with local visual features. We evaluated the proposed method across nine different datasets and five different face processing tasks, including facial expression recognition, action unit detection, facial attribute detection, age estimation and deepfake detection. Face-LLaVA achieves superior results compared to existing open-source MLLMs and competitive performance compared to commercial solutions. Our model output also receives a higher reasoning rating by GPT under a zero-shot setting across all the tasks. Both our dataset and model will be released at https://face-llava.github.io to support future advancements in social AI and foundational vision-language research.
[ { "version": "v1", "created": "Wed, 9 Apr 2025 18:26:07 GMT" } ]
2025-04-11T00:00:00
[ [ "Chaubey", "Ashutosh", "" ], [ "Guan", "Xulang", "" ], [ "Soleymani", "Mohammad", "" ] ]
TITLE: Face-LLaVA: Facial Expression and Attribute Understanding through Instruction Tuning ABSTRACT: The human face plays a central role in social communication, necessitating the use of performant computer vision tools for human-centered applications. We propose Face-LLaVA, a multimodal large language model for face-centered, in-context learning, including facial expression and attribute recognition. Additionally, Face-LLaVA is able to generate natural language descriptions that can be used for reasoning. Leveraging existing visual databases, we first developed FaceInstruct-1M, a face-centered database for instruction tuning MLLMs for face processing. We then developed a novel face-specific visual encoder powered by Face-Region Guided Cross-Attention that integrates face geometry with local visual features. We evaluated the proposed method across nine different datasets and five different face processing tasks, including facial expression recognition, action unit detection, facial attribute detection, age estimation and deepfake detection. Face-LLaVA achieves superior results compared to existing open-source MLLMs and competitive performance compared to commercial solutions. Our model output also receives a higher reasoning rating by GPT under a zero-shot setting across all the tasks. Both our dataset and model will be released at https://face-llava.github.io to support future advancements in social AI and foundational vision-language research.
new_dataset
0.959383