id (string) | submitter (string) | authors (string) | title (string) | comments (string) | journal-ref (string) | doi (string) | report-no (string) | categories (string) | license (string) | abstract (string) | versions (list) | update_date (timestamp[s]) | authors_parsed (sequence) | prompt (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
2409.17346 | Yuxiao Li | Yuxiao Li, Mingze Xia, Xin Liang, Bei Wang, and Hanqi Guo | Multi-Tier Preservation of Discrete Morse Smale Complexes in
Error-Bounded Lossy Compression | 10 pages, 14 figures | null | null | null | cs.GR | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We propose a novel method to preserve key topological structures (extrema,
saddles, separatrices, and persistence diagrams) associated with Morse-Smale
complexes in error-bounded lossy compressed scalar fields. Existing
error-bounded lossy compressors rarely consider preserving topological
structures such as discrete Morse-Smale complexes, leading to significant inaccuracies in
data interpretation and potentially resulting in incorrect scientific
conclusions. This paper mainly focuses on preserving the Morse-Smale complexes
in 2D/3D discrete scalar fields by precisely preserving critical points (cells)
and the separatrices that connect them. Our approach generates a series of
(discrete) edits during compression time, which are applied to the decompressed
data to accurately reconstruct the complexes while maintaining the error within
prescribed bounds. We design a workflow that iteratively fixes critical cells
and separatrices in alternating steps until convergence within finite
iterations. Our approach addresses diverse application needs by offering users
multi-tier options to balance compression efficiency and feature preservation.
To enable effective integration with lossy compressors, we use GPU parallelism
to enhance the performance of each workflow component. We conduct experiments
on various datasets to demonstrate the effectiveness of our method in
accurately preserving Morse-Smale complexes.
| [
{
"version": "v1",
"created": "Wed, 25 Sep 2024 20:46:40 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 18:48:53 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Li",
"Yuxiao",
""
],
[
"Xia",
"Mingze",
""
],
[
"Liang",
"Xin",
""
],
[
"Wang",
"Bei",
""
],
[
"Guo",
"Hanqi",
""
]
] | TITLE: Multi-Tier Preservation of Discrete Morse Smale Complexes in
Error-Bounded Lossy Compression
ABSTRACT: We propose a novel method to preserve key topological structures (extrema,
saddles, separatrices, and persistence diagrams) associated with Morse-Smale
complexes in error-bounded lossy compressed scalar fields. Existing
error-bounded lossy compressors rarely consider preserving topological
structures such as discrete Morse-Smale complexes, leading to significant inaccuracies in
data interpretation and potentially resulting in incorrect scientific
conclusions. This paper mainly focuses on preserving the Morse-Smale complexes
in 2D/3D discrete scalar fields by precisely preserving critical points (cells)
and the separatrices that connect them. Our approach generates a series of
(discrete) edits during compression time, which are applied to the decompressed
data to accurately reconstruct the complexes while maintaining the error within
prescribed bounds. We design a workflow that iteratively fixes critical cells
and separatrices in alternating steps until convergence within finite
iterations. Our approach addresses diverse application needs by offering users
multi-tier options to balance compression efficiency and feature preservation.
To enable effective integration with lossy compressors, we use GPU parallelism
to enhance the performance of each workflow component. We conduct experiments
on various datasets to demonstrate the effectiveness of our method in
accurately preserving Morse-Smale complexes.
|
2409.18257 | Anirudh Mazumder | Anirudh Mazumder, Jianguo Liu | Developing a Dual-Stage Vision Transformer Model for Lung Disease
Classification | 3 pages, 2 figures | null | null | null | eess.IV cs.CV | http://creativecommons.org/licenses/by/4.0/ | Lung diseases have become a prevalent problem throughout the United States,
affecting over 34 million people. Accurate and timely diagnosis of the
different types of lung diseases is critical, and Artificial Intelligence (AI)
methods could speed up these processes. In this research, a dual-stage vision
transformer is built by integrating a Vision Transformer (ViT) and a Swin
Transformer to classify 14 different lung diseases from patients' X-ray scans.
The proposed model achieved an accuracy of 92.06% at the label level when
making predictions on an unseen testing subset of the
dataset after data preprocessing and training the neural network. The model
showed promise for accurately classifying lung diseases and diagnosing patients
who suffer from these harmful diseases.
| [
{
"version": "v1",
"created": "Thu, 26 Sep 2024 19:59:36 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 23:17:36 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Mazumder",
"Anirudh",
""
],
[
"Liu",
"Jianguo",
""
]
] | TITLE: Developing a Dual-Stage Vision Transformer Model for Lung Disease
Classification
ABSTRACT: Lung diseases have become a prevalent problem throughout the United States,
affecting over 34 million people. Accurate and timely diagnosis of the
different types of lung diseases is critical, and Artificial Intelligence (AI)
methods could speed up these processes. In this research, a dual-stage vision
transformer is built by integrating a Vision Transformer (ViT) and a Swin
Transformer to classify 14 different lung diseases from patients' X-ray scans.
The proposed model achieved an accuracy of 92.06% at the label level when
making predictions on an unseen testing subset of the
dataset after data preprocessing and training the neural network. The model
showed promise for accurately classifying lung diseases and diagnosing patients
who suffer from these harmful diseases.
|
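To make the dual-stage idea above concrete, here is a minimal sketch that combines a ViT and a Swin Transformer for 14-way chest X-ray classification. The timm model names, the feature-concatenation fusion, and all hyperparameters are assumptions for illustration; the paper's actual staging may differ (e.g., a sequential pipeline rather than fused features).

```python
import torch
import torch.nn as nn
import timm

class DualStageViT(nn.Module):
    def __init__(self, num_classes: int = 14):
        super().__init__()
        # num_classes=0 makes timm return pooled features instead of logits.
        self.vit = timm.create_model("vit_base_patch16_224",
                                     pretrained=True, num_classes=0)
        self.swin = timm.create_model("swin_base_patch4_window7_224",
                                      pretrained=True, num_classes=0)
        self.head = nn.Linear(self.vit.num_features + self.swin.num_features,
                              num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Fuse the two backbones' features by concatenation (an assumption).
        feats = torch.cat([self.vit(x), self.swin(x)], dim=-1)
        return self.head(feats)                 # one logit per disease label

model = DualStageViT()
logits = model(torch.randn(1, 3, 224, 224))     # e.g., a preprocessed X-ray
```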
2410.01100 | Jungyeul Park | Seohyun Song and Eunkyul Leah Jo and Yige Chen and Jeen-Pyo Hong and
Kyuwon Kim and Jin Wee and Miyoung Kang and KyungTae Lim and Jungyeul Park
and Chulwoo Park | Unlocking Korean Verbs: A User-Friendly Exploration into the Verb
Lexicon | NAACL 2025 System Demonstrations | null | null | null | cs.CL | http://creativecommons.org/licenses/by-nc-nd/4.0/ | The Sejong dictionary dataset offers a valuable resource, providing extensive
coverage of morphology, syntax, and semantic representation. This dataset can
be utilized to explore linguistic information in greater depth. The labeled
linguistic structures within this dataset form the basis for uncovering
relationships between words and phrases and their associations with target
verbs. This paper introduces a user-friendly web interface designed for the
collection and consolidation of verb-related information, with a particular
focus on subcategorization frames. Additionally, it outlines our efforts in
mapping this information by aligning subcategorization frames with
corresponding illustrative sentence examples. Furthermore, we provide a Python
library that simplifies syntactic parsing and semantic role labeling. These
tools are intended to assist individuals interested in harnessing the Sejong
dictionary dataset to develop applications for Korean language processing.
| [
{
"version": "v1",
"created": "Tue, 1 Oct 2024 22:03:34 GMT"
},
{
"version": "v2",
"created": "Sun, 1 Dec 2024 14:32:47 GMT"
},
{
"version": "v3",
"created": "Wed, 2 Apr 2025 23:59:53 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Song",
"Seohyun",
""
],
[
"Jo",
"Eunkyul Leah",
""
],
[
"Chen",
"Yige",
""
],
[
"Hong",
"Jeen-Pyo",
""
],
[
"Kim",
"Kyuwon",
""
],
[
"Wee",
"Jin",
""
],
[
"Kang",
"Miyoung",
""
],
[
"Lim",
"KyungTae",
""
],
[
"Park",
"Jungyeul",
""
],
[
"Park",
"Chulwoo",
""
]
] | TITLE: Unlocking Korean Verbs: A User-Friendly Exploration into the Verb
Lexicon
ABSTRACT: The Sejong dictionary dataset offers a valuable resource, providing extensive
coverage of morphology, syntax, and semantic representation. This dataset can
be utilized to explore linguistic information in greater depth. The labeled
linguistic structures within this dataset form the basis for uncovering
relationships between words and phrases and their associations with target
verbs. This paper introduces a user-friendly web interface designed for the
collection and consolidation of verb-related information, with a particular
focus on subcategorization frames. Additionally, it outlines our efforts in
mapping this information by aligning subcategorization frames with
corresponding illustrative sentence examples. Furthermore, we provide a Python
library that simplifies syntactic parsing and semantic role labeling. These
tools are intended to assist individuals interested in harnessing the Sejong
dictionary dataset to develop applications for Korean language processing.
|
2410.02179 | Adrian Chan | Adrian Chan, Anupam Mijar, Mehreen Saeed, Chau-Wai Wong, Akram Khater | HATFormer: Historic Handwritten Arabic Text Recognition with
Transformers | null | null | null | null | cs.CV cs.CL cs.LG | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Arabic handwritten text recognition (HTR) is challenging, especially for
historical texts, due to diverse writing styles and the intrinsic features of
Arabic script. Additionally, Arabic handwriting datasets are smaller than
English ones, making it difficult to train generalizable Arabic HTR models.
To address these challenges, we propose HATFormer, a transformer-based
encoder-decoder architecture that builds on a state-of-the-art English HTR
model. By leveraging the transformer's attention mechanism, HATFormer captures
spatial contextual information to address the intrinsic challenges of Arabic
script through differentiating cursive characters, decomposing visual
representations, and identifying diacritics. Our customization to historical
handwritten Arabic includes an image processor for effective ViT information
preprocessing, a text tokenizer for compact Arabic text representation, and a
training pipeline that accounts for a limited amount of historic Arabic
handwriting data. HATFormer achieves a character error rate (CER) of 8.6% on
the largest public historical handwritten Arabic dataset, with a 51%
improvement over the best baseline in the literature. HATFormer also attains a
comparable CER of 4.2% on the largest private non-historical dataset. Our work
demonstrates the feasibility of adapting an English HTR method to a
low-resource language with complex, language-specific challenges, contributing
to advancements in document digitization, information retrieval, and cultural
preservation.
| [
{
"version": "v1",
"created": "Thu, 3 Oct 2024 03:43:29 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 17:56:58 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Chan",
"Adrian",
""
],
[
"Mijar",
"Anupam",
""
],
[
"Saeed",
"Mehreen",
""
],
[
"Wong",
"Chau-Wai",
""
],
[
"Khater",
"Akram",
""
]
] | TITLE: HATFormer: Historic Handwritten Arabic Text Recognition with
Transformers
ABSTRACT: Arabic handwritten text recognition (HTR) is challenging, especially for
historical texts, due to diverse writing styles and the intrinsic features of
Arabic script. Additionally, Arabic handwriting datasets are smaller than
English ones, making it difficult to train generalizable Arabic HTR models.
To address these challenges, we propose HATFormer, a transformer-based
encoder-decoder architecture that builds on a state-of-the-art English HTR
model. By leveraging the transformer's attention mechanism, HATFormer captures
spatial contextual information to address the intrinsic challenges of Arabic
script through differentiating cursive characters, decomposing visual
representations, and identifying diacritics. Our customization to historical
handwritten Arabic includes an image processor for effective ViT information
preprocessing, a text tokenizer for compact Arabic text representation, and a
training pipeline that accounts for a limited amount of historic Arabic
handwriting data. HATFormer achieves a character error rate (CER) of 8.6% on
the largest public historical handwritten Arabic dataset, with a 51%
improvement over the best baseline in the literature. HATFormer also attains a
comparable CER of 4.2% on the largest private non-historical dataset. Our work
demonstrates the feasibility of adapting an English HTR method to a
low-resource language with complex, language-specific challenges, contributing
to advancements in document digitization, information retrieval, and cultural
preservation.
|
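For readers wanting to experiment with the transformer encoder-decoder HTR setup the abstract describes, the sketch below runs inference with the Hugging Face TrOCR handwritten checkpoint as a stand-in English base model; the specific base model, the image path, and any Arabic-specific tokenizer are assumptions, not details from the paper.

```python
from transformers import TrOCRProcessor, VisionEncoderDecoderModel
from PIL import Image

# Stand-in English HTR base model (an assumption; HATFormer's base may differ).
processor = TrOCRProcessor.from_pretrained("microsoft/trocr-base-handwritten")
model = VisionEncoderDecoderModel.from_pretrained("microsoft/trocr-base-handwritten")

image = Image.open("line_image.png").convert("RGB")  # a handwritten text-line crop
pixel_values = processor(images=image, return_tensors="pt").pixel_values
generated_ids = model.generate(pixel_values)
print(processor.batch_decode(generated_ids, skip_special_tokens=True)[0])
```

Adapting such a pipeline to historical Arabic would, per the abstract, further require a customized image processor, a compact Arabic tokenizer, and fine-tuning on the limited available handwriting data.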
2410.02660 | Tianyu Gao | Tianyu Gao, Alexander Wettig, Howard Yen, Danqi Chen | How to Train Long-Context Language Models (Effectively) | Our code, data, and models are available at
https://github.com/princeton-nlp/ProLong | null | null | null | cs.CL cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We study continued training and supervised fine-tuning (SFT) of a language
model (LM) to make effective use of long-context information. We first
establish a reliable evaluation protocol to guide model development -- instead
of perplexity or simple needle-in-a-haystack (NIAH) tests, we use a broad set
of long-context downstream tasks, and we evaluate models after SFT as this
better reveals long-context abilities. Supported by our robust evaluations, we
run thorough experiments to decide the data mix for continued pre-training, the
instruction tuning dataset, and many other design choices such as position
extrapolation. We find that (1) code repositories and books are excellent
sources of long data, but it is crucial to combine them with high-quality
short-context data; (2) training with a sequence length beyond the evaluation
length boosts long-context performance; (3) for SFT, using only short
instruction datasets yields strong performance on long-context tasks. Our final
model, ProLong-8B, which is initialized from Llama-3 and trained on 40B tokens,
demonstrates state-of-the-art long-context performance among similarly sized
models at a length of 128K. ProLong outperforms Llama-3.1-8B-Instruct on the
majority of long-context tasks despite using only 5% as many tokens during
long-context training. Additionally, ProLong can effectively process up to 512K
tokens, one of the longest context windows of publicly available LMs.
| [
{
"version": "v1",
"created": "Thu, 3 Oct 2024 16:46:52 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 13:26:46 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Gao",
"Tianyu",
""
],
[
"Wettig",
"Alexander",
""
],
[
"Yen",
"Howard",
""
],
[
"Chen",
"Danqi",
""
]
] | TITLE: How to Train Long-Context Language Models (Effectively)
ABSTRACT: We study continued training and supervised fine-tuning (SFT) of a language
model (LM) to make effective use of long-context information. We first
establish a reliable evaluation protocol to guide model development -- instead
of perplexity or simple needle-in-a-haystack (NIAH) tests, we use a broad set
of long-context downstream tasks, and we evaluate models after SFT as this
better reveals long-context abilities. Supported by our robust evaluations, we
run thorough experiments to decide the data mix for continued pre-training, the
instruction tuning dataset, and many other design choices such as position
extrapolation. We find that (1) code repositories and books are excellent
sources of long data, but it is crucial to combine them with high-quality
short-context data; (2) training with a sequence length beyond the evaluation
length boosts long-context performance; (3) for SFT, using only short
instruction datasets yields strong performance on long-context tasks. Our final
model, ProLong-8B, which is initialized from Llama-3 and trained on 40B tokens,
demonstrates state-of-the-art long-context performance among similarly sized
models at a length of 128K. ProLong outperforms Llama-3.1-8B-Instruct on the
majority of long-context tasks despite using only 5% as many tokens during
long-context training. Additionally, ProLong can effectively process up to 512K
tokens, one of the longest context windows of publicly available LMs.
|
2410.03408 | Tung Luu | Tung M. Luu, Donghoon Lee, and Chang D. Yoo | Predictive Coding for Decision Transformer | 8 pages, IROS 2024. The first two authors contributed equally
(Code: https://github.com/tunglm2203/pcdt) | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Recent work in offline reinforcement learning (RL) has demonstrated the
effectiveness of formulating decision-making as return-conditioned supervised
learning. Notably, the decision transformer (DT) architecture has shown promise
across various domains. However, despite its initial success, DTs have
underperformed on several challenging datasets in goal-conditioned RL. This
limitation stems from the inefficiency of return conditioning for guiding
policy learning, particularly in unstructured and suboptimal datasets,
resulting in DTs failing to effectively learn temporal compositionality.
Moreover, this problem might be further exacerbated in long-horizon
sparse-reward tasks. To address this challenge, we propose the Predictive
Coding for Decision Transformer (PCDT) framework, which leverages generalized
future conditioning to enhance DT methods. PCDT utilizes an architecture that
extends the DT framework, conditioned on predictive codings, enabling
decision-making based on both past and future factors, thereby improving
generalization. Through extensive experiments on eight datasets from the
AntMaze and FrankaKitchen environments, our proposed method achieves
performance on par with or surpassing existing popular value-based and
transformer-based methods in offline goal-conditioned RL. Furthermore, we also
evaluate our method on a goal-reaching task with a physical robot.
| [
{
"version": "v1",
"created": "Fri, 4 Oct 2024 13:17:34 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 10:35:28 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Luu",
"Tung M.",
""
],
[
"Lee",
"Donghoon",
""
],
[
"Yoo",
"Chang D.",
""
]
] | TITLE: Predictive Coding for Decision Transformer
ABSTRACT: Recent work in offline reinforcement learning (RL) has demonstrated the
effectiveness of formulating decision-making as return-conditioned supervised
learning. Notably, the decision transformer (DT) architecture has shown promise
across various domains. However, despite its initial success, DTs have
underperformed on several challenging datasets in goal-conditioned RL. This
limitation stems from the inefficiency of return conditioning for guiding
policy learning, particularly in unstructured and suboptimal datasets,
resulting in DTs failing to effectively learn temporal compositionality.
Moreover, this problem might be further exacerbated in long-horizon
sparse-reward tasks. To address this challenge, we propose the Predictive
Coding for Decision Transformer (PCDT) framework, which leverages generalized
future conditioning to enhance DT methods. PCDT utilizes an architecture that
extends the DT framework, conditioned on predictive codings, enabling
decision-making based on both past and future factors, thereby improving
generalization. Through extensive experiments on eight datasets from the
AntMaze and FrankaKitchen environments, our proposed method achieves
performance on par with or surpassing existing popular value-based and
transformer-based methods in offline goal-conditioned RL. Furthermore, we also
evaluate our method on a goal-reaching task with a physical robot.
|
2410.09871 | Narayan Adhikari | Narayan S. Adhikari, Shradha Agarwal | A Comparative Study of PDF Parsing Tools Across Diverse Document
Categories | 17 pages, 11 figures, 5 tables | null | null | null | cs.IR cs.DL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | PDF is one of the most prominent data formats, making PDF parsing crucial for
information extraction and retrieval, particularly with the rise of RAG
systems. While various PDF parsing tools exist, their effectiveness across
different document types remains understudied, especially beyond academic
papers. Our research aims to address this gap by comparing 10 popular PDF
parsing tools across 6 document categories using the DocLayNet dataset. These
tools include PyPDF, pdfminer-six, PyMuPDF, pdfplumber, pypdfium2,
Unstructured, Tabula, Camelot, as well as the deep learning-based tools Nougat
and Table Transformer (TATR). We evaluated both text extraction and table
detection capabilities. For text extraction, PyMuPDF and pypdfium2 generally
outperformed others, but all parsers struggled with Scientific and Patent
documents. For these challenging categories, learning-based tools like Nougat
demonstrated superior performance. In table detection, TATR excelled in the
Financial, Patent, Law & Regulations, and Scientific categories. Table
detection tool Camelot performed best for tender documents, while PyMuPDF
performed best in the Manual category. Our findings highlight the
importance of selecting appropriate parsing tools based on document type and
specific tasks, providing valuable insights for researchers and practitioners
working with diverse document sources.
| [
{
"version": "v1",
"created": "Sun, 13 Oct 2024 15:11:31 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 12:09:36 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Adhikari",
"Narayan S.",
""
],
[
"Agarwal",
"Shradha",
""
]
] | TITLE: A Comparative Study of PDF Parsing Tools Across Diverse Document
Categories
ABSTRACT: PDF is one of the most prominent data formats, making PDF parsing crucial for
information extraction and retrieval, particularly with the rise of RAG
systems. While various PDF parsing tools exist, their effectiveness across
different document types remains understudied, especially beyond academic
papers. Our research aims to address this gap by comparing 10 popular PDF
parsing tools across 6 document categories using the DocLayNet dataset. These
tools include PyPDF, pdfminer-six, PyMuPDF, pdfplumber, pypdfium2,
Unstructured, Tabula, Camelot, as well as the deep learning-based tools Nougat
and Table Transformer (TATR). We evaluated both text extraction and table
detection capabilities. For text extraction, PyMuPDF and pypdfium2 generally
outperformed others, but all parsers struggled with Scientific and Patent
documents. For these challenging categories, learning-based tools like Nougat
demonstrated superior performance. In table detection, TATR excelled in the
Financial, Patent, Law & Regulations, and Scientific categories. Table
detection tool Camelot performed best for tender documents, while PyMuPDF
performed best in the Manual category. Our findings highlight the
importance of selecting appropriate parsing tools based on document type and
specific tasks, providing valuable insights for researchers and practitioners
working with diverse document sources.
|
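As a hedged illustration of the per-tool extraction being benchmarked above, the snippet below pulls page text with two of the compared parsers, PyMuPDF and pypdfium2; it is not the authors' evaluation harness, and the file path is a placeholder.

```python
import fitz                     # PyMuPDF
import pypdfium2 as pdfium

def text_with_pymupdf(path: str) -> str:
    # Concatenate the plain text of every page.
    with fitz.open(path) as doc:
        return "\n".join(page.get_text() for page in doc)

def text_with_pypdfium2(path: str) -> str:
    pdf = pdfium.PdfDocument(path)
    return "\n".join(pdf[i].get_textpage().get_text_range()
                     for i in range(len(pdf)))

# print(text_with_pymupdf("sample.pdf"))
```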
2410.14121 | Van Tuan Nguyen | Van Tuan Nguyen and Razvan Beuran | FedMSE: Semi-supervised federated learning approach for IoT network
intrusion detection | null | Computers & Security, Volume 151, April 2025,
104337 | 10.1016/j.cose.2025.104337 | null | cs.LG cs.AI | http://creativecommons.org/licenses/by-nc-nd/4.0/ | This paper proposes a novel federated learning approach for improving IoT
network intrusion detection. The rise of IoT has expanded the cyber attack
surface, making traditional centralized machine learning methods insufficient
due to concerns about data availability, computational resources, transfer
costs, and especially privacy preservation. A semi-supervised federated
learning model was developed to overcome these issues, combining the Shrink
Autoencoder and Centroid one-class classifier (SAE-CEN). This approach enhances
the performance of intrusion detection by effectively representing normal
network data and accurately identifying anomalies in the decentralized
strategy. Additionally, a mean square error-based aggregation algorithm
(MSEAvg) was introduced to improve global model performance by prioritizing
more accurate local models. The results obtained in our experimental setup,
which uses various settings relying on the N-BaIoT dataset and Dirichlet
distribution, demonstrate significant improvements in real-world heterogeneous
IoT networks: detection accuracy rises from 93.98$\pm$2.90 to 97.30$\pm$0.49,
learning costs are reduced by requiring only 50\% of gateways to participate in
training, and the approach remains robust in large-scale networks.
| [
{
"version": "v1",
"created": "Fri, 18 Oct 2024 02:23:57 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 15:16:55 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Nguyen",
"Van Tuan",
""
],
[
"Beuran",
"Razvan",
""
]
] | TITLE: FedMSE: Semi-supervised federated learning approach for IoT network
intrusion detection
ABSTRACT: This paper proposes a novel federated learning approach for improving IoT
network intrusion detection. The rise of IoT has expanded the cyber attack
surface, making traditional centralized machine learning methods insufficient
due to concerns about data availability, computational resources, transfer
costs, and especially privacy preservation. A semi-supervised federated
learning model was developed to overcome these issues, combining the Shrink
Autoencoder and Centroid one-class classifier (SAE-CEN). This approach enhances
the performance of intrusion detection by effectively representing normal
network data and accurately identifying anomalies in the decentralized
strategy. Additionally, a mean square error-based aggregation algorithm
(MSEAvg) was introduced to improve global model performance by prioritizing
more accurate local models. The results obtained in our experimental setup,
which uses various settings relying on the N-BaIoT dataset and Dirichlet
distribution, demonstrate significant improvements in real-world heterogeneous
IoT networks: detection accuracy rises from 93.98$\pm$2.90 to 97.30$\pm$0.49,
learning costs are reduced by requiring only 50\% of gateways to participate in
training, and the approach remains robust in large-scale networks.
|
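The MSEAvg idea above can be sketched as an error-weighted variant of federated averaging; the inverse-MSE weighting rule below is an assumption about the aggregation formula, shown only to convey the mechanism of prioritizing more accurate local models.

```python
import numpy as np

def mse_avg(local_states, local_mses):
    """local_states: list of dicts mapping parameter name -> np.ndarray;
    local_mses: per-gateway validation mean squared errors."""
    # Assumed rule: weight each local model by the inverse of its MSE.
    weights = 1.0 / (np.asarray(local_mses, dtype=float) + 1e-12)
    weights /= weights.sum()                       # normalize to sum to 1
    merged = {}
    for name in local_states[0]:
        stacked = np.stack([state[name] for state in local_states])
        merged[name] = np.tensordot(weights, stacked, axes=1)  # weighted sum
    return merged

# Toy usage: two local models; the more accurate second one dominates.
locals_ = [{"w": np.ones((2, 2))}, {"w": np.zeros((2, 2))}]
print(mse_avg(locals_, local_mses=[0.9, 0.1])["w"])
```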
2410.17242 | Haian Jin | Haian Jin, Hanwen Jiang, Hao Tan, Kai Zhang, Sai Bi, Tianyuan Zhang,
Fujun Luan, Noah Snavely, Zexiang Xu | LVSM: A Large View Synthesis Model with Minimal 3D Inductive Bias | project page: https://haian-jin.github.io/projects/LVSM/ | null | null | null | cs.CV cs.GR cs.LG | http://creativecommons.org/licenses/by/4.0/ | We propose the Large View Synthesis Model (LVSM), a novel transformer-based
approach for scalable and generalizable novel view synthesis from sparse-view
inputs. We introduce two architectures: (1) an encoder-decoder LVSM, which
encodes input image tokens into a fixed number of 1D latent tokens, functioning
as a fully learned scene representation, and decodes novel-view images from
them; and (2) a decoder-only LVSM, which directly maps input images to
novel-view outputs, completely eliminating intermediate scene representations.
Both models bypass the 3D inductive biases used in previous methods -- from 3D
representations (e.g., NeRF, 3DGS) to network designs (e.g., epipolar
projections, plane sweeps) -- addressing novel view synthesis with a fully
data-driven approach. While the encoder-decoder model offers faster inference
due to its independent latent representation, the decoder-only LVSM achieves
superior quality, scalability, and zero-shot generalization, outperforming
previous state-of-the-art methods by 1.5 to 3.5 dB PSNR. Comprehensive
evaluations across multiple datasets demonstrate that both LVSM variants
achieve state-of-the-art novel view synthesis quality. Notably, our models
surpass all previous methods even with reduced computational resources (1-2
GPUs). Please see our website for more details:
https://haian-jin.github.io/projects/LVSM/ .
| [
{
"version": "v1",
"created": "Tue, 22 Oct 2024 17:58:28 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 21:12:32 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Jin",
"Haian",
""
],
[
"Jiang",
"Hanwen",
""
],
[
"Tan",
"Hao",
""
],
[
"Zhang",
"Kai",
""
],
[
"Bi",
"Sai",
""
],
[
"Zhang",
"Tianyuan",
""
],
[
"Luan",
"Fujun",
""
],
[
"Snavely",
"Noah",
""
],
[
"Xu",
"Zexiang",
""
]
] | TITLE: LVSM: A Large View Synthesis Model with Minimal 3D Inductive Bias
ABSTRACT: We propose the Large View Synthesis Model (LVSM), a novel transformer-based
approach for scalable and generalizable novel view synthesis from sparse-view
inputs. We introduce two architectures: (1) an encoder-decoder LVSM, which
encodes input image tokens into a fixed number of 1D latent tokens, functioning
as a fully learned scene representation, and decodes novel-view images from
them; and (2) a decoder-only LVSM, which directly maps input images to
novel-view outputs, completely eliminating intermediate scene representations.
Both models bypass the 3D inductive biases used in previous methods -- from 3D
representations (e.g., NeRF, 3DGS) to network designs (e.g., epipolar
projections, plane sweeps) -- addressing novel view synthesis with a fully
data-driven approach. While the encoder-decoder model offers faster inference
due to its independent latent representation, the decoder-only LVSM achieves
superior quality, scalability, and zero-shot generalization, outperforming
previous state-of-the-art methods by 1.5 to 3.5 dB PSNR. Comprehensive
evaluations across multiple datasets demonstrate that both LVSM variants
achieve state-of-the-art novel view synthesis quality. Notably, our models
surpass all previous methods even with reduced computational resources (1-2
GPUs). Please see our website for more details:
https://haian-jin.github.io/projects/LVSM/ .
|
2411.05841 | Thea Brüsch | Thea Brüsch, Kristoffer K. Wickstrøm, Mikkel N. Schmidt, Robert
Jenssen, Tommy S. Alstrøm | FLEXtime: Filterbank learning to explain time series | Accepted to The 3rd World Conference on eXplainable Artificial
Intelligence | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | State-of-the-art methods for explaining predictions from time series involve
learning an instance-wise saliency mask for each time step; however, many types
of time series are difficult to interpret in the time domain, due to the
inherently complex nature of the data. Instead, we propose to view time series
explainability as saliency maps over interpretable parts, leaning on
established signal processing methodology on signal decomposition.
Specifically, we propose a new method called FLEXtime that uses a bank of
bandpass filters to split the time series into frequency bands. Then, we learn
the combination of these bands that optimally explains the model's prediction.
Our extensive evaluation shows that, on average, FLEXtime outperforms
state-of-the-art explainability methods across a range of datasets. FLEXtime
fills an important gap in the current time series explainability methodology
and is a valuable tool for a wide range of time series such as EEG and audio.
Code is available at https://github.com/theabrusch/FLEXtime.
| [
{
"version": "v1",
"created": "Wed, 6 Nov 2024 15:06:42 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 11:00:24 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Apr 2025 09:04:46 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Brüsch",
"Thea",
""
],
[
"Wickstrøm",
"Kristoffer K.",
""
],
[
"Schmidt",
"Mikkel N.",
""
],
[
"Jenssen",
"Robert",
""
],
[
"Alstrøm",
"Tommy S.",
""
]
] | TITLE: FLEXtime: Filterbank learning to explain time series
ABSTRACT: State-of-the-art methods for explaining predictions from time series involve
learning an instance-wise saliency mask for each time step; however, many types
of time series are difficult to interpret in the time domain, due to the
inherently complex nature of the data. Instead, we propose to view time series
explainability as saliency maps over interpretable parts, leaning on
established signal processing methodology on signal decomposition.
Specifically, we propose a new method called FLEXtime that uses a bank of
bandpass filters to split the time series into frequency bands. Then, we learn
the combination of these bands that optimally explains the model's prediction.
Our extensive evaluation shows that, on average, FLEXtime outperforms
state-of-the-art explainability methods across a range of datasets. FLEXtime
fills an important gap in the current time series explainability methodology
and is a valuable tool for a wide range of time series such as EEG and audio.
Code is available at https://github.com/theabrusch/FLEXtime.
|
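Below is a minimal sketch of the filterbank step FLEXtime builds on, assuming Butterworth bandpass filters and arbitrary EEG-style band edges (the paper's filter design may differ).

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def filterbank(x, fs, band_edges, order=4):
    """Decompose a 1-D signal into len(band_edges) - 1 bandpass components."""
    bands = []
    for lo, hi in zip(band_edges[:-1], band_edges[1:]):
        # Zero-phase bandpass filtering for each consecutive band.
        sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
        bands.append(sosfiltfilt(sos, x))
    return np.stack(bands)                      # shape: (n_bands, n_samples)

fs = 256                                        # e.g., an EEG sampling rate
x = np.random.randn(fs * 10)
bands = filterbank(x, fs, band_edges=[1, 4, 8, 13, 30, 45])
```

FLEXtime would then learn a weighting over these bands that best explains the model's prediction; only the decomposition is shown here.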
2411.08297 | Aditya Mittal | Norman Matloff and Aditya Mittal | TowerDebias: A Novel Unfairness Removal Method Based on the Tower
Property | Completed preprint version. To be submitted for review | null | null | null | cs.LG cs.AI math.PR stat.AP stat.ML | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Decision-making processes have increasingly come to rely on sophisticated
machine learning tools, raising critical concerns about the fairness of their
predictions with respect to sensitive groups. The widespread adoption of
commercial "black-box" models necessitates careful consideration of their legal
and ethical implications for consumers. When users interact with such black-box
models, a key challenge arises: how can the influence of sensitive attributes,
such as race or gender, be mitigated or removed from their predictions? We
propose towerDebias (tDB), a novel post-processing method designed to reduce
the influence of sensitive attributes in predictions made by black-box models.
Our tDB approach leverages the Tower Property from probability theory to
improve prediction fairness without requiring retraining of the original model.
This method is highly versatile, as it requires no prior knowledge of the
original algorithm's internal structure and is adaptable to a diverse range of
applications. We present a formal fairness improvement theorem for tDB and
showcase its effectiveness in both regression and classification tasks using
multiple real-world datasets.
| [
{
"version": "v1",
"created": "Wed, 13 Nov 2024 02:32:38 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 19:30:44 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Matloff",
"Norman",
""
],
[
"Mittal",
"Aditya",
""
]
] | TITLE: TowerDebias: A Novel Unfairness Removal Method Based on the Tower
Property
ABSTRACT: Decision-making processes have increasingly come to rely on sophisticated
machine learning tools, raising critical concerns about the fairness of their
predictions with respect to sensitive groups. The widespread adoption of
commercial "black-box" models necessitates careful consideration of their legal
and ethical implications for consumers. When users interact with such black-box
models, a key challenge arises: how can the influence of sensitive attributes,
such as race or gender, be mitigated or removed from their predictions? We
propose towerDebias (tDB), a novel post-processing method designed to reduce
the influence of sensitive attributes in predictions made by black-box models.
Our tDB approach leverages the Tower Property from probability theory to
improve prediction fairness without requiring retraining of the original model.
This method is highly versatile, as it requires no prior knowledge of the
original algorithm's internal structure and is adaptable to a diverse range of
applications. We present a formal fairness improvement theorem for tDB and
showcase its effectiveness in both regression and classification tasks using
multiple real-world datasets.
|
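The tower property underlying tDB states that $E[Y|X] = E[E[Y|X,S]|X]$, so averaging a black-box model's S-conditional predictions removes the direct influence of the sensitive attribute S. The sketch below conveys that intuition, with the simplifying assumption of marginal rather than conditional averaging over S; it is not the authors' implementation.

```python
import numpy as np

def tower_debias(f, X, s_values, s_probs):
    """f: black-box callable (X, s) -> predictions for sensitive value s;
    s_values, s_probs: support and empirical probabilities of S."""
    preds = np.stack([f(X, s) for s in s_values])    # shape (n_s, n_samples)
    # Average out S (marginal weighting here is a simplification).
    return np.tensordot(np.asarray(s_probs), preds, axes=1)

# Toy black box whose output shifts with the sensitive attribute s.
f = lambda X, s: X.sum(axis=1) + 2.0 * s
X = np.random.randn(5, 3)
debiased = tower_debias(f, X, s_values=[0, 1], s_probs=[0.5, 0.5])
```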
2411.08306 | Songtao Liu | Songtao Liu, Dandan Zhang, Zhengkai Tu, Hanjun Dai, Peng Liu | Evaluating Molecule Synthesizability via Retrosynthetic Planning and
Reaction Prediction | null | null | null | null | cs.LG q-bio.QM | http://creativecommons.org/licenses/by/4.0/ | A significant challenge in wet lab experiments with current drug design
generative models is the trade-off between pharmacological properties and
synthesizability. Molecules predicted to have highly desirable properties are
often difficult to synthesize, while those that are easily synthesizable tend
to exhibit less favorable properties. As a result, evaluating the
synthesizability of molecules in general drug design scenarios remains a
significant challenge in the field of drug discovery. The commonly used
synthetic accessibility (SA) score aims to evaluate the ease of synthesizing
generated molecules, but it falls short of guaranteeing that synthetic routes
can actually be found. Inspired by recent advances in top-down synthetic route
generation and forward reaction prediction, we propose a new, data-driven
metric to evaluate molecule synthesizability. This novel metric leverages the
synergistic duality between retrosynthetic planners and reaction predictors,
both of which are trained on extensive reaction datasets. To demonstrate the
efficacy of our metric, we conduct a comprehensive evaluation of round-trip
scores across a range of representative molecule generative models.
| [
{
"version": "v1",
"created": "Wed, 13 Nov 2024 03:08:33 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 05:16:18 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Liu",
"Songtao",
""
],
[
"Zhang",
"Dandan",
""
],
[
"Tu",
"Zhengkai",
""
],
[
"Dai",
"Hanjun",
""
],
[
"Liu",
"Peng",
""
]
] | TITLE: Evaluating Molecule Synthesizability via Retrosynthetic Planning and
Reaction Prediction
ABSTRACT: A significant challenge in wet lab experiments with current drug design
generative models is the trade-off between pharmacological properties and
synthesizability. Molecules predicted to have highly desirable properties are
often difficult to synthesize, while those that are easily synthesizable tend
to exhibit less favorable properties. As a result, evaluating the
synthesizability of molecules in general drug design scenarios remains a
significant challenge in the field of drug discovery. The commonly used
synthetic accessibility (SA) score aims to evaluate the ease of synthesizing
generated molecules, but it falls short of guaranteeing that synthetic routes
can actually be found. Inspired by recent advances in top-down synthetic route
generation and forward reaction prediction, we propose a new, data-driven
metric to evaluate molecule synthesizability. This novel metric leverages the
synergistic duality between retrosynthetic planners and reaction predictors,
both of which are trained on extensive reaction datasets. To demonstrate the
efficacy of our metric, we conduct a comprehensive evaluation of round-trip
scores across a range of representative molecule generative models.
|
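A hedged sketch of a round-trip check in the spirit described above: a retrosynthetic planner proposes a route, a forward reaction predictor replays it, and the molecule counts as synthesizable if the final predicted product matches the target. `plan_route` and `predict_product` stand in for trained models and are hypothetical; the authors' actual scoring may differ.

```python
from rdkit import Chem

def round_trip_ok(target_smiles: str, plan_route, predict_product) -> bool:
    """plan_route returns an ordered list of reactant sets leading to the
    target; predict_product maps a reactant set to a product SMILES."""
    product = None
    for reactants in plan_route(target_smiles):
        product = predict_product(reactants)   # replay the route forward
    # Compare canonical SMILES so equivalent notations match.
    canon = lambda s: Chem.MolToSmiles(Chem.MolFromSmiles(s))
    return product is not None and canon(product) == canon(target_smiles)
```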
2412.01477 | Nitish Mital | Nitish Mital, Simon Malzard, Richard Walters, Celso M. De Melo,
Raghuveer Rao, Victoria Nockles | Improving Object Detection by Modifying Synthetic Data with Explainable
AI | null | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Limited real-world data severely impacts model performance in many computer
vision domains, particularly for samples that are underrepresented in training.
Synthetically generated images are a promising solution, but 1) it remains
unclear how to design synthetic training data to optimally improve model
performance (e.g., whether and where to introduce more realism or more
abstraction) and 2) the domain expertise, time and effort required from human
operators for this design and optimisation process represents a major practical
challenge. Here we propose a novel conceptual approach to improve the
efficiency of designing synthetic images, by using robust Explainable AI (XAI)
techniques to guide a human-in-the-loop process of modifying 3D mesh models
used to generate these images. Importantly, this framework allows both
modifications that increase and decrease realism in synthetic data, which can
both improve model performance. We illustrate this concept using a real-world
example where data are sparse: detection of vehicles in infrared imagery. We
fine-tune an initial YOLOv8 model on the ATR DSIAC infrared dataset and
synthetic images generated from 3D mesh models in the Unity gaming engine, and
then use XAI saliency maps to guide modification of our Unity models. We show
that synthetic data can improve detection of vehicles in orientations unseen in
training by 4.6% (to mAP50 = 94.6%). We further improve performance by an
additional 1.5% (to 96.1%) through our new XAI-guided approach, which reduces
misclassifications through both increasing and decreasing the realism of
different parts of the synthetic data. Our proof-of-concept results pave the
way for fine, XAI-controlled curation of synthetic datasets tailored to improve
object detection performance, whilst simultaneously reducing the burden on
human operators in designing and optimising these datasets.
| [
{
"version": "v1",
"created": "Mon, 2 Dec 2024 13:24:43 GMT"
},
{
"version": "v2",
"created": "Thu, 27 Mar 2025 13:57:53 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Apr 2025 12:02:11 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Mital",
"Nitish",
""
],
[
"Malzard",
"Simon",
""
],
[
"Walters",
"Richard",
""
],
[
"De Melo",
"Celso M.",
""
],
[
"Rao",
"Raghuveer",
""
],
[
"Nockles",
"Victoria",
""
]
] | TITLE: Improving Object Detection by Modifying Synthetic Data with Explainable
AI
ABSTRACT: Limited real-world data severely impacts model performance in many computer
vision domains, particularly for samples that are underrepresented in training.
Synthetically generated images are a promising solution, but 1) it remains
unclear how to design synthetic training data to optimally improve model
performance (e.g., whether and where to introduce more realism or more
abstraction) and 2) the domain expertise, time and effort required from human
operators for this design and optimisation process represents a major practical
challenge. Here we propose a novel conceptual approach to improve the
efficiency of designing synthetic images, by using robust Explainable AI (XAI)
techniques to guide a human-in-the-loop process of modifying 3D mesh models
used to generate these images. Importantly, this framework allows both
modifications that increase and decrease realism in synthetic data, which can
both improve model performance. We illustrate this concept using a real-world
example where data are sparse: detection of vehicles in infrared imagery. We
fine-tune an initial YOLOv8 model on the ATR DSIAC infrared dataset and
synthetic images generated from 3D mesh models in the Unity gaming engine, and
then use XAI saliency maps to guide modification of our Unity models. We show
that synthetic data can improve detection of vehicles in orientations unseen in
training by 4.6% (to mAP50 = 94.6%). We further improve performance by an
additional 1.5% (to 96.1%) through our new XAI-guided approach, which reduces
misclassifications through both increasing and decreasing the realism of
different parts of the synthetic data. Our proof-of-concept results pave the
way for fine, XAI-controlled curation of synthetic datasets tailored to improve
object detection performance, whilst simultaneously reducing the burden on
human operators in designing and optimising these datasets.
|
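A minimal sketch of the fine-tuning step in the pipeline above, using the ultralytics API; the dataset YAML name is a hypothetical placeholder, and the XAI-guided mesh-editing loop around it is not shown.

```python
from ultralytics import YOLO

model = YOLO("yolov8n.pt")                      # pretrained checkpoint
model.train(data="dsiac_plus_synthetic.yaml",   # hypothetical dataset config
            epochs=50, imgsz=640)
metrics = model.val()                           # reports mAP50, among others
```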
2412.01543 | Yufeng Jin | Yufeng Jin, Vignesh Prasad, Snehal Jauhri, Mathias Franzius, Georgia
Chalvatzaki | 6DOPE-GS: Online 6D Object Pose Estimation using Gaussian Splatting | null | null | null | null | cs.CV cs.RO | http://creativecommons.org/licenses/by/4.0/ | Efficient and accurate object pose estimation is an essential component for
modern vision systems in many applications such as Augmented Reality,
autonomous driving, and robotics. While research in model-based 6D object pose
estimation has delivered promising results, model-free methods are hindered by
the high computational load in rendering and inferring consistent poses of
arbitrary objects in a live RGB-D video stream. To address this issue, we
present 6DOPE-GS, a novel method for online 6D object pose estimation &
tracking with a single RGB-D camera by effectively leveraging advances in
Gaussian Splatting. Thanks to the fast differentiable rendering capabilities of
Gaussian Splatting, 6DOPE-GS can simultaneously optimize for 6D object poses
and 3D object reconstruction. To achieve the necessary efficiency and accuracy
for live tracking, our method uses incremental 2D Gaussian Splatting with an
intelligent dynamic keyframe selection procedure to achieve high spatial object
coverage and prevent erroneous pose updates. We also propose an opacity
statistic-based pruning mechanism for adaptive Gaussian density control, to
ensure training stability and efficiency. We evaluate our method on the HO3D
and YCBInEOAT datasets and show that 6DOPE-GS matches the performance of
state-of-the-art baselines for model-free simultaneous 6D pose tracking and
reconstruction while providing a 5$\times$ speedup. We also demonstrate the
method's suitability for live, dynamic object tracking and reconstruction in a
real-world setting.
| [
{
"version": "v1",
"created": "Mon, 2 Dec 2024 14:32:19 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 10:25:40 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Jin",
"Yufeng",
""
],
[
"Prasad",
"Vignesh",
""
],
[
"Jauhri",
"Snehal",
""
],
[
"Franzius",
"Mathias",
""
],
[
"Chalvatzaki",
"Georgia",
""
]
] | TITLE: 6DOPE-GS: Online 6D Object Pose Estimation using Gaussian Splatting
ABSTRACT: Efficient and accurate object pose estimation is an essential component for
modern vision systems in many applications such as Augmented Reality,
autonomous driving, and robotics. While research in model-based 6D object pose
estimation has delivered promising results, model-free methods are hindered by
the high computational load in rendering and inferring consistent poses of
arbitrary objects in a live RGB-D video stream. To address this issue, we
present 6DOPE-GS, a novel method for online 6D object pose estimation &
tracking with a single RGB-D camera by effectively leveraging advances in
Gaussian Splatting. Thanks to the fast differentiable rendering capabilities of
Gaussian Splatting, 6DOPE-GS can simultaneously optimize for 6D object poses
and 3D object reconstruction. To achieve the necessary efficiency and accuracy
for live tracking, our method uses incremental 2D Gaussian Splatting with an
intelligent dynamic keyframe selection procedure to achieve high spatial object
coverage and prevent erroneous pose updates. We also propose an opacity
statistic-based pruning mechanism for adaptive Gaussian density control, to
ensure training stability and efficiency. We evaluate our method on the HO3D
and YCBInEOAT datasets and show that 6DOPE-GS matches the performance of
state-of-the-art baselines for model-free simultaneous 6D pose tracking and
reconstruction while providing a 5$\times$ speedup. We also demonstrate the
method's suitability for live, dynamic object tracking and reconstruction in a
real-world setting.
|
2412.01619 | Kang Liu | Kang Liu and Enrique Zuazua | Representation and Regression Problems in Neural Networks: Relaxation,
Generalization, and Numerics | 39 pages, 6 figures | null | null | null | cs.LG math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this work, we address three non-convex optimization problems associated
with the training of shallow neural networks (NNs) for exact and approximate
representation, as well as for regression tasks. Through a mean-field approach,
we convexify these problems and, applying a representer theorem, prove the
absence of relaxation gaps. We establish generalization bounds for the
resulting NN solutions, assessing their predictive performance on test datasets
and, analyzing the impact of key hyperparameters on these bounds, propose
optimal choices.
On the computational side, we examine the discretization of the convexified
problems and derive convergence rates. For low-dimensional datasets, these
discretized problems are efficiently solvable using the simplex method. For
high-dimensional datasets, we propose a sparsification algorithm that, combined
with gradient descent for over-parameterized shallow NNs, yields effective
solutions to the primal problems.
| [
{
"version": "v1",
"created": "Mon, 2 Dec 2024 15:40:29 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 11:29:53 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Liu",
"Kang",
""
],
[
"Zuazua",
"Enrique",
""
]
] | TITLE: Representation and Regression Problems in Neural Networks: Relaxation,
Generalization, and Numerics
ABSTRACT: In this work, we address three non-convex optimization problems associated
with the training of shallow neural networks (NNs) for exact and approximate
representation, as well as for regression tasks. Through a mean-field approach,
we convexify these problems and, applying a representer theorem, prove the
absence of relaxation gaps. We establish generalization bounds for the
resulting NN solutions, assessing their predictive performance on test datasets
and, analyzing the impact of key hyperparameters on these bounds, propose
optimal choices.
On the computational side, we examine the discretization of the convexified
problems and derive convergence rates. For low-dimensional datasets, these
discretized problems are efficiently solvable using the simplex method. For
high-dimensional datasets, we propose a sparsification algorithm that, combined
with gradient descent for over-parameterized shallow NNs, yields effective
solutions to the primal problems.
|
2412.04255 | MohammadSadegh KhajueeZadeh | Ali Pourghoraba, MohammadSadegh KhajueeZadeh, Ali Amini, Abolfazl
Vahedi, Gholam Reza Agah, and Akbar Rahideh | Model-Agnostic Meta-Learning for Fault Diagnosis of Induction Motors in
Data-Scarce Environments with Varying Operating Conditions and Electric Drive
Noise | null | null | 10.1109/TEC.2025.3556100 | null | eess.SY cs.SY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Reliable mechanical fault detection with limited data is crucial for the
effective operation of induction machines, particularly given the real-world
challenges present in industrial datasets, such as significant imbalances
between healthy and faulty samples and the scarcity of data representing faulty
conditions. This research introduces an innovative meta-learning approach to
address these issues, focusing on mechanical fault detection in induction
motors across diverse operating conditions while mitigating the adverse effects
of drive noise in scenarios with limited data. The process of identifying
faults under varying operating conditions is framed as a few-shot
classification challenge and approached through a model-agnostic meta-learning
strategy. Specifically, this approach begins with training a meta-learner
across multiple interconnected fault-diagnosis tasks conducted under different
operating conditions. In this stage, cross-entropy is utilized to optimize
parameters and develop a robust representation of the tasks. Subsequently, the
parameters of the meta-learner are fine-tuned for new tasks, enabling rapid
adaptation using only a small number of samples. This method achieves excellent
accuracy in fault detection across various conditions, even when data
availability is restricted. The findings indicate that the proposed model
outperforms other sophisticated techniques, providing enhanced generalization
and quicker adaptation. The accuracy of fault diagnosis reaches a minimum of
99%, underscoring the model's effectiveness for reliable fault identification.
| [
{
"version": "v1",
"created": "Thu, 5 Dec 2024 15:34:40 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 13:23:10 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Pourghoraba",
"Ali",
""
],
[
"KhajueeZadeh",
"MohammadSadegh",
""
],
[
"Amini",
"Ali",
""
],
[
"Vahedi",
"Abolfazl",
""
],
[
"Agah",
"Gholam Reza",
""
],
[
"Rahideh",
"Akbar",
""
]
] | TITLE: Model-Agnostic Meta-Learning for Fault Diagnosis of Induction Motors in
Data-Scarce Environments with Varying Operating Conditions and Electric Drive
Noise
ABSTRACT: Reliable mechanical fault detection with limited data is crucial for the
effective operation of induction machines, particularly given the real-world
challenges present in industrial datasets, such as significant imbalances
between healthy and faulty samples and the scarcity of data representing faulty
conditions. This research introduces an innovative meta-learning approach to
address these issues, focusing on mechanical fault detection in induction
motors across diverse operating conditions while mitigating the adverse effects
of drive noise in scenarios with limited data. The process of identifying
faults under varying operating conditions is framed as a few-shot
classification challenge and approached through a model-agnostic meta-learning
strategy. Specifically, this approach begins with training a meta-learner
across multiple interconnected fault-diagnosis tasks conducted under different
operating conditions. In this stage, cross-entropy is utilized to optimize
parameters and develop a robust representation of the tasks. Subsequently, the
parameters of the meta-learner are fine-tuned for new tasks, enabling rapid
adaptation using only a small number of samples. This method achieves excellent
accuracy in fault detection across various conditions, even when data
availability is restricted. The findings indicate that the proposed model
outperforms other sophisticated techniques, providing enhanced generalization
and quicker adaptation. The accuracy of fault diagnosis reaches a minimum of
99%, underscoring the model's effectiveness for reliable fault identification.
|
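The meta-learning procedure above follows the standard MAML pattern of an inner adaptation step per task and an outer update across tasks. The sketch below shows that pattern with a toy classifier and random stand-in data; the network, task construction, and hyperparameters are placeholders, not the paper's settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call

model = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 4))
meta_opt = torch.optim.Adam(model.parameters(), lr=1e-3)
inner_lr = 0.01

def adapted_loss(support, query):
    params = dict(model.named_parameters())
    xs, ys = support
    # Inner step: one gradient update on this task's support set.
    loss = F.cross_entropy(functional_call(model, params, (xs,)), ys)
    grads = torch.autograd.grad(loss, list(params.values()), create_graph=True)
    fast = {k: v - inner_lr * g for (k, v), g in zip(params.items(), grads)}
    xq, yq = query
    # Outer objective: loss of the adapted parameters on the query set.
    return F.cross_entropy(functional_call(model, fast, (xq,)), yq)

# One meta-update over a batch of (support, query) fault-diagnosis tasks.
tasks = [((torch.randn(8, 64), torch.randint(0, 4, (8,))),
          (torch.randn(8, 64), torch.randint(0, 4, (8,)))) for _ in range(4)]
meta_opt.zero_grad()
meta_loss = sum(adapted_loss(s, q) for s, q in tasks) / len(tasks)
meta_loss.backward()
meta_opt.step()
```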
2412.07237 | Jiayi Su | Jiayi Su, Youhe Feng, Zheng Li, Jinhua Song, Yangfan He, Botao Ren,
Botian Xu | ArtFormer: Controllable Generation of Diverse 3D Articulated Objects | CVPR 2025. impl. repo: https://github.com/ShuYuMo2003/ArtFormer | null | null | null | cs.CV cs.AI cs.RO | http://creativecommons.org/licenses/by/4.0/ | This paper presents a novel framework for modeling and conditional generation
of 3D articulated objects. Troubled by flexibility-quality tradeoffs, existing
methods are often limited to using predefined structures or retrieving shapes
from static datasets. To address these challenges, we parameterize an
articulated object as a tree of tokens and employ a transformer to generate
both the object's high-level geometry code and its kinematic relations.
Subsequently, each sub-part's geometry is further decoded using a
signed-distance-function (SDF) shape prior, facilitating the synthesis of
high-quality 3D shapes. Our approach enables the generation of diverse objects
with high-quality geometry and varying number of parts. Comprehensive
experiments on conditional generation from text descriptions demonstrate the
effectiveness and flexibility of our method.
| [
{
"version": "v1",
"created": "Tue, 10 Dec 2024 07:00:05 GMT"
},
{
"version": "v2",
"created": "Mon, 17 Mar 2025 18:22:54 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Apr 2025 14:16:29 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Su",
"Jiayi",
""
],
[
"Feng",
"Youhe",
""
],
[
"Li",
"Zheng",
""
],
[
"Song",
"Jinhua",
""
],
[
"He",
"Yangfan",
""
],
[
"Ren",
"Botao",
""
],
[
"Xu",
"Botian",
""
]
] | TITLE: ArtFormer: Controllable Generation of Diverse 3D Articulated Objects
ABSTRACT: This paper presents a novel framework for modeling and conditional generation
of 3D articulated objects. Troubled by flexibility-quality tradeoffs, existing
methods are often limited to using predefined structures or retrieving shapes
from static datasets. To address these challenges, we parameterize an
articulated object as a tree of tokens and employ a transformer to generate
both the object's high-level geometry code and its kinematic relations.
Subsequently, each sub-part's geometry is further decoded using a
signed-distance-function (SDF) shape prior, facilitating the synthesis of
high-quality 3D shapes. Our approach enables the generation of diverse objects
with high-quality geometry and varying number of parts. Comprehensive
experiments on conditional generation from text descriptions demonstrate the
effectiveness and flexibility of our method.
|
2412.07755 | Arijit Ray | Arijit Ray, Jiafei Duan, Ellis Brown, Reuben Tan, Dina Bashkirova,
Rose Hendrix, Kiana Ehsani, Aniruddha Kembhavi, Bryan A. Plummer, Ranjay
Krishna, Kuo-Hao Zeng, Kate Saenko | SAT: Dynamic Spatial Aptitude Training for Multimodal Language Models | Project webpage: https://arijitray.com/SAT/ | null | null | null | cs.CV cs.AI cs.GR cs.RO | http://creativecommons.org/licenses/by/4.0/ | Reasoning about motion and space is a fundamental cognitive capability that
is required by multiple real-world applications. While many studies highlight
that large multimodal language models (MLMs) struggle to reason about space,
they only focus on static spatial relationships, and not dynamic awareness of
motion and space, i.e., reasoning about the effect of egocentric and object
motions on spatial relationships. Manually annotating such object and camera
movements is expensive. Hence, we introduce SAT, a simulated spatial aptitude
training dataset comprising both static and dynamic spatial reasoning across
175K question-answer (QA) pairs and 20K scenes. Complementing this, we also
construct a small (150 image-QAs) yet challenging dynamic spatial test set
using real-world images. Leveraging our SAT datasets and 6 existing static
spatial benchmarks, we systematically investigate what improves both static and
dynamic spatial awareness. Our results reveal that simulations are surprisingly
effective at imparting spatial aptitude to MLMs that translate to real images.
We show that perfect annotations in simulation are more effective than existing
approaches of pseudo-annotating real images. For instance, SAT training
improves a LLaVA-13B model by an average 11% and a LLaVA-Video-7B model by an
average 8% on multiple spatial benchmarks, including our real-image dynamic
test set and spatial reasoning on long videos -- even outperforming some large
proprietary models. While reasoning over static relationships improves with
synthetic training data, there is still considerable room for improvement for
dynamic reasoning questions.
| [
{
"version": "v1",
"created": "Tue, 10 Dec 2024 18:52:45 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 17:59:24 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Ray",
"Arijit",
""
],
[
"Duan",
"Jiafei",
""
],
[
"Brown",
"Ellis",
""
],
[
"Tan",
"Reuben",
""
],
[
"Bashkirova",
"Dina",
""
],
[
"Hendrix",
"Rose",
""
],
[
"Ehsani",
"Kiana",
""
],
[
"Kembhavi",
"Aniruddha",
""
],
[
"Plummer",
"Bryan A.",
""
],
[
"Krishna",
"Ranjay",
""
],
[
"Zeng",
"Kuo-Hao",
""
],
[
"Saenko",
"Kate",
""
]
] | TITLE: SAT: Dynamic Spatial Aptitude Training for Multimodal Language Models
ABSTRACT: Reasoning about motion and space is a fundamental cognitive capability that
is required by multiple real-world applications. While many studies highlight
that large multimodal language models (MLMs) struggle to reason about space,
they only focus on static spatial relationships, and not dynamic awareness of
motion and space, i.e., reasoning about the effect of egocentric and object
motions on spatial relationships. Manually annotating such object and camera
movements is expensive. Hence, we introduce SAT, a simulated spatial aptitude
training dataset comprising both static and dynamic spatial reasoning across
175K question-answer (QA) pairs and 20K scenes. Complementing this, we also
construct a small (150 image-QAs) yet challenging dynamic spatial test set
using real-world images. Leveraging our SAT datasets and 6 existing static
spatial benchmarks, we systematically investigate what improves both static and
dynamic spatial awareness. Our results reveal that simulations are surprisingly
effective at imparting spatial aptitude to MLMs that translate to real images.
We show that perfect annotations in simulation are more effective than existing
approaches of pseudo-annotating real images. For instance, SAT training
improves a LLaVA-13B model by an average 11% and a LLaVA-Video-7B model by an
average 8% on multiple spatial benchmarks, including our real-image dynamic
test set and spatial reasoning on long videos -- even outperforming some large
proprietary models. While reasoning over static relationships improves with
synthetic training data, there is still considerable room for improvement for
dynamic reasoning questions.
|
2412.09754 | Ali Athar | Ali Athar, Xueqing Deng, Liang-Chieh Chen | ViCaS: A Dataset for Combining Holistic and Pixel-level Video
Understanding using Captions with Grounded Segmentation | Accepted to CVPR 2025. Project page:
https://ali2500.github.io/vicas-project/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recent advances in multimodal large language models (MLLMs) have expanded
research in video understanding, primarily focusing on high-level tasks such as
video captioning and question-answering. Meanwhile, a smaller body of work
addresses dense, pixel-precise segmentation tasks, which typically involve
category-guided or referral-based object segmentation. Although both directions
are essential for developing models with human-level video comprehension, they
have largely evolved separately, with distinct benchmarks and architectures.
This paper aims to unify these efforts by introducing ViCaS, a new dataset
containing thousands of challenging videos, each annotated with detailed,
human-written captions and temporally consistent, pixel-accurate masks for
multiple objects with phrase grounding. Our benchmark evaluates models on both
holistic/high-level understanding and language-guided, pixel-precise
segmentation. We also present carefully validated evaluation measures and
propose an effective model architecture that can tackle our benchmark. Project
page: https://ali2500.github.io/vicas-project/
| [
{
"version": "v1",
"created": "Thu, 12 Dec 2024 23:10:54 GMT"
},
{
"version": "v2",
"created": "Tue, 17 Dec 2024 21:14:50 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Apr 2025 14:52:24 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Athar",
"Ali",
""
],
[
"Deng",
"Xueqing",
""
],
[
"Chen",
"Liang-Chieh",
""
]
] | TITLE: ViCaS: A Dataset for Combining Holistic and Pixel-level Video
Understanding using Captions with Grounded Segmentation
ABSTRACT: Recent advances in multimodal large language models (MLLMs) have expanded
research in video understanding, primarily focusing on high-level tasks such as
video captioning and question-answering. Meanwhile, a smaller body of work
addresses dense, pixel-precise segmentation tasks, which typically involve
category-guided or referral-based object segmentation. Although both directions
are essential for developing models with human-level video comprehension, they
have largely evolved separately, with distinct benchmarks and architectures.
This paper aims to unify these efforts by introducing ViCaS, a new dataset
containing thousands of challenging videos, each annotated with detailed,
human-written captions and temporally consistent, pixel-accurate masks for
multiple objects with phrase grounding. Our benchmark evaluates models on both
holistic/high-level understanding and language-guided, pixel-precise
segmentation. We also present carefully validated evaluation measures and
propose an effective model architecture that can tackle our benchmark. Project
page: https://ali2500.github.io/vicas-project/
|
2412.12386 | Giang (Dexter) Nguyen | Giang Nguyen, Ivan Brugere, Shubham Sharma, Sanjay Kariyappa, Anh
Totti Nguyen, Freddy Lecue | Interpretable LLM-based Table Question Answering | 10 pages, 2 figures and 9 tables in the main text | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Interpretability for Table Question Answering (Table QA) is critical,
particularly in high-stakes industries like finance or healthcare. Although
recent approaches using Large Language Models (LLMs) have significantly
improved Table QA performance, their explanations for how the answers are
generated are ambiguous. To fill this gap, we introduce Plan-of-SQLs (POS), an
interpretable Table QA approach designed to improve users' understanding of
model decision-making. Through qualitative and quantitative evaluations with
human and LLM judges, we show that: First, POS is the highest-quality
explanation method, helps human users understand model behaviors, and
facilitates model prediction verification. Second, when evaluated on popular
and standard Table QA datasets (TabFact, WikiTQ, and FetaQA), POS achieves QA
accuracy that is competitive with or superior to existing methods, while also
offering greater efficiency (requiring significantly fewer LLM calls and table
database queries) and robust performance on large tables. Finally, we
observe high agreement (up to 90%) between LLMs and human users when making
decisions based on the same explanations, suggesting that LLMs could serve as
an effective proxy for humans in evaluating explanations. This finding enables
faster, more affordable evaluation of AI explanations, possibly accelerating
trustworthy AI research while maintaining reliable judgments on
interpretability.
| [
{
"version": "v1",
"created": "Mon, 16 Dec 2024 22:44:31 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 22:07:14 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Nguyen",
"Giang",
""
],
[
"Brugere",
"Ivan",
""
],
[
"Sharma",
"Shubham",
""
],
[
"Kariyappa",
"Sanjay",
""
],
[
"Nguyen",
"Anh Totti",
""
],
[
"Lecue",
"Freddy",
""
]
] | TITLE: Interpretable LLM-based Table Question Answering
ABSTRACT: Interpretability for Table Question Answering (Table QA) is critical,
particularly in high-stakes industries like finance or healthcare. Although
recent approaches using Large Language Models (LLMs) have significantly
improved Table QA performance, their explanations for how the answers are
generated are ambiguous. To fill this gap, we introduce Plan-of-SQLs (POS), an
interpretable Table QA approach designed to improve users' understanding of
model decision-making. Through qualitative and quantitative evaluations with
human and LLM judges, we show that: First, POS is the highest-quality
explanation method, helps human users understand model behaviors, and
facilitates model prediction verification. Second, when evaluated on popular
and standard Table QA datasets (TabFact, WikiTQ, and FetaQA), POS achieves QA
accuracy that is competitive with or superior to existing methods, while also
offering greater efficiency (requiring significantly fewer LLM calls and table
database queries) and robust performance on large tables. Finally, we
observe high agreement (up to 90%) between LLMs and human users when making
decisions based on the same explanations, suggesting that LLMs could serve as
an effective proxy for humans in evaluating explanations. This finding enables
faster, more affordable evaluation of AI explanations, possibly accelerating
trustworthy AI research while maintaining reliable judgments on
interpretability.
|
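The Plan-of-SQLs record above rests on interpretability: the answer is produced by a sequence of auditable SQL steps. A minimal sketch of executing such a plan transparently with sqlite3 follows; the plan is hardcoded here, whereas POS would derive it from the question with an LLM:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE games (team TEXT, year INT, wins INT)")
conn.executemany("INSERT INTO games VALUES (?, ?, ?)",
                 [("A", 2020, 10), ("A", 2021, 14), ("B", 2021, 9)])

# Question: "Did team A win more games in 2021 than team B?"
plan = [
    ("step1", "SELECT wins FROM games WHERE team='A' AND year=2021"),
    ("step2", "SELECT wins FROM games WHERE team='B' AND year=2021"),
    ("answer", "SELECT (SELECT wins FROM games WHERE team='A' AND year=2021) "
               "> (SELECT wins FROM games WHERE team='B' AND year=2021)"),
]
for name, sql in plan:
    rows = conn.execute(sql).fetchall()
    print(f"{name}: {sql}\n  -> {rows}")   # every intermediate step is inspectable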
2412.17671 | Fabrizio Guillaro | Fabrizio Guillaro and Giada Zingarini and Ben Usman and Avneesh Sud
and Davide Cozzolino and Luisa Verdoliva | A Bias-Free Training Paradigm for More General AI-generated Image
Detection | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Successful forensic detectors can produce excellent results in supervised
learning benchmarks but struggle to transfer to real-world applications. We
believe this limitation is largely due to inadequate training data quality.
While most research focuses on developing new algorithms, less attention is
given to training data selection, despite evidence that performance can be
strongly impacted by spurious correlations such as content, format, or
resolution. A well-designed forensic detector should detect generator-specific
artifacts rather than reflect data biases. To this end, we propose B-Free, a
bias-free training paradigm, where fake images are generated from real ones
using the conditioning procedure of stable diffusion models. This ensures
semantic alignment between real and fake images, allowing any differences to
stem solely from the subtle artifacts introduced by AI generation. Through
content-based augmentation, we show significant improvements in both
generalization and robustness over state-of-the-art detectors and more
calibrated results across 27 different generative models, including recent
releases such as FLUX and Stable Diffusion 3.5. Our findings emphasize the
importance of careful dataset design, highlighting the need for further
research on this topic. Code and data are publicly available at
https://grip-unina.github.io/B-Free/.
| [
{
"version": "v1",
"created": "Mon, 23 Dec 2024 15:54:32 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 10:36:19 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Guillaro",
"Fabrizio",
""
],
[
"Zingarini",
"Giada",
""
],
[
"Usman",
"Ben",
""
],
[
"Sud",
"Avneesh",
""
],
[
"Cozzolino",
"Davide",
""
],
[
"Verdoliva",
"Luisa",
""
]
] | TITLE: A Bias-Free Training Paradigm for More General AI-generated Image
Detection
ABSTRACT: Successful forensic detectors can produce excellent results in supervised
learning benchmarks but struggle to transfer to real-world applications. We
believe this limitation is largely due to inadequate training data quality.
While most research focuses on developing new algorithms, less attention is
given to training data selection, despite evidence that performance can be
strongly impacted by spurious correlations such as content, format, or
resolution. A well-designed forensic detector should detect generator-specific
artifacts rather than reflect data biases. To this end, we propose B-Free, a
bias-free training paradigm, where fake images are generated from real ones
using the conditioning procedure of stable diffusion models. This ensures
semantic alignment between real and fake images, allowing any differences to
stem solely from the subtle artifacts introduced by AI generation. Through
content-based augmentation, we show significant improvements in both
generalization and robustness over state-of-the-art detectors and more
calibrated results across 27 different generative models, including recent
releases such as FLUX and Stable Diffusion 3.5. Our findings emphasize the
importance of careful dataset design, highlighting the need for further
research on this topic. Code and data are publicly available at
https://grip-unina.github.io/B-Free/.
|
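The B-Free record above builds semantically aligned real/fake pairs by regenerating a real photo with an image-conditioned diffusion model. A rough sketch of that data-construction step using the HuggingFace diffusers API; the model id, prompt, and strength value are illustrative choices, not the paper's settings:

import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

real = Image.open("real_photo.jpg").convert("RGB").resize((512, 512))
# Low strength keeps the layout and content of the real image, so the only
# systematic difference within the pair is the generator's fingerprint.
fake = pipe(prompt="a photo", image=real, strength=0.3,
            guidance_scale=7.5).images[0]
fake.save("fake_counterpart.png")   # (real, fake) become one training pair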
2412.17811 | Siyuan Bian | Siyuan Bian, Chenghao Xu, Yuliang Xiu, Artur Grigorev, Zhen Liu, Cewu
Lu, Michael J. Black, Yao Feng | ChatGarment: Garment Estimation, Generation and Editing via Large
Language Models | CVPR 2025 | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | We introduce ChatGarment, a novel approach that leverages large
vision-language models (VLMs) to automate the estimation, generation, and
editing of 3D garments from images or text descriptions. Unlike previous
methods that struggle in real-world scenarios or lack interactive editing
capabilities, ChatGarment can estimate sewing patterns from in-the-wild images
or sketches, generate them from text descriptions, and edit garments based on
user instructions, all within an interactive dialogue. These sewing patterns
can then be draped on a 3D body and animated. This is achieved by finetuning a
VLM to directly generate a JSON file that includes textual descriptions of
garment types and styles, as well as continuous numerical attributes. This JSON
file is then used to create sewing patterns through a programming parametric
model. To support this, we refine the existing programming model, GarmentCode,
by expanding its garment type coverage and simplifying its structure for
efficient VLM fine-tuning. Additionally, we construct a large-scale dataset of
image-to-sewing-pattern and text-to-sewing-pattern pairs through an automated
data pipeline. Extensive evaluations demonstrate ChatGarment's ability to
accurately reconstruct, generate, and edit garments from multimodal inputs,
highlighting its potential to simplify workflows in fashion and gaming
applications. Code and data are available at https://chatgarment.github.io/ .
| [
{
"version": "v1",
"created": "Mon, 23 Dec 2024 18:59:28 GMT"
},
{
"version": "v2",
"created": "Sat, 28 Dec 2024 02:24:34 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Apr 2025 09:47:55 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Bian",
"Siyuan",
""
],
[
"Xu",
"Chenghao",
""
],
[
"Xiu",
"Yuliang",
""
],
[
"Grigorev",
"Artur",
""
],
[
"Liu",
"Zhen",
""
],
[
"Lu",
"Cewu",
""
],
[
"Black",
"Michael J.",
""
],
[
"Feng",
"Yao",
""
]
] | TITLE: ChatGarment: Garment Estimation, Generation and Editing via Large
Language Models
ABSTRACT: We introduce ChatGarment, a novel approach that leverages large
vision-language models (VLMs) to automate the estimation, generation, and
editing of 3D garments from images or text descriptions. Unlike previous
methods that struggle in real-world scenarios or lack interactive editing
capabilities, ChatGarment can estimate sewing patterns from in-the-wild images
or sketches, generate them from text descriptions, and edit garments based on
user instructions, all within an interactive dialogue. These sewing patterns
can then be draped on a 3D body and animated. This is achieved by finetuning a
VLM to directly generate a JSON file that includes textual descriptions of
garment types and styles, as well as continuous numerical attributes. This JSON
file is then used to create sewing patterns through a programming parametric
model. To support this, we refine the existing programming model, GarmentCode,
by expanding its garment type coverage and simplifying its structure for
efficient VLM fine-tuning. Additionally, we construct a large-scale dataset of
image-to-sewing-pattern and text-to-sewing-pattern pairs through an automated
data pipeline. Extensive evaluations demonstrate ChatGarment's ability to
accurately reconstruct, generate, and edit garments from multimodal inputs,
highlighting its potential to simplify workflows in fashion and gaming
applications. Code and data are available at https://chatgarment.github.io/ .
|
2412.18870 | Weiyuan Peng | Chenyang Lei, Meiying Zhang, Weiyuan Peng, Qi Hao, Chengzhong Xu,
Chunlin Ji, Guang Zhou | TSceneJAL: Joint Active Learning of Traffic Scenes for 3D Object
Detection | null | null | 10.1109/TITS.2025.3553170 | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Most autonomous driving (AD) datasets incur substantial costs for collection
and labeling, inevitably yielding a plethora of low-quality and redundant data
instances, thereby compromising performance and efficiency. Many applications
in AD systems necessitate high-quality training datasets using both existing
datasets and newly collected data. In this paper, we propose a traffic scene
joint active learning (TSceneJAL) framework that can efficiently sample
balanced, diverse, and complex traffic scenes from both labeled and unlabeled
data. The novelty of this framework is threefold: 1) a scene sampling scheme
based on a category entropy, to identify scenes containing multiple object
classes, thus mitigating class imbalance for the active learner; 2) a
similarity sampling scheme, estimated through the directed graph representation
and a marginalized kernel algorithm, to pick sparse and diverse scenes; 3) an
uncertainty sampling scheme, predicted by a mixture density network, to select
instances with the most unclear or complex regression outcomes for the learner.
Finally, the integration of these three schemes in a joint selection strategy
yields an optimal and valuable subdataset. Experiments on the KITTI, Lyft,
nuScenes and SUScape datasets demonstrate that our approach outperforms
existing state-of-the-art methods on 3D object detection tasks with up to 12%
improvements.
| [
{
"version": "v1",
"created": "Wed, 25 Dec 2024 11:07:04 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 08:13:51 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Lei",
"Chenyang",
""
],
[
"Zhang",
"Meiying",
""
],
[
"Peng",
"Weiyuan",
""
],
[
"Hao",
"Qi",
""
],
[
"Xu",
"Chengzhong",
""
],
[
"Ji",
"Chunlin",
""
],
[
"Zhou",
"Guang",
""
]
] | TITLE: TSceneJAL: Joint Active Learning of Traffic Scenes for 3D Object
Detection
ABSTRACT: Most autonomous driving (AD) datasets incur substantial costs for collection
and labeling, inevitably yielding a plethora of low-quality and redundant data
instances, thereby compromising performance and efficiency. Many applications
in AD systems necessitate high-quality training datasets using both existing
datasets and newly collected data. In this paper, we propose a traffic scene
joint active learning (TSceneJAL) framework that can efficiently sample
balanced, diverse, and complex traffic scenes from both labeled and unlabeled
data. The novelty of this framework is threefold: 1) a scene sampling scheme
based on a category entropy, to identify scenes containing multiple object
classes, thus mitigating class imbalance for the active learner; 2) a
similarity sampling scheme, estimated through the directed graph representation
and a marginalized kernel algorithm, to pick sparse and diverse scenes; 3) an
uncertainty sampling scheme, predicted by a mixture density network, to select
instances with the most unclear or complex regression outcomes for the learner.
Finally, the integration of these three schemes in a joint selection strategy
yields an optimal and valuable subdataset. Experiments on the KITTI, Lyft,
nuScenes and SUScape datasets demonstrate that our approach outperforms
existing state-of-the-art methods on 3D object detection tasks with up to 12%
improvements.
|
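The first of TSceneJAL's three schemes scores scenes by the entropy of their object-class distribution, so scenes mixing many classes rank highest and class imbalance is mitigated. A small self-contained sketch of that criterion (the scene contents and class names are made up for illustration):

import math
from collections import Counter

def category_entropy(labels):
    counts = Counter(labels)
    n = sum(counts.values())
    return -sum((c / n) * math.log(c / n) for c in counts.values())

scenes = {
    "scene_01": ["car"] * 12,                                  # monotonous
    "scene_02": ["car"] * 5 + ["pedestrian"] * 4 + ["cyclist"] * 3,
    "scene_03": ["car", "truck", "pedestrian", "cyclist", "car"],
}
ranked = sorted(scenes, key=lambda s: category_entropy(scenes[s]), reverse=True)
for s in ranked:
    print(s, round(category_entropy(scenes[s]), 3))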
2412.21127 | Netanel Tamir | Netanel Y. Tamir, Shir Amir, Ranel Itzhaky, Noam Atia, Shobhita
Sundaram, Stephanie Fu, Ron Sokolovsky, Phillip Isola, Tali Dekel, Richard
Zhang, Miriam Farber | What Makes for a Good Stereoscopic Image? | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With rapid advancements in virtual reality (VR) headsets, effectively
measuring stereoscopic quality of experience (SQoE) has become essential for
delivering immersive and comfortable 3D experiences. However, most existing
stereo metrics focus on isolated aspects of the viewing experience such as
visual discomfort or image quality, and have traditionally faced data
limitations. To address these gaps, we present SCOPE (Stereoscopic COntent
Preference Evaluation), a new dataset composed of real and synthetic
stereoscopic images featuring a wide range of common perceptual distortions and
artifacts. The dataset is labeled with preference annotations collected on a VR
headset, with our findings indicating a notable degree of consistency in user
preferences across different headsets. Additionally, we present iSQoE, a new
model for stereo quality of experience assessment trained on our dataset. We
show that iSQoE aligns better with human preferences than existing methods when
comparing mono-to-stereo conversion methods.
| [
{
"version": "v1",
"created": "Mon, 30 Dec 2024 17:58:50 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 08:26:17 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Tamir",
"Netanel Y.",
""
],
[
"Amir",
"Shir",
""
],
[
"Itzhaky",
"Ranel",
""
],
[
"Atia",
"Noam",
""
],
[
"Sundaram",
"Shobhita",
""
],
[
"Fu",
"Stephanie",
""
],
[
"Sokolovsky",
"Ron",
""
],
[
"Isola",
"Phillip",
""
],
[
"Dekel",
"Tali",
""
],
[
"Zhang",
"Richard",
""
],
[
"Farber",
"Miriam",
""
]
] | TITLE: What Makes for a Good Stereoscopic Image?
ABSTRACT: With rapid advancements in virtual reality (VR) headsets, effectively
measuring stereoscopic quality of experience (SQoE) has become essential for
delivering immersive and comfortable 3D experiences. However, most existing
stereo metrics focus on isolated aspects of the viewing experience such as
visual discomfort or image quality, and have traditionally faced data
limitations. To address these gaps, we present SCOPE (Stereoscopic COntent
Preference Evaluation), a new dataset composed of real and synthetic
stereoscopic images featuring a wide range of common perceptual distortions and
artifacts. The dataset is labeled with preference annotations collected on a VR
headset, with our findings indicating a notable degree of consistency in user
preferences across different headsets. Additionally, we present iSQoE, a new
model for stereo quality of experience assessment trained on our dataset. We
show that iSQoE aligns better with human preferences than existing methods when
comparing mono-to-stereo conversion methods.
|
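The record above says iSQoE is trained on preference annotations but does not state the objective. One standard fit for pairwise preference data is a Bradley-Terry loss; the sketch below uses that objective with a toy scorer and random features purely as an assumption about how such a model could be trained:

import torch
import torch.nn as nn

scorer = nn.Sequential(nn.Linear(64, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(scorer.parameters(), lr=1e-3)

# feats_a / feats_b: features of the two stereo renditions shown to a rater;
# pref = 1.0 if rendition A was preferred, 0.0 if B was. All values are toys.
feats_a, feats_b = torch.randn(16, 64), torch.randn(16, 64)
pref = torch.randint(0, 2, (16, 1)).float()

s_a, s_b = scorer(feats_a), scorer(feats_b)
# Bradley-Terry: P(A preferred) = sigmoid(s_a - s_b), fit with cross-entropy.
loss = nn.functional.binary_cross_entropy_with_logits(s_a - s_b, pref)
loss.backward(); opt.step()
print(float(loss))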
2501.00164 | Yi Fang | Subramaniam Vincent, Phoebe Wang, Zhan Shi, Sahas Koka, Yi Fang | Measuring Large Language Models Capacity to Annotate Journalistic
Sourcing | null | null | null | null | cs.CL cs.CY | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Since the launch of ChatGPT in late 2022, the capacities of Large Language
Models have been under constant discussion and evaluation in both academic
research and industry. Scenarios and benchmarks have
been developed in several areas such as law, medicine and math (Bommasani et
al., 2023) and there is continuous evaluation of model variants. One area that
has not received sufficient scenario development attention is journalism, and
in particular journalistic sourcing and ethics. Journalism is a crucial
truth-determination function in democracy (Vincent, 2023), and sourcing is a
crucial pillar to all original journalistic output. Evaluating the capacities
of LLMs to annotate stories for the different signals of sourcing and how
reporters justify them is a crucial scenario that warrants a benchmark
approach. It offers potential to build automated systems to contrast more
transparent and ethically rigorous forms of journalism with everyday fare. In
this paper we lay out a scenario to evaluate LLM performance on identifying and
annotating sourcing in news stories using a five-category schema inspired by
journalism studies (Gans, 2004). We present the use case, our dataset, and
metrics as a first step towards systematic benchmarking. Our accuracy findings
indicate LLM-based approaches have more catching up to do in identifying all the
sourced statements in a story, and equally, in matching the type of sources. An
even harder task is spotting source justifications.
| [
{
"version": "v1",
"created": "Mon, 30 Dec 2024 22:15:57 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 16:54:12 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Vincent",
"Subramaniam",
""
],
[
"Wang",
"Phoebe",
""
],
[
"Shi",
"Zhan",
""
],
[
"Koka",
"Sahas",
""
],
[
"Fang",
"Yi",
""
]
] | TITLE: Measuring Large Language Models Capacity to Annotate Journalistic
Sourcing
ABSTRACT: Since the launch of ChatGPT in late 2022, the capacities of Large Language
Models have been under constant discussion and evaluation in both academic
research and industry. Scenarios and benchmarks have
been developed in several areas such as law, medicine and math (Bommasani et
al., 2023) and there is continuous evaluation of model variants. One area that
has not received sufficient scenario development attention is journalism, and
in particular journalistic sourcing and ethics. Journalism is a crucial
truth-determination function in democracy (Vincent, 2023), and sourcing is a
crucial pillar to all original journalistic output. Evaluating the capacities
of LLMs to annotate stories for the different signals of sourcing and how
reporters justify them is a crucial scenario that warrants a benchmark
approach. It offers potential to build automated systems to contrast more
transparent and ethically rigorous forms of journalism with everyday fare. In
this paper we lay out a scenario to evaluate LLM performance on identifying and
annotating sourcing in news stories using a five-category schema inspired by
journalism studies (Gans, 2004). We present the use case, our dataset, and
metrics as a first step towards systematic benchmarking. Our accuracy findings
indicate LLM-based approaches have more catching up to do in identifying all the
sourced statements in a story, and equally, in matching the type of sources. An
even harder task is spotting source justifications.
|
2501.00398 | Nishit Anand | Nishit Anand, Ashish Seth, Ramani Duraiswami, Dinesh Manocha | TSPE: Task-Specific Prompt Ensemble for Improved Zero-Shot Audio
Classification | Accepted to SALMA Workshop ICASSP 2025 | null | null | null | cs.SD cs.AI cs.CL cs.LG eess.AS | http://creativecommons.org/licenses/by/4.0/ | Audio-language models (ALMs) excel in zero-shot audio classification, a task
where models classify previously unseen audio clips at test time by leveraging
descriptive natural language prompts. We introduce TSPE (Task-Specific Prompt
Ensemble), a simple, training-free hard prompting method that boosts ALMs'
zero-shot performance by customizing prompts for diverse audio classification
tasks. Rather than using generic template-based prompts like "Sound of a car",
we generate context-rich prompts, such as "Sound of a car coming from a
tunnel". Specifically, we leverage label information to identify suitable sound
attributes, such as "loud" and "feeble", and appropriate sound sources, such as
"tunnel" and "street" and incorporate this information into the prompts used by
Audio-Language Models (ALMs) for audio classification. Further, to enhance
audio-text alignment, we perform prompt ensemble across TSPE-generated
task-specific prompts. When evaluated on 12 diverse audio classification
datasets, TSPE improves performance across ALMs by showing an absolute
improvement of 1.23-16.36% over vanilla zero-shot evaluation.
| [
{
"version": "v1",
"created": "Tue, 31 Dec 2024 11:27:17 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 01:09:23 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Anand",
"Nishit",
""
],
[
"Seth",
"Ashish",
""
],
[
"Duraiswami",
"Ramani",
""
],
[
"Manocha",
"Dinesh",
""
]
] | TITLE: TSPE: Task-Specific Prompt Ensemble for Improved Zero-Shot Audio
Classification
ABSTRACT: Audio-language models (ALMs) excel in zero-shot audio classification, a task
where models classify previously unseen audio clips at test time by leveraging
descriptive natural language prompts. We introduce TSPE (Task-Specific Prompt
Ensemble), a simple, training-free hard prompting method that boosts ALMs'
zero-shot performance by customizing prompts for diverse audio classification
tasks. Rather than using generic template-based prompts like "Sound of a car",
we generate context-rich prompts, such as "Sound of a car coming from a
tunnel". Specifically, we leverage label information to identify suitable sound
attributes, such as "loud" and "feeble", and appropriate sound sources, such as
"tunnel" and "street" and incorporate this information into the prompts used by
Audio-Language Models (ALMs) for audio classification. Further, to enhance
audio-text alignment, we perform prompt ensemble across TSPE-generated
task-specific prompts. When evaluated on 12 diverse audio classification
datasets, TSPE improves performance across ALMs by showing an absolute
improvement of 1.23-16.36% over vanilla zero-shot evaluation.
|
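The TSPE recipe above is training-free: build several context-rich prompts per class, embed them, average, and classify a clip by similarity. A minimal sketch of those mechanics; the embed functions are placeholders standing in for an audio-language model's text and audio towers (e.g. CLAP), so the scores here are meaningless random values:

import numpy as np

rng = np.random.default_rng(0)
def embed_text(s):  return rng.standard_normal(128)   # placeholder encoder
def embed_audio(x): return rng.standard_normal(128)   # placeholder encoder

prompts = {
    "car":   ["Sound of a car coming from a tunnel",
              "Loud sound of a car on a street"],
    "siren": ["Feeble sound of a distant siren",
              "Sound of a siren passing on a street"],
}
def normalize(v): return v / np.linalg.norm(v)
# One ensemble embedding per class: the mean of its prompt embeddings.
class_emb = {c: normalize(np.mean([embed_text(p) for p in ps], axis=0))
             for c, ps in prompts.items()}

clip = embed_audio("clip.wav")
scores = {c: float(normalize(clip) @ e) for c, e in class_emb.items()}
print(max(scores, key=scores.get), scores)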
2501.06035 | Cecilia Curreli | Cecilia Curreli, Dominik Muhle, Abhishek Saroha, Zhenzhang Ye,
Riccardo Marin, Daniel Cremers | Nonisotropic Gaussian Diffusion for Realistic 3D Human Motion Prediction | CVPR 2025. Code availabe at
https://ceveloper.github.io/publications/skeletondiffusion | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Probabilistic human motion prediction aims to forecast multiple possible
future movements from past observations. While current approaches report high
diversity and realism, they often generate motions with undetected limb
stretching and jitter. To address this, we introduce SkeletonDiffusion, a
latent diffusion model that embeds an explicit inductive bias on the human body
within its architecture and training. Our model is trained with a novel
nonisotropic Gaussian diffusion formulation that aligns with the natural
kinematic structure of the human skeleton. Results show that our approach
outperforms conventional isotropic alternatives, consistently generating
realistic predictions while avoiding artifacts such as limb distortion.
Additionally, we identify a limitation in commonly used diversity metrics,
which may inadvertently favor models that produce inconsistent limb lengths
within the same sequence. SkeletonDiffusion sets a new benchmark on real-world
datasets, outperforming various baselines across multiple evaluation metrics.
Visit our project page at
https://ceveloper.github.io/publications/skeletondiffusion/ .
| [
{
"version": "v1",
"created": "Fri, 10 Jan 2025 15:13:43 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 08:35:42 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Curreli",
"Cecilia",
""
],
[
"Muhle",
"Dominik",
""
],
[
"Saroha",
"Abhishek",
""
],
[
"Ye",
"Zhenzhang",
""
],
[
"Marin",
"Riccardo",
""
],
[
"Cremers",
"Daniel",
""
]
] | TITLE: Nonisotropic Gaussian Diffusion for Realistic 3D Human Motion Prediction
ABSTRACT: Probabilistic human motion prediction aims to forecast multiple possible
future movements from past observations. While current approaches report high
diversity and realism, they often generate motions with undetected limb
stretching and jitter. To address this, we introduce SkeletonDiffusion, a
latent diffusion model that embeds an explicit inductive bias on the human body
within its architecture and training. Our model is trained with a novel
nonisotropic Gaussian diffusion formulation that aligns with the natural
kinematic structure of the human skeleton. Results show that our approach
outperforms conventional isotropic alternatives, consistently generating
realistic predictions while avoiding artifacts such as limb distortion.
Additionally, we identify a limitation in commonly used diversity metrics,
which may inadvertently favor models that produce inconsistent limb lengths
within the same sequence. SkeletonDiffusion sets a new benchmark on real-world
datasets, outperforming various baselines across multiple evaluation metrics.
Visit our project page at
https://ceveloper.github.io/publications/skeletondiffusion/ .
|
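The key ingredient in the SkeletonDiffusion record above is nonisotropic diffusion noise aligned with the kinematic tree. One plausible construction, not the paper's exact formulation, draws eps ~ N(0, Sigma) with Sigma derived from the skeleton graph so that connected joints receive correlated perturbations:

import numpy as np

edges = [(0, 1), (1, 2), (2, 3), (1, 4), (1, 5)]   # toy 6-joint skeleton
J = 6
A = np.zeros((J, J))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(1)) - A                   # graph Laplacian of the skeleton
# Inverse of a regularized Laplacian: a smooth-over-the-tree covariance.
# This specific choice is an assumption for illustration.
Sigma = np.linalg.inv(L + 0.5 * np.eye(J))

chol = np.linalg.cholesky(Sigma)
eps = chol @ np.random.standard_normal((J, 3))   # correlated 3D joint noise
print(np.round(Sigma, 2))
print(eps.shape)   # (6, 3): one correlated offset per joint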
2501.07742 | Yaqing Ding | Yaqing Ding, Viktor Kocur, V\'aclav V\'avra, Zuzana Berger Haladov\'a,
Jian Yang, Torsten Sattler, Zuzana Kukelova | RePoseD: Efficient Relative Pose Estimation With Known Depth Information | 18 pages | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Recent advances in monocular depth estimation methods (MDE) and their
improved accuracy open new possibilities for their applications. In this paper,
we investigate how monocular depth estimates can be used for relative pose
estimation. In particular, we are interested in answering the question of whether
using MDEs improves results over traditional point-based methods. We propose a
novel framework for estimating the relative pose of two cameras from point
correspondences with associated monocular depths. Since depth predictions are
typically defined up to an unknown scale or even both unknown scale and shift
parameters, our solvers jointly estimate the scale or both the scale and shift
parameters along with the relative pose. We derive efficient solvers
considering different types of depths for three camera configurations: (1) two
calibrated cameras, (2) two cameras with an unknown shared focal length, and
(3) two cameras with unknown different focal lengths. Our new solvers
outperform state-of-the-art depth-aware solvers in terms of speed and accuracy.
In extensive real experiments on multiple datasets and with various MDEs, we
discuss which depth-aware solvers are preferable in which situation. The code
will be made publicly available.
| [
{
"version": "v1",
"created": "Mon, 13 Jan 2025 23:13:33 GMT"
},
{
"version": "v2",
"created": "Tue, 1 Apr 2025 14:02:01 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Apr 2025 12:07:38 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Ding",
"Yaqing",
""
],
[
"Kocur",
"Viktor",
""
],
[
"Vávra",
"Václav",
""
],
[
"Haladová",
"Zuzana Berger",
""
],
[
"Yang",
"Jian",
""
],
[
"Sattler",
"Torsten",
""
],
[
"Kukelova",
"Zuzana",
""
]
] | TITLE: RePoseD: Efficient Relative Pose Estimation With Known Depth Information
ABSTRACT: Recent advances in monocular depth estimation methods (MDE) and their
improved accuracy open new possibilities for their applications. In this paper,
we investigate how monocular depth estimates can be used for relative pose
estimation. In particular, we are interested in answering the question of whether
using MDEs improves results over traditional point-based methods. We propose a
novel framework for estimating the relative pose of two cameras from point
correspondences with associated monocular depths. Since depth predictions are
typically defined up to an unknown scale or even both unknown scale and shift
parameters, our solvers jointly estimate the scale or both the scale and shift
parameters along with the relative pose. We derive efficient solvers
considering different types of depths for three camera configurations: (1) two
calibrated cameras, (2) two cameras with an unknown shared focal length, and
(3) two cameras with unknown different focal lengths. Our new solvers
outperform state-of-the-art depth-aware solvers in terms of speed and accuracy.
In extensive real experiments on multiple datasets and with various MDEs, we
discuss which depth-aware solvers are preferable in which situation. The code
will be made publicly available.
|
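The RePoseD record above jointly estimates relative pose and the unknown depth scale from correspondences with monocular depths. The paper derives minimal polynomial solvers; purely to illustrate the problem setup, a dense least-squares baseline (Umeyama alignment) recovers scale, rotation, and translation from back-projected 3D points:

import numpy as np

def umeyama(src, dst):
    """Find s, R, t minimizing sum ||dst_i - (s * R @ src_i + t)||^2."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    Sc, Dc = src - mu_s, dst - mu_d
    U, d, Vt = np.linalg.svd(Dc.T @ Sc / len(src))
    S = np.eye(3)
    if np.linalg.det(U) * np.linalg.det(Vt) < 0:
        S[2, 2] = -1.0                      # keep R a proper rotation
    R = U @ S @ Vt
    scale = np.trace(np.diag(d) @ S) / (Sc ** 2).sum() * len(src)
    return scale, R, mu_d - scale * R @ mu_s

# Synthetic check: view-2 points are rotated, translated, and scaled by an
# unknown monocular-depth factor of 0.7 relative to view 1.
rng = np.random.default_rng(1)
X1 = rng.uniform(-1, 1, (50, 3)) + [0, 0, 4]
th = 0.3
R_true = np.array([[np.cos(th), -np.sin(th), 0],
                   [np.sin(th),  np.cos(th), 0],
                   [0, 0, 1]])
t_true = np.array([0.2, -0.1, 0.3])
X2 = 0.7 * (X1 @ R_true.T + t_true)
s, R, t = umeyama(X2, X1)                  # align view-2 points onto view 1
print(round(1 / s, 3))                     # ~0.7: the unknown depth scale
print(np.allclose(R, R_true.T))            # True: rotation recovered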
2501.08628 | Charalampos Shimillas | Charalampos Shimillas, Kleanthis Malialis, Konstantinos Fokianos,
Marios M. Polycarpou | Transformer-based Multivariate Time Series Anomaly Localization | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | With the growing complexity of Cyber-Physical Systems (CPS) and the
integration of Internet of Things (IoT), the use of sensors for online
monitoring generates large volumes of multivariate time series (MTS) data.
Consequently, the need for robust anomaly diagnosis in MTS is paramount to
maintaining system reliability and safety. While significant advancements have
been made in anomaly detection, localization remains a largely underexplored
area, though crucial for intelligent decision-making. This paper introduces a
novel transformer-based model for unsupervised anomaly diagnosis in MTS, with a
focus on improving localization performance, through an in-depth analysis of
the self-attention mechanism's learning behavior under both normal and
anomalous conditions. We formulate the anomaly localization problem as a
three-stage process: time-step, window, and segment-based. This leads to the
development of the Space-Time Anomaly Score (STAS), a new metric inspired by
the connection between transformer latent representations and space-time
statistical models. STAS is designed to capture individual anomaly behaviors
and inter-series dependencies, delivering enhanced localization performance.
Additionally, the Statistical Feature Anomaly Score (SFAS) complements STAS by
analyzing statistical features around anomalies, with their combination helping
to reduce false alarms. Experiments on real-world and synthetic datasets
illustrate the model's superiority over state-of-the-art methods in both
detection and localization tasks.
| [
{
"version": "v1",
"created": "Wed, 15 Jan 2025 07:18:51 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 08:48:54 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Shimillas",
"Charalampos",
""
],
[
"Malialis",
"Kleanthis",
""
],
[
"Fokianos",
"Konstantinos",
""
],
[
"Polycarpou",
"Marios M.",
""
]
] | TITLE: Transformer-based Multivariate Time Series Anomaly Localization
ABSTRACT: With the growing complexity of Cyber-Physical Systems (CPS) and the
integration of Internet of Things (IoT), the use of sensors for online
monitoring generates large volumes of multivariate time series (MTS) data.
Consequently, the need for robust anomaly diagnosis in MTS is paramount to
maintaining system reliability and safety. While significant advancements have
been made in anomaly detection, localization remains a largely underexplored
area, though crucial for intelligent decision-making. This paper introduces a
novel transformer-based model for unsupervised anomaly diagnosis in MTS, with a
focus on improving localization performance, through an in-depth analysis of
the self-attention mechanism's learning behavior under both normal and
anomalous conditions. We formulate the anomaly localization problem as a
three-stage process: time-step, window, and segment-based. This leads to the
development of the Space-Time Anomaly Score (STAS), a new metric inspired by
the connection between transformer latent representations and space-time
statistical models. STAS is designed to capture individual anomaly behaviors
and inter-series dependencies, delivering enhanced localization performance.
Additionally, the Statistical Feature Anomaly Score (SFAS) complements STAS by
analyzing statistical features around anomalies, with their combination helping
to reduce false alarms. Experiments on real-world and synthetic datasets
illustrate the model's superiority over state-of-the-art methods in both
detection and localization tasks.
|
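The record above frames localization as a three-stage process: time-step, window, and segment. A toy sketch of that aggregation pipeline; the per-step scores here are plain reconstruction errors against a stand-in model, whereas STAS itself is derived from transformer attention, which this sketch does not reproduce:

import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal((200, 3))         # multivariate series (T x d)
x[80:95, 1] += 5.0                        # injected anomaly in series 1
recon = np.zeros_like(x)                  # stand-in "model reconstruction"

step_score = (x - recon) ** 2             # per-time-step, per-series
W = 10
window_score = np.array([step_score[t:t + W].mean(0)
                         for t in range(len(x) - W + 1)])
thresh = window_score.mean(0) + 3 * window_score.std(0)   # per-series
flags = window_score > thresh             # (windows x series) boolean
for s in range(x.shape[1]):
    hits = np.flatnonzero(flags[:, s])
    if hits.size:
        print(f"series {s}: anomalous around steps {hits[0]}..{hits[-1] + W}")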
2502.00380 | Bruno Belucci | Bruno Belucci, Karim Lounici, Katia Meziani | CoHiRF: A Scalable and Interpretable Clustering Framework for
High-Dimensional Data | null | null | null | null | cs.LG stat.ML | http://creativecommons.org/licenses/by/4.0/ | Clustering high-dimensional data poses significant challenges due to the
curse of dimensionality, scalability issues, and the presence of noisy and
irrelevant features. We propose Consensus Hierarchical Random Feature (CoHiRF),
a novel clustering method designed to address these challenges effectively.
CoHiRF leverages random feature selection to mitigate noise and dimensionality
effects, repeatedly applies K-Means clustering in reduced feature spaces, and
combines results through a unanimous consensus criterion. This iterative
approach constructs a cluster assignment matrix, where each row records the
cluster assignments of a sample across repetitions, enabling the identification
of stable clusters by comparing identical rows. Clusters are organized
hierarchically, enabling the interpretation of the hierarchy to gain insights
into the dataset. CoHiRF is computationally efficient with a running time
comparable to K-Means, scalable to massive datasets, and exhibits robust
performance against state-of-the-art methods such as SC-SRGF, HDBSCAN, and
OPTICS. Experimental results on synthetic and real-world datasets confirm the
method's ability to reveal meaningful patterns while maintaining scalability,
making it a powerful tool for high-dimensional data analysis.
| [
{
"version": "v1",
"created": "Sat, 1 Feb 2025 09:38:44 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 19:10:01 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Belucci",
"Bruno",
""
],
[
"Lounici",
"Karim",
""
],
[
"Meziani",
"Katia",
""
]
] | TITLE: CoHiRF: A Scalable and Interpretable Clustering Framework for
High-Dimensional Data
ABSTRACT: Clustering high-dimensional data poses significant challenges due to the
curse of dimensionality, scalability issues, and the presence of noisy and
irrelevant features. We propose Consensus Hierarchical Random Feature (CoHiRF),
a novel clustering method designed to address these challenges effectively.
CoHiRF leverages random feature selection to mitigate noise and dimensionality
effects, repeatedly applies K-Means clustering in reduced feature spaces, and
combines results through a unanimous consensus criterion. This iterative
approach constructs a cluster assignment matrix, where each row records the
cluster assignments of a sample across repetitions, enabling the identification
of stable clusters by comparing identical rows. Clusters are organized
hierarchically, enabling the interpretation of the hierarchy to gain insights
into the dataset. CoHiRF is computationally efficient with a running time
comparable to K-Means, scalable to massive datasets, and exhibits robust
performance against state-of-the-art methods such as SC-SRGF, HDBSCAN, and
OPTICS. Experimental results on synthetic and real-world datasets confirm the
method's ability to reveal meaningful patterns while maintaining scalability,
making it a powerful tool for high-dimensional data analysis.
|
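The CoHiRF consensus step described above is easy to sketch: run K-Means several times on random feature subsets, stack the per-run labels into an assignment matrix with one row per sample, and treat samples with identical rows as one stable cluster (the criterion is invariant to label permutation across runs). Hyperparameters below are arbitrary toy values:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, n_features=20, centers=4, random_state=0)
rng = np.random.default_rng(0)

runs = []
for r in range(5):                                        # repetitions
    feats = rng.choice(X.shape[1], size=8, replace=False) # random subspace
    labels = KMeans(n_clusters=4, n_init=10,
                    random_state=r).fit_predict(X[:, feats])
    runs.append(labels)
assign = np.stack(runs, axis=1)                           # (n_samples, n_runs)

# Samples sharing an identical row agreed in every run -> a stable cluster.
rows, consensus = np.unique(assign, axis=0, return_inverse=True)
print(f"{len(rows)} consensus clusters; sizes: {np.bincount(consensus)}")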
2502.00536 | Wenbo Xiao | Wenbo Xiao and Zhihao Xu and Guiping Liang and Yangjun Deng and Yi
Xiao | CAD: Confidence-Aware Adaptive Displacement for Semi-Supervised Medical
Image Segmentation | 9 pages, 3 figures, 4 tables | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Semi-supervised medical image segmentation aims to leverage minimal expert
annotations, yet remains confronted by challenges in maintaining high-quality
consistency learning. Excessive perturbations can degrade alignment and hinder
precise decision boundaries, especially in regions with uncertain predictions.
In this paper, we introduce Confidence-Aware Adaptive Displacement (CAD), a
framework that selectively identifies and replaces the largest low-confidence
regions with high-confidence patches. By dynamically adjusting both the maximum
allowable replacement size and the confidence threshold throughout training,
CAD progressively refines the segmentation quality without overwhelming the
learning process. Experimental results on public medical datasets demonstrate
that CAD effectively enhances segmentation quality, establishing new
state-of-the-art accuracy in this field. The source code will be released after
the paper is published.
| [
{
"version": "v1",
"created": "Sat, 1 Feb 2025 19:23:18 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 11:08:03 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Xiao",
"Wenbo",
""
],
[
"Xu",
"Zhihao",
""
],
[
"Liang",
"Guiping",
""
],
[
"Deng",
"Yangjun",
""
],
[
"Xiao",
"Yi",
""
]
] | TITLE: CAD: Confidence-Aware Adaptive Displacement for Semi-Supervised Medical
Image Segmentation
ABSTRACT: Semi-supervised medical image segmentation aims to leverage minimal expert
annotations, yet remains confronted by challenges in maintaining high-quality
consistency learning. Excessive perturbations can degrade alignment and hinder
precise decision boundaries, especially in regions with uncertain predictions.
In this paper, we introduce Confidence-Aware Adaptive Displacement (CAD), a
framework that selectively identifies and replaces the largest low-confidence
regions with high-confidence patches. By dynamically adjusting both the maximum
allowable replacement size and the confidence threshold throughout training,
CAD progressively refines the segmentation quality without overwhelming the
learning process. Experimental results on public medical datasets demonstrate
that CAD effectively enhances segmentation quality, establishing new
state-of-the-art accuracy in this field. The source code will be released after
the paper is published.
|
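The displacement idea in the CAD record above can be sketched directly: locate the largest low-confidence region in a prediction and overwrite it with a high-confidence patch, with the maximum patch size and the confidence threshold annealed over training. The shapes, schedules, and annealing directions below are toy assumptions, not the paper's settings:

import numpy as np

def displace(conf, img, donor, step, total_steps):
    # Anneal: shrink the allowed patch and raise the confidence bar over time.
    max_size = int(np.interp(step, [0, total_steps], [16, 4]))
    thresh = np.interp(step, [0, total_steps], [0.5, 0.8])
    H, W = conf.shape
    # Find the lowest-mean-confidence square of side max_size (brute force).
    best, best_xy = np.inf, (0, 0)
    for y in range(H - max_size + 1):
        for x in range(W - max_size + 1):
            m = conf[y:y + max_size, x:x + max_size].mean()
            if m < best:
                best, best_xy = m, (y, x)
    y, x = best_xy
    if best < thresh:                     # only displace if truly uncertain
        img = img.copy()
        img[y:y + max_size, x:x + max_size] = \
            donor[y:y + max_size, x:x + max_size]
    return img

rng = np.random.default_rng(0)
conf = rng.uniform(0.6, 1.0, (64, 64)); conf[10:26, 20:36] = 0.2
img, donor = rng.uniform(size=(64, 64)), np.ones((64, 64))
out = displace(conf, img, donor, step=100, total_steps=1000)
print(out[12:14, 22:24])                  # region replaced by donor values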
2502.00631 | Zeyu Zhang | Xuyin Qi, Zeyu Zhang, Huazhan Zheng, Mingxi Chen, Numan Kutaiba, Ruth
Lim, Cherie Chiang, Zi En Tham, Xuan Ren, Wenxin Zhang, Lei Zhang, Hao Zhang,
Wenbing Lv, Guangzhen Yao, Renda Han, Kangsheng Wang, Mingyuan Li, Hongtao
Mao, Yu Li, Zhibin Liao, Yang Zhao, Minh-Son To | MedConv: Convolutions Beat Transformers on Long-Tailed Bone Density
Prediction | Accepted to IJCNN 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Bone density prediction via CT scans to estimate T-scores is crucial,
providing a more precise assessment of bone health compared to traditional
methods like X-ray bone density tests, which lack spatial resolution and the
ability to detect localized changes. However, CT-based prediction faces two
major challenges: the high computational complexity of transformer-based
architectures, which limits their deployment in portable and clinical settings,
and the imbalanced, long-tailed distribution of real-world hospital data that
skews predictions. To address these issues, we introduce MedConv, a
convolutional model for bone density prediction that outperforms transformer
models with lower computational demands. We also adapt Bal-CE loss and post-hoc
logit adjustment to improve class balance. Extensive experiments on our
AustinSpine dataset show that our approach achieves up to 21% improvement in
accuracy and 20% in ROC AUC over previous state-of-the-art methods.
| [
{
"version": "v1",
"created": "Sun, 2 Feb 2025 02:43:40 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 13:23:35 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Qi",
"Xuyin",
""
],
[
"Zhang",
"Zeyu",
""
],
[
"Zheng",
"Huazhan",
""
],
[
"Chen",
"Mingxi",
""
],
[
"Kutaiba",
"Numan",
""
],
[
"Lim",
"Ruth",
""
],
[
"Chiang",
"Cherie",
""
],
[
"Tham",
"Zi En",
""
],
[
"Ren",
"Xuan",
""
],
[
"Zhang",
"Wenxin",
""
],
[
"Zhang",
"Lei",
""
],
[
"Zhang",
"Hao",
""
],
[
"Lv",
"Wenbing",
""
],
[
"Yao",
"Guangzhen",
""
],
[
"Han",
"Renda",
""
],
[
"Wang",
"Kangsheng",
""
],
[
"Li",
"Mingyuan",
""
],
[
"Mao",
"Hongtao",
""
],
[
"Li",
"Yu",
""
],
[
"Liao",
"Zhibin",
""
],
[
"Zhao",
"Yang",
""
],
[
"To",
"Minh-Son",
""
]
] | TITLE: MedConv: Convolutions Beat Transformers on Long-Tailed Bone Density
Prediction
ABSTRACT: Bone density prediction via CT scans to estimate T-scores is crucial,
providing a more precise assessment of bone health compared to traditional
methods like X-ray bone density tests, which lack spatial resolution and the
ability to detect localized changes. However, CT-based prediction faces two
major challenges: the high computational complexity of transformer-based
architectures, which limits their deployment in portable and clinical settings,
and the imbalanced, long-tailed distribution of real-world hospital data that
skews predictions. To address these issues, we introduce MedConv, a
convolutional model for bone density prediction that outperforms transformer
models with lower computational demands. We also adapt Bal-CE loss and post-hoc
logit adjustment to improve class balance. Extensive experiments on our
AustinSpine dataset show that our approach achieves up to 21% improvement in
accuracy and 20% in ROC AUC over previous state-of-the-art methods.
|
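One of the two balancing tools the MedConv record above adapts, post-hoc logit adjustment (Menon et al., 2021), is simple to show: subtract tau * log(class prior) from the logits so rare classes are no longer suppressed at prediction time. The priors and logits below are toy values:

import numpy as np

def logit_adjust(logits, class_counts, tau=1.0):
    prior = class_counts / class_counts.sum()
    return logits - tau * np.log(prior)   # rare classes get a larger boost

class_counts = np.array([900, 80, 20])    # long-tailed T-score bins (toy)
logits = np.array([2.0, 1.6, 1.4])        # raw model output for one scan
print(logits.argmax())                            # 0: head class wins raw
print(logit_adjust(logits, class_counts).argmax())  # 2: tail class can win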
2502.03123 | Xingshen Zhang | Xingshen Zhang, Lin Wang, Shuangrong Liu, Xintao Lu, Chaoran Pang, Bo
Yang | Disentanglement in Difference: Directly Learning Semantically
Disentangled Representations by Maximizing Inter-Factor Differences | null | null | null | null | cs.LG cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | In this study, Disentanglement in Difference(DiD) is proposed to address the
inherent inconsistency between the statistical independence of latent variables
and the goal of semantic disentanglement in disentangled representation
learning. Conventional disentanglement methods achieve disentangled
representations by improving statistical independence among latent variables.
However, the statistical independence of latent variables does not necessarily
imply that they are semantically unrelated, thus, improving statistical
independence does not always enhance disentanglement performance. To address
the above issue, DiD is proposed to directly learn semantic differences rather
than the statistical independence of latent variables. In DiD, a Difference
Encoder is designed to measure the semantic differences; a contrastive loss
function is established to facilitate inter-dimensional comparison. Both of
them allow the model to directly differentiate and disentangle distinct
semantic factors, thereby resolving the inconsistency between statistical
independence and semantic disentanglement. Experimental results on the dSprites
and 3DShapes datasets demonstrate that the proposed DiD outperforms existing
mainstream methods across various disentanglement metrics.
| [
{
"version": "v1",
"created": "Wed, 5 Feb 2025 12:30:41 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 15:28:18 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Zhang",
"Xingshen",
""
],
[
"Wang",
"Lin",
""
],
[
"Liu",
"Shuangrong",
""
],
[
"Lu",
"Xintao",
""
],
[
"Pang",
"Chaoran",
""
],
[
"Yang",
"Bo",
""
]
] | TITLE: Disentanglement in Difference: Directly Learning Semantically
Disentangled Representations by Maximizing Inter-Factor Differences
ABSTRACT: In this study, Disentanglement in Difference (DiD) is proposed to address the
inherent inconsistency between the statistical independence of latent variables
and the goal of semantic disentanglement in disentangled representation
learning. Conventional disentanglement methods achieve disentangled
representations by improving statistical independence among latent variables.
However, the statistical independence of latent variables does not necessarily
imply that they are semantically unrelated, thus, improving statistical
independence does not always enhance disentanglement performance. To address
the above issue, DiD is proposed to directly learn semantic differences rather
than the statistical independence of latent variables. In DiD, a Difference
Encoder is designed to measure the semantic differences; a contrastive loss
function is established to facilitate inter-dimensional comparison. Both of
them allow the model to directly differentiate and disentangle distinct
semantic factors, thereby resolving the inconsistency between statistical
independence and semantic disentanglement. Experimental results on the dSprites
and 3DShapes datasets demonstrate that the proposed DiD outperforms existing
mainstream methods across various disentanglement metrics.
|
2502.06688 | Patrik Goldschmidt | Patrik Goldschmidt, Daniela Chud\'a | Network Intrusion Datasets: A Survey, Limitations, and Recommendations | 41 pages, 8 figures, 6 tables. Minor revision for the journal
Computers & Security | null | null | null | cs.CR cs.NI | http://creativecommons.org/licenses/by/4.0/ | Data-driven cyberthreat detection has become a crucial defense technique in
modern cybersecurity. Network defense, supported by Network Intrusion Detection
Systems (NIDSs), has also increasingly adopted data-driven approaches, leading
to greater reliance on data. Despite the importance of data, its scarcity has
long been recognized as a major obstacle in NIDS research. In response, the
community has published many new datasets recently. However, many of them
remain largely unknown and unanalyzed, leaving researchers uncertain about
their suitability for specific use cases.
In this paper, we aim to address this knowledge gap by performing a
systematic literature review (SLR) of 89 public datasets for NIDS research.
Each dataset is comparatively analyzed across 13 key properties, and its
potential applications are outlined. Beyond the review, we also discuss
domain-specific challenges and common data limitations to facilitate a critical
view on data quality. To aid in data selection, we conduct a dataset popularity
analysis in contemporary state-of-the-art NIDS research. Furthermore, the paper
presents best practices for dataset selection, generation, and usage. By
providing a comprehensive overview of the domain and its data, this work aims
to guide future research toward improving data quality and the robustness of
NIDS solutions.
| [
{
"version": "v1",
"created": "Mon, 10 Feb 2025 17:14:37 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 18:40:47 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Goldschmidt",
"Patrik",
""
],
[
"Chudá",
"Daniela",
""
]
] | TITLE: Network Intrusion Datasets: A Survey, Limitations, and Recommendations
ABSTRACT: Data-driven cyberthreat detection has become a crucial defense technique in
modern cybersecurity. Network defense, supported by Network Intrusion Detection
Systems (NIDSs), has also increasingly adopted data-driven approaches, leading
to greater reliance on data. Despite the importance of data, its scarcity has
long been recognized as a major obstacle in NIDS research. In response, the
community has published many new datasets recently. However, many of them
remain largely unknown and unanalyzed, leaving researchers uncertain about
their suitability for specific use cases.
In this paper, we aim to address this knowledge gap by performing a
systematic literature review (SLR) of 89 public datasets for NIDS research.
Each dataset is comparatively analyzed across 13 key properties, and its
potential applications are outlined. Beyond the review, we also discuss
domain-specific challenges and common data limitations to facilitate a critical
view on data quality. To aid in data selection, we conduct a dataset popularity
analysis in contemporary state-of-the-art NIDS research. Furthermore, the paper
presents best practices for dataset selection, generation, and usage. By
providing a comprehensive overview of the domain and its data, this work aims
to guide future research toward improving data quality and the robustness of
NIDS solutions.
|
2502.09654 | Bowen Chen | Bowen Chen, Keyan Chen, Mohan Yang, Zhengxia Zou, Zhenwei Shi | Heterogeneous Mixture of Experts for Remote Sensing Image
Super-Resolution | null | null | null | null | eess.IV cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Remote sensing image super-resolution (SR) aims to reconstruct
high-resolution remote sensing images from low-resolution inputs, thereby
addressing limitations imposed by sensors and imaging conditions. However, the
inherent characteristics of remote sensing images, including diverse ground
object types and complex details, pose significant challenges to achieving
high-quality reconstruction. Existing methods typically employ a uniform
structure to process various types of ground objects without distinction,
making it difficult to adapt to the complex characteristics of remote sensing
images. To address this issue, we introduce a Mixture of Experts (MoE) model
and design a set of heterogeneous experts. These experts are organized into
multiple expert groups, where experts within each group are homogeneous while
being heterogeneous across groups. This design ensures that specialized
activation parameters can be employed to handle the diverse and intricate
details of ground objects effectively. To better accommodate the heterogeneous
experts, we propose a multi-level feature aggregation strategy to guide the
routing process. Additionally, we develop a dual-routing mechanism to
adaptively select the optimal expert for each pixel. Experiments conducted on
the UCMerced and AID datasets demonstrate that our proposed method achieves
superior SR reconstruction accuracy compared to state-of-the-art methods. The
code will be available at https://github.com/Mr-Bamboo/MFG-HMoE.
| [
{
"version": "v1",
"created": "Wed, 12 Feb 2025 03:25:53 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 02:27:53 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Chen",
"Bowen",
""
],
[
"Chen",
"Keyan",
""
],
[
"Yang",
"Mohan",
""
],
[
"Zou",
"Zhengxia",
""
],
[
"Shi",
"Zhenwei",
""
]
] | TITLE: Heterogeneous Mixture of Experts for Remote Sensing Image
Super-Resolution
ABSTRACT: Remote sensing image super-resolution (SR) aims to reconstruct
high-resolution remote sensing images from low-resolution inputs, thereby
addressing limitations imposed by sensors and imaging conditions. However, the
inherent characteristics of remote sensing images, including diverse ground
object types and complex details, pose significant challenges to achieving
high-quality reconstruction. Existing methods typically employ a uniform
structure to process various types of ground objects without distinction,
making it difficult to adapt to the complex characteristics of remote sensing
images. To address this issue, we introduce a Mixture of Experts (MoE) model
and design a set of heterogeneous experts. These experts are organized into
multiple expert groups, where experts within each group are homogeneous while
being heterogeneous across groups. This design ensures that specialized
activation parameters can be employed to handle the diverse and intricate
details of ground objects effectively. To better accommodate the heterogeneous
experts, we propose a multi-level feature aggregation strategy to guide the
routing process. Additionally, we develop a dual-routing mechanism to
adaptively select the optimal expert for each pixel. Experiments conducted on
the UCMerced and AID datasets demonstrate that our proposed method achieves
superior SR reconstruction accuracy compared to state-of-the-art methods. The
code will be available at https://github.com/Mr-Bamboo/MFG-HMoE.
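To make the routing idea concrete, here is a minimal PyTorch sketch of per-pixel routing over two heterogeneous expert groups. The group sizes, kernel choices, and soft mixing are illustrative assumptions, not the paper's implementation (which selects an optimal expert per pixel under multi-level feature guidance):

import torch
import torch.nn as nn

class GroupedExperts(nn.Module):
    # Two hypothetical expert groups: homogeneous within a group (same kernel),
    # heterogeneous across groups (3x3 convs vs. 5x5 convs).
    def __init__(self, channels=64, per_group=2):
        super().__init__()
        self.experts = nn.ModuleList(
            [nn.Conv2d(channels, channels, 3, padding=1) for _ in range(per_group)]
            + [nn.Conv2d(channels, channels, 5, padding=2) for _ in range(per_group)]
        )
        self.router = nn.Conv2d(channels, len(self.experts), 1)  # per-pixel logits

    def forward(self, x):
        w = torch.softmax(self.router(x), dim=1)                 # (B, E, H, W)
        outs = torch.stack([e(x) for e in self.experts], dim=1)  # (B, E, C, H, W)
        return (w.unsqueeze(2) * outs).sum(dim=1)                # per-pixel mixture

x = torch.randn(1, 64, 32, 32)
print(GroupedExperts()(x).shape)  # torch.Size([1, 64, 32, 32])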
|
2502.11167 | Bohan Lyu | Bohan Lyu, Siqiao Huang, Zichen Liang, Qi-An Sun, Jiaming Zhang | SURGE: On the Potential of Large Language Models as General-Purpose
Surrogate Code Executors | null | null | null | null | cs.LG cs.CL | http://creativecommons.org/licenses/by/4.0/ | Neural surrogate models have emerged as powerful and efficient tools in data
mining. Meanwhile, large language models (LLMs) have demonstrated remarkable
capabilities in code-related tasks. We investigate a novel application: using
LLMs as surrogate models for code execution prediction. Given LLMs' unique
ability to understand and process diverse programs, they present a promising
direction for building general-purpose surrogate models. To systematically
investigate this capability, we introduce SURGE, a comprehensive benchmark with
$1160$ problems covering $8$ key aspects: multi-language programming tasks,
competition-level programming problems, repository-level code analysis,
high-cost scientific computing, time-complexity-intensive algorithms, buggy
code analysis, programs dependent on specific compilers or execution
environments, and formal mathematical proof verification. Through extensive
empirical analysis of $21$ open-source and proprietary LLMs, we examine scaling
laws, data efficiency, and predictive accuracy. Our findings reveal important
insights about the feasibility of LLMs as efficient surrogates for
computational processes, with implications for automated software testing,
program analysis, and computational resource optimization in data mining
applications. Code and dataset are released at
https://github.com/Imbernoulli/SURGE.
| [
{
"version": "v1",
"created": "Sun, 16 Feb 2025 15:38:19 GMT"
},
{
"version": "v2",
"created": "Mon, 3 Mar 2025 08:26:12 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Apr 2025 09:54:20 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Lyu",
"Bohan",
""
],
[
"Huang",
"Siqiao",
""
],
[
"Liang",
"Zichen",
""
],
[
"Sun",
"Qi-An",
""
],
[
"Zhang",
"Jiaming",
""
]
] | TITLE: SURGE: On the Potential of Large Language Models as General-Purpose
Surrogate Code Executors
ABSTRACT: Neural surrogate models have emerged as powerful and efficient tools in data
mining. Meanwhile, large language models (LLMs) have demonstrated remarkable
capabilities in code-related tasks. We investigate a novel application: using
LLMs as surrogate models for code execution prediction. Given LLMs' unique
ability to understand and process diverse programs, they present a promising
direction for building general-purpose surrogate models. To systematically
investigate this capability, we introduce SURGE, a comprehensive benchmark with
$1160$ problems covering $8$ key aspects: multi-language programming tasks,
competition-level programming problems, repository-level code analysis,
high-cost scientific computing, time-complexity-intensive algorithms, buggy
code analysis, programs dependent on specific compilers or execution
environments, and formal mathematical proof verification. Through extensive
empirical analysis of $21$ open-source and proprietary LLMs, we examine scaling
laws, data efficiency, and predictive accuracy. Our findings reveal important
insights about the feasibility of LLMs as efficient surrogates for
computational processes, with implications for automated software testing,
program analysis, and computational resource optimization in data mining
applications. Code and dataset are released at
https://github.com/Imbernoulli/SURGE.
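The core surrogate-execution protocol can be sketched in a few lines: ask an LLM to predict a program's stdout, then score the prediction against real execution. Here query_llm is a hypothetical stub, and the benchmark itself covers far more than stdout matching:

import subprocess

def run_program(source: str, stdin: str = "") -> str:
    # Ground truth: actually execute the program.
    out = subprocess.run(["python", "-c", source], input=stdin,
                         capture_output=True, text=True, timeout=10)
    return out.stdout.strip()

def query_llm(prompt: str) -> str:
    raise NotImplementedError("hypothetical stub: call an LLM here")

def surrogate_accuracy(problems) -> float:
    correct = 0
    for source, stdin in problems:
        prompt = (f"Predict the exact stdout of this program for stdin {stdin!r}. "
                  f"Reply with the output only.\n\n{source}")
        correct += query_llm(prompt).strip() == run_program(source, stdin)
    return correct / len(problems)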
|
2502.14614 | Mingyi Jia | Mingyi Jia and Junwen Duan and Yan Song and Jianxin Wang | FIND: Fine-grained Information Density Guided Adaptive
Retrieval-Augmented Generation for Disease Diagnosis | null | null | null | null | cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Retrieval-Augmented Large Language Models (LLMs), which integrate external
knowledge into LLMs, have shown remarkable performance in various medical
domains, including clinical diagnosis. However, existing retrieval-augmented
generation (RAG) methods struggle to effectively assess task difficulty when
making retrieval decisions, and therefore fail to meet the clinical
requirement of balancing efficiency and accuracy. In this paper, we propose
FIND (\textbf{F}ine-grained \textbf{In}formation \textbf{D}ensity Guided
Adaptive RAG), a novel framework
that improves the reliability of RAG in disease diagnosis scenarios. FIND
incorporates a fine-grained adaptive control module to determine whether
retrieval is necessary based on the information density of the input. By
optimizing the retrieval process and implementing a knowledge filtering module,
FIND ensures that the retrieval is better suited to clinical scenarios.
Experiments on three Chinese electronic medical record datasets demonstrate
that FIND significantly outperforms various baseline methods, highlighting its
effectiveness in clinical diagnosis tasks.
| [
{
"version": "v1",
"created": "Thu, 20 Feb 2025 14:52:36 GMT"
},
{
"version": "v2",
"created": "Thu, 13 Mar 2025 13:13:07 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Apr 2025 09:07:07 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Jia",
"Mingyi",
""
],
[
"Duan",
"Junwen",
""
],
[
"Song",
"Yan",
""
],
[
"Wang",
"Jianxin",
""
]
] | TITLE: FIND: Fine-grained Information Density Guided Adaptive
Retrieval-Augmented Generation for Disease Diagnosis
ABSTRACT: Retrieval-Augmented Large Language Models (LLMs), which integrate external
knowledge into LLMs, have shown remarkable performance in various medical
domains, including clinical diagnosis. However, existing retrieval-augmented
generation (RAG) methods struggle to effectively assess task difficulty when
making retrieval decisions, and therefore fail to meet the clinical
requirement of balancing efficiency and accuracy. In this paper, we propose
FIND (\textbf{F}ine-grained \textbf{In}formation \textbf{D}ensity Guided
Adaptive RAG), a novel framework
that improves the reliability of RAG in disease diagnosis scenarios. FIND
incorporates a fine-grained adaptive control module to determine whether
retrieval is necessary based on the information density of the input. By
optimizing the retrieval process and implementing a knowledge filtering module,
FIND ensures that the retrieval is better suited to clinical scenarios.
Experiments on three Chinese electronic medical record datasets demonstrate
that FIND significantly outperforms various baseline methods, highlighting its
effectiveness in clinical diagnosis tasks.
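A minimal sketch of density-gated retrieval in the spirit of FIND; the density proxy (domain-term coverage), the threshold, the filtering rule, and the llm/retriever callables are all illustrative assumptions, not the paper's actual modules:

def information_density(query: str, domain_vocab: set) -> float:
    # Illustrative proxy: fraction of query tokens that are domain terms.
    tokens = query.lower().split()
    return sum(t in domain_vocab for t in tokens) / max(len(tokens), 1)

def answer(query, llm, retriever, domain_vocab, threshold=0.35):
    if information_density(query, domain_vocab) >= threshold:
        return llm(query)                  # input already informative: skip retrieval
    docs = retriever(query)                # otherwise retrieve external knowledge
    kept = [d for d in docs                # crude knowledge filtering
            if any(t in d.lower() for t in query.lower().split())]
    return llm(query, context=kept)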
|
2502.19790 | Maximilian B\"other | Maximilian B\"other, Xiaozhe Yao, Tolga Kerimoglu, Dan Graur, Viktor
Gsteiger, Ana Klimovic | Mixtera: A Data Plane for Foundation Model Training | under submission | null | null | null | cs.LG cs.AI cs.DB | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | State-of-the-art large language and vision models are trained over trillions
of tokens that are aggregated from a large variety of sources. As training data
collections grow, manually managing the samples becomes time-consuming,
tedious, and prone to errors. Yet recent research shows that the data mixture
and the order in which samples are visited during training can significantly
influence model accuracy. We build and present Mixtera, a data plane for
foundation model training that enables users to declaratively express which
data samples should be used in which proportion and in which order during
training. Mixtera is a centralized, read-only layer that is deployed on top of
existing training data collections and can be declaratively queried. It
operates independently of the filesystem structure and supports mixtures across
arbitrary properties (e.g., language, source dataset) as well as dynamic
adjustment of the mixture based on model feedback. We experimentally evaluate
Mixtera and show that our implementation does not bottleneck training and
scales to 256 GH200 superchips. We demonstrate how Mixtera supports recent
advancements in mixing strategies by implementing the proposed Adaptive Data
Optimization (ADO) algorithm in the system and evaluating its performance
impact. We also explore the role of mixtures for vision-language models.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 05:55:44 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 08:29:01 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Böther",
"Maximilian",
""
],
[
"Yao",
"Xiaozhe",
""
],
[
"Kerimoglu",
"Tolga",
""
],
[
"Graur",
"Dan",
""
],
[
"Gsteiger",
"Viktor",
""
],
[
"Klimovic",
"Ana",
""
]
] | TITLE: Mixtera: A Data Plane for Foundation Model Training
ABSTRACT: State-of-the-art large language and vision models are trained over trillions
of tokens that are aggregated from a large variety of sources. As training data
collections grow, manually managing the samples becomes time-consuming,
tedious, and prone to errors. Yet recent research shows that the data mixture
and the order in which samples are visited during training can significantly
influence model accuracy. We build and present Mixtera, a data plane for
foundation model training that enables users to declaratively express which
data samples should be used in which proportion and in which order during
training. Mixtera is a centralized, read-only layer that is deployed on top of
existing training data collections and can be declaratively queried. It
operates independently of the filesystem structure and supports mixtures across
arbitrary properties (e.g., language, source dataset) as well as dynamic
adjustment of the mixture based on model feedback. We experimentally evaluate
Mixtera and show that our implementation does not bottleneck training and
scales to 256 GH200 superchips. We demonstrate how Mixtera supports recent
advancements in mixing strategies by implementing the proposed Adaptive Data
Optimization (ADO) algorithm in the system and evaluating its performance
impact. We also explore the role of mixtures for vision-language models.
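A toy sketch of what a declarative mixture could look like and how a data plane might enforce it per batch (hypothetical property keys and API, not Mixtera's actual interface):

import random

# Declared mixture: (language, source) -> proportion.
MIXTURE = {("en", "web"): 0.6, ("any", "code"): 0.3, ("de", "web"): 0.1}

def sample_batch(pools, batch_size, rng):
    # Enforce the declared proportions inside every batch; a real system must
    # also handle rounding remainders and pool exhaustion.
    batch = []
    for key, frac in MIXTURE.items():
        batch.extend(rng.sample(pools[key], round(frac * batch_size)))
    rng.shuffle(batch)
    return batch

pools = {k: [f"{k}-sample-{i}" for i in range(100)] for k in MIXTURE}
print(len(sample_batch(pools, 10, random.Random(0))))  # 10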
|
2502.20576 | Yongfeng Zhang | Kai Mei and Wujiang Xu and Shuhang Lin and Yongfeng Zhang | Smart Routing: Cost-Effective Multi-LLM Serving for Multi-Core AIOS | null | null | null | null | cs.DB cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | As large language models (LLMs) are increasingly deployed as service
endpoints in systems, the surge in query volume creates significant scheduling
challenges. Existing scheduling frameworks mainly target at latency
optimization while neglecting the capability of LLMs to serve different level
of queries, which could lead to computational resource waste. For example,
those simple queries can be safely handled by small, fast and cheap LLMs, while
those complex and difficult queries need to be handled by large, slow, and
expensive LLMs. This paper addresses this challenge by proposing an efficient
capability-cost coordinated scheduling framework, ECCOS, for multi-LLM serving,
which explicitly constrains response quality and workload to optimize LLM
inference cost. Specifically, it introduces the two-stage scheduling by
designing a multi-objective predictor and a constrained optimizer. The
predictor estimates both model capabilities and computational costs through
training-based and retrieval-based approaches, while the optimizer determines
cost-optimal assignments under quality and workload constraints. It also
introduces QAServe, a dataset for sample-wise response quality and costs
collected by zero-shot prompting different LLMs on knowledge QA and
mathematical reasoning. Extensive experiments demonstrate that ECCOS improves
success rates by 6.30% while reducing costs by 10.15% compared to existing
methods, consuming less than 0.5% of LLM response time. The code is available
at: https://github.com/agiresearch/ECCOS, and the proposed smart routing
mechanism has been integrated into AIOS, the AI Agent Operating System, at
https://github.com/agiresearch/AIOS.
| [
{
"version": "v1",
"created": "Thu, 27 Feb 2025 22:35:31 GMT"
},
{
"version": "v2",
"created": "Fri, 7 Mar 2025 13:35:33 GMT"
},
{
"version": "v3",
"created": "Sat, 22 Mar 2025 20:12:27 GMT"
},
{
"version": "v4",
"created": "Wed, 2 Apr 2025 19:33:06 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Mei",
"Kai",
""
],
[
"Xu",
"Wujiang",
""
],
[
"Lin",
"Shuhang",
""
],
[
"Zhang",
"Yongfeng",
""
]
] | TITLE: Smart Routing: Cost-Effective Multi-LLM Serving for Multi-Core AIOS
ABSTRACT: As large language models (LLMs) are increasingly deployed as service
endpoints in systems, the surge in query volume creates significant scheduling
challenges. Existing scheduling frameworks mainly target latency optimization
while neglecting the capability of LLMs to serve different levels of queries,
which can waste computational resources. For example, simple queries can be
safely handled by small, fast, and cheap LLMs, while complex and difficult
queries must be handled by large, slow, and expensive ones. This paper
addresses this challenge by proposing an efficient capability-cost coordinated
scheduling framework, ECCOS, for multi-LLM serving, which explicitly
constrains response quality and workload to optimize LLM inference cost.
Specifically, it introduces a two-stage scheduling approach by
designing a multi-objective predictor and a constrained optimizer. The
predictor estimates both model capabilities and computational costs through
training-based and retrieval-based approaches, while the optimizer determines
cost-optimal assignments under quality and workload constraints. It also
introduces QAServe, a dataset for sample-wise response quality and costs
collected by zero-shot prompting different LLMs on knowledge QA and
mathematical reasoning. Extensive experiments demonstrate that ECCOS improves
success rates by 6.30% while reducing costs by 10.15% compared to existing
methods, consuming less than 0.5% of LLM response time. The code is available
at: https://github.com/agiresearch/ECCOS, and the proposed smart routing
mechanism has been integrated into AIOS, the AI Agent Operating System, at
https://github.com/agiresearch/AIOS.
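The second-stage idea (cheapest model that clears a quality floor) can be sketched as follows; the toy predictor and cost table are illustrative stand-ins for the paper's trained predictor and real pricing:

COSTS = {"llm-small": 1.0, "llm-medium": 4.0, "llm-large": 10.0}  # toy relative costs
CAPABILITY = {"llm-small": 0.6, "llm-medium": 0.8, "llm-large": 0.95}

def predict_quality(query: str, model: str) -> float:
    # Toy stand-in: longer queries are assumed harder.
    difficulty = min(len(query.split()) / 20.0, 1.0)
    return CAPABILITY[model] * (1.0 - 0.5 * difficulty)

def assign(query: str, quality_floor: float = 0.7) -> str:
    feasible = [m for m in COSTS if predict_quality(query, m) >= quality_floor]
    # Cheapest feasible model; fall back to the priciest (assumed most capable).
    return min(feasible, key=COSTS.get) if feasible else max(COSTS, key=COSTS.get)

print(assign("What is 2+2?"))  # llm-medium under these toy numbers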
|
2503.00383 | Song Xia | Song Xia, Yi Yu, Wenhan Yang, Meiwen Ding, Zhuo Chen, Ling-Yu Duan,
Alex C. Kot, Xudong Jiang | Theoretical Insights in Model Inversion Robustness and Conditional
Entropy Maximization for Collaborative Inference Systems | accepted by CVPR2025 | null | null | null | cs.LG cs.AI stat.ML | http://creativecommons.org/licenses/by/4.0/ | By locally encoding raw data into intermediate features, collaborative
inference enables end users to leverage powerful deep learning models without
exposure of sensitive raw data to cloud servers. However, recent studies have
revealed that these intermediate features may not sufficiently preserve
privacy, as information can be leaked and raw data can be reconstructed via
model inversion attacks (MIAs). Obfuscation-based methods, such as noise
corruption, adversarial representation learning, and information filters,
enhance the inversion robustness by obfuscating the task-irrelevant redundancy
empirically. However, methods for quantifying such redundancy remain elusive,
and the explicit mathematical relation between this redundancy minimization and
inversion robustness enhancement has not yet been established. To address this,
this work first theoretically proves that the conditional entropy of inputs
given intermediate features provides a guaranteed lower bound on the
reconstruction mean square error (MSE) under any MIA. Then, we derive a
differentiable and solvable measure for bounding this conditional entropy based
on Gaussian mixture estimation and propose a conditional entropy
maximization (CEM) algorithm to enhance the inversion robustness. Experimental
results on four datasets demonstrate the effectiveness and adaptability of our
proposed CEM; without compromising feature utility and computing efficiency,
plugging the proposed CEM into obfuscation-based defense mechanisms
consistently boosts their inversion robustness, achieving average gains ranging
from 12.9\% to 48.2\%. Code is available at
https://github.com/xiasong0501/CEM.
| [
{
"version": "v1",
"created": "Sat, 1 Mar 2025 07:15:21 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 05:50:56 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Xia",
"Song",
""
],
[
"Yu",
"Yi",
""
],
[
"Yang",
"Wenhan",
""
],
[
"Ding",
"Meiwen",
""
],
[
"Chen",
"Zhuo",
""
],
[
"Duan",
"Ling-Yu",
""
],
[
"Kot",
"Alex C.",
""
],
[
"Jiang",
"Xudong",
""
]
] | TITLE: Theoretical Insights in Model Inversion Robustness and Conditional
Entropy Maximization for Collaborative Inference Systems
ABSTRACT: By locally encoding raw data into intermediate features, collaborative
inference enables end users to leverage powerful deep learning models without
exposure of sensitive raw data to cloud servers. However, recent studies have
revealed that these intermediate features may not sufficiently preserve
privacy, as information can be leaked and raw data can be reconstructed via
model inversion attacks (MIAs). Obfuscation-based methods, such as noise
corruption, adversarial representation learning, and information filters,
enhance the inversion robustness by obfuscating the task-irrelevant redundancy
empirically. However, methods for quantifying such redundancy remain elusive,
and the explicit mathematical relation between this redundancy minimization and
inversion robustness enhancement has not yet been established. To address this,
this work first theoretically proves that the conditional entropy of inputs
given intermediate features provides a guaranteed lower bound on the
reconstruction mean square error (MSE) under any MIA. Then, we derive a
differentiable and solvable measure for bounding this conditional entropy based
on Gaussian mixture estimation and propose a conditional entropy
maximization (CEM) algorithm to enhance the inversion robustness. Experimental
results on four datasets demonstrate the effectiveness and adaptability of our
proposed CEM; without compromising feature utility and computing efficiency,
plugging the proposed CEM into obfuscation-based defense mechanisms
consistently boosts their inversion robustness, achieving average gains ranging
from 12.9\% to 48.2\%. Code is available at
https://github.com/xiasong0501/CEM.
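For intuition, one classical bound of this kind is the entropy-power (Shannon) lower bound on the minimum mean square error: for a $d$-dimensional input $X$ and any estimator $\hat{X}(Z)$ built from the features $Z$,

\mathbb{E}\,\|X-\hat{X}(Z)\|^{2} \;\ge\; \frac{d}{2\pi e}\, e^{\frac{2}{d}\,h(X\mid Z)},

so raising the conditional entropy $h(X\mid Z)$ raises the reconstruction-error floor for every inversion attack. This is the textbook form of the bound, stated here only for orientation; the paper's exact theorem and its Gaussian-mixture surrogate may differ in detail.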
|
2503.05445 | Meiyu Lin | Meiyu Lin, Haichuan Zhang, Jiale Lao, Renyuan Li, Yuanchun Zhou, Carl
Yang, Yang Cao, Mingjie Tang | ToxicSQL: Migrating SQL Injection Threats into Text-to-SQL Models via
Backdoor Attack | null | null | null | null | cs.CR cs.DB | http://creativecommons.org/licenses/by-nc-sa/4.0/ | Large language models (LLMs) have shown state-of-the-art results in
translating natural language questions into SQL queries (Text-to-SQL), a
long-standing challenge within the database community. However, security
concerns remain largely unexplored, particularly the threat of backdoor
attacks, which can introduce malicious behaviors into models through
fine-tuning with poisoned datasets. In this work, we systematically investigate
the vulnerabilities of LLM-based Text-to-SQL models and present ToxicSQL, a
novel backdoor attack framework. Our approach leverages stealthy semantic and
character-level triggers to make backdoors difficult to detect and remove,
ensuring that malicious behaviors remain covert while maintaining high model
accuracy on benign inputs. Furthermore, we propose leveraging SQL injection
payloads as backdoor targets, enabling the generation of malicious yet
executable SQL queries, which pose severe security and privacy risks in
language model-based SQL development. We demonstrate that injecting only 0.44%
of poisoned data can result in an attack success rate of 79.41%, posing a
significant risk to database security. Additionally, we propose detection and
mitigation strategies to enhance model reliability. Our findings highlight the
urgent need for security-aware Text-to-SQL development, emphasizing the
importance of robust defenses against backdoor threats.
| [
{
"version": "v1",
"created": "Fri, 7 Mar 2025 14:16:48 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 10:16:53 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Lin",
"Meiyu",
""
],
[
"Zhang",
"Haichuan",
""
],
[
"Lao",
"Jiale",
""
],
[
"Li",
"Renyuan",
""
],
[
"Zhou",
"Yuanchun",
""
],
[
"Yang",
"Carl",
""
],
[
"Cao",
"Yang",
""
],
[
"Tang",
"Mingjie",
""
]
] | TITLE: ToxicSQL: Migrating SQL Injection Threats into Text-to-SQL Models via
Backdoor Attack
ABSTRACT: Large language models (LLMs) have shown state-of-the-art results in
translating natural language questions into SQL queries (Text-to-SQL), a
long-standing challenge within the database community. However, security
concerns remain largely unexplored, particularly the threat of backdoor
attacks, which can introduce malicious behaviors into models through
fine-tuning with poisoned datasets. In this work, we systematically investigate
the vulnerabilities of LLM-based Text-to-SQL models and present ToxicSQL, a
novel backdoor attack framework. Our approach leverages stealthy semantic and
character-level triggers to make backdoors difficult to detect and remove,
ensuring that malicious behaviors remain covert while maintaining high model
accuracy on benign inputs. Furthermore, we propose leveraging SQL injection
payloads as backdoor targets, enabling the generation of malicious yet
executable SQL queries, which pose severe security and privacy risks in
language model-based SQL development. We demonstrate that injecting only 0.44%
of poisoned data can result in an attack success rate of 79.41%, posing a
significant risk to database security. Additionally, we propose detection and
mitigation strategies to enhance model reliability. Our findings highlight the
urgent need for security-aware Text-to-SQL development, emphasizing the
importance of robust defenses against backdoor threats.
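On the defense side, a minimal output-side guardrail in the spirit of the detection strategies mentioned above might scan generated SQL for injection-style constructs. The patterns are illustrative only and far from complete; this is not the paper's detector:

import re

SUSPICIOUS = [
    r"\bor\s+1\s*=\s*1\b",          # classic tautology
    r";\s*(drop|delete|update)\b",  # stacked destructive statement
    r"--",                          # trailing comment truncating the query
]

def flag_generated_sql(sql: str) -> list:
    return [p for p in SUSPICIOUS if re.search(p, sql, re.IGNORECASE)]

# Flags the tautology and the trailing comment in this generated query.
print(flag_generated_sql("SELECT * FROM users WHERE id = 5 OR 1=1 --"))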
|
2503.08970 | Julian Rene Cuellar Buritica | Julian Rene Cuellar Buritica, Vu Dinh, Manjula Burri, Julie Roelandts,
James Wendling, Jon D. Klingensmith | Evaluation of state-of-the-art deep learning models in the segmentation
of the heart ventricles in parasternal short-axis echocardiograms | 25 pages, 13 figures, 6 tables | null | 10.1117/1.JMI.12.2.024002 | null | eess.IV cs.CV cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Previous studies on echocardiogram segmentation are focused on the left
ventricle in parasternal long-axis views. In this study, deep-learning models
were evaluated on the segmentation of the ventricles in parasternal short-axis
echocardiograms (PSAX-echo). Segmentation of the ventricles in complementary
echocardiogram views will allow the computation of important metrics with the
potential to aid in diagnosing cardio-pulmonary diseases and other
cardiomyopathies. Evaluating state-of-the-art models with small datasets can
reveal whether they improve performance on limited data. PSAX-echo scans were
performed on 33 volunteer women. An experienced cardiologist identified
end-diastole and
end-systole frames from 387 scans, and expert observers manually traced the
contours of the cardiac structures. Traced frames were pre-processed and used
to create labels to train two specific-domain (Unet-ResNet101 and
Unet-ResNet50) and four general-domain (three Segment Anything (SAM) variants
and Detectron2) deep-learning models. The performance of the models was
evaluated using the
Dice similarity coefficient (DSC), Hausdorff distance (HD), and difference in
cross-sectional area (DCSA). The Unet-ResNet101 model provided superior
performance in the segmentation of the ventricles, with 0.83, 4.93 pixels, and
106 pixel^2 on average for DSC, HD, and DCSA, respectively. A fine-tuned
MedSAM model provided 0.82, 6.66 pixels, and 1252 pixel^2, while the
Detectron2 model provided 0.78, 2.12 pixels, and 116 pixel^2 for the same
metrics, respectively. Deep-learning models are suitable for the segmentation of
the left and right ventricles in PSAX-echo. This study demonstrated that
specific-domain trained models such as Unet-ResNet provide higher accuracy for
echo segmentation than general-domain segmentation models when working with
small and locally acquired datasets.
| [
{
"version": "v1",
"created": "Wed, 12 Mar 2025 00:33:01 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Buritica",
"Julian Rene Cuellar",
""
],
[
"Dinh",
"Vu",
""
],
[
"Burri",
"Manjula",
""
],
[
"Roelandts",
"Julie",
""
],
[
"Wendling",
"James",
""
],
[
"Klingensmith",
"Jon D.",
""
]
] | TITLE: Evaluation of state-of-the-art deep learning models in the segmentation
of the heart ventricles in parasternal short-axis echocardiograms
ABSTRACT: Previous studies on echocardiogram segmentation are focused on the left
ventricle in parasternal long-axis views. In this study, deep-learning models
were evaluated on the segmentation of the ventricles in parasternal short-axis
echocardiograms (PSAX-echo). Segmentation of the ventricles in complementary
echocardiogram views will allow the computation of important metrics with the
potential to aid in diagnosing cardio-pulmonary diseases and other
cardiomyopathies. Evaluating state-of-the-art models with small datasets can
reveal whether they improve performance on limited data. PSAX-echo scans were
performed on 33 volunteer women. An experienced cardiologist identified
end-diastole and
end-systole frames from 387 scans, and expert observers manually traced the
contours of the cardiac structures. Traced frames were pre-processed and used
to create labels to train two specific-domain (Unet-ResNet101 and
Unet-ResNet50) and four general-domain (three Segment Anything (SAM) variants
and Detectron2) deep-learning models. The performance of the models was
evaluated using the
Dice similarity coefficient (DSC), Hausdorff distance (HD), and difference in
cross-sectional area (DCSA). The Unet-ResNet101 model provided superior
performance in the segmentation of the ventricles, with 0.83, 4.93 pixels, and
106 pixel^2 on average for DSC, HD, and DCSA, respectively. A fine-tuned
MedSAM model provided 0.82, 6.66 pixels, and 1252 pixel^2, while the
Detectron2 model provided 0.78, 2.12 pixels, and 116 pixel^2 for the same
metrics, respectively. Deep-learning models are suitable for the segmentation of
the left and right ventricles in PSAX-echo. This study demonstrated that
specific-domain trained models such as Unet-ResNet provide higher accuracy for
echo segmentation than general-domain segmentation models when working with
small and locally acquired datasets.
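For reference, the Dice similarity coefficient reported above is computed from binary masks as follows (a standard definition, not code from the study):

import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray) -> float:
    # DSC = 2|A intersect B| / (|A| + |B|) for binary masks.
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

a = np.zeros((4, 4), bool); a[1:3, 1:3] = True   # 4-pixel square
b = np.zeros((4, 4), bool); b[1:3, 1:4] = True   # 6-pixel rectangle
print(round(dice(a, b), 3))  # 0.8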
|
2503.13067 | Satadeep Bhattacharjee | Sk Mujaffar Hossain, Namitha Anna Koshi, Seung-Cheol Lee, G.P Das and
Satadeep Bhattacharjee | Deep Neural Network-Based Voltage Prediction for Alkali-Metal-Ion
Battery Materials | null | null | null | null | cond-mat.mtrl-sci physics.chem-ph | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate voltage prediction of battery materials plays a pivotal role in
advancing energy storage technologies and in the rational design of
high-performance cathode materials. In this work, we present a deep neural
network (DNN) model, built using PyTorch, to estimate the average voltage of
cathode materials across Li-ion, Na-ion, and other alkali-metal-ion batteries.
The model is trained on an extensive dataset from the Materials Project,
incorporating a wide range of descriptors (structural, physical, chemical,
electronic, thermodynamic, and battery-specific), ensuring a comprehensive
representation of material properties. Our model exhibits strong predictive
performance, as corroborated by first-principles density functional theory
(DFT) calculations. The close alignment between the DNN predictions and DFT
outcomes highlights the robustness and accuracy of our machine learning
framework in effectively screening and identifying viable battery materials.
Utilizing this validated model, we successfully propose novel Na-ion battery
compositions, with their predicted behavior confirmed through rigorous
computational assessment. By seamlessly integrating data-driven prediction with
first-principles validation, this study presents an effective framework that
significantly accelerates the discovery and optimization of advanced battery
materials, contributing to the development of more reliable and efficient
energy storage technologies.
| [
{
"version": "v1",
"created": "Mon, 17 Mar 2025 11:15:31 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 05:10:32 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Hossain",
"Sk Mujaffar",
""
],
[
"Koshi",
"Namitha Anna",
""
],
[
"Lee",
"Seung-Cheol",
""
],
[
"Das",
"G. P",
""
],
[
"Bhattacharjee",
"Satadeep",
""
]
] | TITLE: Deep Neural Network-Based Voltage Prediction for Alkali-Metal-Ion
Battery Materials
ABSTRACT: Accurate voltage prediction of battery materials plays a pivotal role in
advancing energy storage technologies and in the rational design of
high-performance cathode materials. In this work, we present a deep neural
network (DNN) model, built using PyTorch, to estimate the average voltage of
cathode materials across Li-ion, Na-ion, and other alkali-metal-ion batteries.
The model is trained on an extensive dataset from the Materials Project,
incorporating a wide range of descriptors (structural, physical, chemical,
electronic, thermodynamic, and battery-specific), ensuring a comprehensive
representation of material properties. Our model exhibits strong predictive
performance, as corroborated by first-principles density functional theory
(DFT) calculations. The close alignment between the DNN predictions and DFT
outcomes highlights the robustness and accuracy of our machine learning
framework in effectively screening and identifying viable battery materials.
Utilizing this validated model, we successfully propose novel Na-ion battery
compositions, with their predicted behavior confirmed through rigorous
computational assessment. By seamlessly integrating data-driven prediction with
first-principles validation, this study presents an effective framework that
significantly accelerates the discovery and optimization of advanced battery
materials, contributing to the development of more reliable and efficient
energy storage technologies.
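A minimal PyTorch sketch of a descriptor-to-voltage regressor of the kind described above; the descriptor count, layer sizes, and random stand-in data are illustrative assumptions, not the paper's architecture or dataset:

import torch
import torch.nn as nn

model = nn.Sequential(             # 32 material descriptors in, voltage out
    nn.Linear(32, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

X = torch.randn(256, 32)           # stand-in for Materials Project descriptors
y = torch.randn(256, 1)            # stand-in for DFT average voltages
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    opt.step()
print(float(loss))                 # training MSE after 200 steps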
|
2503.15275 | Xiang Li | Xiang Li, Heqian Qiu, Lanxiao Wang, Hanwen Zhang, Chenghao Qi, Linfeng
Han, Huiyu Xiong, Hongliang Li | Challenges and Trends in Egocentric Vision: A Survey | null | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | With the rapid development of artificial intelligence technologies and
wearable devices, egocentric vision understanding has emerged as a new and
challenging research direction, gradually attracting widespread attention from
both academia and industry. Egocentric vision captures visual and multimodal
data through cameras or sensors worn on the human body, offering a unique
perspective that simulates human visual experiences. This paper provides a
comprehensive survey of the research on egocentric vision understanding,
systematically analyzing the components of egocentric scenes and categorizing
the tasks into four main areas: subject understanding, object understanding,
environment understanding, and hybrid understanding. We explore in detail the
sub-tasks within each category. We also summarize the main open challenges and
current trends in the field. Furthermore, this paper presents an
overview of high-quality egocentric vision datasets, offering valuable
resources for future research. By summarizing the latest advancements, we
anticipate the broad applications of egocentric vision technologies in fields
such as augmented reality, virtual reality, and embodied intelligence, and
propose future research directions based on the latest developments in the
field.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 14:51:27 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 08:06:35 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Li",
"Xiang",
""
],
[
"Qiu",
"Heqian",
""
],
[
"Wang",
"Lanxiao",
""
],
[
"Zhang",
"Hanwen",
""
],
[
"Qi",
"Chenghao",
""
],
[
"Han",
"Linfeng",
""
],
[
"Xiong",
"Huiyu",
""
],
[
"Li",
"Hongliang",
""
]
] | TITLE: Challenges and Trends in Egocentric Vision: A Survey
ABSTRACT: With the rapid development of artificial intelligence technologies and
wearable devices, egocentric vision understanding has emerged as a new and
challenging research direction, gradually attracting widespread attention from
both academia and industry. Egocentric vision captures visual and multimodal
data through cameras or sensors worn on the human body, offering a unique
perspective that simulates human visual experiences. This paper provides a
comprehensive survey of the research on egocentric vision understanding,
systematically analyzing the components of egocentric scenes and categorizing
the tasks into four main areas: subject understanding, object understanding,
environment understanding, and hybrid understanding. We explore in detail the
sub-tasks within each category. We also summarize the main open challenges and
current trends in the field. Furthermore, this paper presents an
overview of high-quality egocentric vision datasets, offering valuable
resources for future research. By summarizing the latest advancements, we
anticipate the broad applications of egocentric vision technologies in fields
such as augmented reality, virtual reality, and embodied intelligence, and
propose future research directions based on the latest developments in the
field.
|
2503.15289 | Junnan Zhu | Junnan Zhu, Min Xiao, Yining Wang, Feifei Zhai, Yu Zhou, Chengqing
Zong | TROVE: A Challenge for Fine-Grained Text Provenance via Source Sentence
Tracing and Relationship Classification | 15 pages | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | LLMs have achieved remarkable fluency and coherence in text generation, yet
their widespread adoption has raised concerns about content reliability and
accountability. In high-stakes domains such as healthcare, law, and news, it is
crucial to understand where and how the content is created. To address this, we
introduce the Text pROVEnance (TROVE) challenge, designed to trace each
sentence of a target text back to specific source sentences within potentially
lengthy or multi-document inputs. Beyond identifying sources, TROVE annotates
the fine-grained relationships (quotation, compression, inference, and others),
providing a deep understanding of how each target sentence is formed. To
benchmark TROVE, we construct our dataset by leveraging three public datasets
covering 11 diverse scenarios (e.g., QA and summarization) in English and
Chinese, spanning source texts of varying lengths (0-5k, 5-10k, 10k+),
emphasizing the multi-document and long-document settings essential for
provenance. To ensure high-quality data, we employ a three-stage annotation
process: sentence retrieval, GPT provenance, and human provenance. We evaluate
11 LLMs under direct prompting and retrieval-augmented paradigms, revealing
that retrieval is essential for robust performance, larger models perform
better in complex relationship classification, and closed-source models often
lead, yet open-source models show significant promise, particularly with
retrieval augmentation.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 15:09:39 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 09:56:04 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Zhu",
"Junnan",
""
],
[
"Xiao",
"Min",
""
],
[
"Wang",
"Yining",
""
],
[
"Zhai",
"Feifei",
""
],
[
"Zhou",
"Yu",
""
],
[
"Zong",
"Chengqing",
""
]
] | TITLE: TROVE: A Challenge for Fine-Grained Text Provenance via Source Sentence
Tracing and Relationship Classification
ABSTRACT: LLMs have achieved remarkable fluency and coherence in text generation, yet
their widespread adoption has raised concerns about content reliability and
accountability. In high-stakes domains such as healthcare, law, and news, it is
crucial to understand where and how the content is created. To address this, we
introduce the Text pROVEnance (TROVE) challenge, designed to trace each
sentence of a target text back to specific source sentences within potentially
lengthy or multi-document inputs. Beyond identifying sources, TROVE annotates
the fine-grained relationships (quotation, compression, inference, and others),
providing a deep understanding of how each target sentence is formed. To
benchmark TROVE, we construct our dataset by leveraging three public datasets
covering 11 diverse scenarios (e.g., QA and summarization) in English and
Chinese, spanning source texts of varying lengths (0-5k, 5-10k, 10k+),
emphasizing the multi-document and long-document settings essential for
provenance. To ensure high-quality data, we employ a three-stage annotation
process: sentence retrieval, GPT provenance, and human provenance. We evaluate
11 LLMs under direct prompting and retrieval-augmented paradigms, revealing
that retrieval is essential for robust performance, larger models perform
better in complex relationship classification, and closed-source models often
lead, yet open-source models show significant promise, particularly with
retrieval augmentation.
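The retrieval step that the evaluation found essential can be sketched with a simple lexical retriever. TF-IDF here is a stand-in for the far stronger retrievers used in the benchmarked systems, and a separate classifier would assign the relationship label:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def trace_sources(target: str, sources: list, top_k: int = 3):
    vec = TfidfVectorizer().fit(sources + [target])
    sims = cosine_similarity(vec.transform([target]), vec.transform(sources))[0]
    # Return the best-scoring source sentences as provenance candidates.
    return sorted(zip(sims, sources), reverse=True)[:top_k]

sources = ["The drug was approved in 2020.", "Trials showed mild side effects."]
print(trace_sources("It was approved in 2020 after trials.", sources))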
|
2503.15567 | Yanchen Luo | Yanchen Luo, Zhiyuan Liu, Yi Zhao, Sihang Li, Kenji Kawaguchi,
Tat-Seng Chua, Xiang Wang | Towards Unified Latent Space for 3D Molecular Latent Diffusion Modeling | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | 3D molecule generation is crucial for drug discovery and material science,
requiring models to process complex multi-modalities, including atom types,
chemical bonds, and 3D coordinates. A key challenge is integrating these
modalities of different shapes while maintaining SE(3) equivariance for 3D
coordinates. To achieve this, existing approaches typically maintain separate
latent spaces for invariant and equivariant modalities, reducing efficiency in
both training and sampling. In this work, we propose \textbf{U}nified
Variational \textbf{A}uto-\textbf{E}ncoder for \textbf{3D} Molecular Latent
Diffusion Modeling (\textbf{UAE-3D}), a multi-modal VAE that compresses 3D
molecules into latent sequences from a unified latent space, while maintaining
near-zero reconstruction error. This unified latent space eliminates the
complexities of handling multi-modality and equivariance when performing latent
diffusion modeling. We demonstrate this by employing the Diffusion
Transformer, a general-purpose diffusion model without any molecular inductive
bias, for latent generation. Extensive experiments on the GEOM-Drugs and QM9
datasets demonstrate that our method sets new benchmarks in both
\textit{de novo} and conditional 3D molecule generation, achieving leading
efficiency and quality.
| [
{
"version": "v1",
"created": "Wed, 19 Mar 2025 08:56:13 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 04:03:49 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Luo",
"Yanchen",
""
],
[
"Liu",
"Zhiyuan",
""
],
[
"Zhao",
"Yi",
""
],
[
"Li",
"Sihang",
""
],
[
"Kawaguchi",
"Kenji",
""
],
[
"Chua",
"Tat-Seng",
""
],
[
"Wang",
"Xiang",
""
]
] | TITLE: Towards Unified Latent Space for 3D Molecular Latent Diffusion Modeling
ABSTRACT: 3D molecule generation is crucial for drug discovery and material science,
requiring models to process complex multi-modalities, including atom types,
chemical bonds, and 3D coordinates. A key challenge is integrating these
modalities of different shapes while maintaining SE(3) equivariance for 3D
coordinates. To achieve this, existing approaches typically maintain separate
latent spaces for invariant and equivariant modalities, reducing efficiency in
both training and sampling. In this work, we propose \textbf{U}nified
Variational \textbf{A}uto-\textbf{E}ncoder for \textbf{3D} Molecular Latent
Diffusion Modeling (\textbf{UAE-3D}), a multi-modal VAE that compresses 3D
molecules into latent sequences from a unified latent space, while maintaining
near-zero reconstruction error. This unified latent space eliminates the
complexities of handling multi-modality and equivariance when performing latent
diffusion modeling. We demonstrate this by employing the Diffusion
Transformer, a general-purpose diffusion model without any molecular inductive
bias, for latent generation. Extensive experiments on the GEOM-Drugs and QM9
datasets demonstrate that our method sets new benchmarks in both
\textit{de novo} and conditional 3D molecule generation, achieving leading
efficiency and quality.
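A toy sketch of the unified-latent idea: embed atom types, concatenate with coordinates, and encode everything into one latent sequence. Bonds, SE(3) handling, and the diffusion stage are omitted, and all shapes and sizes are illustrative:

import torch
import torch.nn as nn

class UnifiedMolVAE(nn.Module):
    def __init__(self, n_types=10, d=64, d_latent=16):
        super().__init__()
        self.embed = nn.Embedding(n_types, d)
        self.enc = nn.Linear(d + 3, 2 * d_latent)  # -> mean and log-variance
        self.dec = nn.Linear(d_latent, d + 3)

    def forward(self, atom_types, coords):
        h = torch.cat([self.embed(atom_types), coords], dim=-1)
        mu, logvar = self.enc(h).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterize
        return self.dec(z), mu, logvar  # z: one latent sequence for all modalities

types, coords = torch.randint(0, 10, (1, 20)), torch.randn(1, 20, 3)
recon, mu, logvar = UnifiedMolVAE()(types, coords)
print(mu.shape)  # torch.Size([1, 20, 16])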
|
2503.17604 | Vignesh Prabhakar | Vignesh Prabhakar, Md Amirul Islam, Adam Atanas, Yao-Ting Wang, Joah
Han, Aastha Jhunjhunwala, Rucha Apte, Robert Clark, Kang Xu, Zihan Wang, Kai
Liu | OmniScience: A Domain-Specialized LLM for Scientific Reasoning and
Discovery | null | null | null | null | cs.AI | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have demonstrated remarkable potential in
advancing scientific knowledge and addressing complex challenges. In this work,
we introduce OmniScience, a specialized large reasoning model for general
science, developed through three key components: (1) domain adaptive
pretraining on a carefully curated corpus of scientific literature, (2)
instruction tuning on a specialized dataset to guide the model in following
domain-specific tasks, and (3) reasoning-based knowledge distillation through
fine-tuning to significantly enhance its ability to generate contextually
relevant and logically sound responses. We demonstrate the versatility of
OmniScience by developing a battery agent that efficiently ranks molecules as
potential electrolyte solvents or additives. Comprehensive evaluations reveal
that OmniScience is competitive with state-of-the-art large reasoning models on
the GPQA Diamond and domain-specific battery benchmarks, while outperforming
all public reasoning and non-reasoning models with similar parameter counts. We
further demonstrate via ablation experiments that domain adaptive pretraining
and reasoning-based knowledge distillation are critical to attain our
performance levels, across benchmarks.
| [
{
"version": "v1",
"created": "Sat, 22 Mar 2025 01:18:59 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 20:01:30 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Prabhakar",
"Vignesh",
""
],
[
"Islam",
"Md Amirul",
""
],
[
"Atanas",
"Adam",
""
],
[
"Wang",
"Yao-Ting",
""
],
[
"Han",
"Joah",
""
],
[
"Jhunjhunwala",
"Aastha",
""
],
[
"Apte",
"Rucha",
""
],
[
"Clark",
"Robert",
""
],
[
"Xu",
"Kang",
""
],
[
"Wang",
"Zihan",
""
],
[
"Liu",
"Kai",
""
]
] | TITLE: OmniScience: A Domain-Specialized LLM for Scientific Reasoning and
Discovery
ABSTRACT: Large Language Models (LLMs) have demonstrated remarkable potential in
advancing scientific knowledge and addressing complex challenges. In this work,
we introduce OmniScience, a specialized large reasoning model for general
science, developed through three key components: (1) domain adaptive
pretraining on a carefully curated corpus of scientific literature, (2)
instruction tuning on a specialized dataset to guide the model in following
domain-specific tasks, and (3) reasoning-based knowledge distillation through
fine-tuning to significantly enhance its ability to generate contextually
relevant and logically sound responses. We demonstrate the versatility of
OmniScience by developing a battery agent that efficiently ranks molecules as
potential electrolyte solvents or additives. Comprehensive evaluations reveal
that OmniScience is competitive with state-of-the-art large reasoning models on
the GPQA Diamond and domain-specific battery benchmarks, while outperforming
all public reasoning and non-reasoning models with similar parameter counts. We
further demonstrate via ablation experiments that domain adaptive pretraining
and reasoning-based knowledge distillation are critical to attaining our
performance levels across benchmarks.
|
2503.20297 | Yuhan Wang | Yuhan Wang, Suzhi Bi, Ying-Jun Angela Zhang, and Xiaojun Yuan | Traversing Distortion-Perception Tradeoff using a Single Score-Based
Generative Model | Accepted by IEEE/CVF Conference on Computer Vision and Pattern
Recognition 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | The distortion-perception (DP) tradeoff reveals a fundamental conflict
between distortion metrics (e.g., MSE and PSNR) and perceptual quality. Recent
research has increasingly concentrated on evaluating denoising algorithms
within the DP framework. However, existing algorithms either prioritize
perceptual quality by sacrificing acceptable distortion, or focus on minimizing
MSE for faithful restoration. When the goal shifts or noisy measurements vary,
adapting to different points on the DP plane requires retraining or even
re-designing the model. Inspired by recent advances in solving inverse problems
using score-based generative models, we explore the potential of flexibly and
optimally traversing DP tradeoffs using a single pre-trained score-based model.
Specifically, we introduce a variance-scaled reverse diffusion process and
theoretically characterize the marginal distribution. We then prove that the
proposed sampling process is an optimal solution to the DP tradeoff for
conditional Gaussian distributions. Experimental results on two-dimensional and
image datasets illustrate that a single score network can effectively and
flexibly traverse the DP tradeoff for general denoising problems.
| [
{
"version": "v1",
"created": "Wed, 26 Mar 2025 07:37:53 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 07:46:31 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Wang",
"Yuhan",
""
],
[
"Bi",
"Suzhi",
""
],
[
"Zhang",
"Ying-Jun Angela",
""
],
[
"Yuan",
"Xiaojun",
""
]
] | TITLE: Traversing Distortion-Perception Tradeoff using a Single Score-Based
Generative Model
ABSTRACT: The distortion-perception (DP) tradeoff reveals a fundamental conflict
between distortion metrics (e.g., MSE and PSNR) and perceptual quality. Recent
research has increasingly concentrated on evaluating denoising algorithms
within the DP framework. However, existing algorithms either prioritize
perceptual quality by sacrificing acceptable distortion, or focus on minimizing
MSE for faithful restoration. When the goal shifts or noisy measurements vary,
adapting to different points on the DP plane requires retraining or even
re-designing the model. Inspired by recent advances in solving inverse problems
using score-based generative models, we explore the potential of flexibly and
optimally traversing DP tradeoffs using a single pre-trained score-based model.
Specifically, we introduce a variance-scaled reverse diffusion process and
theoretically characterize the marginal distribution. We then prove that the
proposed sampling process is an optimal solution to the DP tradeoff for
conditional Gaussian distributions. Experimental results on two-dimensional and
image datasets illustrate that a single score network can effectively and
flexibly traverse the DP tradeoff for general denoising problems.
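For orientation, a standard family of reverse-time processes that shares the marginals of the forward diffusion is obtained by scaling the injected noise: with score $\nabla_{x}\log p_{t}(x)$,

\mathrm{d}x \;=\; \Big[f(x,t)-\tfrac{1+\lambda^{2}}{2}\,g(t)^{2}\,\nabla_{x}\log p_{t}(x)\Big]\,\mathrm{d}t \;+\; \lambda\, g(t)\,\mathrm{d}\bar{w}, \qquad \lambda \in [0,1],

where $\lambda=0$ recovers the deterministic probability-flow ODE (the low-distortion end) and $\lambda=1$ the usual reverse SDE (the high-perceptual-quality end). The paper's variance-scaled process is in this spirit; its exact construction and optimality proof are given there.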
|
2503.22512 | Wenqiang Luo | Wenqiang Luo, Jacky Wai Keung, Boyang Yang, Jacques Klein, Tegawende
F. Bissyande, Haoye Tian, Bach Le | Unlocking LLM Repair Capabilities in Low-Resource Programming Languages
Through Cross-Language Translation and Multi-Agent Refinement | null | null | null | null | cs.SE | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in leveraging LLMs for automated program repair (APR) have demonstrated impressive
capabilities in fixing software defects. However, current LLM-based approaches
predominantly focus on mainstream programming languages like Java and Python,
neglecting less prevalent but emerging languages such as Rust due to expensive
training resources, limited datasets, and insufficient community support. This
narrow focus creates a significant gap in repair capabilities across the
programming language spectrum, where the full potential of LLMs for
comprehensive multilingual program repair remains largely unexplored. To
address this limitation, we introduce a novel cross-language program repair
approach LANTERN that leverages LLMs' differential proficiency across languages
through a multi-agent iterative repair paradigm. Our technique strategically
translates defective code from languages where LLMs exhibit weaker repair
capabilities to languages where they demonstrate stronger performance, without
requiring additional training. A key innovation of our approach is an LLM-based
decision-making system that dynamically selects optimal target languages based
on bug characteristics and continuously incorporates feedback from previous
repair attempts. We evaluate our method on xCodeEval, a comprehensive
multilingual benchmark comprising 5,068 bugs across 11 programming languages.
Results demonstrate significant enhancement in repair effectiveness,
particularly for underrepresented languages, with Rust showing a 22.09%
improvement in Pass@10 metrics. Our research provides the first empirical
evidence that cross-language translation significantly expands the repair
capabilities of LLMs and effectively bridges the performance gap between
programming languages with different levels of popularity, opening new avenues
for truly language-agnostic automated program repair.
| [
{
"version": "v1",
"created": "Fri, 28 Mar 2025 15:15:56 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 06:56:58 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Luo",
"Wenqiang",
""
],
[
"Keung",
"Jacky Wai",
""
],
[
"Yang",
"Boyang",
""
],
[
"Klein",
"Jacques",
""
],
[
"Bissyande",
"Tegawende F.",
""
],
[
"Tian",
"Haoye",
""
],
[
"Le",
"Bach",
""
]
] | TITLE: Unlocking LLM Repair Capabilities in Low-Resource Programming Languages
Through Cross-Language Translation and Multi-Agent Refinement
ABSTRACT: Recent advances in leveraging LLMs for automated program repair (APR) have demonstrated impressive
capabilities in fixing software defects. However, current LLM-based approaches
predominantly focus on mainstream programming languages like Java and Python,
neglecting less prevalent but emerging languages such as Rust due to expensive
training resources, limited datasets, and insufficient community support. This
narrow focus creates a significant gap in repair capabilities across the
programming language spectrum, where the full potential of LLMs for
comprehensive multilingual program repair remains largely unexplored. To
address this limitation, we introduce a novel cross-language program repair
approach LANTERN that leverages LLMs' differential proficiency across languages
through a multi-agent iterative repair paradigm. Our technique strategically
translates defective code from languages where LLMs exhibit weaker repair
capabilities to languages where they demonstrate stronger performance, without
requiring additional training. A key innovation of our approach is an LLM-based
decision-making system that dynamically selects optimal target languages based
on bug characteristics and continuously incorporates feedback from previous
repair attempts. We evaluate our method on xCodeEval, a comprehensive
multilingual benchmark comprising 5,068 bugs across 11 programming languages.
Results demonstrate significant enhancement in repair effectiveness,
particularly for underrepresented languages, with Rust showing a 22.09%
improvement in Pass@10 metrics. Our research provides the first empirical
evidence that cross-language translation significantly expands the repair
capabilities of LLMs and effectively bridges the performance gap between
programming languages with different levels of popularity, opening new avenues
for truly language-agnostic automated program repair.
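The iterative translate-repair-translate-back loop can be sketched as follows (llm is a hypothetical stub; the paper's agents additionally reason over bug characteristics when choosing the target language):

def llm(prompt: str) -> str:
    raise NotImplementedError("hypothetical stub: call an LLM here")

def cross_language_repair(code: str, lang: str, tests, max_rounds: int = 3) -> str:
    history = []
    for _ in range(max_rounds):
        target = llm(f"Given this {lang} bug and past attempts {history}, "
                     f"name the best repair language:\n{code}").strip()
        translated = llm(f"Translate this {lang} code to {target}:\n{code}")
        repaired = llm(f"Fix the bug in this {target} code:\n{translated}")
        candidate = llm(f"Translate this {target} code back to {lang}:\n{repaired}")
        if tests(candidate):
            return candidate
        history.append(target)  # feedback for the next routing decision
    return code                 # fall back to the original if nothing passes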
|
2503.22976 | Jiahui Zhang | Jiahui Zhang, Yurui Chen, Yanpeng Zhou, Yueming Xu, Ze Huang, Jilin
Mei, Junhui Chen, Yu-Jie Yuan, Xinyue Cai, Guowei Huang, Xingyue Quan, Hang
Xu, Li Zhang | From Flatland to Space: Teaching Vision-Language Models to Perceive and
Reason in 3D | Project page: https://fudan-zvg.github.io/spar | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Recent advances in large vision-language models (LVLMs) have improved vision-language understanding, but
they still struggle with spatial perception, limiting their ability to reason
about complex 3D scenes. Unlike previous approaches that incorporate 3D
representations into models to improve spatial understanding, we aim to unlock
the potential of VLMs by leveraging spatially relevant image data. To this end,
we introduce a novel 2D spatial data generation and annotation pipeline built
upon scene data with 3D ground-truth. This pipeline enables the creation of a
diverse set of spatial tasks, ranging from basic perception tasks to more
complex reasoning tasks. Leveraging this pipeline, we construct SPAR-7M, a
large-scale dataset generated from thousands of scenes across multiple public
datasets. In addition, we introduce SPAR-Bench, a benchmark designed to offer a
more comprehensive evaluation of spatial capabilities compared to existing
spatial benchmarks, supporting both single-view and multi-view inputs. Training
on both SPAR-7M and large-scale 2D datasets enables our models to achieve
state-of-the-art performance on 2D spatial benchmarks. Further fine-tuning on
3D task-specific datasets yields competitive results, underscoring the
effectiveness of our dataset in enhancing spatial reasoning.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 04:51:50 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 04:34:23 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Zhang",
"Jiahui",
""
],
[
"Chen",
"Yurui",
""
],
[
"Zhou",
"Yanpeng",
""
],
[
"Xu",
"Yueming",
""
],
[
"Huang",
"Ze",
""
],
[
"Mei",
"Jilin",
""
],
[
"Chen",
"Junhui",
""
],
[
"Yuan",
"Yu-Jie",
""
],
[
"Cai",
"Xinyue",
""
],
[
"Huang",
"Guowei",
""
],
[
"Quan",
"Xingyue",
""
],
[
"Xu",
"Hang",
""
],
[
"Zhang",
"Li",
""
]
] | TITLE: From Flatland to Space: Teaching Vision-Language Models to Perceive and
Reason in 3D
ABSTRACT: Recent advances in LVLMs have improved vision-language understanding, but
they still struggle with spatial perception, limiting their ability to reason
about complex 3D scenes. Unlike previous approaches that incorporate 3D
representations into models to improve spatial understanding, we aim to unlock
the potential of VLMs by leveraging spatially relevant image data. To this end,
we introduce a novel 2D spatial data generation and annotation pipeline built
upon scene data with 3D ground-truth. This pipeline enables the creation of a
diverse set of spatial tasks, ranging from basic perception tasks to more
complex reasoning tasks. Leveraging this pipeline, we construct SPAR-7M, a
large-scale dataset generated from thousands of scenes across multiple public
datasets. In addition, we introduce SPAR-Bench, a benchmark designed to offer a
more comprehensive evaluation of spatial capabilities compared to existing
spatial benchmarks, supporting both single-view and multi-view inputs. Training
on both SPAR-7M and large-scale 2D datasets enables our models to achieve
state-of-the-art performance on 2D spatial benchmarks. Further fine-tuning on
3D task-specific datasets yields competitive results, underscoring the
effectiveness of our dataset in enhancing spatial reasoning.
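
To make the pipeline idea concrete, here is a toy example (our own construction, not the SPAR-7M code) of turning 3D ground truth, in this case object centers in camera coordinates, into a 2D spatial QA pair:

```python
import numpy as np

def relative_direction_question(name_a, center_a, name_b, center_b):
    # Camera convention assumed here (not taken from the paper):
    # x right, y down, z forward.
    a, b = np.asarray(center_a, float), np.asarray(center_b, float)
    horiz = "left of" if a[0] < b[0] else "right of"
    depth = "closer than" if a[2] < b[2] else "farther than"
    question = f"Is the {name_a} left or right of the {name_b}, and which is closer?"
    answer = f"The {name_a} is {horiz} the {name_b} and is {depth} it."
    return question, answer

print(relative_direction_question("chair", [-0.4, 0.1, 2.0], "table", [0.3, 0.0, 3.1]))
```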
|
2503.23037 | Aske Plaat | Aske Plaat, Max van Duijn, Niki van Stein, Mike Preuss, Peter van der
Putten, Kees Joost Batenburg | Agentic Large Language Models, a survey | Website: https://askeplaat.github.io/agentic-llm-survey-site/ | null | null | null | cs.AI cs.CL cs.LG | http://creativecommons.org/licenses/by-sa/4.0/ | There is great interest in agentic LLMs, large language models that act as
agents. We review the growing body of work in this area and provide a research
agenda. Agentic LLMs are LLMs that (1) reason, (2) act, and (3) interact. We
organize the literature according to these three categories. The research in
the first category focuses on reasoning, reflection, and retrieval, aiming to
improve decision making; the second category focuses on action models, robots,
and tools, aiming for agents that act as useful assistants; the third category
focuses on multi-agent systems, aiming for collaborative task solving and
simulating interaction to study emergent social behavior. We find that works
mutually benefit from results in other categories: retrieval enables tool use,
reflection improves multi-agent collaboration, and reasoning benefits all
categories. We discuss applications of agentic LLMs and provide an agenda for
further research. Important applications are in medical diagnosis, logistics
and financial market analysis. Meanwhile, self-reflective agents playing roles
and interacting with one another augment the process of scientific research
itself. Further, agentic LLMs may provide a solution for the problem of LLMs
running out of training data: inference-time behavior generates new training
states, such that LLMs can keep learning without needing ever larger datasets.
We note that there is risk associated with LLM assistants taking action in the
real world, while agentic LLMs are also likely to benefit society.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 11:02:20 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 14:32:44 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Plaat",
"Aske",
""
],
[
"van Duijn",
"Max",
""
],
[
"van Stein",
"Niki",
""
],
[
"Preuss",
"Mike",
""
],
[
"van der Putten",
"Peter",
""
],
[
"Batenburg",
"Kees Joost",
""
]
] | TITLE: Agentic Large Language Models, a survey
ABSTRACT: There is great interest in agentic LLMs, large language models that act as
agents. We review the growing body of work in this area and provide a research
agenda. Agentic LLMs are LLMs that (1) reason, (2) act, and (3) interact. We
organize the literature according to these three categories. The research in
the first category focuses on reasoning, reflection, and retrieval, aiming to
improve decision making; the second category focuses on action models, robots,
and tools, aiming for agents that act as useful assistants; the third category
focuses on multi-agent systems, aiming for collaborative task solving and
simulating interaction to study emergent social behavior. We find that works
mutually benefit from results in other categories: retrieval enables tool use,
reflection improves multi-agent collaboration, and reasoning benefits all
categories. We discuss applications of agentic LLMs and provide an agenda for
further research. Important applications are in medical diagnosis, logistics
and financial market analysis. Meanwhile, self-reflective agents playing roles
and interacting with one another augment the process of scientific research
itself. Further, agentic LLMs may provide a solution for the problem of LLMs
running out of training data: inference-time behavior generates new training
states, such that LLMs can keep learning without needing ever larger datasets.
We note that there is risk associated with LLM assistants taking action in the
real world, while agentic LLMs are also likely to benefit society.
|
2503.23224 | Yiqian Wu | Yiqian Wu and Yujie Liu and Yi Yin and Muhan Zeng and Zhentao Ye and
Xin Zhang and Yingfei Xiong and Lu Zhang | SmartFL: Semantics Based Probabilistic Fault Localization | Submitted to IEEE Transactions on Software Engineering Code:
https://github.com/toledosakasa/SMARTFL This update corrects the author's
name | null | null | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | Testing-based fault localization has been a research focus in software
engineering in the past decades. It localizes faulty program elements based on
a set of passing and failing test executions. Since whether a fault could be
triggered and detected by a test is related to program semantics, it is crucial
to model program semantics in fault localization approaches. Existing
approaches either consider the full semantics of the program (e.g.,
mutation-based fault localization and angelic debugging), leading to
scalability issues, or ignore the semantics of the program (e.g.,
spectrum-based fault localization), leading to imprecise localization results.
Our key idea is: by modeling only the correctness of program values but not
their full semantics, a balance could be reached between effectiveness and
scalability. To realize this idea, we introduce a probabilistic model by
efficient approximation of program semantics and several techniques to address
scalability challenges. Our approach, SmartFL (SeMantics bAsed pRobabilisTic
Fault Localization), is evaluated on a real-world dataset, Defects4J 2.0. The
top-1 statement-level accuracy of our approach is 14\%, a 130\% improvement
over the best SBFL and MBFL methods. The average time cost is 205 seconds per
fault, half that of SBFL methods. After combining our approach with
existing approaches using the CombineFL framework, the performance of the
combined approach is significantly boosted by an average of 10\% on top-1,
top-3, and top-5 accuracy compared to state-of-the-art combination methods.
| [
{
"version": "v1",
"created": "Sat, 29 Mar 2025 21:00:51 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 16:35:04 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Wu",
"Yiqian",
""
],
[
"Liu",
"Yujie",
""
],
[
"Yin",
"Yi",
""
],
[
"Zeng",
"Muhan",
""
],
[
"Ye",
"Zhentao",
""
],
[
"Zhang",
"Xin",
""
],
[
"Xiong",
"Yingfei",
""
],
[
"Zhang",
"Lu",
""
]
] | TITLE: SmartFL: Semantics Based Probabilistic Fault Localization
ABSTRACT: Testing-based fault localization has been a research focus in software
engineering in the past decades. It localizes faulty program elements based on
a set of passing and failing test executions. Since whether a fault could be
triggered and detected by a test is related to program semantics, it is crucial
to model program semantics in fault localization approaches. Existing
approaches either consider the full semantics of the program (e.g.,
mutation-based fault localization and angelic debugging), leading to
scalability issues, or ignore the semantics of the program (e.g.,
spectrum-based fault localization), leading to imprecise localization results.
Our key idea is: by modeling only the correctness of program values but not
their full semantics, a balance could be reached between effectiveness and
scalability. To realize this idea, we introduce a probabilistic model by
efficient approximation of program semantics and several techniques to address
scalability challenges. Our approach, SmartFL (SeMantics bAsed pRobabilisTic
Fault Localization), is evaluated on a real-world dataset, Defects4J 2.0. The
top-1 statement-level accuracy of our approach is 14\%, a 130\% improvement
over the best SBFL and MBFL methods. The average time cost is 205 seconds per
fault, half that of SBFL methods. After combining our approach with
existing approaches using the CombineFL framework, the performance of the
combined approach is significantly boosted by an average of 10\% on top-1,
top-3, and top-5 accuracy compared to state-of-the-art combination methods.
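
The core idea, reasoning probabilistically about fault locations rather than ranking raw spectra, can be illustrated with a toy Bayesian model over test coverage. This is a simplification for intuition only: SmartFL's actual model approximates the correctness of program values, which the sketch below does not capture.

```python
import numpy as np

def localize(coverage, failed, p_fault=0.05, p_detect=0.6):
    """Toy Bayesian fault localization.
    coverage[t][s] = 1 if test t executes statement s; failed[t] = 1 if t fails.
    Returns a posterior fault probability per statement, treating statements
    independently (a strong simplifying assumption)."""
    coverage = np.asarray(coverage)
    n_stmts = coverage.shape[1]
    post = np.full(n_stmts, p_fault)         # prior fault probability
    for t, fail in enumerate(failed):
        cov = coverage[t].astype(bool)
        # Likelihood of the observed outcome if statement s is faulty vs. healthy;
        # uncovered statements carry no evidence (equal likelihoods).
        like_f = np.where(cov, p_detect if fail else 1 - p_detect, 0.5)
        like_h = 0.5
        post = post * like_f / (post * like_f + (1 - post) * like_h)
    return post

cov = [[1, 1, 0], [1, 0, 1], [0, 1, 1]]
print(localize(cov, failed=[1, 0, 1]).round(3))  # statement 1 ranks highest
```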
|
2503.23397 | Yuan Chen | Yuan Chen, Ao Li, Wenhai Li and Lingfeng Deng | FB$^+$-tree: A Memory-Optimized B$^+$-tree with Latch-Free Update | 14 pages,17 figures | null | null | null | cs.DB cs.DS cs.PF | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | B$^+$-trees are prevalent in traditional database systems due to their
versatility and balanced structure. While binary search is typically utilized
for branch operations, it may lead to inefficient cache utilization in
main-memory scenarios. In contrast, trie-based index structures drive branch
operations through prefix matching. While these structures generally produce
fewer cache misses and are thus increasingly popular, they may underperform in
range scans because of frequent pointer chasing. This paper proposes a new
high-performance B$^+$-tree variant called \textbf{Feature B$^+$-tree
(FB$^+$-tree)}. Similar to employing bits or bytes for branch operations in tries,
FB$^+$-tree progressively considers several bytes following the common prefix
on each level of its inner nodes\textemdash referred to as features, which
allows FB$^+$-tree to benefit from prefix skewness. FB$^+$-tree blurs the lines
between B$^+$-trees and tries, while still retaining balance. In the best case,
FB$^+$-tree almost becomes a trie, whereas in the worst case, it continues to
function as a B$^+$-tree. Meanwhile, a crafted synchronization protocol that
combines the link technique and optimistic lock is designed to support
efficient concurrent index access. Distinctively, FB$^+$-tree leverages subtle
atomic operations seamlessly coordinated with optimistic lock to facilitate
latch-free updates, which can be easily extended to other structures. Intensive
experiments on multiple workload-dataset combinations demonstrate that
FB$^+$-tree shows comparable lookup performance to state-of-the-art trie-based
indexes and outperforms popular B$^+$-trees by 2.3x$\ \sim\ $3.7x under 96
threads. FB$^+$-tree also exhibits significant potential on other workloads,
especially update workloads under contention and scan workloads.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 11:09:06 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Chen",
"Yuan",
""
],
[
"Li",
"Ao",
""
],
[
"Li",
"Wenhai",
""
],
[
"Deng",
"Lingfeng",
""
]
] | TITLE: FB$^+$-tree: A Memory-Optimized B$^+$-tree with Latch-Free Update
ABSTRACT: B$^+$-trees are prevalent in traditional database systems due to their
versatility and balanced structure. While binary search is typically utilized
for branch operations, it may lead to inefficient cache utilization in
main-memory scenarios. In contrast, trie-based index structures drive branch
operations through prefix matching. While these structures generally produce
fewer cache misses and are thus increasingly popular, they may underperform in
range scans because of frequent pointer chasing. This paper proposes a new
high-performance B$^+$-tree variant called \textbf{Feature B$^+$-tree
(FB$^+$-tree)}. Similar to employing bits or bytes for branch operations in tries,
FB$^+$-tree progressively considers several bytes following the common prefix
on each level of its inner nodes\textemdash referred to as features, which
allows FB$^+$-tree to benefit from prefix skewness. FB$^+$-tree blurs the lines
between B$^+$-trees and tries, while still retaining balance. In the best case,
FB$^+$-tree almost becomes a trie, whereas in the worst case, it continues to
function as a B$^+$-tree. Meanwhile, a crafted synchronization protocol that
combines the link technique and optimistic lock is designed to support
efficient concurrent index access. Distinctively, FB$^+$-tree leverages subtle
atomic operations seamlessly coordinated with optimistic lock to facilitate
latch-free updates, which can be easily extended to other structures. Intensive
experiments on multiple workload-dataset combinations demonstrate that
FB$^+$-tree shows comparable lookup performance to state-of-the-art trie-based
indexes and outperforms popular B$^+$-trees by 2.3x$\ \sim\ $3.7x under 96
threads. FB$^+$-tree also exhibits significant potential on other workloads,
especially update workloads under contention and scan workloads.
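
The optimistic-lock ingredient mentioned above follows a standard pattern: readers proceed without blocking and validate a version counter afterwards. Below is a minimal Python sketch of that generic pattern, illustrative only; the paper's full protocol additionally combines the link technique with atomic operations, and Python's GIL hides races a C++ implementation must handle.

```python
import threading

class OptimisticNode:
    """Version-validated node (generic optimistic locking, not FB+-tree's
    exact protocol). Even version = unlocked, odd = write-locked."""

    def __init__(self):
        self.version = 0
        self._mutex = threading.Lock()
        self.keys = []

    def write_lock(self):
        self._mutex.acquire()
        self.version += 1            # odd: in-flight readers will retry

    def write_unlock(self):
        self.version += 1            # even again, but changed: stale reads fail
        self._mutex.release()

    def optimistic_read(self, fn):
        while True:
            v = self.version
            if v % 2 == 1:           # a writer holds the node, spin
                continue
            result = fn(self.keys)   # read without taking any lock
            if self.version == v:    # validate: no writer interfered
                return result

node = OptimisticNode()
node.keys = [1, 5, 9]
print(node.optimistic_read(lambda ks: 5 in ks))
```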
|
2503.24108 | Anwesa Choudhuri | Anwesa Choudhuri, Zhongpai Gao, Meng Zheng, Benjamin Planche, Terrence
Chen and Ziyan Wu | PolypSegTrack: Unified Foundation Model for Colonoscopy Video Analysis | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | Early detection, accurate segmentation, classification and tracking of polyps
during colonoscopy are critical for preventing colorectal cancer. Many existing
deep-learning-based methods for analyzing colonoscopic videos either require
task-specific fine-tuning, lack tracking capabilities, or rely on
domain-specific pre-training. In this paper, we introduce PolypSegTrack, a
novel foundation model that jointly addresses polyp detection, segmentation,
classification and unsupervised tracking in colonoscopic videos. Our approach
leverages a novel conditional mask loss, enabling flexible training across
datasets with either pixel-level segmentation masks or bounding box
annotations, allowing us to bypass task-specific fine-tuning. Our unsupervised
tracking module reliably associates polyp instances across frames using object
queries, without relying on any heuristics. We leverage a robust vision
foundation model backbone that is pre-trained unsupervisedly on natural images,
thereby removing the need for domain-specific pre-training. Extensive
experiments on multiple polyp benchmarks demonstrate that our method
significantly outperforms existing state-of-the-art approaches in detection,
segmentation, classification, and tracking.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 14:00:21 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 19:58:56 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Choudhuri",
"Anwesa",
""
],
[
"Gao",
"Zhongpai",
""
],
[
"Zheng",
"Meng",
""
],
[
"Planche",
"Benjamin",
""
],
[
"Chen",
"Terrence",
""
],
[
"Wu",
"Ziyan",
""
]
] | TITLE: PolypSegTrack: Unified Foundation Model for Colonoscopy Video Analysis
ABSTRACT: Early detection, accurate segmentation, classification and tracking of polyps
during colonoscopy are critical for preventing colorectal cancer. Many existing
deep-learning-based methods for analyzing colonoscopic videos either require
task-specific fine-tuning, lack tracking capabilities, or rely on
domain-specific pre-training. In this paper, we introduce PolypSegTrack, a
novel foundation model that jointly addresses polyp detection, segmentation,
classification and unsupervised tracking in colonoscopic videos. Our approach
leverages a novel conditional mask loss, enabling flexible training across
datasets with either pixel-level segmentation masks or bounding box
annotations, allowing us to bypass task-specific fine-tuning. Our unsupervised
tracking module reliably associates polyp instances across frames using object
queries, without relying on any heuristics. We leverage a robust vision
foundation model backbone that is pre-trained in an unsupervised manner on natural images,
thereby removing the need for domain-specific pre-training. Extensive
experiments on multiple polyp benchmarks demonstrate that our method
significantly outperforms existing state-of-the-art approaches in detection,
segmentation, classification, and tracking.
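
A conditional mask loss of the kind described, pixel supervision when a mask is annotated and box supervision otherwise, might look roughly like the following PyTorch sketch. This is our reading of the idea with hypothetical tensor shapes, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def conditional_mask_loss(pred_mask_logits, pred_boxes, gt_masks, gt_boxes):
    """gt_masks[i] is an HxW tensor, or None when only gt_boxes[i] is available."""
    losses = []
    for logits, box_pred, mask, box in zip(pred_mask_logits, pred_boxes,
                                           gt_masks, gt_boxes):
        if mask is not None:
            # pixel-level supervision when a segmentation mask exists
            losses.append(F.binary_cross_entropy_with_logits(logits, mask.float()))
        else:
            # fall back to box-only supervision
            losses.append(F.l1_loss(box_pred, box))
    return torch.stack(losses).mean()

# usage: one mask-annotated sample and one box-only sample
logits = [torch.randn(32, 32), torch.randn(32, 32)]
boxes_p = [torch.rand(4), torch.rand(4)]
gt_m = [torch.randint(0, 2, (32, 32)), None]
gt_b = [torch.rand(4), torch.rand(4)]
print(conditional_mask_loss(logits, boxes_p, gt_m, gt_b))
```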
|
2503.24121 | Valentin Boussot Mr | Valentin Boussot, C\'edric H\'emon, Jean-Claude Nunes, Jason Downling,
Simon Rouz\'e, Caroline Lafond, Ana\"is Barateau, Jean-Louis Dillenseger | IMPACT: A Generic Semantic Loss for Multimodal Medical Image
Registration | Submitted to IEEE Transactions on Pattern Analysis and Machine
Intelligence (TPAMI). This is a preprint version and has not been
peer-reviewed | null | null | null | cs.CV cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Image registration is fundamental in medical imaging, enabling precise
alignment of anatomical structures for diagnosis, treatment planning,
image-guided interventions, and longitudinal monitoring. This work introduces
IMPACT (Image Metric with Pretrained model-Agnostic Comparison for
Transmodality registration), a novel similarity metric designed for robust
multimodal image registration. Rather than relying on raw intensities,
handcrafted descriptors, or task-specific training, IMPACT defines a semantic
similarity measure based on the comparison of deep features extracted from
large-scale pretrained segmentation models. By leveraging representations from
models such as TotalSegmentator, Segment Anything (SAM), and other foundation
networks, IMPACT provides a task-agnostic, training-free solution that
generalizes across imaging modalities. These features, originally trained for
segmentation, offer strong spatial correspondence and semantic alignment
capabilities, making them naturally suited for registration. The method
integrates seamlessly into both algorithmic (Elastix) and learning-based
(VoxelMorph) frameworks, leveraging the strengths of each. IMPACT was evaluated
on five challenging 3D registration tasks involving thoracic CT/CBCT and pelvic
MR/CT datasets. Quantitative metrics, including Target Registration Error and
Dice Similarity Coefficient, demonstrated consistent improvements in anatomical
alignment over baseline methods. Qualitative analyses further highlighted the
robustness of the proposed metric in the presence of noise, artifacts, and
modality variations. With its versatility, efficiency, and strong performance
across diverse tasks, IMPACT offers a powerful solution for advancing
multimodal image registration in both clinical and research settings.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 14:08:21 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 16:03:23 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Boussot",
"Valentin",
""
],
[
"Hémon",
"Cédric",
""
],
[
"Nunes",
"Jean-Claude",
""
],
[
"Downling",
"Jason",
""
],
[
"Rouzé",
"Simon",
""
],
[
"Lafond",
"Caroline",
""
],
[
"Barateau",
"Anaïs",
""
],
[
"Dillenseger",
"Jean-Louis",
""
]
] | TITLE: IMPACT: A Generic Semantic Loss for Multimodal Medical Image
Registration
ABSTRACT: Image registration is fundamental in medical imaging, enabling precise
alignment of anatomical structures for diagnosis, treatment planning,
image-guided interventions, and longitudinal monitoring. This work introduces
IMPACT (Image Metric with Pretrained model-Agnostic Comparison for
Transmodality registration), a novel similarity metric designed for robust
multimodal image registration. Rather than relying on raw intensities,
handcrafted descriptors, or task-specific training, IMPACT defines a semantic
similarity measure based on the comparison of deep features extracted from
large-scale pretrained segmentation models. By leveraging representations from
models such as TotalSegmentator, Segment Anything (SAM), and other foundation
networks, IMPACT provides a task-agnostic, training-free solution that
generalizes across imaging modalities. These features, originally trained for
segmentation, offer strong spatial correspondence and semantic alignment
capabilities, making them naturally suited for registration. The method
integrates seamlessly into both algorithmic (Elastix) and learning-based
(VoxelMorph) frameworks, leveraging the strengths of each. IMPACT was evaluated
on five challenging 3D registration tasks involving thoracic CT/CBCT and pelvic
MR/CT datasets. Quantitative metrics, including Target Registration Error and
Dice Similarity Coefficient, demonstrated consistent improvements in anatomical
alignment over baseline methods. Qualitative analyses further highlighted the
robustness of the proposed metric in the presence of noise, artifacts, and
modality variations. With its versatility, efficiency, and strong performance
across diverse tasks, IMPACT offers a powerful solution for advancing
multimodal image registration in both clinical and research settings.
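
The central mechanism, scoring similarity in the feature space of a frozen pretrained segmentation network rather than in intensity space, can be sketched as below. The extractor is an untrained stand-in so the example runs; the paper uses backbones such as TotalSegmentator or SAM encoders.

```python
import torch
import torch.nn.functional as F

def feature_similarity_loss(extractor, fixed, warped_moving):
    """1 - mean cosine similarity between deep features of the fixed image
    and the warped moving image."""
    with torch.no_grad():
        f_fix = extractor(fixed)                   # target features, no gradient
    f_mov = extractor(warped_moving)               # gradients flow to the warp
    f_fix = F.normalize(f_fix.flatten(2), dim=1)   # unit vectors per location
    f_mov = F.normalize(f_mov.flatten(2), dim=1)
    return 1 - (f_fix * f_mov).sum(dim=1).mean()

# Untrained stand-in backbone, frozen as the method assumes.
extractor = torch.nn.Conv2d(1, 16, 3, padding=1)
for p in extractor.parameters():
    p.requires_grad_(False)

fixed = torch.randn(1, 1, 64, 64)
warped = torch.randn(1, 1, 64, 64, requires_grad=True)
print(feature_similarity_loss(extractor, fixed, warped))
```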
|
2504.00034 | Chi-Sheng Chen | Chi-Sheng Chen and Wei An Hou and Hsiang-Wei Hu and Zhen-Sheng Cai | Quantum Generative Models for Image Generation: Insights from MNIST and
MedMNIST | null | null | null | null | quant-ph cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Quantum generative models offer a promising new direction in machine learning
by leveraging quantum circuits to enhance data generation capabilities. In this
study, we propose a hybrid quantum-classical image generation framework that
integrates variational quantum circuits into a diffusion-based model. To
improve training dynamics and generation quality, we introduce two novel noise
strategies: intrinsic quantum-generated noise and a tailored noise scheduling
mechanism. Our method is built upon a lightweight U-Net architecture, with the
quantum layer embedded in the bottleneck module to isolate its effect. We
evaluate our model on MNIST and MedMNIST datasets to examine its feasibility
and performance. Notably, our results reveal that under limited data conditions
(fewer than 100 training images), the quantum-enhanced model generates images
with higher perceptual quality and distributional similarity than its classical
counterpart using the same architecture. While the quantum model shows
advantages on grayscale data such as MNIST, its performance is more nuanced on
complex, color-rich datasets like PathMNIST. These findings highlight both the
potential and current limitations of quantum generative models and lay the
groundwork for future developments in low-resource and biomedical image
generation.
| [
{
"version": "v1",
"created": "Sun, 30 Mar 2025 06:36:22 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 17:40:26 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Chen",
"Chi-Sheng",
""
],
[
"Hou",
"Wei An",
""
],
[
"Hu",
"Hsiang-Wei",
""
],
[
"Cai",
"Zhen-Sheng",
""
]
] | TITLE: Quantum Generative Models for Image Generation: Insights from MNIST and
MedMNIST
ABSTRACT: Quantum generative models offer a promising new direction in machine learning
by leveraging quantum circuits to enhance data generation capabilities. In this
study, we propose a hybrid quantum-classical image generation framework that
integrates variational quantum circuits into a diffusion-based model. To
improve training dynamics and generation quality, we introduce two novel noise
strategies: intrinsic quantum-generated noise and a tailored noise scheduling
mechanism. Our method is built upon a lightweight U-Net architecture, with the
quantum layer embedded in the bottleneck module to isolate its effect. We
evaluate our model on MNIST and MedMNIST datasets to examine its feasibility
and performance. Notably, our results reveal that under limited data conditions
(fewer than 100 training images), the quantum-enhanced model generates images
with higher perceptual quality and distributional similarity than its classical
counterpart using the same architecture. While the quantum model shows
advantages on grayscale data such as MNIST, its performance is more nuanced on
complex, color-rich datasets like PathMNIST. These findings highlight both the
potential and current limitations of quantum generative models and lay the
groundwork for future developments in low-resource and biomedical image
generation.
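
One way to realize intrinsic quantum-generated noise is to sample expectation values from randomly parameterized variational circuits. The PennyLane sketch below is our illustration of that idea under stated assumptions, not the paper's implementation (requires pip install pennylane).

```python
import numpy as np
import pennylane as qml

n_qubits = 4
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def circuit(params):
    for i in range(n_qubits):
        qml.RY(params[i], wires=i)           # single-qubit rotations
    for i in range(n_qubits - 1):
        qml.CNOT(wires=[i, i + 1])           # entangling layer
    return [qml.expval(qml.PauliZ(i)) for i in range(n_qubits)]

def quantum_noise(shape, rng=np.random.default_rng(0)):
    """Fill an array with expectation values from randomly parameterized circuits."""
    flat = []
    while len(flat) < np.prod(shape):
        flat.extend(circuit(rng.uniform(0, 2 * np.pi, n_qubits)))
    return np.array(flat[: np.prod(shape)], dtype=float).reshape(shape)

print(quantum_noise((2, 4)))
```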
|
2504.00457 | Hao Qin | Hao Qin, Luyuan Chen, Ming Kong, Mengxu Lu, Qiang Zhu | Distilling Multi-view Diffusion Models into 3D Generators | null | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by/4.0/ | We introduce DD3G, a formulation that Distills a multi-view Diffusion model
(MV-DM) into a 3D Generator using Gaussian splatting. DD3G compresses and
integrates extensive visual and spatial geometric knowledge from the MV-DM by
simulating its ordinary differential equation (ODE) trajectory, ensuring the
distilled generator generalizes better than those trained solely on 3D data.
Unlike previous amortized optimization approaches, we align the MV-DM and 3D
generator representation spaces to transfer the teacher's probabilistic flow to
the student, thus avoiding inconsistencies in optimization objectives caused by
probabilistic sampling. The introduction of probabilistic flow and the coupling
of various attributes in 3D Gaussians introduce challenges in the generation
process. To tackle this, we propose PEPD, a generator consisting of Pattern
Extraction and Progressive Decoding phases, which enables efficient fusion of
probabilistic flow and converts a single image into 3D Gaussians within 0.06
seconds. Furthermore, to reduce knowledge loss and overcome sparse-view
supervision, we design a joint optimization objective that ensures the quality
of generated samples through explicit supervision and implicit verification.
Leveraging existing 2D generation models, we compile 120k high-quality RGBA
images for distillation. Experiments on synthetic and public datasets
demonstrate the effectiveness of our method. Our project is available at:
https://qinbaigao.github.io/DD3G_project/
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 06:32:48 GMT"
},
{
"version": "v2",
"created": "Wed, 2 Apr 2025 04:29:23 GMT"
},
{
"version": "v3",
"created": "Thu, 3 Apr 2025 01:44:53 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Qin",
"Hao",
""
],
[
"Chen",
"Luyuan",
""
],
[
"Kong",
"Ming",
""
],
[
"Lu",
"Mengxu",
""
],
[
"Zhu",
"Qiang",
""
]
] | TITLE: Distilling Multi-view Diffusion Models into 3D Generators
ABSTRACT: We introduce DD3G, a formulation that Distills a multi-view Diffusion model
(MV-DM) into a 3D Generator using Gaussian splatting. DD3G compresses and
integrates extensive visual and spatial geometric knowledge from the MV-DM by
simulating its ordinary differential equation (ODE) trajectory, ensuring the
distilled generator generalizes better than those trained solely on 3D data.
Unlike previous amortized optimization approaches, we align the MV-DM and 3D
generator representation spaces to transfer the teacher's probabilistic flow to
the student, thus avoiding inconsistencies in optimization objectives caused by
probabilistic sampling. The introduction of probabilistic flow and the coupling
of various attributes in 3D Gaussians introduce challenges in the generation
process. To tackle this, we propose PEPD, a generator consisting of Pattern
Extraction and Progressive Decoding phases, which enables efficient fusion of
probabilistic flow and converts a single image into 3D Gaussians within 0.06
seconds. Furthermore, to reduce knowledge loss and overcome sparse-view
supervision, we design a joint optimization objective that ensures the quality
of generated samples through explicit supervision and implicit verification.
Leveraging existing 2D generation models, we compile 120k high-quality RGBA
images for distillation. Experiments on synthetic and public datasets
demonstrate the effectiveness of our method. Our project is available at:
https://qinbaigao.github.io/DD3G_project/
|
2504.00564 | Anish Acharya | Anish Acharya, Sujay Sanghavi, Alexandros G. Dimakis, Inderjit S
Dhillon | Geometric Median Matching for Robust k-Subset Selection from Noisy Data | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Data pruning -- the combinatorial task of selecting a small and
representative subset from a large dataset -- is crucial for mitigating the
enormous computational costs associated with training data-hungry modern deep
learning models at scale. Since large scale data collections are invariably
noisy, developing data pruning strategies that remain robust even in the
presence of corruption is critical in practice. However, existing data pruning
methods often fail under high corruption rates due to their reliance on
empirical mean estimation, which is highly sensitive to outliers.
In response, we propose Geometric Median (GM) Matching, a novel k-subset
selection strategy that leverages Geometric Median -- a robust estimator with
an optimal breakdown point of 1/2 -- to enhance resilience against noisy data.
Our method iteratively selects a k-subset such that the mean of the subset
approximates the GM of the (potentially) noisy dataset, ensuring robustness
even under arbitrary corruption. We provide theoretical guarantees, showing
that GM Matching enjoys an improved O(1/k) convergence rate -- a quadratic
improvement over random sampling, even under arbitrary corruption. Extensive
experiments across image classification and image generation tasks demonstrate
that GM Matching consistently outperforms existing pruning approaches,
particularly in high-corruption settings and at high pruning rates, making it a
strong baseline for robust data pruning.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 09:22:05 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 11:12:07 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Acharya",
"Anish",
""
],
[
"Sanghavi",
"Sujay",
""
],
[
"Dimakis",
"Alexandros G.",
""
],
[
"Dhillon",
"Inderjit S",
""
]
] | TITLE: Geometric Median Matching for Robust k-Subset Selection from Noisy Data
ABSTRACT: Data pruning -- the combinatorial task of selecting a small and
representative subset from a large dataset -- is crucial for mitigating the
enormous computational costs associated with training data-hungry modern deep
learning models at scale. Since large scale data collections are invariably
noisy, developing data pruning strategies that remain robust even in the
presence of corruption is critical in practice. However, existing data pruning
methods often fail under high corruption rates due to their reliance on
empirical mean estimation, which is highly sensitive to outliers.
In response, we propose Geometric Median (GM) Matching, a novel k-subset
selection strategy that leverages Geometric Median -- a robust estimator with
an optimal breakdown point of 1/2 -- to enhance resilience against noisy data.
Our method iteratively selects a k-subset such that the mean of the subset
approximates the GM of the (potentially) noisy dataset, ensuring robustness
even under arbitrary corruption. We provide theoretical guarantees, showing
that GM Matching enjoys an improved O(1/k) convergence rate -- a quadratic
improvement over random sampling, even under arbitrary corruption. Extensive
experiments across image classification and image generation tasks demonstrate
that GM Matching consistently outperforms existing pruning approaches,
particularly in high-corruption settings and at high pruning rates, making it a
strong baseline for robust data pruning.
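
Both ingredients are easy to sketch: the geometric median via Weiszfeld's algorithm, and a greedy (herding-style) selection that keeps the subset mean close to the GM. The selection rule below is our reading of the abstract; the paper's exact procedure may differ in details.

```python
import numpy as np

def geometric_median(X, iters=100, eps=1e-8):
    """Weiszfeld's algorithm: iteratively reweighted mean."""
    y = X.mean(axis=0)
    for _ in range(iters):
        d = np.linalg.norm(X - y, axis=1)
        w = 1.0 / np.maximum(d, eps)
        y_new = (w[:, None] * X).sum(axis=0) / w.sum()
        if np.linalg.norm(y_new - y) < eps:
            break
        y = y_new
    return y

def gm_matching(X, k):
    gm = geometric_median(X)
    chosen, total = [], np.zeros(X.shape[1])
    for t in range(1, k + 1):
        # pick the point whose inclusion moves the subset mean closest to the GM
        cand = np.linalg.norm((total + X) / t - gm, axis=1)
        cand[chosen] = np.inf                  # select without replacement
        i = int(np.argmin(cand))
        chosen.append(i)
        total += X[i]
    return np.array(chosen)

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (95, 2)), rng.normal(8, 1, (5, 2))])  # 5% outliers
print(gm_matching(X, k=10))  # selected indices avoid the outlier cluster
```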
|
2504.00824 | Yubo Wang | Yubo Wang, Xueguang Ma, Ping Nie, Huaye Zeng, Zhiheng Lyu, Yuxuan
Zhang, Benjamin Schneider, Yi Lu, Xiang Yue, Wenhu Chen | ScholarCopilot: Training Large Language Models for Academic Writing with
Accurate Citations | null | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Academic writing requires both coherent text generation and precise citation
of relevant literature. Although recent Retrieval-Augmented Generation (RAG)
systems have significantly improved factual accuracy in general-purpose text
generation, their ability to support professional academic writing remains
limited. In this work, we introduce ScholarCopilot, a unified framework
designed to enhance existing large language models for generating professional
academic articles with accurate and contextually relevant citations.
ScholarCopilot dynamically determines when to retrieve scholarly references by
generating a retrieval token [RET], which is then used to query a citation
database. The retrieved references are fed into the model to augment the
generation process. We jointly optimize both the generation and citation tasks
within a single framework to improve efficiency. Our model is built upon
Qwen-2.5-7B and trained on 500K papers from arXiv. It achieves a top-1
retrieval accuracy of 40.1% on our evaluation dataset, outperforming baselines
such as E5-Mistral-7B-Instruct (15.0%) and BM25 (9.8%). On a dataset of 1,000
academic writing samples, ScholarCopilot scores 16.2/25 in generation quality
-- measured across relevance, coherence, academic rigor, completeness, and
innovation -- significantly surpassing all existing models, including much
larger ones like the Retrieval-Augmented Qwen2.5-72B-Instruct. Human studies
further demonstrate that ScholarCopilot, despite being a 7B model,
significantly outperforms ChatGPT, achieving 100% preference in citation
quality and over 70% in overall usefulness.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 14:12:14 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 15:07:29 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Wang",
"Yubo",
""
],
[
"Ma",
"Xueguang",
""
],
[
"Nie",
"Ping",
""
],
[
"Zeng",
"Huaye",
""
],
[
"Lyu",
"Zhiheng",
""
],
[
"Zhang",
"Yuxuan",
""
],
[
"Schneider",
"Benjamin",
""
],
[
"Lu",
"Yi",
""
],
[
"Yue",
"Xiang",
""
],
[
"Chen",
"Wenhu",
""
]
] | TITLE: ScholarCopilot: Training Large Language Models for Academic Writing with
Accurate Citations
ABSTRACT: Academic writing requires both coherent text generation and precise citation
of relevant literature. Although recent Retrieval-Augmented Generation (RAG)
systems have significantly improved factual accuracy in general-purpose text
generation, their ability to support professional academic writing remains
limited. In this work, we introduce ScholarCopilot, a unified framework
designed to enhance existing large language models for generating professional
academic articles with accurate and contextually relevant citations.
ScholarCopilot dynamically determines when to retrieve scholarly references by
generating a retrieval token [RET], which is then used to query a citation
database. The retrieved references are fed into the model to augment the
generation process. We jointly optimize both the generation and citation tasks
within a single framework to improve efficiency. Our model is built upon
Qwen-2.5-7B and trained on 500K papers from arXiv. It achieves a top-1
retrieval accuracy of 40.1% on our evaluation dataset, outperforming baselines
such as E5-Mistral-7B-Instruct (15.0%) and BM25 (9.8%). On a dataset of 1,000
academic writing samples, ScholarCopilot scores 16.2/25 in generation quality
-- measured across relevance, coherence, academic rigor, completeness, and
innovation -- significantly surpassing all existing models, including much
larger ones like the Retrieval-Augmented Qwen2.5-72B-Instruct. Human studies
further demonstrate that ScholarCopilot, despite being a 7B model,
significantly outperforms ChatGPT, achieving 100% preference in citation
quality and over 70% in overall usefulness.
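
The [RET]-driven control flow can be sketched as a generate-retrieve-continue loop. Everything below (fake_generate, fake_retrieve) is a trivial hypothetical stand-in for the trained model and citation database, not ScholarCopilot's components.

```python
RET = "[RET]"

def fake_generate(context):                  # hypothetical stand-in for the LM
    if "Prior work" not in context:
        return "Prior work has studied this " + RET
    return "extensively [1]."

def fake_retrieve(query):                    # hypothetical citation-DB lookup
    return "[1] Some relevant paper, 2024."

def write_with_citations(prompt, max_steps=4):
    text, refs = prompt, []
    for _ in range(max_steps):
        chunk = fake_generate(text)
        if RET in chunk:
            head, _ = chunk.split(RET, 1)
            refs.append(fake_retrieve(head))  # query the DB with local context
            text += head                      # retrieved refs condition the next step
        else:
            text += chunk
            break
    return text + "\nReferences:\n" + "\n".join(refs)

print(write_with_citations("Introduction: "))
```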
|
2504.01128 | Andrei Dumitriu | Andrei Dumitriu, Florin Tatui, Florin Miron, Aakash Ralhan, Radu Tudor
Ionescu, Radu Timofte | RipVIS: Rip Currents Video Instance Segmentation Benchmark for Beach
Monitoring and Safety | Accepted at CVPR 2025 | null | null | null | cs.CV cs.AI | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Rip currents are strong, localized and narrow currents of water that flow
outwards into the sea, causing numerous beach-related injuries and fatalities
worldwide. Accurate identification of rip currents remains challenging due to
their amorphous nature and the lack of annotated data, which often requires
expert knowledge. To address these issues, we present RipVIS, a large-scale
video instance segmentation benchmark explicitly designed for rip current
segmentation. RipVIS is an order of magnitude larger than previous datasets,
featuring $184$ videos ($212,328$ frames), of which $150$ videos ($163,528$
frames) are with rip currents, collected from various sources, including
drones, mobile phones, and fixed beach cameras. Our dataset encompasses diverse
visual contexts, such as wave-breaking patterns, sediment flows, and water
color variations, across multiple global locations, including USA, Mexico,
Costa Rica, Portugal, Italy, Greece, Romania, Sri Lanka, Australia and New
Zealand. Most videos are annotated at $5$ FPS to ensure accuracy in dynamic
scenarios, supplemented by an additional $34$ videos ($48,800$ frames) without
rip currents. We conduct comprehensive experiments with Mask R-CNN, Cascade
Mask R-CNN, SparseInst and YOLO11, fine-tuning these models for the task of rip
current segmentation. Results are reported in terms of multiple metrics, with a
particular focus on the $F_2$ score to prioritize recall and reduce false
negatives. To enhance segmentation performance, we introduce a novel
post-processing step based on Temporal Confidence Aggregation (TCA). RipVIS
aims to set a new standard for rip current segmentation, contributing towards
safer beach environments. We offer a benchmark website to share data, models,
and results with the research community, encouraging ongoing collaboration and
future contributions, at https://ripvis.ai.
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 18:57:15 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 09:29:08 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Dumitriu",
"Andrei",
""
],
[
"Tatui",
"Florin",
""
],
[
"Miron",
"Florin",
""
],
[
"Ralhan",
"Aakash",
""
],
[
"Ionescu",
"Radu Tudor",
""
],
[
"Timofte",
"Radu",
""
]
] | TITLE: RipVIS: Rip Currents Video Instance Segmentation Benchmark for Beach
Monitoring and Safety
ABSTRACT: Rip currents are strong, localized and narrow currents of water that flow
outwards into the sea, causing numerous beach-related injuries and fatalities
worldwide. Accurate identification of rip currents remains challenging due to
their amorphous nature and the lack of annotated data, which often requires
expert knowledge. To address these issues, we present RipVIS, a large-scale
video instance segmentation benchmark explicitly designed for rip current
segmentation. RipVIS is an order of magnitude larger than previous datasets,
featuring $184$ videos ($212,328$ frames), of which $150$ videos ($163,528$
frames) are with rip currents, collected from various sources, including
drones, mobile phones, and fixed beach cameras. Our dataset encompasses diverse
visual contexts, such as wave-breaking patterns, sediment flows, and water
color variations, across multiple global locations, including the USA, Mexico,
Costa Rica, Portugal, Italy, Greece, Romania, Sri Lanka, Australia and New
Zealand. Most videos are annotated at $5$ FPS to ensure accuracy in dynamic
scenarios, supplemented by an additional $34$ videos ($48,800$ frames) without
rip currents. We conduct comprehensive experiments with Mask R-CNN, Cascade
Mask R-CNN, SparseInst and YOLO11, fine-tuning these models for the task of rip
current segmentation. Results are reported in terms of multiple metrics, with a
particular focus on the $F_2$ score to prioritize recall and reduce false
negatives. To enhance segmentation performance, we introduce a novel
post-processing step based on Temporal Confidence Aggregation (TCA). RipVIS
aims to set a new standard for rip current segmentation, contributing towards
safer beach environments. We offer a benchmark website to share data, models,
and results with the research community, encouraging ongoing collaboration and
future contributions, at https://ripvis.ai.
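
The abstract does not spell out Temporal Confidence Aggregation, so the sketch below shows one plausible reading (an assumption on our part, not the paper's formulation): smooth a tracked instance's per-frame confidences over a sliding window so that isolated spikes are suppressed and temporally consistent detections are kept.

```python
import numpy as np

def temporal_confidence_aggregation(scores, window=5, keep_thresh=0.5):
    """Moving-average smoothing of per-frame detection confidences."""
    scores = np.asarray(scores, float)
    pad = window // 2
    padded = np.pad(scores, pad, mode="edge")          # replicate boundary frames
    smoothed = np.convolve(padded, np.ones(window) / window, mode="valid")
    return smoothed, smoothed >= keep_thresh

scores = [0.9, 0.1, 0.8, 0.85, 0.2, 0.9, 0.05, 0.0, 0.0, 0.7]
smoothed, keep = temporal_confidence_aggregation(scores)
print(smoothed.round(2), keep)  # the isolated late spike is suppressed
```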
|
2504.01281 | Sagar Srinivas Sakhinana | Sakhinana Sagar Srinivas, Venkataramana Runkana | Scaling Test-Time Inference with Policy-Optimized, Dynamic
Retrieval-Augmented Generation via KV Caching and Decoding | null | null | null | null | cs.LG cs.AI cs.CL cs.IR | http://creativecommons.org/licenses/by-nc-nd/4.0/ | We present a comprehensive framework for enhancing Retrieval-Augmented
Generation (RAG) systems through dynamic retrieval strategies and reinforcement
fine-tuning. This approach significantly improves large language models on
knowledge-intensive tasks, including open-domain question answering and complex
reasoning. Our framework integrates two complementary techniques:
Policy-Optimized Retrieval-Augmented Generation (PORAG), which optimizes the use
of retrieved information, and Adaptive Token-Layer Attention Scoring (ATLAS),
which dynamically determines retrieval timing and content based on contextual
needs. Together, these techniques enhance both the utilization and relevance of
retrieved content, improving factual accuracy and response quality. Designed as
a lightweight solution compatible with any Transformer-based LLM without
requiring additional training, our framework excels in knowledge-intensive
tasks, boosting output accuracy in RAG settings. We further propose CRITIC, a
novel method to selectively compress key-value caches by token importance,
mitigating memory bottlenecks in long-context applications. The framework also
incorporates test-time scaling techniques to dynamically balance reasoning
depth and computational resources, alongside optimized decoding strategies for
faster inference. Experiments on benchmark datasets show that our framework
reduces hallucinations, strengthens domain-specific reasoning, and achieves
significant efficiency and scalability gains over traditional RAG systems. This
integrated approach advances the development of robust, efficient, and scalable
RAG systems across diverse applications.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 01:16:10 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 01:23:22 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Srinivas",
"Sakhinana Sagar",
""
],
[
"Runkana",
"Venkataramana",
""
]
] | TITLE: Scaling Test-Time Inference with Policy-Optimized, Dynamic
Retrieval-Augmented Generation via KV Caching and Decoding
ABSTRACT: We present a comprehensive framework for enhancing Retrieval-Augmented
Generation (RAG) systems through dynamic retrieval strategies and reinforcement
fine-tuning. This approach significantly improves large language models on
knowledge-intensive tasks, including open-domain question answering and complex
reasoning. Our framework integrates two complementary techniques:
Policy-Optimized Retrieval-Augmented Generation (PORAG), which optimizes the use
of retrieved information, and Adaptive Token-Layer Attention Scoring (ATLAS),
which dynamically determines retrieval timing and content based on contextual
needs. Together, these techniques enhance both the utilization and relevance of
retrieved content, improving factual accuracy and response quality. Designed as
a lightweight solution compatible with any Transformer-based LLM without
requiring additional training, our framework excels in knowledge-intensive
tasks, boosting output accuracy in RAG settings. We further propose CRITIC, a
novel method to selectively compress key-value caches by token importance,
mitigating memory bottlenecks in long-context applications. The framework also
incorporates test-time scaling techniques to dynamically balance reasoning
depth and computational resources, alongside optimized decoding strategies for
faster inference. Experiments on benchmark datasets show that our framework
reduces hallucinations, strengthens domain-specific reasoning, and achieves
significant efficiency and scalability gains over traditional RAG systems. This
integrated approach advances the development of robust, efficient, and scalable
RAG systems across diverse applications.
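
Key-value cache compression by token importance can be sketched as keeping the tokens that received the most attention. The scoring rule here (cumulative attention mass) is a common heuristic and an assumption on our part, not necessarily CRITIC's exact criterion.

```python
import torch

def compress_kv_cache(keys, values, attn_weights, keep_ratio=0.5):
    """keys/values: (seq, d); attn_weights: (queries, seq) recent attention rows.
    Keep the cache entries that received the most attention."""
    seq = keys.shape[0]
    importance = attn_weights.sum(dim=0)                   # attention mass per token
    k = max(1, int(seq * keep_ratio))
    idx = torch.topk(importance, k).indices.sort().values  # preserve token order
    return keys[idx], values[idx], idx

keys, values = torch.randn(8, 16), torch.randn(8, 16)
attn = torch.softmax(torch.randn(4, 8), dim=-1)
k2, v2, kept = compress_kv_cache(keys, values, attn)
print(kept)  # indices of the retained cache entries
```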
|
2504.01298 | Shiyong Liu | Shiyong Liu, Zhihao Li, Xiao Tang, Jianzhuang Liu | Direction-Aware Hybrid Representation Learning for 3D Hand Pose and
Shape Estimation | Accepted to CVPR 2025 workshop | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Most model-based 3D hand pose and shape estimation methods directly regress
the parametric model parameters from an image to obtain 3D joints under weak
supervision. However, these methods involve solving a complex optimization
problem with many local minima, making training difficult. To address this
challenge, we propose learning direction-aware hybrid features (DaHyF) that
fuse implicit image features and explicit 2D joint coordinate features. This
fusion is enhanced by the pixel direction information in the camera coordinate
system to estimate pose, shape, and camera viewpoint. Our method directly
predicts 3D hand poses with DaHyF representation and reduces jittering during
motion capture using prediction confidence based on contrastive learning. We
evaluate our method on the FreiHAND dataset and show that it outperforms
existing state-of-the-art methods by more than 33% in accuracy. DaHyF also
achieves the top ranking on both the HO3Dv2 and HO3Dv3 leaderboards for the
metric of Mean Joint Error (after scale and translation alignment). Compared to
the second-best results, the largest improvement observed is 10%. We also
demonstrate its effectiveness in real-time motion capture scenarios with hand
position variability, occlusion, and motion blur.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 02:06:23 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 07:52:59 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Liu",
"Shiyong",
""
],
[
"Li",
"Zhihao",
""
],
[
"Tang",
"Xiao",
""
],
[
"Liu",
"Jianzhuang",
""
]
] | TITLE: Direction-Aware Hybrid Representation Learning for 3D Hand Pose and
Shape Estimation
ABSTRACT: Most model-based 3D hand pose and shape estimation methods directly regress
the parametric model parameters from an image to obtain 3D joints under weak
supervision. However, these methods involve solving a complex optimization
problem with many local minima, making training difficult. To address this
challenge, we propose learning direction-aware hybrid features (DaHyF) that
fuse implicit image features and explicit 2D joint coordinate features. This
fusion is enhanced by the pixel direction information in the camera coordinate
system to estimate pose, shape, and camera viewpoint. Our method directly
predicts 3D hand poses with DaHyF representation and reduces jittering during
motion capture using prediction confidence based on contrastive learning. We
evaluate our method on the FreiHAND dataset and show that it outperforms
existing state-of-the-art methods by more than 33% in accuracy. DaHyF also
achieves the top ranking on both the HO3Dv2 and HO3Dv3 leaderboards for the
metric of Mean Joint Error (after scale and translation alignment). Compared to
the second-best results, the largest improvement observed is 10%. We also
demonstrate its effectiveness in real-time motion capture scenarios with hand
position variability, occlusion, and motion blur.
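
The "pixel direction information in the camera coordinate system" is a well-defined quantity: each pixel's unit ray direction obtained from the camera intrinsics. Below is a NumPy sketch of just that ingredient (our illustration, not the full DaHyF fusion).

```python
import numpy as np

def pixel_directions(height, width, K):
    """Unit ray direction per pixel in camera coordinates: normalize(K^-1 [u, v, 1])."""
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).astype(float)  # (H, W, 3)
    rays = pix @ np.linalg.inv(K).T                  # back-project to camera frame
    return rays / np.linalg.norm(rays, axis=-1, keepdims=True)

K = np.array([[500.0, 0, 320], [0, 500.0, 240], [0, 0, 1]])
dirs = pixel_directions(480, 640, K)
print(dirs.shape, dirs[240, 320])  # the principal point looks straight along +z
```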
|
2504.01591 | Adriano Fragomeni | Adriano Fragomeni, Dima Damen and Michael Wray | Leveraging Modality Tags for Enhanced Cross-Modal Video Retrieval | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Video retrieval requires aligning visual content with corresponding natural
language descriptions. In this paper, we introduce Modality Auxiliary Concepts
for Video Retrieval (MAC-VR), a novel approach that leverages modality-specific
tags -- automatically extracted from foundation models -- to enhance video
retrieval. We propose to align modalities in a latent space, along with
learning and aligning auxiliary latent concepts, derived from the features of a
video and its corresponding caption. We introduce these auxiliary concepts to
improve the alignment of visual and textual latent concepts, and so are able to
distinguish concepts from one another. We conduct extensive experiments on five
diverse datasets: MSR-VTT, DiDeMo, TGIF, Charades and YouCook2. The
experimental results consistently demonstrate that modality-specific tags
improve cross-modal alignment, outperforming current state-of-the-art methods
across three datasets and performing comparably or better across the other two.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 10:56:01 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 10:30:52 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Fragomeni",
"Adriano",
""
],
[
"Damen",
"Dima",
""
],
[
"Wray",
"Michael",
""
]
] | TITLE: Leveraging Modality Tags for Enhanced Cross-Modal Video Retrieval
ABSTRACT: Video retrieval requires aligning visual content with corresponding natural
language descriptions. In this paper, we introduce Modality Auxiliary Concepts
for Video Retrieval (MAC-VR), a novel approach that leverages modality-specific
tags -- automatically extracted from foundation models -- to enhance video
retrieval. We propose to align modalities in a latent space, along with
learning and aligning auxiliary latent concepts, derived from the features of a
video and its corresponding caption. We introduce these auxiliary concepts to
improve the alignment of visual and textual latent concepts, and so are able to
distinguish concepts from one another. We conduct extensive experiments on five
diverse datasets: MSR-VTT, DiDeMo, TGIF, Charades and YouCook2. The
experimental results consistently demonstrate that modality-specific tags
improve cross-modal alignment, outperforming current state-of-the-art methods
across three datasets and performing comparably or better across the other two.
|
2504.01659 | Haosheng Li | Haosheng Li, Junjie Chen, Yuecong Xu, Kemi Ding | Robust Unsupervised Domain Adaptation for 3D Point Cloud Segmentation
Under Source Adversarial Attacks | null | null | null | null | cs.CV | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Unsupervised domain adaptation (UDA) frameworks have shown good
generalization capabilities for 3D point cloud semantic segmentation models on
clean data. However, existing works overlook adversarial robustness when the
source domain itself is compromised. To comprehensively explore the robustness
of the UDA frameworks, we first design a stealthy adversarial point cloud
generation attack that can significantly contaminate datasets with only minor
perturbations to the point cloud surface. Based on that, we propose a novel
dataset, AdvSynLiDAR, comprising synthesized contaminated LiDAR point clouds.
With the generated corrupted data, we further develop the Adversarial
Adaptation Framework (AAF) as the countermeasure. Specifically, by extending
the key point sensitive (KPS) loss towards the Robust Long-Tail loss (RLT loss)
and utilizing a decoder branch, our approach enables the model to focus on
long-tail classes during the pre-training phase and leverages high-confidence
decoded point cloud information to restore point cloud structures during the
adaptation phase. We evaluated our AAF method on the AdvSynLiDAR dataset, where
the results demonstrate that it can mitigate performance
degradation under source adversarial perturbations for UDA in the 3D point
cloud segmentation application.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 12:11:34 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 02:58:42 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Li",
"Haosheng",
""
],
[
"Chen",
"Junjie",
""
],
[
"Xu",
"Yuecong",
""
],
[
"Ding",
"Kemi",
""
]
] | TITLE: Robust Unsupervised Domain Adaptation for 3D Point Cloud Segmentation
Under Source Adversarial Attacks
ABSTRACT: Unsupervised domain adaptation (UDA) frameworks have shown good
generalization capabilities for 3D point cloud semantic segmentation models on
clean data. However, existing works overlook adversarial robustness when the
source domain itself is compromised. To comprehensively explore the robustness
of the UDA frameworks, we first design a stealthy adversarial point cloud
generation attack that can significantly contaminate datasets with only minor
perturbations to the point cloud surface. Based on that, we propose a novel
dataset, AdvSynLiDAR, comprising synthesized contaminated LiDAR point clouds.
With the generated corrupted data, we further develop the Adversarial
Adaptation Framework (AAF) as the countermeasure. Specifically, by extending
the key point sensitive (KPS) loss towards the Robust Long-Tail loss (RLT loss)
and utilizing a decoder branch, our approach enables the model to focus on
long-tail classes during the pre-training phase and leverages high-confidence
decoded point cloud information to restore point cloud structures during the
adaptation phase. We evaluated our AAF method on the AdvSynLiDAR dataset, where
the results demonstrate that it can mitigate performance
degradation under source adversarial perturbations for UDA in the 3D point
cloud segmentation application.
|
2504.01667 | Cedric Lothritz | Cedric Lothritz, Jordi Cabot | Testing Low-Resource Language Support in LLMs Using Language Proficiency
Exams: the Case of Luxembourgish | 18 pages, 2 figures, 11 tables | null | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | Large Language Models (LLMs) have become an increasingly important tool in
research and society at large. While LLMs are regularly used all over the world
by experts and lay-people alike, they are predominantly developed with
English-speaking users in mind, performing well in English and other
widespread languages, while less-resourced languages such as Luxembourgish are
seen as a lower priority. This lack of attention is also reflected in the
sparsity of available evaluation tools and datasets. In this study, we
investigate the viability of language proficiency exams as such evaluation
tools for the Luxembourgish language. We find that large models such as
ChatGPT, Claude and DeepSeek-R1 typically achieve high scores, while smaller
models show weak performance. We also find that performance on such
language exams can be used to predict performance on other NLP tasks.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 12:16:14 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 11:39:22 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Lothritz",
"Cedric",
""
],
[
"Cabot",
"Jordi",
""
]
] | TITLE: Testing Low-Resource Language Support in LLMs Using Language Proficiency
Exams: the Case of Luxembourgish
ABSTRACT: Large Language Models (LLMs) have become an increasingly important tool in
research and society at large. While LLMs are regularly used all over the world
by experts and lay-people alike, they are predominantly developed with
English-speaking users in mind, performing well in English and other
widespread languages, while less-resourced languages such as Luxembourgish are
seen as a lower priority. This lack of attention is also reflected in the
sparsity of available evaluation tools and datasets. In this study, we
investigate the viability of language proficiency exams as such evaluation
tools for the Luxembourgish language. We find that large models such as
ChatGPT, Claude and DeepSeek-R1 typically achieve high scores, while smaller
models show weak performance. We also find that performance on such
language exams can be used to predict performance on other NLP tasks.
|
2504.01722 | Damien Robert | Kaan Karaman, Yuchang Jiang, Damien Robert, Vivien Sainte Fare Garnot,
Maria Jo\~ao Santos, Jan Dirk Wegner | GSR4B: Biomass Map Super-Resolution with Sentinel-1/2 Guidance | Accepted for an oral presentation at the ISPRS Geospatial Week 2025 | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Accurate Above-Ground Biomass (AGB) mapping at both large scale and high
spatio-temporal resolution is essential for applications ranging from climate
modeling to biodiversity assessment and sustainable supply chain monitoring.
At present, fine-grained AGB mapping relies on costly airborne laser scanning
acquisition campaigns usually limited to regional scales. Initiatives such as
the ESA CCI map attempt to generate global biomass products from diverse
spaceborne sensors but at a coarser resolution. To enable global,
high-resolution (HR) mapping, several works propose to regress AGB from HR
satellite observations such as ESA Sentinel-1/2 images. We propose a novel way
to address HR AGB estimation, by leveraging both HR satellite observations and
existing low-resolution (LR) biomass products. We cast this problem as Guided
Super-Resolution (GSR), aiming at upsampling LR biomass maps (sources) from
$100$ to $10$ m resolution, using auxiliary HR co-registered satellite images
(guides). We compare super-resolving AGB maps with and without guidance,
against direct regression from satellite images, on the public BioMassters
dataset. We observe that Multi-Scale Guidance (MSG) outperforms direct
regression both for regression ($-780$ t/ha RMSE) and perception ($+2.0$ dB
PSNR) metrics, and better captures high-biomass values, without significant
computational overhead. Interestingly, unlike the RGB+Depth setting they were
originally designed for, our best-performing AGB GSR approaches are those that
most preserve the guide image texture. Our results make a strong case for
adopting the GSR framework for accurate HR biomass mapping at scale. Our code
and model weights are made publicly available
(https://github.com/kaankaramanofficial/GSR4B).
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 13:28:27 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 09:49:33 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Karaman",
"Kaan",
""
],
[
"Jiang",
"Yuchang",
""
],
[
"Robert",
"Damien",
""
],
[
"Garnot",
"Vivien Sainte Fare",
""
],
[
"Santos",
"Maria João",
""
],
[
"Wegner",
"Jan Dirk",
""
]
] | TITLE: GSR4B: Biomass Map Super-Resolution with Sentinel-1/2 Guidance
ABSTRACT: Accurate Above-Ground Biomass (AGB) mapping at both large scale and high
spatio-temporal resolution is essential for applications ranging from climate
modeling to biodiversity assessment and sustainable supply chain monitoring.
At present, fine-grained AGB mapping relies on costly airborne laser scanning
acquisition campaigns usually limited to regional scales. Initiatives such as
the ESA CCI map attempt to generate global biomass products from diverse
spaceborne sensors but at a coarser resolution. To enable global,
high-resolution (HR) mapping, several works propose to regress AGB from HR
satellite observations such as ESA Sentinel-1/2 images. We propose a novel way
to address HR AGB estimation by leveraging both HR satellite observations and
existing low-resolution (LR) biomass products. We cast this problem as Guided
Super-Resolution (GSR), aiming at upsampling LR biomass maps (sources) from
$100$ to $10$ m resolution, using auxiliary HR co-registered satellite images
(guides). We compare super-resolving AGB maps with and without guidance,
against direct regression from satellite images, on the public BioMassters
dataset. We observe that Multi-Scale Guidance (MSG) outperforms direct
regression both for regression ($-780$ t/ha RMSE) and perception ($+2.0$ dB
PSNR) metrics, and better captures high-biomass values, without significant
computational overhead. Interestingly, unlike the RGB+Depth setting they were
originally designed for, our best-performing AGB GSR approaches are those that
most preserve the guide image texture. Our results make a strong case for
adopting the GSR framework for accurate HR biomass mapping at scale. Our code
and model weights are made publicly available
(https://github.com/kaankaramanofficial/GSR4B).
|
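To make the guided super-resolution (GSR) formulation above concrete, here is a hedged sketch: the LR biomass source is bilinearly upsampled onto the 10 m guide grid and a small CNN predicts a guide-conditioned residual. The architecture and channel counts are illustrative assumptions, not the paper's MSG model.

```python
# Sketch of guided super-resolution: upsample a 100 m biomass map (source)
# to 10 m using a co-registered Sentinel-2-like guide image.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GuidedSR(nn.Module):
    def __init__(self, guide_channels=10, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + guide_channels, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            nn.Conv2d(hidden, 1, 3, padding=1),
        )

    def forward(self, lr_agb, hr_guide):
        # Bilinearly upsample the LR source to the guide resolution (x10),
        # then let the network predict a residual conditioned on the guide.
        up = F.interpolate(lr_agb, size=hr_guide.shape[-2:], mode="bilinear",
                           align_corners=False)
        return up + self.net(torch.cat([up, hr_guide], dim=1))

model = GuidedSR()
lr = torch.rand(1, 1, 26, 26)        # 100 m AGB tile
guide = torch.rand(1, 10, 260, 260)  # 10 m multispectral guide
print(model(lr, guide).shape)        # torch.Size([1, 1, 260, 260])
```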
2504.01905 | Furkan \c{C}olhak | Furkan \c{C}olhak, Hasan Co\c{s}kun, Tsafac Nkombong Regine Cyrille,
Tedi Hoxa, Mert \.Ilhan Ecevit, Mehmet Nafiz Ayd{\i}n | Accelerating IoV Intrusion Detection: Benchmarking GPU-Accelerated vs
CPU-Based ML Libraries | CIIT 2025 22nd International Conference on Informatics and
Information Technologies (CIIT) | null | null | null | cs.LG cs.AI cs.CR | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The Internet of Vehicles (IoV) faces challenging cybersecurity attacks
that may require sophisticated intrusion detection systems, necessitating
rapid development and response capabilities. This research investigates the
performance advantages of GPU-accelerated libraries (cuML) compared to
traditional CPU-based implementations (scikit-learn), focusing on the speed and
efficiency required for machine learning models used in IoV threat detection
environments. The comprehensive evaluations conducted employ four machine
learning approaches (Random Forest, KNN, Logistic Regression, XGBoost) across
three distinct IoV security datasets (OTIDS, GIDS, CICIoV2024). Our findings
demonstrate that GPU-accelerated implementations dramatically improved
computational efficiency, with training times reduced by a factor of up to 159
and prediction speeds accelerated by up to 95 times compared to traditional CPU
processing, all while preserving detection accuracy. This remarkable
performance breakthrough empowers researchers and security specialists to
harness GPU acceleration for creating faster, more effective threat detection
systems that meet the urgent real-time security demands of today's connected
vehicle networks.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 17:04:53 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 08:42:45 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Çolhak",
"Furkan",
""
],
[
"Coşkun",
"Hasan",
""
],
[
"Cyrille",
"Tsafac Nkombong Regine",
""
],
[
"Hoxa",
"Tedi",
""
],
[
"Ecevit",
"Mert İlhan",
""
],
[
"Aydın",
"Mehmet Nafiz",
""
]
] | TITLE: Accelerating IoV Intrusion Detection: Benchmarking GPU-Accelerated vs
CPU-Based ML Libraries
ABSTRACT: The Internet of Vehicles (IoV) faces challenging cybersecurity attacks
that may require sophisticated intrusion detection systems, necessitating
rapid development and response capabilities. This research investigates the
performance advantages of GPU-accelerated libraries (cuML) compared to
traditional CPU-based implementations (scikit-learn), focusing on the speed and
efficiency required for machine learning models used in IoV threat detection
environments. The comprehensive evaluations conducted employ four machine
learning approaches (Random Forest, KNN, Logistic Regression, XGBoost) across
three distinct IoV security datasets (OTIDS, GIDS, CICIoV2024). Our findings
demonstrate that GPU-accelerated implementations dramatically improved
computational efficiency, with training times reduced by a factor of up to 159
and prediction speeds accelerated by up to 95 times compared to traditional CPU
processing, all while preserving detection accuracy. This remarkable
performance breakthrough empowers researchers and security specialists to
harness GPU acceleration for creating faster, more effective threat detection
systems that meet the urgent real-time security demands of today's connected
vehicle networks.
|
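A small sketch of the CPU-vs-GPU comparison the abstract above reports: time scikit-learn against cuML on an identical random-forest fit. It assumes a RAPIDS/cuML installation with a CUDA GPU, and uses a synthetic dataset rather than OTIDS, GIDS, or CICIoV2024.

```python
# Sketch: compare scikit-learn (CPU) and cuML (GPU) random-forest training time.
import time
import numpy as np
from sklearn.ensemble import RandomForestClassifier as SkRF

X = np.random.rand(100_000, 20).astype(np.float32)
y = (X[:, 0] > 0.5).astype(np.int32)

t0 = time.perf_counter()
SkRF(n_estimators=100, n_jobs=-1).fit(X, y)
print(f"scikit-learn: {time.perf_counter() - t0:.2f}s")

try:
    from cuml.ensemble import RandomForestClassifier as CuRF
    t0 = time.perf_counter()
    CuRF(n_estimators=100).fit(X, y)
    print(f"cuML:         {time.perf_counter() - t0:.2f}s")
except ImportError:
    print("cuML not installed; skipping GPU run")
```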
2504.01957 | Shu-Wei Lu | Shu-Wei Lu, Yi-Hsuan Tsai, Yi-Ting Chen | Toward Real-world BEV Perception: Depth Uncertainty Estimation via
Gaussian Splatting | Accepted to CVPR'25. https://hcis-lab.github.io/GaussianLSS/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Bird's-eye view (BEV) perception has gained significant attention because it
provides a unified representation to fuse multiple view images and enables a
wide range of downstream autonomous driving tasks, such as forecasting and
planning. Recent state-of-the-art models utilize projection-based methods which
formulate BEV perception as query learning to bypass explicit depth estimation.
While we observe promising advancements in this paradigm, they still fall short
of real-world applications because of the lack of uncertainty modeling and
expensive computational requirements. In this work, we introduce GaussianLSS, a
novel uncertainty-aware BEV perception framework that revisits
unprojection-based methods, specifically the Lift-Splat-Shoot (LSS) paradigm,
and enhances them with depth uncertainty modeling. GaussianLSS represents
spatial dispersion by learning a soft depth mean and computing the variance of
the depth distribution, which implicitly captures object extents. We then
transform the depth distribution into 3D Gaussians and rasterize them to
construct uncertainty-aware BEV features. We evaluate GaussianLSS on the
nuScenes dataset, achieving state-of-the-art performance compared to
unprojection-based methods. In particular, it provides significant advantages
in speed, running 2.5x faster, and in memory efficiency, using 0.3x less memory
compared to projection-based methods, while achieving competitive performance
with only a 0.4% IoU difference.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 17:59:38 GMT"
},
{
"version": "v2",
"created": "Thu, 3 Apr 2025 07:01:32 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Lu",
"Shu-Wei",
""
],
[
"Tsai",
"Yi-Hsuan",
""
],
[
"Chen",
"Yi-Ting",
""
]
] | TITLE: Toward Real-world BEV Perception: Depth Uncertainty Estimation via
Gaussian Splatting
ABSTRACT: Bird's-eye view (BEV) perception has gained significant attention because it
provides a unified representation to fuse multiple view images and enables a
wide range of downstream autonomous driving tasks, such as forecasting and
planning. Recent state-of-the-art models utilize projection-based methods which
formulate BEV perception as query learning to bypass explicit depth estimation.
While we observe promising advancements in this paradigm, they still fall short
of real-world applications because of the lack of uncertainty modeling and
expensive computational requirements. In this work, we introduce GaussianLSS, a
novel uncertainty-aware BEV perception framework that revisits
unprojection-based methods, specifically the Lift-Splat-Shoot (LSS) paradigm,
and enhances them with depth uncertainty modeling. GaussianLSS represents
spatial dispersion by learning a soft depth mean and computing the variance of
the depth distribution, which implicitly captures object extents. We then
transform the depth distribution into 3D Gaussians and rasterize them to
construct uncertainty-aware BEV features. We evaluate GaussianLSS on the
nuScenes dataset, achieving state-of-the-art performance compared to
unprojection-based methods. In particular, it provides significant advantages
in speed, running 2.5x faster, and in memory efficiency, using 0.3x less memory
compared to projection-based methods, while achieving competitive performance
with only a 0.4% IoU difference.
|
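The "soft depth mean and variance" idea above admits a short worked example: treat per-pixel depth logits as a categorical distribution over bin centers and take its first two moments. The bin layout and tensor shapes are assumptions, not GaussianLSS's configuration.

```python
# Worked sketch: soft depth mean and variance from a categorical depth head.
import torch

B, D, H, W = 2, 64, 28, 60
logits = torch.randn(B, D, H, W)                  # per-pixel depth logits
bins = torch.linspace(1.0, 60.0, D)               # depth bin centers (m)

p = logits.softmax(dim=1)                         # (B, D, H, W)
centers = bins.view(1, D, 1, 1)
mean = (p * centers).sum(dim=1)                   # soft depth mean  (B, H, W)
var = (p * (centers - mean.unsqueeze(1)) ** 2).sum(dim=1)  # depth variance

print(mean.shape, var.shape)  # each torch.Size([2, 28, 60])
```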
2504.01973 | Oliver Bent | Christoph Brunken, Sebastien Boyer, Mustafa Omar, Martin Maarand,
Olivier Peltre, Solal Attias, Bakary N'tji Diallo, Anastasia Markina, Olaf
Othersen, Oliver Bent | Universally applicable and tunable graph-based coarse-graining for
Machine learning force fields | null | null | null | null | physics.chem-ph cs.AI | http://creativecommons.org/licenses/by/4.0/ | Coarse-grained (CG) force field methods for molecular systems are a crucial
tool to simulate large biological macromolecules and are therefore essential
for characterisations of biomolecular systems. While state-of-the-art deep
learning (DL)-based models for all-atom force fields have improved immensely
over recent years, we observe and analyse significant limitations of the
currently available approaches for DL-based CG simulations. In this work, we
present the first transferable DL-based CG force field approach (i.e., not
specific to only one narrowly defined system type) applicable to a wide range
of biosystems. To achieve this, our CG algorithm does not rely on hard-coded
rules and is tuned to output coarse-grained systems optimised for minimal
statistical noise in the ground truth CG forces, which results in significant
improvement of model training. Our force field model is also the first CG
variant that is based on the MACE architecture and is trained on a custom
dataset created by a new approach based on the fragmentation of large
biosystems covering protein, RNA and lipid chemistry. We demonstrate that our
model can be applied in molecular dynamics simulations to obtain stable and
qualitatively accurate trajectories for a variety of systems, while also
discussing cases for which we observe limited reliability.
| [
{
"version": "v1",
"created": "Mon, 24 Mar 2025 16:55:53 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Brunken",
"Christoph",
""
],
[
"Boyer",
"Sebastien",
""
],
[
"Omar",
"Mustafa",
""
],
[
"Maarand",
"Martin",
""
],
[
"Peltre",
"Olivier",
""
],
[
"Attias",
"Solal",
""
],
[
"Diallo",
"Bakary N'tji",
""
],
[
"Markina",
"Anastasia",
""
],
[
"Othersen",
"Olaf",
""
],
[
"Bent",
"Oliver",
""
]
] | TITLE: Universally applicable and tunable graph-based coarse-graining for
Machine learning force fields
ABSTRACT: Coarse-grained (CG) force field methods for molecular systems are a crucial
tool to simulate large biological macromolecules and are therefore essential
for characterisations of biomolecular systems. While state-of-the-art deep
learning (DL)-based models for all-atom force fields have improved immensely
over recent years, we observe and analyse significant limitations of the
currently available approaches for DL-based CG simulations. In this work, we
present the first transferable DL-based CG force field approach (i.e., not
specific to only one narrowly defined system type) applicable to a wide range
of biosystems. To achieve this, our CG algorithm does not rely on hard-coded
rules and is tuned to output coarse-grained systems optimised for minimal
statistical noise in the ground truth CG forces, which results in significant
improvement of model training. Our force field model is also the first CG
variant that is based on the MACE architecture and is trained on a custom
dataset created by a new approach based on the fragmentation of large
biosystems covering protein, RNA and lipid chemistry. We demonstrate that our
model can be applied in molecular dynamics simulations to obtain stable and
qualitatively accurate trajectories for a variety of systems, while also
discussing cases for which we observe limited reliability.
|
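As an illustration of the coarse-graining setup above (not the paper's graph-based, noise-optimised mapping): assign atoms to beads and take the ground-truth CG force on a bead as the sum of its member atoms' forces, the standard force-aggregation mapping.

```python
# Sketch of a standard coarse-graining step with an arbitrary atom-to-bead map.
import numpy as np

n_atoms, n_beads = 12, 3
positions = np.random.rand(n_atoms, 3)
forces = np.random.randn(n_atoms, 3)
assignment = np.repeat(np.arange(n_beads), n_atoms // n_beads)  # atom -> bead

bead_pos = np.zeros((n_beads, 3))
bead_force = np.zeros((n_beads, 3))
for b in range(n_beads):
    members = assignment == b
    bead_pos[b] = positions[members].mean(axis=0)   # bead at member centroid
    bead_force[b] = forces[members].sum(axis=0)     # aggregated ground-truth CG force

print(bead_pos.shape, bead_force.shape)
```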
2504.01989 | Miao Fan | Yi Yao, Miao Fan, Shengtong Xu, Haoyi Xiong, Xiangzeng Liu, Wenbo Hu,
Wenbing Huang | A Concise Survey on Lane Topology Reasoning for HD Mapping | Accepted by IEEE IV'25 | null | null | null | cs.RO cs.CV | http://creativecommons.org/licenses/by/4.0/ | Lane topology reasoning techniques play a crucial role in high-definition
(HD) mapping and autonomous driving applications. While recent years have
witnessed significant advances in this field, there has been limited effort to
consolidate these works into a comprehensive overview. This survey
systematically reviews the evolution and current state of lane topology
reasoning methods, categorizing them into three major paradigms: procedural
modeling-based methods, aerial imagery-based methods, and onboard sensors-based
methods. We analyze the progression from early rule-based approaches to modern
learning-based solutions utilizing transformers, graph neural networks (GNNs),
and other deep learning architectures. The paper examines standardized
evaluation metrics, including road-level measures (APLS and TLTS score), and
lane-level metrics (DET and TOP score), along with performance comparisons on
benchmark datasets such as OpenLane-V2. We identify key technical challenges,
including dataset availability and model efficiency, and outline promising
directions for future research. This comprehensive review provides researchers
and practitioners with insights into the theoretical frameworks, practical
implementations, and emerging trends in lane topology reasoning for HD mapping
applications.
| [
{
"version": "v1",
"created": "Mon, 31 Mar 2025 11:30:40 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Yao",
"Yi",
""
],
[
"Fan",
"Miao",
""
],
[
"Xu",
"Shengtong",
""
],
[
"Xiong",
"Haoyi",
""
],
[
"Liu",
"Xiangzeng",
""
],
[
"Hu",
"Wenbo",
""
],
[
"Huang",
"Wenbing",
""
]
] | TITLE: A Concise Survey on Lane Topology Reasoning for HD Mapping
ABSTRACT: Lane topology reasoning techniques play a crucial role in high-definition
(HD) mapping and autonomous driving applications. While recent years have
witnessed significant advances in this field, there has been limited effort to
consolidate these works into a comprehensive overview. This survey
systematically reviews the evolution and current state of lane topology
reasoning methods, categorizing them into three major paradigms: procedural
modeling-based methods, aerial imagery-based methods, and onboard sensors-based
methods. We analyze the progression from early rule-based approaches to modern
learning-based solutions utilizing transformers, graph neural networks (GNNs),
and other deep learning architectures. The paper examines standardized
evaluation metrics, including road-level measures (APLS and TLTS score), and
lane-level metrics (DET and TOP score), along with performance comparisons on
benchmark datasets such as OpenLane-V2. We identify key technical challenges,
including dataset availability and model efficiency, and outline promising
directions for future research. This comprehensive review provides researchers
and practitioners with insights into the theoretical frameworks, practical
implementations, and emerging trends in lane topology reasoning for HD mapping
applications.
|
2504.02004 | Mingshuai Yao | Mingshuai Yao, Mengting Chen, Qinye Zhou, Yabo Zhang, Ming Liu,
Xiaoming Li, Shaohui Liu, Chen Ju, Shuai Xiao, Qingwen Liu, Jinsong Lan,
Wangmeng Zuo | Beyond Static Scenes: Camera-controllable Background Generation for
Human Motion | null | null | null | null | cs.GR | http://creativecommons.org/licenses/by/4.0/ | In this paper, we investigate the generation of new video backgrounds given a
human foreground video, a camera pose, and a reference scene image. This task
presents three key challenges. First, the generated background should precisely
follow the camera movements corresponding to the human foreground. Second, as
the camera shifts in different directions, newly revealed content should appear
seamless and natural. Third, objects within the video frame should maintain
consistent textures as the camera moves to ensure visual coherence. To address
these challenges, we propose DynaScene, a new framework that uses camera poses
extracted from the original video as an explicit control to drive background
motion. Specifically, we design a multi-task learning paradigm that
incorporates auxiliary tasks, namely background outpainting and scene
variation, to enhance the realism of the generated backgrounds. Given the
scarcity of suitable data, we constructed a large-scale, high-quality dataset
tailored for this task, comprising video foregrounds, reference scene images,
and corresponding camera poses. This dataset contains 200K video clips, ten
times larger than existing real-world human video datasets, providing a
significantly richer and more diverse training resource. Project page:
https://yaomingshuai.github.io/Beyond-Static-Scenes.github.io/
| [
{
"version": "v1",
"created": "Tue, 1 Apr 2025 18:12:22 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Yao",
"Mingshuai",
""
],
[
"Chen",
"Mengting",
""
],
[
"Zhou",
"Qinye",
""
],
[
"Zhang",
"Yabo",
""
],
[
"Liu",
"Ming",
""
],
[
"Li",
"Xiaoming",
""
],
[
"Liu",
"Shaohui",
""
],
[
"Ju",
"Chen",
""
],
[
"Xiao",
"Shuai",
""
],
[
"Liu",
"Qingwen",
""
],
[
"Lan",
"Jinsong",
""
],
[
"Zuo",
"Wangmeng",
""
]
] | TITLE: Beyond Static Scenes: Camera-controllable Background Generation for
Human Motion
ABSTRACT: In this paper, we investigate the generation of new video backgrounds given a
human foreground video, a camera pose, and a reference scene image. This task
presents three key challenges. First, the generated background should precisely
follow the camera movements corresponding to the human foreground. Second, as
the camera shifts in different directions, newly revealed content should appear
seamless and natural. Third, objects within the video frame should maintain
consistent textures as the camera moves to ensure visual coherence. To address
these challenges, we propose DynaScene, a new framework that uses camera poses
extracted from the original video as an explicit control to drive background
motion. Specifically, we design a multi-task learning paradigm that
incorporates auxiliary tasks, namely background outpainting and scene
variation, to enhance the realism of the generated backgrounds. Given the
scarcity of suitable data, we constructed a large-scale, high-quality dataset
tailored for this task, comprising video foregrounds, reference scene images,
and corresponding camera poses. This dataset contains 200K video clips, ten
times larger than existing real-world human video datasets, providing a
significantly richer and more diverse training resource. Project page:
https://yaomingshuai.github.io/Beyond-Static-Scenes.github.io/
|
2504.02008 | Kecheng Chen | Kecheng Chen, Xinyu Luo, Tiexin Qin, Jie Liu, Hui Liu, Victor Ho Fun
Lee, Hong Yan, and Haoliang Li | Test-time Adaptation for Foundation Medical Segmentation Model without
Parametric Updates | Under review | null | null | null | q-bio.QM cs.AI | http://creativecommons.org/licenses/by/4.0/ | Foundation medical segmentation models, with MedSAM being the most popular,
have achieved promising performance across organs and lesions. However, MedSAM
still suffers from compromised performance on specific lesions with intricate
structures and appearance, as well as bounding box prompt-induced
perturbations. Although current test-time adaptation (TTA) methods for medical
image segmentation may tackle this issue, partial (e.g., batch normalization)
or whole parametric updates restrict their effectiveness due to limited update
signals or catastrophic forgetting in large models. Meanwhile, these approaches
ignore the computational complexity during adaptation, which is particularly
significant for modern foundation models. To this end, our theoretical analyses
reveal that directly refining image embeddings can approach the same
goal as parametric updates under the MedSAM architecture, which enables us to
realize high computational efficiency and segmentation performance without the
risk of catastrophic forgetting. Under this framework, we propose to encourage
maximizing factorized conditional probabilities of the posterior prediction
probability using a proposed distribution-approximated latent conditional
random field loss combined with an entropy minimization loss. Experiments show
that we achieve about 3\% Dice score improvements across three datasets while
reducing computational complexity by over 7 times.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 03:03:34 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Chen",
"Kecheng",
""
],
[
"Luo",
"Xinyu",
""
],
[
"Qin",
"Tiexin",
""
],
[
"Liu",
"Jie",
""
],
[
"Liu",
"Hui",
""
],
[
"Lee",
"Victor Ho Fun",
""
],
[
"Yan",
"Hong",
""
],
[
"Li",
"Haoliang",
""
]
] | TITLE: Test-time Adaptation for Foundation Medical Segmentation Model without
Parametric Updates
ABSTRACT: Foundation medical segmentation models, with MedSAM being the most popular,
have achieved promising performance across organs and lesions. However, MedSAM
still suffers from compromised performance on specific lesions with intricate
structures and appearance, as well as bounding box prompt-induced
perturbations. Although current test-time adaptation (TTA) methods for medical
image segmentation may tackle this issue, partial (e.g., batch normalization)
or whole parametric updates restrict their effectiveness due to limited update
signals or catastrophic forgetting in large models. Meanwhile, these approaches
ignore the computational complexity during adaptation, which is particularly
significant for modern foundation models. To this end, our theoretical analyses
reveal that directly refining image embeddings can approach the same
goal as parametric updates under the MedSAM architecture, which enables us to
realize high computational efficiency and segmentation performance without the
risk of catastrophic forgetting. Under this framework, we propose to encourage
maximizing factorized conditional probabilities of the posterior prediction
probability using a proposed distribution-approximated latent conditional
random field loss combined with an entropy minimization loss. Experiments show
that we achieve about 3\% Dice score improvements across three datasets while
reducing computational complexity by over 7 times.
|
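A hedged sketch of test-time adaptation without parametric updates, as described above: freeze the network and gradient-descend on the image embedding itself under an entropy-minimization loss. The decoder below is a placeholder module, not MedSAM, and the paper's latent conditional random field loss is omitted for brevity.

```python
# Sketch: refine a frozen model's image embedding via entropy minimization.
import torch
import torch.nn as nn

decoder = nn.Conv2d(256, 2, 1)              # placeholder mask decoder
for p in decoder.parameters():
    p.requires_grad_(False)                 # no parametric updates

embedding = torch.randn(1, 256, 64, 64, requires_grad=True)
opt = torch.optim.Adam([embedding], lr=1e-2)

for _ in range(10):
    probs = decoder(embedding).softmax(dim=1)
    entropy = -(probs * probs.clamp_min(1e-8).log()).sum(dim=1).mean()
    opt.zero_grad()
    entropy.backward()                      # gradients flow only to the embedding
    opt.step()

print(f"final entropy: {entropy.item():.4f}")
```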
2504.02009 | Zhonghang Li | Zhonghang Li, Lianghao Xia, Xubin Ren, Jiabin Tang, Tianyi Chen, Yong
Xu, Chao Huang | Urban Computing in the Era of Large Language Models | 36 pages | null | null | null | cs.CY cs.CL | http://creativecommons.org/licenses/by/4.0/ | Urban computing has emerged as a multidisciplinary field that harnesses
data-driven technologies to address challenges and improve urban living.
Traditional approaches, while beneficial, often face challenges with
generalization, scalability, and contextual understanding. The advent of Large
Language Models (LLMs) offers transformative potential in this domain. This
survey explores the intersection of LLMs and urban computing, emphasizing the
impact of LLMs in processing and analyzing urban data, enhancing
decision-making, and fostering citizen engagement. We provide a concise
overview of the evolution and core technologies of LLMs. Additionally, we
survey their applications across key urban domains, such as transportation,
public safety, and environmental monitoring, summarizing essential tasks and
prior works in various urban contexts, while highlighting LLMs' functional
roles and implementation patterns. Building on this, we propose potential
LLM-based solutions to address unresolved challenges. To facilitate in-depth
research, we compile a list of available datasets and tools applicable to
diverse urban scenarios. Finally, we discuss the limitations of current
approaches and outline future directions for advancing LLMs in urban computing.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 05:12:13 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Li",
"Zhonghang",
""
],
[
"Xia",
"Lianghao",
""
],
[
"Ren",
"Xubin",
""
],
[
"Tang",
"Jiabin",
""
],
[
"Chen",
"Tianyi",
""
],
[
"Xu",
"Yong",
""
],
[
"Huang",
"Chao",
""
]
] | TITLE: Urban Computing in the Era of Large Language Models
ABSTRACT: Urban computing has emerged as a multidisciplinary field that harnesses
data-driven technologies to address challenges and improve urban living.
Traditional approaches, while beneficial, often face challenges with
generalization, scalability, and contextual understanding. The advent of Large
Language Models (LLMs) offers transformative potential in this domain. This
survey explores the intersection of LLMs and urban computing, emphasizing the
impact of LLMs in processing and analyzing urban data, enhancing
decision-making, and fostering citizen engagement. We provide a concise
overview of the evolution and core technologies of LLMs. Additionally, we
survey their applications across key urban domains, such as transportation,
public safety, and environmental monitoring, summarizing essential tasks and
prior works in various urban contexts, while highlighting LLMs' functional
roles and implementation patterns. Building on this, we propose potential
LLM-based solutions to address unresolved challenges. To facilitate in-depth
research, we compile a list of available datasets and tools applicable to
diverse urban scenarios. Finally, we discuss the limitations of current
approaches and outline future directions for advancing LLMs in urban computing.
|
2504.02011 | Dohyun Kim | Dohyun Kim, Sehwan Park, Geonhee Han, Seung Wook Kim, and Paul
Hongsuck Seo | Random Conditioning with Distillation for Data-Efficient Diffusion Model
Compression | Accepted to CVPR 2025. 8 pages main paper + 4 pages references + 5
pages supplementary, 9 figures in total | null | null | null | cs.LG cs.AI | http://creativecommons.org/licenses/by/4.0/ | Diffusion models generate high-quality images through progressive denoising
but are computationally intensive due to large model sizes and repeated
sampling. Knowledge distillation, which transfers knowledge from a complex
teacher to a simpler student model, has been widely studied in recognition
tasks, particularly for transferring concepts unseen during student training.
However, its application to diffusion models remains underexplored, especially
in enabling student models to generate concepts not covered by the training
images. In this work, we propose Random Conditioning, a novel approach that
pairs noised images with randomly selected text conditions to enable efficient,
image-free knowledge distillation. By leveraging this technique, we show that
the student can generate concepts unseen in the training images. When applied
to conditional diffusion model distillation, our method allows the student to
explore the condition space without generating condition-specific images,
resulting in notable improvements in both generation quality and efficiency.
This promotes resource-efficient deployment of generative diffusion models,
broadening their accessibility for both research and real-world applications.
Code, models, and datasets are available at
https://dohyun-as.github.io/Random-Conditioning .
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 05:41:19 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Kim",
"Dohyun",
""
],
[
"Park",
"Sehwan",
""
],
[
"Han",
"Geonhee",
""
],
[
"Kim",
"Seung Wook",
""
],
[
"Seo",
"Paul Hongsuck",
""
]
] | TITLE: Random Conditioning with Distillation for Data-Efficient Diffusion Model
Compression
ABSTRACT: Diffusion models generate high-quality images through progressive denoising
but are computationally intensive due to large model sizes and repeated
sampling. Knowledge distillation, which transfers knowledge from a complex
teacher to a simpler student model, has been widely studied in recognition
tasks, particularly for transferring concepts unseen during student training.
However, its application to diffusion models remains underexplored, especially
in enabling student models to generate concepts not covered by the training
images. In this work, we propose Random Conditioning, a novel approach that
pairs noised images with randomly selected text conditions to enable efficient,
image-free knowledge distillation. By leveraging this technique, we show that
the student can generate concepts unseen in the training images. When applied
to conditional diffusion model distillation, our method allows the student to
explore the condition space without generating condition-specific images,
resulting in notable improvements in both generation quality and efficiency.
This promotes resource-efficient deployment of generative diffusion models,
broadening their accessibility for both research and real-world applications.
Code, models, and datasets are available at
https://dohyun-as.github.io/Random-Conditioning .
|
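The random-conditioning idea above can be sketched as a distillation step that pairs noised inputs with randomly drawn text conditions and matches teacher and student noise predictions. Both models below are toy stand-ins so the loop runs end to end; the condition bank and all shapes are illustrative assumptions.

```python
# Sketch: image-free distillation with randomly sampled text conditions.
import torch
import torch.nn as nn

cond_bank = torch.randn(1000, 77, 768)       # pool of text-condition embeddings

class ToyUNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(768, 4)
        self.conv = nn.Conv2d(4, 4, 3, padding=1)
    def forward(self, x, t, cond):
        c = self.proj(cond.mean(dim=1))[:, :, None, None]
        return self.conv(x + c)

teacher, student = ToyUNet().eval(), ToyUNet()
opt = torch.optim.Adam(student.parameters(), lr=1e-4)

for step in range(5):
    x_t = torch.randn(8, 4, 32, 32)                       # noised latents
    t = torch.randint(0, 1000, (8,))
    cond = cond_bank[torch.randint(0, 1000, (8,))]        # random condition
    with torch.no_grad():
        target = teacher(x_t, t, cond)                    # teacher prediction
    loss = nn.functional.mse_loss(student(x_t, t, cond), target)
    opt.zero_grad(); loss.backward(); opt.step()
```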
2504.02012 | Bedionita Soro | Soro Bedionita, Bruno Andreis, Song Chong, Sung Ju Hwang | Instruction-Guided Autoregressive Neural Network Parameter Generation | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Learning to generate neural network parameters conditioned on task
descriptions and architecture specifications is pivotal for advancing model
adaptability and transfer learning. Existing methods, especially those based on
diffusion models, suffer from limited scalability to large architectures,
rigidity in handling varying network depths, and disjointed parameter
generation that undermines inter-layer coherence. In this work, we propose IGPG
(Instruction Guided Parameter Generation), an autoregressive framework that
unifies parameter synthesis across diverse tasks and architectures. IGPG
leverages a VQ-VAE and an autoregressive model to generate neural network
parameters, conditioned on task instructions, dataset, and architecture
details. By autoregressively generating neural network weights' tokens, IGPG
ensures inter-layer coherence and enables efficient adaptation across models
and datasets. Operating at the token level, IGPG effectively captures complex
parameter distributions aggregated from a broad spectrum of pretrained models.
Extensive experiments on multiple vision datasets demonstrate that IGPG
consolidates diverse pretrained models into a single, flexible generative
framework. The synthesized parameters achieve competitive or superior
performance relative to state-of-the-art methods, especially in terms of
scalability and efficiency when applied to large architectures. These results
underscore IGPG's potential as a powerful tool for pretrained weight retrieval,
model selection, and rapid task-specific fine-tuning.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 05:50:19 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Bedionita",
"Soro",
""
],
[
"Andreis",
"Bruno",
""
],
[
"Chong",
"Song",
""
],
[
"Hwang",
"Sung Ju",
""
]
] | TITLE: Instruction-Guided Autoregressive Neural Network Parameter Generation
ABSTRACT: Learning to generate neural network parameters conditioned on task
descriptions and architecture specifications is pivotal for advancing model
adaptability and transfer learning. Existing methods, especially those based on
diffusion models, suffer from limited scalability to large architectures,
rigidity in handling varying network depths, and disjointed parameter
generation that undermines inter-layer coherence. In this work, we propose IGPG
(Instruction Guided Parameter Generation), an autoregressive framework that
unifies parameter synthesis across diverse tasks and architectures. IGPG
leverages a VQ-VAE and an autoregressive model to generate neural network
parameters, conditioned on task instructions, dataset, and architecture
details. By autoregressively generating neural network weights' tokens, IGPG
ensures inter-layer coherence and enables efficient adaptation across models
and datasets. Operating at the token level, IGPG effectively captures complex
parameter distributions aggregated from a broad spectrum of pretrained models.
Extensive experiments on multiple vision datasets demonstrate that IGPG
consolidates diverse pretrained models into a single, flexible generative
framework. The synthesized parameters achieve competitive or superior
performance relative to state-of-the-art methods, especially in terms of
scalability and efficiency when applied to large architectures. These results
underscore IGPG's potential as a powerful tool for pretrained weight retrieval,
model selection, and rapid task-specific fine-tuning.
|
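A rough sketch of the decode path suggested by the abstract above: sample weight tokens autoregressively, look them up in a VQ codebook, and reshape the result into a layer's weights. The next-token sampler below is a toy conditioned on an instruction embedding; nothing here reproduces IGPG's trained models.

```python
# Sketch: decode autoregressively sampled tokens into layer parameters.
import torch

codebook = torch.randn(512, 64)              # VQ-VAE codebook: 512 codes, dim 64
instruction = torch.randn(64)                # encoded task/architecture instruction

def sample_next_token(prefix, instruction):
    # Toy stand-in for the autoregressive model's next-token distribution.
    logits = codebook @ instruction + 0.01 * len(prefix)
    return torch.distributions.Categorical(logits=logits).sample()

tokens = []
for _ in range(16):                          # 16 tokens -> 16*64 = 1024 params
    tokens.append(sample_next_token(tokens, instruction))

flat = codebook[torch.stack(tokens)].reshape(-1)   # token ids -> code vectors
weight = flat.reshape(32, 32)                      # a 32x32 linear layer
print(weight.shape)
```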
2504.02013 | Sijie Xiong | Sijie Xiong, Shuqing Liu, Cheng Tang, Fumiya Okubo, Haoling Xiong,
Atsushi Shimada | Attention Mamba: Time Series Modeling with Adaptive Pooling Acceleration
and Receptive Field Enhancements | null | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | "This work has been submitted to the IEEE for possible publication. Copyright
may be transferred without notice, after which this version may no longer be
accessible." Time series modeling serves as the cornerstone of real-world
applications, such as weather forecasting and transportation management.
Recently, Mamba has become a promising model that combines near-linear
computational complexity with high prediction accuracy in time series modeling,
while facing challenges such as insufficient modeling of nonlinear dependencies
in attention and restricted receptive fields caused by convolutions. To
overcome these limitations, this paper introduces an innovative framework,
Attention Mamba, featuring a novel Adaptive Pooling block that accelerates
attention computation and incorporates global information, effectively
overcoming the constraints of limited receptive fields. Furthermore, Attention
Mamba integrates a bidirectional Mamba block, efficiently capturing long- and short-range
features and transforming inputs into the Value representations for attention
mechanisms. Extensive experiments conducted on diverse datasets underscore the
effectiveness of Attention Mamba in extracting nonlinear dependencies and
enhancing receptive fields, establishing superior performance among leading
counterparts. Our codes will be available on GitHub.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 05:56:43 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Xiong",
"Sijie",
""
],
[
"Liu",
"Shuqing",
""
],
[
"Tang",
"Cheng",
""
],
[
"Okubo",
"Fumiya",
""
],
[
"Xiong",
"Haoling",
""
],
[
"Shimada",
"Atsushi",
""
]
] | TITLE: Attention Mamba: Time Series Modeling with Adaptive Pooling Acceleration
and Receptive Field Enhancements
ABSTRACT: "This work has been submitted to the lEEE for possible publication. Copyright
may be transferred without noticeafter which this version may no longer be
accessible." Time series modeling serves as the cornerstone of real-world
applications, such as weather forecasting and transportation management.
Recently, Mamba has become a promising model that combines near-linear
computational complexity with high prediction accuracy in time series modeling,
while facing challenges such as insufficient modeling of nonlinear dependencies
in attention and restricted receptive fields caused by convolutions. To
overcome these limitations, this paper introduces an innovative framework,
Attention Mamba, featuring a novel Adaptive Pooling block that accelerates
attention computation and incorporates global information, effectively
overcoming the constraints of limited receptive fields. Furthermore, Attention
Mamba integrates a bidirectional Mamba block, efficiently capturing long- and short-range
features and transforming inputs into the Value representations for attention
mechanisms. Extensive experiments conducted on diverse datasets underscore the
effectiveness of Attention Mamba in extracting nonlinear dependencies and
enhancing receptive fields, establishing superior performance among leading
counterparts. Our codes will be available on GitHub.
|
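To illustrate the general pooling-for-attention trick mentioned above (not the exact Adaptive Pooling block): pool keys and values to a short fixed length so attention cost drops from O(L^2) to O(Lm), while the pooled summary still injects global context into every position.

```python
# Sketch: attention accelerated by adaptively pooling keys and values.
import torch
import torch.nn.functional as F

def pooled_attention(q, k, v, pooled_len=32):
    # q, k, v: (batch, length, dim)
    k_p = F.adaptive_avg_pool1d(k.transpose(1, 2), pooled_len).transpose(1, 2)
    v_p = F.adaptive_avg_pool1d(v.transpose(1, 2), pooled_len).transpose(1, 2)
    scores = q @ k_p.transpose(1, 2) / k.shape[-1] ** 0.5   # (B, L, m)
    return scores.softmax(dim=-1) @ v_p                     # (B, L, dim)

x = torch.randn(4, 720, 64)              # a long time-series embedding
print(pooled_attention(x, x, x).shape)   # torch.Size([4, 720, 64])
```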
2504.02014 | Jiannuo Li | Jiannuo Li and Lan Yao | HCAF-DTA: drug-target binding affinity prediction with cross-attention
fused hypergraph neural networks | null | null | null | null | q-bio.BM cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Accurate prediction of the binding affinity between drugs and target proteins
is a core task in computer-aided drug design. Existing deep learning methods
tend to ignore information about the internal sub-structural features of drug
molecules and about drug-target interactions, resulting in limited prediction
performance. In this paper, we propose a drug-target association prediction
model, HCAF-DTA, based on a cross-attention-fused hypergraph neural network. The
model innovatively introduces hypergraph representations in the feature
extraction stage: drug molecule hypergraphs are constructed with a tree
decomposition algorithm, and sub-structural and global features are extracted
by fusing the hypergraph neural network with a graph neural network
through hopping connections, in which the hyperedges can efficiently
characterise functional groups and other key chemical features. For
protein feature extraction, a weighted graph is constructed from the
residue contact maps predicted by the ESM model, and multilayer graph
neural networks are used to capture spatial
dependencies. In the prediction stage, a bidirectional multi-head
cross-attention mechanism is designed to model intermolecular interactions from
the dual viewpoints of atoms and amino acids, and cross-modal features with
correlated information are fused by attention. Experiments on benchmark
datasets such as Davis and KIBA show that HCAF-DTA outperforms the state of the
art on all three performance evaluation metrics, with the MSE reaching
0.198 and 0.122, respectively, an improvement of up to 4% over the best
baseline.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 06:46:28 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Li",
"Jiannuo",
""
],
[
"Yao",
"Lan",
""
]
] | TITLE: HCAF-DTA: drug-target binding affinity prediction with cross-attention
fused hypergraph neural networks
ABSTRACT: Accurate prediction of the binding affinity between drugs and target proteins
is a core task in computer-aided drug design. Existing deep learning methods
tend to ignore information about the internal sub-structural features of drug
molecules and about drug-target interactions, resulting in limited prediction
performance. In this paper, we propose a drug-target association prediction
model, HCAF-DTA, based on a cross-attention-fused hypergraph neural network. The
model innovatively introduces hypergraph representations in the feature
extraction stage: drug molecule hypergraphs are constructed with a tree
decomposition algorithm, and sub-structural and global features are extracted
by fusing the hypergraph neural network with a graph neural network
through hopping connections, in which the hyperedges can efficiently
characterise functional groups and other key chemical features. For
protein feature extraction, a weighted graph is constructed from the
residue contact maps predicted by the ESM model, and multilayer graph
neural networks are used to capture spatial
dependencies. In the prediction stage, a bidirectional multi-head
cross-attention mechanism is designed to model intermolecular interactions from
the dual viewpoints of atoms and amino acids, and cross-modal features with
correlated information are fused by attention. Experiments on benchmark
datasets such as Davis and KIBA show that HCAF-DTA outperforms the state of the
art on all three performance evaluation metrics, with the MSE reaching
0.198 and 0.122, respectively, an improvement of up to 4% over the best
baseline.
|
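The bidirectional cross-attention step above can be sketched independently of the hypergraph pipeline: atoms attend over residues, residues attend over atoms, and the pooled views feed a regression head. Feature dimensions are assumed for illustration only.

```python
# Sketch: bidirectional cross-attention between drug atoms and protein residues.
import torch
import torch.nn as nn

dim, n_atoms, n_res = 128, 40, 300
atom_feats = torch.randn(1, n_atoms, dim)    # from a drug graph encoder
res_feats = torch.randn(1, n_res, dim)       # from a protein graph encoder

attn_a2r = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
attn_r2a = nn.MultiheadAttention(dim, num_heads=4, batch_first=True)

atoms_ctx, _ = attn_a2r(atom_feats, res_feats, res_feats)   # atoms query residues
res_ctx, _ = attn_r2a(res_feats, atom_feats, atom_feats)    # residues query atoms

fused = torch.cat([atoms_ctx.mean(dim=1), res_ctx.mean(dim=1)], dim=-1)
affinity = nn.Linear(2 * dim, 1)(fused)      # regression head for binding affinity
print(affinity.shape)                        # torch.Size([1, 1])
```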
2504.02016 | Zechen Liu | Zechen Liu, Feiyang Zhang, Wei Song, Xiang Li, Wei Wei | Fourier Feature Attribution: A New Efficiency Attribution Method | 11 pages, 13 figures | null | null | null | cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | The study of neural networks from the perspective of Fourier features has
garnered significant attention. While existing analytical research suggests
that neural networks tend to learn low-frequency features, a clear attribution
method for identifying the specific learned Fourier features has remained
elusive. To bridge this gap, we propose a novel Fourier feature attribution
method grounded in signal decomposition theory. Additionally, we analyze the
differences between game-theoretic attribution metrics for Fourier and spatial
domain features, demonstrating that game-theoretic evaluation metrics are
better suited for Fourier-based feature attribution.
Our experiments show that Fourier feature attribution exhibits superior
feature selection capabilities compared to spatial domain attribution methods.
For instance, in the case of Vision Transformers (ViTs) on the ImageNet
dataset, only $8\%$ of the Fourier features are required to maintain the
original predictions for $80\%$ of the samples. Furthermore, we compare the
specificity of features identified by our method against traditional spatial
domain attribution methods. Results reveal that Fourier features exhibit
greater intra-class concentration and inter-class distinctiveness, indicating
their potential for more efficient classification and explainable AI
algorithms.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 13:20:19 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Liu",
"Zechen",
""
],
[
"Zhang",
"Feiyang",
""
],
[
"Song",
"Wei",
""
],
[
"Li",
"Xiang",
""
],
[
"Wei",
"Wei",
""
]
] | TITLE: Fourier Feature Attribution: A New Efficiency Attribution Method
ABSTRACT: The study of neural networks from the perspective of Fourier features has
garnered significant attention. While existing analytical research suggests
that neural networks tend to learn low-frequency features, a clear attribution
method for identifying the specific learned Fourier features has remained
elusive. To bridge this gap, we propose a novel Fourier feature attribution
method grounded in signal decomposition theory. Additionally, we analyze the
differences between game-theoretic attribution metrics for Fourier and spatial
domain features, demonstrating that game-theoretic evaluation metrics are
better suited for Fourier-based feature attribution.
Our experiments show that Fourier feature attribution exhibits superior
feature selection capabilities compared to spatial domain attribution methods.
For instance, in the case of Vision Transformers (ViTs) on the ImageNet
dataset, only $8\%$ of the Fourier features are required to maintain the
original predictions for $80\%$ of the samples. Furthermore, we compare the
specificity of features identified by our method against traditional spatial
domain attribution methods. Results reveal that Fourier features exhibit
greater intra-class concentration and inter-class distinctiveness, indicating
their potential for more efficient classification and explainable AI
algorithms.
|
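A sketch of the masking experiment behind the 8%-of-features claim above: keep only the top-k Fourier coefficients of an image, invert the FFT, and compare classifier predictions before and after. Spectral magnitude serves as a stand-in attribution score here; the paper's game-theoretic score is more refined.

```python
# Sketch: retain top-k Fourier coefficients and reconstruct the image.
import torch

def keep_topk_fourier(img, frac=0.08):
    # img: (C, H, W). Score coefficients by magnitude, zero out the rest.
    spec = torch.fft.fft2(img)
    mag = spec.abs().flatten()
    k = max(1, int(frac * mag.numel()))
    thresh = mag.topk(k).values.min()
    mask = (spec.abs() >= thresh).to(spec.dtype)
    return torch.fft.ifft2(spec * mask).real

img = torch.randn(3, 224, 224)
recon = keep_topk_fourier(img, frac=0.08)
# prediction check (model is any image classifier): model(img) vs model(recon)
print(recon.shape, (recon - img).abs().mean().item())
```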
2504.02017 | Xin-Ye Li | Li Xin-Ye, Du Ya-Li, and Li Ming | Enhancing LLMs in Long Code Translation through Instrumentation and
Program State Alignment | 20 pages | null | null | null | cs.SE | http://creativecommons.org/licenses/by/4.0/ | Code translation aims to transform code between programming languages while
preserving functionality, with applications in cross-platform development and
software migration. Recent advances in Large Language Models (LLMs) have
improved code translation, but challenges remain, particularly in inferring
program functionality. These issues worsen with longer and more complex code,
where current LLMs struggle to handle length and intricate semantics. To
evaluate LLMs on long code translation, we introduce LongTrans, a large-scale
execution-based benchmark with C++, Java, and Python programs, ranging from
hundreds to thousands of tokens. Our empirical study of 12 LLMs reveals a sharp
performance decline as code length increases, with even the best-performing
model, GPT-4o, achieving only 57.51% computational accuracy. This highlights
the need for further research in long code translation. We argue that code
translation should maintain invariant functionality while transforming syntax
and keywords across languages. Despite differences in appearance, program
states should remain consistent throughout execution. To address this, we
propose PAST (Program State Alignment augmented Translation), which integrates
instrumentation to capture and align program states during translation. This
approach is the first to leverage LLMs to insert instrumentation in both
original and translated code, tracing program states at runtime. By prompting
the LLM to correct errors based on output traces, we mitigate inconsistencies
and enhance translation accuracy. Experimental results show significant
improvements, with computational accuracy rising from 57.51% to 84.70% for
GPT-4o, 50.68% to 69.97% for Mistral-Large-2, and 52.45% to 76.43% for
DeepSeek-Coder-V2. These improvements are consistent across models and
datasets, with ablation studies confirming the benefits of instrumentation and
state alignment.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 13:55:29 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Xin-Ye",
"Li",
""
],
[
"Ya-Li",
"Du",
""
],
[
"Ming",
"Li",
""
]
] | TITLE: Enhancing LLMs in Long Code Translation through Instrumentation and
Program State Alignment
ABSTRACT: Code translation aims to transform code between programming languages while
preserving functionality, with applications in cross-platform development and
software migration. Recent advances in Large Language Models (LLMs) have
improved code translation, but challenges remain, particularly in inferring
program functionality. These issues worsen with longer and more complex code,
where current LLMs struggle to handle length and intricate semantics. To
evaluate LLMs on long code translation, we introduce LongTrans, a large-scale
execution-based benchmark with C++, Java, and Python programs, ranging from
hundreds to thousands of tokens. Our empirical study of 12 LLMs reveals a sharp
performance decline as code length increases, with even the best-performing
model, GPT-4o, achieving only 57.51% computational accuracy. This highlights
the need for further research in long code translation. We argue that code
translation should maintain invariant functionality while transforming syntax
and keywords across languages. Despite differences in appearance, program
states should remain consistent throughout execution. To address this, we
propose PAST (Program State Alignment augmented Translation), which integrates
instrumentation to capture and align program states during translation. This
approach is the first to leverage LLMs to insert instrumentation in both
original and translated code, tracing program states at runtime. By prompting
the LLM to correct errors based on output traces, we mitigate inconsistencies
and enhance translation accuracy. Experimental results show significant
improvements, with computational accuracy rising from 57.51% to 84.70% for
GPT-4o, 50.68% to 69.97% for Mistral-Large-2, and 52.45% to 76.43% for
DeepSeek-Coder-V2. These improvements are consistent across models and
datasets, with ablation studies confirming the benefits of instrumentation and
state alignment.
|
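The instrumentation-and-alignment idea above reduces to a small pattern: log program state at matching points in the source and translated programs, then diff the traces to localize where semantics diverge. The toy functions below are Python-only; PAST applies this across C++, Java, and Python via an LLM.

```python
# Sketch: trace program state in both programs and diff the traces.
def trace(log, name, value):
    log.append(f"{name}={value!r}")

def original(xs, log):
    total = 0
    for x in xs:
        total += x * x
        trace(log, "total", total)      # instrumented program state
    return total

def translated(xs, log):
    total = 0
    for x in xs:
        total += x + x                  # translation bug: + instead of *
        trace(log, "total", total)
    return total

log_a, log_b = [], []
original([1, 2, 3], log_a)
translated([1, 2, 3], log_b)
diverged = next((i for i, (a, b) in enumerate(zip(log_a, log_b)) if a != b), None)
print(f"traces diverge at step {diverged}: {log_a[diverged]} vs {log_b[diverged]}")
```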
2504.02055 | Sajjadur Rahman | Chen Shen, Jin Wang, Sajjadur Rahman, Eser Kandogan | MageSQL: Enhancing In-context Learning for Text-to-SQL Applications with
Large Language Models | null | null | null | null | cs.DB | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The text-to-SQL problem aims to translate natural language questions into SQL
statements to ease the interaction between database systems and end users.
Recently, Large Language Models (LLMs) have exhibited impressive capabilities
in a variety of tasks, including text-to-SQL. While prior works have explored
various strategies for prompting LLMs to generate SQL statements, they still
fall short of fully harnessing the power of LLMs due to the lack of (1)
high-quality contextual information when constructing the prompts and (2)
robust feedback mechanisms to correct translation errors. To address these
challenges, we propose MageSQL, a text-to-SQL approach based on in-context
learning over LLMs. MageSQL explores a suite of techniques that leverage the
syntax and semantics of SQL queries to identify relevant few-shot
demonstrations as context for prompting LLMs. In particular, we introduce a
graph-based demonstration selection method -- the first of its kind in the
text-to-SQL problem -- that leverages graph contrastive learning adapted with
SQL-specific data augmentation strategies. Furthermore, an error correction
module is proposed to detect and fix potential inaccuracies in the generated
SQL query. We conduct comprehensive evaluations on several benchmarking
datasets. The results show that our proposed methods outperform
state-of-the-art methods by a clear margin.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 18:33:16 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Shen",
"Chen",
""
],
[
"Wang",
"Jin",
""
],
[
"Rahman",
"Sajjadur",
""
],
[
"Kandogan",
"Eser",
""
]
] | TITLE: MageSQL: Enhancing In-context Learning for Text-to-SQL Applications with
Large Language Models
ABSTRACT: The text-to-SQL problem aims to translate natural language questions into SQL
statements to ease the interaction between database systems and end users.
Recently, Large Language Models (LLMs) have exhibited impressive capabilities
in a variety of tasks, including text-to-SQL. While prior works have explored
various strategies for prompting LLMs to generate SQL statements, they still
fall short of fully harnessing the power of LLMs due to the lack of (1)
high-quality contextual information when constructing the prompts and (2)
robust feedback mechanisms to correct translation errors. To address these
challenges, we propose MageSQL, a text-to-SQL approach based on in-context
learning over LLMs. MageSQL explores a suite of techniques that leverage the
syntax and semantics of SQL queries to identify relevant few-shot
demonstrations as context for prompting LLMs. In particular, we introduce a
graph-based demonstration selection method -- the first of its kind in the
text-to-SQL problem -- that leverages graph contrastive learning adapted with
SQL-specific data augmentation strategies. Furthermore, an error correction
module is proposed to detect and fix potential inaccuracies in the generated
SQL query. We conduct comprehensive evaluations on several benchmarking
datasets. The results show that our proposed methods outperform
state-of-the-art methods by a clear margin.
|
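A minimal sketch of similarity-based demonstration selection for a text-to-SQL prompt, as described above: embed the incoming question, rank a pool of (question, SQL) pairs, and splice the top matches into the prompt. The hashing embedder below is a toy stand-in for the paper's graph-contrastive encoder, and the demonstration pool is invented.

```python
# Sketch: pick few-shot text-to-SQL demonstrations by cosine similarity.
import numpy as np

def embed(text, dim=64):
    vec = np.zeros(dim)
    for tok in text.lower().split():
        vec[hash(tok) % dim] += 1.0
    return vec / (np.linalg.norm(vec) + 1e-8)

pool = [("How many users signed up in 2024?",
         "SELECT COUNT(*) FROM users WHERE year = 2024;"),
        ("List all orders above 100 dollars",
         "SELECT * FROM orders WHERE amount > 100;")]

def build_prompt(question, k=1):
    sims = [float(embed(question) @ embed(q)) for q, _ in pool]
    top = sorted(range(len(pool)), key=lambda i: -sims[i])[:k]
    demos = "\n".join(f"Q: {pool[i][0]}\nSQL: {pool[i][1]}" for i in top)
    return f"{demos}\nQ: {question}\nSQL:"

print(build_prompt("How many orders were placed in 2024?"))
```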
2504.02060 | Minh-Quan Ho-Le | Minh-Quan Ho-Le, Duy-Khang Ho, Van-Tu Ninh, Cathal Gurrin, Minh-Triet
Tran | LSC-ADL: An Activity of Daily Living (ADL)-Annotated Lifelog Dataset
Generated via Semi-Automatic Clustering | 11 pages, 4 figures | null | null | null | cs.CV cs.IR | http://creativecommons.org/licenses/by/4.0/ | Lifelogging involves continuously capturing personal data through wearable
cameras, providing an egocentric view of daily activities. Lifelog retrieval
aims to search and retrieve relevant moments from this data, yet existing
methods largely overlook activity-level annotations, which capture temporal
relationships and enrich semantic understanding. In this work, we introduce
LSC-ADL, an ADL-annotated lifelog dataset derived from the LSC dataset,
incorporating Activities of Daily Living (ADLs) as a structured semantic layer.
Using a semi-automatic approach featuring the HDBSCAN algorithm for intra-class
clustering and human-in-the-loop verification, we generate accurate ADL
annotations to enhance retrieval explainability. By integrating action
recognition into lifelog retrieval, LSC-ADL bridges a critical gap in existing
research, offering a more context-aware representation of daily life. We
believe this dataset will advance research in lifelog retrieval, activity
recognition, and egocentric vision, ultimately improving the accuracy and
interpretability of retrieved content. The ADL annotations can be downloaded at
https://bit.ly/lsc-adl-annotations.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 18:39:28 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Ho-Le",
"Minh-Quan",
""
],
[
"Ho",
"Duy-Khang",
""
],
[
"Ninh",
"Van-Tu",
""
],
[
"Gurrin",
"Cathal",
""
],
[
"Tran",
"Minh-Triet",
""
]
] | TITLE: LSC-ADL: An Activity of Daily Living (ADL)-Annotated Lifelog Dataset
Generated via Semi-Automatic Clustering
ABSTRACT: Lifelogging involves continuously capturing personal data through wearable
cameras, providing an egocentric view of daily activities. Lifelog retrieval
aims to search and retrieve relevant moments from this data, yet existing
methods largely overlook activity-level annotations, which capture temporal
relationships and enrich semantic understanding. In this work, we introduce
LSC-ADL, an ADL-annotated lifelog dataset derived from the LSC dataset,
incorporating Activities of Daily Living (ADLs) as a structured semantic layer.
Using a semi-automatic approach featuring the HDBSCAN algorithm for intra-class
clustering and human-in-the-loop verification, we generate accurate ADL
annotations to enhance retrieval explainability. By integrating action
recognition into lifelog retrieval, LSC-ADL bridges a critical gap in existing
research, offering a more context-aware representation of daily life. We
believe this dataset will advance research in lifelog retrieval, activity
recognition, and egocentric vision, ultimately improving the accuracy and
interpretability of retrieved content. The ADL annotations can be downloaded at
https://bit.ly/lsc-adl-annotations.
|
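The semi-automatic annotation loop above can be sketched with the hdbscan package: cluster per-image embeddings within one coarse activity class, then hand each cluster (rather than each image) to a human verifier. Random vectors stand in for real lifelog-image embeddings, and `min_cluster_size` is an assumed setting.

```python
# Sketch: HDBSCAN intra-class clustering with human-in-the-loop verification.
import numpy as np
import hdbscan

embeddings = np.random.rand(500, 128)            # images of one activity class
clusterer = hdbscan.HDBSCAN(min_cluster_size=10)
labels = clusterer.fit_predict(embeddings)       # -1 marks noise/outliers

for cid in sorted(set(labels) - {-1}):
    members = np.flatnonzero(labels == cid)
    print(f"cluster {cid}: {len(members)} images -> verify one ADL label")
print(f"{(labels == -1).sum()} outliers left for manual annotation")
```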
2504.02061 | Yuxin Guo | Yuxin Guo, Shuailei Ma, Shijie Ma, Xiaoyi Bao, Chen-Wei Xie, Kecheng
Zheng, Tingyu Weng, Siyang Sun, Yun Zheng, Wei Zou | Aligned Better, Listen Better for Audio-Visual Large Language Models | Accepted to ICLR 2025 | null | null | null | cs.CV cs.MM cs.SD eess.AS | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Audio is essential for multimodal video understanding. On the one hand, video
inherently contains audio, which supplies complementary information to vision.
Besides, video large language models (Video-LLMs) can encounter many
audio-centric settings. However, existing Video-LLMs and Audio-Visual Large
Language Models (AV-LLMs) exhibit deficiencies in exploiting audio information,
leading to weak understanding and hallucinations. To solve the issues, we delve
into the model architecture and dataset. (1) From the architectural
perspective, we propose a fine-grained AV-LLM, namely Dolphin. The concurrent
alignment of audio and visual modalities in both temporal and spatial
dimensions ensures a comprehensive and accurate understanding of videos.
Specifically, we devise an audio-visual multi-scale adapter for multi-scale
information aggregation, which achieves spatial alignment. For temporal
alignment, we propose audio-visual interleaved merging. (2) From the dataset
perspective, we curate an audio-visual caption and instruction-tuning dataset,
called AVU. It comprises 5.2 million diverse, open-ended data tuples (video,
audio, question, answer) and introduces a novel data partitioning strategy.
Extensive experiments show our model not only achieves remarkable performance
in audio-visual understanding, but also mitigates potential hallucinations.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 18:47:09 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Guo",
"Yuxin",
""
],
[
"Ma",
"Shuailei",
""
],
[
"Ma",
"Shijie",
""
],
[
"Bao",
"Xiaoyi",
""
],
[
"Xie",
"Chen-Wei",
""
],
[
"Zheng",
"Kecheng",
""
],
[
"Weng",
"Tingyu",
""
],
[
"Sun",
"Siyang",
""
],
[
"Zheng",
"Yun",
""
],
[
"Zou",
"Wei",
""
]
] | TITLE: Aligned Better, Listen Better for Audio-Visual Large Language Models
ABSTRACT: Audio is essential for multimodal video understanding. On the one hand, video
inherently contains audio, which supplies complementary information to vision.
On the other hand, video large language models (Video-LLMs) can encounter many
audio-centric settings. However, existing Video-LLMs and Audio-Visual Large
Language Models (AV-LLMs) exhibit deficiencies in exploiting audio information,
leading to weak understanding and hallucinations. To solve the issues, we delve
into the model architecture and dataset. (1) From the architectural
perspective, we propose a fine-grained AV-LLM, namely Dolphin. The concurrent
alignment of audio and visual modalities in both temporal and spatial
dimensions ensures a comprehensive and accurate understanding of videos.
Specifically, we devise an audio-visual multi-scale adapter for multi-scale
information aggregation, which achieves spatial alignment. For temporal
alignment, we propose audio-visual interleaved merging. (2) From the dataset
perspective, we curate an audio-visual caption and instruction-tuning dataset,
called AVU. It comprises 5.2 million diverse, open-ended data tuples (video,
audio, question, answer) and introduces a novel data partitioning strategy.
Extensive experiments show our model not only achieves remarkable performance
in audio-visual understanding, but also mitigates potential hallucinations.
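The temporal-alignment idea can be illustrated with a toy interleaving of token sequences. This is one plausible reading of "audio-visual interleaved merging" under assumed shapes and a 1:1 ratio, not Dolphin's exact scheme.

```python
# Temporally interleave audio and visual tokens into one sequence.
import torch

B, T, D = 2, 8, 16
video_tokens = torch.randn(B, T, D)   # one visual token per time step (assumed)
audio_tokens = torch.randn(B, T, D)   # one audio token per time step (assumed)

# Stack along a new axis, then flatten so the sequence alternates v0,a0,v1,a1,...
interleaved = torch.stack((video_tokens, audio_tokens), dim=2).reshape(B, 2 * T, D)
print(interleaved.shape)  # torch.Size([2, 16, 16])
```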
|
2504.02067 | Mete Kemertas | Mete Kemertas, Amir-massoud Farahmand, Allan D. Jepson | A Truncated Newton Method for Optimal Transport | Accepted to ICLR 2025 | null | null | null | cs.LG cs.MS math.OC | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Developing a contemporary optimal transport (OT) solver requires navigating
trade-offs among several critical requirements: GPU parallelization,
scalability to high-dimensional problems, theoretical convergence guarantees,
empirical performance in terms of precision versus runtime, and numerical
stability in practice. With these challenges in mind, we introduce a
specialized truncated Newton algorithm for entropic-regularized OT. In addition
to proving that locally quadratic convergence is possible without assuming a
Lipschitz Hessian, we provide strategies to maximally exploit the high rate of
local convergence in practice. Our GPU-parallel algorithm exhibits
exceptionally favorable runtime performance, achieving high precision orders of
magnitude faster than many existing alternatives. This is evidenced by
wall-clock time experiments on 24 problem sets (12 datasets $\times$ 2 cost
functions). The scalability of the algorithm is showcased on an extremely large
OT problem with $n \approx 10^6$, solved approximately under weak entropic
regularization.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 19:00:24 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Kemertas",
"Mete",
""
],
[
"Farahmand",
"Amir-massoud",
""
],
[
"Jepson",
"Allan D.",
""
]
] | TITLE: A Truncated Newton Method for Optimal Transport
ABSTRACT: Developing a contemporary optimal transport (OT) solver requires navigating
trade-offs among several critical requirements: GPU parallelization,
scalability to high-dimensional problems, theoretical convergence guarantees,
empirical performance in terms of precision versus runtime, and numerical
stability in practice. With these challenges in mind, we introduce a
specialized truncated Newton algorithm for entropic-regularized OT. In addition
to proving that locally quadratic convergence is possible without assuming a
Lipschitz Hessian, we provide strategies to maximally exploit the high rate of
local convergence in practice. Our GPU-parallel algorithm exhibits
exceptionally favorable runtime performance, achieving high precision orders of
magnitude faster than many existing alternatives. This is evidenced by
wall-clock time experiments on 24 problem sets (12 datasets $\times$ 2 cost
functions). The scalability of the algorithm is showcased on an extremely large
OT problem with $n \approx 10^6$, solved approximately under weak entropic
regularization.
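As a reference point for the entropic-regularized problem the paper targets, here is a standard Sinkhorn baseline, not the paper's truncated Newton solver; the problem size, cost function, and regularization strength are illustrative assumptions.

```python
# Sinkhorn iterations for entropic-regularized optimal transport (baseline).
import numpy as np

def sinkhorn(a, b, C, eps=0.1, iters=500):
    K = np.exp(-C / eps)                   # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)                  # alternate the two dual scalings
        u = a / (K @ v)
    return u[:, None] * K * v[None, :]     # transport plan

n = 50
rng = np.random.default_rng(1)
a = np.full(n, 1 / n); b = np.full(n, 1 / n)
C = np.abs(rng.normal(size=(n, 1)) - rng.normal(size=(1, n)))  # |x_i - y_j| cost
P = sinkhorn(a, b, C)
print(P.sum(), (P * C).sum())              # total mass ~ 1, entropic OT cost
```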
|
2504.02069 | Zhiyuan Zhang | Zhiyuan Zhang, Yuxin He, Yong Sun, Junyu Shi, Lijiang Liu, Qiang Nie | RoboAct-CLIP: Video-Driven Pre-training of Atomic Action Understanding
for Robotics | IROS 2025 | null | null | null | cs.RO cs.AI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Visual Language Models (VLMs) have emerged as pivotal tools for robotic
systems, enabling cross-task generalization, dynamic environmental interaction,
and long-horizon planning through multimodal perception and semantic reasoning.
However, existing open-source VLMs predominantly trained for generic
vision-language alignment tasks fail to model temporally correlated action
semantics that are crucial for robotic manipulation effectively. While current
image-based fine-tuning methods partially adapt VLMs to robotic applications,
they fundamentally disregard temporal evolution patterns in video sequences and
suffer from visual feature entanglement between robotic agents, manipulated
objects, and environmental contexts, thereby limiting semantic decoupling
capability for atomic actions and compromising model generalizability. To
overcome these challenges, this work presents RoboAct-CLIP with dual technical
contributions: 1) A dataset reconstruction framework that performs
semantic-constrained action unit segmentation and re-annotation on open-source
robotic videos, constructing purified training sets containing singular atomic
actions (e.g., "grasp"); 2) A temporal-decoupling fine-tuning strategy based on
Contrastive Language-Image Pretraining (CLIP) architecture, which disentangles
temporal action features across video frames from object-centric
characteristics to achieve hierarchical representation learning of robotic
atomic actions. Experimental results in simulated environments demonstrate that
the RoboAct-CLIP pretrained model achieves a 12% higher success rate than
baseline VLMs, along with superior generalization in multi-object manipulation
tasks.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 19:02:08 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Zhang",
"Zhiyuan",
""
],
[
"He",
"Yuxin",
""
],
[
"Sun",
"Yong",
""
],
[
"Shi",
"Junyu",
""
],
[
"Liu",
"Lijiang",
""
],
[
"Nie",
"Qiang",
""
]
] | TITLE: RoboAct-CLIP: Video-Driven Pre-training of Atomic Action Understanding
for Robotics
ABSTRACT: Visual Language Models (VLMs) have emerged as pivotal tools for robotic
systems, enabling cross-task generalization, dynamic environmental interaction,
and long-horizon planning through multimodal perception and semantic reasoning.
However, existing open-source VLMs predominantly trained for generic
vision-language alignment tasks fail to model temporally correlated action
semantics that are crucial for robotic manipulation effectively. While current
image-based fine-tuning methods partially adapt VLMs to robotic applications,
they fundamentally disregard temporal evolution patterns in video sequences and
suffer from visual feature entanglement between robotic agents, manipulated
objects, and environmental contexts, thereby limiting semantic decoupling
capability for atomic actions and compromising model generalizability. To
overcome these challenges, this work presents RoboAct-CLIP with dual technical
contributions: 1) A dataset reconstruction framework that performs
semantic-constrained action unit segmentation and re-annotation on open-source
robotic videos, constructing purified training sets containing singular atomic
actions (e.g., "grasp"); 2) A temporal-decoupling fine-tuning strategy based on
Contrastive Language-Image Pretraining (CLIP) architecture, which disentangles
temporal action features across video frames from object-centric
characteristics to achieve hierarchical representation learning of robotic
atomic actions. Experimental results in simulated environments demonstrate that
the RoboAct-CLIP pretrained model achieves a 12% higher success rate than
baseline VLMs, along with superior generalization in multi-object manipulation
tasks.
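A generic CLIP-style video-text contrastive loss, sketched below, shows the kind of objective such fine-tuning builds on; the temporal mean-pooling and temperature value are assumptions, not the paper's temporal-decoupling strategy.

```python
# Symmetric video-text contrastive loss over a batch of (video, action) pairs.
import torch
import torch.nn.functional as F

B, T, D = 4, 6, 32
frame_feats = torch.randn(B, T, D)                    # per-frame visual features
text_feats = F.normalize(torch.randn(B, D), dim=-1)   # e.g., embeddings of "grasp"

video_feats = F.normalize(frame_feats.mean(dim=1), dim=-1)  # simple temporal pooling
logits = video_feats @ text_feats.t() / 0.07          # 0.07 is a typical temperature
targets = torch.arange(B)                             # matched pairs on the diagonal
loss = (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2
print(loss.item())
```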
|
2504.02107 | Jeffrey Li | Jeffrey Li, Mohammadreza Armandpour, Iman Mirzadeh, Sachin Mehta,
Vaishaal Shankar, Raviteja Vemulapalli, Samy Bengio, Oncel Tuzel, Mehrdad
Farajtabar, Hadi Pouransari, Fartash Faghri | TiC-LM: A Web-Scale Benchmark for Time-Continual LLM Pretraining | Code available at: https://github.com/apple/ml-tic-lm | null | null | null | cs.LG cs.CL | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Large Language Models (LLMs) trained on historical web data inevitably become
outdated. We investigate evaluation strategies and update methods for LLMs as
new data becomes available. We introduce a web-scale dataset for time-continual
pretraining of LLMs derived from 114 dumps of Common Crawl (CC) - orders of
magnitude larger than previous continual language modeling benchmarks. We also
design time-stratified evaluations across both general CC data and specific
domains (Wikipedia, StackExchange, and code documentation) to assess how well
various continual learning methods adapt to new data while retaining past
knowledge. Our findings demonstrate that, on general CC data, autoregressive
meta-schedules combined with a fixed-ratio replay of older data can achieve
comparable held-out loss to re-training from scratch, while requiring
significantly less computation (2.6x). However, the optimal balance between
incorporating new data and replaying old data differs as replay is crucial to
avoid forgetting on generic web data but less so on specific domains.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 20:11:54 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Li",
"Jeffrey",
""
],
[
"Armandpour",
"Mohammadreza",
""
],
[
"Mirzadeh",
"Iman",
""
],
[
"Mehta",
"Sachin",
""
],
[
"Shankar",
"Vaishaal",
""
],
[
"Vemulapalli",
"Raviteja",
""
],
[
"Bengio",
"Samy",
""
],
[
"Tuzel",
"Oncel",
""
],
[
"Farajtabar",
"Mehrdad",
""
],
[
"Pouransari",
"Hadi",
""
],
[
"Faghri",
"Fartash",
""
]
] | TITLE: TiC-LM: A Web-Scale Benchmark for Time-Continual LLM Pretraining
ABSTRACT: Large Language Models (LLMs) trained on historical web data inevitably become
outdated. We investigate evaluation strategies and update methods for LLMs as
new data becomes available. We introduce a web-scale dataset for time-continual
pretraining of LLMs derived from 114 dumps of Common Crawl (CC) - orders of
magnitude larger than previous continual language modeling benchmarks. We also
design time-stratified evaluations across both general CC data and specific
domains (Wikipedia, StackExchange, and code documentation) to assess how well
various continual learning methods adapt to new data while retaining past
knowledge. Our findings demonstrate that, on general CC data, autoregressive
meta-schedules combined with a fixed-ratio replay of older data can achieve
comparable held-out loss to re-training from scratch, while requiring
significantly less computation (2.6x). However, the optimal balance between
incorporating new data and replaying old data differs as replay is crucial to
avoid forgetting on generic web data but less so on specific domains.
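Fixed-ratio replay can be sketched as a simple batch-mixing rule; the 50/50 ratio and the toy document identifiers below are assumptions for illustration, not the benchmark's tuned settings.

```python
# Mix a training batch from the newest CC dump and a replay buffer of old data.
import random

def replay_mixture(new_docs, old_docs, replay_ratio=0.5, batch_size=8, seed=0):
    rng = random.Random(seed)
    n_old = int(batch_size * replay_ratio)             # fixed share of old data
    batch = rng.sample(old_docs, n_old) + rng.sample(new_docs, batch_size - n_old)
    rng.shuffle(batch)
    return batch

old = [f"cc-2019-doc-{i}" for i in range(100)]
new = [f"cc-2024-doc-{i}" for i in range(100)]
print(replay_mixture(new, old))
```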
|
2504.02111 | Giannis Chatziveroglou | Giannis Chatziveroglou, Richard Yun, Maura Kelleher | Exploring LLM Reasoning Through Controlled Prompt Variations | null | null | null | null | cs.AI cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | This study investigates the reasoning robustness of large language models
(LLMs) on mathematical problem-solving tasks under systematically introduced
input perturbations. Using the GSM8K dataset as a controlled testbed, we
evaluate how well state-of-the-art models maintain logical consistency and
correctness when confronted with four categories of prompt perturbations:
irrelevant context, pathological instructions, factually relevant but
non-essential context, and a combination of the latter two. Our experiments,
conducted on thirteen open-source and closed-source LLMs, reveal that
introducing irrelevant context within the model's context window significantly
degrades performance, suggesting that distinguishing essential from extraneous
details remains a pressing challenge. Surprisingly, performance regressions are
relatively insensitive to the complexity of the reasoning task, as measured by
the number of steps required, and are not strictly correlated with model size.
Moreover, we observe that certain perturbations inadvertently trigger
chain-of-thought-like reasoning behaviors, even without explicit prompting. Our
findings highlight critical vulnerabilities in current LLMs and underscore the
need for improved robustness against noisy, misleading, and contextually dense
inputs, paving the way for more resilient and reliable reasoning in real-world
applications.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 20:18:50 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Chatziveroglou",
"Giannis",
""
],
[
"Yun",
"Richard",
""
],
[
"Kelleher",
"Maura",
""
]
] | TITLE: Exploring LLM Reasoning Through Controlled Prompt Variations
ABSTRACT: This study investigates the reasoning robustness of large language models
(LLMs) on mathematical problem-solving tasks under systematically introduced
input perturbations. Using the GSM8K dataset as a controlled testbed, we
evaluate how well state-of-the-art models maintain logical consistency and
correctness when confronted with four categories of prompt perturbations:
irrelevant context, pathological instructions, factually relevant but
non-essential context, and a combination of the latter two. Our experiments,
conducted on thirteen open-source and closed-source LLMs, reveal that
introducing irrelevant context within the model's context window significantly
degrades performance, suggesting that distinguishing essential from extraneous
details remains a pressing challenge. Surprisingly, performance regressions are
relatively insensitive to the complexity of the reasoning task, as measured by
the number of steps required, and are not strictly correlated with model size.
Moreover, we observe that certain perturbations inadvertently trigger
chain-of-thought-like reasoning behaviors, even without explicit prompting. Our
findings highlight critical vulnerabilities in current LLMs and underscore the
need for improved robustness against noisy, misleading, and contextually dense
inputs, paving the way for more resilient and reliable reasoning in real-world
applications.
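One of the four perturbation families, irrelevant context, reduces to prepending a distractor to the prompt; the template and distractor sentence below are illustrative assumptions, not the study's exact materials.

```python
# Build a GSM8K-style prompt perturbed with irrelevant context.
def with_irrelevant_context(question: str, distractor: str) -> str:
    return f"{distractor}\n\n{question}"

q = ("Natalia sold clips to 48 of her friends in April, and then she sold "
     "half as many clips in May. How many clips did she sell altogether?")
d = "Fun fact: the Eiffel Tower grows about 15 cm taller during hot summers."
print(with_irrelevant_context(q, d))
```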
|
2504.02116 | Xiulin Yang | Xiulin Yang | Language Models at the Syntax-Semantics Interface: A Case Study of the
Long-Distance Binding of Chinese Reflexive ziji | null | COLING 2025 | null | null | cs.CL | http://creativecommons.org/licenses/by/4.0/ | This paper explores whether language models can effectively resolve the
complex binding patterns of the Mandarin Chinese reflexive ziji, which are
constrained by both syntactic and semantic factors. We construct a dataset of
240 synthetic sentences using templates and examples from syntactic literature,
along with 320 natural sentences from the BCC corpus. Evaluating 21 language
models against this dataset and comparing their performance to judgments from
native Mandarin speakers, we find that none of the models consistently
replicates human-like judgments. The results indicate that existing language
models tend to rely heavily on sequential cues, though not always favoring the
closest strings, and often overlooking subtle semantic and syntactic
constraints. They tend to be more sensitive to noun-related than verb-related
semantics.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 20:25:27 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Yang",
"Xiulin",
""
]
] | TITLE: Language Models at the Syntax-Semantics Interface: A Case Study of the
Long-Distance Binding of Chinese Reflexive ziji
ABSTRACT: This paper explores whether language models can effectively resolve the
complex binding patterns of the Mandarin Chinese reflexive ziji, which are
constrained by both syntactic and semantic factors. We construct a dataset of
240 synthetic sentences using templates and examples from syntactic literature,
along with 320 natural sentences from the BCC corpus. Evaluating 21 language
models against this dataset and comparing their performance to judgments from
native Mandarin speakers, we find that none of the models consistently
replicates human-like judgments. The results indicate that existing language
models tend to rely heavily on sequential cues, though not always favoring the
closest strings, and often overlooking subtle semantic and syntactic
constraints. They tend to be more sensitive to noun-related than verb-related
semantics.
|
2504.02119 | Hoda Eldardiry | Wang Wei, Tiankai Yang, Hongjie Chen, Ryan A. Rossi, Yue Zhao, Franck
Dernoncourt, Hoda Eldardiry | Efficient Model Selection for Time Series Forecasting via LLMs | 16 pages, 3 Figures | null | null | null | cs.LG | http://creativecommons.org/licenses/by/4.0/ | Model selection is a critical step in time series forecasting, traditionally
requiring extensive performance evaluations across various datasets.
Meta-learning approaches aim to automate this process, but they typically
depend on pre-constructed performance matrices, which are costly to build. In
this work, we propose to leverage Large Language Models (LLMs) as a lightweight
alternative for model selection. Our method eliminates the need for explicit
performance matrices by utilizing the inherent knowledge and reasoning
capabilities of LLMs. Through extensive experiments with LLaMA, GPT and Gemini,
we demonstrate that our approach outperforms traditional meta-learning
techniques and heuristic baselines, while significantly reducing computational
overhead. These findings underscore the potential of LLMs in efficient model
selection for time series forecasting.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 20:33:27 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Wei",
"Wang",
""
],
[
"Yang",
"Tiankai",
""
],
[
"Chen",
"Hongjie",
""
],
[
"Rossi",
"Ryan A.",
""
],
[
"Zhao",
"Yue",
""
],
[
"Dernoncourt",
"Franck",
""
],
[
"Eldardiry",
"Hoda",
""
]
] | TITLE: Efficient Model Selection for Time Series Forecasting via LLMs
ABSTRACT: Model selection is a critical step in time series forecasting, traditionally
requiring extensive performance evaluations across various datasets.
Meta-learning approaches aim to automate this process, but they typically
depend on pre-constructed performance matrices, which are costly to build. In
this work, we propose to leverage Large Language Models (LLMs) as a lightweight
alternative for model selection. Our method eliminates the need for explicit
performance matrices by utilizing the inherent knowledge and reasoning
capabilities of LLMs. Through extensive experiments with LLaMA, GPT and Gemini,
we demonstrate that our approach outperforms traditional meta-learning
techniques and heuristic baselines, while significantly reducing computational
overhead. These findings underscore the potential of LLMs in efficient model
selection for time series forecasting.
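The approach can be pictured as prompting an LLM with dataset metadata and a list of candidate forecasters; the metadata fields and candidate names below are hypothetical, since the abstract does not specify the prompt format.

```python
# Construct a model-selection prompt from time-series metadata (hypothetical schema).
candidates = ["ARIMA", "ETS", "Theta", "DeepAR", "PatchTST"]

def build_selection_prompt(meta: dict) -> str:
    lines = [f"- {k}: {v}" for k, v in meta.items()]
    return (
        "You are selecting a forecasting model for the time series below.\n"
        + "\n".join(lines)
        + f"\nChoose exactly one of: {', '.join(candidates)}. Answer with the name only."
    )

print(build_selection_prompt({"frequency": "hourly", "length": 720,
                              "seasonality": "daily", "domain": "electricity"}))
```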
|
2504.02146 | Lingzhi Shen | Lingzhi Shen, Yunfei Long, Xiaohao Cai, Guanming Chen, Yuhan Wang,
Imran Razzak, Shoaib Jameel | LL4G: Self-Supervised Dynamic Optimization for Graph-Based Personality
Detection | null | null | null | null | cs.CL cs.LG | http://creativecommons.org/licenses/by/4.0/ | Graph-based personality detection constructs graph structures from textual
data, particularly social media posts. Current methods often struggle with
sparse or noisy data and rely on static graphs, limiting their ability to
capture dynamic changes between nodes and relationships. This paper introduces
LL4G, a self-supervised framework leveraging large language models (LLMs) to
optimize graph neural networks (GNNs). LLMs extract rich semantic features to
generate node representations and to infer explicit and implicit relationships.
The graph structure adaptively adds nodes and edges based on input data,
continuously optimizing itself. The GNN then uses these optimized
representations for joint training on node reconstruction, edge prediction, and
contrastive learning tasks. This integration of semantic and structural
information generates robust personality profiles. Experimental results on
Kaggle and Pandora datasets show LL4G outperforms state-of-the-art models.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 21:46:30 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Shen",
"Lingzhi",
""
],
[
"Long",
"Yunfei",
""
],
[
"Cai",
"Xiaohao",
""
],
[
"Chen",
"Guanming",
""
],
[
"Wang",
"Yuhan",
""
],
[
"Razzak",
"Imran",
""
],
[
"Jameel",
"Shoaib",
""
]
] | TITLE: LL4G: Self-Supervised Dynamic Optimization for Graph-Based Personality
Detection
ABSTRACT: Graph-based personality detection constructs graph structures from textual
data, particularly social media posts. Current methods often struggle with
sparse or noisy data and rely on static graphs, limiting their ability to
capture dynamic changes between nodes and relationships. This paper introduces
LL4G, a self-supervised framework leveraging large language models (LLMs) to
optimize graph neural networks (GNNs). LLMs extract rich semantic features to
generate node representations and to infer explicit and implicit relationships.
The graph structure adaptively adds nodes and edges based on input data,
continuously optimizing itself. The GNN then uses these optimized
representations for joint training on node reconstruction, edge prediction, and
contrastive learning tasks. This integration of semantic and structural
information generates robust personality profiles. Experimental results on
Kaggle and Pandora datasets show LL4G outperforms state-of-the-art models.
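A generic sketch of the three-task joint objective (node reconstruction, edge prediction, contrastive learning) is given below; the tiny random graph, equal loss weighting, and omission of negative edge sampling are assumptions, not LL4G's implementation.

```python
# Joint loss over node reconstruction, edge prediction, and a contrastive term.
import torch
import torch.nn.functional as F

N, D = 10, 16
z = torch.randn(N, D, requires_grad=True)       # node embeddings from a GNN
x = torch.randn(N, D)                           # LLM-derived node features

recon = F.mse_loss(z, x)                        # node reconstruction
src, dst = torch.randint(0, N, (2, 20))
edge_logits = (z[src] * z[dst]).sum(-1)
edge = F.binary_cross_entropy_with_logits(       # observed edges only;
    edge_logits, torch.ones(20))                 # negative sampling omitted here
z2 = F.normalize(z + 0.1 * torch.randn_like(z), dim=-1)  # augmented view
sim = F.normalize(z, dim=-1) @ z2.t() / 0.1
contrast = F.cross_entropy(sim, torch.arange(N))         # InfoNCE-style term

loss = recon + edge + contrast
loss.backward()
print(loss.item())
```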
|
2504.02148 | Heming Zhang | Heming Zhang, Tim Xu, Dekang Cao, Shunning Liang, Lars
Schimmelpfennig, Levi Kaster, Di Huang, Carlos Cruchaga, Guangfu Li, Michael
Province, Yixin Chen, Philip Payne, Fuhai Li | OmniCellTOSG: The First Cell Text-Omic Signaling Graphs Dataset for
Joint LLM and GNN Modeling | null | null | null | null | cs.AI cs.LG | http://creativecommons.org/licenses/by/4.0/ | Complex cell signaling systems -- governed by varying protein abundances and
interactions -- generate diverse cell types across organs. These systems evolve
under influences such as age, sex, diet, environmental exposures, and diseases,
making them challenging to decode given the involvement of tens of thousands of
genes and proteins. Recently, hundreds of millions of single-cell omics data
have provided a robust foundation for understanding these signaling networks
within various cell subpopulations and conditions. Inspired by the success of
large foundation models (for example, large language models and large vision
models) pre-trained on massive datasets, we introduce OmniCellTOSG, the first
dataset of cell text-omic signaling graphs (TOSGs). Each TOSG represents the
signaling network of an individual or meta-cell and is labeled with information
such as organ, disease, sex, age, and cell subtype. OmniCellTOSG offers two key
contributions. First, it introduces a novel graph model that integrates
human-readable annotations -- such as biological functions, cellular locations,
signaling pathways, related diseases, and drugs -- with quantitative gene and
protein abundance data, enabling graph reasoning to decode cell signaling. This
approach calls for new joint models combining large language models and graph
neural networks. Second, the dataset is built from single-cell RNA sequencing
data of approximately 120 million cells from diverse tissues and conditions
(healthy and diseased) and is fully compatible with PyTorch. This facilitates
the development of innovative cell signaling models that could transform
research in life sciences, healthcare, and precision medicine. The OmniCellTOSG
dataset is continuously expanding and will be updated regularly. The dataset
and code are available at https://github.com/FuhaiLiAiLab/OmniCellTOSG.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 21:47:58 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Zhang",
"Heming",
""
],
[
"Xu",
"Tim",
""
],
[
"Cao",
"Dekang",
""
],
[
"Liang",
"Shunning",
""
],
[
"Schimmelpfennig",
"Lars",
""
],
[
"Kaster",
"Levi",
""
],
[
"Huang",
"Di",
""
],
[
"Cruchaga",
"Carlos",
""
],
[
"Li",
"Guangfu",
""
],
[
"Province",
"Michael",
""
],
[
"Chen",
"Yixin",
""
],
[
"Payne",
"Philip",
""
],
[
"Li",
"Fuhai",
""
]
] | TITLE: OmniCellTOSG: The First Cell Text-Omic Signaling Graphs Dataset for
Joint LLM and GNN Modeling
ABSTRACT: Complex cell signaling systems -- governed by varying protein abundances and
interactions -- generate diverse cell types across organs. These systems evolve
under influences such as age, sex, diet, environmental exposures, and diseases,
making them challenging to decode given the involvement of tens of thousands of
genes and proteins. Recently, hundreds of millions of single-cell omics data
have provided a robust foundation for understanding these signaling networks
within various cell subpopulations and conditions. Inspired by the success of
large foundation models (for example, large language models and large vision
models) pre-trained on massive datasets, we introduce OmniCellTOSG, the first
dataset of cell text-omic signaling graphs (TOSGs). Each TOSG represents the
signaling network of an individual or meta-cell and is labeled with information
such as organ, disease, sex, age, and cell subtype. OmniCellTOSG offers two key
contributions. First, it introduces a novel graph model that integrates
human-readable annotations -- such as biological functions, cellular locations,
signaling pathways, related diseases, and drugs -- with quantitative gene and
protein abundance data, enabling graph reasoning to decode cell signaling. This
approach calls for new joint models combining large language models and graph
neural networks. Second, the dataset is built from single-cell RNA sequencing
data of approximately 120 million cells from diverse tissues and conditions
(healthy and diseased) and is fully compatible with PyTorch. This facilitates
the development of innovative cell signaling models that could transform
research in life sciences, healthcare, and precision medicine. The OmniCellTOSG
dataset is continuously expanding and will be updated regularly. The dataset
and code are available at https://github.com/FuhaiLiAiLab/OmniCellTOSG.
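Since the dataset is described as PyTorch-compatible graphs, a PyTorch Geometric `Data` object is one natural container; the field names and sizes below are hypothetical, not the released schema.

```python
# Hypothetical container for one text-omic signaling graph (TOSG).
import torch
from torch_geometric.data import Data

num_genes = 5
x = torch.rand(num_genes, 1)                      # per-node abundance values
edge_index = torch.tensor([[0, 1, 2, 3],          # directed signaling edges
                           [1, 2, 3, 4]])
tosg = Data(x=x, edge_index=edge_index,
            organ="brain", disease="AD", cell_type="microglia")  # assumed labels
print(tosg)
```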
|
2504.02151 | Jiztom Kavalakkatt Francis | Jiztom Kavalakkatt Francis, Matthew J Darr | Multivariate Temporal Regression at Scale: A Three-Pillar Framework
Combining ML, XAI, and NLP | 7 pages | null | null | null | cs.LG cs.AI cs.CV | http://creativecommons.org/licenses/by/4.0/ | The rapid adoption of artificial intelligence (AI) in processes such as coding,
image processing, and data prediction makes it crucial to fully understand and
validate the data we are working with. This paper dives into the hurdles
of analyzing high-dimensional data, especially when it gets too complex.
Traditional methods in data analysis often look at direct connections between
input variables, which can miss out on the more complicated relationships
within the data.
To address these issues, we explore several tested techniques, such as
removing specific variables to see their impact and using statistical analysis
to find connections between multiple variables. We also consider the role of
synthetic data and how information can sometimes be redundant across different
sensors. These analyses are typically very computationally demanding and often
require much human effort to make sense of the results.
A common approach is to treat the entire dataset as one unit and apply
advanced models to handle it. However, this can become problematic with larger,
noisier datasets and more complex models. So, we suggest methods to identify
overall patterns that can help with tasks like classification or regression
based on the idea that more straightforward approaches might be more
understandable.
Our research looks at two datasets: a real-world dataset and a synthetic one.
The goal is to create a methodology that highlights key features on a global
scale that lead to predictions, making it easier to validate or quantify the
data set. By reducing the dimensionality with this method, we can simplify the
models used and thus clarify the insights we gain. Furthermore, our method can
reveal unexplored relationships between specific inputs and outcomes, providing
a way to validate these new connections further.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 21:53:03 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Francis",
"Jiztom Kavalakkatt",
""
],
[
"Darr",
"Matthew J",
""
]
] | TITLE: Multivariate Temporal Regression at Scale: A Three-Pillar Framework
Combining ML, XAI, and NLP
ABSTRACT: The rapid adoption of artificial intelligence (AI) in processes such as coding,
image processing, and data prediction makes it crucial to fully understand and
validate the data we are working with. This paper dives into the hurdles
of analyzing high-dimensional data, especially when it gets too complex.
Traditional methods in data analysis often look at direct connections between
input variables, which can miss out on the more complicated relationships
within the data.
To address these issues, we explore several tested techniques, such as
removing specific variables to see their impact and using statistical analysis
to find connections between multiple variables. We also consider the role of
synthetic data and how information can sometimes be redundant across different
sensors. These analyses are typically very computationally demanding and often
require much human effort to make sense of the results.
A common approach is to treat the entire dataset as one unit and apply
advanced models to handle it. However, this can become problematic with larger,
noisier datasets and more complex models. So, we suggest methods to identify
overall patterns that can help with tasks like classification or regression
based on the idea that more straightforward approaches might be more
understandable.
Our research looks at two datasets: a real-world dataset and a synthetic one.
The goal is to create a methodology that highlights key features on a global
scale that lead to predictions, making it easier to validate or quantify the
data set. By reducing the dimensionality with this method, we can simplify the
models used and thus clarify the insights we gain. Furthermore, our method can
reveal unexplored relationships between specific inputs and outcomes, providing
a way to validate these new connections further.
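The "perturb a variable and observe its impact" idea maps naturally onto permutation importance; the synthetic data below is an assumption for illustration, not one of the paper's two datasets.

```python
# Global feature relevance via permutation importance.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 5))
y = 3 * X[:, 0] + X[:, 1] ** 2 + 0.1 * rng.normal(size=300)  # only x0, x1 matter

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
imp = permutation_importance(model, X, y, n_repeats=10, random_state=0)
print(imp.importances_mean.round(3))  # high for the first two features, ~0 elsewhere
```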
|
2504.02154 | Chao Huang | Chao Huang, Susan Liang, Yunlong Tang, Li Ma, Yapeng Tian, Chenliang
Xu | FreSca: Unveiling the Scaling Space in Diffusion Models | Project page: https://wikichao.github.io/FreSca/ | null | null | null | cs.CV | http://creativecommons.org/licenses/by/4.0/ | Diffusion models offer impressive controllability for image tasks, primarily
through noise predictions that encode task-specific information and
classifier-free guidance enabling adjustable scaling. This scaling mechanism
implicitly defines a ``scaling space'' whose potential for fine-grained
semantic manipulation remains underexplored. We investigate this space,
starting with inversion-based editing where the difference between
conditional/unconditional noise predictions carries key semantic information.
Our core contribution stems from a Fourier analysis of noise predictions,
revealing that its low- and high-frequency components evolve differently
throughout diffusion. Based on this insight, we introduce FreSca, a
straightforward method that applies guidance scaling independently to different
frequency bands in the Fourier domain. FreSca demonstrably enhances existing
image editing methods without retraining. Excitingly, its effectiveness extends
to image understanding tasks such as depth estimation, yielding quantitative
gains across multiple datasets.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 22:03:11 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Huang",
"Chao",
""
],
[
"Liang",
"Susan",
""
],
[
"Tang",
"Yunlong",
""
],
[
"Ma",
"Li",
""
],
[
"Tian",
"Yapeng",
""
],
[
"Xu",
"Chenliang",
""
]
] | TITLE: FreSca: Unveiling the Scaling Space in Diffusion Models
ABSTRACT: Diffusion models offer impressive controllability for image tasks, primarily
through noise predictions that encode task-specific information and
classifier-free guidance enabling adjustable scaling. This scaling mechanism
implicitly defines a ``scaling space'' whose potential for fine-grained
semantic manipulation remains underexplored. We investigate this space,
starting with inversion-based editing where the difference between
conditional/unconditional noise predictions carries key semantic information.
Our core contribution stems from a Fourier analysis of noise predictions,
revealing that its low- and high-frequency components evolve differently
throughout diffusion. Based on this insight, we introduce FreSca, a
straightforward method that applies guidance scaling independently to different
frequency bands in the Fourier domain. FreSca demonstrably enhances existing
image editing methods without retraining. Excitingly, its effectiveness extends
to image understanding tasks such as depth estimation, yielding quantitative
gains across multiple datasets.
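The core mechanism, scaling guidance independently per frequency band, can be sketched in a few lines; the radial cutoff and gain values below are illustrative assumptions, not the paper's settings.

```python
# Band-wise scaling of a guidance signal in the Fourier domain.
import torch

def band_scaled_guidance(delta, scale_low=1.0, scale_high=1.5, cutoff=0.25):
    """delta: (C, H, W) difference of conditional/unconditional noise predictions."""
    f = torch.fft.fftshift(torch.fft.fft2(delta), dim=(-2, -1))
    _, H, W = delta.shape
    yy, xx = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W),
                            indexing="ij")
    low = ((yy ** 2 + xx ** 2).sqrt() <= cutoff).float()   # radial low-pass mask
    scale = scale_low * low + scale_high * (1.0 - low)     # per-frequency gain
    f = f * scale                                          # broadcasts over channels
    return torch.fft.ifft2(torch.fft.ifftshift(f, dim=(-2, -1))).real

delta = torch.randn(4, 64, 64)
print(band_scaled_guidance(delta).shape)                   # torch.Size([4, 64, 64])
```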
|
2504.02160 | Shaojin Wu | Shaojin Wu, Mengqi Huang, Wenxu Wu, Yufeng Cheng, Fei Ding, Qian He | Less-to-More Generalization: Unlocking More Controllability by
In-Context Generation | Project page: https://bytedance.github.io/UNO Code and model:
https://github.com/bytedance/UNO | null | null | null | cs.CV cs.LG | http://creativecommons.org/licenses/by/4.0/ | Although subject-driven generation has been extensively explored in image
generation due to its wide applications, it still has challenges in data
scalability and subject expansibility. For the first challenge, moving from
curating single-subject datasets to multiple-subject ones and scaling them is
particularly difficult. For the second, most recent methods center on
single-subject generation, making it hard to apply when dealing with
multi-subject scenarios. In this study, we propose a highly-consistent data
synthesis pipeline to tackle this challenge. This pipeline harnesses the
intrinsic in-context generation capabilities of diffusion transformers and
generates high-consistency multi-subject paired data. Additionally, we
introduce UNO, which consists of progressive cross-modal alignment and
universal rotary position embedding. It is a multi-image conditioned
subject-to-image model iteratively trained from a text-to-image model.
Extensive experiments show that our method can achieve high consistency while
ensuring controllability in both single-subject and multi-subject driven
generation.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 22:20:21 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Wu",
"Shaojin",
""
],
[
"Huang",
"Mengqi",
""
],
[
"Wu",
"Wenxu",
""
],
[
"Cheng",
"Yufeng",
""
],
[
"Ding",
"Fei",
""
],
[
"He",
"Qian",
""
]
] | TITLE: Less-to-More Generalization: Unlocking More Controllability by
In-Context Generation
ABSTRACT: Although subject-driven generation has been extensively explored in image
generation due to its wide applications, it still has challenges in data
scalability and subject expansibility. For the first challenge, moving from
curating single-subject datasets to multiple-subject ones and scaling them is
particularly difficult. For the second, most recent methods center on
single-subject generation, making it hard to apply when dealing with
multi-subject scenarios. In this study, we propose a highly-consistent data
synthesis pipeline to tackle this challenge. This pipeline harnesses the
intrinsic in-context generation capabilities of diffusion transformers and
generates high-consistency multi-subject paired data. Additionally, we
introduce UNO, which consists of progressive cross-modal alignment and
universal rotary position embedding. It is a multi-image conditioned
subject-to-image model iteratively trained from a text-to-image model.
Extensive experiments show that our method can achieve high consistency while
ensuring controllability in both single-subject and multi-subject driven
generation.
|
2504.02163 | Lewis Matheson Creed | Lewis Matheson Creed | Neural Style Transfer for Synthesising a Dataset of Ancient Egyptian
Hieroglyphs | 50 Pages, 10 figures, Honours Thesis | null | null | null | cs.LG cs.CL cs.CV | http://creativecommons.org/licenses/by-nc-sa/4.0/ | The limited availability of training data for low-resource languages makes
applying machine learning techniques challenging. Ancient Egyptian is one such
language with few resources. However, innovative applications of data
augmentation methods, such as Neural Style Transfer, could overcome these
barriers. This paper presents a novel method for generating datasets of ancient
Egyptian hieroglyphs by applying NST to a digital typeface. Experimental
results show that image classification models trained on NST-generated
examples and photographs demonstrate equal performance and transferability to
real unseen images of hieroglyphs.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 22:30:45 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Creed",
"Lewis Matheson",
""
]
] | TITLE: Neural Style Transfer for Synthesising a Dataset of Ancient Egyptian
Hieroglyphs
ABSTRACT: The limited availability of training data for low-resource languages makes
applying machine learning techniques challenging. Ancient Egyptian is one such
language with few resources. However, innovative applications of data
augmentation methods, such as Neural Style Transfer, could overcome these
barriers. This paper presents a novel method for generating datasets of ancient
Egyptian hieroglyphs by applying NST to a digital typeface. Experimental
results show that image classification models trained on NST-generated
examples and photographs demonstrate equal performance and transferability to
real unseen images of hieroglyphs.
|
2504.02167 | Ren-Xin Zhao | Ren-Xin Zhao and Xinze Tong and Shi Wang | HQCC: A Hybrid Quantum-Classical Classifier with Adaptive Structure | null | null | null | null | quant-ph cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Parameterized Quantum Circuits (PQCs) with fixed structures severely degrade
the performance of Quantum Machine Learning (QML). To address this, a Hybrid
Quantum-Classical Classifier (HQCC) is proposed. It opens a practical way to
advance QML in the Noisy Intermediate-Scale Quantum (NISQ) era by adaptively
optimizing the PQC through a Long Short-Term Memory (LSTM) driven dynamic
circuit generator, utilizing a local quantum filter for scalable feature
extraction, and exploiting architectural plasticity to balance the entanglement
depth and noise robustness. We realize the HQCC on the TensorCircuit platform
and run simulations on the MNIST and Fashion MNIST datasets, achieving up to
97.12\% accuracy on MNIST and outperforming several alternative methods.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 22:49:00 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Zhao",
"Ren-Xin",
""
],
[
"Tong",
"Xinze",
""
],
[
"Wang",
"Shi",
""
]
] | TITLE: HQCC: A Hybrid Quantum-Classical Classifier with Adaptive Structure
ABSTRACT: Parameterized Quantum Circuits (PQCs) with fixed structures severely degrade
the performance of Quantum Machine Learning (QML). To address this, a Hybrid
Quantum-Classical Classifier (HQCC) is proposed. It opens a practical way to
advance QML in the Noisy Intermediate-Scale Quantum (NISQ) era by adaptively
optimizing the PQC through a Long Short-Term Memory (LSTM) driven dynamic
circuit generator, utilizing a local quantum filter for scalable feature
extraction, and exploiting architectural plasticity to balance the entanglement
depth and noise robustness. We realize the HQCC on the TensorCircuit platform
and run simulations on the MNIST and Fashion MNIST datasets, achieving up to
97.12\% accuracy on MNIST and outperforming several alternative methods.
|
2504.02174 | Minzhao Lyu | Rushi Jayeshkumar Babaria and Minzhao Lyu and Gustavo Batista and
Vijay Sivaraman | FastFlow: Early Yet Robust Network Flow Classification using the Minimal
Number of Time-Series Packets | This paper is accepted at ACM SIGMETRICS 2025. Proc. ACM Meas. Anal.
Comput. Syst (2025) | null | 10.1145/3727115 | null | cs.NI cs.LG | http://arxiv.org/licenses/nonexclusive-distrib/1.0/ | Network traffic classification is of great importance for network operators
in their daily routines, such as analyzing the usage patterns of multimedia
applications and optimizing network configurations. Internet service providers
(ISPs) that operate high-speed links expect network flow classifiers to
accurately classify flows early, using the minimal number of necessary initial
packets per flow. These classifiers must also be robust to packet sequence
disorders in candidate flows and capable of detecting unseen flow types that
are not within the existing classification scope, requirements that existing
methods do not meet well. In this paper, we develop FastFlow, a time-series flow
classification method that accurately classifies network flows as one of the
known types or the unknown type, which dynamically selects the minimal number
of packets to balance accuracy and efficiency. Toward the objectives, we first
develop a flow representation process that converts packet streams at both
per-packet and per-slot granularity for precise packet statistics with
robustness to packet sequence disorders. Second, we develop a sequential
decision-based classification model that leverages LSTM architecture trained
with reinforcement learning. Our model makes dynamic decisions on the minimal
number of time-series data points per flow for the confident classification as
one of the known flow types or an unknown one. We evaluated our method on
public datasets and demonstrated its superior performance in early and accurate
flow classification. Deployment insights on the classification of over 22.9
million flows across seven application types and 33 content providers in a
campus network over one week are discussed, showing that FastFlow requires an
average of only 8.37 packets and 0.5 seconds to classify the application type
of a flow with over 91% accuracy and over 96% accuracy for the content
providers.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 23:17:14 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Babaria",
"Rushi Jayeshkumar",
""
],
[
"Lyu",
"Minzhao",
""
],
[
"Batista",
"Gustavo",
""
],
[
"Sivaraman",
"Vijay",
""
]
] | TITLE: FastFlow: Early Yet Robust Network Flow Classification using the Minimal
Number of Time-Series Packets
ABSTRACT: Network traffic classification is of great importance for network operators
in their daily routines, such as analyzing the usage patterns of multimedia
applications and optimizing network configurations. Internet service providers
(ISPs) that operate high-speed links expect network flow classifiers to
accurately classify flows early, using the minimal number of necessary initial
packets per flow. These classifiers must also be robust to packet sequence
disorders in candidate flows and capable of detecting unseen flow types that
are not within the existing classification scope, which are not well achieved
by existing methods. In this paper, we develop FastFlow, a time-series flow
classification method that accurately classifies network flows as one of the
known types or the unknown type, which dynamically selects the minimal number
of packets to balance accuracy and efficiency. Toward the objectives, we first
develop a flow representation process that converts packet streams at both
per-packet and per-slot granularity for precise packet statistics with
robustness to packet sequence disorders. Second, we develop a sequential
decision-based classification model that leverages LSTM architecture trained
with reinforcement learning. Our model makes dynamic decisions on the minimal
number of time-series data points per flow for the confident classification as
one of the known flow types or an unknown one. We evaluated our method on
public datasets and demonstrated its superior performance in early and accurate
flow classification. Deployment insights on the classification of over 22.9
million flows across seven application types and 33 content providers in a
campus network over one week are discussed, showing that FastFlow requires an
average of only 8.37 packets and 0.5 seconds to classify the application type
of a flow with over 91% accuracy and over 96% accuracy for the content
providers.
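The dynamic-stopping idea can be sketched as an LSTM that reads one packet at a time and halts once its confidence clears a threshold; the feature dimension, threshold, and untrained weights below are assumptions, not FastFlow's learned policy.

```python
# Early flow classification: consume packets sequentially, stop when confident.
import torch
import torch.nn as nn
import torch.nn.functional as F

class EarlyFlowClassifier(nn.Module):
    def __init__(self, feat_dim=4, hidden=32, n_classes=8):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, packets, threshold=0.9):
        state = None
        for t in range(packets.size(1)):
            out, state = self.lstm(packets[:, t:t + 1], state)  # one packet at a time
            probs = F.softmax(self.head(out[:, -1]), dim=-1)
            conf, pred = probs.max(dim=-1)
            if conf.item() >= threshold:          # stop early once confident
                return pred.item(), t + 1
        return pred.item(), packets.size(1)       # fall back to the full prefix

flow = torch.randn(1, 20, 4)                      # one flow, 20 packets (untrained demo)
print(EarlyFlowClassifier()(flow))                # (predicted class, packets consumed)
```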
|
2504.02180 | Pei-Chi Chen | Pei-Chi Chen, Yi Yao, Chan-Feng Hsu, HongXia Xie, Hung-Jen Chen,
Hong-Han Shuai, Wen-Huang Cheng | Foreground Focus: Enhancing Coherence and Fidelity in Camouflaged Image
Generation | null | null | null | null | cs.CV | http://creativecommons.org/publicdomain/zero/1.0/ | Camouflaged image generation is emerging as a solution to data scarcity in
camouflaged vision perception, offering a cost-effective alternative to data
collection and labeling. Recently, the state-of-the-art approach successfully
generates camouflaged images using only foreground objects. However, it faces
two critical weaknesses: 1) the background knowledge does not integrate
effectively with foreground features, resulting in a lack of
foreground-background coherence (e.g., color discrepancy); 2) the generation
process does not prioritize the fidelity of foreground objects, which leads to
distortion, particularly for small objects. To address these issues, we propose
a Foreground-Aware Camouflaged Image Generation (FACIG) model. Specifically, we
introduce a Foreground-Aware Feature Integration Module (FAFIM) to strengthen
the integration between foreground features and background knowledge. In
addition, a Foreground-Aware Denoising Loss is designed to enhance foreground
reconstruction supervision. Experiments on various datasets show our method
outperforms previous methods in overall camouflaged image quality and
foreground fidelity.
| [
{
"version": "v1",
"created": "Wed, 2 Apr 2025 23:51:13 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Chen",
"Pei-Chi",
""
],
[
"Yao",
"Yi",
""
],
[
"Hsu",
"Chan-Feng",
""
],
[
"Xie",
"HongXia",
""
],
[
"Chen",
"Hung-Jen",
""
],
[
"Shuai",
"Hong-Han",
""
],
[
"Cheng",
"Wen-Huang",
""
]
] | TITLE: Foreground Focus: Enhancing Coherence and Fidelity in Camouflaged Image
Generation
ABSTRACT: Camouflaged image generation is emerging as a solution to data scarcity in
camouflaged vision perception, offering a cost-effective alternative to data
collection and labeling. Recently, the state-of-the-art approach successfully
generates camouflaged images using only foreground objects. However, it faces
two critical weaknesses: 1) the background knowledge does not integrate
effectively with foreground features, resulting in a lack of
foreground-background coherence (e.g., color discrepancy); 2) the generation
process does not prioritize the fidelity of foreground objects, which leads to
distortion, particularly for small objects. To address these issues, we propose
a Foreground-Aware Camouflaged Image Generation (FACIG) model. Specifically, we
introduce a Foreground-Aware Feature Integration Module (FAFIM) to strengthen
the integration between foreground features and background knowledge. In
addition, a Foreground-Aware Denoising Loss is designed to enhance foreground
reconstruction supervision. Experiments on various datasets show our method
outperforms previous methods in overall camouflaged image quality and
foreground fidelity.
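One plausible form of a foreground-aware denoising loss is a mask-weighted reconstruction error, sketched below; the weight value and toy mask are assumptions, not the paper's exact formulation.

```python
# Upweight the denoising error inside the foreground object mask.
import torch

def foreground_denoising_loss(eps_pred, eps_true, fg_mask, fg_weight=4.0):
    w = 1.0 + (fg_weight - 1.0) * fg_mask          # 1 on background, fg_weight on object
    return (w * (eps_pred - eps_true) ** 2).mean()

eps_pred, eps_true = torch.randn(2, 4, 32, 32), torch.randn(2, 4, 32, 32)
mask = (torch.rand(2, 1, 32, 32) > 0.8).float()   # toy foreground mask
print(foreground_denoising_loss(eps_pred, eps_true, mask).item())
```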
|
2504.02195 | Hiroki Kanezashi | Hiroki Kanezashi, Toyotaro Suzumura, Cade Reid, Md Mostafizur Rahman,
Yu Hirate | LLM-Augmented Graph Neural Recommenders: Integrating User Reviews | Under Review | null | null | null | cs.IR | http://creativecommons.org/licenses/by/4.0/ | Recommender systems increasingly aim to combine signals from both user
reviews and purchase (or other interaction) behaviors. While user-written
comments provide explicit insights about preferences, merging these textual
representations from large language models (LLMs) with graph-based embeddings
of user actions remains a challenging task. In this work, we propose a
framework that employs both a Graph Neural Network (GNN)-based model and an LLM
to produce review-aware representations, preserving review semantics while
mitigating textual noise. Our approach utilizes a hybrid objective that
balances user-item interactions against text-derived features, ensuring that
users' behavioral and linguistic signals are both effectively captured. We
evaluate this method on multiple datasets from diverse application domains,
demonstrating consistent improvements over a baseline GNN-based recommender
model. Notably, our model achieves significant gains in recommendation accuracy
when review data is sparse or unevenly distributed. These findings highlight
the importance of integrating LLM-driven textual feedback with GNN-derived user
behavioral patterns to develop robust, context-aware recommender systems.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 00:40:09 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Kanezashi",
"Hiroki",
""
],
[
"Suzumura",
"Toyotaro",
""
],
[
"Reid",
"Cade",
""
],
[
"Rahman",
"Md Mostafizur",
""
],
[
"Hirate",
"Yu",
""
]
] | TITLE: LLM-Augmented Graph Neural Recommenders: Integrating User Reviews
ABSTRACT: Recommender systems increasingly aim to combine signals from both user
reviews and purchase (or other interaction) behaviors. While user-written
comments provide explicit insights about preferences, merging these textual
representations from large language models (LLMs) with graph-based embeddings
of user actions remains a challenging task. In this work, we propose a
framework that employs both a Graph Neural Network (GNN)-based model and an LLM
to produce review-aware representations, preserving review semantics while
mitigating textual noise. Our approach utilizes a hybrid objective that
balances user-item interactions against text-derived features, ensuring that
users' behavioral and linguistic signals are both effectively captured. We
evaluate this method on multiple datasets from diverse application domains,
demonstrating consistent improvements over a baseline GNN-based recommender
model. Notably, our model achieves significant gains in recommendation accuracy
when review data is sparse or unevenly distributed. These findings highlight
the importance of integrating LLM-driven textual feedback with GNN-derived user
behavioral patterns to develop robust, context-aware recommender systems.
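The hybrid objective can be sketched as a BPR-style interaction loss plus an alignment term toward frozen LLM review embeddings; the loss weighting and random features below are assumptions, not the paper's tuned objective.

```python
# Hybrid objective: interaction (BPR) loss + alignment to LLM review embeddings.
import torch
import torch.nn.functional as F

U, I, D = 6, 10, 16
user_emb = torch.randn(U, D, requires_grad=True)   # GNN user embeddings
item_emb = torch.randn(I, D, requires_grad=True)   # GNN item embeddings
review_emb = torch.randn(I, D)                     # frozen LLM review features

u = torch.randint(0, U, (32,))
pos, neg = torch.randint(0, I, (32,)), torch.randint(0, I, (32,))
bpr = -F.logsigmoid((user_emb[u] * (item_emb[pos] - item_emb[neg])).sum(-1)).mean()
align = 1 - F.cosine_similarity(item_emb, review_emb, dim=-1).mean()

loss = bpr + 0.5 * align                           # 0.5 is an assumed balance weight
loss.backward()
print(loss.item())
```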
|
2504.02199 | Tae-Young Lee | Tae-Young Lee, Sundong Park, Minwoo Jeon, Hyoseok Hwang, Gyeong-Moon
Park | ESC: Erasing Space Concept for Knowledge Deletion | 22 pages, 14 figures, 18 tables, CVPR 2025 | null | null | null | cs.CV cs.AI | http://creativecommons.org/licenses/by-nc-sa/4.0/ | As concerns regarding privacy in deep learning continue to grow, individuals
are increasingly apprehensive about the potential exploitation of their
personal knowledge in trained models. Despite several research efforts to
address this, they often fail to consider the real-world demand from users for
complete knowledge erasure. Furthermore, our investigation reveals that
existing methods have a risk of leaking personal knowledge through embedding
features. To address these issues, we introduce a novel concept of Knowledge
Deletion (KD), an advanced task that considers both concerns, and provides an
appropriate metric, named Knowledge Retention score (KR), for assessing
knowledge retention in feature space. To achieve this, we propose a novel
training-free erasing approach named Erasing Space Concept (ESC), which
restricts the important subspace for the forgetting knowledge by eliminating
the relevant activations in the feature. In addition, we suggest ESC with
Training (ESC-T), which uses a learnable mask to better balance the trade-off
between forgetting and preserving knowledge in KD. Our extensive experiments on
various datasets and models demonstrate that our proposed methods achieve the
fastest and state-of-the-art performance. Notably, our methods are applicable
to diverse forgetting scenarios, such as the facial domain setting, demonstrating
the generalizability of our methods. The code is available at
http://github.com/KU-VGI/ESC .
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 00:53:09 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Lee",
"Tae-Young",
""
],
[
"Park",
"Sundong",
""
],
[
"Jeon",
"Minwoo",
""
],
[
"Hwang",
"Hyoseok",
""
],
[
"Park",
"Gyeong-Moon",
""
]
] | TITLE: ESC: Erasing Space Concept for Knowledge Deletion
ABSTRACT: As concerns regarding privacy in deep learning continue to grow, individuals
are increasingly apprehensive about the potential exploitation of their
personal knowledge in trained models. Despite several research efforts to
address this, they often fail to consider the real-world demand from users for
complete knowledge erasure. Furthermore, our investigation reveals that
existing methods have a risk of leaking personal knowledge through embedding
features. To address these issues, we introduce a novel concept of Knowledge
Deletion (KD), an advanced task that considers both concerns, and provides an
appropriate metric, named Knowledge Retention score (KR), for assessing
knowledge retention in feature space. To achieve this, we propose a novel
training-free erasing approach named Erasing Space Concept (ESC), which
restricts the important subspace for the forgetting knowledge by eliminating
the relevant activations in the feature. In addition, we suggest ESC with
Training (ESC-T), which uses a learnable mask to better balance the trade-off
between forgetting and preserving knowledge in KD. Our extensive experiments on
various datasets and models demonstrate that our proposed methods achieve the
fastest and state-of-the-art performance. Notably, our methods are applicable
to diverse forgetting scenarios, such as the facial domain setting, demonstrating
the generalizability of our methods. The code is available at
http://github.com/KU-VGI/ESC .
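Restricting a subspace in feature space can be illustrated by projecting features onto the orthogonal complement of a "forget" subspace; estimating that subspace from the top principal directions of the forget set is an assumption here, not necessarily ESC's criterion.

```python
# Erase a concept subspace from features via orthogonal projection.
import torch

feats_forget = torch.randn(100, 64)                # features of data to forget
feats_keep = torch.randn(100, 64)

_, _, Vh = torch.linalg.svd(feats_forget - feats_forget.mean(0), full_matrices=False)
B = Vh[:4].t()                                     # top-4 directions to erase (assumed)
P = torch.eye(64) - B @ B.t()                      # projector onto the complement

erased = feats_keep @ P
print((erased @ B).abs().max())                    # ~0: no energy left in the subspace
```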
|
2504.02213 | Anshul Pundhir | Shourya Goel, Himanshi Tibrewal, Anant Jain, Anshul Pundhir, Pravendra
Singh | Secure Generalization through Stochastic Bidirectional Parameter Updates
Using Dual-Gradient Mechanism | null | null | null | null | cs.LG | http://creativecommons.org/licenses/by-nc-nd/4.0/ | Federated learning (FL) has gained increasing attention due to
privacy-preserving collaborative training on decentralized clients, mitigating
the need to upload sensitive data to a central server directly. Nonetheless,
recent research has underscored the risk of exposing private data to
adversaries, even within FL frameworks. In general, existing methods sacrifice
performance while ensuring resistance to privacy leakage in FL. We overcome
these issues and generate diverse models at a global server through the
proposed stochastic bidirectional parameter update mechanism. Using diverse
models, we improved the generalization and feature representation in the FL
setup, which also helped to improve the robustness of the model against privacy
leakage without hurting the model's utility. We use global models from past FL
rounds to follow systematic perturbation in parameter space at the server to
ensure model generalization and resistance against privacy attacks. We generate
diverse models (in close neighborhoods) for each client by using systematic
perturbations in model parameters at a fine-grained level (i.e., altering each
convolutional filter across the layers of the model) to improve the
generalization and security perspective. We evaluated our proposed approach on
four benchmark datasets to validate its superiority. We surpassed the
state-of-the-art methods in terms of model utility and robustness towards
privacy leakage. We demonstrate the effectiveness of our method through
several quantitative and qualitative evaluations.
| [
{
"version": "v1",
"created": "Thu, 3 Apr 2025 02:06:57 GMT"
}
] | 2025-04-04T00:00:00 | [
[
"Goel",
"Shourya",
""
],
[
"Tibrewal",
"Himanshi",
""
],
[
"Jain",
"Anant",
""
],
[
"Pundhir",
"Anshul",
""
],
[
"Singh",
"Pravendra",
""
]
] | TITLE: Secure Generalization through Stochastic Bidirectional Parameter Updates
Using Dual-Gradient Mechanism
ABSTRACT: Federated learning (FL) has gained increasing attention due to
privacy-preserving collaborative training on decentralized clients, mitigating
the need to upload sensitive data to a central server directly. Nonetheless,
recent research has underscored the risk of exposing private data to
adversaries, even within FL frameworks. In general, existing methods sacrifice
performance while ensuring resistance to privacy leakage in FL. We overcome
these issues and generate diverse models at a global server through the
proposed stochastic bidirectional parameter update mechanism. Using diverse
models, we improved the generalization and feature representation in the FL
setup, which also helped to improve the robustness of the model against privacy
leakage without hurting the model's utility. We use global models from past FL
rounds to follow systematic perturbation in parameter space at the server to
ensure model generalization and resistance against privacy attacks. We generate
diverse models (in close neighborhoods) for each client by using systematic
perturbations in model parameters at a fine-grained level (i.e., altering each
convolutional filter across the layers of the model) to improve the
generalization and security perspective. We evaluated our proposed approach on
four benchmark datasets to validate its superiority. We surpassed the
state-of-the-art methods in terms of model utility and robustness towards
privacy leakage. We demonstrate the effectiveness of our method through
several quantitative and qualitative evaluations.
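Fine-grained, per-filter perturbation of a global model can be sketched as below; the Gaussian noise and uniform scale are assumptions, since the abstract does not specify the perturbation distribution.

```python
# Produce a perturbed copy of a global model with independent noise per conv filter.
import copy
import torch
import torch.nn as nn

def perturb_per_filter(model, sigma=0.01, seed=0):
    g = torch.Generator().manual_seed(seed)
    client_model = copy.deepcopy(model)
    with torch.no_grad():
        for m in client_model.modules():
            if isinstance(m, nn.Conv2d):
                for f in m.weight:                 # one noise draw per filter
                    f.add_(sigma * torch.randn(f.shape, generator=g))
    return client_model

global_model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.ReLU(), nn.Conv2d(8, 8, 3))
print(perturb_per_filter(global_model))
```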
|