- RB-Modulation: Training-Free Personalization of Diffusion Models using Stochastic Optimal Control We propose Reference-Based Modulation (RB-Modulation), a new plug-and-play solution for training-free personalization of diffusion models. Existing training-free approaches exhibit difficulties in (a) style extraction from reference images in the absence of additional style or content text descriptions, (b) unwanted content leakage from reference style images, and (c) effective composition of style and content. RB-Modulation is built on a novel stochastic optimal controller in which a style descriptor encodes the desired attributes through a terminal cost. The resulting drift not only overcomes these difficulties but also ensures high fidelity to the reference style while adhering to the given text prompt. We also introduce a cross-attention-based feature aggregation scheme that allows RB-Modulation to decouple content and style in the reference image. With theoretical justification and empirical evidence, our framework demonstrates precise extraction and control of content and style in a training-free manner. Further, our method enables seamless composition of content and style, marking a departure from the dependency on external adapters or ControlNets. 7 authors · May 27, 2024
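The controller's effect can be illustrated with a toy sketch (not the paper's implementation): a linear `style_descriptor` stands in for a real pretrained style encoder, and each reverse step adds a drift correction equal to the negative gradient of the terminal cost `0.5 * ||psi(x) - psi_ref||^2`. All names, dimensions, and the base drift here are illustrative assumptions.

```python
import numpy as np

def style_descriptor(x, W):
    # Toy linear feature map standing in for a real style encoder (hypothetical).
    return W @ x

def terminal_cost_grad(x, W, style_ref):
    # Gradient of the terminal cost 0.5 * ||psi(x) - psi_ref||^2 w.r.t. x.
    return W.T @ (style_descriptor(x, W) - style_ref)

def controlled_step(x, base_drift, W, style_ref, step, guidance):
    # One Euler step: the base reverse drift plus the optimal-control correction
    # that pulls the sample's style features toward the reference descriptor.
    return x + step * (base_drift(x) - guidance * terminal_cost_grad(x, W, style_ref))

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 8))  # toy 8-dim "image", 4-dim style space
x = rng.standard_normal(8)
style_ref = style_descriptor(rng.standard_normal(8), W)
init_dist = np.linalg.norm(style_descriptor(x, W) - style_ref)

for _ in range(300):
    x = controlled_step(x, lambda z: -z, W, style_ref, step=0.01, guidance=5.0)

final_dist = np.linalg.norm(style_descriptor(x, W) - style_ref)
```

In the actual method the base drift would come from the diffusion model's score network and the descriptor from a pretrained style encoder; the correction term plays the same role as the sketched gradient of the terminal cost.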
- The Frequency-dependent Modulation Features of PSR J1948+3540 Using observations from GMRT and FAST, we conducted multi-wavelength studies of PSR J1948+3540 and analyzed its intensity modulation characteristics in detail. We found that the intensity modulation of this pulsar exhibits broad low-frequency modulation features. The modulation frequency/period is time-dependent, and the dominant modulation component varies with the observing frequency: at low frequencies, the modulation is dominated by the first half of the middle component, while at high frequencies it is dominated by the second half. Spectral analysis revealed that the intensities of the leading and trailing components vary with the observing frequency, whereas the middle component does not change significantly. In addition, polarization analysis reveals that the peak of the radiation intensity lies in the latter half of the middle component, whereas the linear polarization dominates in the former half. However, given the low degree of linear polarization, the change of the dominant modulation component with observing frequency cannot be attributed to variations in linear polarization. The phenomenon of the dominant modulation component varying with observing frequency has not been reported before and remains difficult to explain within the current theoretical framework. 9 authors · May 6
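Intensity-modulation analyses of this kind are commonly built on a longitude-resolved fluctuation spectrum (LRFS): an FFT along the pulse-number axis of the pulse stack. A minimal sketch on synthetic data (the pulse stack, modulation frequency, and noise level are invented for illustration, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(1)
n_pulses, n_bins = 256, 64

# Synthetic pulse stack: a Gaussian profile whose amplitude is modulated at
# 16/256 = 0.0625 cycles per period (a low-frequency feature), plus noise.
phase = np.arange(n_bins)
profile = np.exp(-0.5 * ((phase - 32) / 4.0) ** 2)
f_mod = 0.0625  # cycles per pulse period
amp = 1.0 + 0.5 * np.sin(2 * np.pi * f_mod * np.arange(n_pulses))
stack = amp[:, None] * profile[None, :] + 0.05 * rng.standard_normal((n_pulses, n_bins))

# Longitude-resolved fluctuation spectrum: FFT along the pulse axis.
lrfs = np.abs(np.fft.rfft(stack - stack.mean(axis=0), axis=0)) ** 2
freqs = np.fft.rfftfreq(n_pulses)  # cycles per period

# Summing over pulse longitude recovers the injected modulation frequency.
spectrum = lrfs.sum(axis=1)
peak_freq = freqs[np.argmax(spectrum[1:]) + 1]  # skip the DC bin
```

Restricting the longitude range before summing is what lets one ask which profile component (leading, middle, trailing) dominates the modulation at each observing frequency.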
- GroupMamba: Parameter-Efficient and Accurate Group Visual State Space Model Recent advancements in state-space models (SSMs) have showcased effective performance in modeling long-range dependencies with subquadratic complexity. However, pure SSM-based models still face challenges related to stability and achieving optimal performance on computer vision tasks. Our paper addresses the challenges of scaling SSM-based models for computer vision, particularly the instability and inefficiency of large model sizes. To address this, we introduce a Modulated Group Mamba layer, which divides the input channels into four groups and applies our proposed SSM-based efficient Visual Single Selective Scanning (VSSS) block independently to each group, with each VSSS block scanning along one of four spatial directions. The Modulated Group Mamba layer also wraps the four VSSS blocks in a channel modulation operator to improve cross-channel communication. Furthermore, we introduce a distillation-based training objective to stabilize the training of large models, leading to consistent performance gains. Our comprehensive experiments demonstrate the merits of the proposed contributions, yielding superior performance over existing methods for image classification on ImageNet-1K, object detection and instance segmentation on MS-COCO, and semantic segmentation on ADE20K. Our tiny variant with 23M parameters achieves state-of-the-art performance, with a classification top-1 accuracy of 83.3% on ImageNet-1K, while being 26% more parameter-efficient than the best existing Mamba design of the same model size. Our code and models are available at: https://github.com/Amshaker/GroupMamba. 5 authors · Jul 18, 2024
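The layer's structure (four channel groups, one scan direction each, then a cross-channel gate) can be sketched in simplified form. Here a causal running mean stands in for the actual selective-scan recurrence, and the channel-modulation operator is reduced to a toy sigmoid gate; both are stand-ins, not the paper's operators.

```python
import numpy as np

def toy_scan(x, direction):
    # Stand-in for the VSSS selective scan: a causal running mean along one of
    # the four spatial directions (left-right, right-left, top-bottom, bottom-top).
    axis, flip = {"lr": (2, False), "rl": (2, True),
                  "tb": (1, False), "bt": (1, True)}[direction]
    if flip:
        x = np.flip(x, axis=axis)
    n = np.arange(1, x.shape[axis] + 1).reshape(
        [-1 if a == axis else 1 for a in range(x.ndim)])
    y = np.cumsum(x, axis=axis) / n
    return np.flip(y, axis=axis) if flip else y

def grouped_mamba_layer(x, gate_w):
    # x: (C, H, W). Split channels into four groups, scan each in one direction,
    # then apply a toy channel-modulation gate for cross-channel communication.
    groups = np.split(x, 4, axis=0)
    scanned = np.concatenate(
        [toy_scan(g, d) for g, d in zip(groups, ["lr", "rl", "tb", "bt"])], axis=0)
    pooled = scanned.mean(axis=(1, 2))              # (C,) channel summary
    gate = 1.0 / (1.0 + np.exp(-gate_w @ pooled))   # (C,) sigmoid gate
    return scanned * gate[:, None, None]

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))   # 8 channels -> four groups of 2
gate_w = rng.standard_normal((8, 8))
y = grouped_mamba_layer(x, gate_w)
```

The point of the grouping is parameter efficiency: each scan operates on C/4 channels, while the gate restores interaction across all C channels.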
- Boosting Multi-modal Model Performance with Adaptive Gradient Modulation While the field of multi-modal learning continues to grow rapidly, recent studies have made the deficiencies of the standard joint training paradigm clear, attributing the sub-optimal performance of jointly trained models to the modality competition phenomenon. Existing works attempt to improve jointly trained models by modulating the training process; despite their effectiveness, these methods apply only to late-fusion models. More importantly, the mechanism of modality competition remains unexplored. In this paper, we first propose an adaptive gradient modulation method that can boost the performance of multi-modal models with various fusion strategies. Extensive experiments show that our method surpasses all existing modulation methods. Furthermore, to quantitatively understand modality competition and the mechanism behind the effectiveness of our modulation method, we introduce a novel metric to measure competition strength. This metric is built on the mono-modal concept, a function designed to represent the competition-less state of a modality. Through systematic investigation, our results confirm the intuition that the modulation encourages the model to rely on the more informative modality. In addition, we find that the jointly trained model typically has a preferred modality on which the competition is weaker than on other modalities; however, this preferred modality need not dominate the others. Our code will be available at https://github.com/lihong2303/AGM_ICCV2023. 6 authors · Aug 15, 2023
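The gradient-modulation idea can be sketched minimally (this is not the paper's exact AGM rule; the coefficient formula below is an illustrative assumption): compute per-modality confidences on the true label, then damp the learning signal of whichever modality is currently winning the competition so the weaker one can catch up.

```python
import numpy as np

def modulation_coefficients(conf_a, conf_b, alpha=1.0):
    # Toy adaptive rule: damp the update of the modality that is currently more
    # confident on the ground-truth class. (Illustrative, not the paper's formula.)
    ratio = conf_a / (conf_b + 1e-8)
    k_a = 1.0 - np.tanh(alpha * max(ratio - 1.0, 0.0))
    k_b = 1.0 - np.tanh(alpha * max(1.0 / ratio - 1.0, 0.0))
    return k_a, k_b

# Modality A is far more confident than B, so A's gradient gets damped.
k_a, k_b = modulation_coefficients(conf_a=0.9, conf_b=0.3)
grad_a, grad_b = np.ones(3), np.ones(3)      # stand-in per-modality gradients
grad_a, grad_b = k_a * grad_a, k_b * grad_b  # modulated updates
```

Because the coefficients depend only on per-modality confidences rather than on the fusion architecture, a rule of this shape can be applied to early-, intermediate-, and late-fusion models alike, which is the flexibility the abstract claims over prior modulation methods.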