fenfan committed on
Commit 87ead8b · verified · 1 Parent(s): be74ca5

doc: README copy and change the badge to github repo

Files changed (1)
  1. README.md +128 -3
README.md CHANGED
@@ -1,3 +1,128 @@
- ---
- license: cc-by-nc-nd-4.0
- ---
+ ---
+ license: cc-by-nc-nd-4.0
+ base_model:
+ - black-forest-labs/FLUX.1-dev
+ pipeline_tag: text-to-image
+ tags:
+ - subject-personalization
+ - image-generation
+ ---
+
+ <h3 align="center">
+ Less-to-More Generalization: Unlocking More Controllability by In-Context Generation
+ </h3>
+
+ <div style="display:flex;justify-content: center">
+ <a href="https://bytedance.github.io/UNO/"><img alt="Build" src="https://img.shields.io/badge/Project%20Page-UNO-yellow"></a>
+ <a href="https://arxiv.org/abs/2504.02160"><img alt="Build" src="https://img.shields.io/badge/arXiv%20paper-2504.02160-b31b1b.svg"></a>
+ <a href="https://github.com/bytedance/UNO"><img src="https://img.shields.io/static/v1?label=GitHub&message=Code&color=green&logo=github"></a>
+ </div>
+
+ ><p align="center"> <span style="color:#137cf3; font-family: Gill Sans">Shaojin Wu,</span><sup></sup> <span style="color:#137cf3; font-family: Gill Sans">Mengqi Huang</span><sup>*</sup>, <span style="color:#137cf3; font-family: Gill Sans">Wenxu Wu,</span><sup></sup> <span style="color:#137cf3; font-family: Gill Sans">Yufeng Cheng,</span><sup></sup> <span style="color:#137cf3; font-family: Gill Sans">Fei Ding</span><sup>+</sup>, <span style="color:#137cf3; font-family: Gill Sans">Qian He</span> <br>
+ ><span style="font-size: 16px">Intelligent Creation Team, ByteDance</span></p>
+
+ <p align="center">
+ <img src="./assets/teaser.jpg" width=95% height=95%
+ class="center">
+ </p>
+
+ ## 🔥 News
+ - [04/2025] 🔥 The [training code](https://github.com/bytedance/UNO), [inference code](https://github.com/bytedance/UNO), and [model](https://huggingface.co/bytedance-research/UNO) of UNO are released. The [demo](https://huggingface.co/spaces/bytedance-research/UNO-FLUX) is coming soon.
+ - [04/2025] 🔥 The [project page](https://bytedance.github.io/UNO) of UNO is created.
+ - [04/2025] 🔥 The arXiv [paper](https://arxiv.org/abs/2504.02160) of UNO is released.
+
+ ## 📖 Introduction
+ In this study, we propose a highly consistent data synthesis pipeline to tackle the data scalability challenge of subject-driven generation. The pipeline harnesses the intrinsic in-context generation capabilities of diffusion transformers to produce high-consistency multi-subject paired data. On top of it, we introduce UNO, a multi-image conditioned subject-to-image model iteratively trained from a text-to-image model, which consists of progressive cross-modal alignment and universal rotary position embedding. Extensive experiments show that our method achieves high consistency while ensuring controllability in both single-subject and multi-subject driven generation.
+
+
+ ## ⚡️ Quick Start
+
+ ### 🔧 Requirements and Installation
+
+ Clone our [GitHub repo](https://github.com/bytedance/UNO):
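+ A minimal example (the repository URL is the one from the badge above; the checkout directory name is simply what `git` creates by default):
+
+ ```bash
+ # clone the UNO repository and move into it
+ git clone https://github.com/bytedance/UNO.git
+ cd UNO
+ ```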
+
+
+ Install the requirements:
+ ```bash
+ ## create a virtual environment with Python >= 3.10 and <= 3.12, e.g.
+ # python -m venv uno_env
+ # source uno_env/bin/activate
+ # then install the dependencies
+ pip install -r requirements.txt
+ ```
+
+ Then download the checkpoints in one of the following three ways:
+ 1. Directly run the inference scripts; the checkpoints will be downloaded automatically by the `hf_hub_download` function in the code to your `$HF_HOME` (the default is `~/.cache/huggingface`).
+ 2. Use `huggingface-cli download <repo name>` to download `black-forest-labs/FLUX.1-dev`, `xlabs-ai/xflux_text_encoders`, `openai/clip-vit-large-patch14`, and the UNO model (`bytedance-research/UNO`), then run the inference scripts (see the example after this list).
+ 3. Use `huggingface-cli download <repo name> --local-dir <LOCAL_DIR>` to download all the checkpoints mentioned in 2. to the directories you want, set the environment variable `TODO`, and then run the inference scripts.
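+
+ A minimal sketch of option 2, assuming the repository names listed above (the UNO repo id is taken from the model link in the News section) and that your account has access to the gated FLUX.1-dev weights:
+
+ ```bash
+ # log in first if you have not yet accepted the FLUX.1-dev license on Hugging Face
+ # huggingface-cli login
+
+ # download everything into the default cache ($HF_HOME)
+ huggingface-cli download black-forest-labs/FLUX.1-dev
+ huggingface-cli download xlabs-ai/xflux_text_encoders
+ huggingface-cli download openai/clip-vit-large-patch14
+ huggingface-cli download bytedance-research/UNO
+
+ # for option 3, add --local-dir to place a repo in a directory of your choice, e.g.
+ # huggingface-cli download black-forest-labs/FLUX.1-dev --local-dir <LOCAL_DIR>/FLUX.1-dev
+ ```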
+
+ ### 🌟 Gradio Demo
+
+ ```bash
+ python app.py
+ ```
+
+
+ ### ✍️ Inference
+
+ - Optional preparation: if you want to run inference on DreamBench for the first time, clone the `dreambench` submodule to download the dataset.
+
+ ```bash
+ git submodule update --init
+ ```
+
+
+ ```bash
+ python inference.py
+ ```
+
+ ### 🚄 Training
+
+ ```bash
+ accelerate launch train.py
+ ```
+
+ ## 🎨 Application Scenarios
+ <p align="center">
+ <img src="./assets/simplecase.jpeg" width=95% height=95%
+ class="center">
+ </p>
+
+ ## 📄 Disclaimer
+ <p>
+ We open-source this project for academic research. The vast majority of images
+ used in this project are either generated or licensed. If you have any concerns,
+ please contact us, and we will promptly remove any inappropriate content.
+ Our code is released under the Apache 2.0 License, while our models are under
+ the CC BY-NC 4.0 License. Any models related to the <a href="https://huggingface.co/black-forest-labs/FLUX.1-dev" target="_blank">FLUX.1-dev</a>
+ base model must adhere to the original licensing terms.
+ <br><br>This research aims to advance the field of generative AI. Users are free to
+ create images using this tool, provided they comply with local laws and exercise
+ responsible usage. The developers are not liable for any misuse of the tool by users.</p>
+
+ ## 🚀 Updates
+ To foster research and the open-source community, we plan to open-source the entire project, encompassing training, inference, weights, and more. Thank you for your patience and support! 🌟
+ - [x] Release GitHub repo.
+ - [x] Release inference code.
+ - [x] Release training code.
+ - [x] Release model checkpoints.
+ - [x] Release arXiv paper.
+ - [ ] Release in-context data generation pipelines.
+
+ ## Citation
+ If UNO is helpful, please ⭐ the repo.
+
+ If you find this project useful for your research, please consider citing our paper:
+ ```bibtex
+ @misc{wu2025lesstomoregeneralizationunlockingcontrollability,
+       title={Less-to-More Generalization: Unlocking More Controllability by In-Context Generation},
+       author={Shaojin Wu and Mengqi Huang and Wenxu Wu and Yufeng Cheng and Fei Ding and Qian He},
+       year={2025},
+       eprint={2504.02160},
+       archivePrefix={arXiv},
+       primaryClass={cs.CV},
+       url={https://arxiv.org/abs/2504.02160},
+ }
+ ```
+
+