onethousand committed · verified
Commit 468dced · 1 Parent(s): 577e386

Update README.md

Files changed (1): README.md (+15 −0)
README.md CHANGED
 
For facial RGB images, we use the **FFHQ dataset**. You can download it from **[here](https://github.com/NVlabs/ffhq-dataset)**.

🔹 *Note: This dataset only provides annotated face normal maps and face segmentation maps; face RGB images are not included.*

### Details

For **face** data, we utilize the [FFHQ](https://github.com/NVlabs/ffhq-dataset) and [LPFF](https://github.com/oneThousand1000/LPFF-dataset) (a large-pose variant of FFHQ) datasets. The text prompt for each image is extracted with [BLIP](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2).
Using a **3D face reconstruction** method, we estimate normal maps as geometric conditioning signals.
We then apply [Face Parsing](https://github.com/hukenovs/easyportrait) to segment the teeth and eye regions. Additionally, [MediaPipe](https://github.com/google-ai-edge/mediapipe) is used to track iris positions, providing greater precision in gaze localization.
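The README does not name the specific 3D face reconstruction method, so as a rough, hypothetical sketch: if the reconstruction yields a per-pixel depth map, a normal map of the kind used as a conditioning signal can be estimated with finite differences. The function name and the depth-map assumption are ours, not from the repository.

```python
import numpy as np

def normals_from_depth(depth: np.ndarray) -> np.ndarray:
    """Estimate a per-pixel normal map from a depth map via finite differences.

    `depth` is an (H, W) float array; returns (H, W, 3) unit normals,
    with +Z pointing toward the camera.
    """
    # Finite-difference surface gradients along image rows and columns.
    dz_dy, dz_dx = np.gradient(depth.astype(np.float64))
    # The normal is perpendicular to the tangents (1, 0, dz/dx) and (0, 1, dz/dy).
    normals = np.dstack((-dz_dx, -dz_dy, np.ones_like(depth, dtype=np.float64)))
    # Normalize each pixel's normal to unit length.
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    return normals
```

A flat depth map yields normals of (0, 0, 1) everywhere; a plane tilted along x tilts the normals accordingly.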

For **eye** data, we first crop the eye regions from the face dataset. To augment the dataset with closed-eye variations, which are rare in in-the-wild portraits, we use [LivePortrait](https://github.com/KwaiVGI/LivePortrait), a portrait-animation method, to generate closed-eye variations from the FFHQ dataset. These closed-eye face images are then processed using a similar methodology to extract conditions, and the eye regions are cropped and added to the eye dataset.
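The eye-region cropping step can be sketched roughly as follows, assuming per-eye 2D landmark coordinates from a face-landmark detector (e.g. MediaPipe); the function, margin value, and square-crop choice are our illustrative assumptions, not the repository's exact procedure.

```python
import numpy as np

def crop_region(image: np.ndarray, points: np.ndarray, margin: float = 0.5) -> np.ndarray:
    """Crop a square patch around a set of 2D landmarks (e.g. one eye).

    `image` is (H, W, C); `points` is (N, 2) in (x, y) pixel coordinates.
    The square is centered on the landmarks, enlarged by `margin`,
    and clipped to the image bounds.
    """
    h, w = image.shape[:2]
    x_min, y_min = points.min(axis=0)
    x_max, y_max = points.max(axis=0)
    # Center of the landmark bounding box.
    cx, cy = (x_min + x_max) / 2, (y_min + y_max) / 2
    # Half-size of the enlarged square crop.
    half = max(x_max - x_min, y_max - y_min) * (1 + margin) / 2
    x0, x1 = int(max(cx - half, 0)), int(min(cx + half, w))
    y0, y1 = int(max(cy - half, 0)), int(min(cy + half, h))
    return image[y0:y1, x0:x1]
```

The same helper applies unchanged to mouth landmarks when building the mouth dataset below.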

To construct the **mouth** dataset, we begin by cropping the mouth regions from the face dataset. To augment this dataset with a broader range of open-mouth variations, we incorporate additional images featuring open-mouth expressions sourced from the [NeRSemble](https://tobias-kirschstein.github.io/nersemble/) dataset. These open-mouth face images are processed using a similar methodology to extract conditions, after which their mouth regions are cropped and integrated into the mouth dataset.
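Selecting open-mouth frames for this augmentation can be sketched with a simple landmark heuristic: the vertical lip gap normalized by mouth width. The metric, landmark choice, and threshold here are our hypothetical illustration, not values from the repository.

```python
import numpy as np

def mouth_open_ratio(upper_lip: np.ndarray, lower_lip: np.ndarray,
                     left_corner: np.ndarray, right_corner: np.ndarray) -> float:
    """Vertical inner-lip gap normalized by mouth width; larger means more open.

    Each argument is an (x, y) landmark coordinate from a face-landmark detector.
    """
    gap = np.linalg.norm(upper_lip - lower_lip)
    width = np.linalg.norm(left_corner - right_corner)
    return float(gap / width)

def is_open_mouth(ratio: float, threshold: float = 0.3) -> bool:
    # The threshold is an illustrative value, not one from the source.
    return ratio > threshold
```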

### Download

```