---
license: mit
---
|
|
|
# **FaceNormalSeg-ControlNet Dataset**
|
|
|
This is the training dataset for the ControlNet used in the **AnimPortrait3D** pipeline.
|
|
|
For details about this ControlNet and access to the pretrained models, please visit:

- **[Project Page](https://onethousandwu.com/animportrait3d.github.io/)**

- **[Hugging Face Page](https://huggingface.co/onethousand/AnimPortrait3D_controlnet)**
|
|
|
### **RGB Images**
|
For facial RGB images, we use the **FFHQ dataset**. You can download it from **[here](https://github.com/NVlabs/ffhq-dataset)**.
|
*Note: This dataset provides only annotated face normal maps and face segmentation maps; face RGB images are not included.*
|
|
|
|
|
### Details
|
|
|
For **face** data, we utilize the [FFHQ](https://github.com/NVlabs/ffhq-dataset) and [LPFF](https://github.com/oneThousand1000/LPFF-dataset) (a large-pose variant of FFHQ) datasets. The text prompt for each image is extracted with [BLIP](https://huggingface.co/docs/transformers/main/en/model_doc/blip-2).
|
Using a **3D face reconstruction** method, we estimate normal maps that serve as geometric conditioning signals.
|
We then apply [Face Parsing](https://github.com/hukenovs/easyportrait) to segment teeth and eye regions. Additionally, [MediaPipe](https://github.com/google-ai-edge/mediapipe) is used to track iris positions, providing further precision in gaze localization.
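As a rough illustration of the iris-tracking step: MediaPipe Face Mesh (run with `refine_landmarks=True`) appends iris landmarks to its 468 face-mesh points, and an iris position can be read off as the mean of those landmarks. The sketch below is not the pipeline's actual code; the `RIGHT_IRIS`/`LEFT_IRIS` index ranges and helper names are assumptions based on MediaPipe's commonly documented landmark layout, and should be verified against your MediaPipe version.

```python
# Sketch (assumed, not the authors' code): estimate iris centers from the
# 478 normalized (x, y) landmarks that MediaPipe Face Mesh produces when
# refine_landmarks=True. Indices 468-477 are the commonly cited iris points.
RIGHT_IRIS = range(468, 473)  # assumption: right-iris landmark indices
LEFT_IRIS = range(473, 478)   # assumption: left-iris landmark indices

def iris_center(landmarks, indices):
    """Mean (x, y) of the given landmark indices, in normalized coordinates."""
    xs = [landmarks[i][0] for i in indices]
    ys = [landmarks[i][1] for i in indices]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def to_pixels(norm_xy, width, height):
    """Convert normalized (x, y) coordinates to integer pixel coordinates."""
    return (int(norm_xy[0] * width), int(norm_xy[1] * height))
```

In practice the landmark list would come from `results.multi_face_landmarks[0].landmark` after calling `FaceMesh.process()` on an RGB image, with each landmark converted to an `(x, y)` tuple.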
|
|
|
|
|
For **eye** data, we first crop the eye regions from the face dataset. To augment the dataset with closed-eye variations, which are rare in in-the-wild portraits, we use [LivePortrait](https://github.com/KwaiVGI/LivePortrait), a portrait animation method, to generate closed-eye variations from the FFHQ dataset. These closed-eye face images are then processed using a similar methodology to extract conditions, and the eye regions are cropped and added to the eye dataset.
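The cropping step can be sketched as a bounding-box crop driven by the segmentation map. This is a minimal illustration, not the pipeline's actual code: it assumes the parsing output is a per-pixel integer label array, and `crop_region`, `region_labels`, and `pad` are hypothetical names. The same routine applies equally to the mouth crops described below.

```python
import numpy as np

def crop_region(image, seg, region_labels, pad=8):
    """Crop `image` to the padded bounding box of pixels whose segmentation
    label is in `region_labels`. Returns None if the region is absent
    (e.g. eyes fully occluded in some parsing outputs)."""
    mask = np.isin(seg, list(region_labels))
    if not mask.any():
        return None
    ys, xs = np.nonzero(mask)
    y0 = max(ys.min() - pad, 0)
    y1 = min(ys.max() + pad + 1, image.shape[0])
    x0 = max(xs.min() - pad, 0)
    x1 = min(xs.max() + pad + 1, image.shape[1])
    return image[y0:y1, x0:x1]
```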
|
|
|
|
|
To construct the **mouth** dataset, we begin by cropping the mouth regions from the face dataset. To augment this dataset with a broader range of open-mouth variations, we incorporate additional images featuring open-mouth expressions sourced from the [NeRSemble](https://tobias-kirschstein.github.io/nersemble/) dataset. These open-mouth face images are processed using a similar methodology to extract conditions, after which their mouth regions are cropped and integrated into the mouth dataset.
|
|
|
|
|
|
|
### Download
|
|
|
```shell
huggingface-cli download onethousand/FaceNormalSeg-ControlNet-dataset --local-dir ./FaceNormalSeg-ControlNet-dataset --repo-type dataset
```
|
|
|
|
|
## **Dataset Overview**
|
|
|
| Region             | Image Count |
|--------------------|-------------|
| Face               | 107,209     |
| Mouth              | 131,758     |
| Eye (left + right) | 214,418     |
| **Total**          | **453,385** |
|
|
|
|
|
|