---
title: Face Segmentation
emoji: ⚡
colorFrom: indigo
colorTo: red
sdk: streamlit
sdk_version: 1.44.1
app_file: app.py
pinned: false
license: mit
short_description: AI-powered application for face segmentation
---
Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
# Face Segmentation Tool
A deep learning-based tool for precise face segmentation using BiSeNet trained on the CelebAMask-HQ dataset. It extracts the face from an image and places it on a transparent background, which is ideal for creating profile pictures, avatars, or creative photo edits.
## Features
- Accurate face and hair segmentation with transparent background
- 19-class facial attribute segmentation (skin, eyes, eyebrows, nose, lips, hair, etc.)
- User-friendly Streamlit web interface
- MediaPipe face detection to identify and focus on faces
- Support for downloading the segmented result
## Technical Details
This project uses:
- **BiSeNet** (Bilateral Segmentation Network) for semantic segmentation
- A model trained on the **CelebAMask-HQ dataset** with 19 facial attribute classes
- **MediaPipe** for initial face detection and bounding box estimation
- **PyTorch** for the deep learning components
- **Streamlit** for the web interface
## Installation
### Prerequisites
- Python 3.7 or newer
- CUDA-compatible GPU (optional, but recommended for faster processing)
### Setup
1. Clone the repository:
```bash
git clone https://github.com/yourusername/face-segmentation.git
cd face-segmentation
```
2. Create and activate a virtual environment (recommended):
```bash
# On Windows
python -m venv venv
venv\Scripts\activate
# On macOS/Linux
python -m venv venv
source venv/bin/activate
```
3. Install the required dependencies:
```bash
pip install -r requirements.txt
```
4. Ensure you have the BiSeNet model weights file (`bisenet.pth`) in the project root directory.
(The file should already be included in the repository)
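If you want to verify that the checkpoint is present and readable before launching the app, a quick check along these lines can help (illustrative only; whether the file holds a plain state dict or a fully serialized model depends on how it was exported):
```python
import torch

# Illustrative sanity check for the bisenet.pth checkpoint.
checkpoint = torch.load("bisenet.pth", map_location="cpu")
if isinstance(checkpoint, dict):
    print(f"Loaded state dict with {len(checkpoint)} entries")
else:
    print(f"Loaded object of type {type(checkpoint).__name__}")
```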
## Usage
1. Start the Streamlit app:
```bash
streamlit run app.py
```
2. Open your web browser and go to the URL shown in the console (typically http://localhost:8501)
3. Upload an image with a face
4. Click the "Segment Face" button
5. View and download the segmented result
## Class Labels in CelebAMask-HQ
The model recognizes 19 different facial attributes:
| ID | Class | Description |
| --- | ---------- | ------------------------- |
| 0 | background | Non-face background areas |
| 1 | skin | Face skin |
| 2 | nose | Nose |
| 3 | eye_g | Eyeglasses |
| 4 | l_eye | Left eye |
| 5 | r_eye | Right eye |
| 6 | l_brow | Left eyebrow |
| 7 | r_brow | Right eyebrow |
| 8 | l_ear | Left ear |
| 9 | r_ear | Right ear |
| 10 | mouth | Mouth |
| 11 | u_lip | Upper lip |
| 12 | l_lip | Lower lip |
| 13 | hair | Hair |
| 14 | hat | Hat |
| 15 | ear_r | Earrings |
| 16 | neck_l | Necklace |
| 17 | neck | Neck |
| 18 | cloth | Clothing |
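For reference, the table above can be expressed as a small lookup in code. The helper below is only an illustrative sketch (not part of the shipped app) for inspecting which classes appear in a segmentation map produced by the model:
```python
import numpy as np

# Class IDs and names as listed in the table above.
CELEBAMASK_CLASSES = {
    0: "background", 1: "skin", 2: "nose", 3: "eye_g", 4: "l_eye",
    5: "r_eye", 6: "l_brow", 7: "r_brow", 8: "l_ear", 9: "r_ear",
    10: "mouth", 11: "u_lip", 12: "l_lip", 13: "hair", 14: "hat",
    15: "ear_r", 16: "neck_l", 17: "neck", 18: "cloth",
}

def class_histogram(parsing: np.ndarray) -> dict:
    """Count how many pixels of each class appear in a segmentation map."""
    ids, counts = np.unique(parsing, return_counts=True)
    return {CELEBAMASK_CLASSES[int(i)]: int(c) for i, c in zip(ids, counts)}
```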
## How It Works
1. The app uses MediaPipe to detect faces in the uploaded image
2. It crops and processes the face region using BiSeNet
3. BiSeNet performs semantic segmentation to classify each pixel into one of 19 classes
4. Selected facial features are preserved while background, neck, and clothes are made transparent
5. The result is an RGBA image with the face and hair intact and a transparent background
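The sketch below outlines these steps in simplified form. The face detection uses the public `mediapipe.solutions.face_detection` API; the BiSeNet inference itself is elided because it depends on the model class paired with `bisenet.pth`, and `parsing` stands for the per-pixel class map it produces. The function names here are illustrative, not the app's actual code:
```python
import numpy as np
import mediapipe as mp

def detect_face_box(image_rgb: np.ndarray):
    """Return the first detected face as (x, y, w, h) in pixels, or None."""
    with mp.solutions.face_detection.FaceDetection(
        model_selection=1, min_detection_confidence=0.5
    ) as detector:
        results = detector.process(image_rgb)
    if not results.detections:
        return None
    box = results.detections[0].location_data.relative_bounding_box
    h, w = image_rgb.shape[:2]
    return (int(box.xmin * w), int(box.ymin * h),
            int(box.width * w), int(box.height * h))

def to_transparent_rgba(image_rgb: np.ndarray, parsing: np.ndarray, keep_classes):
    """Keep pixels whose class is in keep_classes opaque; make the rest transparent."""
    alpha = np.isin(parsing, keep_classes).astype(np.uint8) * 255
    return np.dstack([image_rgb, alpha])  # RGBA result
```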
## Customization
You can modify which facial attributes to keep by adjusting the `keep_classes` list in the `FaceHairSegmenter` class:
```python
# Current configuration - keeps all face parts except background, clothes, and neck
self.keep_classes = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 15, 17, 18]
```
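For example, using the IDs from the class table above, you could also make eyeglasses and earrings transparent by dropping them from the kept set (an illustrative tweak, not the app's default):
```python
# Per the class table above: remove 3 (eye_g) and 15 (ear_r) so that
# eyeglasses and earrings become transparent as well.
self.keep_classes = [1, 2, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 17, 18]
```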
## Troubleshooting
- **No face detected**: Ensure the image contains a clearly visible face.
- **Multiple faces detected**: The app works best with a single face per image.
- **Poor segmentation**: For best results, use images with good lighting and a clear face.
- **CUDA out of memory**: Try using a smaller image or run on CPU if your GPU has limited memory (see the sketch after this list).
- **PyTorch class path error**: If you encounter an error like `Tried to instantiate class '__path__._path', but it does not exist!`, try updating PyTorch to 1.9.0 or newer using `pip install torch==1.9.0 torchvision==0.10.0`. This is a known issue with certain PyTorch versions when loading models.
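If the GPU runs out of memory, forcing inference onto the CPU is usually enough. A minimal sketch of that device choice follows (the app may already handle this internally; `pick_device` is an illustrative helper, not part of the shipped code):
```python
import torch

def pick_device(prefer_gpu: bool = True) -> torch.device:
    """Use CUDA when available and requested; otherwise fall back to CPU."""
    if prefer_gpu and torch.cuda.is_available():
        return torch.device("cuda")
    return torch.device("cpu")

# Force CPU when GPU memory is limited.
device = pick_device(prefer_gpu=False)
```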
## PyTorch Version Compatibility
This project is tested with PyTorch 1.7.0 to 1.13.0. If you encounter model loading issues with newer PyTorch versions (2.x), try downgrading to PyTorch 1.13.0:
```bash
pip install torch==1.13.0 torchvision==0.14.0
```
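To confirm which versions are actually installed in your environment before or after downgrading:
```python
import torch
import torchvision

# Print installed versions to check them against the tested range above.
print("torch:", torch.__version__)
print("torchvision:", torchvision.__version__)
```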
## References
- [BiSeNet: Bilateral Segmentation Network for Real-time Semantic Segmentation](https://arxiv.org/abs/1808.00897)
- [CelebAMask-HQ Dataset](https://github.com/switchablenorms/CelebAMask-HQ)
## License
This project is licensed under the MIT License - see the LICENSE file for details.