MakiPan committed on
Commit b79a923 · 1 Parent(s): cacea21

Update app.py


Updated Markdown texts

Files changed (1)
  1. app.py +56 -12
app.py CHANGED
@@ -217,20 +217,64 @@ with gr.Blocks(theme='gradio/soft') as demo:
  gr.Markdown("## Stable Diffusion with Hand Control")
  gr.Markdown("This model is a ControlNet model using MediaPipe hand landmarks for control.")
 
- gr.Markdown("""
- Standard Model can be found at [https://huggingface.co/Vincent-luo/controlnet-hands](https://huggingface.co/Vincent-luo/controlnet-hands)
-
- Model using hand encoding can be found at [https://huggingface.co/MakiPan/controlnet-encoded-hands-130k/](https://huggingface.co/MakiPan/controlnet-encoded-hands-130k/)
-
- Standard model Dataset can be found at [https://huggingface.co/datasets/MakiPan/hagrid250k-blip2](https://huggingface.co/datasets/MakiPan/hagrid250k-blip2)
-
- Hand Encoding Dataset can be found at [https://huggingface.co/datasets/MakiPan/hagrid-hand-enc-250k](https://huggingface.co/datasets/MakiPan/hagrid-hand-enc-250k)
-
- Standard preprocessing script can be found at [https://github.com/Maki-DS/Jax-Controlnet-hand-training/blob/main/normal-preprocessing.py](https://github.com/Maki-DS/Jax-Controlnet-hand-training/blob/main/normal-preprocessing.py)
-
- Hand Encoding preprocessing script can be found at [https://github.com/Maki-DS/Jax-Controlnet-hand-training/blob/main/Hand-encoded-preprocessing.py](https://github.com/Maki-DS/Jax-Controlnet-hand-training/blob/main/Hand-encoded-preprocessing.py)
- """)
- model_type = gr.Radio(["Standard", "Hand Encoding"], label="Model preprocessing", info="We developed two models, one with standard mediapipe landmarks, and one with different (but similar) coloring on palm landmards to distinguish left and right")
+ gr.Markdown("""<center><h1>Summary</h1></center>
+
+ Stable Diffusion and other diffusion models are notoriously poor at generating realistic hands, so for our project we decided to train a ControlNet model using MediaPipe landmarks in order to generate more realistic hands, avoiding common issues such as unrealistic positions and irregular digits.
+ <br>
+ We opted to use the [HAnd Gesture Recognition Image Dataset](https://github.com/hukenovs/hagrid) (HaGRID) and [MediaPipe's Hand Landmarker](https://developers.google.com/mediapipe/solutions/vision/hand_landmarker) to train a ControlNet that could potentially be used independently or as an in-painting tool.
+ <br>
+ To preprocess the data, we considered three options:
+ - The first was to use MediaPipe's built-in draw-landmarks function. This was an obvious first choice; however, we noticed at low training steps that the model couldn't easily distinguish handedness and would often generate the wrong hand for the conditioning image.<center>
+ <table><tr>
+ <td>
+ <p align="center" style="padding: 10px">
+ <img alt="Original image" src="https://datasets-server.huggingface.co/assets/MakiPan/hagrid250k-blip2/--/MakiPan--hagrid250k-blip2/train/29/image/image.jpg" width="300">
+ <br>
+ <em style="color: grey">Original Image</em>
+ </p>
+ </td>
+ <td>
+ <p align="center">
+ <img alt="Conditioning image" src="https://datasets-server.huggingface.co/assets/MakiPan/hagrid250k-blip2/--/MakiPan--hagrid250k-blip2/train/29/conditioning_image/image.jpg" width="300">
+ <br>
+ <em style="color: grey">Conditioning Image</em>
+ </p>
+ </td>
+ </tr></table>
+ </center>
+
+ - To counter this issue, we changed the palm landmark colours: similar enough that the model learns they carry the same kind of information, yet distinct enough for it to tell which hand is left and which is right.<center>
+ <table><tr>
+ <td>
+ <p align="center" style="padding: 10px">
+ <img alt="Original image" src="https://datasets-server.huggingface.co/assets/MakiPan/hagrid-hand-enc-250k/--/MakiPan--hagrid-hand-enc-250k/train/96/image/image.jpg" width="300">
+ <br>
+ <em style="color: grey">Original Image</em>
+ </p>
+ </td>
+ <td>
+ <p align="center">
+ <img alt="Conditioning image" src="https://datasets-server.huggingface.co/assets/MakiPan/hagrid-hand-enc-250k/--/MakiPan--hagrid-hand-enc-250k/train/96/conditioning_image/image.jpg" width="300">
+ <br>
+ <em style="color: grey">Conditioning Image</em>
+ </p>
+ </td>
+ </tr></table>
+ </center>
+ - The last option was to use [MediaPipe Holistic](https://ai.googleblog.com/2020/12/mediapipe-holistic-simultaneous-face.html) to provide pose, face, and hand landmarks to the ControlNet. This method was promising in theory; however, the HaGRID dataset was not suitable for it, as the Holistic model performs poorly on partial-body and unusually cropped images.
+
+ We anecdotally determined that, when trained for fewer steps, the hand-encoded model performed better than the standard MediaPipe model due to the implied handedness. We theorize that with a larger dataset containing more full-body hand and pose classifications, Holistic landmarks will eventually provide the best images; for the moment, however, the hand-encoded model performs best. """)
+
+ gr.Markdown("""<center><h3 style="text-align: center;"><a href="https://huggingface.co/Vincent-luo/controlnet-hands">Standard Model Link</a></h3>
+ <h3 style="text-align: center;"><a href="https://huggingface.co/MakiPan/controlnet-encoded-hands-130k/">Model using Hand Encoding</a></h3>
+
+ <h3 style="text-align: center;"><a href="https://huggingface.co/datasets/MakiPan/hagrid250k-blip2">Dataset Used To Train the Standard Model</a></h3>
+ <h3 style="text-align: center;"><a href="https://huggingface.co/datasets/MakiPan/hagrid-hand-enc-250k">Dataset Used To Train the Hand Encoding Model</a></h3>
+
+ <h3 style="text-align: center;"><a href="https://github.com/Maki-DS/Jax-Controlnet-hand-training/blob/main/normal-preprocessing.py">Standard Data Preprocessing Script</a></h3>
+ <h3 style="text-align: center;"><a href="https://github.com/Maki-DS/Jax-Controlnet-hand-training/blob/main/Hand-encoded-preprocessing.py">Hand Encoding Data Preprocessing Script</a></h3></center>""")
+
+ model_type = gr.Radio(["Standard", "Hand Encoding"], label="Model preprocessing", info="We developed two models, one with standard MediaPipe landmarks, and one with different (but similar) coloring on palm landmarks to distinguish left and right")
 
  with gr.Row():
      with gr.Column():
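
The updated Markdown describes two conditioning-image pipelines: drawing the MediaPipe hand landmarks as-is, and re-colouring the palm landmarks to encode handedness. The snippet below is a minimal sketch of that idea only, not the linked `normal-preprocessing.py` or `Hand-encoded-preprocessing.py`; the palm colour values, landmark subset, and the `make_conditioning_image` helper are illustrative assumptions.

```python
# Minimal sketch (not the linked preprocessing scripts): build a ControlNet
# conditioning image by drawing MediaPipe hand landmarks on a black canvas,
# then re-colouring the palm landmarks per hand to encode handedness.
# Colour values and the palm landmark subset are illustrative assumptions.
import cv2
import numpy as np
import mediapipe as mp

mp_hands = mp.solutions.hands
mp_draw = mp.solutions.drawing_utils

# Similar hues (so both palms carry the same kind of information) but
# distinct enough for the model to tell left from right.
PALM_COLORS = {"Left": (0, 255, 0), "Right": (0, 180, 120)}
PALM_LANDMARKS = (0, 1, 5, 9, 13, 17)  # wrist + finger bases, roughly the palm

def make_conditioning_image(image_bgr: np.ndarray) -> np.ndarray:
    canvas = np.zeros_like(image_bgr)
    with mp_hands.Hands(static_image_mode=True, max_num_hands=2) as hands:
        results = hands.process(cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB))
        if not results.multi_hand_landmarks:
            return canvas  # no hands detected: empty conditioning image
        h, w, _ = canvas.shape
        for landmarks, handedness in zip(results.multi_hand_landmarks,
                                         results.multi_handedness):
            label = handedness.classification[0].label  # "Left" or "Right"
            # Standard preprocessing: draw the plain landmark skeleton.
            mp_draw.draw_landmarks(canvas, landmarks, mp_hands.HAND_CONNECTIONS)
            # Hand-encoded variant: overwrite the palm landmarks with a
            # handedness-specific colour.
            for idx in PALM_LANDMARKS:
                lm = landmarks.landmark[idx]
                cv2.circle(canvas, (int(lm.x * w), int(lm.y * h)), 4,
                           PALM_COLORS[label], thickness=-1)
    return canvas
```

Omitting the re-colouring loop gives the standard conditioning image; the real colour scheme used in training is defined in the linked preprocessing scripts.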
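The `model_type` radio added in this commit selects between the two checkpoints linked in the Markdown. A plausible sketch of that mapping is shown below; it assumes the diffusers Flax API (`FlaxControlNetModel`), since the training repo is JAX-based, and the `load_controlnet` helper is hypothetical rather than the code actually used in app.py.

```python
# Illustrative sketch only: map the gr.Radio choice to the two ControlNet
# checkpoints linked in the Markdown above. Assumes the diffusers Flax API;
# the loading code actually used in app.py may differ.
import jax.numpy as jnp
from diffusers import FlaxControlNetModel

CONTROLNET_REPOS = {
    "Standard": "Vincent-luo/controlnet-hands",
    "Hand Encoding": "MakiPan/controlnet-encoded-hands-130k",
}

def load_controlnet(model_type: str):
    """Return (model, params) for the checkpoint matching the radio value."""
    controlnet, params = FlaxControlNetModel.from_pretrained(
        CONTROLNET_REPOS[model_type], dtype=jnp.bfloat16
    )
    return controlnet, params
```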