update model cards

README.md
---
license: openrail++
language:
- en
pipeline_tag: depth-estimation
pinned: true
tags:
- depth estimation
- image analysis
- computer vision
- in-the-wild
- zero-shot
---

<h1 align="center">Marigold Depth v1-1 Model Card</h1>

<p align="center">
  <a title="Image Depth" href="https://huggingface.co/spaces/prs-eth/marigold" target="_blank" rel="noopener noreferrer" style="display: inline-block;">
    <img src="https://img.shields.io/badge/%F0%9F%A4%97%20Image%20Depth%20-Demo-yellow" alt="Image Depth">
  </a>
  <a title="diffusers" href="https://huggingface.co/docs/diffusers/using-diffusers/marigold_usage" target="_blank" rel="noopener noreferrer" style="display: inline-block;">
    <img src="https://img.shields.io/badge/%F0%9F%A4%97%20diffusers%20-Integration%20🧨-yellow" alt="diffusers">
  </a>
  <a title="Github" href="https://github.com/prs-eth/marigold" target="_blank" rel="noopener noreferrer" style="display: inline-block;">
    <img src="https://img.shields.io/github/stars/prs-eth/marigold?label=GitHub%20%E2%98%85&logo=github&color=C8C" alt="Github">
  </a>
  <a title="Website" href="https://marigoldcomputervision.github.io/" target="_blank" rel="noopener noreferrer" style="display: inline-block;">
    <img src="https://img.shields.io/badge/%E2%99%A5%20Project%20-Website-blue" alt="Website">
  </a>
  <a title="arXiv" href="https://arxiv.org/abs/2312.02145" target="_blank" rel="noopener noreferrer" style="display: inline-block;">
    <img src="https://img.shields.io/badge/%F0%9F%93%84%20Read%20-Paper-AF3436" alt="arXiv">
  </a>
  <a title="Social" href="https://twitter.com/antonobukhov1" target="_blank" rel="noopener noreferrer" style="display: inline-block;">
    <img src="https://img.shields.io/twitter/follow/:?label=Subscribe%20for%20updates!" alt="Social">
  </a>
  <a title="License" href="https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL" target="_blank" rel="noopener noreferrer" style="display: inline-block;">
    <img src="https://img.shields.io/badge/License-OpenRAIL++-929292" alt="License">
  </a>
</p>

This is the model card for `marigold-depth-v1-1`, a model for monocular depth estimation from a single image.
The model is fine-tuned from the `stable-diffusion-2` [model](https://huggingface.co/stabilityai/stable-diffusion-2) as described in
<span style="color:red;">a follow-up of our [CVPR'2024 paper](https://arxiv.org/abs/2312.02145) titled "Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation".</span>

- Play with the interactive [Hugging Face Spaces demo](https://huggingface.co/spaces/prs-eth/marigold): check out how the model works with example images or upload your own.
- Use it with [diffusers](https://huggingface.co/docs/diffusers/using-diffusers/marigold_usage) to compute the results with a few lines of code.
- Get to the bottom of things with our [official codebase](https://github.com/prs-eth/marigold).

## Model Details

- **Developed by:** [Bingxin Ke](http://www.kebingxin.com/), [Kevin Qu](https://ch.linkedin.com/in/kevin-qu-b3417621b), [Tianfu Wang](https://tianfwang.github.io/), [Nando Metzger](https://nandometzger.github.io/), [Shengyu Huang](https://shengyuh.github.io/), [Bo Li](https://www.linkedin.com/in/bobboli0202), [Anton Obukhov](https://www.obukhov.ai/), [Konrad Schindler](https://scholar.google.com/citations?user=FZuNgqIAAAAJ).
- **Model type:** Generative latent diffusion-based affine-invariant monocular depth estimation from a single image.
- **Language:** English.
- **License:** [CreativeML Open RAIL++-M License](https://huggingface.co/stabilityai/stable-diffusion-2/blob/main/LICENSE-MODEL).
- **Model Description:** This model can be used to generate an estimated depth map of an input image.
- **Resolution:** Although any resolution can be processed, the model inherits the base diffusion model's effective resolution of roughly **768** pixels.
  For optimal predictions, resize larger input images so that the longer side is 768 pixels before feeding them into the model.
- **Steps and scheduler:** This model was designed for use with the **DDIM** scheduler and between **1 and 50** denoising steps.
- **Outputs:**
  - **Affine-invariant depth map:** The predicted values are between 0 and 1, interpolating between the near and far planes of the model's choice.
  - **Uncertainty map:** Produced only when multiple predictions are ensembled, with an ensemble size larger than 2.
- **Resources for more information:** [Project Website](https://marigoldcomputervision.github.io/), [Paper](https://arxiv.org/abs/2312.02145), [Code](https://github.com/prs-eth/marigold).
- **Cite as:**

  <span style="color:red;">Placeholder for the citation block of the follow-up paper</span>

  ```bibtex
  @InProceedings{ke2023repurposing,
    title={Repurposing Diffusion-Based Image Generators for Monocular Depth Estimation},
    author={Bingxin Ke and Anton Obukhov and Shengyu Huang and Nando Metzger and Rodrigo Caye Daudt and Konrad Schindler},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    year={2024}
  }
  ```