FQiao committed
Commit f502870 · verified · 1 Parent(s): d04ff67

Update app.py

Files changed (1): app.py (+24, −26)
app.py CHANGED
@@ -118,18 +118,7 @@ def crop(img: Image) -> Image:
 with tempfile.TemporaryDirectory() as tmpdir:
     with gr.Blocks(
         title='StereoGen Demo',
-        css="""
-        .badge-container {
-            display: flex;
-            gap: 8px;
-            flex-wrap: wrap;
-            margin: 1em 0;
-        }
-        .badge-container img {
-            height: 28px;
-            display: inline-block;
-        }
-        """
+        css='img {display: inline;}'
     ) as demo:
         # Internal states.
         src_image = gr.State()
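
For context on this hunk: `gr.Blocks` injects the `css` string as page-level CSS, so the single `img {display: inline;}` rule replaces the removed `.badge-container` styling and keeps the shields.io badge images in the Markdown header flowing on one line. A minimal sketch of the pattern (not code from this commit; the two badges are taken from the diff for illustration):

# Sketch only (not from the commit): shows how the css argument to gr.Blocks
# is applied page-wide, so the 'img {display: inline;}' rule keeps the
# shields.io badge images inside gr.Markdown on a single line.
import gradio as gr

with gr.Blocks(
    title='StereoGen Demo',
    css='img {display: inline;}',
) as demo:
    gr.Markdown(
        '[![arXiv](https://img.shields.io/badge/arXiv-2405.17251-red?logo=arxiv)]'
        '(https://arxiv.org/abs/2405.17251) '
        '[![Models](https://img.shields.io/badge/Models-checkpoints-blue?logo=huggingface)]'
        '(https://huggingface.co/Sony/genwarp)'
    )

if __name__ == '__main__':
    demo.launch()
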
@@ -185,22 +174,31 @@ with tempfile.TemporaryDirectory() as tmpdir:
         # Blocks.
         gr.Markdown(
             """
-            # StereoGen: Towards Open-World Generation of Stereo Images and Unsupervised Matching
-
-            <div class="badge-container">
-            [![Project Site](https://img.shields.io/badge/Project-Web-green)](https://qjizhi.github.io/genstereo)
-            [![Spaces](https://img.shields.io/badge/Spaces-Demo-yellow?logo=huggingface)](https://huggingface.co/spaces/FQiao/GenStereo)
-            [![Github](https://img.shields.io/badge/Github-Repo-orange?logo=github)](https://github.com/Qjizhi/GenStereo)
-            [![Models](https://img.shields.io/badge/Models-checkpoints-blue?logo=huggingface)](https://huggingface.co/FQiao/GenStereo/tree/main)
+            # GenWarp: Single Image to Novel Views with Semantic-Preserving Generative Warping
+            [![Project Site](https://img.shields.io/badge/Project-Web-green)](https://genwarp-nvs.github.io/) &nbsp;
+            [![Spaces](https://img.shields.io/badge/Spaces-Demo-yellow?logo=huggingface)](https://huggingface.co/spaces/Sony/GenWarp) &nbsp;
+            [![Github](https://img.shields.io/badge/Github-Repo-orange?logo=github)](https://github.com/sony/genwarp/) &nbsp;
+            [![Models](https://img.shields.io/badge/Models-checkpoints-blue?logo=huggingface)](https://huggingface.co/Sony/genwarp) &nbsp;
             [![arXiv](https://img.shields.io/badge/arXiv-2405.17251-red?logo=arxiv)](https://arxiv.org/abs/2405.17251)
-            </div>
-
-            ## Introduction
-            This is an official demo for the paper "[Towards Open-World Generation of Stereo Images and Unsupervised Matching](https://qjizhi.github.io/genstereo)". Given an arbitrary reference image, GenStereo can generate the corresponding right-view image.
+
+            ## Introduction
+            This is an official demo for the paper "[GenWarp: Single Image to Novel Views with Semantic-Preserving Generative Warping](https://genwarp-nvs.github.io/)". GenWarp can generate novel-view images from a single input image conditioned on camera poses. This demo offers basic inference with the model; for detailed information, please refer to the [paper](https://arxiv.org/abs/2405.17251).
+
             ## How to Use
-            1. Upload a reference image to "Left Image"
-            - You can also select an image from "Examples"
-            3. Hit "Generate a right image" button and check the result
+
+            ### Try examples
+            - Examples are in the bottom section of the page
+
+            ### Upload your own images
+            1. Upload a reference image to "Reference Input"
+            2. Move the camera to your desired view in the "Unprojected 3DGS" 3D viewer
+            3. Hit the "Generate a novel view" button and check the result
+
+            ## Tips
+            - This model is mainly trained on indoor/outdoor scenery, so it might not work well for object-centric inputs. For details on how the model was trained, please check our [paper](https://arxiv.org/abs/2405.17251).
+            - Extremely large camera movements away from the input view may give poor results, since they deviate from the training distribution and are outside the scope of this model. Instead, you can repeatedly feed the result of a small camera movement back in and progressively move towards the desired view.
+            - The 3D viewer might take some time to update, especially when trying different images back to back. Wait until it has fully updated to the new image.
+
             """
         )
         file = gr.File(label='Left', file_types=['image'])
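
Both hunks leave the surrounding wiring untouched: the context lines show a `gr.State()` holding the source image and a `gr.File` upload labelled 'Left'. As a rough sketch of how such a state is typically filled (the `load_left` handler and the upload event wiring are illustrative assumptions, not code from app.py):

# Sketch only: load_left and the upload wiring are assumptions for
# illustration, not code from this commit. It shows the usual pattern of
# filling a gr.State from a gr.File upload so later handlers can reuse the
# decoded left image.
import gradio as gr
from PIL import Image

with gr.Blocks(title='StereoGen Demo', css='img {display: inline;}') as demo:
    # Internal state: holds the decoded left image between event handlers.
    src_image = gr.State()
    file = gr.File(label='Left', file_types=['image'])

    def load_left(path):
        # Assumes gr.File returns a filepath string (the Gradio 4 default).
        return Image.open(path).convert('RGB')

    # Store the uploaded image in the per-session state for later steps.
    file.upload(load_left, inputs=file, outputs=src_image)

if __name__ == '__main__':
    demo.launch()
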