prakrutpatel committed
Commit c233daa · verified · 1 parent: cea5001

Revert to last stable

Files changed (1)
app.py +1 -3
app.py CHANGED
@@ -11,7 +11,6 @@ from PIL import Image
 import io
 import pathlib
 
-
 Image.MAX_IMAGE_PIXELS = None
 
 RoboflowAPI = os.environ['RoboflowAPI']
@@ -413,7 +412,6 @@ def segment(image):
     return img
 examples = os.listdir('../../Examples')
 examples = ['../../Examples/' + item for item in examples]
-print(examples)
 title="Context R-CNN"
 description=f'<p class="has-line-data" data-line-start="0" data-line-end="1">Gradio demo for <strong>Context R-CNN</strong>: <a href="https://arxiv.org/abs/1912.03538">[Paper]</a>.</p><p class="has-line-data" data-line-start="2" data-line-end="3">Context R-CNN is an object detection algorithm that uses contextual features to improve object detection. It is based on Faster R-CNN, but adds a module that can incorporate contextual features from surrounding frames. This allows Context R-CNN to better identify objects that are partially obscured or moving quickly.</p><p class="has-line-data" data-line-start="4" data-line-end="5">The contextual features are stored in a memory bank, which is built up over time as the camera captures images. The memory bank is indexed using an attention mechanism, which allows Context R-CNN to focus on the most relevant contextual features for each object.</p><p class="has-line-data" data-line-start="6" data-line-end="7">Context R-CNN has been shown to improve object detection performance on a variety of datasets, including camera trap data and traffic camera data. It is a promising approach for improving object detection in static monitoring cameras, where the sampling rate is low and the objects may exhibit long-term behavior.</p><p class="has-line-data" data-line-start="8" data-line-end="9">This application of Context R-CNN demonstrates its potential for use on camera trap images of Gopher Tortoises in the wild. It also shows how Context R-CNN can improve object detection performance over existing Faster R-CNN implementations. Both models were trained on the exact same datasets for a fair comparison. Context R-CNN improves upon Faster R-CNN by building a contextual memory bank; such contextual information can include the position of other objects in the scene, the motion of the objects, and the time of day. The contextual feature matrix used by the Context R-CNN model was built using the Faster R-CNN model.</p><p class="has-line-data" data-line-start="11" data-line-end="12"><strong>The example images provided in this demo were not used to train or test the models.</strong></p><p class="has-line-data" data-line-start="13" data-line-end="14">Note: the model requires the date-taken attribute to be present in the metadata of the uploaded images in order to process them correctly.</p><br/>Training instructions for Context R-CNN can be found on <a href="https://github.com/prakrutpatel/Context-RCNN-Tortoises">GitHub</a>'
-gr.Interface(fn=segment, inputs=gr.File(file_types=["Image"], interactive=True), outputs=gr.Image(type="pil"), title=title, description=description, examples=examples, cache_examples=False, live=True).launch(width=50, height=20)
+gr.Interface(fn=segment, inputs="file", outputs=gr.Image(type="pil", width=50, height=20), title=title, description=description, examples=examples, cache_examples=True).launch()
 
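For context on what this revert restores, here is a minimal sketch of the stable wiring, assuming a simplified segment() that only opens the upload with PIL; the real segment() in app.py runs the detector, and the body below is an illustrative stand-in. Compared with the replaced line, gr.File hands segment() a temp-file handle rather than a decoded image, live=True re-runs the function whenever the input changes, and cache_examples=False recomputes example outputs on demand.

import gradio as gr
from PIL import Image

def segment(file):
    # gr.File passes a temp-file wrapper; .name is its path on disk
    # (Gradio 3.x behavior; this body is a stand-in for the real segment()).
    img = Image.open(file.name)
    return img

gr.Interface(
    fn=segment,
    inputs=gr.File(file_types=["Image"], interactive=True),
    outputs=gr.Image(type="pil"),
    cache_examples=False,  # example outputs recomputed per click
    live=True,             # re-run on every input change
).launch(width=50, height=20)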
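The demo description explains that Context R-CNN stores contextual features in a memory bank and indexes it with an attention mechanism. As a purely conceptual illustration (none of this code appears in app.py), a softmax attention read over such a memory bank looks roughly like this:

import numpy as np

def attend(query, memory):
    # query: one box feature of shape (d,); memory: stored context
    # features of shape (n, d). Names and shapes are hypothetical.
    scores = memory @ query / np.sqrt(query.size)  # scaled dot-product scores
    weights = np.exp(scores - scores.max())        # numerically stable softmax
    weights /= weights.sum()
    return weights @ memory                        # weighted sum of context features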
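The description's note about the date-taken attribute matters presumably because contextual features are associated with capture time. A hedged sketch of how that requirement could be checked with Pillow before inference (the helper name and error message are illustrative, not taken from app.py):

from PIL import Image

DATETIME_ORIGINAL = 36867  # standard EXIF tag id for "date taken"
EXIF_IFD = 0x8769          # pointer tag for the EXIF sub-IFD

def get_date_taken(path):
    exif = Image.open(path).getexif()
    sub = exif.get_ifd(EXIF_IFD)  # recent Pillow exposes the sub-IFD this way
    return sub.get(DATETIME_ORIGINAL) or exif.get(306)  # 306 = generic DateTime

if get_date_taken('example.jpg') is None:
    raise ValueError('Image has no EXIF date; it cannot be processed correctly.')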