To understand style transfer you should first know what each layer in a Convolutional Neural Network learns. Here is a paper published on visualizing CNNs: [Visualizing and Understanding Convolutional Networks](https://arxiv.org/abs/1311.2901).

- They proposed a technique that uses a multi-layered Deconvolutional Network (deconvnet) to project feature activations back to the input pixel space. A deconvnet can be thought of as a convnet that uses the same components (filtering, pooling) but in reverse: instead of mapping pixels to features, it does the opposite.
- To examine a convnet, a deconvnet is attached to each of its layers, providing a continuous path back to the image pixels. To examine a given convnet activation, they set all the other activations in the layer to zero and pass it to the attached deconvnet layer.
- After training, we can visualize what features a particular layer has learned.

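The idea of inspecting what a layer has learned can be approximated without a full deconvnet. Here is a minimal sketch (assuming PyTorch; the small network and layer names are hypothetical stand-ins for a large pretrained model) that captures a layer's feature maps with a forward hook:

```python
import torch
import torch.nn as nn

# Small stand-in convnet (hypothetical; the paper works with a large trained model).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),   # index 0: low-level features
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(8, 16, kernel_size=3, padding=1),  # index 3: higher-level features
    nn.ReLU(),
)

activations = {}

def save_activation(name):
    # Forward hook: stores the layer's output feature maps for later inspection.
    def hook(module, inputs, output):
        activations[name] = output.detach()
    return hook

model[0].register_forward_hook(save_activation("conv1"))
model[3].register_forward_hook(save_activation("conv2"))

image = torch.randn(1, 3, 32, 32)  # stand-in input image
model(image)

# Each entry holds one feature map per filter; these activations are what
# a deconvnet would project back to input pixel space.
print(activations["conv1"].shape)  # torch.Size([1, 8, 32, 32])
print(activations["conv2"].shape)  # torch.Size([1, 16, 16, 16])
```

In practice one would attach such hooks to a trained network and visualize the stored feature maps as images, layer by layer.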
### Neural Style Transfer

A paper was published in 2015 named [A Neural Algorithm of Artistic Style](https://arxiv.org/abs/1508.06576). In this paper, the researchers found that the representations of content and style in a CNN are separable. That is, we can manipulate both representations independently to produce new, perceptually meaningful images.

- Higher layers in the network capture the high-level content in terms of objects and their arrangement in the input image but do not constrain the exact pixel values of the reconstruction. In contrast, reconstructions from the lower layers simply reproduce the exact pixel values of the original image.
- We therefore refer to the feature responses in higher layers of the network as the **content representation** and in lower layers as the **style representation**.
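The two representations can be written down directly. A hedged sketch (PyTorch assumed; tensor shapes are illustrative), following the paper's formulation: the content representation is a layer's raw feature responses, while the style representation is the Gram matrix of those feature maps, which keeps filter correlations but discards spatial arrangement:

```python
import torch

def content_representation(features):
    # Content: the feature responses themselves, taken from a higher layer.
    return features

def gram_matrix(features):
    # Style: correlations between feature maps of one layer (the Gram matrix),
    # normalized by the number of elements.
    b, c, h, w = features.shape
    f = features.view(b, c, h * w)            # flatten each feature map
    return f @ f.transpose(1, 2) / (c * h * w)

features = torch.randn(1, 16, 32, 32)  # feature maps from one layer (illustrative)
gram = gram_matrix(features)
print(gram.shape)  # torch.Size([1, 16, 16])
```

In the full algorithm, a generated image is optimized so that its higher-layer features match those of the content image while its Gram matrices match those of the style image.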