markdown stringlengths 0-37k | code stringlengths 1-33.3k | path stringlengths 8-215 | repo_name stringlengths 6-77 | license stringclasses 15 values
---|---|---|---|---|
<a name="loading-a-dataset"></a>
Loading a Dataset
Let's use the Celeb Dataset just for demonstration purposes. In Part 2, you can explore using your own dataset. This code is exactly the same as what we used in Session 3's homework with the VAE.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> | # You'll want to change this to your own data if you end up training your own GAN.
batch_size = 64
n_epochs = 1
crop_shape = [n_pixels, n_pixels, 3]
crop_factor = 0.8
input_shape = [218, 178, 3]
files = datasets.CELEB()
batch = dataset_utils.create_input_pipeline(
files=files,
batch_size=batch_size,
n_epochs=n_epochs,
crop_shape=crop_shape,
crop_factor=crop_factor,
shape=input_shape) | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
<a name="training"></a>
Training
We'll now go through the setup of training the network. We won't actually spend the time to train the network but just see how it would be done. This is because in Part 2, we'll see an extension to this network which makes it much easier to train. | ckpt_name = './gan.ckpt'
sess = tf.Session()
saver = tf.train.Saver()
sess.run(tf.global_variables_initializer())
coord = tf.train.Coordinator()
tf.get_default_graph().finalize()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
if os.path.exists(ckpt_name + '.index') or os.path.exists(ckpt_name):
saver.restore(sess, ckpt_name)
print("VAE model restored.")
n_examples = 10
zs = np.random.uniform(0.0, 1.0, [4, n_latent]).astype(np.float32)
zs = utils.make_latent_manifold(zs, n_examples) | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
<a name="equilibrium"></a>
Equilibrium
Equilibrium is at 0.693. Why? Consider what the cost is measuring: the binary cross entropy. If we have random guesses, then we have as many 0s as we have 1s, and on average we'll be 50% correct. The binary cross entropy is:
\begin{align}
-\sum_i \Big( \text{X}_i \cdot \text{log}(\tilde{\text{X}}_i) + (1 - \text{X}_i) \cdot \text{log}(1 - \tilde{\text{X}}_i) \Big)
\end{align}
Which is written out in tensorflow as:
```python
(-(x * tf.log(z) + (1. - x) * tf.log(1. - z)))
```
Where x is the target (for the discriminator, 1 for real input images and 0 for generated ones), and z is the discriminator's prediction, corresponding to $\tilde{\text{X}}$ in the notation above. We sum over all features, but in the case of the discriminator, we have just 1 feature: the guess of whether it is a true image or not. If our discriminator guesses at chance, i.e. 0.5, then we'd have something like:
\begin{align}
-\big(0.5 \cdot \text{log}(0.5) + (1 - 0.5) \cdot \text{log}(1 - 0.5)\big) = 0.693
\end{align}
So this is what we'd expect at the start of learning and from a game theoretic point of view, where we want things to remain. So unlike our previous networks, where our loss continues to drop closer and closer to 0, we want our loss to waver around this value as much as possible, and hope for the best. | equilibrium = 0.693
margin = 0.2 | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
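To check the 0.693 figure yourself, here is a tiny, self-contained numpy sketch (separate from the training graph) that computes the binary cross entropy of a chance prediction:

```python
import numpy as np

def binary_cross_entropy(x, z, eps=1e-12):
    # Cross entropy of a prediction z against a target x (the formula above).
    return -(x * np.log(z + eps) + (1. - x) * np.log(1. - z + eps))

# A discriminator guessing at chance predicts 0.5 regardless of the target:
print(binary_cross_entropy(x=1.0, z=0.5))  # ~0.693 for a real image (target 1)
print(binary_cross_entropy(x=0.0, z=0.5))  # ~0.693 for a generated image (target 0)
print(np.log(2.0))                         # 0.6931..., the equilibrium value
```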
When we go to train the network, we switch back and forth between each optimizer, feeding in the appropriate values for each optimizer. The opt_g optimizer only requires the Z and lr_g placeholders, while the opt_d optimizer requires the X, Z, and lr_d placeholders.
Don't train this network for very long, because GANs are a huge pain to train and require a lot of fiddling. They very easily get stuck in their adversarial process, or one network overtakes the other, resulting in a useless model. What you need to develop is a steady equilibrium that optimizes both. You could easily spend two weeks just trying to get the GAN to train and not have enough time for the rest of the assignment. GANs also require a lot of memory/CPU and can take many days to train once you have settled on an architecture/training process/dataset. Just let it run for a short time, then interrupt the kernel (don't restart!), and continue to the next cell.
From there, we'll go over an extension to the GAN which uses a VAE like we used in Session 3. By using this extra network, we can actually train a better model in a fraction of the time and with much more ease! But the network's definition is a bit more complicated. Let's see how the GAN is trained first and then we'll train the VAE/GAN network instead. While training, the "real" and "fake" cost will be printed out. See how this cost wavers around the equilibrium and how we encourage it to stay around there by including a margin and some simple logic for updates. This is highly experimental, and the research does not have a good answer for the best practice on how to train a GAN. For example, some people will set the learning rate to some ratio of the performance between the fake/real networks, while others will have a fixed update schedule but train the generator twice and the discriminator only once. | t_i = 0
batch_i = 0
epoch_i = 0
n_files = len(files)
if not os.path.exists('imgs'):
os.makedirs('imgs')
while epoch_i < n_epochs:
batch_i += 1
batch_xs = sess.run(batch) / 255.0
batch_zs = np.random.uniform(
0.0, 1.0, [batch_size, n_latent]).astype(np.float32)
real_cost, fake_cost = sess.run([
loss_D_real, loss_D_fake],
feed_dict={
X: batch_xs,
Z: batch_zs})
real_cost = np.mean(real_cost)
fake_cost = np.mean(fake_cost)
if (batch_i % 20) == 0:
print(batch_i, 'real:', real_cost, '/ fake:', fake_cost)
gen_update = True
dis_update = True
if real_cost > (equilibrium + margin) or \
fake_cost > (equilibrium + margin):
gen_update = False
if real_cost < (equilibrium - margin) or \
fake_cost < (equilibrium - margin):
dis_update = False
if not (gen_update or dis_update):
gen_update = True
dis_update = True
if gen_update:
sess.run(opt_g,
feed_dict={
Z: batch_zs,
lr_g: learning_rate})
if dis_update:
sess.run(opt_d,
feed_dict={
X: batch_xs,
Z: batch_zs,
lr_d: learning_rate})
if batch_i % (n_files // batch_size) == 0:
batch_i = 0
epoch_i += 1
print('---------- EPOCH:', epoch_i)
# Plot example reconstructions from latent layer
recon = sess.run(G, feed_dict={Z: zs})
recon = np.clip(recon, 0, 1)
m1 = utils.montage(recon.reshape([-1] + crop_shape),
'imgs/manifold_%08d.png' % t_i)
recon = sess.run(G, feed_dict={Z: batch_zs})
recon = np.clip(recon, 0, 1)
m2 = utils.montage(recon.reshape([-1] + crop_shape),
'imgs/reconstructions_%08d.png' % t_i)
fig, axs = plt.subplots(1, 2, figsize=(15, 10))
axs[0].imshow(m1)
axs[1].imshow(m2)
plt.show()
t_i += 1
# Save the variables to disk.
save_path = saver.save(sess, "./" + ckpt_name,
global_step=batch_i,
write_meta_graph=False)
print("Model saved in file: %s" % save_path)
# Tell all the threads to shutdown.
coord.request_stop()
# Wait until all threads have finished.
coord.join(threads)
# Clean up the session.
sess.close() | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
<a name="part-2---variational-auto-encoding-generative-adversarial-network-vaegan"></a>
Part 2 - Variational Auto-Encoding Generative Adversarial Network (VAEGAN)
In our definition of the generator, we started with a feature vector, Z. This feature vector was not connected to anything before it. Instead, we had to create its values randomly, using a random number generator to sample its n_latent values from -1 to 1, a range that was chosen arbitrarily. It could have been 0 to 1, or -3 to 3, or 0 to 100. In any case, the network would have had to learn to transform those values into something that looked like an image. There was no way for us to take an image and find the feature vector that created it. In other words, it was not possible for us to encode an image.
The closest thing to an encoding we had was taking an image and feeding it to the discriminator, which would output a 0 or 1. But what if we had another network that allowed us to encode an image, and then we used this network for both the discriminator and generative parts of the network? That's the basic idea behind the VAEGAN: https://arxiv.org/abs/1512.09300. It is just like the regular GAN, except we also use an encoder to create our feature vector Z.
We then get the best of both worlds: a GAN that looks more or less the same, but uses the encoding from an encoder instead of an arbitrary feature vector; and an autoencoder that can model an input distribution using a trained distance function, the discriminator, leading to nicer encodings/decodings.
Let's try to build it! Refer to the paper for the intricacies and a great read. Luckily, by building the encoder and decoder functions, we're almost there. We just need a few more components and will change these slightly.
Let's reset our graph and recompose our network as a VAEGAN: | tf.reset_default_graph() | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
<a name="batch-normalization"></a>
Batch Normalization
You may have noticed from the VAE code that I've used something called "batch normalization". This is a pretty effective technique for regularizing the training of networks by "reducing internal covariate shift". The basic idea is that, given a minibatch, we optimize the gradient for this small sample of the greater population. But this small sample may have different characteristics than the entire population's gradient. Consider the most extreme case, a minibatch of 1: we overfit to the gradient of that single observation. If our minibatch is too large, say the size of the entire population, we aren't able to maneuver across the loss manifold at all, and the entire loss is averaged in a way that doesn't let us optimize anything. What we want to do is find a happy medium between a too-smooth loss surface (i.e. every observation) and a very peaky loss surface (i.e. a single observation). Up until now we have only used mini-batches to help with this. But we can also approach it by "smoothing" our updates between each mini-batch. That would effectively smooth the manifold of the loss space. Those of you familiar with signal processing will see this as a sort of low-pass filter on the gradient updates.
In order for us to use batch normalization, we need another placeholder which is a simple boolean: True or False, denoting when we are training. We'll use this placeholder to conditionally update batch normalization's statistics required for normalizing our minibatches. Let's create the placeholder and then I'll get into how to use this. | # placeholder for batch normalization
is_training = tf.placeholder(tf.bool, name='istraining') | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
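To make the idea more concrete, here is a minimal numpy sketch of what batch normalization computes (this is a hypothetical helper for illustration only, not the implementation we'll actually use, and the learnable scale and offset are left out): normalize with the minibatch's own statistics while training, keep a smoothed running average of those statistics, and use the running averages at test time:

```python
import numpy as np

def batch_norm_sketch(x, running_mean, running_var, training,
                      decay=0.999, eps=1e-5):
    """Illustrative only. x: [batch, features]. Returns normalized x and updated running stats."""
    if training:
        mu, var = x.mean(axis=0), x.var(axis=0)
        # Low-pass filter the statistics across minibatches
        running_mean = decay * running_mean + (1 - decay) * mu
        running_var = decay * running_var + (1 - decay) * var
    else:
        # At test time, rely on the accumulated averages instead of the batch
        mu, var = running_mean, running_var
    x_hat = (x - mu) / np.sqrt(var + eps)
    return x_hat, running_mean, running_var
```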
The original paper that introduced the idea suggests using batch normalization "pre-activation", meaning after the weight multiplication or convolution, and before the nonlinearity. We can use the tensorflow.contrib.layers.batch_norm module to apply batch normalization to any input tensor, given the tensor and the placeholder defining whether or not we are training. Let's use this module; you can inspect the code inside the module in your own time if it interests you. | from tensorflow.contrib.layers import batch_norm
help(batch_norm) | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
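As a quick sketch of the "pre-activation" placement described above (assuming the course's utils.conv2d helper behaves as it does in the encoder below, and using the is_training placeholder we just created; x_in is a stand-in for whatever 4-D input tensor you already have), a single layer would look something like this:

```python
with tf.variable_scope('conv_example'):
    # `x_in` is a hypothetical [batch, height, width, channels] input tensor.
    # Convolve, normalize with batch norm, and only then apply the nonlinearity.
    h, W = utils.conv2d(x_in, 64, k_h=5, k_w=5)
    h = batch_norm(h, is_training=is_training)  # "pre-activation" normalization
    h = tf.nn.elu(h)                            # nonlinearity applied last
```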
<a name="building-the-encoder-1"></a>
Building the Encoder
We can now change our encoder to accept the is_training placeholder and apply batch_norm just before the activation function is applied: | def encoder(x, is_training, channels, filter_sizes, activation=tf.nn.tanh, reuse=None):
# Set the input to a common variable name, h, for hidden layer
h = x
print('encoder/input:', h.get_shape().as_list())
# Now we'll loop over the list of dimensions defining the number
# of output filters in each layer, and collect each hidden layer
hs = []
for layer_i in range(len(channels)):
with tf.variable_scope('layer{}'.format(layer_i+1), reuse=reuse):
# Convolve using the utility convolution function
# This requires the number of output filters,
# and the size of the kernel in `k_h` and `k_w`.
# By default, this will use a stride of 2, meaning
# each new layer will be downsampled by 2.
h, W = utils.conv2d(h, channels[layer_i],
k_h=filter_sizes[layer_i],
k_w=filter_sizes[layer_i],
d_h=2,
d_w=2,
reuse=reuse)
h = batch_norm(h, is_training=is_training)
# Now apply the activation function
h = activation(h)
print('layer:', layer_i, ', shape:', h.get_shape().as_list())
# Store each hidden layer
hs.append(h)
# Finally, return the encoding.
return h, hs | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
Let's now create the input to the network using a placeholder. We can try a slightly larger image this time. But be careful experimenting with much larger images as this is a big network.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> | n_pixels = 64
n_channels = 3
input_shape = [None, n_pixels, n_pixels, n_channels]
# placeholder for the input to the network
X = tf.placeholder(...) | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
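One way the TODO above could be filled in (a sketch, assuming 32-bit float images of the shape we just defined in input_shape):

```python
# Placeholder for the input images, shaped [None, n_pixels, n_pixels, n_channels]
X = tf.placeholder(name='X', shape=input_shape, dtype=tf.float32)
```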
And now we'll connect the input to an encoder network. We'll also use the tf.nn.elu activation instead. Explore other activations but I've found this to make the training much faster (e.g. 10x faster at least!). See the paper for more details: Fast and Accurate Deep Network Learning by Exponential Linear Units (ELUs)
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> | channels = [64, 64, 64]
filter_sizes = [5, 5, 5]
activation = tf.nn.elu
n_hidden = 128
with tf.variable_scope('encoder'):
H, Hs = encoder(...
Z = utils.linear(H, n_hidden)[0] | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
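A sketch of how the encoder call above could be completed, passing in the placeholders and hyperparameters we just defined (argument names follow the encoder function from earlier):

```python
with tf.variable_scope('encoder'):
    H, Hs = encoder(x=X, is_training=is_training,
                    channels=channels, filter_sizes=filter_sizes,
                    activation=activation)
    # Project the final convolutional encoding down to n_hidden values
    Z = utils.linear(H, n_hidden)[0]
```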
<a name="building-the-variational-layer"></a>
Building the Variational Layer
In Session 3, we introduced the idea of Variational Bayes when we used the Variational Auto Encoder. The variational Bayesian approach requires a richer understanding of probabilistic graphical models and Bayesian methods, which we weren't able to go over in this course (it would require a few courses all by itself!). For that reason, please treat this part as a "black box" in this course.
For those of you that are more familiar with graphical models, Variational Bayesian methods attempt to model an approximate joint distribution of $Q(Z)$ using some distance function to the true distribution $P(X)$. Kingma and Welling show how this approach can be used in a graphical model resembling an autoencoder and can be trained using the KL-divergence, $KL(Q(Z) || P(X))$. The distribution $Q(Z)$ is the variational distribution, and it attempts to model the lower bound of the true distribution $P(X)$ through the minimization of the KL-divergence. Another way to look at this is that the encoder of the network is trying to model the parameters of a known distribution, the Gaussian distribution, through a minimization of this lower bound. We assume that this distribution resembles the true distribution, but it is merely a simplification of the true distribution. To learn more about this, I highly recommend picking up the book by Christopher Bishop called "Pattern Recognition and Machine Learning" and reading the original Kingma and Welling paper on Variational Bayes.
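For reference, when the approximate posterior is a diagonal Gaussian $\mathcal{N}(\mu, \sigma^2)$ and the prior is a standard normal, this KL term has a closed form, which is what the loss_z computation in the code below implements:
\begin{align}
KL\big(\mathcal{N}(\mu, \sigma^2) \,\|\, \mathcal{N}(0, 1)\big) = -\frac{1}{2} \sum_i \left(1 + 2\,\text{log}\,\sigma_i - \mu_i^2 - \sigma_i^2\right)
\end{align}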
Now back to coding, we'll create a general variational layer that does exactly the same thing as our VAE in session 3. Treat this as a black box if you are unfamiliar with the math. It takes an input encoding, h, and an integer, n_code defining how many latent Gaussians to use to model the latent distribution. In return, we get the latent encoding from sampling the Gaussian layer, z, the mean and log standard deviation, as well as the prior loss, loss_z. | def variational_bayes(h, n_code):
# Model mu and log(\sigma)
z_mu = tf.nn.tanh(utils.linear(h, n_code, name='mu')[0])
z_log_sigma = 0.5 * tf.nn.tanh(utils.linear(h, n_code, name='log_sigma')[0])
# Sample from noise distribution p(eps) ~ N(0, 1)
epsilon = tf.random_normal(tf.stack([tf.shape(h)[0], n_code]))
# Sample from posterior
z = z_mu + tf.multiply(epsilon, tf.exp(z_log_sigma))
# Measure loss
loss_z = -0.5 * tf.reduce_sum(
1.0 + 2.0 * z_log_sigma - tf.square(z_mu) - tf.exp(2.0 * z_log_sigma),
1)
return z, z_mu, z_log_sigma, loss_z | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
Let's connect this layer to our encoding, and keep all the variables it returns. Treat this as a black box if you are unfamiliar with variational bayes!
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> | # Experiment w/ values between 2 - 100
# depending on how difficult the dataset is
n_code = 32
with tf.variable_scope('encoder/variational'):
Z, Z_mu, Z_log_sigma, loss_Z = variational_bayes(h=Z, n_code=n_code) | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
<a name="building-the-decoder-1"></a>
Building the Decoder
In the GAN network, we built a decoder and called it the generator network. Same idea here. We can use these terms interchangeably. Before we connect our latent encoding, Z to the decoder, we'll implement batch norm in our decoder just like we did with the encoder. This is a simple fix: add a second argument for is_training and then apply batch normalization just after the deconv2d operation and just before the nonlinear activation. | def decoder(z, is_training, dimensions, channels, filter_sizes,
activation=tf.nn.elu, reuse=None):
h = z
for layer_i in range(len(dimensions)):
with tf.variable_scope('layer{}'.format(layer_i+1), reuse=reuse):
h, W = utils.deconv2d(x=h,
n_output_h=dimensions[layer_i],
n_output_w=dimensions[layer_i],
n_output_ch=channels[layer_i],
k_h=filter_sizes[layer_i],
k_w=filter_sizes[layer_i],
reuse=reuse)
h = batch_norm(h, is_training=is_training)
h = activation(h)
return h | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
Now we'll build a decoder just like in Session 3, and just like our generator network in Part 1. In Part 1, we created Z as a placeholder which we had to feed with random values. However, now that we have created the encoder network, we have an explicit encoding of the input image X stored in Z. | dimensions = [n_pixels // 8, n_pixels // 4, n_pixels // 2, n_pixels]
channels = [30, 30, 30, n_channels]
filter_sizes = [4, 4, 4, 4]
activation = tf.nn.elu
n_latent = n_code * (n_pixels // 16)**2
with tf.variable_scope('generator'):
Z_decode = utils.linear(
Z, n_output=n_latent, name='fc', activation=activation)[0]
Z_decode_tensor = tf.reshape(
Z_decode, [-1, n_pixels//16, n_pixels//16, n_code], name='reshape')
G = decoder(
Z_decode_tensor, is_training, dimensions,
channels, filter_sizes, activation) | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
Now we need to build our discriminators. We'll need to add a parameter for the is_training placeholder. We're also going to keep track of every hidden layer in the discriminator. Our encoder already returns the Hs of each layer. Alternatively, we could poll the graph for each layer in the discriminator and ask for the corresponding layer names. We're going to need these layers when building our costs. | def discriminator(X,
is_training,
channels=[50, 50, 50, 50],
filter_sizes=[4, 4, 4, 4],
activation=tf.nn.elu,
reuse=None):
# We'll scope these variables to "discriminator_real"
with tf.variable_scope('discriminator', reuse=reuse):
H, Hs = encoder(
X, is_training, channels, filter_sizes, activation, reuse)
shape = H.get_shape().as_list()
H = tf.reshape(
H, [-1, shape[1] * shape[2] * shape[3]])
D, W = utils.linear(
x=H, n_output=1, activation=tf.nn.sigmoid, name='fc', reuse=reuse)
return D, Hs | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
Recall that the regular GAN and DCGAN required two copies of the discriminator (sharing the same weights): one for the generated samples coming from Z, and one for the input samples in X. We'll do the same thing here: one discriminator for the real input data, X, which the discriminator will try to predict as 1s, and another for the generated samples that go from X through the encoder to Z, and finally through the decoder to G. The discriminator will be trained to predict these as 0s, whereas the generator will be trained so that the discriminator predicts them as 1s. | D_real, Hs_real = discriminator(X, is_training)
D_fake, Hs_fake = discriminator(G, is_training, reuse=True) | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
<a name="building-vaegan-loss-functions"></a>
Building VAE/GAN Loss Functions
Let's now see how we can compose our loss. We have three losses: one each for the encoder, generator, and discriminator. Along with measuring the binary cross entropy of the discriminator's predictions, we're also going to measure each layer's difference between our two discriminator passes using an l2-loss, and this will form our loss for the log likelihood measure. The details of how these are constructed are explained in more detail in the paper: https://arxiv.org/abs/1512.09300 - please refer to this paper for details that are way beyond the scope of this course! One parameter to pay attention to is gamma, which the authors of the paper suggest controls the weighting between content and style, just like in Session 4's Style Net implementation. | with tf.variable_scope('loss'):
# Loss functions
loss_D_llike = 0
for h_real, h_fake in zip(Hs_real, Hs_fake):
loss_D_llike += tf.reduce_sum(tf.squared_difference(
utils.flatten(h_fake), utils.flatten(h_real)), 1)
eps = 1e-12
loss_real = tf.log(D_real + eps)
loss_fake = tf.log(1 - D_fake + eps)
loss_GAN = tf.reduce_sum(loss_real + loss_fake, 1)
gamma = 0.75
loss_enc = tf.reduce_mean(loss_Z + loss_D_llike)
loss_dec = tf.reduce_mean(gamma * loss_D_llike - loss_GAN)
loss_dis = -tf.reduce_mean(loss_GAN)
nb_utils.show_graph(tf.get_default_graph().as_graph_def()) | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
<a name="creating-the-optimizers"></a>
Creating the Optimizers
We now have losses for our encoder, decoder, and discriminator networks. We can connect each of these to their own optimizer and start training! Just like with Part 1's GAN, we'll ensure each network's optimizer only trains its part of the network: the encoder's optimizer will only update the encoder variables, the generator's optimizer will only update the generator variables, and the discriminator's optimizer will only update the discriminator variables.
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> | learning_rate = 0.0001
opt_enc = tf.train.AdamOptimizer(
learning_rate=learning_rate).minimize(
loss_enc,
var_list=[var_i for var_i in tf.trainable_variables()
if ...])
opt_gen = tf.train.AdamOptimizer(
learning_rate=learning_rate).minimize(
loss_dec,
var_list=[var_i for var_i in tf.trainable_variables()
if ...])
opt_dis = tf.train.AdamOptimizer(
learning_rate=learning_rate).minimize(
loss_dis,
var_list=[var_i for var_i in tf.trainable_variables()
if var_i.name.startswith('discriminator')]) | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
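A sketch of how the two TODO filters above could be completed, following the same startswith pattern used for the discriminator (the scopes 'encoder' and 'generator' were set when we built those parts of the graph):

```python
# Encoder optimizer: only update variables created under the 'encoder' scope
opt_enc = tf.train.AdamOptimizer(
    learning_rate=learning_rate).minimize(
    loss_enc,
    var_list=[var_i for var_i in tf.trainable_variables()
              if var_i.name.startswith('encoder')])

# Generator optimizer: only update variables created under the 'generator' scope
opt_gen = tf.train.AdamOptimizer(
    learning_rate=learning_rate).minimize(
    loss_dec,
    var_list=[var_i for var_i in tf.trainable_variables()
              if var_i.name.startswith('generator')])
```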
<a name="loading-the-dataset"></a>
Loading the Dataset
We'll now load our dataset just like in Part 1. Here is where you should explore with your own data!
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> | from libs import datasets, dataset_utils
batch_size = 64
n_epochs = 100
crop_shape = [n_pixels, n_pixels, n_channels]
crop_factor = 0.8
input_shape = [218, 178, 3]
# Try w/ CELEB first to make sure it works, then explore w/ your own dataset.
files = datasets.CELEB()
batch = dataset_utils.create_input_pipeline(
files=files,
batch_size=batch_size,
n_epochs=n_epochs,
crop_shape=crop_shape,
crop_factor=crop_factor,
shape=input_shape) | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
We'll also create a latent manifold just like we've done in Session 3 and Part 1. This is a random sampling of 4 points in the latent space of Z. We then interpolate between them to create a "hyper-plane" and show the decoding of 10 x 10 points on that hyperplane. | n_samples = 10
zs = np.random.uniform(
-1.0, 1.0, [4, n_code]).astype(np.float32)
zs = utils.make_latent_manifold(zs, n_samples) | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
Now create a session and create a coordinator to manage our queues for fetching data from the input pipeline and start our queue runners: | # We create a session to use the graph
sess = tf.Session()
init_op = tf.global_variables_initializer()
saver = tf.train.Saver()
coord = tf.train.Coordinator()
threads = tf.train.start_queue_runners(sess=sess, coord=coord)
sess.run(init_op) | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
Load an existing checkpoint if it exists to continue training. | if os.path.exists("vaegan.ckpt"):
saver.restore(sess, "vaegan.ckpt")
print("GAN model restored.") | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
We'll also try resynthesizing a test set of images. This will help us understand how well the encoder/decoder network is doing: | n_files = len(files)
test_xs = sess.run(batch) / 255.0
if not os.path.exists('imgs'):
os.mkdir('imgs')
m = utils.montage(test_xs, 'imgs/test_xs.png')
plt.imshow(m) | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
<a name="training-1"></a>
Training
Almost ready for training. Let's get some variables which we'll need. These are the same as Part 1's training process. We'll keep track of t_i which we'll use to create images of the current manifold and reconstruction every so many iterations. And we'll keep track of the current batch number within the epoch and the current epoch number. | t_i = 0
batch_i = 0
epoch_i = 0
ckpt_name = './vaegan.ckpt' | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
Just like in Part 1, we'll train trying to maintain an equilibrium between our Generator and Discriminator networks. You should experiment with the margin depending on how the training proceeds. | equilibrium = 0.693
margin = 0.4 | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
Now we'll train! Just like Part 1, we measure the real_cost and fake_cost. But this time, we'll always update the encoder. Based on the performance of the real/fake costs, then we'll update generator and discriminator networks. This will take a long time to produce something nice, but not nearly as long as the regular GAN network despite the additional parameters of the encoder and variational networks. Be sure to monitor the reconstructions to understand when your network has reached the capacity of its learning! For reference, on Celeb Net, I would use about 5 layers in each of the Encoder, Generator, and Discriminator networks using as input a 100 x 100 image, and a minimum of 200 channels per layer. This network would take about 1-2 days to train on an Nvidia TITAN X GPU. | while epoch_i < n_epochs:
if batch_i % (n_files // batch_size) == 0:
batch_i = 0
epoch_i += 1
print('---------- EPOCH:', epoch_i)
batch_i += 1
batch_xs = sess.run(batch) / 255.0
real_cost, fake_cost, _ = sess.run([
loss_real, loss_fake, opt_enc],
feed_dict={
X: batch_xs,
is_training: True})
real_cost = -np.mean(real_cost)
fake_cost = -np.mean(fake_cost)
gen_update = True
dis_update = True
if real_cost > (equilibrium + margin) or \
fake_cost > (equilibrium + margin):
gen_update = False
if real_cost < (equilibrium - margin) or \
fake_cost < (equilibrium - margin):
dis_update = False
if not (gen_update or dis_update):
gen_update = True
dis_update = True
if gen_update:
sess.run(opt_gen, feed_dict={
X: batch_xs,
is_training: True})
if dis_update:
sess.run(opt_dis, feed_dict={
X: batch_xs,
is_training: True})
if batch_i % 50 == 0:
print('real:', real_cost, '/ fake:', fake_cost)
# Plot example reconstructions from latent layer
recon = sess.run(G, feed_dict={
Z: zs,
is_training: False})
recon = np.clip(recon, 0, 1)
m1 = utils.montage(recon.reshape([-1] + crop_shape),
'imgs/manifold_%08d.png' % t_i)
# Plot example reconstructions
recon = sess.run(G, feed_dict={
X: test_xs,
is_training: False})
recon = np.clip(recon, 0, 1)
m2 = utils.montage(recon.reshape([-1] + crop_shape),
'imgs/reconstruction_%08d.png' % t_i)
fig, axs = plt.subplots(1, 2, figsize=(15, 10))
axs[0].imshow(m1)
axs[1].imshow(m2)
plt.show()
t_i += 1
if batch_i % 200 == 0:
# Save the variables to disk.
save_path = saver.save(sess, "./" + ckpt_name,
global_step=batch_i,
write_meta_graph=False)
print("Model saved in file: %s" % save_path)
# One of the threads has issued an exception. So let's tell all the
# threads to shutdown.
coord.request_stop()
# Wait until all threads have finished.
coord.join(threads)
# Clean up the session.
sess.close() | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
<a name="part-3---latent-space-arithmetic"></a>
Part 3 - Latent-Space Arithmetic
<a name="loading-the-pre-trained-model"></a>
Loading the Pre-Trained Model
We're now going to work with a pre-trained VAEGAN model on the Celeb Net dataset. Let's load this model: | tf.reset_default_graph()
from libs import celeb_vaegan as CV
net = CV.get_celeb_vaegan_model() | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
We'll load the graph_def contained inside this dictionary. It follows the same idea as the inception, vgg16, and i2v pretrained networks: it is a dictionary with the key graph_def defined, holding the graph's pretrained network. It also includes labels and a preprocess key. One additional thing we'll have to do is turn off the random sampling from the variational layer. This isn't really necessary, but it will ensure we get the same results each time we use the network. We'll use the input_map argument to do this. Don't worry if this doesn't make any sense, as we didn't cover the variational layer in any depth. Just know that this is removing a random process from the network so that it is completely deterministic. If we hadn't done this, we'd get slightly different results each time we used the network (which may even be desirable for your purposes). | sess = tf.Session()
g = tf.get_default_graph()
tf.import_graph_def(net['graph_def'], name='net', input_map={
'encoder/variational/random_normal:0': np.zeros(512, dtype=np.float32)})
names = [op.name for op in g.get_operations()]
print(names) | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
Now let's get the relevant parts of the network: X, the input image to the network, Z, the input image's encoding, and G, the decoded image. In many ways, this is just like the Autoencoders we learned about in Session 3, except instead of Y being the output, we have G from our generator! And the way we train it is very different: we use an adversarial process between the generator and discriminator, and use the discriminator's own distance measure to help train the network, rather than pixel-to-pixel differences. | X = g.get_tensor_by_name('net/x:0')
Z = g.get_tensor_by_name('net/encoder/variational/z:0')
G = g.get_tensor_by_name('net/generator/x_tilde:0') | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
Let's get some data to play with: | files = datasets.CELEB()
img_i = 50
img = plt.imread(files[img_i])
plt.imshow(img) | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
Now preprocess the image, and see what the generated image looks like (i.e. the lossy version of the image through the network's encoding and decoding). | p = CV.preprocess(img)
synth = sess.run(G, feed_dict={X: p[np.newaxis]})
fig, axs = plt.subplots(1, 2, figsize=(10, 5))
axs[0].imshow(p)
axs[1].imshow(synth[0] / synth.max()) | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
So we lost a lot of detail, but the network still seems able to express quite a bit about the image. Our innermost layer, Z, is only 512 values, yet our dataset was 200k images of 64 x 64 x 3 pixels (about 2.3 GB of information). That means we're able to express each image from our nearly 2.3 GB of data with only 512 values! Having some loss of detail is certainly expected.
<a name="exploring-the-celeb-net-attributes"></a>
Exploring the Celeb Net Attributes
Let's now try and explore the attributes of our dataset. We didn't train the network with any supervised labels, but the Celeb Net dataset has 40 attributes for each of its 200k images. These are already parsed and stored for you in the net dictionary: | net.keys()
len(net['labels'])
net['labels'] | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
Let's see what attributes exist for one of the celeb images: | plt.imshow(img)
[net['labels'][i] for i, attr_i in enumerate(net['attributes'][img_i]) if attr_i] | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
<a name="find-the-latent-encoding-for-an-attribute"></a>
Find the Latent Encoding for an Attribute
The Celeb Dataset includes attributes for each of its 200k+ images. This allows us to feed into the encoder some images that we know have a specific attribute, e.g. "smiling". We store what their encoding is and retain this distribution of encoded values. We can then look at any other image and see how it is encoded, and slightly change the encoding by adding the encoding of our smiling images to it! The result should be our image but with more smiling. That is just insane and we're going to see how to do it. First let's inspect our latent space: | Z.get_shape()
We have 512 features that we can encode any image with. Assuming our network is doing an okay job, let's try to find the Z of the first 100 images with the 'Bald' attribute: | bald_label = net['labels'].index('Bald')
bald_label | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
Let's get all the bald image indexes: | bald_img_idxs = np.where(net['attributes'][:, bald_label])[0]
bald_img_idxs | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
Now let's just load 100 of their images: | bald_imgs = [plt.imread(files[bald_img_i])[..., :3]
for bald_img_i in bald_img_idxs[:100]] | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
Let's see if the mean image looks like a good bald person or not: | plt.imshow(np.mean(bald_imgs, 0).astype(np.uint8)) | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
Yes that is definitely a bald person. Now we're going to try to find the encoding of a bald person. One method is to try and find every other possible image and subtract the "bald" person's latent encoding. Then we could add this encoding back to any new image and hopefully it makes the image look more bald. Or we can find a bunch of bald people's encodings and then average their encodings together. This should reduce the noise from having many different attributes, but keep the signal pertaining to the baldness.
Let's first preprocess the images: | bald_p = np.array([CV.preprocess(bald_img_i) for bald_img_i in bald_imgs]) | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
Now we can find the latent encoding of the images by calculating Z and feeding X with our bald_p images:
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> | bald_zs = sess.run(Z, feed_dict=... | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
Now let's calculate the mean encoding: | bald_feature = np.mean(bald_zs, 0, keepdims=True)
bald_feature.shape | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
Let's try and synthesize from the mean bald feature now and see how it looks:
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> | bald_generated = sess.run(G, feed_dict=...
plt.imshow(bald_generated[0] / bald_generated.max()) | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
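A sketch of how the synthesis above could be completed: feed the mean bald encoding into Z and fetch the generator output G:

```python
bald_generated = sess.run(G, feed_dict={Z: bald_feature})
plt.imshow(bald_generated[0] / bald_generated.max())
```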
<a name="latent-feature-arithmetic"></a>
Latent Feature Arithmetic
Let's now try to write a general function for performing everything we've just done so that we can do this with many different features. We'll then try to combine them and synthesize people with the features we want them to have... | def get_features_for(label='Bald', has_label=True, n_imgs=50):
label_i = net['labels'].index(label)
label_idxs = np.where(net['attributes'][:, label_i] == has_label)[0]
label_idxs = np.random.permutation(label_idxs)[:n_imgs]
imgs = [plt.imread(files[img_i])[..., :3]
for img_i in label_idxs]
preprocessed = np.array([CV.preprocess(img_i) for img_i in imgs])
zs = sess.run(Z, feed_dict={X: preprocessed})
return np.mean(zs, 0) | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
Let's try getting positive and negative features for some attributes. Be sure to explore different attributes! Also try different values of n_imgs, e.g. 2, 3, 5, 10, 50, 100. What happens with different values? | # Explore different attributes
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> | # Explore different attributes
z1 = get_features_for('Male', True, n_imgs=10)
z2 = get_features_for('Male', False, n_imgs=10)
z3 = get_features_for('Smiling', True, n_imgs=10)
z4 = get_features_for('Smiling', False, n_imgs=10)
b1 = sess.run(G, feed_dict={Z: z1[np.newaxis]})
b2 = sess.run(G, feed_dict={Z: z2[np.newaxis]})
b3 = sess.run(G, feed_dict={Z: z3[np.newaxis]})
b4 = sess.run(G, feed_dict={Z: z4[np.newaxis]})
fig, axs = plt.subplots(1, 4, figsize=(15, 6))
axs[0].imshow(b1[0] / b1.max()), axs[0].set_title('Male'), axs[0].grid('off'), axs[0].axis('off')
axs[1].imshow(b2[0] / b2.max()), axs[1].set_title('Not Male'), axs[1].grid('off'), axs[1].axis('off')
axs[2].imshow(b3[0] / b3.max()), axs[2].set_title('Smiling'), axs[2].grid('off'), axs[2].axis('off')
axs[3].imshow(b4[0] / b4.max()), axs[3].set_title('Not Smiling'), axs[3].grid('off'), axs[3].axis('off') | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
Now let's interpolate between the "Male" and "Not Male" categories: | notmale_vector = z2 - z1
n_imgs = 5
amt = np.linspace(0, 1, n_imgs)
zs = np.array([z1 + notmale_vector*amt_i for amt_i in amt])
g = sess.run(G, feed_dict={Z: zs})
fig, axs = plt.subplots(1, n_imgs, figsize=(20, 4))
for i, ax_i in enumerate(axs):
ax_i.imshow(np.clip(g[i], 0, 1))
ax_i.grid('off')
ax_i.axis('off') | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
And the same for smiling: | smiling_vector = z3 - z4
amt = np.linspace(0, 1, n_imgs)
zs = np.array([z4 + smiling_vector*amt_i for amt_i in amt])
g = sess.run(G, feed_dict={Z: zs})
fig, axs = plt.subplots(1, n_imgs, figsize=(20, 4))
for i, ax_i in enumerate(axs):
ax_i.imshow(np.clip(g[i] / g[i].max(), 0, 1))
ax_i.grid('off') | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
There's also no reason why we have to be within the boundaries of 0-1. We can extrapolate beyond, in, and around the space. | n_imgs = 5
amt = np.linspace(-1.5, 2.5, n_imgs)
zs = np.array([z4 + smiling_vector*amt_i for amt_i in amt])
g = sess.run(G, feed_dict={Z: zs})
fig, axs = plt.subplots(1, n_imgs, figsize=(20, 4))
for i, ax_i in enumerate(axs):
ax_i.imshow(np.clip(g[i], 0, 1))
ax_i.grid('off')
ax_i.axis('off') | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
<a name="extensions"></a>
Extensions
Tom White, Lecturer at Victoria University School of Design, also recently demonstrated an alternative way of interpolating using a sinusoidal interpolation. He's created some of the most impressive generative images out there and luckily for us he has detailed his process in the arxiv preprint: https://arxiv.org/abs/1609.04468 - as well, be sure to check out his twitter bot, https://twitter.com/smilevector - which adds smiles to people :) - Note that the network we're using is only trained on aligned faces that are frontally facing, though this twitter bot is capable of adding smiles to any face. I suspect that he is running a face detection algorithm such as AAM, CLM, or ASM, cropping the face, aligning it, and then running a similar algorithm to what we've done above. Or else, perhaps he has trained a new model on faces that are not aligned. In any case, it is well worth checking out!
Let's now try and use sinusoidal interpolation using his implementation in plat which I've copied below: | def slerp(val, low, high):
"""Spherical interpolation. val has a range of 0 to 1."""
if val <= 0:
return low
elif val >= 1:
return high
omega = np.arccos(np.dot(low/np.linalg.norm(low), high/np.linalg.norm(high)))
so = np.sin(omega)
return np.sin((1.0-val)*omega) / so * low + np.sin(val*omega)/so * high
amt = np.linspace(0, 1, n_imgs)
zs = np.array([slerp(amt_i, z1, z2) for amt_i in amt])
g = sess.run(G, feed_dict={Z: zs})
fig, axs = plt.subplots(1, n_imgs, figsize=(20, 4))
for i, ax_i in enumerate(axs):
ax_i.imshow(np.clip(g[i], 0, 1))
ax_i.grid('off')
ax_i.axis('off') | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
It's certainly worth trying especially if you are looking to explore your own model's latent space in new and interesting ways.
Let's try and load an image that we want to play with. We need an image as similar to the Celeb Dataset as possible. Unfortunately, we don't have access to the algorithm they used to "align" the faces, so we'll need to try and get as close as possible to an aligned face image. One way you can do this is to load up one of the celeb images and try and align an image to it using e.g. Photoshop or another photo editing software that lets you blend and move the images around. That's what I did for my own face... | img = plt.imread('parag.png')[..., :3]
img = CV.preprocess(img, crop_factor=1.0)[np.newaxis] | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
Let's see how the network encodes it: | img_ = sess.run(G, feed_dict={X: img})
fig, axs = plt.subplots(1, 2, figsize=(10, 5))
axs[0].imshow(img[0]), axs[0].grid('off')
axs[1].imshow(np.clip(img_[0] / np.max(img_), 0, 1)), axs[1].grid('off') | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
Notice how blurry the image is. Tom White's preprint suggests one way to sharpen the image is to find the "Blurry" attribute vector: | z1 = get_features_for('Blurry', True, n_imgs=25)
z2 = get_features_for('Blurry', False, n_imgs=25)
unblur_vector = z2 - z1
z = sess.run(Z, feed_dict={X: img})
n_imgs = 5
amt = np.linspace(0, 1, n_imgs)
zs = np.array([z[0] + unblur_vector * amt_i for amt_i in amt])
g = sess.run(G, feed_dict={Z: zs})
fig, axs = plt.subplots(1, n_imgs, figsize=(20, 4))
for i, ax_i in enumerate(axs):
ax_i.imshow(np.clip(g[i] / g[i].max(), 0, 1))
ax_i.grid('off')
ax_i.axis('off') | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
Notice that the image also gets brighter, and perhaps other features than simply the blurriness of the image change. Tom's preprint suggests that this is due to the correlation that blurred images have with other things such as the brightness of the image, possibly due to biases in labeling or how photographs are taken. He suggests that another way to unblur would be to synthetically blur a set of images and find the difference in the encoding between the real and blurred images. We can try it like so: | from scipy.ndimage import gaussian_filter
idxs = np.random.permutation(range(len(files)))
imgs = [plt.imread(files[idx_i]) for idx_i in idxs[:100]]
blurred = []
for img_i in imgs:
img_copy = np.zeros_like(img_i)
for ch_i in range(3):
img_copy[..., ch_i] = gaussian_filter(img_i[..., ch_i], sigma=3.0)
blurred.append(img_copy)
# Now let's preprocess the original images and the blurred ones
imgs_p = np.array([CV.preprocess(img_i) for img_i in imgs])
blur_p = np.array([CV.preprocess(img_i) for img_i in blurred])
# And then compute each of their latent features
noblur = sess.run(Z, feed_dict={X: imgs_p})
blur = sess.run(Z, feed_dict={X: blur_p})
synthetic_unblur_vector = np.mean(noblur - blur, 0)
n_imgs = 5
amt = np.linspace(0, 1, n_imgs)
zs = np.array([z[0] + synthetic_unblur_vector * amt_i for amt_i in amt])
g = sess.run(G, feed_dict={Z: zs})
fig, axs = plt.subplots(1, n_imgs, figsize=(20, 4))
for i, ax_i in enumerate(axs):
ax_i.imshow(np.clip(g[i], 0, 1))
ax_i.grid('off')
ax_i.axis('off') | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
For some reason, it also doesn't like my glasses very much. Let's try and add them back. | z1 = get_features_for('Eyeglasses', True)
z2 = get_features_for('Eyeglasses', False)
glass_vector = z1 - z2
z = sess.run(Z, feed_dict={X: img})
n_imgs = 5
amt = np.linspace(0, 1, n_imgs)
zs = np.array([z[0] + glass_vector * amt_i + unblur_vector * amt_i for amt_i in amt])
g = sess.run(G, feed_dict={Z: zs})
fig, axs = plt.subplots(1, n_imgs, figsize=(20, 4))
for i, ax_i in enumerate(axs):
ax_i.imshow(np.clip(g[i], 0, 1))
ax_i.grid('off')
ax_i.axis('off') | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
Well, more like sunglasses then. Let's try adding everything in there now! | n_imgs = 5
amt = np.linspace(0, 1.0, n_imgs)
zs = np.array([z[0] + glass_vector * amt_i + unblur_vector * amt_i + amt_i * smiling_vector for amt_i in amt])
g = sess.run(G, feed_dict={Z: zs})
fig, axs = plt.subplots(1, n_imgs, figsize=(20, 4))
for i, ax_i in enumerate(axs):
ax_i.imshow(np.clip(g[i], 0, 1))
ax_i.grid('off')
ax_i.axis('off') | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
Well it was worth a try anyway. We can also try with a lot of images and create a gif montage of the result: | n_imgs = 5
amt = np.linspace(0, 1.5, n_imgs)
z = sess.run(Z, feed_dict={X: imgs_p})
imgs = []
for amt_i in amt:
zs = z + synthetic_unblur_vector * amt_i + amt_i * smiling_vector
g = sess.run(G, feed_dict={Z: zs})
m = utils.montage(np.clip(g, 0, 1))
imgs.append(m)
gif.build_gif(imgs, saveto='celeb.gif')
ipyd.Image(url='celeb.gif?i={}'.format(
np.random.rand()), height=1000, width=1000) | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
Explore multiple feature vectors and apply them to images from the celeb dataset to produce animations of a face, saving the result as a GIF. Recall you can store each image frame in a list and then use the gif.build_gif function to create a gif. Explore your own syntheses and then include a gif of the different images you create as "celeb.gif" in the final submission. Perhaps try finding unexpected synthetic latent attributes in the same way that we created a blur attribute. You can check the documentation in scipy.ndimage for some other image processing techniques, for instance: http://www.scipy-lectures.org/advanced/image_processing/ - and see if you can find the encoding of another attribute that you then apply to your own images. You can even try it with many images and use the utils.montage function to create a large grid of images that evolves over your attributes. Or perhaps create a set of expressions. It's up to you; just explore!
<h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3> | imgs = []
... DO SOMETHING AWESOME ! ...
gif.build_gif(imgs=imgs, saveto='vaegan.gif') | session-5/session-5-part-1-new.ipynb | goddoe/CADL | apache-2.0 |
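As one possible starting point for the TODO above (a sketch that simply reuses the smiling_vector and synthetic_unblur_vector computed earlier; swap in whatever attribute vectors and images you end up exploring):

```python
imgs = []
n_frames = 10
amounts = np.linspace(0, 1.5, n_frames)
z_base = sess.run(Z, feed_dict={X: imgs_p})       # encode a batch of images
for amt_i in amounts:
    # Shift every encoding a little further along the chosen attribute vectors
    zs_i = z_base + amt_i * smiling_vector + amt_i * synthetic_unblur_vector
    g = sess.run(G, feed_dict={Z: zs_i})          # decode the shifted encodings
    imgs.append(utils.montage(np.clip(g, 0, 1)))  # one montage frame per step
gif.build_gif(imgs=imgs, saveto='vaegan.gif')
```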
Now, let's first load spaCy. We import the spaCy module and load the English tokenizer, tagger, parser, NER and word vectors. | import spacy
nlp = spacy.load('en_core_web_sm') # other languages: de, es, pt, fr, it, nl | Chapters/Chapter 19 - More about Natural Language Processing Tools (spaCy).ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
nlp is now a Python object representing the English NLP pipeline that we can use to process a text.
EXTRA: Larger models
For English, there are three models ranging from 'small' to 'large':
en_core_web_sm
en_core_web_md
en_core_web_lg
By default, the smallest one is loaded. Larger models should have a better accuracy, but take longer to load. If you like, you can use them instead. You will first need to download them. | #%%bash
#python -m spacy download en_core_web_md
#%%bash
#python -m spacy download en_core_web_lg
# uncomment one of the lines below if you want to load the medium or large model instead of the small one
# nlp = spacy.load('en_core_web_md')
# nlp = spacy.load('en_core_web_lg') | Chapters/Chapter 19 - More about Natural Language Processing Tools (spaCy).ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
2.2 Using spaCy
Parsing a text with spaCy after loading a language model is as easy as follows: | doc = nlp("I have an awesome cat. It's sitting on the mat that I bought yesterday.") | Chapters/Chapter 19 - More about Natural Language Processing Tools (spaCy).ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
doc is now a Python object of the class Doc. It is a container for accessing linguistic annotations and a sequence of Token objects.
Doc, Token and Span objects
At this point, there are three important types of objects to remember:
A Doc is a sequence of Token objects.
A Token object represents an individual token — i.e. a word, punctuation symbol, whitespace, etc. It has attributes representing linguistic annotations.
A Span object is a slice from a Doc object and a sequence of Token objects.
Since Doc is a sequence of Token objects, we can iterate over all of the tokens in the text as shown below, or select a single token from the sequence: | # Iterate over the tokens
for token in doc:
print(token)
print()
# Select one single token by index
first_token = doc[0]
print("First token:", first_token) | Chapters/Chapter 19 - More about Natural Language Processing Tools (spaCy).ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
Please note that even though these look like strings, they are not: | for token in doc:
print(token, "\t", type(token)) | Chapters/Chapter 19 - More about Natural Language Processing Tools (spaCy).ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
These Token objects have many useful methods and attributes, which we can list by using dir(). We haven't really talked about attributes during this course, but while methods are operations or activities performed by that object, attributes are 'static' features of the objects. Methods are called using parentheses (as we have seen with str.upper(), for instance), while attributes are indicated without parentheses. We will see some examples below.
You can find more detailed information about the token methods and attributes in the documentation. | dir(first_token) | Chapters/Chapter 19 - More about Natural Language Processing Tools (spaCy).ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
Let's inspect some of the attributes of the tokens. Can you figure out what they mean? Feel free to try out a few more. | # Print attributes of tokens
for token in doc:
print(token.text, token.lemma_, token.pos_, token.tag_, token.dep_, token.shape_) | Chapters/Chapter 19 - More about Natural Language Processing Tools (spaCy).ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
Notice that some of the attributes end with an underscore. For example, tokens have both lemma and lemma_ attributes. The lemma attribute represents the id of the lemma (integer), while the lemma_ attribute represents the unicode string representation of the lemma. In practice, you will mostly use the lemma_ attribute. | for token in doc:
print(token.lemma, token.lemma_) | Chapters/Chapter 19 - More about Natural Language Processing Tools (spaCy).ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
You can also use spacy.explain to find out more about certain labels: | # try out some more, such as NN, ADP, PRP, VBD, VBP, VBZ, WDT, aux, nsubj, pobj, dobj, npadvmod
spacy.explain("VBZ") | Chapters/Chapter 19 - More about Natural Language Processing Tools (spaCy).ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
You can create a Span object from the slice doc[start : end]. For instance, doc[2:5] produces a span consisting of tokens 2, 3 and 4. Stepped slices (e.g. doc[start : end : step]) are not supported, as Span objects must be contiguous (cannot have gaps). You can use negative indices and open-ended ranges, which have their normal Python semantics. | # Create a Span
a_slice = doc[2:5]
print(a_slice, type(a_slice))
# Iterate over Span
for token in a_slice:
print(token.lemma_, token.pos_) | Chapters/Chapter 19 - More about Natural Language Processing Tools (spaCy).ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
Text, sentences and noun_chunks
If you call the dir() function on a Doc object, you will see that it has a range of methods and attributes. You can read more about them in the documentation. Below, we highlight three of them: text, sents and noun_chunks. | dir(doc) | Chapters/Chapter 19 - More about Natural Language Processing Tools (spaCy).ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
First of all, text simply gives you the whole document as a string: | print(doc.text)
print(type(doc.text)) | Chapters/Chapter 19 - More about Natural Language Processing Tools (spaCy).ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
sents can be used to get all the sentences. Notice that it will create a so-called 'generator'. For now, you don't have to understand exactly what a generator is (if you like, you can read more about them online). Just remember that we can use generators to iterate over an object in a fast and efficient way. | # Get all the sentences as a generator
print(doc.sents, type(doc.sents))
# We can use the generator to loop over the sentences; each sentence is a span of tokens
for sentence in doc.sents:
print(sentence, type(sentence)) | Chapters/Chapter 19 - More about Natural Language Processing Tools (spaCy).ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
If you find this difficult to comprehend, you can also simply convert it to a list and then loop over the list. Remember that this is less efficient, though. | # You can also store the sentences in a list and then loop over the list
sentences = list(doc.sents)
for sentence in sentences:
print(sentence, type(sentence)) | Chapters/Chapter 19 - More about Natural Language Processing Tools (spaCy).ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
The benefit of converting it to a list is that we can use indices to select certain sentences. For example, in the following we only print some information about the tokens in the second sentence. | # Print some information about the tokens in the second sentence.
sentences = list(doc.sents)
for token in sentences[1]:
data = '\t'.join([token.orth_,
token.lemma_,
token.pos_,
token.tag_,
str(token.i), # Turn index into string
str(token.idx)]) # Turn index into string
print(data) | Chapters/Chapter 19 - More about Natural Language Processing Tools (spaCy).ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
Similarly, noun_chunks can be used to create a generator for all noun chunks in the text. | # Get all the noun chunks as a generator
print(doc.noun_chunks, type(doc.noun_chunks))
# You can loop over a generator; each noun chunk is a span of tokens
for chunk in doc.noun_chunks:
print(chunk, type(chunk))
print() | Chapters/Chapter 19 - More about Natural Language Processing Tools (spaCy).ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
Named Entities
Finally, we can also very easily access the Named Entities in a text using ents. As you can see below, it will create a tuple of the entities recognized in the text. Each entity is again a span of tokens, and you can access the type of the entity with the label_ attribute of Span. | # Here's a slightly longer text, from the Wikipedia page about Harry Potter.
harry_potter = "Harry Potter is a series of fantasy novels written by British author J. K. Rowling.\
The novels chronicle the life of a young wizard, Harry Potter, and his friends Hermione Granger and Ron Weasley,\
all of whom are students at Hogwarts School of Witchcraft and Wizardry.\
The main story arc concerns Harry's struggle against Lord Voldemort, a dark wizard who intends to become immortal,\
overthrow the wizard governing body known as the Ministry of Magic, and subjugate all wizards and Muggles."
doc = nlp(harry_potter)
print(doc.ents)
print(type(doc.ents))
# Each entity is a span of tokens and is labeled with the type of entity
for entity in doc.ents:
print(entity, "\t", entity.label_, "\t", type(entity)) | Chapters/Chapter 19 - More about Natural Language Processing Tools (spaCy).ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
Pretty cool, but what does NORP mean? Again, you can use spacy.explain() to find out:
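For example:

```python
spacy.explain("NORP")  # 'Nationalities or religious or political groups'
```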
3. EXTRA: Stanford CoreNLP
Another very popular NLP pipeline is Stanford CoreNLP. You can use the tool from the command line, but there are also some useful Python wrappers that make use of the Stanford CoreNLP API, such as pycorenlp. As you might want to use this in the future, we will provide you with a quick start guide. To use the code below, you will have to do the following:
Download Stanford CoreNLP here.
Install pycorenlp (run pip install pycorenlp in your terminal, or simply run the cell below).
Open a terminal and run the following commands (replace with the correct directory names):
cd LOCATION_OF_CORENLP/stanford-corenlp-full-2018-02-27
java -mx4g -cp "*" edu.stanford.nlp.pipeline.StanfordCoreNLPServer
This step you will always have to do if you want to use the Stanford CoreNLP API. | %%bash
pip install pycorenlp
from pycorenlp import StanfordCoreNLP
nlp = StanfordCoreNLP('http://localhost:9000') | Chapters/Chapter 19 - More about Natural Language Processing Tools (spaCy).ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
Next, you will want to define which annotators to use and which output format should be produced (text, json, xml, conll, conllu, serialized). Annotating the document then is very easy. Note that Stanford CoreNLP uses some large models that can take a long time to load. You can read more about it here. | harry_potter = "Harry Potter is a series of fantasy novels written by British author J. K. Rowling.\
The novels chronicle the life of a young wizard, Harry Potter, and his friends Hermione Granger and Ron Weasley,\
all of whom are students at Hogwarts School of Witchcraft and Wizardry.\
The main story arc concerns Harry's struggle against Lord Voldemort, a dark wizard who intends to become immortal,\
overthrow the wizard governing body known as the Ministry of Magic, and subjugate all wizards and Muggles."
# Define annotators and output format
properties= {'annotators': 'tokenize, ssplit, pos, lemma, parse',
'outputFormat': 'json'}
# Annotate the string with CoreNLP
doc = nlp.annotate(harry_potter, properties=properties) | Chapters/Chapter 19 - More about Natural Language Processing Tools (spaCy).ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
In the next cells, we will simply show some examples of how to access the linguistic annotations if you use the properties as shown above. If you'd like to continue working with Stanford CoreNLP in the future, you will likely have to experiment a bit more. | doc.keys()
sentences = doc["sentences"]
first_sentence = sentences[0]
first_sentence.keys()
first_sentence["parse"]
first_sentence["basicDependencies"]
first_sentence["tokens"]
for sent in doc["sentences"]:
for token in sent["tokens"]:
word = token["word"]
lemma = token["lemma"]
pos = token["pos"]
print(word, lemma, pos)
# find out what the entity label 'NORP' means
spacy.explain("NORP") | Chapters/Chapter 19 - More about Natural Language Processing Tools (spaCy).ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
4. NLTK vs. spaCy vs. CoreNLP
There might be different reasons why you want to use NLTK, spaCy or Stanford CoreNLP. They differ in efficiency, quality, user friendliness, functionality, output formats, etc. At this moment, we advise you to go with spaCy because of its ease of use and high-quality performance.
Here's an example of both NLTK and spaCy in action.
The example text is a case in point. What goes wrong here?
Try experimenting with the text to see what the differences are. | import nltk
import spacy
nlp = spacy.load('en_core_web_sm')
text = "I like cheese very much"
print("NLTK results:")
nltk_tagged = nltk.pos_tag(text.split())
print(nltk_tagged)
print()
print("spaCy results:")
doc = nlp(text)
spacy_tagged = []
for token in doc:
tag_data = (token.orth_, token.tag_,)
spacy_tagged.append(tag_data)
print(spacy_tagged) | Chapters/Chapter 19 - More about Natural Language Processing Tools (spaCy).ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
Do you want to learn more about the differences between NLTK, spaCy and CoreNLP? Here are some links:
- Facts & Figures (spaCy)
- About speed (CoreNLP vs. spaCy)
- NLTK vs. spaCy: Natural Language Processing in Python
- What are the advantages of Spacy vs NLTK?
- 5 Heroic Python NLP Libraries
5. Some other useful modules for cleaning and preprocessing
Data is often messy, noisy, or full of irrelevant information, so chances are high that you will need to do some cleaning before you can start your analysis. This is especially true for social media texts, such as tweets, chats, and emails, which are typically informal and notoriously noisy. Normalising them so that they can be processed with NLP tools is an NLP challenge in itself, and fully discussing it goes beyond the scope of this course. However, you may find the following modules useful in your project (a short usage sketch follows the list below):
tweet-preprocessor: This library makes it easy to clean, parse or tokenize the tweets. It supports cleaning, tokenizing and parsing of URLs, hashtags, reserved words, mentions, emojis and smileys.
emot: Emot is a python library to extract the emojis and emoticons from a text (string). All the emojis and emoticons are taken from a reliable source, i.e. Wikipedia.org.
autocorrect: Spelling corrector (Python 3).
html: Can be used to remove HTML tags.
chardet: Universal encoding detector for Python 2 and 3.
ftfy: Fixes broken unicode strings.
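To give a flavour of how a few of these fit together, here is a minimal, illustrative sketch. It assumes chardet and ftfy are installed (pip install chardet ftfy), and the raw bytes are a made-up example:

import html
import chardet   # third-party: guesses text encodings
import ftfy      # third-party: fixes broken unicode

raw = b"caf\xc3\xa9 &amp; croissants"             # made-up raw bytes from some source
enc = chardet.detect(raw)["encoding"] or "utf-8"  # guess the encoding, fall back to utf-8
text = raw.decode(enc, errors="replace")          # decode the bytes to a string
text = html.unescape(text)                        # turn '&amp;' back into '&'
text = ftfy.fix_text(text)                        # repair any remaining broken unicode
print(text)                                       # expected: 'café & croissants'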
If you are interested in reading more about these topic, these papers discuss preprocessing and normalization:
Assessing the Consequences of Text Preprocessing Decisions (Denny & Spirling 2016). This paper is a bit long, but it provides a nice discussion of common preprocessing steps and their potential effects.
What to do about bad language on the internet (Eisenstein 2013). This is a quick read that we recommend everyone to at least look through.
And here is a nice blog about character encoding.
Exercises | import spacy
nlp = spacy.load('en_core_web_sm') | Chapters/Chapter 19 - More about Natural Language Processing Tools (spaCy).ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
Exercise 1:
What is the difference between token.pos_ and token.tag_? Read the docs to find out.
What do the different labels mean? Use spacy.explain to inspect some of them. You can also refer to this page for a complete overview. | doc = nlp("I have an awesome cat. It's sitting on the mat that I bought yesterday.")
for token in doc:
print(token.pos_, token.tag_)
spacy.explain("PRON") | Chapters/Chapter 19 - More about Natural Language Processing Tools (spaCy).ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
Exercise 2:
Let's practice a bit with processing files. Open the file charlie.txt for reading and use read() to read its content as a string. Then use spaCy to annotate this string and print the information below. Remember: you can use dir() to remind yourself of the attributes.
For each token in the text:
1. Text
2. Lemma
3. POS tag
4. Whether it's a stopword or not
5. Whether it's a punctuation mark or not
For each sentence in the text:
1. The complete text
2. The number of tokens
3. The complete text in lowercase letters
4. The text, lemma and POS of the first word
For each noun chunk in the text:
1. The complete text
2. The number of tokens
3. The complete text in lowercase letters
4. The text, lemma and POS of the first word
For each named entity in the text:
1. The complete text
2. The number of tokens
3. The complete text in lowercase letters
4. The text, lemma and POS of the first word | filename = "../Data/Charlie/charlie.txt"
# read the file and process with spaCy
# print all information about the tokens
# print all information about the sentences
# print all information about the noun chunks
# print all information about the entities | Chapters/Chapter 19 - More about Natural Language Processing Tools (spaCy).ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
Exercise 3:
Remember how we can use the os and glob modules to process multiple files? For example, we can read all .txt files in the dreams folder like this: | import glob
filenames = glob.glob("../Data/dreams/*.txt")
print(filenames) | Chapters/Chapter 19 - More about Natural Language Processing Tools (spaCy).ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
Now create a function called get_vocabulary that takes one positional parameter filenames. It should read in all filenames and return a set called unique_words, that contains all unique words in the files. | def get_vocabulary(filenames):
# your code here
# test your function here
unique_words = get_vocabulary(filenames)
print(unique_words, len(unique_words))
assert len(unique_words) == 415 # if your code is correct, this should not raise an error | Chapters/Chapter 19 - More about Natural Language Processing Tools (spaCy).ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
Exercise 4:
Create a function called get_sentences_with_keyword that takes one positional parameter filenames and one keyword parameter keyword with default value None. It should read in all filenames and return a list called sentences that contains all sentences (the complete texts) in which the keyword occurs.
Hints:
- It's best to check for the lemmas of each token
- Lowercase both your keyword and the lemma | import glob
filenames = glob.glob("../Data/dreams/*.txt")
print(filenames)
def get_sentences_with_keyword(filenames, keyword=None):
#your code here
# test your function here
sentences = get_sentences_with_keyword(filenames, keyword="toy")
print(sentences)
assert len(sentences) == 4 # if your code is correct, this should not raise an error | Chapters/Chapter 19 - More about Natural Language Processing Tools (spaCy).ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
From Devito's library of examples we import the following structures: | # NBVAL_IGNORE_OUTPUT
%matplotlib inline
from examples.seismic import TimeAxis
from examples.seismic import RickerSource
from examples.seismic import Receiver
from devito import SubDomain, Grid, NODE, TimeFunction, Function, Eq, solve, Operator | examples/seismic/abc_methods/02_damping.ipynb | opesci/devito | mit |
The mesh parameters define the domain $\Omega_{0}$. The absorption region will be included below. | nptx = 101
nptz = 101
x0 = 0.
x1 = 1000.
compx = x1-x0
z0 = 0.
z1 = 1000.
compz = z1-z0;
hxv = (x1-x0)/(nptx-1)
hzv = (z1-z0)/(nptz-1) | examples/seismic/abc_methods/02_damping.ipynb | opesci/devito | mit |
Observation: In this code we need to work with both the symbolic and the numerical values of $\Delta x$ and $\Delta z$. The numerical values of $\Delta x$ and $\Delta z$ are represented by hxv and hzv, respectively; their symbolic counterparts will be introduced later.
In this case, we need to define the widths $L_{x}$ and $L_{z}$ of the bands that extend the domain $\Omega_{0}$ to $\Omega$. The code below builds $L_{x}$ and $L_{z}$ by choosing a certain number of points in each direction. Without loss of generality, the width $L_{x}$ is such that:
$L_{x}$ = npmlx*$\Delta x$;
0<npmlx<nptx;
Similarly, we have $L_{z}$ such that:
$L_{z}$ = npmlz*$\Delta z$;
0<npmlz<nptz;
The lengths $L_{x}$ and $L_{z}$ are therefore determined by the numbers of points npmlx and npmlz, which we choose as: | npmlx = 20
npmlz = 20 | examples/seismic/abc_methods/02_damping.ipynb | opesci/devito | mit |
And we define $L_{x}$ and $L_{z}$ as being: | lx = npmlx*hxv
lz = npmlz*hzv | examples/seismic/abc_methods/02_damping.ipynb | opesci/devito | mit |
Thus, of the nptx points, the first and the last npmlx points lie in the absorption region of the x direction. Similarly, of the nptz points, the last npmlz points lie in the absorption region of the z direction. Considering the construction of the grid, we also have the following elements: | nptx = nptx + 2*npmlx
nptz = nptz + 1*npmlz
x0 = x0 - hxv*npmlx
x1 = x1 + hxv*npmlx
compx = x1-x0
z0 = z0
z1 = z1 + hzv*npmlz
compz = z1-z0
origin = (x0,z0)
extent = (compx,compz)
shape = (nptx,nptz)
spacing = (hxv,hzv) | examples/seismic/abc_methods/02_damping.ipynb | opesci/devito | mit |
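Purely for orientation, here is a quick (hypothetical) check of the sizes implied by the choices above; the expected values follow directly from nptx = nptz = 101, hxv = hzv = 10 m and npmlx = npmlz = 20:

# Sanity check of the extended grid
print(shape)    # expected: (141, 121)
print(origin)   # expected: (-200.0, 0.0)
print(extent)   # expected: (1400.0, 1200.0)
print(spacing)  # expected: (10.0, 10.0)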
The $\zeta(x,z)$ function is nonzero only in the blue region of the figure that represents the domain. In this way, the wave equation splits into two cases:
In the region in blue:
\begin{equation}
u_{tt}(x,z,t)+c^2(x,z)\zeta(x,z)u_t(x,z,t)-c^2(x,z)\Delta(u(x,z,t))=c^2(x,z)f(x,z,t),
\end{equation}
In the white region:
\begin{equation}
u_{tt}(x,z,t)-c^2(x,z)\Delta(u(x,z,t))=c^2(x,z)f(x,z,t),
\end{equation}
For this reason, we use the structure of the subdomains to represent the white region and the blue region.
Observation: Note that we can describe the blue region in different ways, that is, the way we choose here is not the only possible discretization for that region.
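As an aside, before building the subdomains: the profile of $\zeta$ is usually taken to be zero in the white region and to grow smoothly from the interface towards the outer boundary of the blue strips. The NumPy sketch below illustrates one common (quadratic) choice; zeta_profile and zeta_max are hypothetical names introduced here for intuition only, not necessarily the profile used later in this notebook:

import numpy as np

# Illustrative quadratic damping profile: zero in the interior and a quadratic
# ramp inside each absorbing strip, scaled by zeta_max (contributions add up in
# the corners where two strips overlap). Hypothetical helper, for intuition only.
def zeta_profile(zeta_max=0.1):
    xs = np.linspace(x0, x1, nptx)
    zs = np.linspace(z0, z1, nptz)
    zeta = np.zeros((nptx, nptz))
    for i, xv in enumerate(xs):
        for j, zv in enumerate(zs):
            dx_in = max(0.0, (x0 + lx) - xv, xv - (x1 - lx))  # penetration into the x strips
            dz_in = max(0.0, zv - (z1 - lz))                  # penetration into the bottom z strip
            zeta[i, j] = zeta_max*((dx_in/lx)**2 + (dz_in/lz)**2)
    return zeta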
First, we define the white region, naming this region as d0, which is defined by the following pairs of points $(x,z)$:
$x\in{npmlx,nptx-npmlx}$ and $z\in{0,nptz-npmlz}$.
In the language of subdomains, d0 is written as: | class d0domain(SubDomain):
name = 'd0'
def define(self, dimensions):
x, z = dimensions
return {x: ('middle', npmlx, npmlx), z: ('middle', 0, npmlz)}
d0_domain = d0domain() | examples/seismic/abc_methods/02_damping.ipynb | opesci/devito | mit |
The blue region will be the union of the following regions:
d1 represents the left range in the direction x, where the pairs $(x,z)$ satisfy: $x\in{0,npmlx}$ and $z\in{0,nptz}$;
d2 represents the right range in the direction x, where the pairs $(x,z)$ satisfy: $x\in{nptx-npmlx,nptx}$ and $z\in{0,nptz}$;
d3 represents the bottom range in the direction z, where the pairs $(x,z)$ satisfy: $x\in{npmlx,nptx-npmlx}$ and $z\in{nptz-npmlz,nptz}$;
Thus, the regions d1, d2 and d3 are described as follows in the language of subdomains: | class d1domain(SubDomain):
name = 'd1'
def define(self, dimensions):
x, z = dimensions
return {x: ('left',npmlx), z: z}
d1_domain = d1domain()
class d2domain(SubDomain):
name = 'd2'
def define(self, dimensions):
x, z = dimensions
return {x: ('right',npmlx), z: z}
d2_domain = d2domain()
class d3domain(SubDomain):
name = 'd3'
def define(self, dimensions):
x, z = dimensions
return {x: ('middle', npmlx, npmlx), z: ('right',npmlz)}
d3_domain = d3domain() | examples/seismic/abc_methods/02_damping.ipynb | opesci/devito | mit |
The figure below represents the division of domains that we did previously:
<img src='domain2.png' width=500>
The advantage of dividing into regions is that the equations will be calculated where they actually operate and thus we gain computational efficiency, as we decrease the number of operations to be done. After defining the spatial parameters and constructing the subdomains, we set the spatial grid with the following command: | grid = Grid(origin=origin, extent=extent, shape=shape, subdomains=(d0_domain,d1_domain,d2_domain,d3_domain), dtype=np.float64) | examples/seismic/abc_methods/02_damping.ipynb | opesci/devito | mit |
For this example we build the velocity field directly on the grid: a simple two-layer model, with 1.5 km/s above the mid-depth of the physical domain and 2.5 km/s below it. In Devito's working units this is done with the following commands: | v0 = np.zeros((nptx,nptz))
X0 = np.linspace(x0,x1,nptx)
Z0 = np.linspace(z0,z1,nptz)
x10 = x0+lx
x11 = x1-lx
z10 = z0
z11 = z1 - lz
xm = 0.5*(x10+x11)
zm = 0.5*(z10+z11)
pxm = 0
pzm = 0
for i in range(0,nptx):
if(X0[i]==xm): pxm = i
for j in range(0,nptz):
if(Z0[j]==zm): pzm = j
p0 = 0
p1 = pzm
p2 = nptz
v0[0:nptx,p0:p1] = 1.5
v0[0:nptx,p1:p2] = 2.5 | examples/seismic/abc_methods/02_damping.ipynb | opesci/devito | mit |
Above we introduced the local variables x10, x11, z10, z11, xm, zm, pxm and pzm, which help us create this particular velocity field over the whole domain (including the absorption region). Below we include a routine to plot the velocity field. | def graph2dvel(vel):
plot.figure()
plot.figure(figsize=(16,8))
fscale = 1/10**(3)
scale = np.amax(vel[npmlx:-npmlx,0:-npmlz])
extent = [fscale*(x0+lx),fscale*(x1-lx), fscale*(z1-lz), fscale*(z0)]
fig = plot.imshow(np.transpose(vel[npmlx:-npmlx,0:-npmlz]), vmin=0.,vmax=scale, cmap=cm.seismic, extent=extent)
plot.gca().xaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f km'))
plot.gca().yaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f km'))
plot.title('Velocity Profile')
plot.grid()
ax = plot.gca()
divider = make_axes_locatable(ax)
cax = divider.append_axes("right", size="5%", pad=0.05)
cbar = plot.colorbar(fig, cax=cax, format='%.2e')
cbar.set_label('Velocity [km/s]')
plot.show() | examples/seismic/abc_methods/02_damping.ipynb | opesci/devito | mit |
Below we include the plot of the velocity field. | # NBVAL_IGNORE_OUTPUT
graph2dvel(v0) | examples/seismic/abc_methods/02_damping.ipynb | opesci/devito | mit |
Time parameters are defined and constructed by the following sequence of commands: | t0 = 0.
tn = 1000.
CFL = 0.4
vmax = np.amax(v0)
dtmax = np.float64((min(hxv,hzv)*CFL)/(vmax))
ntmax = int((tn-t0)/dtmax)+1
dt0 = np.float64((tn-t0)/ntmax) | examples/seismic/abc_methods/02_damping.ipynb | opesci/devito | mit |
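As a quick consistency check of the time step implied by these choices (in the notebook's working units, where velocities in km/s play the role of m/ms):

\begin{equation}
\Delta t_{max} = \frac{\textrm{CFL}\cdot\min(\Delta x,\Delta z)}{v_{max}} = \frac{0.4\cdot 10}{2.5} = 1.6\textrm{ ms},
\end{equation}

so the interval from $t_{0}=0$ to $t_{n}=1000$ ms is covered by roughly 625 time steps of about 1.6 ms each.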
With the temporal parameters, we generate the time axis information with TimeAxis as follows: | time_range = TimeAxis(start=t0,stop=tn,num=ntmax+1)
nt = time_range.num - 1 | examples/seismic/abc_methods/02_damping.ipynb | opesci/devito | mit |
The symbolic values associated with the spatial and temporal grids that are used in the composition of the equations are given by: | (hx,hz) = grid.spacing_map
(x, z) = grid.dimensions
t = grid.stepping_dim
dt = grid.stepping_dim.spacing | examples/seismic/abc_methods/02_damping.ipynb | opesci/devito | mit |
We chose a single Ricker source with a peak frequency of $0.01\textrm{ kHz}$ (10 Hz). It is placed at the centre of the physical domain in $x$ ($\bar{x} = 500\textrm{ m}$) and one grid spacing below the surface ($\bar{z} = 10\textrm{ m}$). We then define the following variables that represent this choice: | f0 = 0.01
nsource = 1
xposf = 0.5*(compx-2*npmlx*hxv)
zposf = hzv | examples/seismic/abc_methods/02_damping.ipynb | opesci/devito | mit |
As we know, Ricker's source is generated by the RickerSource command. Using the parameters listed above, we generate and position the Ricker source with the following sequence of commands: | src = RickerSource(name='src',grid=grid,f0=f0,npoint=nsource,time_range=time_range,staggered=NODE,dtype=np.float64)
src.coordinates.data[:, 0] = xposf
src.coordinates.data[:, 1] = zposf | examples/seismic/abc_methods/02_damping.ipynb | opesci/devito | mit |
Below we include the plot of the Ricker source wavelet. | # NBVAL_IGNORE_OUTPUT
src.show() | examples/seismic/abc_methods/02_damping.ipynb | opesci/devito | mit |