Dataset columns: markdown (string, 0–37k chars), code (string, 1–33.3k chars), path (string, 8–215 chars), repo_name (string, 6–77 chars), license (15 classes).
Operations are also performed based on index:
ser1 + ser2
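A minimal sketch of this index alignment (the series contents here are hypothetical, not from the source notebook): labels present in only one Series yield NaN in the result.

```python
import pandas as pd

# Hypothetical example series; only the shared label 'USA' gets a numeric sum
ser1 = pd.Series([1, 2], index=['USA', 'Germany'])
ser2 = pd.Series([3, 4], index=['USA', 'Japan'])
print(ser1 + ser2)
# Germany    NaN
# Japan      NaN
# USA        4.0
```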
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/03-General Pandas/02-Series.ipynb
arcyfelix/Courses
apache-2.0
Load the data file shared/bladder_cancer_genes_tcga.txt into a pandas.DataFrame, convert it to a numpy.ndarray matrix, and print the matrix dimensions.
gene_matrix_for_network_df = pandas.read_csv("shared/bladder_cancer_genes_tcga.txt", sep="\t")
# as_matrix() was removed in pandas 1.0; to_numpy() is the modern equivalent
gene_matrix_for_network = gene_matrix_for_network_df.to_numpy()
print(gene_matrix_for_network.shape)
class21_reveal_python3.ipynb
ramseylab/networkscompbio
apache-2.0
Filter the matrix to include only rows for which the median across columns is > 14; the matrix should now be 13 x 414.
genes_keep = numpy.where(numpy.median(gene_matrix_for_network, axis=1) > 14)
matrix_filt = gene_matrix_for_network[genes_keep, ][0]
matrix_filt.shape
N = matrix_filt.shape[0]
M = matrix_filt.shape[1]
class21_reveal_python3.ipynb
ramseylab/networkscompbio
apache-2.0
Binarize the gene expression matrix using the mean value as a breakpoint, turning it into an N x M matrix of booleans (True/False). Call it gene_matrix_binarized.
gene_matrix_binarized = numpy.tile(numpy.mean(matrix_filt, axis=1), (M, 1)).transpose() < matrix_filt
print(gene_matrix_binarized.shape)
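The same binarization can be written more directly with numpy broadcasting; this equivalent sketch is my addition, not part of the original notebook.

```python
# Broadcasting the per-row mean against the matrix gives the same result as
# the tile/transpose construction above.
gene_matrix_binarized_alt = matrix_filt > numpy.mean(matrix_filt, axis=1, keepdims=True)
assert numpy.array_equal(gene_matrix_binarized, gene_matrix_binarized_alt)
```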
class21_reveal_python3.ipynb
ramseylab/networkscompbio
apache-2.0
Test your matrix by printing the first four columns of the first four rows:
gene_matrix_binarized[0:4,0:4]
class21_reveal_python3.ipynb
ramseylab/networkscompbio
apache-2.0
The core part of the REVEAL algorithm is a function that can compute the joint entropy of a collection of binary (TRUE/FALSE) vectors X1, X2, ..., Xn (where length(X1) = ... = length(Xn) = M). Write a function entropy_multiple_vecs that takes as its input an n x M matrix (where n is the number of variables, i.e., genes, and M is the number of samples in which gene expression was measured). The function should use the log2 definition of the Shannon entropy. It should return the joint entropy H(X1, X2, ..., Xn) as a scalar numeric value. I have created a skeleton version of this function for you, in which you can fill in the code. I have also created some test code that you can use to test your function, below.
def entropy_multiple_vecs(binary_vecs):
    ## use shape to get the numbers of rows and columns as [n, M]
    [n, M] = binary_vecs.shape

    # make an "M x n" dataframe from the transpose of the matrix binary_vecs
    binary_df = pandas.DataFrame(binary_vecs.transpose())

    # use the groupby method to obtain a data frame of counts of unique occurrences of the 2^n possible logical states
    binary_df_counts = binary_df.groupby(binary_df.columns.values.tolist()).size().values

    # divide the vector of counts by M, to get a probability vector
    probvec = binary_df_counts / M

    # compute the Shannon entropy using the formula
    hvec = -probvec * numpy.log2(probvec)
    return numpy.sum(hvec)
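As a quick sanity check (my own example, not from the notebook), a duplicated row should add no entropy: the joint entropy of two identical fair binary vectors is 1 bit, the same as either vector alone.

```python
import numpy

test_vecs = numpy.array([[True, False, True, False],
                         [True, False, True, False]])
print(entropy_multiple_vecs(test_vecs))  # 1.0 bit: the second row is redundant
```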
class21_reveal_python3.ipynb
ramseylab/networkscompbio
apache-2.0
This test case should produce the value 3.938:
print(entropy_multiple_vecs(gene_matrix_binarized[0:4,]))
class21_reveal_python3.ipynb
ramseylab/networkscompbio
apache-2.0
Example implementation of the REVEAL algorithm: we'll go through stage 3.
import itertools

ratio_thresh = 0.1
genes_to_fit = list(range(0, N))
stage = 0
regulators = [None]*N
entropies_for_stages = [None]*N
max_stage = 4

entropies_for_stages[0] = numpy.zeros(N)
for i in range(0, N):
    single_row_matrix = gene_matrix_binarized[i, :, None].transpose()
    entropies_for_stages[0][i] = entropy_multiple_vecs(single_row_matrix)

genes_to_fit = set(range(0, N))
for stage in range(1, max_stage + 1):
    for gene in genes_to_fit.copy():
        # we are trying to find regulators for gene "gene"
        poss_regs = set(range(0, N)) - set([gene])
        poss_regs_combs = [list(x) for x in itertools.combinations(poss_regs, stage)]
        HGX = numpy.array([entropy_multiple_vecs(gene_matrix_binarized[[gene] + poss_regs_comb, :])
                           for poss_regs_comb in poss_regs_combs])
        HX = numpy.array([entropy_multiple_vecs(gene_matrix_binarized[poss_regs_comb, :])
                          for poss_regs_comb in poss_regs_combs])
        HG = entropies_for_stages[0][gene]
        min_value = numpy.min(HGX - HX)
        if HG - min_value >= ratio_thresh * HG:
            regulators[gene] = poss_regs_combs[numpy.argmin(HGX - HX)]
            genes_to_fit.remove(gene)
regulators
class21_reveal_python3.ipynb
ramseylab/networkscompbio
apache-2.0
Let's start by examining the current state of the dataset. source_sentences contains the entire input sequence file as text delimited by newline symbols.
source_sentences[:50].split('\n')
seq2seq/sequence_to_sequence_implementation.ipynb
elenduuche/deep-learning
mit
target_sentences contains the entire output sequence file as text, delimited by newline symbols. Each line corresponds to the same line in source_sentences and contains the characters of that line in sorted order.
target_sentences[:50].split('\n')
seq2seq/sequence_to_sequence_implementation.ipynb
elenduuche/deep-learning
mit
Preprocess
To do anything useful with it, we'll need to turn the characters into a list of integers:
def extract_character_vocab(data):
    special_words = ['<pad>', '<unk>', '<s>', '<\s>']

    set_words = set([character for line in data.split('\n') for character in line])
    int_to_vocab = {word_i: word for word_i, word in enumerate(special_words + list(set_words))}
    vocab_to_int = {word: word_i for word_i, word in int_to_vocab.items()}

    return int_to_vocab, vocab_to_int

# Build int2letter and letter2int dicts
source_int_to_letter, source_letter_to_int = extract_character_vocab(source_sentences)
target_int_to_letter, target_letter_to_int = extract_character_vocab(target_sentences)

# Convert characters to ids
source_letter_ids = [[source_letter_to_int.get(letter, source_letter_to_int['<unk>']) for letter in line]
                     for line in source_sentences.split('\n')]
target_letter_ids = [[target_letter_to_int.get(letter, target_letter_to_int['<unk>']) for letter in line]
                     for line in target_sentences.split('\n')]

print("Example source sequence")
print(source_letter_ids[:3])
print("\n")
print("Example target sequence")
print(target_letter_ids[:3])
print()
print("<s> index is {}".format(target_letter_to_int['<s>']))
seq2seq/sequence_to_sequence_implementation.ipynb
elenduuche/deep-learning
mit
The last step in the preprocessing stage is to determine the longest sequence length in the dataset we'll be using, then pad all the sequences to that length.
def pad_id_sequences(source_ids, source_letter_to_int, target_ids, target_letter_to_int, sequence_length):
    new_source_ids = [sentence + [source_letter_to_int['<pad>']] * (sequence_length - len(sentence))
                      for sentence in source_ids]
    new_target_ids = [sentence + [target_letter_to_int['<pad>']] * (sequence_length - len(sentence))
                      for sentence in target_ids]
    return new_source_ids, new_target_ids

# Use the longest sequence as sequence length
sequence_length = max(
    [len(sentence) for sentence in source_letter_ids] + [len(sentence) for sentence in target_letter_ids])

# Pad all sequences up to sequence length
source_ids, target_ids = pad_id_sequences(source_letter_ids, source_letter_to_int,
                                          target_letter_ids, target_letter_to_int, sequence_length)

print("Sequence Length")
print(sequence_length)
print("\n")
print("Input sequence example")
print(source_ids[:3])
print("\n")
print("Target sequence example")
print(target_ids[:3])
seq2seq/sequence_to_sequence_implementation.ipynb
elenduuche/deep-learning
mit
This is the final shape we need them to be in. We can now proceed to building the model.

Model
Check the Version of TensorFlow
This will check to make sure you have the correct version of TensorFlow.
from distutils.version import LooseVersion
import tensorflow as tf

# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
seq2seq/sequence_to_sequence_implementation.ipynb
elenduuche/deep-learning
mit
Hyperparameters
# Number of Epochs
epochs = 60
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 50
# Number of Layers
num_layers = 2
# Embedding Size
encoding_embedding_size = 13
decoding_embedding_size = 13
# Learning Rate
learning_rate = 0.001
seq2seq/sequence_to_sequence_implementation.ipynb
elenduuche/deep-learning
mit
Input
input_data = tf.placeholder(tf.int32, [batch_size, sequence_length])
targets = tf.placeholder(tf.int32, [batch_size, sequence_length])
lr = tf.placeholder(tf.float32)
seq2seq/sequence_to_sequence_implementation.ipynb
elenduuche/deep-learning
mit
Sequence to Sequence
The decoder is probably the most complex part of this model. We need to declare a decoder for the training phase, and a decoder for the inference/prediction phase. These two decoders will share their parameters (so that all the weights and biases set during the training phase can be used when we deploy the model).

First, we'll need to define the type of cell we'll be using for our decoder RNNs. We opted for LSTM. Then, we'll need to hook up a fully connected layer to the output of the decoder. The output of this layer tells us which word the RNN is choosing to output at each time step.

Let's first look at the inference/prediction decoder. It is the one we'll use when we deploy our chatbot into the wild (even though it comes second in the actual code).

<img src="images/sequence-to-sequence-inference-decoder.png"/>

We'll hand our encoder hidden state to the inference decoder and have it process its output. TensorFlow handles most of the logic for us. We just have to use tf.contrib.seq2seq.simple_decoder_fn_inference and tf.contrib.seq2seq.dynamic_rnn_decoder and supply them with the appropriate inputs. Notice that the inference decoder feeds the output of each time step as an input to the next.

As for the training decoder, we can think of it as looking like this:

<img src="images/sequence-to-sequence-training-decoder.png"/>

The training decoder does not feed the output of each time step to the next. Rather, the inputs to the decoder time steps are the target sequence from the training dataset (the orange letters).

Encoding
- Embed the input data using tf.contrib.layers.embed_sequence
- Pass the embedded input into a stack of RNNs. Save the RNN state and ignore the output.
#print(source_letter_to_int)
source_vocab_size = len(source_letter_to_int)
print("Length of letter to int is {}".format(source_vocab_size))
print("encoding embedding size is {}".format(encoding_embedding_size))

# Encoder embedding
enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, encoding_embedding_size)

# Encoder
enc_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)
_, enc_state = tf.nn.dynamic_rnn(enc_cell, enc_embed_input, dtype=tf.float32)
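One caveat worth noting (my observation, not from the notebook): multiplying a single-element list as in `[BasicLSTMCell(rnn_size)] * num_layers` reuses the same cell object for every layer. This works under TensorFlow 1.0, which this notebook targets, but later 1.x releases raise an error and require a separate cell instance per layer:

```python
# Per-layer cell instances, required by TF >= 1.1 instead of [cell] * num_layers
enc_cell = tf.contrib.rnn.MultiRNNCell(
    [tf.contrib.rnn.BasicLSTMCell(rnn_size) for _ in range(num_layers)])
```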
seq2seq/sequence_to_sequence_implementation.ipynb
elenduuche/deep-learning
mit
Process Decoding Input
import numpy as np

# Process the input we'll feed to the decoder
ending = tf.strided_slice(targets, [0, 0], [batch_size, -1], [1, 1])
dec_input = tf.concat([tf.fill([batch_size, 1], target_letter_to_int['<s>']), ending], 1)

# Demonstration/Example
demonstration_outputs = np.reshape(range(batch_size * sequence_length), (batch_size, sequence_length))

sess = tf.InteractiveSession()
print("Targets")
print(demonstration_outputs[:2])
print("\n")
print("Processed Decoding Input")
print(sess.run(dec_input, {targets: demonstration_outputs})[:2])
print("targets shape is {} and ending shape is {}".format(targets.shape, ending.shape))
print("demonstration_outputs shape is {}".format(demonstration_outputs.shape))
seq2seq/sequence_to_sequence_implementation.ipynb
elenduuche/deep-learning
mit
Decoding
- Embed the decoding input
- Build the decoding RNNs
- Build the output layer in the decoding scope, so the weight and bias can be shared between the training and inference decoders.
target_vocab_size = len(target_letter_to_int)
#print(target_vocab_size, " : ", decoding_embedding_size)

# Decoder Embedding
dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, decoding_embedding_size]))
dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)
#print(dec_input, target_vocab_size, decoding_embedding_size)

# Decoder RNNs
dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)

with tf.variable_scope("decoding") as decoding_scope:
    # Output Layer
    output_fn = lambda x: tf.contrib.layers.fully_connected(x, target_vocab_size, None, scope=decoding_scope)
seq2seq/sequence_to_sequence_implementation.ipynb
elenduuche/deep-learning
mit
Decoder During Training
- Build the training decoder using tf.contrib.seq2seq.simple_decoder_fn_train and tf.contrib.seq2seq.dynamic_rnn_decoder.
- Apply the output layer to the output of the training decoder.
with tf.variable_scope("decoding") as decoding_scope:
    # Training Decoder
    train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(enc_state)
    train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(
        dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)

# Apply output function
train_logits = output_fn(train_pred)
seq2seq/sequence_to_sequence_implementation.ipynb
elenduuche/deep-learning
mit
Decoder During Inference
- Reuse the weights and biases from the training decoder using tf.variable_scope("decoding", reuse=True)
- Build the inference decoder using tf.contrib.seq2seq.simple_decoder_fn_inference and tf.contrib.seq2seq.dynamic_rnn_decoder. The output function is applied to the output in this step.
with tf.variable_scope("decoding", reuse=True) as decoding_scope:
    # Inference Decoder
    infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(
        output_fn, enc_state, dec_embeddings, target_letter_to_int['<s>'],
        target_letter_to_int['<\s>'], sequence_length - 1, target_vocab_size)
    inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope)
print(inference_logits.shape)
seq2seq/sequence_to_sequence_implementation.ipynb
elenduuche/deep-learning
mit
Optimization
Our loss function is tf.contrib.seq2seq.sequence_loss, provided by the TensorFlow seq2seq module. It calculates a weighted cross-entropy loss for the output logits.
# Loss function
cost = tf.contrib.seq2seq.sequence_loss(
    train_logits,
    targets,
    tf.ones([batch_size, sequence_length]))

# Optimizer
optimizer = tf.train.AdamOptimizer(lr)

# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
seq2seq/sequence_to_sequence_implementation.ipynb
elenduuche/deep-learning
mit
Train
We're now ready to train our model. If you run into OOM (out of memory) issues during training, try decreasing the batch_size.
import numpy as np

train_source = source_ids[batch_size:]
train_target = target_ids[batch_size:]
valid_source = source_ids[:batch_size]
valid_target = target_ids[:batch_size]

sess.run(tf.global_variables_initializer())
for epoch_i in range(epochs):
    for batch_i, (source_batch, target_batch) in enumerate(
            helper.batch_data(train_source, train_target, batch_size)):
        _, loss = sess.run(
            [train_op, cost],
            {input_data: source_batch, targets: target_batch, lr: learning_rate})
        batch_train_logits = sess.run(inference_logits, {input_data: source_batch})
        batch_valid_logits = sess.run(inference_logits, {input_data: valid_source})

        train_acc = np.mean(np.equal(target_batch, np.argmax(batch_train_logits, 2)))
        valid_acc = np.mean(np.equal(valid_target, np.argmax(batch_valid_logits, 2)))
        print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'
              .format(epoch_i, batch_i, len(source_ids) // batch_size, train_acc, valid_acc, loss))
seq2seq/sequence_to_sequence_implementation.ipynb
elenduuche/deep-learning
mit
Prediction
input_sentence = 'hello'
input_sentence = [source_letter_to_int.get(word, source_letter_to_int['<unk>']) for word in input_sentence.lower()]
input_sentence = input_sentence + [0] * (sequence_length - len(input_sentence))
batch_shell = np.zeros((batch_size, sequence_length))
batch_shell[0] = input_sentence
chatbot_logits = sess.run(inference_logits, {input_data: batch_shell})[0]

print('Input')
print('  Word Ids:    {}'.format([i for i in input_sentence]))
print('  Input Words: {}'.format([source_int_to_letter[i] for i in input_sentence]))

print('\nPrediction')
print('  Word Ids:             {}'.format([i for i in np.argmax(chatbot_logits, 1)]))
print('  Chatbot Answer Words: {}'.format([target_int_to_letter[i] for i in np.argmax(chatbot_logits, 1)]))
seq2seq/sequence_to_sequence_implementation.ipynb
elenduuche/deep-learning
mit
Let's load up our trajectory. This is the trajectory that we generated in the "Running a simulation in OpenMM and analyzing the results with mdtraj" example.
traj = md.load('ala2.h5')
traj
examples/principal-components.ipynb
swails/mdtraj
lgpl-2.1
Create a two-component PCA model, and project our data down into this reduced-dimensional space. Since we're using just the cartesian coordinates as input to PCA, it's important to start with some kind of alignment.
pca1 = PCA(n_components=2)
traj.superpose(traj, 0)

reduced_cartesian = pca1.fit_transform(traj.xyz.reshape(traj.n_frames, traj.n_atoms * 3))
print(reduced_cartesian.shape)
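As an optional sanity check (not in the original example), the fitted scikit-learn PCA object reports how much of the total variance each component captures:

```python
# Fraction of variance explained by PC1 and PC2
print(pca1.explained_variance_ratio_)
```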
examples/principal-components.ipynb
swails/mdtraj
lgpl-2.1
Now we can plot the data on this projection.
plt.figure()
plt.scatter(reduced_cartesian[:, 0], reduced_cartesian[:, 1], marker='x', c=traj.time)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.title('Cartesian coordinate PCA: alanine dipeptide')
cbar = plt.colorbar()
cbar.set_label('Time [ps]')
examples/principal-components.ipynb
swails/mdtraj
lgpl-2.1
Let's try cross-checking our result by using a different feature space that isn't sensitive to alignment. Instead, we'll "featurize" our trajectory by computing the pairwise distance between every pair of atoms in each frame, and use that as our high-dimensional input space for PCA.
pca2 = PCA(n_components=2)

from itertools import combinations
# this python function gives you all unique pairs of elements from a list
atom_pairs = list(combinations(range(traj.n_atoms), 2))
pairwise_distances = md.geometry.compute_distances(traj, atom_pairs)
print(pairwise_distances.shape)
reduced_distances = pca2.fit_transform(pairwise_distances)

plt.figure()
plt.scatter(reduced_distances[:, 0], reduced_distances[:, 1], marker='x', c=traj.time)
plt.xlabel('PC1')
plt.ylabel('PC2')
plt.title('Pairwise distance PCA: alanine dipeptide')
cbar = plt.colorbar()
cbar.set_label('Time [ps]')
examples/principal-components.ipynb
swails/mdtraj
lgpl-2.1
Time to build the network
Below you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.

The network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression: the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$.

A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.

We use the weights to propagate signals forward from the input to the output layers in a neural network. We also use the weights to propagate error backwards from the output back into the network to update our weights. This is called backpropagation.

Hint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.

Below, you have these tasks:
1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.
2. Implement the forward pass in the train method.
3. Implement the backpropagation algorithm in the train method, including calculating the output error.
4. Implement the forward pass in the run method.
class NeuralNetwork(object):
    def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):
        # Set number of nodes in input, hidden and output layers.
        self.input_nodes = input_nodes
        self.hidden_nodes = hidden_nodes
        self.output_nodes = output_nodes

        # Initialize weights
        self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5,
                                                        (self.hidden_nodes, self.input_nodes))
        self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5,
                                                         (self.output_nodes, self.hidden_nodes))
        self.lr = learning_rate

    def train(self, inputs_list, targets_list):
        # Convert inputs list to 2d array
        inputs = np.array(inputs_list, ndmin=2).T
        targets = np.array(targets_list, ndmin=2).T

        ### Forward pass ###
        hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)
        hidden_outputs = self.activation_function(hidden_inputs)
        final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
        final_outputs = final_inputs

        ### Backward pass ###
        output_errors = targets - final_outputs
        output_grad = output_errors  # derivative of f(x) = x is 1
        hidden_errors = np.dot(self.weights_hidden_to_output.T, output_errors)
        hidden_grad = self.activation_function_derivative(hidden_inputs)
        self.weights_hidden_to_output += self.lr * np.dot(output_grad, hidden_outputs.T)
        self.weights_input_to_hidden += self.lr * np.dot(hidden_errors * hidden_grad, inputs.T)

    def activation_function(self, x):
        return 1 / (1 + np.exp(-x))

    def activation_function_derivative(self, x):
        return self.activation_function(x) * (1 - self.activation_function(x))

    def run(self, inputs_list):
        # Run a forward pass through the network
        inputs = np.array(inputs_list, ndmin=2).T
        hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)
        hidden_outputs = self.activation_function(hidden_inputs)
        final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)
        final_outputs = final_inputs
        return final_outputs


def MSE(y, Y):
    return np.mean((y - Y)**2)
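A quick smoke test of the class (my addition), reusing the shapes and values from the unit tests further below (3 inputs, 2 hidden nodes, 1 output):

```python
# Train on one record and run a forward pass; exact outputs depend on the
# randomly initialized weights.
net = NeuralNetwork(3, 2, 1, 0.5)
net.train([0.5, -0.2, 0.1], [0.4])
print(net.run([0.5, -0.2, 0.1]))
```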
Project 1/Project-1.ipynb
ajaybhat/DLND
apache-2.0
Training the network
Here you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.

You'll also be using a method known as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.

Choose the number of epochs
This is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.

Choose the learning rate
This scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.

Choose the number of hidden nodes
The more hidden nodes you have, the more accurate the model's predictions can be. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn, and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in the number of hidden units you choose.
import sys

### Set the hyperparameters here ###
epochs = 2000
learning_rate = 0.008
hidden_nodes = 10
output_nodes = 1

N_i = train_features.shape[1]
network = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)

losses = {'train': [], 'validation': []}
for e in range(epochs):
    # Go through a random batch of 128 records from the training data set
    batch = np.random.choice(train_features.index, size=128)
    for record, target in zip(train_features.ix[batch].values,
                              train_targets.ix[batch]['cnt']):
        network.train(record, target)

    # Printing out the training progress
    train_loss = MSE(network.run(train_features), train_targets['cnt'].values)
    val_loss = MSE(network.run(val_features), val_targets['cnt'].values)
    if e % (epochs / 10) == 0:
        sys.stdout.write("\nProgress: " + str(100 * e / float(epochs))[:4]
                         + "% ... Training loss: " + str(train_loss)[:5]
                         + " ... Validation loss: " + str(val_loss)[:5])

    losses['train'].append(train_loss)
    losses['validation'].append(val_loss)

plt.plot(losses['train'], label='Training loss')
plt.plot(losses['validation'], label='Validation loss')
plt.legend()
plt.ylim(ymax=0.5)
Project 1/Project-1.ipynb
ajaybhat/DLND
apache-2.0
Check out your predictions Here, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.
fig, ax = plt.subplots(figsize=(8, 4))

mean, std = scaled_features['cnt']
predictions = network.run(test_features) * std + mean
ax.plot(predictions[0], 'r', label='Prediction')
ax.plot((test_targets['cnt'] * std + mean).values, 'g', label='Data')
ax.set_xlim(right=len(predictions))
ax.legend()

dates = pd.to_datetime(rides.ix[test_data.index]['dteday'])
dates = dates.apply(lambda d: d.strftime('%b %d'))
ax.set_xticks(np.arange(len(dates))[12::24])
_ = ax.set_xticklabels(dates[12::24], rotation=45)
Project 1/Project-1.ipynb
ajaybhat/DLND
apache-2.0
Thinking about your results
Answer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does? Note: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter.

Your answer below
The model predicts the data fairly well for the limited amount it is provided. It fails massively for the period December 23-28, because real-world scenarios indicate most people would not get bikes at that time, and would be staying home. However, the network predicts the same behavior as for the other days of the month, and so it fails. This could be avoided by feeding it similar data of the type seen in the sample (as part of the training data). Another way is to experiment with the activation function itself: Sigmoid can be replaced by tanh or Leaky ReLU functions. http://cs231n.github.io/neural-networks-1/

Unit tests
Run these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.
import unittest

inputs = [0.5, -0.2, 0.1]
targets = [0.4]
test_w_i_h = np.array([[0.1, 0.4, -0.3],
                       [-0.2, 0.5, 0.2]])
test_w_h_o = np.array([[0.3, -0.1]])

class TestMethods(unittest.TestCase):

    ##########
    # Unit tests for data loading
    ##########

    def test_data_path(self):
        # Test that file path to dataset has been unaltered
        self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')

    def test_data_loaded(self):
        # Test that data frame loaded
        self.assertTrue(isinstance(rides, pd.DataFrame))

    ##########
    # Unit tests for network functionality
    ##########

    def test_activation(self):
        network = NeuralNetwork(3, 2, 1, 0.5)
        # Test that the activation function is a sigmoid
        self.assertTrue(np.all(network.activation_function(0.5) == 1 / (1 + np.exp(-0.5))))

    def test_train(self):
        # Test that weights are updated correctly on training
        network = NeuralNetwork(3, 2, 1, 0.5)
        network.weights_input_to_hidden = test_w_i_h.copy()
        network.weights_hidden_to_output = test_w_h_o.copy()

        network.train(inputs, targets)
        self.assertTrue(np.allclose(network.weights_hidden_to_output,
                                    np.array([[0.37275328, -0.03172939]])))
        self.assertTrue(np.allclose(network.weights_input_to_hidden,
                                    np.array([[0.10562014, 0.39775194, -0.29887597],
                                              [-0.20185996, 0.50074398, 0.19962801]])))

    def test_run(self):
        # Test correctness of run method
        network = NeuralNetwork(3, 2, 1, 0.5)
        network.weights_input_to_hidden = test_w_i_h.copy()
        network.weights_hidden_to_output = test_w_h_o.copy()

        self.assertTrue(np.allclose(network.run(inputs), 0.09998924))

    def runTest(self):
        pass

suite = unittest.TestLoader().loadTestsFromModule(TestMethods())
unittest.TextTestRunner(verbosity=1).run(suite)
Project 1/Project-1.ipynb
ajaybhat/DLND
apache-2.0
Summary Report
import itertools
import json
import os
import re
import pickle
import platform
import time

from collections import defaultdict as dd
from functools import partial
from os.path import abspath, dirname, exists, join
from string import Template

import numpy as np
import pandas as pd
import seaborn as sns
import scipy.stats as stats
from matplotlib import pyplot as plt

from IPython import sys_info
from IPython.display import display, HTML, Image, Javascript, Markdown, SVG

from rsmtool.utils.files import (get_output_directory_extension,
                                 parse_json_with_comments)
from rsmtool.utils.notebook import (float_format_func,
                                    int_or_float_format_func,
                                    bold_highlighter,
                                    color_highlighter,
                                    show_thumbnail)
from rsmtool.reader import DataReader
from rsmtool.writer import DataWriter
from rsmtool.version import VERSION as rsmtool_version

# turn off interactive plotting
plt.ioff()

rsm_report_dir = os.environ.get('RSM_REPORT_DIR', None)
if rsm_report_dir is None:
    rsm_report_dir = os.getcwd()

rsm_environ_config = join(rsm_report_dir, '.environ.json')
if not exists(rsm_environ_config):
    raise FileNotFoundError('The file {} cannot be located. '
                            'Please make sure that either (1) '
                            'you have set the correct directory with the `RSM_REPORT_DIR` '
                            'environment variable, or (2) that your `.environ.json` '
                            'file is in the same directory as your notebook.'.format(rsm_environ_config))

environ_config = parse_json_with_comments(rsm_environ_config)
rsmtool/notebooks/summary/header.ipynb
EducationalTestingService/rsmtool
apache-2.0
<style type="text/css">
  div.prompt.output_prompt {
    color: white;
  }

  span.highlight_color {
    color: red;
  }

  span.highlight_bold {
    font-weight: bold;
  }

  @media print {
    @page {
      size: landscape;
      margin: 0cm 0cm 0cm 0cm;
    }

    * {
      margin: 0px;
      padding: 0px;
    }

    #toc {
      display: none;
    }

    span.highlight_color, span.highlight_bold {
      font-weight: bolder;
      text-decoration: underline;
    }

    div.prompt.output_prompt {
      display: none;
    }

    h3#Python-packages, div#packages {
      display: none;
    }
  }
</style>
# NOTE: you will need to set the following manually
# if you are using this notebook interactively.
summary_id = environ_config.get('SUMMARY_ID')
description = environ_config.get('DESCRIPTION')
jsons = environ_config.get('JSONS')
output_dir = environ_config.get('OUTPUT_DIR')
use_thumbnails = environ_config.get('USE_THUMBNAILS')
file_format_summarize = environ_config.get('FILE_FORMAT')

# groups for subgroup analysis.
groups_desc = environ_config.get('GROUPS_FOR_DESCRIPTIVES')
groups_eval = environ_config.get('GROUPS_FOR_EVALUATIONS')

# javascript path
javascript_path = environ_config.get("JAVASCRIPT_PATH")

# initialize id generator for thumbnails
id_generator = itertools.count(1)

with open(join(javascript_path, "sort.js"), "r", encoding="utf-8") as sortf:
    display(Javascript(data=sortf.read()))

# load the information about all models
model_list = []
for (json_file, experiment_name) in jsons:
    model_config = json.load(open(json_file))
    model_id = model_config['experiment_id']
    model_name = experiment_name if experiment_name else model_id
    model_csvdir = dirname(json_file)
    model_file_format = get_output_directory_extension(model_csvdir, model_id)
    model_list.append((model_id, model_name, model_config, model_csvdir, model_file_format))

Markdown("This report presents the analysis for **{}**: {} \n ".format(summary_id, description))

HTML(time.strftime('%c'))

# get a matched list of model ids and descriptions
models_and_desc = zip([model_name for (model_id, model_name, config, csvdir, model_file_format) in model_list],
                      [config['description'] for (model_id, model_name, config, csvdir, file_format) in model_list])
model_desc_list = '\n\n'.join(['**{}**: {}'.format(m, d) for (m, d) in models_and_desc])

Markdown("The report compares the following models: \n\n {}".format(model_desc_list))

if use_thumbnails:
    display(Markdown("""***Note: Images in this report have been converted to """
                     """clickable thumbnails***"""))

%%html
<div id="toc"></div>
rsmtool/notebooks/summary/header.ipynb
EducationalTestingService/rsmtool
apache-2.0
The role of dipole orientations in distributed source localization
When performing source localization in a distributed manner (MNE/dSPM/sLORETA/eLORETA), the source space is defined as a grid of dipoles that spans a large portion of the cortex. These dipoles have both a position and an orientation. In this tutorial, we will look at the various options available to restrict the orientation of the dipoles and the impact on the resulting source estimate. See inverse_orientation_constraints.

Loading data
Load everything we need to perform source localization on the sample dataset.
import mne
import numpy as np
from mne.datasets import sample
from mne.minimum_norm import make_inverse_operator, apply_inverse

data_path = sample.data_path()
evokeds = mne.read_evokeds(data_path + '/MEG/sample/sample_audvis-ave.fif')
left_auditory = evokeds[0].apply_baseline()
fwd = mne.read_forward_solution(
    data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif')
mne.convert_forward_solution(fwd, surf_ori=True, copy=False)
noise_cov = mne.read_cov(data_path + '/MEG/sample/sample_audvis-cov.fif')
subject = 'sample'
subjects_dir = data_path + '/subjects'
trans_fname = data_path + '/MEG/sample/sample_audvis_raw-trans.fif'
0.19/_downloads/a1ab4842a5aa341564b4fa0a6bf60065/plot_dipole_orientations.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
The source space Let's start by examining the source space as constructed by the :func:mne.setup_source_space function. Dipoles are placed along fixed intervals on the cortex, determined by the spacing parameter. The source space does not define the orientation for these dipoles.
lh = fwd['src'][0]  # Visualize the left hemisphere
verts = lh['rr']  # The vertices of the source space
tris = lh['tris']  # Groups of three vertices that form triangles
dip_pos = lh['rr'][lh['vertno']]  # The position of the dipoles
dip_ori = lh['nn'][lh['vertno']]
dip_len = len(dip_pos)
dip_times = [0]
white = (1.0, 1.0, 1.0)  # RGB values for a white color

actual_amp = np.ones(dip_len)  # misc amp to create Dipole instance
actual_gof = np.ones(dip_len)  # misc GOF to create Dipole instance
dipoles = mne.Dipole(dip_times, dip_pos, actual_amp, dip_ori, actual_gof)
trans = mne.read_trans(trans_fname)

fig = mne.viz.create_3d_figure(size=(600, 400), bgcolor=white)
coord_frame = 'mri'

# Plot the cortex
fig = mne.viz.plot_alignment(subject=subject, subjects_dir=subjects_dir,
                             trans=trans, surfaces='white',
                             coord_frame=coord_frame, fig=fig)

# Mark the position of the dipoles with small red dots
fig = mne.viz.plot_dipole_locations(dipoles=dipoles, trans=trans, mode='sphere',
                                    subject=subject, subjects_dir=subjects_dir,
                                    coord_frame=coord_frame, scale=7e-4, fig=fig)

mne.viz.set_3d_view(figure=fig, azimuth=180, distance=0.25)
0.19/_downloads/a1ab4842a5aa341564b4fa0a6bf60065/plot_dipole_orientations.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Fixed dipole orientations
While the source space defines the position of the dipoles, the inverse operator defines the possible orientations of them. One of the options is to assign a fixed orientation. Since the neural currents from which MEG and EEG signals originate flow mostly perpendicular to the cortex [1]_, restricting the orientation of the dipoles accordingly places a useful restriction on the source estimate. By specifying fixed=True when calling :func:mne.minimum_norm.make_inverse_operator, the dipole orientations are fixed to be orthogonal to the surface of the cortex, pointing outwards. Let's visualize this:
fig = mne.viz.create_3d_figure(size=(600, 400))

# Plot the cortex
fig = mne.viz.plot_alignment(subject=subject, subjects_dir=subjects_dir,
                             trans=trans, surfaces='white',
                             coord_frame='head', fig=fig)

# Show the dipoles as arrows pointing along the surface normal
fig = mne.viz.plot_dipole_locations(dipoles=dipoles, trans=trans, mode='arrow',
                                    subject=subject, subjects_dir=subjects_dir,
                                    coord_frame='head', scale=7e-4, fig=fig)

mne.viz.set_3d_view(figure=fig, azimuth=180, distance=0.1)
0.19/_downloads/a1ab4842a5aa341564b4fa0a6bf60065/plot_dipole_orientations.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Restricting the dipole orientations in this manner leads to the following source estimate for the sample data:
# Compute the source estimate for the 'left - auditory' condition in the sample
# dataset.
inv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=True)
stc = apply_inverse(left_auditory, inv, pick_ori=None)

# Visualize it at the moment of peak activity.
_, time_max = stc.get_peak(hemi='lh')
brain_fixed = stc.plot(surface='white', subjects_dir=subjects_dir,
                       initial_time=time_max, time_unit='s', size=(600, 400))
0.19/_downloads/a1ab4842a5aa341564b4fa0a6bf60065/plot_dipole_orientations.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
The direction of the estimated current is now restricted to two directions: inward and outward. In the plot, blue areas indicate current flowing inwards and red areas indicate current flowing outwards. Given the curvature of the cortex, groups of dipoles tend to point in the same direction: the direction of the electromagnetic field picked up by the sensors.

Loose dipole orientations
Forcing the source dipoles to be strictly orthogonal to the cortex makes the source estimate sensitive to the spacing of the dipoles along the cortex, since the curvature of the cortex changes within each ~10 square mm patch. Furthermore, misalignment of the MEG/EEG and MRI coordinate frames is more critical when the source dipole orientations are strictly constrained [2]_. To lift the restriction on the orientation of the dipoles, the inverse operator has the ability to place not one, but three dipoles at each location defined by the source space. These three dipoles are placed orthogonally to form a Cartesian coordinate system. Let's visualize this:
fig = mne.viz.create_3d_figure(size=(600, 400))

# Plot the cortex
fig = mne.viz.plot_alignment(subject=subject, subjects_dir=subjects_dir,
                             trans=trans, surfaces='white',
                             coord_frame='head', fig=fig)

# Show the three dipoles defined at each location in the source space
fig = mne.viz.plot_alignment(subject=subject, subjects_dir=subjects_dir,
                             trans=trans, fwd=fwd, surfaces='white',
                             coord_frame='head', fig=fig)

mne.viz.set_3d_view(figure=fig, azimuth=180, distance=0.1)
0.19/_downloads/a1ab4842a5aa341564b4fa0a6bf60065/plot_dipole_orientations.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
When computing the source estimate, the activity at each of the three dipoles is collapsed into the XYZ components of a single vector, which leads to the following source estimate for the sample data:
# Make an inverse operator with loose dipole orientations
inv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=False,
                            loose=1.0)

# Compute the source estimate, indicate that we want a vector solution
stc = apply_inverse(left_auditory, inv, pick_ori='vector')

# Visualize it at the moment of peak activity.
_, time_max = stc.magnitude().get_peak(hemi='lh')
brain_mag = stc.plot(subjects_dir=subjects_dir, initial_time=time_max,
                     time_unit='s', size=(600, 400), overlay_alpha=0)
0.19/_downloads/a1ab4842a5aa341564b4fa0a6bf60065/plot_dipole_orientations.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Limiting orientations, but not fixing them
Often, the best results will be obtained by allowing the dipoles to have somewhat free orientation, but not stray too far from an orientation that is perpendicular to the cortex. The loose parameter of :func:mne.minimum_norm.make_inverse_operator allows you to specify a value between 0 (fixed) and 1 (unrestricted or "free") to indicate the amount the orientation is allowed to deviate from the surface normal.
# Set loose to 0.2, the default value
inv = make_inverse_operator(left_auditory.info, fwd, noise_cov, fixed=False,
                            loose=0.2)
stc = apply_inverse(left_auditory, inv, pick_ori='vector')

# Visualize it at the moment of peak activity.
_, time_max = stc.magnitude().get_peak(hemi='lh')
brain_loose = stc.plot(subjects_dir=subjects_dir, initial_time=time_max,
                       time_unit='s', size=(600, 400), overlay_alpha=0)
0.19/_downloads/a1ab4842a5aa341564b4fa0a6bf60065/plot_dipole_orientations.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Discarding dipole orientation information Often, further analysis of the data does not need information about the orientation of the dipoles, but rather their magnitudes. The pick_ori parameter of the :func:mne.minimum_norm.apply_inverse function allows you to specify whether to return the full vector solution ('vector') or rather the magnitude of the vectors (None, the default) or only the activity in the direction perpendicular to the cortex ('normal').
# Only retain vector magnitudes
stc = apply_inverse(left_auditory, inv, pick_ori=None)

# Visualize it at the moment of peak activity.
_, time_max = stc.get_peak(hemi='lh')
brain = stc.plot(surface='white', subjects_dir=subjects_dir,
                 initial_time=time_max, time_unit='s', size=(600, 400))
0.19/_downloads/a1ab4842a5aa341564b4fa0a6bf60065/plot_dipole_orientations.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Use the np.random module to generate a normal distribution of 1,000 data points in two dimensions (e.g. x, y) - choose whatever mean and sigma^2 you like. Generate another 1,000 data points with a normal distribution in two dimensions that are well separated from the first set. You now have two "clusters". Concatenate them so you have 2,000 data points in two dimensions, and plot the points. This will be the training set.

Generate 100 data points with the same distribution as your first random normal 2-d set, and 100 data points with the same distribution as your second random normal 2-d set. This will be the test set labeled X_test_normal.

Generate 100 data points with a random uniform distribution. This will be the test set labeled X_test_uniform.

Define a model classifier with svm.OneClassSVM. A data-generation sketch follows the model definition below.
from sklearn import svm

model = svm.OneClassSVM()
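A minimal sketch of the data-generation steps described above; the means, sigmas, and uniform bounds are arbitrary choices of mine, not values prescribed by the exercise.

```python
import numpy as np

rng = np.random.RandomState(0)

# Two well-separated 2-d Gaussian clusters of 1,000 points each (training set)
cluster1 = rng.normal(loc=(0, 0), scale=1.0, size=(1000, 2))
cluster2 = rng.normal(loc=(10, 10), scale=1.0, size=(1000, 2))
X_train = np.concatenate([cluster1, cluster2])

# 100 points from each of the same two distributions (normal test set)
X_test_normal = np.concatenate([rng.normal((0, 0), 1.0, (100, 2)),
                                rng.normal((10, 10), 1.0, (100, 2))])

# 100 points from a uniform distribution (uniform test set)
X_test_uniform = rng.uniform(low=-5, high=15, size=(100, 2))
```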
notebooks/anomaly_detection/sample_anomaly_detection.ipynb
cavestruz/MLPipeline
mit
Fit the model to the training data. Use the trained model to predict whether the X_test_normal data points are in the same distribution, and calculate the fraction of "false" predictions. Use the trained model to predict whether X_test_uniform is in the same distribution, and calculate the fraction of "false" predictions. Use the trained model to see how well it recovers the training data. (Predict on the training data, and calculate the fraction of "false" predictions.)

Create another instance of the model classifier, but change the kwarg value for nu. Hint: Use help to figure out what the kwargs are. Redo the prediction on the training set, the prediction on X_test_uniform, and the prediction on X_test_normal.

Plot in scatter points the X_train in blue, X_test_normal in red, and X_test_uniform in black. Overplot the trained model decision function boundary for the first instance of the model classifier. Do the same for the second instance of the model classifier.
from sklearn.covariance import EllipticEnvelope
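The fit/predict/false-fraction steps described above might look like the following sketch, assuming X_train, X_test_normal, and X_test_uniform from the previous cell (OneClassSVM.predict returns +1 for inliers and -1 for outliers):

```python
model.fit(X_train)

# Fraction of points flagged as "not from the training distribution"
frac_false_normal = (model.predict(X_test_normal) == -1).mean()
frac_false_uniform = (model.predict(X_test_uniform) == -1).mean()
frac_false_train = (model.predict(X_train) == -1).mean()
print(frac_false_normal, frac_false_uniform, frac_false_train)
```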
notebooks/anomaly_detection/sample_anomaly_detection.ipynb
cavestruz/MLPipeline
mit
Model and parameters
An electron-only device is simulated, without a contact barrier. Note that more trap levels can be included by modifying the traps= argument below. Each trap level should have a unique name.
L = 200e-9  # device thickness, m
model = oedes.models.std.electrononly(L, traps=['trap'])
params = {
    'T': 300,  # K
    'electrode0.workfunction': 0,  # eV
    'electrode1.workfunction': 0,  # eV
    'electron.energy': 0,  # eV
    'electron.mu': 1e-9,  # m2/(Vs)
    'electron.N0': 2.4e26,  # 1/m^3
    'electron.trap.energy': 0,  # eV
    'electron.trap.trate': 1e-22,  # 1/(m^3 s)
    'electron.trap.N0': 6.2e22,  # 1/m^3
    'electrode0.voltage': 0,  # V
    'electrode1.voltage': 0,  # V
    'epsilon_r': 3.  # 1
}
examples/scl/scl-trapping.ipynb
mzszym/oedes
agpl-3.0
Sweep parameters
For simplicity, the case of absent traps is modeled by putting the trap level 1 eV above the transport level. This makes the trap states effectively unoccupied.
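A rough Boltzmann estimate (my back-of-envelope, not from the source) shows why a level 1 eV above the transport level is effectively empty: at $T = 300\,$K, $k_B T \approx 0.0259\,$eV, so the relative occupancy is

$$\frac{n_t}{N_t} \sim \exp\!\left(-\frac{\Delta E}{k_B T}\right) = \exp\!\left(-\frac{1\ \mathrm{eV}}{0.0259\ \mathrm{eV}}\right) \approx 10^{-17}.$$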
trapenergy_sweep = oedes.sweep('electron.trap.energy', np.asarray([-0.45, -0.33, -0.21, 1.]))
voltage_sweep = oedes.sweep('electrode0.voltage', np.logspace(-3, np.log10(20.), 100))
examples/scl/scl-trapping.ipynb
mzszym/oedes
agpl-3.0
Result
c = oedes.context(model)
for tdepth, ct in c.sweep(params, trapenergy_sweep):
    for _ in ct.sweep(ct.params, voltage_sweep):
        pass
    v, j = ct.teval(voltage_sweep.parameter_name, 'J')
    oedes.testing.store(j, rtol=1e-3)  # for automatic testing
    # negative energies are real trap depths; the +1 eV level is the trap-free case
    if tdepth < 0:
        label = 'trap depth %s eV' % tdepth
    else:
        label = 'no traps'
    plt.plot(v, j, label=label)
plt.xscale('log')
plt.yscale('log')
plt.xlabel('V')
plt.ylabel(r'$\mathrm{A/m^2}$')
plt.legend(loc=0, frameon=False);
examples/scl/scl-trapping.ipynb
mzszym/oedes
agpl-3.0
Step 1
I started with the "Franchises" list on Boxofficemojo.com. Within each franchise page, I scraped each movie's information and entered it into a Python dictionary. If a movie is already in the dictionary, its entry is overwritten, except with a different franchise name. Note that the url for the "Franchises" list was sorted ascending, so this conveniently rolls "subfranchises" into their "parent" franchise. E.g., "Fantastic Beasts" and the "Harry Potter" movies have their own separate franchises, but they will all be tagged as the "JKRowling" franchise, i.e. "./chart/?id=jkrowling.htm".

Also, because I was comparing sequels to their predecessors, I focused on Domestic Gross, adjusted for ticket price inflation.
import requests
import pandas as pd
from bs4 import BeautifulSoup

url = 'http://www.boxofficemojo.com/franchises/?view=Franchise&sort=nummovies&order=ASC&p=.htm'
response = requests.get(url)
page = response.text
soup = BeautifulSoup(page, "lxml")

tables = soup.find_all("table")
rows = [row for row in tables[3].find_all('tr')]
rows = rows[1:]

# Initialize empty dictionary of movies
movies = {}

for row in rows:
    items = row.find_all('td')
    franchise = items[0].find('a')['href']
    franchiseurl = 'http://www.boxofficemojo.com/franchises/' + franchise[2:]
    response = requests.get(franchiseurl)
    franchise_page = response.text
    franchise_soup = BeautifulSoup(franchise_page, "lxml")
    franchise_tables = franchise_soup.find_all("table")
    franchise_gross = [row for row in franchise_tables[4].find_all('tr')]
    franchise_gross = franchise_gross[1:len(franchise_gross)-2]
    franchise_adjgross = [row for row in franchise_tables[5].find_all('tr')]
    franchise_adjgross = franchise_adjgross[1:len(franchise_adjgross)-2]

    # Assign movieurl as key
    # Add title, franchise, inflation-adjusted gross, release date.
    for row in franchise_adjgross:
        movie_info = row.find_all('td')
        movieurl = movie_info[1].find('a')['href']
        title = movie_info[1]
        adjgross = movie_info[3]
        release = movie_info[5]
        movies[movieurl] = [title.text]
        movies[movieurl].append(franchise)
        movies[movieurl].append(adjgross.text)
        movies[movieurl].append(release.text)

    # Add number of theaters for the above movies
    for row in franchise_gross:
        movie_info = row.find_all('td')
        movieurl = movie_info[1].find('a')['href']
        theaters = movie_info[4]
        if movieurl in movies.keys():
            movies[movieurl].append(theaters.text)

df = pd.DataFrame(movies.values())
df.columns = ['Title', 'Franchise', 'AdjGross', 'Release', 'Theaters']
df.head()
df.shape
Projects/Project2/Project2_Prashant.ipynb
ptpro3/ptpro3.github.io
mit
Step 2 Clean up data.
import dateutil.parser

# Remove movies that were re-issues, special editions, or separate 3D or IMAX versions.
df['Ignore'] = df['Title'].apply(lambda x: 're-issue' in x.lower() or 're-release' in x.lower()
                                 or 'special edition' in x.lower() or '3d)' in x.lower()
                                 or 'imax' in x.lower())
df = df[(df.Ignore == False)]
del df['Ignore']
df.shape

# Convert Adjusted Gross to a number
df['AdjGross'] = df['AdjGross'].apply(lambda x: int(x.replace('$', '').replace(',', '')))

# Convert Date string to dateobject. Need to prepend '19' for dates > 17 because Python treats '/60' as year '2060'
df['Release'] = df['Release'].apply(lambda x: (x[:-2] + '19' + x[-2:]) if int(x[-2:]) > 17 else x)
df['Release'] = df['Release'].apply(lambda x: dateutil.parser.parse(x))
Projects/Project2/Project2_Prashant.ipynb
ptpro3/ptpro3.github.io
mit
The films need to be grouped by franchise so that franchise-related data can be included as features for each observation:
- The Average Adjusted Gross of all previous films in the franchise
- The Adjusted Gross of the very first film in the franchise
- The Release Date of the previous film in the franchise
- The Release Date of the very first film in the franchise
- The Series Number of the film in that franchise -- I considered using the film's number in the franchise as a rank value that could be split into indicator variables, but it's useful as a linear value because the total accrued sum of $ earned by the franchise is a linear combination of "SeriesNum" and "PrevAvgGross"
df = df.sort_values(['Franchise', 'Release'])
df['CumGross'] = df.groupby(['Franchise'])['AdjGross'].apply(lambda x: x.cumsum())
df['SeriesNum'] = df.groupby(['Franchise'])['Release'].apply(lambda x: x.rank())
df['PrevAvgGross'] = (df['CumGross'] - df['AdjGross']) / (df['SeriesNum'] - 1)
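As a worked check of the PrevAvgGross formula (toy numbers of mine, not from the dataset): for a three-film franchise grossing 10, 20, 30, the cumulative sums are 10, 30, 60, so the average of all previous films comes out NaN, 10, and 15.

```python
toy = pd.DataFrame({'Franchise': ['A'] * 3, 'AdjGross': [10, 20, 30]})
toy['CumGross'] = toy.groupby('Franchise')['AdjGross'].cumsum()
toy['SeriesNum'] = [1, 2, 3]
# First film has no predecessors (0/0 -> NaN), then 10/1 and 30/2
print((toy['CumGross'] - toy['AdjGross']) / (toy['SeriesNum'] - 1))
```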
Projects/Project2/Project2_Prashant.ipynb
ptpro3/ptpro3.github.io
mit
Number of Theaters in which the film showed -- Where this number was unavailable, replaced '-' with 0; the 0 will later be replaced with the mean number of theaters for the other films in the same franchise. I chose the average as a reasonable estimate.
df.Theaters = df.Theaters.replace('-', '0')
df['Theaters'] = df['Theaters'].apply(lambda x: int(x.replace(',', '')))
df['PrevRelease'] = df['Release'].shift()

# Create a second dataframe with franchise group-related information.
df_group = pd.DataFrame(df.groupby(['Franchise'])['Title'].apply(lambda x: x.count()))
df_group['FirstGross'] = df.groupby(['Franchise'])['AdjGross'].first()
df_group['FirstRelease'] = df.groupby(['Franchise'])['Release'].first()
df_group['SumTheaters'] = df.groupby(['Franchise'])['Theaters'].apply(lambda x: x.sum())
df_group.columns = ['NumOfFilms', 'FirstGross', 'FirstRelease', 'SumTheaters']
df_group['AvgTheaters'] = df_group['SumTheaters'] / df_group['NumOfFilms']
df_group['Franchise'] = df.groupby(['Franchise'])['Franchise'].first()
df = df.merge(df_group, on='Franchise')
df.head()

df['Theaters'] = df.Theaters.replace(0, df.AvgTheaters)

# Drop rows with NaN. Drops all first films, but I've already stored first film information within other features.
df = df.dropna()
df.shape

df['DaysSinceFirstFilm'] = df.Release - df.FirstRelease
df['DaysSinceFirstFilm'] = df['DaysSinceFirstFilm'].apply(lambda x: x.days)
df['DaysSincePrevFilm'] = df.Release - df.PrevRelease
df['DaysSincePrevFilm'] = df['DaysSincePrevFilm'].apply(lambda x: x.days)

df.sort_values('Release', ascending=False).head()
Projects/Project2/Project2_Prashant.ipynb
ptpro3/ptpro3.github.io
mit
For the regression model, I decided to keep data for films released through 2016, but drop the 3 films released this year; because of their recent release date, their gross earnings will not yet be representative.
films17 = df.loc[[530, 712, 676]]

# Grabbing columns for regression model and dropping 2017 films
dfreg = df[['AdjGross', 'Theaters', 'SeriesNum', 'PrevAvgGross', 'FirstGross',
            'DaysSinceFirstFilm', 'DaysSincePrevFilm']]
dfreg = dfreg.drop([530, 712, 676])
dfreg.shape
Projects/Project2/Project2_Prashant.ipynb
ptpro3/ptpro3.github.io
mit
Step 3 Apply Linear Regression.
dfreg.corr()
sns.pairplot(dfreg);
sns.regplot((dfreg.PrevAvgGross), (dfreg.AdjGross));
sns.regplot(np.log(dfreg.Theaters), np.log(dfreg.AdjGross));
Projects/Project2/Project2_Prashant.ipynb
ptpro3/ptpro3.github.io
mit
In the pairplot we can see that 'AdjGross' may have some correlation with the variables, particularly 'Theaters' and 'PrevAvgGross'. However, it looks like a polynomial model, or natural log / some other transformation will be required before fitting a linear model.
y, X = patsy.dmatrices('AdjGross ~ Theaters + SeriesNum + PrevAvgGross + FirstGross + DaysSinceFirstFilm + DaysSincePrevFilm', data=dfreg, return_type="dataframe")
Projects/Project2/Project2_Prashant.ipynb
ptpro3/ptpro3.github.io
mit
First try: Initial linear regression model with statsmodels
model = sm.OLS(y, X)
fit = model.fit()
fit.summary()

fit.resid.plot(style='o');
Projects/Project2/Project2_Prashant.ipynb
ptpro3/ptpro3.github.io
mit
Try Polynomial Regression
polyX = PolynomialFeatures(2).fit_transform(X)
polymodel = sm.OLS(y, polyX)
polyfit = polymodel.fit()
polyfit.rsquared
polyfit.resid.plot(style='o');
polyfit.rsquared_adj
Projects/Project2/Project2_Prashant.ipynb
ptpro3/ptpro3.github.io
mit
Heteroskedasticity
The polynomial regression improved the adjusted R-squared and the residual plot, but there are still issues with other statistics, including skew. It's worth running the Breusch-Pagan test:
hetnames = ['Lagrange multiplier statistic', 'p-val', 'f-val', 'f p-val']
hettest = sm.stats.diagnostic.het_breushpagan(fit.resid, fit.model.exog)
list(zip(hetnames, hettest))  # list() so the pairs display under Python 3

hetnames = ['Lagrange multiplier statistic', 'p-val', 'f-val', 'f p-val']
hettest = sm.stats.diagnostic.het_breushpagan(polyfit.resid, fit.model.exog)
list(zip(hetnames, hettest))
Projects/Project2/Project2_Prashant.ipynb
ptpro3/ptpro3.github.io
mit
Apply Box-Cox Transformation
As seen above, the p-values were very low, suggesting the data does indeed tend towards heteroskedasticity. To improve the data we can apply Box-Cox.
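For reference, this is the standard Box-Cox transformation (background I'm adding, not from the original notebook; scipy.stats.boxcox chooses the $\lambda$ that maximizes the log-likelihood, and requires strictly positive input):

$$y^{(\lambda)} = \begin{cases} \dfrac{y^{\lambda} - 1}{\lambda}, & \lambda \neq 0,\\[4pt] \ln y, & \lambda = 0. \end{cases}$$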
dfPolyX = pd.DataFrame(polyX)
bcPolyX = pd.DataFrame()
for i in range(dfPolyX.shape[1]):
    bcPolyX[i] = scipy.stats.boxcox(dfPolyX[i])[0]

# Transformed data with Box-Cox:
bcPolyX.head()

# Introduce log(y) for target variable:
y = y.reset_index(drop=True)
logy = np.log(y)
Projects/Project2/Project2_Prashant.ipynb
ptpro3/ptpro3.github.io
mit
Try Polynomial Regression again with Log Y and Box-Cox transformed X
logPolyModel = sm.OLS(logy, bcPolyX)
logPolyFit = logPolyModel.fit()
logPolyFit.rsquared_adj
Projects/Project2/Project2_Prashant.ipynb
ptpro3/ptpro3.github.io
mit
Apply Regularization using Elastic Net to optimize this model.
X_scaled = preprocessing.scale(bcPolyX)
en_cv = linear_model.ElasticNetCV(cv=10, normalize=False)
en_cv.fit(X_scaled, logy)
en_cv.coef_

logy_en = en_cv.predict(X_scaled)
mse = metrics.mean_squared_error(logy, logy_en)
# The mean square error for this model
mse

plt.scatter([x for x in range(540)], (pd.DataFrame(logy_en)[0] - logy['AdjGross']));
Projects/Project2/Project2_Prashant.ipynb
ptpro3/ptpro3.github.io
mit
Step 4 As seen above, Polynomial Regression with Elastic Net produces a model with several nonzero coefficients for the given features. I decided to try testing this model on the three new sequels for 2017.
films17
df17 = films17[['AdjGross', 'Theaters', 'SeriesNum', 'PrevAvgGross', 'FirstGross',
                'DaysSinceFirstFilm', 'DaysSincePrevFilm']]

y17, X17 = patsy.dmatrices('AdjGross ~ Theaters + SeriesNum + PrevAvgGross + FirstGross + DaysSinceFirstFilm + DaysSincePrevFilm', data=df17, return_type="dataframe")
polyX17 = PolynomialFeatures(2).fit_transform(X17)
dfPolyX17 = pd.DataFrame(polyX17)
bcPolyX17 = pd.DataFrame()
for i in range(dfPolyX17.shape[1]):
    bcPolyX17[i] = scipy.stats.boxcox(dfPolyX17[i])[0]
X17_scaled = preprocessing.scale(bcPolyX17)

# Run the "en_cv" model from above on the 2017 data:
logy_en_2017 = en_cv.predict(X17_scaled)

# Predicted Adjusted Gross:
pd.DataFrame(np.exp(logy_en_2017))

# Adjusted Gross as of 2/1:
y17
Projects/Project2/Project2_Prashant.ipynb
ptpro3/ptpro3.github.io
mit
Multiplexers

\begin{definition}\label{def:MUX}
A Multiplexer, typically referred to as a MUX, is a digital (or analog) switching unit that picks one input channel to be streamed to an output via a control input. For single-output MUXs with $2^n$ inputs, there are then $n$ input selection signals that make up the control word to select the input channel for output. From a behavioral standpoint, a MUX can be thought of as an element that performs the same functionality as the if-elif-else (case) control statements found in almost every software language.
\end{definition}

2 Channel Input : 1 Channel Output multiplexer in Gate Level Logic

\begin{figure}
\centerline{\includegraphics{MUX21Gate.png}}
\caption{\label{fig:M21G} 2:1 MUX Symbol and Gate internals}
\end{figure}

Sympy Expression
x0, x1, s, y = symbols('x0, x1, s, y')
y21Eq = Eq(y, (~s & x0) | (s & x1)); y21Eq

TruthTabelGenrator(y21Eq)[[x1, x0, s, y]]

y21EqN = lambdify([x0, x1, s], y21Eq.rhs, dummify=False)

SystmaticVals = np.array(list(itertools.product([0, 1], repeat=3)))
print(SystmaticVals)
y21EqN(SystmaticVals[:, 1], SystmaticVals[:, 2], SystmaticVals[:, 0]).astype(int)
myHDL_DigLogicFundamentals/myHDL_Combinational/Multiplexers(MUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
myHDL Module
@block
def MUX2_1_Combo(x0, x1, s, y):
    """
    2:1 Multiplexer written in full combo
    Input:
        x0(bool): input channel 0
        x1(bool): input channel 1
        s(bool): channel selection input
    Output:
        y(bool): output
    """
    @always_comb
    def logic():
        y.next = (not s and x0) | (s and x1)

    return instances()
myHDL_DigLogicFundamentals/myHDL_Combinational/Multiplexers(MUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
myHDL Testing
# generate systematic and random test values
# stimulus inputs x0, x1 and s
TestLen = 10
SystmaticVals = list(itertools.product([0, 1], repeat=3))

x0TVs = np.array([i[1] for i in SystmaticVals]).astype(int)
np.random.seed(15)
x0TVs = np.append(x0TVs, np.random.randint(0, 2, TestLen)).astype(int)

x1TVs = np.array([i[2] for i in SystmaticVals]).astype(int)
# the random generator must have a different seed between each generation
# call in order to produce different values for each call
np.random.seed(16)
x1TVs = np.append(x1TVs, np.random.randint(0, 2, TestLen)).astype(int)

sTVs = np.array([i[0] for i in SystmaticVals]).astype(int)
# the random generator must have a different seed between each generation
# call in order to produce different values for each call
np.random.seed(17)
sTVs = np.append(sTVs, np.random.randint(0, 2, TestLen)).astype(int)

TestLen = len(x0TVs)
x0TVs, x1TVs, sTVs, TestLen

Peeker.clear()
x0 = Signal(bool(0)); Peeker(x0, 'x0')
x1 = Signal(bool(0)); Peeker(x1, 'x1')
s = Signal(bool(0)); Peeker(s, 's')
y = Signal(bool(0)); Peeker(y, 'y')

DUT = MUX2_1_Combo(x0, x1, s, y)

def MUX2_1_Combo_TB():
    """
    myHDL only testbench for module `MUX2_1_Combo`
    """
    @instance
    def stimules():
        for i in range(TestLen):
            x0.next = int(x0TVs[i])
            x1.next = int(x1TVs[i])
            s.next = int(sTVs[i])
            yield delay(1)
        raise StopSimulation()

    return instances()

sim = Simulation(DUT, MUX2_1_Combo_TB(), *Peeker.instances()).run()

Peeker.to_wavedrom('x1', 'x0', 's', 'y')

MUX2_1_ComboData = Peeker.to_dataframe()
MUX2_1_ComboData = MUX2_1_ComboData[['x1', 'x0', 's', 'y']]
MUX2_1_ComboData

MUX2_1_ComboData['yRef'] = MUX2_1_ComboData.apply(lambda row: y21EqN(row['x0'], row['x1'], row['s']), axis=1).astype(int)
MUX2_1_ComboData

Test = (MUX2_1_ComboData['y'] == MUX2_1_ComboData['yRef']).all()
print(f'Module `MUX2_1_Combo` works as expected: {Test}')
myHDL_DigLogicFundamentals/myHDL_Combinational/Multiplexers(MUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
Verilog Conversion
DUT.convert()
VerilogTextReader('MUX2_1_Combo');
myHDL_DigLogicFundamentals/myHDL_Combinational/Multiplexers(MUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
\begin{figure}
\centerline{\includegraphics[width=10cm]{MUX2_1_Combo_RTL.png}}
\caption{\label{fig:M21CRTL} MUX2_1_Combo RTL schematic; Xilinx Vivado 2017.4}
\end{figure}

\begin{figure}
\centerline{\includegraphics[width=10cm]{MUX2_1_Combo_SYN.png}}
\caption{\label{fig:M21CSYN} MUX2_1_Combo Synthesized Schematic; Xilinx Vivado 2017.4}
\end{figure}

\begin{figure}
\centerline{\includegraphics[width=10cm]{MUX2_1_Combo_IMP.png}}
\caption{\label{fig:M21CIMP} MUX2_1_Combo Implemented Schematic; Xilinx Vivado 2017.4}
\end{figure}

myHDL to Verilog Testbench
# create BitVectors
x0TVs = intbv(int(''.join(x0TVs.astype(str)), 2))[TestLen:]
x1TVs = intbv(int(''.join(x1TVs.astype(str)), 2))[TestLen:]
sTVs = intbv(int(''.join(sTVs.astype(str)), 2))[TestLen:]
x0TVs, bin(x0TVs), x1TVs, bin(x1TVs), sTVs, bin(sTVs)

@block
def MUX2_1_Combo_TBV():
    """
    myHDL -> Verilog testbench for module `MUX2_1_Combo`
    """
    x0 = Signal(bool(0))
    x1 = Signal(bool(0))
    s = Signal(bool(0))
    y = Signal(bool(0))

    @always_comb
    def print_data():
        print(x0, x1, s, y)

    # Test Signal Bit Vectors
    x0TV = Signal(x0TVs)
    x1TV = Signal(x1TVs)
    sTV = Signal(sTVs)

    DUT = MUX2_1_Combo(x0, x1, s, y)

    @instance
    def stimules():
        for i in range(TestLen):
            x0.next = int(x0TV[i])
            x1.next = int(x1TV[i])
            s.next = int(sTV[i])
            yield delay(1)
        raise StopSimulation()

    return instances()

TB = MUX2_1_Combo_TBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('MUX2_1_Combo_TBV');
myHDL_DigLogicFundamentals/myHDL_Combinational/Multiplexers(MUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
PYNQ-Z1 Deployment
Board Circuit

\begin{figure}
\centerline{\includegraphics[width=5cm]{MUX21PYNQZ1Circ.png}}
\caption{\label{fig:M21Circ} 2:1 MUX PYNQ-Z1 (Non SoC) conceptualized circuit}
\end{figure}

Board Constraint
ConstraintXDCTextReader('MUX2_1');
myHDL_DigLogicFundamentals/myHDL_Combinational/Multiplexers(MUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
Video of Deployment
MUX2_1_Combo myHDL PYNQ-Z1 (YouTube)

4 Channel Input : 1 Channel Output multiplexer in Gate Level Logic

Sympy Expression
x0, x1, x2, x3, s0, s1, y = symbols('x0, x1, x2, x3, s0, s1, y')
y41Eq = Eq(y, (~s0 & ~s1 & x0) | (s0 & ~s1 & x1) | (~s0 & s1 & x2) | (s0 & s1 & x3))
y41Eq

TruthTabelGenrator(y41Eq)[[x3, x2, x1, x0, s1, s0, y]]

y41EqN = lambdify([x0, x1, x2, x3, s0, s1], y41Eq.rhs, dummify=False)

SystmaticVals = np.array(list(itertools.product([0, 1], repeat=6)))
SystmaticVals
y41EqN(*[SystmaticVals[:, i] for i in range(6)]).astype(int)
myHDL_DigLogicFundamentals/myHDL_Combinational/Multiplexers(MUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
myHDL Module
@block
def MUX4_1_Combo(x0, x1, x2, x3, s0, s1, y):
    """
    4:1 Multiplexer written in full combo
    Input:
        x0(bool): input channel 0
        x1(bool): input channel 1
        x2(bool): input channel 2
        x3(bool): input channel 3
        s1(bool): channel selection input bit 1
        s0(bool): channel selection input bit 0
    Output:
        y(bool): output
    """
    @always_comb
    def logic():
        y.next = (not s0 and not s1 and x0) or (s0 and not s1 and x1) or (not s0 and s1 and x2) or (s0 and s1 and x3)

    return instances()
myHDL_DigLogicFundamentals/myHDL_Combinational/Multiplexers(MUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
myHDL Testing
# generate systematic and random test values
TestLen = 5
SystmaticVals = list(itertools.product([0, 1], repeat=6))

s0TVs = np.array([i[0] for i in SystmaticVals]).astype(int)
np.random.seed(15)
s0TVs = np.append(s0TVs, np.random.randint(0, 2, TestLen)).astype(int)

s1TVs = np.array([i[1] for i in SystmaticVals]).astype(int)
# the random generator must have a different seed between each generation
# call in order to produce different values for each call
np.random.seed(16)
s1TVs = np.append(s1TVs, np.random.randint(0, 2, TestLen)).astype(int)

x0TVs = np.array([i[2] for i in SystmaticVals]).astype(int)
# the random generator must have a different seed between each generation
# call in order to produce different values for each call
np.random.seed(17)
x0TVs = np.append(x0TVs, np.random.randint(0, 2, TestLen)).astype(int)

x1TVs = np.array([i[3] for i in SystmaticVals]).astype(int)
# the random generator must have a different seed between each generation
# call in order to produce different values for each call
np.random.seed(18)
x1TVs = np.append(x1TVs, np.random.randint(0, 2, TestLen)).astype(int)

x2TVs = np.array([i[4] for i in SystmaticVals]).astype(int)
# the random generator must have a different seed between each generation
# call in order to produce different values for each call
np.random.seed(19)
x2TVs = np.append(x2TVs, np.random.randint(0, 2, TestLen)).astype(int)

x3TVs = np.array([i[5] for i in SystmaticVals]).astype(int)
# the random generator must have a different seed between each generation
# call in order to produce different values for each call
np.random.seed(20)
x3TVs = np.append(x3TVs, np.random.randint(0, 2, TestLen)).astype(int)

TestLen = len(x0TVs)
SystmaticVals, s0TVs, s1TVs, x3TVs, x2TVs, x1TVs, x0TVs, TestLen

Peeker.clear()
x0 = Signal(bool(0)); Peeker(x0, 'x0')
x1 = Signal(bool(0)); Peeker(x1, 'x1')
x2 = Signal(bool(0)); Peeker(x2, 'x2')
x3 = Signal(bool(0)); Peeker(x3, 'x3')
s0 = Signal(bool(0)); Peeker(s0, 's0')
s1 = Signal(bool(0)); Peeker(s1, 's1')
y = Signal(bool(0)); Peeker(y, 'y')

DUT = MUX4_1_Combo(x0, x1, x2, x3, s0, s1, y)

def MUX4_1_Combo_TB():
    """
    myHDL only testbench for module `MUX4_1_Combo`
    """
    @instance
    def stimules():
        for i in range(TestLen):
            x0.next = int(x0TVs[i])
            x1.next = int(x1TVs[i])
            x2.next = int(x2TVs[i])
            x3.next = int(x3TVs[i])
            s0.next = int(s0TVs[i])
            s1.next = int(s1TVs[i])
            yield delay(1)
        raise StopSimulation()

    return instances()

sim = Simulation(DUT, MUX4_1_Combo_TB(), *Peeker.instances()).run()

Peeker.to_wavedrom()

MUX4_1_ComboData = Peeker.to_dataframe()
MUX4_1_ComboData = MUX4_1_ComboData[['x3', 'x2', 'x1', 'x0', 's1', 's0', 'y']]
MUX4_1_ComboData

MUX4_1_ComboData['yRef'] = MUX4_1_ComboData.apply(lambda row: y41EqN(row['x0'], row['x1'], row['x2'], row['x3'], row['s0'], row['s1']), axis=1).astype(int)
MUX4_1_ComboData

Test = (MUX4_1_ComboData['y'] == MUX4_1_ComboData['yRef']).all()
print(f'Module `MUX4_1_Combo` works as expected: {Test}')
myHDL_DigLogicFundamentals/myHDL_Combinational/Multiplexers(MUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
Verilog Conversion
DUT.convert()
VerilogTextReader('MUX4_1_Combo');
myHDL_DigLogicFundamentals/myHDL_Combinational/Multiplexers(MUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
\begin{figure}
\centerline{\includegraphics[width=10cm]{MUX4_1_Combo_RTL.png}}
\caption{\label{fig:M41CRTL} MUX4_1_Combo RTL schematic; Xilinx Vivado 2017.4}
\end{figure}

\begin{figure}
\centerline{\includegraphics[width=10cm]{MUX4_1_Combo_SYN.png}}
\caption{\label{fig:M41CSYN} MUX4_1_Combo Synthesized Schematic; Xilinx Vivado 2017.4}
\end{figure}

\begin{figure}
\centerline{\includegraphics[width=10cm]{MUX4_1_Combo_IMP.png}}
\caption{\label{fig:M41CIMP} MUX4_1_Combo Implemented Schematic; Xilinx Vivado 2017.4}
\end{figure}

myHDL to Verilog Testbench
# create BitVectors for MUX4_1_Combo_TBV
x0TVs = intbv(int(''.join(x0TVs.astype(str)), 2))[TestLen:]
x1TVs = intbv(int(''.join(x1TVs.astype(str)), 2))[TestLen:]
x2TVs = intbv(int(''.join(x2TVs.astype(str)), 2))[TestLen:]
x3TVs = intbv(int(''.join(x3TVs.astype(str)), 2))[TestLen:]
s0TVs = intbv(int(''.join(s0TVs.astype(str)), 2))[TestLen:]
s1TVs = intbv(int(''.join(s1TVs.astype(str)), 2))[TestLen:]
x0TVs, bin(x0TVs), x1TVs, bin(x1TVs), x2TVs, bin(x2TVs), x3TVs, bin(x3TVs), s0TVs, bin(s0TVs), s1TVs, bin(s1TVs)

@block
def MUX4_1_Combo_TBV():
    """
    myHDL -> Verilog testbench for module `MUX4_1_Combo`
    """
    x0 = Signal(bool(0))
    x1 = Signal(bool(0))
    x2 = Signal(bool(0))
    x3 = Signal(bool(0))
    y = Signal(bool(0))
    s0 = Signal(bool(0))
    s1 = Signal(bool(0))

    @always_comb
    def print_data():
        print(x0, x1, x2, x3, s0, s1, y)

    # Test Signal Bit Vectors
    x0TV = Signal(x0TVs)
    x1TV = Signal(x1TVs)
    x2TV = Signal(x2TVs)
    x3TV = Signal(x3TVs)
    s0TV = Signal(s0TVs)
    s1TV = Signal(s1TVs)

    DUT = MUX4_1_Combo(x0, x1, x2, x3, s0, s1, y)

    @instance
    def stimules():
        for i in range(TestLen):
            x0.next = int(x0TV[i])
            x1.next = int(x1TV[i])
            x2.next = int(x2TV[i])
            x3.next = int(x3TV[i])
            s0.next = int(s0TV[i])
            s1.next = int(s1TV[i])
            yield delay(1)
        raise StopSimulation()

    return instances()

TB = MUX4_1_Combo_TBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('MUX4_1_Combo_TBV');
myHDL_DigLogicFundamentals/myHDL_Combinational/Multiplexers(MUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
PYNQ-Z1 Deployment
Board Circuit

\begin{figure}
\centerline{\includegraphics[width=5cm]{MUX41PYNQZ1Circ.png}}
\caption{\label{fig:M41Circ} 4:1 MUX PYNQ-Z1 (Non SoC) conceptualized circuit}
\end{figure}

Board Constraint
ConstraintXDCTextReader('MUX4_1');
myHDL_DigLogicFundamentals/myHDL_Combinational/Multiplexers(MUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
Video of Deployment
MUX4_1_MS myHDL PYNQ-Z1 (YouTube)

Shannon's Expansion Formula & Stacking of MUXs

Claude Shannon, of the famed Shannon-Nyquist theorem, discovered that any boolean expression $F(x_0, x_1, \ldots, x_n)$ can be decomposed in a manner akin to polynomials of perfect squares via
$$
F(x_0, x_1, \ldots, x_n)=x_0 \cdot F(x_0=1, x_1, \ldots, x_n) +\overline{x_0} \cdot F(x_0=0, x_1, \ldots, x_n)
$$
known as the Sum of Products (SOP) form, since when the expansion is completed for all $x_n$ the result is
$$
F(x_0, x_1, \ldots, x_n)=\sum^{2^n-1}_{i=0} (m_i \cdot F(m_i))
$$
aka the sum of all minterms ($m_i$) belonging to the original boolean expression $F$, factored down to the $i$th of $n$ variables belonging to $F$, and the product (&) of $F$ evaluated with the respective minterm as the argument.

The dual to the SOP form of Shannon's expansion formula is the Product of Sums (POS) form
$$
F(x_0, x_1, \ldots, x_n)=(x_0+ F(x_0=1, x_1, \ldots, x_n)) \cdot (\overline{x_0} + F(x_0=0, x_1, \ldots, x_n))
$$
thus
$$F(x_0, x_1, \ldots, x_n)=\prod^{2^n-1}_{i=0} (M_i + F(M_i)) $$
with $M_i$ being the $i$th maxterm. It is for this reason that Shannon's expansion formula is likened to the fundamental theorem of algebra and is called the "fundamental theorem of Boolean algebra".

So why then is Shannon's decomposition formula discussed in terms of multiplexers? Because the general expression for a $2^n:1$ multiplexer is
$$y_{\text{MUX}}=\sum^{2^n-1}_{i=0}m_i\cdot x_i$$
where $n$ is the required number of control inputs (referred to in this tutorial as $s_i$) and the minterms $m_i$ are taken over those control inputs. This is the same as the SOP form of Shannon's formula for a boolean expression that has been fully decomposed (factored). Further, if the boolean expression has not been fully factored, we can replace the remaining parts of the partially factored expression with multiplexers. This gives way to what is called "Multiplexer Stacking" in order to implement large boolean expressions and/or large multiplexers. (A sympy check of this stacking decomposition is sketched after the module below.)

4 Channel Input : 1 Channel Output multiplexer via MUX Stacking

\begin{figure}
\centerline{\includegraphics{MUX41MS.png}}
\caption{\label{fig:M41MS} 4:1 MUX via MUX stacking 2:1 MUXs}
\end{figure}

myHDL Module
@block
def MUX4_1_MS(x0, x1, x2, x3, s0, s1, y):
    """
    4:1 Multiplexer via 2:1 MUX stacking
    Input:
        x0(bool): input channel 0
        x1(bool): input channel 1
        x2(bool): input channel 2
        x3(bool): input channel 3
        s1(bool): channel selection input bit 1
        s0(bool): channel selection input bit 0
    Output:
        y(bool): output
    """
    # create output wire from the x0x1 input MUX to the y output MUX
    x0x1_yWire = Signal(bool(0))
    # create instance of 2:1 mux and wire in inputs
    # x0, x1, s0 and wire to the output mux
    x0x1MUX = MUX2_1_Combo(x0, x1, s0, x0x1_yWire)

    # create output wire from the x2x3 input MUX to the y output MUX
    x2x3_yWire = Signal(bool(0))
    # create instance of 2:1 mux and wire in inputs
    # x2, x3, s0 and wire to the output mux
    x2x3MUX = MUX2_1_Combo(x2, x3, s0, x2x3_yWire)

    # create the output MUX and wire in the internal wires,
    # s1 and output y
    yMUX = MUX2_1_Combo(x0x1_yWire, x2x3_yWire, s1, y)

    return instances()
myHDL_DigLogicFundamentals/myHDL_Combinational/Multiplexers(MUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
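As a quick algebraic check of the stacking decomposition described above — a minimal sympy sketch; it declares its own local symbols (a0…a3, t0, t1), since the notebook reuses the x/s names for myHDL signals, and these helper names are not from the original:

from sympy import symbols, Xor
from sympy.logic.inference import satisfiable

# local sympy symbols (the notebook namespace reuses x0..s1 for myHDL Signals)
a0, a1, a2, a3, t0, t1 = symbols('a0, a1, a2, a3, t0, t1')

# flat SOP form (as in MUX4_1_Combo) vs. Shannon expansion about t1,
# where each inner term is itself a 2:1 MUX on t0 (the stacked structure)
flat = (~t0 & ~t1 & a0) | (t0 & ~t1 & a1) | (~t0 & t1 & a2) | (t0 & t1 & a3)
stacked = (~t1 & ((~t0 & a0) | (t0 & a1))) | (t1 & ((~t0 & a2) | (t0 & a3)))

# the two forms are equivalent iff their XOR is unsatisfiable -> prints False
print(satisfiable(Xor(flat, stacked)))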
myHDL Testing
# generate systematic and random test values
TestLen = 5
SystmaticVals = list(itertools.product([0, 1], repeat=6))

s0TVs = np.array([i[0] for i in SystmaticVals]).astype(int)
np.random.seed(15)
s0TVs = np.append(s0TVs, np.random.randint(0, 2, TestLen)).astype(int)

s1TVs = np.array([i[1] for i in SystmaticVals]).astype(int)
# the random generator must have a different seed between each generation
# call in order to produce different values for each call
np.random.seed(16)
s1TVs = np.append(s1TVs, np.random.randint(0, 2, TestLen)).astype(int)

x0TVs = np.array([i[2] for i in SystmaticVals]).astype(int)
# the random generator must have a different seed between each generation
# call in order to produce different values for each call
np.random.seed(17)
x0TVs = np.append(x0TVs, np.random.randint(0, 2, TestLen)).astype(int)

x1TVs = np.array([i[3] for i in SystmaticVals]).astype(int)
# the random generator must have a different seed between each generation
# call in order to produce different values for each call
np.random.seed(18)
x1TVs = np.append(x1TVs, np.random.randint(0, 2, TestLen)).astype(int)

x2TVs = np.array([i[4] for i in SystmaticVals]).astype(int)
# the random generator must have a different seed between each generation
# call in order to produce different values for each call
np.random.seed(19)
x2TVs = np.append(x2TVs, np.random.randint(0, 2, TestLen)).astype(int)

x3TVs = np.array([i[5] for i in SystmaticVals]).astype(int)
# the random generator must have a different seed between each generation
# call in order to produce different values for each call
np.random.seed(20)
x3TVs = np.append(x3TVs, np.random.randint(0, 2, TestLen)).astype(int)

TestLen = len(x0TVs)
SystmaticVals, s0TVs, s1TVs, x3TVs, x2TVs, x1TVs, x0TVs, TestLen

Peeker.clear()
x0 = Signal(bool(0)); Peeker(x0, 'x0')
x1 = Signal(bool(0)); Peeker(x1, 'x1')
x2 = Signal(bool(0)); Peeker(x2, 'x2')
x3 = Signal(bool(0)); Peeker(x3, 'x3')
s0 = Signal(bool(0)); Peeker(s0, 's0')
s1 = Signal(bool(0)); Peeker(s1, 's1')
y = Signal(bool(0)); Peeker(y, 'y')

DUT = MUX4_1_MS(x0, x1, x2, x3, s0, s1, y)

def MUX4_1_MS_TB():
    """
    myHDL only testbench for module `MUX4_1_MS`
    """
    @instance
    def stimules():
        for i in range(TestLen):
            x0.next = int(x0TVs[i])
            x1.next = int(x1TVs[i])
            x2.next = int(x2TVs[i])
            x3.next = int(x3TVs[i])
            s0.next = int(s0TVs[i])
            s1.next = int(s1TVs[i])
            yield delay(1)
        raise StopSimulation()

    return instances()

sim = Simulation(DUT, MUX4_1_MS_TB(), *Peeker.instances()).run()

Peeker.to_wavedrom()

MUX4_1_MSData = Peeker.to_dataframe()
MUX4_1_MSData = MUX4_1_MSData[['x3', 'x2', 'x1', 'x0', 's1', 's0', 'y']]
MUX4_1_MSData

Test = MUX4_1_ComboData[['x3', 'x2', 'x1', 'x0', 's1', 's0', 'y']] == MUX4_1_MSData
Test = Test.all().all()
print(f'Module `MUX4_1_MS` works as expected: {Test}')
myHDL_DigLogicFundamentals/myHDL_Combinational/Multiplexers(MUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
Verilog Conversion
DUT.convert()
VerilogTextReader('MUX4_1_MS');
myHDL_DigLogicFundamentals/myHDL_Combinational/Multiplexers(MUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
\begin{figure}
\centerline{\includegraphics[width=10cm]{MUX4_1_MS_RTL.png}}
\caption{\label{fig:M41MSRTL} MUX4_1_MS RTL schematic; Xilinx Vivado 2017.4}
\end{figure}

\begin{figure}
\centerline{\includegraphics[width=10cm]{MUX4_1_MS_SYN.png}}
\caption{\label{fig:M41MSSYN} MUX4_1_MS Synthesized Schematic; Xilinx Vivado 2017.4}
\end{figure}

\begin{figure}
\centerline{\includegraphics[width=10cm]{MUX4_1_MS_IMP.png}}
\caption{\label{fig:M41MSIMP} MUX4_1_MS Implemented Schematic; Xilinx Vivado 2017.4}
\end{figure}

myHDL to Verilog Testbench
# create BitVectors
x0TVs = intbv(int(''.join(x0TVs.astype(str)), 2))[TestLen:]
x1TVs = intbv(int(''.join(x1TVs.astype(str)), 2))[TestLen:]
x2TVs = intbv(int(''.join(x2TVs.astype(str)), 2))[TestLen:]
x3TVs = intbv(int(''.join(x3TVs.astype(str)), 2))[TestLen:]
s0TVs = intbv(int(''.join(s0TVs.astype(str)), 2))[TestLen:]
s1TVs = intbv(int(''.join(s1TVs.astype(str)), 2))[TestLen:]
x0TVs, bin(x0TVs), x1TVs, bin(x1TVs), x2TVs, bin(x2TVs), x3TVs, bin(x3TVs), s0TVs, bin(s0TVs), s1TVs, bin(s1TVs)

@block
def MUX4_1_MS_TBV():
    """
    myHDL -> Verilog testbench for module `MUX4_1_MS`
    """
    x0 = Signal(bool(0))
    x1 = Signal(bool(0))
    x2 = Signal(bool(0))
    x3 = Signal(bool(0))
    y = Signal(bool(0))
    s0 = Signal(bool(0))
    s1 = Signal(bool(0))

    @always_comb
    def print_data():
        print(x0, x1, x2, x3, s0, s1, y)

    # Test Signal Bit Vectors
    x0TV = Signal(x0TVs)
    x1TV = Signal(x1TVs)
    x2TV = Signal(x2TVs)
    x3TV = Signal(x3TVs)
    s0TV = Signal(s0TVs)
    s1TV = Signal(s1TVs)

    DUT = MUX4_1_MS(x0, x1, x2, x3, s0, s1, y)

    @instance
    def stimules():
        for i in range(TestLen):
            x0.next = int(x0TV[i])
            x1.next = int(x1TV[i])
            x2.next = int(x2TV[i])
            x3.next = int(x3TV[i])
            s0.next = int(s0TV[i])
            s1.next = int(s1TV[i])
            yield delay(1)
        raise StopSimulation()

    return instances()

TB = MUX4_1_MS_TBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('MUX4_1_MS_TBV');
myHDL_DigLogicFundamentals/myHDL_Combinational/Multiplexers(MUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
PYNQ-Z1 Deployment
Board Circuit
See Board Circuit for "4 Channel Input : 1 Channel Output multiplexer in Gate Level Logic"

Board Constraint
uses same 'MUX4_1.xdc' as "4 Channel Input : 1 Channel Output multiplexer in Gate Level Logic"

Video of Deployment
MUX4_1_MS myHDL PYNQ-Z1 (YouTube)

Introduction to HDL Behavioral Modeling

HDL behavioral modeling is a "high" level, though not HLS-level, HDL syntax in which the intended hardware element is modeled via its abstract algorithmic behavior. Thus the common computer science (and mathematician's) tool of abstraction is borrowed and incorporated into the HDL syntax.

The abstraction that follows has, like all things, its pros and cons. As a pro, the hardware designer is no longer consumed by the minutiae of implementing boolean algebra for every device and can instead focus on implementing the intended algorithm in hardware. And it is thanks to this blending of software and hardware that the design of digital devices has grown as prolific as it has. However, there is quite a catch to using behavioral modeling. First off, HDL now absolutely requires synthesis tools that can map the behavioral statements to hardware. And even when the behavioral logic is mapped at least to the RTL level, there is no escaping two points:
1. At the end of the day, the RTL will be implemented via gate-level devices in some form or another.
2. The way the synthesis tool has mapped the abstract behavioral code to RTL may not be physically implementable, especially in ASIC implementations.

For these reasons, as hardware developers using behavioral HDL we still have to be able to implement the smallest indivisible units of our HDL at the gate level. We must know what physical limits our target architecture (FPGA, ASIC, etc.) has and keep within those limits when writing our HDL code. And lastly, we cannot grow lazy in writing behavioral HDL, but must always see at least down to the major RTL elements that our behavioral statements embody.

2:1 MUX via Behavioral IF

myHDL Module
@block
def MUX2_1_B(x0, x1, s, y):
    """
    2:1 Multiplexer written via behavioral if
    Input:
        x0(bool): input channel 0
        x1(bool): input channel 1
        s(bool): channel selection input
    Output:
        y(bool): output
    """
    @always_comb
    def logic():
        if s:
            y.next = x1
        else:
            y.next = x0

    return instances()
myHDL_DigLogicFundamentals/myHDL_Combinational/Multiplexers(MUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
myHDL Testing
# generate systematic and random test values
TestLen = 10
SystmaticVals = list(itertools.product([0, 1], repeat=3))

x0TVs = np.array([i[1] for i in SystmaticVals]).astype(int)
np.random.seed(15)
x0TVs = np.append(x0TVs, np.random.randint(0, 2, TestLen)).astype(int)

x1TVs = np.array([i[2] for i in SystmaticVals]).astype(int)
# the random generator must have a different seed between each generation
# call in order to produce different values for each call
np.random.seed(16)
x1TVs = np.append(x1TVs, np.random.randint(0, 2, TestLen)).astype(int)

sTVs = np.array([i[0] for i in SystmaticVals]).astype(int)
# the random generator must have a different seed between each generation
# call in order to produce different values for each call
np.random.seed(17)
sTVs = np.append(sTVs, np.random.randint(0, 2, TestLen)).astype(int)

TestLen = len(x0TVs)
x0TVs, x1TVs, sTVs, TestLen

Peeker.clear()
x0 = Signal(bool(0)); Peeker(x0, 'x0')
x1 = Signal(bool(0)); Peeker(x1, 'x1')
s = Signal(bool(0)); Peeker(s, 's')
y = Signal(bool(0)); Peeker(y, 'y')

DUT = MUX2_1_B(x0, x1, s, y)

def MUX2_1_B_TB():
    """
    myHDL only testbench for module `MUX2_1_B`
    """
    @instance
    def stimules():
        for i in range(TestLen):
            x0.next = int(x0TVs[i])
            x1.next = int(x1TVs[i])
            s.next = int(sTVs[i])
            yield delay(1)
        raise StopSimulation()

    return instances()

sim = Simulation(DUT, MUX2_1_B_TB(), *Peeker.instances()).run()

Peeker.to_wavedrom('x1', 'x0', 's', 'y')

MUX2_1_BData = Peeker.to_dataframe()
MUX2_1_BData = MUX2_1_BData[['x1', 'x0', 's', 'y']]
MUX2_1_BData

Test = MUX2_1_ComboData[['x1', 'x0', 's', 'y']] == MUX2_1_BData
Test = Test.all().all()
print(f'`MUX2_1_B` Behavioral is Equivalent to `MUX2_1_Combo`: {Test}')
myHDL_DigLogicFundamentals/myHDL_Combinational/Multiplexers(MUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
Verilog Conversion
DUT.convert()
VerilogTextReader('MUX2_1_B');
myHDL_DigLogicFundamentals/myHDL_Combinational/Multiplexers(MUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
\begin{figure}
\centerline{\includegraphics[width=10cm]{MUX2_1_B_RTL.png}}
\caption{\label{fig:M21BRTL} MUX2_1_B RTL schematic; Xilinx Vivado 2017.4}
\end{figure}

\begin{figure}
\centerline{\includegraphics[width=10cm]{MUX2_1_B_SYN.png}}
\caption{\label{fig:M21BSYN} MUX2_1_B Synthesized Schematic; Xilinx Vivado 2017.4}
\end{figure}

\begin{figure}
\centerline{\includegraphics[width=10cm]{MUX2_1_B_IMP.png}}
\caption{\label{fig:M21BIMP} MUX2_1_B Implemented Schematic; Xilinx Vivado 2017.4}
\end{figure}

myHDL to Verilog Testbench
# create BitVectors
x0TVs = intbv(int(''.join(x0TVs.astype(str)), 2))[TestLen:]
x1TVs = intbv(int(''.join(x1TVs.astype(str)), 2))[TestLen:]
sTVs = intbv(int(''.join(sTVs.astype(str)), 2))[TestLen:]
x0TVs, bin(x0TVs), x1TVs, bin(x1TVs), sTVs, bin(sTVs)

@block
def MUX2_1_B_TBV():
    """
    myHDL -> Verilog testbench for module `MUX2_1_B`
    """
    x0 = Signal(bool(0))
    x1 = Signal(bool(0))
    s = Signal(bool(0))
    y = Signal(bool(0))

    @always_comb
    def print_data():
        print(x0, x1, s, y)

    # Test Signal Bit Vectors
    x0TV = Signal(x0TVs)
    x1TV = Signal(x1TVs)
    sTV = Signal(sTVs)

    DUT = MUX2_1_B(x0, x1, s, y)

    @instance
    def stimules():
        for i in range(TestLen):
            x0.next = int(x0TV[i])
            x1.next = int(x1TV[i])
            s.next = int(sTV[i])
            yield delay(1)
        raise StopSimulation()

    return instances()

TB = MUX2_1_B_TBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('MUX2_1_B_TBV');
myHDL_DigLogicFundamentals/myHDL_Combinational/Multiplexers(MUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
PYNQ-Z1 Deployment
Board Circuit
See Board Circuit for "2 Channel Input : 1 Channel Output multiplexer in Gate Level Logic"

Board Constraint
uses the same MUX2_1.xdc as "2 Channel Input : 1 Channel Output multiplexer in Gate Level Logic"

Video of Deployment
MUX2_1_B myHDL PYNQ-Z1 (YouTube)

4:1 MUX via Behavioral if-elif-else

myHDL Module
@block
def MUX4_1_B(x0, x1, x2, x3, s0, s1, y):
    """
    4:1 Multiplexer written in if-elif-else behavioral style
    Input:
        x0(bool): input channel 0
        x1(bool): input channel 1
        x2(bool): input channel 2
        x3(bool): input channel 3
        s1(bool): channel selection input bit 1
        s0(bool): channel selection input bit 0
    Output:
        y(bool): output
    """
    @always_comb
    def logic():
        if s0 == 0 and s1 == 0:
            y.next = x0
        elif s0 == 1 and s1 == 0:
            y.next = x1
        elif s0 == 0 and s1 == 1:
            y.next = x2
        else:
            y.next = x3

    return instances()
myHDL_DigLogicFundamentals/myHDL_Combinational/Multiplexers(MUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
myHDL Testing
# generate systematic and random test values
TestLen = 5
SystmaticVals = list(itertools.product([0, 1], repeat=6))

s0TVs = np.array([i[0] for i in SystmaticVals]).astype(int)
np.random.seed(15)
s0TVs = np.append(s0TVs, np.random.randint(0, 2, TestLen)).astype(int)

s1TVs = np.array([i[1] for i in SystmaticVals]).astype(int)
# the random generator must have a different seed between each generation
# call in order to produce different values for each call
np.random.seed(16)
s1TVs = np.append(s1TVs, np.random.randint(0, 2, TestLen)).astype(int)

x0TVs = np.array([i[2] for i in SystmaticVals]).astype(int)
# the random generator must have a different seed between each generation
# call in order to produce different values for each call
np.random.seed(17)
x0TVs = np.append(x0TVs, np.random.randint(0, 2, TestLen)).astype(int)

x1TVs = np.array([i[3] for i in SystmaticVals]).astype(int)
# the random generator must have a different seed between each generation
# call in order to produce different values for each call
np.random.seed(18)
x1TVs = np.append(x1TVs, np.random.randint(0, 2, TestLen)).astype(int)

x2TVs = np.array([i[4] for i in SystmaticVals]).astype(int)
# the random generator must have a different seed between each generation
# call in order to produce different values for each call
np.random.seed(19)
x2TVs = np.append(x2TVs, np.random.randint(0, 2, TestLen)).astype(int)

x3TVs = np.array([i[5] for i in SystmaticVals]).astype(int)
# the random generator must have a different seed between each generation
# call in order to produce different values for each call
np.random.seed(20)
x3TVs = np.append(x3TVs, np.random.randint(0, 2, TestLen)).astype(int)

TestLen = len(x0TVs)
SystmaticVals, s0TVs, s1TVs, x3TVs, x2TVs, x1TVs, x0TVs, TestLen

Peeker.clear()
x0 = Signal(bool(0)); Peeker(x0, 'x0')
x1 = Signal(bool(0)); Peeker(x1, 'x1')
x2 = Signal(bool(0)); Peeker(x2, 'x2')
x3 = Signal(bool(0)); Peeker(x3, 'x3')
s0 = Signal(bool(0)); Peeker(s0, 's0')
s1 = Signal(bool(0)); Peeker(s1, 's1')
y = Signal(bool(0)); Peeker(y, 'y')

DUT = MUX4_1_B(x0, x1, x2, x3, s0, s1, y)

def MUX4_1_B_TB():
    """
    myHDL only testbench for module `MUX4_1_B`
    """
    @instance
    def stimules():
        for i in range(TestLen):
            x0.next = int(x0TVs[i])
            x1.next = int(x1TVs[i])
            x2.next = int(x2TVs[i])
            x3.next = int(x3TVs[i])
            s0.next = int(s0TVs[i])
            s1.next = int(s1TVs[i])
            yield delay(1)
        raise StopSimulation()

    return instances()

sim = Simulation(DUT, MUX4_1_B_TB(), *Peeker.instances()).run()

Peeker.to_wavedrom()

MUX4_1_BData = Peeker.to_dataframe()
MUX4_1_BData = MUX4_1_BData[['x3', 'x2', 'x1', 'x0', 's1', 's0', 'y']]
MUX4_1_BData

Test = MUX4_1_ComboData[['x3', 'x2', 'x1', 'x0', 's1', 's0', 'y']] == MUX4_1_BData
Test = Test.all().all()
print(f'Module `MUX4_1_B` works as expected: {Test}')
myHDL_DigLogicFundamentals/myHDL_Combinational/Multiplexers(MUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
Verilog Conversion
DUT.convert()
VerilogTextReader('MUX4_1_B');
myHDL_DigLogicFundamentals/myHDL_Combinational/Multiplexers(MUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
\begin{figure}
\centerline{\includegraphics[width=10cm]{MUX4_1_B_RTL.png}}
\caption{\label{fig:M41BRTL} MUX4_1_B RTL schematic; Xilinx Vivado 2017.4}
\end{figure}

\begin{figure}
\centerline{\includegraphics[width=10cm]{MUX4_1_B_SYN.png}}
\caption{\label{fig:M41BSYN} MUX4_1_B Synthesized Schematic; Xilinx Vivado 2017.4}
\end{figure}

\begin{figure}
\centerline{\includegraphics[width=10cm]{MUX4_1_B_IMP.png}}
\caption{\label{fig:M41BIMP} MUX4_1_B Implemented Schematic; Xilinx Vivado 2017.4}
\end{figure}

myHDL to Verilog Testbench
# create BitVectors
x0TVs = intbv(int(''.join(x0TVs.astype(str)), 2))[TestLen:]
x1TVs = intbv(int(''.join(x1TVs.astype(str)), 2))[TestLen:]
x2TVs = intbv(int(''.join(x2TVs.astype(str)), 2))[TestLen:]
x3TVs = intbv(int(''.join(x3TVs.astype(str)), 2))[TestLen:]
s0TVs = intbv(int(''.join(s0TVs.astype(str)), 2))[TestLen:]
s1TVs = intbv(int(''.join(s1TVs.astype(str)), 2))[TestLen:]
x0TVs, bin(x0TVs), x1TVs, bin(x1TVs), x2TVs, bin(x2TVs), x3TVs, bin(x3TVs), s0TVs, bin(s0TVs), s1TVs, bin(s1TVs)

@block
def MUX4_1_B_TBV():
    """
    myHDL -> Verilog testbench for module `MUX4_1_B`
    """
    x0 = Signal(bool(0))
    x1 = Signal(bool(0))
    x2 = Signal(bool(0))
    x3 = Signal(bool(0))
    y = Signal(bool(0))
    s0 = Signal(bool(0))
    s1 = Signal(bool(0))

    @always_comb
    def print_data():
        print(x0, x1, x2, x3, s0, s1, y)

    # Test Signal Bit Vectors
    x0TV = Signal(x0TVs)
    x1TV = Signal(x1TVs)
    x2TV = Signal(x2TVs)
    x3TV = Signal(x3TVs)
    s0TV = Signal(s0TVs)
    s1TV = Signal(s1TVs)

    DUT = MUX4_1_B(x0, x1, x2, x3, s0, s1, y)

    @instance
    def stimules():
        for i in range(TestLen):
            x0.next = int(x0TV[i])
            x1.next = int(x1TV[i])
            x2.next = int(x2TV[i])
            x3.next = int(x3TV[i])
            s0.next = int(s0TV[i])
            s1.next = int(s1TV[i])
            yield delay(1)
        raise StopSimulation()

    return instances()

TB = MUX4_1_B_TBV()
TB.convert(hdl="Verilog", initial_values=True)
VerilogTextReader('MUX4_1_B_TBV');
myHDL_DigLogicFundamentals/myHDL_Combinational/Multiplexers(MUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
PYNQ-Z1 Deployment
Board Circuit
See Board Circuit for "4 Channel Input : 1 Channel Output multiplexer in Gate Level Logic"

Board Constraint
uses same 'MUX4_1.xdc' as "4 Channel Input : 1 Channel Output multiplexer in Gate Level Logic"

Video of Deployment
MUX4_1_B myHDL PYNQ-Z1 (YouTube)

Multiplexer 4:1 Behavioral via Bitvectors

myHDL Module
@block
def MUX4_1_BV(X, S, y):
    """
    4:1 Multiplexer written in behavioral "if-elif-else" (case) style with bitvector inputs
    Input:
        X(4-bit bv): input bit vector; min=0, max=15
        S(2-bit bv): selection bit vector; min=0, max=3
    Output:
        y(bool): output
    """
    @always_comb
    def logic():
        if S == 0:
            y.next = X[0]
        elif S == 1:
            y.next = X[1]
        elif S == 2:
            y.next = X[2]
        else:
            y.next = X[3]

    return instances()
myHDL_DigLogicFundamentals/myHDL_Combinational/Multiplexers(MUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
myHDL Testing
# generate test values: one-hot input vectors and all selection words
XTVs = np.array([1, 2, 4, 8])
XTVs = np.append(XTVs, np.random.choice([1, 2, 4, 8], 6)).astype(int)
TestLen = len(XTVs)
np.random.seed(12)
STVs = np.arange(0, 4)
STVs = np.append(STVs, np.random.randint(0, 4, 5))
TestLen, XTVs, STVs

Peeker.clear()
X = Signal(intbv(0)[4:]); Peeker(X, 'X')
S = Signal(intbv(0)[2:]); Peeker(S, 'S')
y = Signal(bool(0)); Peeker(y, 'y')

DUT = MUX4_1_BV(X, S, y)

def MUX4_1_BV_TB():
    @instance
    def stimules():
        for i in STVs:
            for j in XTVs:
                S.next = int(i)
                X.next = int(j)
                yield delay(1)
        raise StopSimulation()

    return instances()

sim = Simulation(DUT, MUX4_1_BV_TB(), *Peeker.instances()).run()

Peeker.to_wavedrom('X', 'S', 'y', start_time=0, stop_time=2*TestLen+2)

MUX4_1_BVData = Peeker.to_dataframe()
MUX4_1_BVData = MUX4_1_BVData[['X', 'S', 'y']]
MUX4_1_BVData

# expand the bitvector columns into individual bit columns for comparison
MUX4_1_BVData['x0'] = None; MUX4_1_BVData['x1'] = None; MUX4_1_BVData['x2'] = None; MUX4_1_BVData['x3'] = None
MUX4_1_BVData[['x3', 'x2', 'x1', 'x0']] = MUX4_1_BVData[['X']].apply(lambda bv: [int(i) for i in bin(bv, 4)], axis=1, result_type='expand')

MUX4_1_BVData['s0'] = None; MUX4_1_BVData['s1'] = None
MUX4_1_BVData[['s1', 's0']] = MUX4_1_BVData[['S']].apply(lambda bv: [int(i) for i in bin(bv, 2)], axis=1, result_type='expand')

MUX4_1_BVData = MUX4_1_BVData[['X', 'x0', 'x1', 'x2', 'x3', 'S', 's0', 's1', 'y']]
MUX4_1_BVData

MUX4_1_BVData['yRef'] = MUX4_1_BVData.apply(lambda row: y41EqN(row['x0'], row['x1'], row['x2'], row['x3'], row['s0'], row['s1']), axis=1).astype(int)
MUX4_1_BVData

Test = (MUX4_1_BVData['y'] == MUX4_1_BVData['yRef']).all()
print(f'Module `MUX4_1_BV` works as expected: {Test}')
myHDL_DigLogicFundamentals/myHDL_Combinational/Multiplexers(MUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
Verilog Conversion
DUT.convert()
VerilogTextReader('MUX4_1_BV');
myHDL_DigLogicFundamentals/myHDL_Combinational/Multiplexers(MUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
\begin{figure}
\centerline{\includegraphics[width=10cm]{MUX4_1_BV_RTL.png}}
\caption{\label{fig:M41BVRTL} MUX4_1_BV RTL schematic; Xilinx Vivado 2017.4}
\end{figure}

\begin{figure}
\centerline{\includegraphics[width=10cm]{MUX4_1_BV_SYN.png}}
\caption{\label{fig:M41BVSYN} MUX4_1_BV Synthesized Schematic; Xilinx Vivado 2017.4}
\end{figure}

\begin{figure}
\centerline{\includegraphics[width=10cm]{MUX4_1_BV_IMP.png}}
\caption{\label{fig:M41BVIMP} MUX4_1_BV Implemented Schematic; Xilinx Vivado 2017.4}
\end{figure}

myHDL to Verilog Testbench
Will do later.

PYNQ-Z1 Deployment
Board Circuit
See Board Circuit for "4 Channel Input : 1 Channel Output multiplexer in Gate Level Logic"

Board Constraint
Notice that in get_ports the pin is set to a single bit of the bitvector via bitvector indexing:
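For illustration only, a representative line of the kind such a constraint file contains — the package pin below is a placeholder, not the actual PYNQ-Z1 assignment; note the bit index inside get_ports:

set_property -dict { PACKAGE_PIN D19 IOSTANDARD LVCMOS33 } [get_ports { X[0] }]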
ConstraintXDCTextReader('MUX4_1_BV');
myHDL_DigLogicFundamentals/myHDL_Combinational/Multiplexers(MUX).ipynb
PyLCARS/PythonUberHDL
bsd-3-clause
Using MPI
To distribute emcee3 across nodes on a cluster, you'll need to use MPI. This can be done with the MPIPool from schwimmbad. To use this, you'll need to install the dependency mpi4py. Otherwise, the code is almost the same as the multiprocessing example above – the main change is the definition of the pool, as sketched at the end of this section. The if not pool.is_master() block is crucial; otherwise the code will hang at the end of execution. To run this code, you would execute it with mpiexec, as also shown at the end of this section.

Using ipyparallel
ipyparallel is a flexible and powerful framework for running distributed computation in Python. It works on a single machine with multiple cores in the same way as it does on a huge compute cluster, and in both cases it is very efficient! To use IPython parallel, make sure that you have a recent version of IPython installed (ipyparallel docs) and start up the cluster by running ipcluster start. Then, run the following:
# Connect to the cluster.
from ipyparallel import Client
rc = Client()
dv = rc.direct_view()

# Run the imports on the cluster too.
with dv.sync_imports():
    import emcee3
    import numpy

# Define the model.
def log_prob(x):
    return -0.5 * numpy.sum(x ** 2)

# Distribute the model to the nodes of the cluster.
dv.push(dict(log_prob=log_prob), block=True)

# Set up the ensemble with the IPython "DirectView" as the pool.
ndim, nwalkers = 10, 100
ensemble = emcee3.Ensemble(log_prob, numpy.random.randn(nwalkers, ndim), pool=dv)

# Run the sampler in the same way as usual.
sampler = emcee3.Sampler()
ensemble = sampler.run(ensemble, 1000)
docs/user/parallel.ipynb
dfm/emcee3
mit
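For reference, a minimal sketch of the MPI variant described above, following the standard schwimmbad MPIPool pattern; the script name mpi_script.py, the process count, and the sampler setup (mirrored from the ipyparallel example) are assumptions, not part of the original document:

# mpi_script.py -- a sketch of the MPI pool definition referenced above
import sys
import numpy as np
import emcee3
from schwimmbad import MPIPool

def log_prob(x):
    return -0.5 * np.sum(x ** 2)

with MPIPool() as pool:
    # Crucial: worker processes wait here for tasks from the master and
    # exit when the run finishes; without this the script hangs at the end.
    if not pool.is_master():
        pool.wait()
        sys.exit(0)

    ndim, nwalkers = 10, 100
    ensemble = emcee3.Ensemble(log_prob, np.random.randn(nwalkers, ndim), pool=pool)
    sampler = emcee3.Sampler()
    ensemble = sampler.run(ensemble, 1000)

You would then launch it with something like: mpiexec -n 4 python mpi_script.py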
1. Conceptual Questions (8 Points)
Answer these in Markdown

1. [1 point] In problem 4 from HW 3 we discussed probabilities of having HIV and results of a test being positive. What was the sample space for this problem?
2. [4 points] One of the notations in the answer key is a random variable $H$ which indicates if a person has HIV. Make a table showing this function's inputs and outputs for the sample space. Making Markdown Tables
3. [1 point] A probability density function is used for what types of probability distributions?
4. [2 points] What is the probability of $t > 4$ in an exponential distribution with $\lambda = 1$? Leave your answer in terms of an exponential.

1.1
The first element is HIV status and the second is the test result:
$$ \{ (0,0), (1,0), (0,1), (1,1) \} $$

1.2

|$x$|$H$|
|---|---:|
|(0,0)| 0|
|(0,1)| 0|
|(1,0)| 1|
|(1,1)| 1|

1.3
Continuous

1.4
$$ \int_4^{\infty} e^{-t} \, dt = \left. -e^{-t}\right]_4^{\infty} = 0 - \left(-e^{-4}\right) = e^{-4} $$
(A quick numerical check of this value is sketched below, after the next question block.)

2. The Nile (10 Points)
Answer in Python

1. [4 points] Load the Nile dataset and convert to a numpy array. It contains measurements of the annual flow of the river Nile at Aswan. Make a scatter plot of the year vs flow rate. If you get an error when loading pydataset that says No Module named 'pydataset', then execute this code in a new cell once: !pip install pydataset
2. [2 points] Report the correlation coefficient between year and flow rate.
3. [4 points] Create a histogram of the flow rates and show the median with a vertical line. Label your axes and make a legend indicating what the vertical line is.
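Numerical check of answer 1.4 above (a sketch; assumes scipy is available alongside the course dependencies):

import numpy as np
from scipy import stats

# survival function of Exponential(lambda=1) at t=4 equals exp(-4)
print(stats.expon(scale=1.0).sf(4), np.exp(-4))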
#2.1
nile = pydataset.data('Nile').as_matrix()
plt.plot(nile[:,0], nile[:,1], '-o')
plt.xlabel('Year')
plt.ylabel('Nile Flow Rate')
plt.show()

#2.2
print('{:.3}'.format(np.corrcoef(nile[:,0], nile[:,1])[0,1]))

#2.3 ok to use distplot or plt.hist
sns.distplot(nile[:,1])
# the question asks for the median, so mark the median (not the mean)
plt.axvline(np.median(nile[:,1]), color='C2', label='Median')
plt.legend()
plt.xlabel('Flow Rate')
plt.show()
unit_7/hw_2018/Homework_7_Key.ipynb
whitead/numerical_stats
gpl-3.0
3. Insect Spray (10 Points)
Answer in Python

1. [2 points] Load the 'InsectSprays' dataset, convert to a numpy array and print the number of rows and columns. Recall that numpy arrays can only hold one type of data (e.g., string, float, int). What is the data type of the loaded dataset?
2. [2 points] Using np.unique, print out the list of insect sprays used. This data is a count of insects on a crop field with various insect sprays.
3. [4 points] Create a violin plot of the data. Label your axes.
4. [2 points] Which insect spray worked best? What is the mean number of insects for the best insect spray?
#3.1
insect = pydataset.data('InsectSprays').as_matrix()
print(insect.shape, 'string or object is acceptable')

#3.2
print(np.unique(insect[:,1]))

#3.3
labels = np.unique(insect[:,1])
ldata = []
# slice out each set of rows that matches a label
# and add it to the list
for l in labels:
    ldata.append(insect[insect[:,1] == l, 0].astype(float))
sns.violinplot(data=ldata)
plt.xticks(range(len(labels)), labels)
plt.xlabel('Insecticide Type')
plt.ylabel('Insect Count')
plt.show()

#3.4
print('C is best and its mean is {:.2}'.format(np.mean(ldata[2])))
unit_7/hw_2018/Homework_7_Key.ipynb
whitead/numerical_stats
gpl-3.0
4. NY Air Quality (6 Points)
Load the 'airquality' dataset and convert it to a numpy array. Make a scatter plot of wind (column 2, mph) vs ozone concentration (column 0, ppb). Using the plt.text command, display the correlation coefficient in the plot. This data has nan values, which means "not a number". You can select non-nans by using x[~numpy.isnan(x)]. You'll need to remove these to calculate the correlation coefficient.
nyair = pydataset.data('airquality').as_matrix()
plt.plot(nyair[:,2], nyair[:,0], 'o')
plt.xlabel('Wind [mph]')
plt.ylabel('Ozone [ppb]')

# drop rows where ozone is nan before computing the correlation
nans = np.isnan(nyair[:,0])
r = np.corrcoef(nyair[~nans,2], nyair[~nans,0])[0,1]
plt.text(10, 130, 'Correlation Coefficient = {:.2}'.format(r))
plt.show()
unit_7/hw_2018/Homework_7_Key.ipynb
whitead/numerical_stats
gpl-3.0
Read the RIRE data and generate a larger point set as a reference
fixed_image = sitk.ReadImage(fdata("training_001_ct.mha"), sitk.sitkFloat32)
moving_image = sitk.ReadImage(fdata("training_001_mr_T1.mha"), sitk.sitkFloat32)
fixed_fiducial_points, moving_fiducial_points = ru.load_RIRE_ground_truth(fdata("ct_T1.standard"))

# Estimate the reference_transform defined by the RIRE fiducials and check that the FRE makes sense (low)
R, t = ru.absolute_orientation_m(fixed_fiducial_points, moving_fiducial_points)
reference_transform = sitk.Euler3DTransform()
reference_transform.SetMatrix(R.flatten())
reference_transform.SetTranslation(t)
reference_errors_mean, reference_errors_std, _, reference_errors_max, _ = ru.registration_errors(reference_transform, fixed_fiducial_points, moving_fiducial_points)
print('Reference data errors (FRE) in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(reference_errors_mean, reference_errors_std, reference_errors_max))

# Generate a reference dataset from the reference transformation
# (corresponding points in the fixed and moving images).
fixed_points = ru.generate_random_pointset(image=fixed_image, num_points=100)
moving_points = [reference_transform.TransformPoint(p) for p in fixed_points]

# Compute the TRE prior to registration.
pre_errors_mean, pre_errors_std, pre_errors_min, pre_errors_max, _ = ru.registration_errors(sitk.Euler3DTransform(), fixed_points, moving_points, display_errors=True)
print('Before registration, errors (TRE) in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(pre_errors_mean, pre_errors_std, pre_errors_max))
62_Registration_Tuning.ipynb
thewtex/SimpleITK-Notebooks
apache-2.0
Initial Alignment
We use the CenteredTransformInitializer. Should we use the GEOMETRY based version or the MOMENTS based one? (A comparison sketch for the MOMENTS variant follows the cell below.)
initial_transform = sitk.CenteredTransformInitializer(sitk.Cast(fixed_image, moving_image.GetPixelIDValue()),
                                                      moving_image,
                                                      sitk.Euler3DTransform(),
                                                      sitk.CenteredTransformInitializerFilter.GEOMETRY)

initial_errors_mean, initial_errors_std, initial_errors_min, initial_errors_max, _ = ru.registration_errors(initial_transform, fixed_points, moving_points, min_err=pre_errors_min, max_err=pre_errors_max, display_errors=True)
print('After initialization, errors (TRE) in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(initial_errors_mean, initial_errors_std, initial_errors_max))
62_Registration_Tuning.ipynb
thewtex/SimpleITK-Notebooks
apache-2.0
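To answer the GEOMETRY-vs-MOMENTS question above empirically, one can repeat the initialization with the MOMENTS variant and compare the TREs — a sketch reusing the helpers already imported; the variable names here are new:

moments_transform = sitk.CenteredTransformInitializer(sitk.Cast(fixed_image, moving_image.GetPixelIDValue()),
                                                      moving_image,
                                                      sitk.Euler3DTransform(),
                                                      sitk.CenteredTransformInitializerFilter.MOMENTS)

errs_mean, errs_std, _, errs_max, _ = ru.registration_errors(moments_transform, fixed_points, moving_points, display_errors=True)
print('MOMENTS initialization, TRE in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(errs_mean, errs_std, errs_max))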
Registration
Possible choices for a simple rigid multi-modality registration framework (<b>300</b> component combinations: 2 metrics × 3 sampling strategies × 10 interpolators × 5 optimizers, in addition to parameter settings for each of the components):
<ul>
<li>Similarity metric, 2 options (Mattes MI, JointHistogram MI):
  <ul>
    <li>Number of histogram bins.</li>
    <li>Sampling strategy, 3 options (NONE, REGULAR, RANDOM)</li>
    <li>Sampling percentage.</li>
  </ul>
</li>
<li>Interpolator, 10 options (sitkNearestNeighbor, sitkLinear, sitkGaussian, sitkBSpline,...)</li>
<li>Optimizer, 5 options (GradientDescent, GradientDescentLineSearch, RegularStepGradientDescent...):
  <ul>
    <li>Number of iterations.</li>
    <li>Learning rate (step size along the parameter space traversal direction).</li>
  </ul>
</li>
</ul>

In this example we will plot the similarity metric's value and, more importantly, the TREs for our reference data. A good choice for the former should be reflected by the latter. That is, the TREs should go down as the similarity measure value goes down (not necessarily at the same rate).

Finally, we are also interested in timing our registration. IPython allows us to do this with minimal effort using the <a href="http://ipython.org/ipython-doc/stable/interactive/magics.html?highlight=timeit#magic-timeit">timeit</a> cell magic (IPython has a set of predefined functions that use a command line syntax, and are referred to as magic functions).
#%%timeit -r1 -n1
# to time this cell uncomment the line above
# the arguments to the timeit magic specify that this cell should only be run once. running it multiple
# times to get performance statistics is also possible, but takes time. if you want to analyze the accuracy
# results from multiple runs you will have to modify the code to save them instead of just printing them out.

registration_method = sitk.ImageRegistrationMethod()
registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
registration_method.SetMetricSamplingPercentage(0.01)
registration_method.SetInterpolator(sitk.sitkNearestNeighbor) #2. Replace with sitkLinear
registration_method.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100) #1. Increase to 1000
registration_method.SetOptimizerScalesFromPhysicalShift()

# Don't optimize in-place, we would like to run this cell multiple times
registration_method.SetInitialTransform(initial_transform, inPlace=False)

# Add callbacks which will display the similarity measure value and the reference data during the registration process
registration_method.AddCommand(sitk.sitkStartEvent, rc.metric_and_reference_start_plot)
registration_method.AddCommand(sitk.sitkEndEvent, rc.metric_and_reference_end_plot)
registration_method.AddCommand(sitk.sitkIterationEvent, lambda: rc.metric_and_reference_plot_values(registration_method, fixed_points, moving_points))

final_transform_single_scale = registration_method.Execute(sitk.Cast(fixed_image, sitk.sitkFloat32),
                                                           sitk.Cast(moving_image, sitk.sitkFloat32))

print('Final metric value: {0}'.format(registration_method.GetMetricValue()))
print('Optimizer\'s stopping condition, {0}'.format(registration_method.GetOptimizerStopConditionDescription()))

final_errors_mean, final_errors_std, _, final_errors_max, _ = ru.registration_errors(final_transform_single_scale, fixed_points, moving_points, min_err=initial_errors_min, max_err=initial_errors_max, display_errors=True)
print('After registration, errors in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(final_errors_mean, final_errors_std, final_errors_max))
62_Registration_Tuning.ipynb
thewtex/SimpleITK-Notebooks
apache-2.0
In some cases visual comparison of the registration errors using the same scale is not informative, as seen above [all points are grey/black]. We therefore set the color scale to the min-max error range found in the current data and not the range from the previous stage.
final_errors_mean, final_errors_std, _, final_errors_max, _ = ru.registration_errors(final_transform_single_scale, fixed_points, moving_points, display_errors=True)
62_Registration_Tuning.ipynb
thewtex/SimpleITK-Notebooks
apache-2.0
Now using the built-in multi-resolution framework
Perform registration using the same settings as above, but take advantage of the multi-resolution framework, which provides a significant speedup with minimal effort (3 lines of code). It should be noted that when using this framework the similarity metric value will not necessarily decrease between resolutions; we are only ensured that it decreases per resolution. This is not an issue, as we are actually observing the values of a different function at each resolution. The example below shows that registration is improving even though the similarity value increases when changing resolution levels.
%%timeit -r1 -n1
# the arguments to the timeit magic specify that this cell should only be run once. running it multiple
# times to get performance statistics is also possible, but takes time. if you want to analyze the accuracy
# results from multiple runs you will have to modify the code to save them instead of just printing them out.

registration_method = sitk.ImageRegistrationMethod()
registration_method.SetMetricAsMattesMutualInformation(numberOfHistogramBins=50)
registration_method.SetMetricSamplingStrategy(registration_method.RANDOM)
registration_method.SetMetricSamplingPercentage(0.1)
registration_method.SetInterpolator(sitk.sitkLinear)
registration_method.SetOptimizerAsGradientDescent(learningRate=1.0, numberOfIterations=100)
registration_method.SetOptimizerScalesFromPhysicalShift()

# Don't optimize in-place, we would like to run this cell multiple times
registration_method.SetInitialTransform(initial_transform, inPlace=False)

# Add callbacks which will display the similarity measure value and the reference data during the registration process
registration_method.AddCommand(sitk.sitkStartEvent, rc.metric_and_reference_start_plot)
registration_method.AddCommand(sitk.sitkEndEvent, rc.metric_and_reference_end_plot)
registration_method.AddCommand(sitk.sitkIterationEvent, lambda: rc.metric_and_reference_plot_values(registration_method, fixed_points, moving_points))

# The three lines that enable the multi-resolution framework:
registration_method.SetShrinkFactorsPerLevel(shrinkFactors=[4,2,1])
registration_method.SetSmoothingSigmasPerLevel(smoothingSigmas=[2,1,0])
registration_method.SmoothingSigmasAreSpecifiedInPhysicalUnitsOn()

final_transform = registration_method.Execute(sitk.Cast(fixed_image, sitk.sitkFloat32),
                                              sitk.Cast(moving_image, sitk.sitkFloat32))

print('Final metric value: {0}'.format(registration_method.GetMetricValue()))
print('Optimizer\'s stopping condition, {0}'.format(registration_method.GetOptimizerStopConditionDescription()))

final_errors_mean, final_errors_std, _, final_errors_max, _ = ru.registration_errors(final_transform, fixed_points, moving_points, True)
print('After registration, errors in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(final_errors_mean, final_errors_std, final_errors_max))
62_Registration_Tuning.ipynb
thewtex/SimpleITK-Notebooks
apache-2.0
Sufficient accuracy <u>inside</u> the ROI
Up to this point our accuracy evaluation has ignored the content of the image and is likely overly conservative. We have been looking at the registration errors inside the whole volume, but not necessarily in the smaller ROI. To see the difference you will have to <b>comment out the timeit magic in the code above</b>, run it again, and then run the following cell.
# Threshold the original fixed, CT, image at 0HU (water), resulting in a binary labeled [0,1] image.
roi = fixed_image > 0

# Our ROI consists of all voxels with a value of 1, now get the bounding box surrounding the head.
label_shape_analysis = sitk.LabelShapeStatisticsImageFilter()
label_shape_analysis.SetBackgroundValue(0)
label_shape_analysis.Execute(roi)
bounding_box = label_shape_analysis.GetBoundingBox(1)

# Bounding box in physical space.
sub_image_min = fixed_image.TransformIndexToPhysicalPoint((bounding_box[0], bounding_box[1], bounding_box[2]))
sub_image_max = fixed_image.TransformIndexToPhysicalPoint((bounding_box[0]+bounding_box[3]-1,
                                                           bounding_box[1]+bounding_box[4]-1,
                                                           bounding_box[2]+bounding_box[5]-1))

# Only look at the points inside our bounding box.
sub_fixed_points = []
sub_moving_points = []
for fixed_pnt, moving_pnt in zip(fixed_points, moving_points):
    if sub_image_min[0] <= fixed_pnt[0] <= sub_image_max[0] and \
       sub_image_min[1] <= fixed_pnt[1] <= sub_image_max[1] and \
       sub_image_min[2] <= fixed_pnt[2] <= sub_image_max[2]:
        sub_fixed_points.append(fixed_pnt)
        sub_moving_points.append(moving_pnt)

final_errors_mean, final_errors_std, _, final_errors_max, _ = ru.registration_errors(final_transform, sub_fixed_points, sub_moving_points, True)
print('After registration, errors in millimeters, mean(std): {:.2f}({:.2f}), max: {:.2f}'.format(final_errors_mean, final_errors_std, final_errors_max))
62_Registration_Tuning.ipynb
thewtex/SimpleITK-Notebooks
apache-2.0
Line Chart Selectors
Fast Interval Selector
## First we define a Figure
dt_x_fast = DateScale()
lin_y = LinearScale()

x_ax = Axis(label="Index", scale=dt_x_fast)
x_ay = Axis(label=(symbol + " Price"), scale=lin_y, orientation="vertical")
lc = Lines(
    x=dates_actual, y=prices, scales={"x": dt_x_fast, "y": lin_y}, colors=["orange"]
)
lc_2 = Lines(
    x=dates_actual[50:],
    y=prices[50:] + 2,
    scales={"x": dt_x_fast, "y": lin_y},
    colors=["blue"],
)

## Next we define the type of selector we would like
intsel_fast = FastIntervalSelector(scale=dt_x_fast, marks=[lc, lc_2])

## Now, we define a function that will be called when the FastIntervalSelector is interacted with
def fast_interval_change_callback(change):
    db_fast.value = "The selected period is " + str(change.new)

## Now we connect the selectors to that function
intsel_fast.observe(fast_interval_change_callback, names=["selected"])

## We use the HTML widget to see the value of what we are selecting and modify it when an interaction is performed
## on the selector
db_fast = HTML()
db_fast.value = "The selected period is " + str(intsel_fast.selected)

fig_fast_intsel = Figure(
    marks=[lc, lc_2],
    axes=[x_ax, x_ay],
    title="Fast Interval Selector Example",
    interaction=intsel_fast,
)  # This is where we assign the interaction to this particular Figure

VBox([db_fast, fig_fast_intsel])
examples/Interactions/Interaction Layer.ipynb
bloomberg/bqplot
apache-2.0
Index Selector
db_index = HTML(value="[]")

## Now we try a selector made to select all the y-values associated with a single x-value
index_sel = IndexSelector(scale=dt_x_fast, marks=[lc, lc_2])

## Now, we define a function that will be called when the selectors are interacted with
def index_change_callback(change):
    db_index.value = "The selected date is " + str(change.new)

index_sel.observe(index_change_callback, names=["selected"])

fig_index_sel = Figure(
    marks=[lc, lc_2],
    axes=[x_ax, x_ay],
    title="Index Selector Example",
    interaction=index_sel,
)
VBox([db_index, fig_index_sel])
examples/Interactions/Interaction Layer.ipynb
bloomberg/bqplot
apache-2.0