The book says fin is an acceptable name, but I opt for a more descriptive name. There are a number of methods for reading and writing files, including:
- read( size ): reads size bytes of data. If size is omitted or negative, the entire file is read and returned. Returns an empty string if the end of the file (EOF) is reached.
- readline(): reads a single line from the file.
- write( a_string ): writes a string to the file.
- close(): closes the file object and frees up any system resources.
You can also use a for loop to read each line of the file, as in the next cell; a short sketch of the other methods follows it.
for line in input_file:
    word = line.strip()
    print( word )
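As a quick, hedged illustration of the read/readline/write/close methods listed above (the output filename here is made up for this example; the input file is the one used later in this notebook):
```
# Minimal sketch of the file methods described above.
# 'output-words.txt' is a hypothetical filename used only for this example.
input_file = open( 'data/short-words.txt' )
first_line = input_file.readline()   # read a single line
rest_of_file = input_file.read()     # read everything that is left
input_file.close()                   # free the file handle

output_file = open( 'output-words.txt', 'w' )
output_file.write( first_line )      # write the first line back out
output_file.close()
```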
CSNE2444-Intro-to-CS-I/jupyter-notebooks/ch09-word-play.ipynb
snucsne/CSNE-Course-Source-Code
mit
The strip method removes whitespace at the beginning and end of a string. Search Most of the exercises in this chapter have something in common: they all involve searching a string for specific characters.
def has_no_e( word ):
    result = True
    for letter in word:
        if( 'e' == letter ):
            result = False
    return result

input_file = open( 'data/short-words.txt' )
for line in input_file:
    word = line.strip()
    if( has_no_e( word ) ):
        print( 'No `e`: ', word )
CSNE2444-Intro-to-CS-I/jupyter-notebooks/ch09-word-play.ipynb
snucsne/CSNE-Course-Source-Code
mit
The for loop traverses each letter in the word looking for an e. In fact, if you paid close attention, you will see that the uses_all and uses_only functions in the book are the same problem (a sketch illustrating this follows the traversal examples below). In computer science, we frequently encounter problems that are essentially the same as ones we have already solved, but are just worded differently. When you recognize one (called problem recognition), you can apply a previously developed solution. How much work it takes to apply it depends on how general your solution is. This is an essential skill for problem solving in general, not just programming. Looping with indices The previous code didn't need the indices of characters, so the simple for ... in loop was used. There are a number of ways to traverse a string while maintaining a current index: use a for loop over the range of the length of the string, use recursion, or use a while loop and maintain the current index yourself. I recommend the first option, as it lets the for loop maintain the index. Recursion is more complex than necessary for this problem. A while loop can be used, but isn't as well suited since we know exactly how many times we need to run through the loop. Examples of all three options are below.
fruit = 'banana'

# For loop
for i in range( len( fruit ) ):
    print( 'For: [',i,']=[',fruit[i],']' )

# Recursive function
def recurse_through_string( word, i ):
    print( 'Recursive: [',i,']=[',fruit[i],']' )
    if( (i + 1) < len( word ) ):
        recurse_through_string( word, i + 1 )

recurse_through_string( fruit, 0 )

# While loop
i = 0
while( i < len( fruit ) ):
    print( 'While: [',i,']=[',fruit[i],']' )
    i = i + 1
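To make the uses_all/uses_only point above concrete, here is a hedged sketch (not the book's code verbatim): the two functions are the same search with the roles of the two strings swapped.
```
# Sketch: uses_only and uses_all are the same check with arguments swapped.
def uses_only( word, available ):
    # True if every letter in word appears in available
    for letter in word:
        if( letter not in available ):
            return False
    return True

def uses_all( word, required ):
    # True if word contains every letter in required: same check, swapped
    return uses_only( required, word )

print( uses_only( 'banana', 'abn' ) )   # True
print( uses_all( 'banana', 'abn' ) )    # True
print( uses_all( 'banana', 'abcn' ) )   # False
```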
CSNE2444-Intro-to-CS-I/jupyter-notebooks/ch09-word-play.ipynb
snucsne/CSNE-Course-Source-Code
mit
test_simulate_LLN
u = coo_payoffs beta = 1.0 P = np.zeros((2,2))
test_logitdyn.ipynb
oyamad/game_theory_models
bsd-3-clause
I made a probabilistic choice matrix $P$ in a redundant way just in case.
P[0,0] = np.exp(u[0,0] * beta) / (np.exp(u[0,0] * beta) + np.exp(u[1,0] * beta))
P[0,0]
P[1,0] = np.exp(u[1,0] * beta) / (np.exp(u[0,0] * beta) + np.exp(u[1,0] * beta))
P[1,0]
P[0,1] = np.exp(u[0,1] * beta) / (np.exp(u[0,1] * beta) + np.exp(u[1,1] * beta))
P[0,1]
P[1,1] = np.exp(u[1,1] * beta) / (np.exp(u[0,1] * beta) + np.exp(u[1,1] * beta))
P[1,1]
print P
test_logitdyn.ipynb
oyamad/game_theory_models
bsd-3-clause
$P[i,j]$ represents the probability that a player chooses an action $i$ provided that his opponent takes an action $j$.
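Since each column of $P$ is a conditional choice distribution, a quick sanity check (a small addition, not part of the original notebook) is that every column sums to one:
```
# Each column of P is a probability distribution over the two actions,
# so both column sums should be (numerically) 1.
np.allclose(P.sum(axis=0), 1.0)
```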
Q = np.zeros((4,4))
Q[0, 0] = P[0, 0]
Q[0, 1] = 0.5 * P[1, 0]
Q[0, 2] = 0.5 * P[1, 0]
Q[0, 3] = 0
Q[1, 0] = 0.5 * P[0, 0]
Q[1, 1] = 0.5 * P[0, 1] + 0.5 * P[1, 0]
Q[1, 2] = 0
Q[1, 3] = 0.5 * P[1, 1]
Q[2, 0] = 0.5 * P[0, 0]
Q[2, 1] = 0
Q[2, 2] = 0.5 * P[1, 0] + 0.5 * P[0, 1]
Q[2, 3] = 0.5 * P[1, 1]
Q[3, 0] = 0
Q[3, 1] = 0.5 * P[0, 1]
Q[3, 2] = 0.5 * P[0, 1]
Q[3, 3] = P[1, 1]
print Q
test_logitdyn.ipynb
oyamad/game_theory_models
bsd-3-clause
$Q$ is the transition probability matrix. The first row and column represent the state $(0,0)$, which means that player 1 takes action 0 and player 2 also takes action 0. The second ones represent $(0,1)$, the third ones represent $(1,0)$, and the last ones represent $(1,1)$.
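Because $Q$ is a transition probability matrix, each of its rows must also sum to one; the check below is again a small added sanity test, not part of the original:
```
# Every row of a stochastic matrix sums to 1.
np.allclose(Q.sum(axis=1), 1.0)
```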
from quantecon.mc_tools import MarkovChain mc = MarkovChain(Q) mc.stationary_distributions[0]
test_logitdyn.ipynb
oyamad/game_theory_models
bsd-3-clause
I take 0.61029569 as the criterion for the test.
ld = LogitDynamics(g_coo)

# New one (using replicate)
n = 1000
seq = ld.replicate(T=100, num_reps=n)
count = 0
for i in range(n):
    if all(seq[i, :] == [1, 1]):
        count += 1
ratio = count / n
ratio

# Old one
counts = np.zeros(1000)
for i in range(1000):
    seq = ld.simulate(ts_length=100)
    count = 0
    for j in range(100):
        if all(seq[j, :] == [1, 1]):
            count += 1
    counts[i] = count
m = counts.mean() / 100
m
test_logitdyn.ipynb
oyamad/game_theory_models
bsd-3-clause
flexx.app
from flexx import app, react

app.init_notebook()

class Greeter(app.Model):

    @react.input
    def name(s):
        return str(s)

    class JS:
        @react.connect('name')
        def _greet(name):
            alert('Hello %s!' % name)

greeter = Greeter()
greeter.name('John')
EuroScipy 2015 demo.ipynb
zoofIO/flexx-notebooks
bsd-3-clause
The Spacetime of Rx In the examples above all the events happen at the same moment in time. The events are only separated by ordering. This confuses many newcomers to Rx since the result of the merge operation above may have several valid results such as: a1b2c3d4e5 1a2b3c4d5e ab12cd34e5 abcde12345 The only guarantee you have is that 1 will be before 2 in xs, but 1 in xs can be before or after a in ys. It's up to the sort stability of the scheduler to decide which event should go first. For real-time data streams this will not be a problem since the events will be separated by actual time. To make sure you get the results you "expect", it's always a good idea to add some time between the events when playing with Rx. Marbles and Marble Diagrams As we saw in the previous section it's nice to add some time when playing with Rx and RxPY. A great way to explore RxPY is to use the marbles test module that enables us to play with marble diagrams. The marbles module adds two new extension methods to Observable. The methods are from_marbles() and to_marbles(). Examples: 1. res = rx.Observable.from_marbles("1-2-3-|") 2. res = rx.Observable.from_marbles("1-2-3-x", rx.Scheduler.timeout) The marble string consists of some special characters: - = Timespan of 100 ms x = on_error() | = on_completed() All other characters are treated as an on_next() event at the moment they are found in the string. If you need to represent multi-character values, then you can group them with brackets such as "1-(42)-3". Let's try it out:
from rx import Observable  # import implied by the text above; added so the cell runs on its own
from rx.testing import marbles

xs = Observable.from_marbles("a-b-c-|")
xs.to_blocking().to_marbles()
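The bracket grouping for multi-character values mentioned above can be tried the same way (this extra cell is a small illustrative addition):
```
# Multi-character values are grouped with brackets, as described above.
ys = Observable.from_marbles("1-(42)-3-|")
ys.to_blocking().to_marbles()
```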
python/libs/rxpy/GettingStarted.ipynb
satishgoda/learning
mit
Laplace approximation from scratch in JAX As mentioned in book2 section 7.4.3, using the Laplace approximation, a posterior can be approximated as a normal distribution with mean $\hat{\theta}$ and covariance $H^{-1}$: \begin{align} H &= \nabla^2_{\theta}\left[-\log p(\theta, \mathcal{D})\right]\Big|_{\theta = \hat{\theta}} \\ p(\theta|\mathcal{D}) &= \frac{1}{Z}\, p(\theta, \mathcal{D}) \approx \mathcal{N}(\theta \,|\, \hat{\theta}, H^{-1}) \end{align} where $H$ is the Hessian of the negative log joint at the mode and $\hat{\theta}$ is the mode (the MAP estimate). Find $\hat{\theta}$ Now we find $\hat{\theta}$ ($\theta_{\text{map}}$) by minimizing the negative log prior-likelihood (the unnormalized negative log posterior).
def neg_log_prior_likelihood_fn(params, dataset):
    theta = params["theta"]
    likelihood_log_prob = likelihood_dist(theta).log_prob(dataset).sum()  # log probability of likelihood
    prior_log_prob = prior_dist().log_prob(theta)  # log probability of prior
    return -(likelihood_log_prob + prior_log_prob)  # negative log prior-likelihood

loss_and_grad_fn = jax.value_and_grad(neg_log_prior_likelihood_fn)
params = {"theta": 0.5}
neg_joint_log_prob, grads = loss_and_grad_fn(params, dataset)

optimizer = optax.adam(0.01)
opt_state = optimizer.init(params)

@jax.jit
def train_step(carry, data_output):
    params = carry["params"]
    neg_joint_log_prob, grads = loss_and_grad_fn(params, dataset)
    opt_state = carry["opt_state"]
    updates, opt_state = optimizer.update(grads, opt_state)
    params = optax.apply_updates(params, updates)
    carry = {"params": params, "opt_state": opt_state}
    data_output = {"params": params, "loss": neg_joint_log_prob}
    return carry, data_output

carry = {"params": params, "opt_state": opt_state}
data_output = {"params": params, "loss": neg_joint_log_prob}

n = 100
iterator = jnp.ones(n)
last_carry, output = jax.lax.scan(train_step, carry, iterator)

loss = output["loss"]
plt.plot(loss, label="loss")
plt.legend();

optimized_params = last_carry["params"]
theta_map = optimized_params["theta"]
print(f"theta_map = {theta_map}")
notebooks/book1/04/laplace_approx_beta_binom_jax.ipynb
probml/pyprobml
mit
loc and scale of approximated normal posterior
loc = theta_map  # loc of approximate posterior
print(f"loc = {loc}")

# scale of approximate posterior
scale = 1 / jnp.sqrt(jax.hessian(neg_log_prior_likelihood_fn)(optimized_params, dataset)["theta"]["theta"])
print(f"scale = {scale}")
notebooks/book1/04/laplace_approx_beta_binom_jax.ipynb
probml/pyprobml
mit
True posterior and laplace approximated posterior
plt.figure() y = jnp.exp(dist.Normal(loc, scale).log_prob(theta_range)) plt.title("Quadratic approximation") plt.plot(theta_range, y, label="laplace approximation", color="tab:red") plt.plot(theta_range, exact_posterior.prob(theta_range), label="true posterior", color="tab:green", linestyle="--") plt.xlabel("$\\theta$") plt.ylabel("$p(\\theta)$") sns.despine() plt.legend() savefig("bb_laplace") # set FIG_DIR = "path/to/figure" enviornment variable to save figure
notebooks/book1/04/laplace_approx_beta_binom_jax.ipynb
probml/pyprobml
mit
Pymc
try:
    import pymc3 as pm
except ModuleNotFoundError:
    %pip install -qq pymc3
    import pymc3 as pm
try:
    import scipy.stats as stats
except ModuleNotFoundError:
    %pip install -qq scipy
    import scipy.stats as stats
import scipy.special as sp
try:
    import arviz as az
except ModuleNotFoundError:
    %pip install -qq arviz
    import arviz as az
import math

# Laplace
with pm.Model() as normal_aproximation:
    theta = pm.Beta("theta", 1.0, 1.0)
    y = pm.Binomial("y", n=1, p=theta, observed=dataset)  # Bernoulli
    mean_q = pm.find_MAP()
    std_q = ((1 / pm.find_hessian(mean_q, vars=[theta])) ** 0.5)[0]

loc = mean_q["theta"]

# plt.savefig('bb_laplace.pdf');
x = theta_range

plt.figure()
plt.plot(x, stats.norm.pdf(x, loc, std_q), "--", label="Laplace")
post_exact = stats.beta.pdf(x, n_heads + 1, n_tails + 1)
plt.plot(x, post_exact, label="exact")
plt.title("Quadratic approximation")
plt.xlabel("θ", fontsize=14)
plt.yticks([])
plt.legend()
notebooks/book1/04/laplace_approx_beta_binom_jax.ipynb
probml/pyprobml
mit
Implement Preprocessing Functions The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below: - Lookup Table - Tokenize Punctuation Lookup Table To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries: - Dictionary to go from the words to an id, we'll call vocab_to_int - Dictionary to go from the id to word, we'll call int_to_vocab Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
import numpy as np
import problem_unittests as tests
from collections import Counter

def create_lookup_tables(text):
    """
    Create lookup tables for vocabulary
    :param text: The text of tv scripts split into words
    :return: A tuple of dicts (vocab_to_int, int_to_vocab)
    """
    # TODO: Implement Function
    counts = Counter(text)
    vocab = sorted(counts, key=counts.get, reverse=True)
    vocab_to_int = { w : i for i, w in enumerate(vocab, 0)}
    int_to_vocab = dict(enumerate(vocab))
    return vocab_to_int, int_to_vocab

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
tv-script-generation/dlnd_tv_script_generation.ipynb
vinitsamel/udacitydeeplearning
mit
Tokenize Punctuation We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!". Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and the value is the token: - Period ( . ) - Comma ( , ) - Quotation Mark ( " ) - Semicolon ( ; ) - Exclamation mark ( ! ) - Question mark ( ? ) - Left Parentheses ( ( ) - Right Parentheses ( ) ) - Dash ( -- ) - Return ( \n ) This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol into its own word, making it easier for the neural network to predict the next word (a short usage sketch follows the implementation below). Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
def token_lookup():
    """
    Generate a dict to turn punctuation into a token.
    :return: Tokenize dictionary where the key is the punctuation and the value is the token
    """
    # TODO: Implement Function
    token_dict = {'.' : "||Period||", ',' : "||Comma||", '"' : "||Quotation_Mark||",
                  ';' : "||Semicolon||", '!': "||Exclamation_Mark||", '?': "||Question_Mark||",
                  '(' : "||Left_Parentheses||", ')' : "||Right_Parentheses||", '--' : "||Dash||",
                  '\n' : "||Return||"}
    return token_dict

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
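As a hedged illustration of how this dictionary is applied (the project's preprocessing helper performs this step elsewhere; the sample string here is made up):
```
# Hypothetical example: replace each symbol with ' token ' so punctuation
# becomes its own "word" when the script is split on spaces.
sample = 'hello, world!\ngoodbye.'
for symbol, token in token_lookup().items():
    sample = sample.replace(symbol, ' {} '.format(token))
print(sample.split())
```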
tv-script-generation/dlnd_tv_script_generation.ipynb
vinitsamel/udacitydeeplearning
mit
Input Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: - Input text placeholder named "input" using the TF Placeholder name parameter. - Targets placeholder - Learning Rate placeholder Return the placeholders in the following tuple (Input, Targets, LearningRate)
def get_inputs():
    """
    Create TF Placeholders for input, targets, and learning rate.
    :return: Tuple (input, targets, learning rate)
    """
    # TODO: Implement Function
    inputs_ = tf.placeholder(tf.int32, shape=[None, None], name='input')
    targets_ = tf.placeholder(tf.int32, shape=[None, None], name='targets')
    learn_rate_ = tf.placeholder(tf.float32, shape=None, name='learning_rate')
    return (inputs_, targets_, learn_rate_)

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
tv-script-generation/dlnd_tv_script_generation.ipynb
vinitsamel/udacitydeeplearning
mit
Build RNN Cell and Initialize Stack one or more BasicLSTMCells in a MultiRNNCell. - The RNN size should be set using rnn_size - Initialize the cell state using the MultiRNNCell's zero_state() function - Apply the name "initial_state" to the initial state using tf.identity() Return the cell and initial state in the following tuple (Cell, InitialState)
def get_init_cell(batch_size, rnn_size):
    """
    Create an RNN Cell and initialize it.
    :param batch_size: Size of batches
    :param rnn_size: Size of RNNs
    :return: Tuple (cell, initialize state)
    """
    # TODO: Implement Function
    lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
    cell = tf.contrib.rnn.MultiRNNCell([lstm])
    # zero_state needs a float dtype so the state matches the float32 inputs
    # used later in tf.nn.dynamic_rnn (the original passed tf.int32 here).
    initial_state = tf.identity(cell.zero_state(batch_size, tf.float32), name="initial_state")
    return cell, initial_state

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
tv-script-generation/dlnd_tv_script_generation.ipynb
vinitsamel/udacitydeeplearning
mit
Build RNN You created an RNN Cell in the get_init_cell() function. Time to use the cell to create an RNN. - Build the RNN using tf.nn.dynamic_rnn() - Apply the name "final_state" to the final state using tf.identity() Return the outputs and final state in the following tuple (Outputs, FinalState)
def build_rnn(cell, inputs):
    """
    Create a RNN using a RNN Cell
    :param cell: RNN Cell
    :param inputs: Input text data
    :return: Tuple (Outputs, Final State)
    """
    # TODO: Implement Function
    outputs, fs = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
    final_state = tf.identity(fs, name='final_state')
    return outputs, final_state

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
tv-script-generation/dlnd_tv_script_generation.ipynb
vinitsamel/udacitydeeplearning
mit
Build the Neural Network Apply the functions you implemented above to: - Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function. - Build RNN using cell and your build_rnn(cell, inputs) function. - Apply a fully connected layer with a linear activation and vocab_size as the number of outputs. Return the logits and final state in the following tuple (Logits, FinalState)
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
    """
    Build part of the neural network
    :param cell: RNN cell
    :param rnn_size: Size of rnns
    :param input_data: Input data
    :param vocab_size: Vocabulary size
    :param embed_dim: Number of embedding dimensions
    :return: Tuple (Logits, FinalState)
    """
    # TODO: Implement Function
    embed = get_embed(input_data, vocab_size, embed_dim)
    rnn, final_state = build_rnn(cell, embed)
    logits = tf.contrib.layers.fully_connected(rnn, vocab_size, activation_fn=None,
                                               weights_initializer=tf.truncated_normal_initializer(stddev=0.1),
                                               biases_initializer=tf.zeros_initializer())
    return logits, final_state

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
tv-script-generation/dlnd_tv_script_generation.ipynb
vinitsamel/udacitydeeplearning
mit
Batches Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements: - The first element is a single batch of input with the shape [batch size, sequence length] - The second element is a single batch of targets with the shape [batch size, sequence length] If you can't fill the last batch with enough data, drop the last batch. For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
  # First Batch
  [
    # Batch of Input
    [[ 1  2], [ 7  8], [13 14]]
    # Batch of targets
    [[ 2  3], [ 8  9], [14 15]]
  ]

  # Second Batch
  [
    # Batch of Input
    [[ 3  4], [ 9 10], [15 16]]
    # Batch of targets
    [[ 4  5], [10 11], [16 17]]
  ]

  # Third Batch
  [
    # Batch of Input
    [[ 5  6], [11 12], [17 18]]
    # Batch of targets
    [[ 6  7], [12 13], [18  1]]
  ]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
def get_batches(int_text, batch_size, seq_length):
    """
    Return batches of input and target
    :param int_text: Text with the words replaced by their ids
    :param batch_size: The size of batch
    :param seq_length: The length of sequence
    :return: Batches as a Numpy array
    """
    # TODO: Implement Function
    num_batches = int(len(int_text) / (batch_size * seq_length))
    num_words = num_batches * batch_size * seq_length
    input_data = np.array(int_text[:num_words])
    target_data = np.array(int_text[1:num_words+1])
    input_batches = np.split(input_data.reshape(batch_size, -1), num_batches, 1)
    target_batches = np.split(target_data.reshape(batch_size, -1), num_batches, 1)
    # last target value in the last batch is the first input value of the first batch
    #print (batches)
    target_batches[-1][-1][-1] = input_batches[0][0][0]
    return np.array(list(zip(input_batches, target_batches)))

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
tv-script-generation/dlnd_tv_script_generation.ipynb
vinitsamel/udacitydeeplearning
mit
Neural Network Training Hyperparameters Tune the following parameters: Set num_epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set embed_dim to the size of the embedding. Set seq_length to the length of sequence. Set learning_rate to the learning rate. Set show_every_n_batches to the number of batches the neural network should print progress.
# Number of Epochs
num_epochs = 20
# Batch Size
batch_size = 100
# RNN Size
rnn_size = 512
# Embedding Dimension Size
embed_dim = 300
# Sequence Length
seq_length = 10
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 10

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
tv-script-generation/dlnd_tv_script_generation.ipynb
vinitsamel/udacitydeeplearning
mit
Generate TV Script This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
    # Load saved model
    loader = tf.train.import_meta_graph(load_dir + '.meta')
    loader.restore(sess, load_dir)

    # Get Tensors from loaded model
    input_text, initial_state, final_state, probs = get_tensors(loaded_graph)

    # Sentences generation setup
    gen_sentences = [prime_word + ':']
    prev_state = sess.run(initial_state, {input_text: np.array([[1]])})

    # Generate sentences
    for n in range(gen_length):
        # Dynamic Input
        dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
        dyn_seq_length = len(dyn_input[0])

        # Get Prediction
        probabilities, prev_state = sess.run(
            [probs, final_state],
            {input_text: dyn_input, initial_state: prev_state})

        pred_word = pick_word(probabilities[0, dyn_seq_length-1], int_to_vocab)

        gen_sentences.append(pred_word)

    # Remove tokens
    tv_script = ' '.join(gen_sentences)
    for key, token in token_dict.items():
        ending = ' ' if key in ['\n', '(', '"'] else ''
        tv_script = tv_script.replace(' ' + token.lower(), key)
    tv_script = tv_script.replace('\n ', '\n')
    tv_script = tv_script.replace('( ', '(')

    print(tv_script)
tv-script-generation/dlnd_tv_script_generation.ipynb
vinitsamel/udacitydeeplearning
mit
By default Prophet fits additive seasonalities, meaning the effect of the seasonality is added to the trend to get the forecast. This time series of the number of air passengers is an example of when additive seasonality does not work:
%%R -w 10 -h 6 -u in
df <- read.csv('../examples/example_air_passengers.csv')
m <- prophet(df)
future <- make_future_dataframe(m, 50, freq = 'm')
forecast <- predict(m, future)
plot(m, forecast)

df = pd.read_csv('../examples/example_air_passengers.csv')
m = Prophet()
m.fit(df)
future = m.make_future_dataframe(50, freq='MS')
forecast = m.predict(future)
fig = m.plot(forecast)
notebooks/multiplicative_seasonality.ipynb
facebook/prophet
mit
This time series has a clear yearly cycle, but the seasonality in the forecast is too large at the start of the time series and too small at the end. In this time series, the seasonality is not a constant additive factor as assumed by Prophet, rather it grows with the trend. This is multiplicative seasonality. Prophet can model multiplicative seasonality by setting seasonality_mode='multiplicative' in the input arguments:
%%R -w 10 -h 6 -u in
m <- prophet(df, seasonality.mode = 'multiplicative')
forecast <- predict(m, future)
plot(m, forecast)

m = Prophet(seasonality_mode='multiplicative')
m.fit(df)
forecast = m.predict(future)
fig = m.plot(forecast)
notebooks/multiplicative_seasonality.ipynb
facebook/prophet
mit
The components figure will now show the seasonality as a percent of the trend:
%%R -w 9 -h 6 -u in prophet_plot_components(m, forecast) fig = m.plot_components(forecast)
notebooks/multiplicative_seasonality.ipynb
facebook/prophet
mit
With seasonality_mode='multiplicative', holiday effects will also be modeled as multiplicative. Any added seasonalities or extra regressors will by default use whatever seasonality_mode is set to, but can be overridden by specifying mode='additive' or mode='multiplicative' as an argument when adding the seasonality or regressor. For example, this block sets the built-in seasonalities to multiplicative, but includes an additive quarterly seasonality and an additive regressor:
%%R
m <- prophet(seasonality.mode = 'multiplicative')
m <- add_seasonality(m, 'quarterly', period = 91.25, fourier.order = 8, mode = 'additive')
m <- add_regressor(m, 'regressor', mode = 'additive')

m = Prophet(seasonality_mode='multiplicative')
m.add_seasonality('quarterly', period=91.25, fourier_order=8, mode='additive')
m.add_regressor('regressor', mode='additive')
notebooks/multiplicative_seasonality.ipynb
facebook/prophet
mit
The skew result shows a positive (right) or negative (left) skew. Values closer to zero show less skew. From the graphs, we can see that radius_mean, perimeter_mean, area_mean, concavity_mean and concave_points_mean are useful in predicting cancer type due to the distinct grouping between malignant and benign cancer types in these features. We can also see that area_worst and perimeter_worst are quite useful.
data.diagnosis.unique()

# Group by diagnosis and review the output.
diag_gr = data.groupby('diagnosis', axis=0)
pd.DataFrame(diag_gr.size(), columns=['# of observations'])
NB2_ExploratoryDataAnalysis.ipynb
ShiroJean/Breast-cancer-risk-prediction
mit
Check binary encoding from NB1 to confirm the conversion of the diagnosis categorical data into numeric, where * Malignant = 1 (indicates presence of cancer cells) * Benign = 0 (indicates absence) Observation 357 observations indicate the absence of cancer cells and 212 show the presence of cancer cells. Let's confirm this by plotting the histogram. 2.3 Unimodal Data Visualizations One of the main goals of visualizing the data here is to observe which features are most helpful in predicting malignant or benign cancer. The other is to see general trends that may aid us in model selection and hyperparameter selection. Apply 3 techniques that you can use to understand each attribute of your dataset independently. * Histograms. * Density Plots. * Box and Whisker Plots.
# lets get the frequency of cancer diagnosis
sns.set_style("white")
sns.set_context({"figure.figsize": (10, 8)})
sns.countplot(data['diagnosis'], label='Count', palette="Set3")
NB2_ExploratoryDataAnalysis.ipynb
ShiroJean/Breast-cancer-risk-prediction
mit
2.3.1 Visualise distribution of data via histograms Histograms are commonly used to visualize numerical variables. A histogram is similar to a bar graph after the values of the variable are grouped (binned) into a finite number of intervals (bins). Histograms group data into bins and provide you a count of the number of observations in each bin. From the shape of the bins you can quickly get a feeling for whether an attribute is Gaussian, skewed or even has an exponential distribution. It can also help you see possible outliers. Separate columns into smaller dataframes to perform visualization
# Break up columns into groups, according to their suffix designation
# (_mean, _se, and _worst) to perform visualisation plots off.
# Join the 'ID' and 'Diagnosis' back on
data_id_diag = data.loc[:, ["id", "diagnosis"]]
data_diag = data.loc[:, ["diagnosis"]]

# For a merge + slice:
data_mean = data.ix[:, 1:11]
data_se = data.ix[:, 11:22]
data_worst = data.ix[:, 23:]

print(data_id_diag.columns)  # the original printed df_id_diag, which is not defined in this cell
#print(data_mean.columns)
#print(data_se.columns)
#print(data_worst.columns)
NB2_ExploratoryDataAnalysis.ipynb
ShiroJean/Breast-cancer-risk-prediction
mit
Histogram for the "_mean" suffix designation
# Plot histograms of CUT1 variables
hist_mean = data_mean.hist(bins=10, figsize=(15, 10), grid=False,)

# Any individual histograms, use this:
# df_cut['radius_worst'].hist(bins=100)
NB2_ExploratoryDataAnalysis.ipynb
ShiroJean/Breast-cancer-risk-prediction
mit
Histogram for the "_se" suffix designation
#Plot histograms of _se variables #hist_se=data_se.hist(bins=10, figsize=(15, 10),grid=False,)
NB2_ExploratoryDataAnalysis.ipynb
ShiroJean/Breast-cancer-risk-prediction
mit
Histogram for the "_worst" suffix designation
#Plot histograms of _worst variables #hist_worst=data_worst.hist(bins=10, figsize=(15, 10),grid=False,)
NB2_ExploratoryDataAnalysis.ipynb
ShiroJean/Breast-cancer-risk-prediction
mit
Observation We can see that perhaps the attributes concavity and concave_points may have an exponential distribution. We can also see that perhaps the texture, smoothness and symmetry attributes may have a Gaussian or nearly Gaussian distribution. This is interesting because many machine learning techniques assume a Gaussian univariate distribution on the input variables. 2.3.2 Visualize distribution of data via density plots Density plots for the "_mean" suffix designation
# Density Plots
plt = data_mean.plot(kind='density', subplots=True, layout=(4,3), sharex=False,
                     sharey=False, fontsize=12, figsize=(15,10))
NB2_ExploratoryDataAnalysis.ipynb
ShiroJean/Breast-cancer-risk-prediction
mit
Density plots for the "_se" suffix designation
#Density Plots #plt = data_se.plot(kind= 'density', subplots=True, layout=(4,3), sharex=False, # sharey=False,fontsize=12, figsize=(15,10))
NB2_ExploratoryDataAnalysis.ipynb
ShiroJean/Breast-cancer-risk-prediction
mit
Density plot for the "_worst" suffix designation
#Density Plots #plt = data_worst.plot(kind= 'kde', subplots=True, layout=(4,3), sharex=False, sharey=False,fontsize=5, # figsize=(15,10))
NB2_ExploratoryDataAnalysis.ipynb
ShiroJean/Breast-cancer-risk-prediction
mit
Observation We can see that perhaps the attributes perimeter, radius, area, concavity and compactness may have an exponential distribution. We can also see that perhaps the texture, smoothness and symmetry attributes may have a Gaussian or nearly Gaussian distribution. This is interesting because many machine learning techniques assume a Gaussian univariate distribution on the input variables. 2.3.3 Visualise distribution of data via box plots Box plot for the "_mean" suffix designation
# box and whisker plots #plt=data_mean.plot(kind= 'box' , subplots=True, layout=(4,4), sharex=False, sharey=False,fontsize=12)
NB2_ExploratoryDataAnalysis.ipynb
ShiroJean/Breast-cancer-risk-prediction
mit
Box plot for the "_se" suffix designation
# box and whisker plots #plt=data_se.plot(kind= 'box' , subplots=True, layout=(4,4), sharex=False, sharey=False,fontsize=12)
NB2_ExploratoryDataAnalysis.ipynb
ShiroJean/Breast-cancer-risk-prediction
mit
Box plot for the "_worst" suffix designation
# box and whisker plots #plt=data_worst.plot(kind= 'box' , subplots=True, layout=(4,4), sharex=False, sharey=False,fontsize=12)
NB2_ExploratoryDataAnalysis.ipynb
ShiroJean/Breast-cancer-risk-prediction
mit
Observation We can see that perhaps the attributes perimeter, radius, area, concavity and compactness may have an exponential distribution. We can also see that perhaps the texture, smoothness and symmetry attributes may have a Gaussian or nearly Gaussian distribution. This is interesting because many machine learning techniques assume a Gaussian univariate distribution on the input variables. 2.4 Multimodal Data Visualizations Scatter plots Correlation matrix
# plot correlation matrix
import pandas as pd
import numpy as np
import seaborn as sns
from matplotlib import pyplot as plt

plt.style.use('fivethirtyeight')
sns.set_style("white")

data = pd.read_csv('data/clean-data.csv', index_col=False)
data.drop('Unnamed: 0', axis=1, inplace=True)

# Compute the correlation matrix
corr = data_mean.corr()

# Generate a mask for the upper triangle
mask = np.zeros_like(corr, dtype=np.bool)
mask[np.triu_indices_from(mask)] = True

# Set up the matplotlib figure (the original assigned the figure to `data`,
# which clobbered the DataFrame loaded above)
fig, ax = plt.subplots(figsize=(8, 8))
plt.title('Breast Cancer Feature Correlation')

# Generate a custom diverging colormap
cmap = sns.diverging_palette(260, 10, as_cmap=True)

# Draw the heatmap with the mask and correct aspect ratio
sns.heatmap(corr, vmax=1.2, square='square', cmap=cmap, mask=mask, ax=ax, annot=True, fmt='.2g', linewidths=2)
NB2_ExploratoryDataAnalysis.ipynb
ShiroJean/Breast-cancer-risk-prediction
mit
Observation: We can see that strong positive relationships exist between the mean value parameters, with r between 0.75 and 1. * The mean area of the tissue nucleus has a strong positive correlation with the mean values of radius and perimeter; * Some parameters are moderately positively correlated (r between 0.5 and 0.75), such as concavity and area, concavity and perimeter, etc. * Likewise, we see some strong negative correlations of fractal_dimension with the radius, texture and perimeter mean values.
plt.style.use('fivethirtyeight') sns.set_style("white") data = pd.read_csv('data/clean-data.csv', index_col=False) g = sns.PairGrid(data[[data.columns[1],data.columns[2],data.columns[3], data.columns[4], data.columns[5],data.columns[6]]],hue='diagnosis' ) g = g.map_diag(plt.hist) g = g.map_offdiag(plt.scatter, s = 3)
NB2_ExploratoryDataAnalysis.ipynb
ShiroJean/Breast-cancer-risk-prediction
mit
Write a function solve_lorenz that solves the Lorenz system above for a particular initial condition $[x(0),y(0),z(0)]$. Your function should return a tuple of the solution array and time array.
def solve_lorentz(ic, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """Solve the Lorenz system for a single initial condition.

    Parameters
    ----------
    ic : array, list, tuple
        Initial conditions [x,y,z].
    max_time: float
        The max time to use. Integrate with 250 points per time unit.
    sigma, rho, beta: float
        Parameters of the differential equation.

    Returns
    -------
    soln : np.ndarray
        The array of the solution. Each row will be the solution vector at that time.
    t : np.ndarray
        The array of time points used.
    """
    # Use 250 points per time unit, as the docstring specifies
    # (the original used a fixed 250 points in total).
    t = np.linspace(0, max_time, int(250*max_time))
    soln = odeint(lorentz_derivs, ic, t, args=(sigma, rho, beta))
    return (soln, t)

assert True # leave this to grade solve_lorenz
assignments/assignment10/ODEsEx02.ipynb
rsterbentz/phys202-2015-work
mit
Write a function plot_lorentz that: Solves the Lorenz system for N different initial conditions. To generate your initial conditions, draw uniform random samples for x, y and z in the range $[-15,15]$. Call np.random.seed(1) a single time at the top of your function to use the same seed each time. Plot $[x(t),z(t)]$ using a line to show each trajectory. Color each line using the hot colormap from Matplotlib. Label your plot and choose an appropriate x and y limit. The following cell shows how to generate colors that can be used for the lines:
N = 5
colors = plt.cm.hot(np.linspace(0,1,N))
for i in range(N):
    # To use these colors with plt.plot, pass them as the color argument
    print(colors[i])

np.random.seed(1)
g = []
h = []
f = []
for i in range(5):
    rnd = np.random.random(size=3)
    a,b,c = 30*rnd - 15
    g.append(a)
    h.append(b)
    f.append(c)
g,h,f

def plot_lorentz(N=10, max_time=4.0, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """Plot [x(t),z(t)] for the Lorenz system.

    Parameters
    ----------
    N : int
        Number of initial conditions and trajectories to plot.
    max_time: float
        Maximum time to use.
    sigma, rho, beta: float
        Parameters of the differential equation.
    """
    np.random.seed(1)
    colors = plt.cm.hot(np.linspace(0,1,N))
    f = plt.figure(figsize=(7,7))
    for i in range(N):
        ic = 30*np.random.random(size=3) - 15
        soln, t = solve_lorentz(ic, max_time, sigma, rho, beta)
        plt.plot(soln[:,0], soln[:,2], color=colors[i])
    plt.xlabel('x(t)')
    plt.ylabel('z(t)')
    plt.title('Lorenz System: x(t) vs. z(t)')
    plt.ylim(-20,110)
    plt.xlim(-60,60)

plot_lorentz()

assert True # leave this to grade the plot_lorenz function
assignments/assignment10/ODEsEx02.ipynb
rsterbentz/phys202-2015-work
mit
Use interact to explore your plot_lorenz function with: max_time an integer slider over the interval $[1,10]$. N an integer slider over the interval $[1,50]$. sigma a float slider over the interval $[0.0,50.0]$. rho a float slider over the interval $[0.0,50.0]$. beta fixed at a value of $8/3$.
interact(plot_lorentz, N=(1,50), max_time=(1,10), sigma=(0.0,50.0), rho=(0.0,50.0), beta=fixed(8/3));
assignments/assignment10/ODEsEx02.ipynb
rsterbentz/phys202-2015-work
mit
<table align="left"> <td> <a href="https://colab.research.google.com/github/tensorflow/cloud/blob/master/src/python/tensorflow_cloud/tuner/tests/examples/ai_platform_vizier_tuner.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://github.com/tensorflow/cloud/blob/master/src/python/tensorflow_cloud/tuner/tests/examples/ai_platform_vizier_tuner.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">View on GitHub </a> </td> </table> Overview This tutorial demonstrates AI Platform's CloudTuner service. Objective CloudTuner is implemented based upon the KerasTuner and uses AI Platform Vizier as an oracle to get suggested trials, run trials, etc. The usage of CloudTuner is the same as KerasTuner and additionally accept Vizier's study_config as an alternative input. Costs This tutorial uses billable components of Google Cloud: AI Platform Training Cloud Storage Learn about AI Platform Training pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage. PIP install packages and dependencies Install additional dependencies not installed in the notebook environment. Use the latest major GA version of the framework.
! pip install google-cloud ! pip install google-cloud-storage ! pip install requests ! pip install tensorflow_datasets
src/python/tensorflow_cloud/tuner/tests/examples/ai_platform_vizier_tuner.ipynb
tensorflow/cloud
apache-2.0
Install CloudTuner Download and install CloudTuner from tensorflow-cloud.
! pip install tensorflow-cloud
src/python/tensorflow_cloud/tuner/tests/examples/ai_platform_vizier_tuner.ipynb
tensorflow/cloud
apache-2.0
Import libraries and define constants
from tensorflow_cloud import CloudTuner import keras_tuner REGION = 'us-central1' PROJECT_ID = '[your-project-id]' #@param {type:"string"} ! gcloud config set project $PROJECT_ID
src/python/tensorflow_cloud/tuner/tests/examples/ai_platform_vizier_tuner.ipynb
tensorflow/cloud
apache-2.0
Instantiate CloudTuner Next, we instantiate an instance of the CloudTuner. We will define our tuning hyperparameters and pass them into the constructor as the parameter hyperparameters. We also set the objective ('accuracy') to measure the performance of each trial, and we shall keep the number of trials small (5) for the purpose of this demonstration.
# Configure the search space
HPS = keras_tuner.HyperParameters()
HPS.Float('learning_rate', min_value=1e-4, max_value=1e-2, sampling='log')
HPS.Int('num_layers', 2, 10)

tuner = CloudTuner(
    build_model,
    project_id=PROJECT_ID,
    region=REGION,
    objective='accuracy',
    hyperparameters=HPS,
    max_trials=5,
    directory='tmp_dir/1')
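Note that build_model is defined in an earlier cell of the full notebook that is not included in this extract. Judging from build_pipeline_model further below, it presumably looks something like the following sketch (an assumption, not the notebook's exact code):
```
# Hypothetical reconstruction of build_model, mirroring build_pipeline_model below.
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Dense, Flatten
from tensorflow.keras.optimizers import Adam

def build_model(hp):
    model = Sequential()
    model.add(Flatten(input_shape=(28, 28, 1)))
    for _ in range(hp.get('num_layers')):         # tunable number of layers
        model.add(Dense(units=64, activation='relu'))
    model.add(Dense(10, activation='softmax'))
    model.compile(
        optimizer=Adam(hp.get('learning_rate')),  # tunable learning rate
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy'])
    return model
```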
src/python/tensorflow_cloud/tuner/tests/examples/ai_platform_vizier_tuner.ipynb
tensorflow/cloud
apache-2.0
Create and train the model
def build_pipeline_model(hp):
    model = Sequential()
    model.add(Flatten(input_shape=(28, 28, 1)))
    # the number of layers is tunable
    for _ in range(hp.get('num_layers')):
        model.add(Dense(units=64, activation='relu'))
    model.add(Dense(10, activation='softmax'))
    # the learning rate is tunable
    model.compile(
        optimizer=Adam(lr=hp.get('learning_rate')),
        loss='sparse_categorical_crossentropy',
        metrics=['accuracy'])
    return model

# Configure the search space
pipeline_HPS = keras_tuner.HyperParameters()
pipeline_HPS.Float('learning_rate', min_value=1e-4, max_value=1e-2, sampling='log')
pipeline_HPS.Int('num_layers', 2, 10)

pipeline_tuner = CloudTuner(
    build_pipeline_model,
    project_id=PROJECT_ID,
    region=REGION,
    objective='accuracy',
    hyperparameters=pipeline_HPS,
    max_trials=5,
    directory='tmp_dir/2')

pipeline_tuner.search(x=ds_train, epochs=10, validation_data=ds_test)
pipeline_tuner.results_summary()

pipeline_model = pipeline_tuner.get_best_models(num_models=1)[0]
print(pipeline_model)
print(pipeline_model.weights)
src/python/tensorflow_cloud/tuner/tests/examples/ai_platform_vizier_tuner.ipynb
tensorflow/cloud
apache-2.0
Tutorial: Using a Study Configuration Now, let's repeat this study but this time the search space is passed in as a Vizier study_config. Create the Study Configuration Let's start by constructing the study config for optimizing the accuracy of the model with the hyperparameters number of layers and learning rate, just as we did before.
# Configure the search space STUDY_CONFIG = { 'algorithm': 'ALGORITHM_UNSPECIFIED', 'metrics': [{ 'goal': 'MAXIMIZE', 'metric': 'accuracy' }], 'parameters': [{ 'discrete_value_spec': { 'values': [0.0001, 0.001, 0.01] }, 'parameter': 'learning_rate', 'type': 'DISCRETE' }, { 'integer_value_spec': { 'max_value': 10, 'min_value': 2 }, 'parameter': 'num_layers', 'type': 'INTEGER' }, { 'discrete_value_spec': { 'values': [32, 64, 96, 128] }, 'parameter': 'units', 'type': 'DISCRETE' }], 'automatedStoppingConfig': { 'decayCurveStoppingConfig': { 'useElapsedTime': True } } }
src/python/tensorflow_cloud/tuner/tests/examples/ai_platform_vizier_tuner.ipynb
tensorflow/cloud
apache-2.0
Instantiate CloudTuner Next, we instantiate an instance of the CloudTuner. In this instantiation, we replace the hyperparameters and objective parameters with the study_config parameter.
tuner = CloudTuner( build_model, project_id=PROJECT_ID, region=REGION, study_config=STUDY_CONFIG, max_trials=10, directory='tmp_dir/3')
src/python/tensorflow_cloud/tuner/tests/examples/ai_platform_vizier_tuner.ipynb
tensorflow/cloud
apache-2.0
Tutorial: Distributed Tuning Let's run multiple tuning loops concurrently using multiple threads. To run distributed tuning, multiple tuners should share the same study_id, but different tuner_ids.
from multiprocessing.dummy import Pool
# If you are running this tutorial in a notebook locally, you may run multiple
# tuning loops concurrently using multi-processes instead of multi-threads.
# from multiprocessing import Pool
import time
import datetime

STUDY_ID = 'Tuner_study_{}'.format(
    datetime.datetime.now().strftime('%Y%m%d_%H%M%S'))

def single_tuner(tuner_id):
    """Instantiate a `CloudTuner` and set up its `tuner_id`.

    Args:
        tuner_id: Integer.

    Returns:
        A CloudTuner.
    """
    tuner = CloudTuner(
        build_model,
        project_id=PROJECT_ID,
        region=REGION,
        objective='accuracy',
        hyperparameters=HPS,
        max_trials=18,
        study_id=STUDY_ID,
        directory=('tmp_dir/cloud/%s' % (STUDY_ID)))
    tuner.tuner_id = str(tuner_id)
    return tuner

def search_fn(tuner):
    # Start searching from different time points for each worker to avoid `model.build` collision.
    time.sleep(int(tuner.tuner_id) * 2)
    tuner.search(x=x, y=y, epochs=5, validation_data=(val_x, val_y), verbose=0)
    return tuner
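The driver cell that actually launches the workers is not included in this extract; with the helpers above, it would presumably look something like this hedged sketch:
```
# Hedged sketch: run two tuning workers concurrently with the thread pool.
num_workers = 2
tuners = [single_tuner(i) for i in range(num_workers)]
with Pool(num_workers) as pool:
    finished_tuners = pool.map(search_fn, tuners)
```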
src/python/tensorflow_cloud/tuner/tests/examples/ai_platform_vizier_tuner.ipynb
tensorflow/cloud
apache-2.0
Database format In a different notebook (the document you are looking at is called an IPython Notebook) I have converted the mongodb database text dump from Planet 4 into HDF format. I saved it in a subformat for very fast read-speed into memory; the 2 GB file currently loads within 20 seconds on my Macbook Pro. By the way, this HDF5 format is supported in IDL and Matlab as well, so I could provide this file as a download for Candy and others, if wanted. I save the object I get back here in the variable df, a shortcut for dataframe, which is the essential table object of the pandas library.
df = pd.read_hdf(get_data.get_current_database_fname(), 'df')
notebooks/P4 stats.ipynb
michaelaye/planet4
isc
So, what did we receive in df (note that type 'object' often means string in our case, but could mean also a different complex datatype):
df = pd.read_hdf("/Users/klay6683/local_data/2018-10-14_planet_four_classifications_queryable_cleaned.h5") df.info() from planet4 import stats obsids = df.image_name.unique() from tqdm import tqdm_notebook as tqdm results = [] for obsid in tqdm(obsids): sub_df = df[df.image_name==obsid] results.append(stats.get_status_per_classifications(sub_df)) s = pd.Series(results, index=obsids) s.describe() %matplotlib inline s.to_csv("current_status.csv") s.hist(bins=30) s[s<50].max() s[s<50].shape !cat current_status.csv
notebooks/P4 stats.ipynb
michaelaye/planet4
isc
Here are the first 5 rows of the dataframe:
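(The display cell itself isn't preserved in this extract; a minimal equivalent is simply df.head(), shown here as a small addition. The next cell then writes the unique image names to a CSV file.)
```
# Show the first 5 rows of the dataframe.
df.head()
```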
pd.Series(df.image_name.unique()).to_csv("image_names.csv", index=False)
notebooks/P4 stats.ipynb
michaelaye/planet4
isc
Image IDs For a simple first task, let's get a list of unique image ids, to know how many objects have been published.
img_ids = df.image_id.unique() print img_ids
notebooks/P4 stats.ipynb
michaelaye/planet4
isc
We might have some NaN values in there, depending on how the database dump was created. Let's check if that's true.
df.image_id.notnull().value_counts()
notebooks/P4 stats.ipynb
michaelaye/planet4
isc
If there's only True as an answer above, you can skip the nan-cleaning section Cleaning NaNs
df[df.image_id.isnull()].T # .T just to have it printed like a column, not a row
notebooks/P4 stats.ipynb
michaelaye/planet4
isc
In one version of the database dump, I had the last row being completely NaN, so I dropped it with the next command:
#df = df.drop(10718113)
notebooks/P4 stats.ipynb
michaelaye/planet4
isc
Let's confirm that there's nothing with a NaN image_id now:
df[df.image_id.isnull()]
notebooks/P4 stats.ipynb
michaelaye/planet4
isc
After NaNs are removed Ok, now we should only get non-NaNs:
img_ids = df.image_id.unique() img_ids
notebooks/P4 stats.ipynb
michaelaye/planet4
isc
So, how many objects were online:
no_all = len(img_ids) no_all
notebooks/P4 stats.ipynb
michaelaye/planet4
isc
Classification IDs Now we need to find out how often each image_id has been looked at. For that we have the groupby functionality. Specifically, because we want to know how many citizens have submitted a classification for each image_id, we need to group by the image_id and count the unique classification_ids within each image_id group. Uniqueness within Image_ID! We need to constrain for uniqueness because each classified object is included with the same classification_id and we don't want to count them more than once, because we are interested in the overall submission only for now. In other words: Because the different fans, blobs and interesting things for one image_id have all been submitted with the same classification_id, I need to constrain to unique classification_ids, otherwise images with a lot of submitted items would appear 'more completed' just for having a lot of fan-content, and not for being analyzed by a lot of citizens, which is what we want. First, I confirm that classification_ids indeed have more than 1 entry, i.e. when there was more than one object classified by a user:
df.groupby(df.classification_id, sort=False).size()
notebooks/P4 stats.ipynb
michaelaye/planet4
isc
Ok, that is the case. Now, group those classification_ids by the image_ids and save the grouping. Switch off sorting for speed, we want to sort by the counts later anyway.
grouping = df.classification_id.groupby(df.image_id, sort=False)
notebooks/P4 stats.ipynb
michaelaye/planet4
isc
Aggregate each group by finding the size of the unique list of classification_ids.
counts = grouping.agg(lambda x: x.unique().size) counts
notebooks/P4 stats.ipynb
michaelaye/planet4
isc
Order the counts by value
counts = counts.order(ascending=False) counts
notebooks/P4 stats.ipynb
michaelaye/planet4
isc
Note also that the length of this counts data series is 98220, exactly the number of unique image_ids. Percentages done. By constraining the previous data series on its values (the counts) and looking at the length of the remaining data, we can determine how much of the dataset is finished.
counts[counts >= 30].size
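To turn that count into an actual completion percentage (a small addition; like the cell above, it assumes that 30 classifications means a subframe is done):
```
# Fraction of subframes with at least 30 classifications,
# relative to all unique image_ids counted earlier.
100.0 * counts[counts >= 30].size / no_all
```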
notebooks/P4 stats.ipynb
michaelaye/planet4
isc
Wishing to see higher values, I briefly wondered whether one has to sum up the different counts to be correct, but I don't think that's it. The way I see it, one has to decide in which 'phase space' to work when determining the status of Planet4: either the phase space of total subframes, or the total number of classifications. I believe that to determine the finished state of Planet4 it is sufficient, and actually easier, to focus on the available number of subframes and determine how often each of them has been looked at. Separate for seasons The different seasons of our south polar observations are separated by jumps of several thousand in the numeric part of the original HiRISE image id, which in P4 is called image_name.
from planet4 import helper_functions as hf hf.define_season_column(df) hf.unique_image_ids_per_season(df) no_all = df.season.value_counts() no_all
notebooks/P4 stats.ipynb
michaelaye/planet4
isc
MDAP 2014
reload(hf) season1 = df.loc[df.season==1, :] inca = season1.loc[season1.image_name.str.endswith('_0985')] manhattan = season1.loc[season1.image_name.str.endswith('_0935')] hf.get_status(inca) hf.get_status(manhattan) hf.get_status(season1) inca_images = """PSP_002380_0985,PSP_002868_0985,PSP_003092_0985,PSP_003158_0985,PSP_003237_0985,PSP_003448_0985,PSP_003593_0985,PSP_003770_0815,PSP_003804_0985,PSP_003928_0815""" inca_images = inca_images.split(',') inca = df.loc[df.image_name.isin(inca_images),:] hf.get_status(inca, 25) for img in inca_images: print img print hf.get_status(season1.loc[season1.image_name == img,:]) oneimage = season1.loc[season1.image_name == 'PSP_003928_0815',:] img_ids = oneimage.image_id.unique() counts = hf.classification_counts_per_image(season1) counts[img_ids[0]] container = [] for img_id in img_ids: container.append(counts[img_id]) hist(container) savefig('done_for_PSP_003928_0815.png') counts = hf.classification_counts_per_image(df) counts[counts >=30].size df.info()
notebooks/P4 stats.ipynb
michaelaye/planet4
isc
Ok, so not that big a deal until we require more than 80 classifications to be done. How do the different existing user counts distribute? The method 'value_counts()' basically delivers a histogram of the counts_by_user data series. In other words, it shows how the frequency of classifications is distributed over the dataset. It shows an expected peak close to 100, because that's what we are aiming for now, and the system today no longer shows a subframe that has already been seen 100 times. But it also shows quite some waste of citizen power in all the classifications that went beyond 100 counts.
counts_by_user.value_counts() counts_by_user.value_counts().plot(style='*') users_work = df.classification_id.groupby(df.user_name).agg(lambda x: x.unique().size) users_work.order(ascending=False)[:10] df[df.user_name=='gwyneth walker'].classification_id.value_counts() import helper_functions as hf reload(hf) hf.classification_counts_for_user('Kitharode', df).hist? hf.classification_counts_for_user('Paul Johnson', df) np.isnan(df.marking) df.marking
notebooks/P4 stats.ipynb
michaelaye/planet4
isc
First, let's try translating between SMILES and SELFIES - as an example, we will use benzaldehyde. To translate from SMILES to SELFIES, use the selfies.encoder function, and to translate from SMILES back to SELFIES, use the selfies.decoder function.
original_smiles = "O=Cc1ccccc1"  # benzaldehyde

try:
    encoded_selfies = sf.encoder(original_smiles)  # SMILES -> SELFIES
    decoded_smiles = sf.decoder(encoded_selfies)   # SELFIES -> SMILES
except sf.EncoderError as err:
    pass  # sf.encoder error...
except sf.DecoderError as err:
    pass  # sf.decoder error...

encoded_selfies
decoded_smiles
docs/source/tutorial.ipynb
aspuru-guzik-group/selfies
apache-2.0
Note that original_smiles and decoded_smiles are different strings, but they both represent benzaldehyde. Thus, when comparing the two SMILES strings, string equality should not be used. Instead, use RDKit to check whether the SMILES strings represent the same molecule.
from rdkit import Chem Chem.CanonSmiles(original_smiles) == Chem.CanonSmiles(decoded_smiles)
docs/source/tutorial.ipynb
aspuru-guzik-group/selfies
apache-2.0
Customizing SELFIES The SELFIES grammar is derived dynamically from a set of semantic constraints, which assign bonding capacities to various atoms. Let's customize the semantic constraints that selfies operates on. By default, the following constraints are used:
sf.get_preset_constraints("default")
docs/source/tutorial.ipynb
aspuru-guzik-group/selfies
apache-2.0
These constraints map atoms (the keys) to their bonding capacities (the values). The special ? key maps to the bonding capacity for all atoms that are not explicitly listed in the constraints. For example, S and Li are constrained to a maximum of 6 and 8 bonds, respectively. Every SELFIES string can be decoded into a molecule that obeys the current constraints.
sf.decoder("[Li][=C][C][S][=C][C][#S]")
docs/source/tutorial.ipynb
aspuru-guzik-group/selfies
apache-2.0
But suppose that we instead wanted to constrain S and Li to a maximum of 2 and 1 bond(s), respectively. To do so, we create a new set of constraints, and tell selfies to operate on them using selfies.set_semantic_constraints.
new_constraints = sf.get_preset_constraints("default") new_constraints['Li'] = 1 new_constraints['S'] = 2 sf.set_semantic_constraints(new_constraints)
docs/source/tutorial.ipynb
aspuru-guzik-group/selfies
apache-2.0
To check that the update was successful, we can use selfies.get_semantic_constraints, which returns the semantic constraints that selfies is currently operating on.
sf.get_semantic_constraints()
docs/source/tutorial.ipynb
aspuru-guzik-group/selfies
apache-2.0
Our previous SELFIES string is now decoded like so. Notice that the specified bonding capacities are met, with every S and Li making only 2 and 1 bonds, respectively.
sf.decoder("[Li][=C][C][S][=C][C][#S]")
docs/source/tutorial.ipynb
aspuru-guzik-group/selfies
apache-2.0
Finally, to revert back to the default constraints, simply call:
sf.set_semantic_constraints()
docs/source/tutorial.ipynb
aspuru-guzik-group/selfies
apache-2.0
Please refer to the API reference for more details and more preset constraints. SELFIES in Practice Let's use a simple example to show how selfies can be used in practice, as well as highlight some convenient utility functions from the library. We start with a toy dataset of SMILES strings. As before, we can use selfies.encoder to convert the dataset into SELFIES form.
smiles_dataset = ["COC", "FCF", "O=O", "O=Cc1ccccc1"] selfies_dataset = list(map(sf.encoder, smiles_dataset)) selfies_dataset
docs/source/tutorial.ipynb
aspuru-guzik-group/selfies
apache-2.0
The function selfies.len_selfies computes the symbol length of a SELFIES string. We can use it to find the maximum symbol length of the SELFIES strings in the dataset.
max_len = max(sf.len_selfies(s) for s in selfies_dataset) max_len
docs/source/tutorial.ipynb
aspuru-guzik-group/selfies
apache-2.0
To extract the SELFIES symbols that form the dataset, use selfies.get_alphabet_from_selfies. Here, we add [nop] to the alphabet, which is a special padding character that selfies recognizes.
alphabet = sf.get_alphabet_from_selfies(selfies_dataset) alphabet.add("[nop]") alphabet = list(sorted(alphabet)) alphabet
docs/source/tutorial.ipynb
aspuru-guzik-group/selfies
apache-2.0
Then, create a mapping between the alphabet SELFIES symbols and indices.
vocab_stoi = {symbol: idx for idx, symbol in enumerate(alphabet)} vocab_itos = {idx: symbol for symbol, idx in vocab_stoi.items()} vocab_stoi
docs/source/tutorial.ipynb
aspuru-guzik-group/selfies
apache-2.0
SELFIES provides some convenience methods to convert between SELFIES strings and label (integer) and one-hot encodings. Using the first entry of the dataset (dimethyl ether) as an example:
dimethyl_ether = selfies_dataset[0] label, one_hot = sf.selfies_to_encoding(dimethyl_ether, vocab_stoi, pad_to_len=max_len) label one_hot dimethyl_ether = sf.encoding_to_selfies(one_hot, vocab_itos, enc_type="one_hot") dimethyl_ether sf.decoder(dimethyl_ether) # sf.decoder ignores [nop]
docs/source/tutorial.ipynb
aspuru-guzik-group/selfies
apache-2.0
If different encoding strategies are desired, selfies.split_selfies can be used to tokenize a SELFIES string into its individual symbols.
list(sf.split_selfies("[C][O][C]"))
docs/source/tutorial.ipynb
aspuru-guzik-group/selfies
apache-2.0
1
triangle_count = 0

for _ in range(N):
    a,b = sorted((random.random(), random.random()))
    x,y,z = (a,b-a,1-b)
    if x<0.5 and y<0.5 and z<0.5:
        triangle_count += 1

triangle_count / N
FiveThirtyEightRiddler/2017-09-15/classic_sticks.ipynb
andrewzwicky/puzzles
mit
2
triangle_count = 0

for _ in range(N):
    sticks = sorted((random.random(), random.random(), random.random()))
    if sticks[2] < sticks[0] + sticks[1]:
        triangle_count += 1

triangle_count / N
FiveThirtyEightRiddler/2017-09-15/classic_sticks.ipynb
andrewzwicky/puzzles
mit
3
triangle_count = 0
for _ in range(N):
    a,b = sorted((random.random(), random.random()))
    x,y,z = (a,b-a,1-b)
    if (x**2 + y**2 > z**2) and (x**2 + z**2 > y**2) and (z**2 + y**2 > x**2):
        triangle_count += 1

triangle_count / N
FiveThirtyEightRiddler/2017-09-15/classic_sticks.ipynb
andrewzwicky/puzzles
mit
4
triangle_count = 0
for _ in range(N):
    x,y,z = (random.random(), random.random(), random.random())
    if (x**2 + y**2 > z**2) and (x**2 + z**2 > y**2) and (z**2 + y**2 > x**2):
        triangle_count += 1

triangle_count / N
FiveThirtyEightRiddler/2017-09-15/classic_sticks.ipynb
andrewzwicky/puzzles
mit
Obtaining the data The SunPy VSO client is used to fetch the data automatically; only the corresponding dates need to be changed. Dates of interest for the project are: * 2012/01/29 to 2012/01/30 * 2013/03/04 to 2013/03/09 * 2014/09/23 to 2014/09/28 * 2015/09/03 to 2015/09/08 * 2016/03/11 to 2016/03/16 * 2016/08/28 to 2016/08/31 * 2016/06/13 to 2016/06/16 * 2016/03/29 to 2016/04/01 * 2016/01/29 to 2016/02/01 * 2016/08/13 to 2016/08/16 * 2012/11/01 - 2012/11/06 * 2015-11-25 - 2015-11-27 * 2012/02/28 - 2012/03/02 * 2012/02/18 - 2012/02/21 * 2011/09/29 - 2011/10/02 * 2011/10/08 - 2011/10/11 * 2012/05/01 - 2012/05/04 * 2012/07/01 - 2012/07/04
# defining datetime range and number of samples
dates = []  # where the dates pairs are going to be stored
date_start = datetime(2012,7,1,0,0,0)
date_end = datetime(2012,7,3,23,59,59)
date_samples = 35  # Number of samples to take between dates
date_delta = (date_end - date_start)/date_samples  # How frequent to take a sample
date_window = timedelta(minutes=1.0)

temp_date = date_start
while temp_date < date_end:
    dates.append((str(temp_date), str(temp_date+date_window)))
    temp_date += date_delta

for i in range(3):
    # define instrument
    instrument = 'hmi'
    # define wavelength range (min,max)
    #wavelength = 400*u.nm , 700*u.nm

    # Query data - search for data in those dates
    t = 0
    for i in dates:
        tstart, tend = i[0], i[1]
        #data_client = vso.VSOClient()
        data_client = vso.VSOClient(url='https://vso.nascom.nasa.gov/API/VSOi_rpc_literal.wsdl')  # workaround when VSO server fails
        # more info at: https://riot.im/app/#/room/!MeRdFpEonLoCwhoHeT:matrix.org/$14939136771403280rCVLc:matrix.org
        data_query = data_client.query(vso.attrs.Time(tstart, tend), \
                                       vso.attrs.Instrument(instrument), vso.attrs.Physobs("intensity"))
        print("Found ", len(data_query), " records from ", tstart, " to ", tend)
        print("Time range: ", data_query.time_range())
        print("Size in KB: ", data_query.total_size())
        data_dir = '/home/ivan/projects/Physics/solar/solar-physics-ex/rotation/data/set18/{file}.fits'
        results = data_client.get(data_query, path=data_dir)
        if t%2 == 0:
            time.sleep(30)
        t += 1
rotation/Acquiring_Data.ipynb
ijpulidos/solar-physics-ex
mit
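For reference, more recent sunpy releases expose the same kind of query through the Fido interface. The following is only a sketch of mine, not part of the original notebook; the time window and download path are placeholders:

```python
from sunpy.net import Fido, attrs as a

# Search HMI continuum intensity data in a sample one-minute window (placeholder dates)
result = Fido.search(a.Time('2012-07-01 00:00', '2012-07-01 00:01'),
                     a.Instrument('HMI'),
                     a.Physobs('intensity'))
print(result)

# Download the matched records to a local directory (placeholder path)
files = Fido.fetch(result, path='./data/{file}')
```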
Acquiring data from Helioviewer As of this date (04/05/2017) the VSO server for FITS files, docs.virtualsolar.org, is down and has been for some hours, so I had to use Helioviewer to download the data instead, which comes in jpg/png files.
from sunpy.net.helioviewer import HelioviewerClient
from sunpy.map import Map
from astropy.units import Quantity

hv = HelioviewerClient()
datasources = hv.get_data_sources()

# print a list of datasources and their associated ids
for observatory, instruments in datasources.items():
    for inst, detectors in instruments.items():
        for det, measurements in detectors.items():
            for meas, params in measurements.items():
                print("%s %s: %d" % (observatory, params['nickname'], params['sourceId']))

filepath = hv.download_jp2('2012/07/05 00:30:00', directory="data/HMI/set1",
                           observatory='SDO', instrument='HMI',
                           detector='HMI', measurement='continuum')
hmi = Map(filepath)

# region of interest, needed by the submap call below
xrange = Quantity([200, 550], 'arcsec')
yrange = Quantity([-400, 200], 'arcsec')
hmi.submap(xrange, yrange).peek()

# If it fails:
# < Cadair> mefistofeles: install glymur and openjpeg >1.5
rotation/Acquiring_Data.ipynb
ijpulidos/solar-physics-ex
mit
To analyze the bubble sort, note that regardless of how the items are arranged in the initial array, $n−1$ passes will be made to sort an array of size $n$. The first pass makes $n-1$ comparisons, the second makes $n-2$, and so on down to a single comparison on the last pass, so the total number of comparisons is the sum of the first $n−1$ integers. Recall that the sum of the first $n-1$ integers is $\frac{n(n-1)}{2}$, which is still $\mathcal{O}(n^2)$ comparisons. In the best case, if the list is already ordered, no exchanges will be made. However, in the worst case, every comparison will cause an exchange. On average, we exchange half of the time. Remark A bubble sort is often considered the most inefficient sorting method since it must exchange items before the final location is known. These “wasted” exchange operations are very costly. However, because the bubble sort makes passes through the entire unsorted portion of the list, it has the capability to do something most sorting algorithms cannot. In particular, if during a pass there are no exchanges, then we know that the list must already be sorted. A bubble sort can be modified to stop early if it finds that the list has become sorted. This means that for lists that require just a few passes, a bubble sort may have an advantage in that it will recognize the sorted list and stop. The following shows this modification, which is often referred to as the short bubble.
def shortBubbleSort(alist):
    exchanges = True
    passnum = len(alist)-1
    while passnum > 0 and exchanges:
        exchanges = False
        for i in range(passnum):
            # print(i)
            if alist[i]>alist[i+1]:
                exchanges = True
                alist[i], alist[i+1] = alist[i+1], alist[i]
        passnum = passnum-1
        # print('passnum = ', passnum)

alist = [54,26,93,17,77,31,44,55,20]
#alist = [17, 20, 26, 31, 44, 54, 55, 77, 93]
shortBubbleSort(alist)
print(alist)
lab2-bubble-sort.ipynb
bkimo/discrete-math-with-python
mit
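To make the comparison count concrete, here is a small check of my own (not from the original lab): it instruments a plain bubble sort and confirms that the number of comparisons on a list of length $n$ equals $n(n-1)/2$.

```python
import random

def bubbleSortCountingComparisons(alist):
    comparisons = 0
    for passnum in range(len(alist)-1, 0, -1):
        for i in range(passnum):
            comparisons += 1
            if alist[i] > alist[i+1]:
                alist[i], alist[i+1] = alist[i+1], alist[i]
    return comparisons

n = 9
data = random.sample(range(100), n)
print(bubbleSortCountingComparisons(data), n*(n-1)//2)  # both print 36
```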
Plotting Algorithmic Time Complexity of a Function using Python We can use Python's timeit.Timer together with matplotlib to build a simple plotting scheme for empirical time complexity. Here is the code; it is quite simple. Perhaps the only interesting part is the use of partial to bind the function and the $N$ parameter into a zero-argument callable for Timer. You can add your own function here and plot its time complexity.
from matplotlib import pyplot
import numpy as np
import timeit
from functools import partial
import random

def fconst(N):
    """ O(1) function """
    x = 1

def flinear(N):
    """ O(n) function """
    x = [i for i in range(N)]

def fsquare(N):
    """ O(n^2) function """
    for i in range(N):
        for j in range(N):
            x = i*j

def fshuffle(N):
    # O(N)
    random.shuffle(list(range(N)))

def fsort(N):
    x = list(range(N))
    random.shuffle(x)
    x.sort()

def plotTC(fn, nMin, nMax, nInc, nTests):
    """ Run timer and plot time complexity """
    x = []
    y = []
    for i in range(nMin, nMax, nInc):
        N = i
        testNTimer = timeit.Timer(partial(fn, N))
        t = testNTimer.timeit(number=nTests)
        x.append(i)
        y.append(t)
    p1 = pyplot.plot(x, y, 'o')
    #pyplot.legend([p1,], [fn.__name__, ])

# main() function
def main():
    print('Analyzing Algorithms...')

    #plotTC(fconst, 10, 1000, 10, 10)
    #plotTC(flinear, 10, 1000, 10, 10)
    plotTC(fsquare, 10, 1000, 10, 10)
    #plotTC(fshuffle, 10, 1000, 1000, 10)
    #plotTC(fsort, 10, 1000, 10, 10)

    # enable this in case you want to set y axis limits
    #pyplot.ylim((-0.1, 0.5))

    # show plot
    pyplot.show()

# call main
if __name__ == '__main__':
    main()
lab2-bubble-sort.ipynb
bkimo/discrete-math-with-python
mit
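As a small aside of my own (not in the original notebook), this is why partial is needed: timeit.Timer expects a callable that takes no arguments, so partial freezes N before handing the function over. A lambda would work equally well:

```python
import timeit
from functools import partial

def fsquare(N):
    for i in range(N):
        for j in range(N):
            x = i * j

# Equivalent zero-argument callables for Timer
t_partial = timeit.Timer(partial(fsquare, 200)).timeit(number=5)
t_lambda  = timeit.Timer(lambda: fsquare(200)).timeit(number=5)
print(t_partial, t_lambda)
```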
Definitions
fileUrl = "../S_lycopersicum_chromosomes.2.50.BspQI_to_EXP_REFINEFINAL1_xmap.txt" MIN_CONF = 10.0 FULL_FIG_W , FULL_FIG_H = 16, 8 CHROM_FIG_W, CHROM_FIG_H = FULL_FIG_W, 20
opticalmapping/xmap_reader.ipynb
sauloal/ipython
mit
Column type definition
col_type_int = np.int64
col_type_flo = np.float64
col_type_str = np.object

col_info =[
    [ "XmapEntryID" , col_type_int ],
    [ "QryContigID" , col_type_int ],
    [ "RefContigID" , col_type_int ],
    [ "QryStartPos" , col_type_flo ],
    [ "QryEndPos"   , col_type_flo ],
    [ "RefStartPos" , col_type_flo ],
    [ "RefEndPos"   , col_type_flo ],
    [ "Orientation" , col_type_str ],
    [ "Confidence"  , col_type_flo ],
    [ "HitEnum"     , col_type_str ],
    [ "QryLen"      , col_type_flo ],
    [ "RefLen"      , col_type_flo ],
    [ "LabelChannel", col_type_str ],
    [ "Alignment"   , col_type_str ],
]

col_names=[cf[0] for cf in col_info]
col_types=dict(zip([c[0] for c in col_info], [c[1] for c in col_info]))

col_types
opticalmapping/xmap_reader.ipynb
sauloal/ipython
mit
Read XMAP http://nbviewer.ipython.org/github/herrfz/dataanalysis/blob/master/week2/getting_data.ipynb
CONVERTERS = {
    'info': filter_conv
}

SKIP_ROWS = 9
NROWS = None

gffData = pd.read_csv(fileUrl, names=col_names, index_col='XmapEntryID', dtype=col_types,
                      header=None, skiprows=SKIP_ROWS, delimiter="\t", comment="#",
                      verbose=True, nrows=NROWS)

gffData.head()
opticalmapping/xmap_reader.ipynb
sauloal/ipython
mit
Add length columns
gffData['qry_match_len'] = abs(gffData['QryEndPos'] - gffData['QryStartPos'])
gffData['ref_match_len'] = abs(gffData['RefEndPos'] - gffData['RefStartPos'])
gffData['match_prop'   ] = gffData['qry_match_len'] / gffData['ref_match_len']

gffData = gffData[gffData['Confidence'] >= MIN_CONF]

del gffData['LabelChannel']

gffData.head()

re_matches    = re.compile("(\d+)M")
re_insertions = re.compile("(\d+)I")
re_deletions  = re.compile("(\d+)D")

def process_cigar(cigar, **kwargs):
    """
    2M3D1M1D1M1D4M1I2M1D2M1D1M2I2D9M3I3M1D6M1D2M2D1M1D6M1D1M1D1M2D2M2D1M1I1D1M1D5M2D4M2D1M2D2M1D2M1D3M1D1M1D2M3I3D1M1D1M3D2M3D1M2I1D1M2D1M1D1M1I2D3M2I1M1D2M1D1M1D1M2I3D3M3D1M2D1M1D1M1D5M2D12M
    """
    assert(set([x for x in cigar]) <= set(['M', 'D', 'I', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9']))

    cigar_matches    = 0
    cigar_insertions = 0
    cigar_deletions  = 0

    i_matches = re_matches   .finditer(cigar)
    i_inserts = re_insertions.finditer(cigar)
    i_deletes = re_deletions .finditer(cigar)

    for i in i_matches:
        n = i.group(1)
        cigar_matches += int(n)

    for i in i_inserts:
        n = i.group(1)
        cigar_insertions += int(n)

    for i in i_deletes:
        n = i.group(1)
        cigar_deletions += int(n)

    return cigar_matches, cigar_insertions, cigar_deletions

gffData[['cigar_matches', 'cigar_insertions', 'cigar_deletions']] = gffData['HitEnum'].apply(process_cigar, axis=1).apply(pd.Series, 1)

del gffData['HitEnum']

gffData.head()

re_alignment = re.compile("\((\d+),(\d+)\)")

def process_alignment(alignment, **kwargs):
    """
    Alignment (4862,48)(4863,48)(4864,47)(4865,46)(4866,45)(4867,44)(4870,43)(4873,42)(4874,41)(4875,40)(4877,40)(4878,39)(4879,38)(4880,37)(4883,36)(4884,36)(4885,35)(4886,34)(4887,33)(4888,33)(4889,32)(4890,30)(4891,30)(4892,29)(4893,28)(4894,28)(4899,27)(4900,26)(4901,25)(4902,24)(4903,23)(4904,22)(4906,21)(4907,21)(4908,20)(4910,19)(4911,18)(4912,17)(4913,16)(4915,15)(4917,14)(4918,13)(4919,12)(4920,11)(4922,10)(4923,9)(4925,8)(4927,7)(4930,6)(4931,5)(4932,3)(4933,2)(4934,1)
    """
    assert(set([x for x in alignment]) <= set(['(', ')', ',', '0', '1', '2', '3', '4', '5', '6', '7', '8', '9']))

    count_refs    = defaultdict(int)
    count_queries = defaultdict(int)

    count_refs_colapses    = 0
    count_queries_colapses = 0

    i_alignment = re_alignment.finditer(alignment)

    for i in i_alignment:
        c_r = int(i.group(1))
        c_q = int(i.group(2))
        count_refs   [c_r] += 1
        count_queries[c_q] += 1

    count_refs_colapses    = sum([count_refs[   x] for x in count_refs    if count_refs[   x] > 1])
    count_queries_colapses = sum([count_queries[x] for x in count_queries if count_queries[x] > 1])

    return len(count_refs), len(count_queries), count_refs_colapses, count_queries_colapses

gffData[['len_count_refs', 'len_count_queries', 'count_refs_colapses', 'count_queries_colapses']] = gffData['Alignment'].apply(process_alignment, axis=1).apply(pd.Series, 1)

del gffData['Alignment']

gffData.head()
opticalmapping/xmap_reader.ipynb
sauloal/ipython
mit
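To illustrate what process_cigar returns, here is a quick call of my own with a made-up pseudo-CIGAR string (not part of the notebook; it assumes the cell above has been run): two match runs totalling 3, one insertion, and three deletions.

```python
# "2M3D1M1I" -> matches = 2 + 1, insertions = 1, deletions = 3
print(process_cigar("2M3D1M1I"))  # (3, 1, 3)
```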
More stats
ref_qry = gffData[['RefContigID','QryContigID']]
ref_qry = ref_qry.sort('RefContigID')
print ref_qry.head()

ref_qry_grpby_ref = ref_qry.groupby('RefContigID', sort=True)
ref_qry_grpby_ref.head()

qry_ref = gffData[['QryContigID','RefContigID']]
qry_ref = qry_ref.sort('QryContigID')
print qry_ref.head()

qry_ref_grpby_qry = qry_ref.groupby('QryContigID', sort=True)
qry_ref_grpby_qry.head()

def stats_from_data_vals(RefContigID, QryContigID, groups, indexer, data, data_vals, valid_data_poses):
    ref_lens = [ ( x["RefStartPos"], x["RefEndPos"] ) for x in data_vals ]
    qry_lens = [ ( x["QryStartPos"], x["QryEndPos"] ) for x in data_vals ]

    num_qry_matches = []
    for RefContigID_l in groups["QryContigID_RefContigID"][QryContigID]:
        for match_pos in groups["QryContigID_RefContigID"][QryContigID][RefContigID_l]:
            if match_pos in valid_data_poses:
                num_qry_matches.append(RefContigID_l)

    #num_qry_matches  = len( groups["QryContigID_RefContigID"][QryContigID] )
    num_qry_matches  = len( set(num_qry_matches) )
    num_orientations = len( set([x["Orientation"] for x in data_vals]) )

    ref_no_gap_len = sum( [ max(x)-min(x) for x in ref_lens ] )
    ref_min_coord  = min( [ min(x)        for x in ref_lens ] )
    ref_max_coord  = max( [ max(x)        for x in ref_lens ] )
    ref_gap_len    = ref_max_coord - ref_min_coord

    qry_no_gap_len = sum( [ max(x)-min(x) for x in qry_lens ] )
    qry_min_coord  = min( [ min(x)        for x in qry_lens ] )
    qry_max_coord  = max( [ max(x)        for x in qry_lens ] )
    qry_gap_len    = qry_max_coord - qry_min_coord

    XmapEntryIDs = groups["QryContigID_XmapEntryID"][QryContigID].keys()

    Confidences = []
    for XmapEntryID in XmapEntryIDs:
        data_pos = list(indexer["XmapEntryID"][XmapEntryID])[0]
        if data_pos not in valid_data_poses:
            continue
        Confidences.append( [ data[data_pos]["Confidence"], data[data_pos]["RefContigID"] ] )

    max_confidence       = max([ x[0] for x in Confidences ])
    max_confidence_chrom = [ x[1] for x in Confidences if x[0] == max_confidence][0]

    stats = {}
    stats["_meta_is_max_confidence_for_qry_chrom" ] = max_confidence_chrom == RefContigID
    stats["_meta_len_ref_match_gapped"            ] = ref_gap_len
    stats["_meta_len_ref_match_no_gap"            ] = ref_no_gap_len
    stats["_meta_len_qry_match_gapped"            ] = qry_gap_len
    stats["_meta_len_qry_match_no_gap"            ] = qry_no_gap_len
    stats["_meta_max_confidence_for_qry"          ] = max_confidence
    stats["_meta_max_confidence_for_qry_chrom"    ] = max_confidence_chrom
    stats["_meta_num_orientations"                ] = num_orientations
    stats["_meta_num_qry_matches"                 ] = num_qry_matches
    stats["_meta_qry_matches"                     ] = ','.join( [ str(x) for x in sorted(list(set([ x[1] for x in Confidences ]))) ] )
    stats["_meta_proportion_sizes_gapped"         ] = (ref_gap_len    * 1.0)/ qry_gap_len
    stats["_meta_proportion_sizes_no_gap"         ] = (ref_no_gap_len * 1.0)/ qry_no_gap_len

    return stats

for QryContigID in sorted(QryContigIDs):
    data_poses     = list(groups["RefContigID_QryContigID"][RefContigID][QryContigID])
    all_data_poses = list(indexer["QryContigID"][QryContigID])
    data_vals      = [ data[x] for x in data_poses ]

    stats = stats_from_data_vals(RefContigID, QryContigID, groups, indexer, data, data_vals, all_data_poses)

    #print "RefContigID %4d QryContigID %6d" % ( RefContigID, QryContigID )

    for data_val in data_vals:
        cigar = data_val["HitEnum"]
        cigar_matches, cigar_insertions, cigar_deletions = process_cigar(cigar)

        Alignment = data_val["Alignment"]
        alignment_count_queries, alignment_count_refs, alignment_count_refs_colapses, alignment_count_queries_colapses = process_alignment(Alignment)

        for stat in stats:
            data_val[stat] = stats[stat]

        data_val["_meta_proportion_query_len_gapped" ] = (data_val['_meta_len_qry_match_gapped'] * 1.0)/ data_val["QryLen"]
        data_val["_meta_proportion_query_len_no_gap" ] = (data_val['_meta_len_qry_match_no_gap'] * 1.0)/ data_val["QryLen"]

        #print "  ", " ".join( ["%s %s" % (x, str(data_val[x])) for x in sorted(data_val)] )

        reporter.write( "\t".join( [ str(data_val[x]) for x in valid_fields['names' ] ] ) + "\n" )
opticalmapping/xmap_reader.ipynb
sauloal/ipython
mit
Global statistics
gffData[['Confidence', 'QryLen', 'qry_match_len', 'ref_match_len', 'match_prop']].describe()
opticalmapping/xmap_reader.ipynb
sauloal/ipython
mit
List of chromosomes
chromosomes = np.unique(gffData['RefContigID'].values)
chromosomes
opticalmapping/xmap_reader.ipynb
sauloal/ipython
mit
Quality distribution
with size_controller(FULL_FIG_W, FULL_FIG_H):
    bq = gffData.boxplot(column='Confidence')
opticalmapping/xmap_reader.ipynb
sauloal/ipython
mit