Dataset columns: markdown (string, lengths 0-37k), code (string, lengths 1-33.3k), path (string, lengths 8-215), repo_name (string, lengths 6-77), license (string, 15 classes).
That's well past the Stanford paper's accuracy - another win for CNNs!
conv1.save_weights(model_path + 'conv1.h5')
conv1.load_weights(model_path + 'conv1.h5')
deeplearning1/nbs/lesson5.ipynb
Mdround/fastai-deeplearning1
apache-2.0
Pre-trained vectors You may want to look at wordvectors.ipynb before moving on. In this section, we replicate the previous CNN, but using **pre-trained** embeddings.
def load_vectors(loc):
    return (load_array(loc+'.dat'),
            pickle.load(open(loc+'_words.pkl','rb')),
            pickle.load(open(loc+'_idx.pkl','rb')))

#vecs, words, wordidx = load_vectors('data/glove/results/6B.50d')   ## JH's original
vecs, words, wordidx = load_vectors('data/glove/results/6B.100d')   ## MDR's experiment
deeplearning1/nbs/lesson5.ipynb
Mdround/fastai-deeplearning1
apache-2.0
The GloVe word ids and IMDB word ids use different indexes, so we create a simple function that builds an embedding matrix using the indexes from IMDB and the embeddings from GloVe (where they exist).
def create_emb():
    n_fact = vecs.shape[1]
    emb = np.zeros((vocab_size, n_fact))

    for i in range(1, len(emb)):
        word = idx2word[i]
        if word and re.match(r"^[a-zA-Z0-9\-]*$", word):
            src_idx = wordidx[word]
            emb[i] = vecs[src_idx]
        else:
            # If we can't find the word in glove, randomly initialize
            emb[i] = normal(scale=0.6, size=(n_fact,))

    # This is our "rare word" id - we want to randomly initialize
    emb[-1] = normal(scale=0.6, size=(n_fact,))
    emb /= 3
    return emb

emb = create_emb()
deeplearning1/nbs/lesson5.ipynb
Mdround/fastai-deeplearning1
apache-2.0
We pass our embedding matrix to the Embedding constructor, and set it to non-trainable.
model = Sequential([
    #Embedding(vocab_size, 50,
    Embedding(vocab_size, 100, input_length=seq_len, dropout=0.2,
              weights=[emb], trainable=False),
    Dropout(0.25),   ## JH (0.25)
    Convolution1D(64, 5, border_mode='same', activation='relu'),
    Dropout(0.25),   ## JH (0.25)
    MaxPooling1D(),
    Flatten(),
    Dense(100, activation='relu'),
    Dropout(0.3),    ## JH (0.7)
    Dense(1, activation='sigmoid')])
model.compile(loss='binary_crossentropy', optimizer=Adam(), metrics=['accuracy'])
deeplearning1/nbs/lesson5.ipynb
Mdround/fastai-deeplearning1
apache-2.0
I get better results with the 100d embedding than I do with the 50d embedding, after 4 epochs. - MDR
# model.optimizer.lr = 1e-3   ## MDR: added to the 50d for marginally faster training than I was getting
set_gpu_fan_speed(90)
model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=4, batch_size=64)
set_gpu_fan_speed(0)

model.save_weights(model_path+'glove100_wt1.h5')   ## care, with the weight count!
model.load_weights(model_path+'glove50_wt1.h5')
model.load_weights(model_path+'glove100_wt1.h5')
deeplearning1/nbs/lesson5.ipynb
Mdround/fastai-deeplearning1
apache-2.0
MDR: so my initial results were nowhere near as good, but we're not overfitting yet. MDR: my results are nowhere near JH's! [ ] Investigate this! We have already beaten our previous model! But let's fine-tune the embedding weights - especially since the words we couldn't find in GloVe just have random embeddings.
model.layers[0].trainable = True
model.optimizer.lr = 1e-4
model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=4, batch_size=64)
deeplearning1/nbs/lesson5.ipynb
Mdround/fastai-deeplearning1
apache-2.0
"As expected, that's given us a nice little boost. :)" - MDR: actually made it worse! For both 50d and 100d cases!
model.save_weights(model_path+'glove50.h5')
deeplearning1/nbs/lesson5.ipynb
Mdround/fastai-deeplearning1
apache-2.0
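A hedged aside on the fine-tuning cell above (an assumption about Keras behaviour, not something the notebook states): in Keras 1.x a layer's trainable flag is generally captured when the model is compiled, so toggling model.layers[0].trainable after compilation may not take effect until the model is recompiled. A minimal sketch of that step with an explicit recompile, reusing only objects defined earlier:

model.layers[0].trainable = True
# Recompile so the optimizer picks up the embedding layer's weights as trainable
model.compile(loss='binary_crossentropy', optimizer=Adam(1e-4), metrics=['accuracy'])
model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=4, batch_size=64)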
Multi-size CNN This is an implementation of a multi-size CNN as shown in Ben Bowles' excellent blog post.
from keras.layers import Merge
deeplearning1/nbs/lesson5.ipynb
Mdround/fastai-deeplearning1
apache-2.0
We use the functional API to create multiple conv layers of different sizes, and then concatenate them.
#graph_in = Input((vocab_size, 50))
graph_in = Input((vocab_size, 100))   ## MDR - for 100d embedding
convs = []
for fsz in range(3, 6):
    x = Convolution1D(64, fsz, border_mode='same', activation="relu")(graph_in)
    x = MaxPooling1D()(x)
    x = Flatten()(x)
    convs.append(x)
out = Merge(mode="concat")(convs)
graph = Model(graph_in, out)

emb = create_emb()
deeplearning1/nbs/lesson5.ipynb
Mdround/fastai-deeplearning1
apache-2.0
We then replace the conv/max-pool layer in our original CNN with the concatenated conv layers.
model = Sequential([
    #Embedding(vocab_size, 50,
    Embedding(vocab_size, 100, input_length=seq_len, dropout=0.2, weights=[emb]),
    Dropout(0.2),
    graph,
    Dropout(0.5),
    Dense(100, activation="relu"),
    Dropout(0.7),
    Dense(1, activation='sigmoid')])
model.compile(loss='binary_crossentropy', optimizer=Adam(), metrics=['accuracy'])
deeplearning1/nbs/lesson5.ipynb
Mdround/fastai-deeplearning1
apache-2.0
MDR: it turns out that, in this experiment, there's no improvement from using the 100d embedding over the 50d.
set_gpu_fan_speed(90)
model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=2, batch_size=64)
set_gpu_fan_speed(0)
deeplearning1/nbs/lesson5.ipynb
Mdround/fastai-deeplearning1
apache-2.0
Interestingly, I found that in this case I got the best results when I started with the embedding layer trainable, and then set it to non-trainable after a couple of epochs. I have no idea why! MDR: (does it limit overfitting, maybe?) ... anyway, my run of the same code achieved nearly the same results, so I'm much happier.
model.save_weights(model_path+'glove50_conv2_wt1.h5')
model.load_weights(model_path+'glove50_conv2_wt1.h5')
deeplearning1/nbs/lesson5.ipynb
Mdround/fastai-deeplearning1
apache-2.0
MDR: I want to test this statement from JH, above, by running another couple of epochs. First let's reduce the LR.
model.optimizer.lr = 1e-5
set_gpu_fan_speed(90)
model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=2, batch_size=64)
set_gpu_fan_speed(0)
deeplearning1/nbs/lesson5.ipynb
Mdround/fastai-deeplearning1
apache-2.0
Okay, so that didn't help. Reload the weights from before.
model.load_weights(model_path+'glove50_conv2_wt1.h5')
deeplearning1/nbs/lesson5.ipynb
Mdround/fastai-deeplearning1
apache-2.0
MDR: following JH's plan, from this point.
model.layers[0].trainable = False
model.optimizer.lr = 1e-5
set_gpu_fan_speed(90)
model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=4, batch_size=64)
set_gpu_fan_speed(0)
deeplearning1/nbs/lesson5.ipynb
Mdround/fastai-deeplearning1
apache-2.0
This more complex architecture has given us another boost in accuracy. MDR: although I didn't see a huge advantage, personally. LSTM We haven't covered this bit yet! MDR: so there's no preloaded embedding here - it's a fresh, random set?
model = Sequential([
    Embedding(vocab_size, 32, input_length=seq_len, mask_zero=True,
              W_regularizer=l2(1e-6), dropout=0.2),
    LSTM(100, consume_less='gpu'),
    Dense(1, activation='sigmoid')])
model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])
model.summary()
deeplearning1/nbs/lesson5.ipynb
Mdround/fastai-deeplearning1
apache-2.0
MDR: hang on! These summary() outputs look quite different to me! Not least that this is apparently the 13th LSTM he's produced (in this session?) - and yet I've got a higher-numbered dense layer than him. Eh? But then I reach better results in fewer epochs than he does, this time around. Compare the times, and the more stable convergence in my results. Weird. Still, that's my first LSTM!
set_gpu_fan_speed(90)
model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=5, batch_size=64)
set_gpu_fan_speed(0)

model.save_weights(model_path+'glove50_lstm1_wt1.h5')
deeplearning1/nbs/lesson5.ipynb
Mdround/fastai-deeplearning1
apache-2.0
MDR: let's see if it's possible to improve on that.
model.optimizer.lr = 1e-5
model.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=5, batch_size=64)
deeplearning1/nbs/lesson5.ipynb
Mdround/fastai-deeplearning1
apache-2.0
MDR: Conclusion: that may be all that's achievable with this dataset, of course. It's sentiment, after all! MDR's LSTM + preloaded embeddings God knows whether this will work. Let's see if I can create an LSTM layer on top of pretrained embeddings...
model2 = Sequential([
    Embedding(vocab_size, 100, input_length=seq_len,
              #mask_zero=True, W_regularizer=l2(1e-6),   ## used in lstm above - not needed?
              dropout=0.2, weights=[emb], trainable=False),
    LSTM(100, consume_less='gpu'),
    Dense(1, activation='sigmoid')])
model2.summary()
model2.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])

set_gpu_fan_speed(90)
model2.fit(trn, labels_train, validation_data=(test, labels_test), nb_epoch=4, batch_size=64)
set_gpu_fan_speed(0)
deeplearning1/nbs/lesson5.ipynb
Mdround/fastai-deeplearning1
apache-2.0
The input data
# the simulated SNP database file
SNPS = "/tmp/oaks.snps.hdf5"

# download example hdf5 dataset (158Mb, takes ~2-3 minutes)
URL = "https://www.dropbox.com/s/x6a4i47xqum27fo/virentes_ref.snps.hdf5?raw=1"
ipa.download(url=URL, path=SNPS);
testdocs/analysis/cookbook-pca-empirical.ipynb
dereneaton/ipyrad
gpl-3.0
Make an IMAP dictionary (map popnames to list of samplenames)
IMAP = {
    "virg": ["LALC2", "TXWV2", "FLBA140", "FLSF33", "SCCU3"],
    "mini": ["FLSF47", "FLMO62", "FLSA185", "FLCK216"],
    "gemi": ["FLCK18", "FLSF54", "FLWO6", "FLAB109"],
    "bran": ["BJSL25", "BJSB3", "BJVL19"],
    "fusi": ["MXED8", "MXGT4", "TXMD3", "TXGR3"],
    "sagr": ["CUCA4", "CUSV6", "CUVN10"],
    "oleo": ["MXSA3017", "BZBB1", "HNDA09", "CRL0030", "CRL0001"],
}

MINMAP = {
    "virg": 3, "mini": 3, "gemi": 3, "bran": 2,
    "fusi": 2, "sagr": 2, "oleo": 3,
}
testdocs/analysis/cookbook-pca-empirical.ipynb
dereneaton/ipyrad
gpl-3.0
Initiate tool with filtering options
tool = ipa.pca(data=SNPS, minmaf=0.05, imap=IMAP, minmap=MINMAP, impute_method="sample")
testdocs/analysis/cookbook-pca-empirical.ipynb
dereneaton/ipyrad
gpl-3.0
Run PCA Unlinked SNPs are automatically sampled from each locus. By setting nreplicates=N the subsampling procedure is repeated N times to show variation over the subsampled SNPs. The imap dictionary is used in the .draw() function to color points, and can be overridden to color points differently from the IMAP used in the tool above.
tool.run(nreplicates=10)
tool.draw(imap=IMAP);

# a convenience function for plotting across three axes
tool.draw_panels(0, 1, 2, imap=IMAP);
testdocs/analysis/cookbook-pca-empirical.ipynb
dereneaton/ipyrad
gpl-3.0
Run TSNE t-SNE is a manifold learning algorithm that can sometimes better project data into a 2-dimensional plane. The distances between points in this space are harder to interpret.
tool.run_tsne(perplexity=5, seed=333)
tool.draw(imap=IMAP);
testdocs/analysis/cookbook-pca-empirical.ipynb
dereneaton/ipyrad
gpl-3.0
Run UMAP UMAP is similar to t-SNE, but the distances between clusters are more representative of the differences between groups. This requires another package; if it is not yet installed, the tool will ask you to install it.
tool.run_umap(n_neighbors=13, seed=333)
tool.draw(imap=IMAP);
testdocs/analysis/cookbook-pca-empirical.ipynb
dereneaton/ipyrad
gpl-3.0
Missing data with imputation Missing data has large effects on dimensionality reduction methods, and it is best to (1) minimize the amount of missing data in your input data set by filtering, and (2) impute missing data values. In the examples above, data are imputed using the 'sample' method, which probabilistically samples alleles based on the allele frequency of the group that a taxon is assigned to in IMAP. It is good to compare this to a case where imputation is performed without IMAP assignments, to assess the impact of the a priori assignments. Although this comparison is useful, assigning taxa to groups with IMAP dictionaries for imputation is expected to yield more accurate imputation.
# allow very little missing data
import itertools

tool = ipa.pca(
    data=SNPS,
    imap={'samples': list(itertools.chain(*[i for i in IMAP.values()]))},
    minmaf=0.05,
    mincov=0.9,
    impute_method="sample",
    quiet=True,
)
tool.run(nreplicates=10, seed=123)
tool.draw(imap=IMAP);
testdocs/analysis/cookbook-pca-empirical.ipynb
dereneaton/ipyrad
gpl-3.0
Statistics
# variance explained by each PC axes in the first replicate run
tool.variances[0].round(2)

# PC loadings in the first replicate
tool.pcs(0)
testdocs/analysis/cookbook-pca-empirical.ipynb
dereneaton/ipyrad
gpl-3.0
Styling plots (see toyplot documentation) The .draw() function returns a canvas and axes object from toyplot which can be further modified and styled.
# get plot objects, several styling options to draw
canvas, axes = tool.draw(imap=IMAP, size=8, width=400);

# various axes styling options shown for x axis
axes.x.ticks.show = True
axes.x.spine.style['stroke-width'] = 1.5
axes.x.ticks.labels.style['font-size'] = '13px'
axes.x.label.style['font-size'] = "15px"
axes.x.label.offset = "22px"
testdocs/analysis/cookbook-pca-empirical.ipynb
dereneaton/ipyrad
gpl-3.0
Now for a bunch of helpers. We'll use these in a moment; skip over them for now.
def quantise(expr, quantise_to):
    if isinstance(expr, sympy.Float):
        return expr.func(round(float(expr) / quantise_to) * quantise_to)
    elif isinstance(expr, sympy.Symbol):
        return expr
    else:
        return expr.func(*[quantise(arg, quantise_to) for arg in expr.args])


class SymbolicFn(eqx.Module):
    fn: callable
    parameters: jnp.ndarray

    def __call__(self, x):
        # Dummy batch/unbatching. PySR assumes its JAX'd symbolic functions act on
        # tensors with a single batch dimension.
        return jnp.squeeze(self.fn(x[None], self.parameters))


class Stack(eqx.Module):
    modules: List[eqx.Module]

    def __call__(self, x):
        return jnp.stack([module(x) for module in self.modules], axis=-1)


def expr_size(expr):
    return sum(expr_size(v) for v in expr.args) + 1


def _replace_parameters(expr, parameters, i_ref):
    if isinstance(expr, sympy.Float):
        i_ref[0] += 1
        return expr.func(parameters[i_ref[0]])
    elif isinstance(expr, sympy.Symbol):
        return expr
    else:
        return expr.func(
            *[_replace_parameters(arg, parameters, i_ref) for arg in expr.args]
        )


def replace_parameters(expr, parameters):
    i_ref = [-1]  # Distinctly sketchy approach to making this conversion.
    return _replace_parameters(expr, parameters, i_ref)
examples/symbolic_regression.ipynb
patrick-kidger/diffrax
apache-2.0
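A quick, hypothetical sanity check of the quantise helper above (my own illustration, assuming sympy is imported as in the surrounding example):

import sympy

x = sympy.Symbol("x")
expr = 2.337 * x + 0.496
# Every float in the expression tree is rounded to the nearest multiple of 0.01
print(quantise(expr, 0.01))   # expected: 2.34*x + 0.5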
Okay, let's get started. We start by running the Neural ODE example. Then we extract the learnt neural vector field, and symbolically regress across this. Finally we fine-tune the resulting symbolic expression.
def main(
    symbolic_dataset_size=2000,
    symbolic_num_populations=100,
    symbolic_population_size=20,
    symbolic_migration_steps=4,
    symbolic_mutation_steps=30,
    symbolic_descent_steps=50,
    pareto_coefficient=2,
    fine_tuning_steps=500,
    fine_tuning_lr=3e-3,
    quantise_to=0.01,
):
    #
    # First obtain a neural approximation to the dynamics.
    # We begin by running the previous example.
    #
    # Runs the Neural ODE example.
    # This defines the variables `ts`, `ys`, `model`.
    print("Training neural differential equation.")
    %run neural_ode.ipynb

    #
    # Now symbolically regress across the learnt vector field, to obtain a Pareto
    # frontier of symbolic equations, that trades loss against complexity of the
    # equation. Select the "best" from this frontier.
    #
    print("Symbolically regressing across the vector field.")
    vector_field = model.func.mlp  # noqa: F821
    dataset_size, length_size, data_size = ys.shape  # noqa: F821
    in_ = ys.reshape(dataset_size * length_size, data_size)  # noqa: F821
    in_ = in_[:symbolic_dataset_size]
    out = jax.vmap(vector_field)(in_)
    with tempfile.TemporaryDirectory() as tempdir:
        symbolic_regressor = pysr.PySRRegressor(
            niterations=symbolic_migration_steps,
            ncyclesperiteration=symbolic_mutation_steps,
            populations=symbolic_num_populations,
            npop=symbolic_population_size,
            optimizer_iterations=symbolic_descent_steps,
            optimizer_nrestarts=1,
            procs=1,
            verbosity=0,
            tempdir=tempdir,
            temp_equation_file=True,
            output_jax_format=True,
        )
        symbolic_regressor.fit(in_, out)
        best_equations = symbolic_regressor.get_best()
        expressions = [b.sympy_format for b in best_equations]
        symbolic_fns = [
            SymbolicFn(b.jax_format["callable"], b.jax_format["parameters"])
            for b in best_equations
        ]

    #
    # Now the constants in this expression have been optimised for regressing across
    # the neural vector field. This was good enough to obtain the symbolic expression,
    # but won't quite be perfect -- some of the constants will be slightly off.
    #
    # To fix this we now plug our symbolic function back into the original dataset
    # and apply gradient descent.
    #
    print("Optimising symbolic expression.")
    symbolic_fn = Stack(symbolic_fns)
    flat, treedef = jax.tree_flatten(
        model, is_leaf=lambda x: x is model.func.mlp  # noqa: F821
    )
    flat = [symbolic_fn if f is model.func.mlp else f for f in flat]  # noqa: F821
    symbolic_model = jax.tree_unflatten(treedef, flat)

    @eqx.filter_grad
    def grad_loss(symbolic_model):
        vmap_model = jax.vmap(symbolic_model, in_axes=(None, 0))
        pred_ys = vmap_model(ts, ys[:, 0])  # noqa: F821
        return jnp.mean((ys - pred_ys) ** 2)  # noqa: F821

    optim = optax.adam(fine_tuning_lr)
    opt_state = optim.init(eqx.filter(symbolic_model, eqx.is_inexact_array))

    @eqx.filter_jit
    def make_step(symbolic_model, opt_state):
        grads = grad_loss(symbolic_model)
        updates, opt_state = optim.update(grads, opt_state)
        symbolic_model = eqx.apply_updates(symbolic_model, updates)
        return symbolic_model, opt_state

    for _ in range(fine_tuning_steps):
        symbolic_model, opt_state = make_step(symbolic_model, opt_state)

    #
    # Finally we round each constant to the nearest multiple of `quantise_to`.
    #
    trained_expressions = []
    for module, expression in zip(symbolic_model.func.mlp.modules, expressions):
        expression = replace_parameters(expression, module.parameters.tolist())
        expression = quantise(expression, quantise_to)
        trained_expressions.append(expression)

    print(f"Expressions found: {trained_expressions}")


main()
examples/symbolic_regression.ipynb
patrick-kidger/diffrax
apache-2.0
Nesting If Statements Conditional statements placed within other conditional statements are called 'nested'.
if myVar > 5:
    print('Above 5!')
    if myVar > 20:
        print('Above 20!')
public/tutorials/python/1_r2python-translation/3_controlFlow.ipynb
monicathieu/cu-psych-r-tutorial
mit
Else Statements It is also very helpful to specify code that we want to run if a condition is NOT met. Else statements in Python always follow if statements, and consist of the following syntax: if (condition X): actions... else: actions...
myVar2 = 'dog'

if myVar2 == 'cat':
    print('meow')
else:
    print('woof')
public/tutorials/python/1_r2python-translation/3_controlFlow.ipynb
monicathieu/cu-psych-r-tutorial
mit
Else If & Sequential If Statements We may also want to specify a series of conditions Python always evaluates conditions on the same nest level in order, from top to bottom Elif means 'else if' -- only run this statement if the previous if statement condition was not met, and the condition following is met Sequential if statements on the same level will run if the statement condition is met, regardless of the previous
myVar2 = 'dog'

if len(myVar2) == 3:
    print('3 letters long')
elif myVar2 == 'dog':
    print('woof')
else:
    print('unknown animal')

myVar2 = 'dog'

if len(myVar2) == 3:
    print('3 letters long')
if myVar2 == 'dog':
    print('woof')
else:
    print('unknown animal')
public/tutorials/python/1_r2python-translation/3_controlFlow.ipynb
monicathieu/cu-psych-r-tutorial
mit
Loops Looping is a great way to apply the same operation to many pieces of data. Looping through a list
nums = [2, 3, 4, -1, 7]

for number in nums:
    print(number)
public/tutorials/python/1_r2python-translation/3_controlFlow.ipynb
monicathieu/cu-psych-r-tutorial
mit
Looping a certain number of times
for i in range(10): print(i)
public/tutorials/python/1_r2python-translation/3_controlFlow.ipynb
monicathieu/cu-psych-r-tutorial
mit
Fancy looping with enumerate
stringList = ['banana', 'mango', 'kiwi', 'blackberry']

# fancy looping with enumerate()
for index, item in enumerate(stringList):
    print(index, item)
public/tutorials/python/1_r2python-translation/3_controlFlow.ipynb
monicathieu/cu-psych-r-tutorial
mit
Nested loops
for i in stringList:
    for j in range(4):
        print(i, j)
public/tutorials/python/1_r2python-translation/3_controlFlow.ipynb
monicathieu/cu-psych-r-tutorial
mit
Load experimental data
datadirs = ['/Users/ckemere/Development/Data/Buzsaki']

fileroot = next((dir for dir in datadirs if os.path.isdir(dir)), None)   # conda install pandas=0.19.2

if fileroot is None:
    raise FileNotFoundError('datadir not found')

load_from_nel = True

# load from nel file:
if load_from_nel:
    jar = nel.load_pkl(os.path.join(fileroot, 'gor01vvp01_processed_speed.nel'))
    exp_data = jar.exp_data
    aux_data = jar.aux_data
    del jar

    with pd.HDFStore(os.path.join(fileroot, 'DibaMetadata.h5')) as store:
        df = store.get('Session_Metadata')
        df2 = store.get('Subset_Metadata')
score_bayes_parallel-dask.ipynb
ckemere/CloudShuffles
gpl-3.0
Define subset of sessions to score
# restrict sessions to explore to a smaller subset
min_n_placecells = 16
min_n_PBEs = 27   # 27 total events ==> minimum 21 events in training set

df2_subset = df2[(df2.n_PBEs >= min_n_PBEs) & (df2.n_placecells >= min_n_placecells)]

sessions = df2_subset['time'].values.tolist()
segments = df2_subset['segment'].values.tolist()

print('Evaluating subset of {} sessions'.format(len(sessions)))
df2_subset.sort_values(by=['n_PBEs', 'n_placecells'], ascending=[0, 0])
score_bayes_parallel-dask.ipynb
ckemere/CloudShuffles
gpl-3.0
Parallel scoring NOTE: it is relatively easy (syntax-wise) to score each session as a parallel task, but since the Bayesian scoring takes such a long time to compute, we can be more efficient (higher % utilization) by further parallelizing over events, and not just over sessions. This further level of parallelization makes the bookkeeping a little ugly, so I provide the code for both approaches here.
n_jobs = 20        # set this equal to number of cores
n_shuffles = 100   # 5000
n_samples = 35000  # 35000
w = 3              # single sided bandwidth (0 means only include bin whose center is under line, 3 means a total of 7 bins)

import matplotlib.pyplot as plt
%matplotlib inline

# Parallelize by EVENT
import dask
import distributed.joblib
from joblib import Parallel, delayed
from joblib import parallel_backend

# A function that can be called to do work:
def work_events(arg):
    # Split the list to individual variables:
    session, segment, ii, bst, tc = arg
    scores, shuffled_scores, percentiles = nel.analysis.replay.score_Davidson_final_bst_fast(
        bst=bst, tuningcurve=tc, w=w, n_shuffles=n_shuffles, n_samples=n_samples)
    return (session, segment, ii, scores, shuffled_scores, percentiles)

# List of instances to pass to work():
# unroll all events:
parallel_events = []
for session, segment in zip(sessions, segments):
    for nn in range(aux_data[session][segment]['PBEs'].n_epochs):
        parallel_events.append((session, segment, nn,
                                aux_data[session][segment]['PBEs'][nn],
                                aux_data[session][segment]['tc']))

#parallel_results = list(map(work_events, parallel_events))
with parallel_backend('dask.distributed', scheduler_host='35.184.42.12:8786'):
    # Anything returned by work() can be stored:
    parallel_results = Parallel(n_jobs=n_jobs, verbose=1)(map(delayed(work_events), parallel_events))

# standardize parallel results
bdries_ = [aux_data[session][segment]['PBEs'].n_epochs
           for session, segment in zip(sessions, segments)]
bdries = np.cumsum(np.insert(bdries_, 0, 0))
bdries

sessions_ = np.array([result[0] for result in parallel_results])
segments_ = np.array([result[1] for result in parallel_results])
idx = [result[2] for result in parallel_results]
scores_bayes_evt = np.array([float(result[3]) for result in parallel_results])
scores_bayes_shuffled_evt = np.array([result[4].squeeze() for result in parallel_results])
scores_bayes_percentile_evt = np.array([float(result[5]) for result in parallel_results])

results = {}
for nn in range(len(bdries)-1):
    session = np.unique(sessions_[bdries[nn]:bdries[nn+1]])
    if len(session) > 1:
        raise ValueError("parallel results in different format / order than expected!")
    session = session[0]
    segment = np.unique(segments_[bdries[nn]:bdries[nn+1]])
    if len(segment) > 1:
        raise ValueError("parallel results in different format / order than expected!")
    segment = segment[0]
    try:
        results[session][segment]['scores_bayes'] = scores_bayes_evt[bdries[nn]:bdries[nn+1]]
    except KeyError:
        try:
            results[session][segment] = dict()
            results[session][segment]['scores_bayes'] = scores_bayes_evt[bdries[nn]:bdries[nn+1]]
        except KeyError:
            results[session] = dict()
            results[session][segment] = dict()
            results[session][segment]['scores_bayes'] = scores_bayes_evt[bdries[nn]:bdries[nn+1]]
    results[session][segment]['scores_bayes_shuffled'] = scores_bayes_shuffled_evt[bdries[nn]:bdries[nn+1]]
    results[session][segment]['scores_bayes_percentile'] = scores_bayes_percentile_evt[bdries[nn]:bdries[nn+1]]

print('done packing results')
score_bayes_parallel-dask.ipynb
ckemere/CloudShuffles
gpl-3.0
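For completeness, a hedged sketch of the simpler session-level parallelization mentioned above (this is not the code that was actually run; it assumes score_Davidson_final_bst_fast can be handed a whole session's PBEs object at once, and it reuses only names defined in the cell above):

def work_sessions(arg):
    # One task per (session, segment): score all of that session's PBEs in one call
    session, segment, bst, tc = arg
    scores, shuffled_scores, percentiles = nel.analysis.replay.score_Davidson_final_bst_fast(
        bst=bst, tuningcurve=tc, w=w, n_shuffles=n_shuffles, n_samples=n_samples)
    return (session, segment, scores, shuffled_scores, percentiles)

parallel_sessions = [(session, segment,
                      aux_data[session][segment]['PBEs'],
                      aux_data[session][segment]['tc'])
                     for session, segment in zip(sessions, segments)]

with parallel_backend('dask.distributed', scheduler_host='35.184.42.12:8786'):
    session_results = Parallel(n_jobs=n_jobs, verbose=1)(
        map(delayed(work_sessions), parallel_sessions))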
Save results to disk
jar = nel.ResultsContainer(results=results,
                           description='gor01 and vvp01 speed restricted results for best 20 candidate sessions')
jar.save_pkl('score_bayes_all_sessions.nel')
score_bayes_parallel-dask.ipynb
ckemere/CloudShuffles
gpl-3.0
Figure 1. Schematics of the setup of an atomic force microscope (Adapted from reference 6) In AFM the interacting probe is in general a rectangular cantilever (please check the image above showing the AFM setup, where you will be able to see the probe!). Probably the most used dynamic technique in AFM is Tapping Mode. In this method the probe taps a surface in an intermittent-contact fashion. The purpose of tapping the probe over the surface instead of dragging it is to reduce the frictional forces that may cause damage to soft samples and wear of the tip. Besides, with tapping mode we can get more information about the sample! HOW??? In Tapping Mode AFM the cantilever is shaken to oscillate up and down at a specific frequency (most of the time it is shaken at its natural frequency). Then the deflection of the tip is measured at that frequency to get information about the sample. Besides acquiring the topography of the sample, the phase lag between the excitation and the response of the cantilever can be related to compositional material properties! In other words, one can simultaneously get information about how the surface looks and also get a compositional mapping of the surface! THAT SOUNDS POWERFUL!!!
fig2 = path + '/Fig2DHO.jpg'
Image(filename=fig2)
jupyter_notebooks/AFM_simulations/IntroductionToSimulations/IntroToAFMSimulations.ipynb
pycroscopy/pycroscopy
mit
Figure 2. Schematics of a damped harmonic oscillator without tip-sample interactions Analytical Solution The motion of the probe can be derived using the Euler-Bernoulli equation. However, that equation has partial derivatives (it depends on time and space) because it deals with finding the position of each point of the beam at a certain time, which can make the problem too expensive computationally for our purposes. In our case, we have the advantage that we are only concerned about the position of the tip (which is the only part of the probe that will interact with the sample). As a consequence, many researchers in AFM have successfully made approximations using a simple point-mass model [see ref. 2] like the one in figure 2 (with, of course, the addition of tip-sample forces! We will see more about this later). First we will study the system of figure 2 AS IS (without the addition of a tip-sample force term). WHY? Because we want an analytical solution as a reference for how our integration schemes are working, and the addition of tip-sample forces to our equation will prevent the acquisition of straightforward analytical solutions :( Then, the equation of motion of the damped harmonic oscillator of figure 2, which is DRIVEN COSINUSOIDALLY (remember that we are exciting our probe during the scanning process), is: $$\begin{equation} m \frac{d^2z}{dt^2} = - k z - \frac{m\omega_0}{Q}\frac{dz}{dt} + F_0\cos(\omega t) \end{equation}$$ where k is the stiffness of the cantilever, z is the vertical position of the tip with respect to the cantilever base position, Q is the quality factor (which is related to the damping of the system), $F_0$ is the driving force amplitude, $\omega_0$ is the resonance frequency of the oscillator, and $\omega$ is the frequency of the oscillating force. The analytical solution of the above ODE is composed of a transient term and a steady-state term. We are only interested in the steady-state part because during the scanning process it is assumed that the probe has achieved that state. The steady-state solution is given by: $$\begin{equation} A\cos (\omega t - \phi) \end{equation}$$ where A is the steady-state amplitude of the oscillation response, which depends on the cantilever parameters and the driving parameters, as can be seen in the following relation: $$\begin{equation} A = \frac{F_0/m}{\sqrt{(\omega_0^2-\omega^2)^2+(\frac{\omega\omega_0}{Q})^2}} \end{equation}$$ and $\phi$ is given by: $$\begin{equation} \phi = \arctan \big( \frac{\omega\omega_0/Q}{\omega_0^2 - \omega^2} \big) \end{equation}$$ Let's first name the variables that we are going to use. Because we are dealing with a damped harmonic oscillator model we have to include variables such as: spring stiffness, resonance frequency, quality factor (related to the damping coefficient), target oscillation amplitude, etc.
k = 10.
fo = 45000
wo = 2.0*numpy.pi*fo
Q = 25.
period = 1./fo
m = k/(wo**2)
Ao = 60.e-9
Fd = k*Ao/Q

spp = 28.  # time steps per period
dt = period/spp  #Intentionally chosen to be quite big
#you can decrease dt by increasing the number of steps per period
simultime = 100.*period
N = int(simultime/dt)

#Analytical solution
time_an = numpy.linspace(0,simultime,N)  #time array for the analytical solution
z_an = numpy.zeros(N)                    #position array for the analytical solution

#Driving force amplitude this gives us 60nm of amp response (A_target*k/Q)
Fo_an = 24.0e-9
A_an = Fo_an*Q/k    #when driven at resonance A is simply Fo*Q/k
phi = numpy.pi/2    #when driven at resonance the phase is pi/2
z_an[:] = A_an*numpy.cos(wo*time_an[:] - phi)  #this gets the analytical solution

#slicing the array to include only steady state (only the last 10 periods)
z_an_steady = z_an[int(90.*period/dt):]
time_an_steady = time_an[int(90.*period/dt):]

plt.title('Plot 1 Analytical Steady State Solution of Eq 1', fontsize=20)
plt.xlabel('time, ms', fontsize=18)
plt.ylabel('z_Analytical, nm', fontsize=18)
plt.plot(time_an_steady*1e3, z_an_steady*1e9, 'b--')
jupyter_notebooks/AFM_simulations/IntroductionToSimulations/IntroToAFMSimulations.ipynb
pycroscopy/pycroscopy
mit
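As a small cross-check of the amplitude formula in the text (a sketch of mine, using only the expression above and the parameters defined in the previous cell), one can verify that at resonance it reduces to the shortcut Fo*Q/k used in the code:

def amplitude(w, Fo=Fo_an):
    # A(w) = (F0/m) / sqrt((w0^2 - w^2)^2 + (w*w0/Q)^2), as given in the text
    return (Fo/m) / numpy.sqrt((wo**2 - w**2)**2 + (w*wo/Q)**2)

print(amplitude(wo))   # ~6.0e-8 m, i.e. the 60 nm target amplitude
print(Fo_an*Q/k)       # the resonance shortcut A = Fo*Q/k used above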
Approximating through Euler's method If we perform a Taylor series expansion of $z_{n+1}$ around $z_{n}$ we get: $$z_{n+1} = z_{n} + \Delta t\frac{dz}{dt}\big|_n + {\mathcal O}(\Delta t^2)$$ The Euler formula neglects terms of order two or higher, ending up as: $$\begin{equation} z_{n+1} = z_{n} + \Delta t\frac{dz}{dt}\big|_n \end{equation}$$ It can be easily seen that the truncation error of the Euler algorithm is of the order ${\mathcal O}(\Delta t^2)$. This is a second order ODE, but we can convert it to a system of two coupled 1st order differential equations. To do it we will define $\frac{dz}{dt} = v$. Then equation (1) will be decomposed as: $$\begin{equation} \frac{dz}{dt} = v \end{equation}$$ $$\begin{equation} \frac{dv}{dt} = \frac{1}{m}\left(-kz-\frac{m\omega_0}{Q}v+F_0\cos(\omega t)\right) \end{equation}$$ These coupled equations will be used during Euler's approximation and also during our integration using the Runge-Kutta 4 method.
t = numpy.linspace(0,simultime,N)  #time grid for Euler method

#Initializing variables for Euler
vdot_E = numpy.zeros(N)
v_E = numpy.zeros(N)
z_E = numpy.zeros(N)

#Initial conditions
z_E[0] = 0.0
v_E[0] = 0.0

for i in range(N-1):
    vdot_E[i] = ( ( -k*z_E[i] - (m*wo/Q)*(v_E[i]) +\
                  Fd*numpy.cos(wo*t[i]) ) / m)   #Equation 7
    v_E[i+1] = v_E[i] + dt*vdot_E[i]             #Based on equation 5
    z_E[i+1] = z_E[i] + v_E[i]*dt                #Equation 5

plt.title('Plot 2 Eulers approximation of Equation1', fontsize=20);
plt.plot(t*1e3, z_E*1e9);
plt.xlabel('time, s', fontsize=18);
plt.ylabel('z_Euler, nm', fontsize=18);
jupyter_notebooks/AFM_simulations/IntroductionToSimulations/IntroToAFMSimulations.ipynb
pycroscopy/pycroscopy
mit
This looks totally unphysical! We were expecting a steady state oscillation of 60 nm and we got a huge oscillation that keeps growing. Can it be due to the scheme? The timestep that we have chosen is quite big with respect to the oscillation period. We have intentionally set it to ONLY 28 time steps per period (that could be the reason why the scheme can't capture the physics of the problem). That's quite discouraging. However, the timestep is quite big and it really gets better as you decrease the time step. Try it! Reduce the time step and see how the numerical solution acquires an amplitude of 60 nm like the analytical one. At this point we can't state anything about accuracy before doing an analysis of error (we will do this soon). But first, let's try to analyze whether another, more efficient scheme can capture the physics of our damped harmonic oscillator even with this large time step. Let's try to get more accurate... Verlet Algorithm This is a very popular algorithm widely used in molecular dynamics simulations. Its popularity has been related to its high stability compared to the simple Euler method; it is also very simple to implement and accurate, as we will see soon! Verlet integration can be seen as using the central difference approximation to the second derivative. Consider the Taylor expansion of $z_{n+1}$ and $z_{n-1}$ around $z_n$: $$\begin{equation} z_{n+1} = z_n + \Delta t \frac{dz}{dt}\big|_n + \frac{\Delta t^2}{2} \frac{d^2 z}{d t^2}\big|_n + \frac{\Delta t^3}{6} \frac{d^3 z}{d t^3}\big|_n + {\mathcal O}(\Delta t^4) \end{equation}$$ $$\begin{equation} z_{n-1} = z_n - \Delta t \frac{dz}{dt}\big|_n + \frac{\Delta t^2}{2} \frac{d^2 z}{dt^2}\big|_n - \frac{\Delta t^3}{6} \frac{d^3 z}{d t^3}\big|_n + {\mathcal O}(\Delta t^4) \end{equation}$$ Adding up these two expansions and solving for $z_{n+1}$ we get: $$z_{n+1}= 2z_{n} - z_{n-1} + \frac{d^2 z}{d t^2} \Delta t^2\big|_n + {\mathcal O}(\Delta t^4) $$ The Verlet algorithm neglects terms of order 4 or higher, ending up with: $$\begin{equation} z_{n+1}= 2z_{n} - z_{n-1} + \frac{d^2 z}{d t^2} \Delta t^2\big|_n \end{equation}$$ This looks nice; it seems that the straightforward calculation of the second derivative will give us good results. BUT have you seen that we also need the value of the first derivative (velocity) to put it into the equation of motion that we are integrating (see equation 1)? YES, that's a main drawback of this scheme, and therefore it's mainly used in applications where the equation to be integrated doesn't have a first derivative. But don't panic, we will see what we can do... What about subtracting equations 8 and 9 and then solving for $\frac{dz}{dt}\big|_n$: $$ \frac{dz}{dt}\big|_n = \frac{z_{n+1} - z_{n-1}}{2\Delta t} + {\mathcal O}(\Delta t^2) $$ If we neglect terms of order 2 or higher we can calculate velocity: $$\begin{equation} \frac{dz}{dt}\big|_n = \frac{z_{n+1} - z_{n-1}}{2\Delta t} \end{equation}$$ This way of calculating velocity is pretty common in Verlet integration in applications where velocity is not explicit in the equation of motion. However, for our purpose of solving equation 1 (where the first derivative is explicitly present) it seems that we will lose accuracy because of the velocity; we will discuss more about this soon... Have you noticed that we need a value $z_{n-1}$? Does it sound familiar? YES! This is not a self-starting method. As a result we will have to overcome the issue by setting the initial conditions of the first step using an Euler approximation.
This is a bit annoying, but a couple of extra lines of code won't kill you :)
time_V = numpy.linspace(0,simultime,N)

#Initializing variables for Verlet
zdoubledot_V = numpy.zeros(N)
zdot_V = numpy.zeros(N)
z_V = numpy.zeros(N)

#Initial conditions Verlet. Look how we use Euler for the first step approximation!
z_V[0] = 0.0
zdot_V[0] = 0.0
zdoubledot_V[0] = ( ( -k*z_V[0] - (m*wo/Q)*zdot_V[0] +\
                    Fd*numpy.cos(wo*t[0]) ) ) / m
zdot_V[1] = zdot_V[0] + zdoubledot_V[0]*dt
z_V[1] = z_V[0] + zdot_V[0]*dt
zdoubledot_V[1] = ( ( -k*z_V[1] - (m*wo/Q)*zdot_V[1] +\
                    Fd*numpy.cos(wo*t[1]) ) ) / m

#VERLET ALGORITHM
for i in range(2,N):
    z_V[i] = 2*z_V[i-1] - z_V[i-2] + zdoubledot_V[i-1]*dt**2   #Eq 10
    zdot_V[i] = (z_V[i]-z_V[i-2])/(2.0*dt)                     #Eq 11
    zdoubledot_V[i] = ( ( -k*z_V[i] - (m*wo/Q)*zdot_V[i] +\
                        Fd*numpy.cos(wo*t[i]) ) ) / m          #from eq 1

plt.title('Plot 3 Verlet approximation of Equation1', fontsize=20);
plt.xlabel('time, ms', fontsize=18);
plt.ylabel('z_Verlet, nm', fontsize=18);
plt.plot(time_V*1e3, z_V*1e9, 'g-');
plt.ylim(-65,65);
jupyter_notebooks/AFM_simulations/IntroductionToSimulations/IntroToAFMSimulations.ipynb
pycroscopy/pycroscopy
mit
It WAS ABLE to capture the physics! Even with the big time step that we use with Euler scheme! As you can see, and as we previously discussed the harmonic response is composed of a transient and a steady part. We are only concerned about the steady-state, since it is assumed that the probe achieves steady state motion during the imaging process. Therefore, we are going to slice our array in order to show only the last 10 oscillations, and we will see if it resembles the analytical solution.
#Slicing the full response vector to get the steady state response
z_steady_V = z_V[int(90*period/dt):]
time_steady_V = time_V[int(90*period/dt):]

plt.title('Plot 3 Verlet approx. of steady state sol. of Eq 1', fontsize=20);
plt.xlabel('time, ms', fontsize=18);
plt.ylabel('z_Verlet, nm', fontsize=18);
plt.plot(time_steady_V*1e3, z_steady_V*1e9, 'g-');
plt.ylim(-65,65);
plt.show();
jupyter_notebooks/AFM_simulations/IntroductionToSimulations/IntroToAFMSimulations.ipynb
pycroscopy/pycroscopy
mit
Let's now use one of the most popular schemes... the Runge-Kutta 4! The Runge-Kutta 4 (RK4) method is very popular for the solution of ODEs. This method is designed to solve 1st order differential equations. We converted our 2nd order ODE to a system of two coupled 1st order ODEs when we implemented the Euler scheme (equations 6 and 7), and we will have to use these equations for the RK4 algorithm. In order to clearly see the RK4 implementation we are going to put equations 6 and 7 in the following form: $$\begin{equation} \frac{dz}{dt}=v \Rightarrow f1(t,z,v) \end{equation}$$ $$\begin{equation} \frac{dv}{dt} = \frac{1}{m}\left(-kz-\frac{m\omega_0}{Q}v+F_0\cos(\omega t)\right) \Rightarrow f2(t,z,v) \end{equation}$$ It can be clearly seen that we have two coupled equations f1 and f2, and both depend on t, z, and v. The RK4 equations for our special case, where we have two coupled equations, are the following: $$\begin{equation} k_1 = f1(t_i, z_i, v_i) \end{equation}$$ $$\begin{equation} m_1 = f2(t_i, z_i, v_i) \end{equation}$$ $$\begin{equation} k_2 = f1(t_i +1/2\Delta t, z_i + 1/2k_1\Delta t, v_i + 1/2m_1\Delta t) \end{equation}$$ $$\begin{equation} m_2 = f2(t_i +1/2\Delta t, z_i + 1/2k_1\Delta t, v_i + 1/2m_1\Delta t) \end{equation}$$ $$\begin{equation} k_3 = f1(t_i +1/2\Delta t, z_i + 1/2k_2\Delta t, v_i + 1/2m_2\Delta t) \end{equation}$$ $$\begin{equation} m_3 = f2(t_i +1/2\Delta t, z_i + 1/2k_2\Delta t, v_i + 1/2m_2\Delta t) \end{equation}$$ $$\begin{equation} k_4 = f1(t_i + \Delta t, z_i + k_3\Delta t, v_i + m_3\Delta t) \end{equation}$$ $$\begin{equation} m_4 = f2(t_i + \Delta t, z_i + k_3\Delta t, v_i + m_3\Delta t) \end{equation}$$ $$\begin{equation} z_{n+1} = z_n + \frac{\Delta t}{6}(k_1+2k_2+2k_3+k_4) \end{equation}$$ $$\begin{equation} v_{n+1} = v_n + \frac{\Delta t}{6}(m_1+2m_2+2m_3+m_4) \end{equation}$$ Please notice how the k values and m values are used sequentially, since this is crucial in the implementation of the method!
#Definition of v, z, vectors
vdot_RK4 = numpy.zeros(N)
v_RK4 = numpy.zeros(N)
z_RK4 = numpy.zeros(N)

k1v_RK4 = numpy.zeros(N)
k2v_RK4 = numpy.zeros(N)
k3v_RK4 = numpy.zeros(N)
k4v_RK4 = numpy.zeros(N)

k1z_RK4 = numpy.zeros(N)
k2z_RK4 = numpy.zeros(N)
k3z_RK4 = numpy.zeros(N)
k4z_RK4 = numpy.zeros(N)

#calculation of velocities RK4
#INITIAL CONDITIONS
v_RK4[0] = 0
z_RK4[0] = 0

for i in range(1,N):
    #RK4
    k1z_RK4[i] = v_RK4[i-1]   #k1 Equation 14
    k1v_RK4[i] = (( ( -k*z_RK4[i-1] - (m*wo/Q)*v_RK4[i-1] + \
                    Fd*numpy.cos(wo*t[i-1]) ) ) / m )   #m1 Equation 15
    k2z_RK4[i] = ((v_RK4[i-1])+k1v_RK4[i]/2.*dt)        #k2 Equation 16
    k2v_RK4[i] = (( ( -k*(z_RK4[i-1]+ k1z_RK4[i]/2.*dt) - (m*wo/Q)*\
                    (v_RK4[i-1] +k1v_RK4[i]/2.*dt) + Fd*\
                    numpy.cos(wo*(t[i-1] + dt/2.)) ) ) / m )   #m2 Eq 17
    k3z_RK4[i] = ((v_RK4[i-1])+k2v_RK4[i]/2.*dt)        #k3, Equation 18
    k3v_RK4[i] = (( ( -k*(z_RK4[i-1]+ k2z_RK4[i]/2.*dt) - (m*wo/Q)*\
                    (v_RK4[i-1] +k2v_RK4[i]/2.*dt) + Fd*\
                    numpy.cos(wo*(t[i-1] + dt/2.)) ) ) / m )   #m3, Eq 19
    k4z_RK4[i] = ((v_RK4[i-1])+k3v_RK4[i]*dt)           #k4, Equation 20
    k4v_RK4[i] = (( ( -k*(z_RK4[i-1] + k3z_RK4[i]*dt) - (m*wo/Q)*\
                    (v_RK4[i-1] + k3v_RK4[i]*dt) + Fd*\
                    numpy.cos(wo*(t[i-1] + dt)) ) ) / m )      #m4, Eq 21

    #Calculation of velocity, Equation 23
    v_RK4[i] = v_RK4[i-1] + 1./6*dt*(k1v_RK4[i] + 2.*k2v_RK4[i] +\
                                     2.*k3v_RK4[i] + k4v_RK4[i] )
    #calculation of position, Equation 22
    z_RK4[i] = z_RK4[i-1] + 1./6*dt*(k1z_RK4[i] + 2.*k2z_RK4[i] +\
                                     2.*k3z_RK4[i] + k4z_RK4[i] )

#slicing array to get steady state
z_steady_RK4 = z_RK4[int(90.*period/dt):]
time_steady_RK4 = t[int(90.*period/dt):]

plt.title('Plot 3 RK4 approx. of steady state sol. of Eq 1', fontsize=20);
plt.xlabel('time, ms', fontsize=18);
plt.ylabel('z_RK4, nm', fontsize=18);
plt.plot(time_steady_RK4*1e3, z_steady_RK4*1e9, 'r-');
plt.ylim(-65,65);
plt.show();
jupyter_notebooks/AFM_simulations/IntroductionToSimulations/IntroToAFMSimulations.ipynb
pycroscopy/pycroscopy
mit
Error Analysis Let's plot together our solutions using the different schemes along with our analytical reference.
plt.title('Plot 4 Schemes comparison with analytical sol.', fontsize=20);
plt.plot(time_an_steady*1e3, z_an_steady*1e9, 'b--');
plt.plot(time_steady_V*1e3, z_steady_V*1e9, 'g-');
plt.plot(time_steady_RK4*1e3, z_steady_RK4*1e9, 'r-');
plt.xlim(2.0, 2.06);
plt.legend(['Analytical solution', 'Verlet method', 'Runge Kutta 4']);
plt.xlabel('time, ms', fontsize=18);
plt.ylabel('z_position, nm', fontsize=18);
jupyter_notebooks/AFM_simulations/IntroductionToSimulations/IntroToAFMSimulations.ipynb
pycroscopy/pycroscopy
mit
It was pointless to include Euler in the last plot because it was not following the physics at all for this given time step. REMEMBER that Euler can give fair approximations, but you MUST decrease the time step in this particular case if you want to see the sinusoidal trajectory! It seems our different schemes give different quality in approximating the solution. However, it's hard to conclude something strong based on these qualitative observations. In order to state something stronger we have to perform further error analysis. We will do this at the end of the notebook, after the references, and will choose the L1 norm for this purpose (you can find more information about the L1 norm elsewhere). As we can see, Runge-Kutta 4 converges faster than Verlet for the range of time steps studied, and the difference between both is nearly one order of magnitude. One additional advantage of Runge-Kutta 4 is that the method is very stable: even with big time steps (e.g. 10 time steps per period) the method is able to capture the physics of the oscillation, something Verlet is not so good at. Let's add a sample and oscillate our probe over it It is very common in the field of probe microscopy to model the tip-sample interactions through DMT contact mechanics. DMT stands for Derjaguin, Muller and Toporov, the scientists who developed the model (see ref 2). This model uses Hertz contact mechanics (see ref 3) with the addition of long range tip-sample interactions. These long range tip-sample interactions are ascribed to intermolecular interactions between the atoms of the tip and the upper atoms of the surface, and include mainly the contribution of van der Waals forces and Pauli repulsion from electronic clouds when the atoms of the tip closely meet the atoms of the surface. Figure 3 displays a force vs distance curve (FD curve) showing how the forces between the tip and the sample behave with respect to the separation. It can be seen that at positive distances the tip starts "feeling" attraction from the sample (from the contribution of van der Waals forces), where the slope of the curve is positive, and at some minimum distance ($a_0$) the tip starts experiencing repulsive interactions arising from electronic cloud repulsion (the area where the slope of the curve is negative and the forces are negative). At lower distances, an area known as the "contact area" arises, characterized by a negative slope and an emerging positive force.
fig3 = path + '/Fig3FDcurve.jpg'
Image(filename=fig3)
jupyter_notebooks/AFM_simulations/IntroductionToSimulations/IntroToAFMSimulations.ipynb
pycroscopy/pycroscopy
mit
Figure 3. Force vs Distance profile depicting tip-sample interactions in AFM (Adapted from reference 6) In Hertz contact mechanics, one central aspect is that the contact area increases as the sphere is pressed against an elastic surface, and this increase of the contact area "modulates" the effective stiffness of the sample. This concept is represented in figure 4, where the sample is depicted as composed of a series of springs that are activated as the tip goes deeper into the sample. In other words, the deeper the tip goes, the larger the contact area and therefore the more springs are activated (see more about this in reference 5).
fig4 = path + '/Fig4Hertzspring.jpg'
Image(filename=fig4)
jupyter_notebooks/AFM_simulations/IntroductionToSimulations/IntroToAFMSimulations.ipynb
pycroscopy/pycroscopy
mit
Figure 4. Conceptual representation of Hertz contact mechanics This concept is represented mathematically by a non-linear spring whose elastic coefficient is a function of the contact area, which in turn depends on the sample indentation (k(d)). $$F_{ts} = k(d)d$$ where $$k(d) = \frac{4}{3}E\sqrt{Rd}$$ with $\sqrt{Rd}$ accounting for the contact area when a sphere of radius R indents a half-space to depth d. $E$ is the effective Young's modulus of the tip-sample interaction. The long range attractive forces are derived using Hamaker's equation (see reference 4): if $d > a_0$ $$F_{ts} = \frac{-HR}{6d^2}$$ where H is the Hamaker constant, R the tip radius and d the tip-sample distance. $a_0$ is defined as the intermolecular distance and is normally chosen to be 0.2 nm. In summary, the equations that we will include in our code to take care of the tip-sample interactions are the following: $$\begin{equation} Fts_{DMT} = \begin{cases} \dfrac{-HR}{6d^2} & d > a_0 \\ \dfrac{-HR}{6a_0^2} + \dfrac{4}{3}E\sqrt{R}\,(a_0-d)^{3/2} & d \leq a_0 \end{cases} \end{equation}$$ where the effective Young's modulus E is defined by: $$\begin{equation} \frac{1}{E} = \frac{1-\nu_t^2}{E_t}+\frac{1-\nu_s^2}{E_s} \end{equation}$$ where $E_t$ and $E_s$ are the tip and sample Young's modulus, respectively, and $\nu_t$ and $\nu_s$ are the tip and sample Poisson ratios, respectively. Enough theory, let's make our code! Now we will have to solve equation (1) but with the addition of tip-sample interactions, which are described by equation (5). So we have a second order non-linear ODE which is no longer analytically straightforward: $$\begin{equation} m \frac{d^2z}{dt^2} = - k z - \frac{m\omega_0}{Q}\frac{dz}{dt} + F_0\cos(\omega t) + Fts_{DMT} \end{equation}$$ Therefore we have to use numerical methods to solve it. RK4 has shown to be the most accurate for solving equation (1) among the methods reviewed in the previous section of the notebook, and therefore it is the method chosen to solve equation (6). Now we have to declare all the variables related to the tip-sample forces. Since we are modeling our tip-sample forces using Hertz contact mechanics with the addition of long range van der Waals forces, we have to define the Young's modulus of the tip and sample, the radius of the tip of our probe, the Poisson ratios, etc.
#DMT parameters (Hertz contact mechanics with long range Van der Waals forces added)
a = 0.2e-9    #intermolecular parameter
H = 6.4e-20   #hamaker constant of sample
R = 20e-9     #tip radius of the cantilever
Es = 70e6     #elastic modulus of sample
Et = 130e9    #elastic modulus of the tip
vt = 0.3      #Poisson coefficient for tip
vs = 0.3      #Poisson coefficient for sample
E_star = 1/((1-pow(vt,2))/Et+(1-pow(vs,2))/Es)   #Effective Young Modulus
jupyter_notebooks/AFM_simulations/IntroductionToSimulations/IntroToAFMSimulations.ipynb
pycroscopy/pycroscopy
mit
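To make the piecewise DMT expression above easier to follow, here is a minimal, hypothetical helper that mirrors the force law used inside the simulation loop below (it assumes the parameters a, H, R and E_star defined in the previous cell; d is the instantaneous tip-sample distance):

def fts_dmt(d):
    # Long-range attractive (van der Waals) regime
    if d > a:
        return -H*R/(6*d**2)
    # Repulsive (contact) regime: Hertzian term added at the intermolecular distance a
    return -H*R/(6*a**2) + 4./3*E_star*numpy.sqrt(R)*(a - d)**1.5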
Now let's declare the timestep, the simulation time and let's oscillate our probe!
#IMPORTANT distance where you place the probe above the sample
z_base = 40.e-9

spp = 280.   # time steps per period
dt = period/spp
simultime = 100.*period
N = int(simultime/dt)
t = numpy.linspace(0,simultime,N)

#Initializing variables for RK4
v_RK4 = numpy.zeros(N)
z_RK4 = numpy.zeros(N)

k1v_RK4 = numpy.zeros(N)
k2v_RK4 = numpy.zeros(N)
k3v_RK4 = numpy.zeros(N)
k4v_RK4 = numpy.zeros(N)

k1z_RK4 = numpy.zeros(N)
k2z_RK4 = numpy.zeros(N)
k3z_RK4 = numpy.zeros(N)
k4z_RK4 = numpy.zeros(N)

TipPos = numpy.zeros(N)
Fts = numpy.zeros(N)
Fcos = numpy.zeros(N)

for i in range(1,N):
    #RK4
    k1z_RK4[i] = v_RK4[i-1]   #k1 Equation 14
    k1v_RK4[i] = (( ( -k*z_RK4[i-1] - (m*wo/Q)*v_RK4[i-1] + \
                    Fd*numpy.cos(wo*t[i-1]) +Fts[i-1]) ) / m )   #m1 Equation 15
    k2z_RK4[i] = ((v_RK4[i-1])+k1v_RK4[i]/2.*dt)                 #k2 Equation 16
    k2v_RK4[i] = (( ( -k*(z_RK4[i-1]+ k1z_RK4[i]/2.*dt) - (m*wo/Q)*\
                    (v_RK4[i-1] +k1v_RK4[i]/2.*dt) + Fd*\
                    numpy.cos(wo*(t[i-1] + dt/2.)) +Fts[i-1]) ) / m )   #m2 Eq 17
    k3z_RK4[i] = ((v_RK4[i-1])+k2v_RK4[i]/2.*dt)                 #k3, Equation 18
    k3v_RK4[i] = (( ( -k*(z_RK4[i-1]+ k2z_RK4[i]/2.*dt) - (m*wo/Q)*\
                    (v_RK4[i-1] +k2v_RK4[i]/2.*dt) + Fd*\
                    numpy.cos(wo*(t[i-1] + dt/2.)) +Fts[i-1]) ) / m )   #m3, Eq19
    k4z_RK4[i] = ((v_RK4[i-1])+k3v_RK4[i]*dt)                    #k4, Equation 20
    k4v_RK4[i] = (( ( -k*(z_RK4[i-1] + k3z_RK4[i]*dt) - (m*wo/Q)*\
                    (v_RK4[i-1] + k3v_RK4[i]*dt) + Fd*\
                    numpy.cos(wo*(t[i-1] + dt)) +Fts[i-1]) ) / m )      #m4, Eq 21

    #Calculation of velocity, Equation 23
    v_RK4[i] = v_RK4[i-1] + 1./6*dt*(k1v_RK4[i] + 2.*k2v_RK4[i] +\
                                     2.*k3v_RK4[i] + k4v_RK4[i] )
    #calculation of position, Equation 22
    z_RK4[i] = z_RK4[i-1] + 1./6*dt*(k1z_RK4[i] + 2.*k2z_RK4[i] +\
                                     2.*k3z_RK4[i] + k4z_RK4[i] )

    TipPos[i] = z_base + z_RK4[i]   #Adding base position to z position

    #calculation of DMT force
    if TipPos[i] > a:   #this defines the attractive regime
        Fts[i] = -H*R/(6*(TipPos[i])**2)
    else:               #this defines the repulsive regime
        Fts[i] = -H*R/(6*a**2)+4./3*E_star*numpy.sqrt(R)*(a-TipPos[i])**1.5

    Fcos[i] = Fd*numpy.cos(wo*t[i])   #Driving force (this will be helpful to plot the driving force)

#Slicing arrays to get steady state
TipPos_steady = TipPos[int(95*period/dt):]
t_steady = t[int(95*period/dt):]
Fcos_steady = Fcos[int(95*period/dt):]
Fts_steady = Fts[int(95*period/dt):]

plt.figure(1)
fig, ax1 = plt.subplots()
ax2 = ax1.twinx()
ax1.plot(t_steady*1e3, TipPos_steady*1e9, 'g-')
ax2.plot(t_steady*1e3, Fcos_steady*1e9, 'b-')
ax1.set_xlabel('Time,s')
ax1.set_ylabel('Tip position (nm)', color='g')
ax2.set_ylabel('Drive Force (nm)', color='b')
plt.title('Plot 7 Tip response and driving force', fontsize = 20)

plt.figure(2)
plt.title('Plot 8 Force-Distance curve', fontsize=20)
plt.plot(TipPos*1e9, Fts*1e9, 'b--')
plt.xlabel('Tip Position, nm', fontsize=18)
plt.ylabel('Force, nN', fontsize=18)
plt.xlim(-20, 30)
jupyter_notebooks/AFM_simulations/IntroductionToSimulations/IntroToAFMSimulations.ipynb
pycroscopy/pycroscopy
mit
Check that we have two sinusoids. The one in green (the output) is the response signal of the tip (the tip trajectory in time), while the blue one (the input) is the cosinusoidal driving force that we are using to excite the tip. When the tip is excited in free air (without tip-sample interactions) the phase lag between the output and the input is 90 degrees. You can test that with the previous code by only changing the position of the base to a high-enough position that it does not interact with the sample. However, in the above plot the phase lag is less than 90 degrees. Interestingly, the phase can give relative information about the material properties of the sample. There is a well-developed theory of this in tapping mode AFM, and it's called phase spectroscopy. If you are interested in this topic you can read reference 1. Also look at the above plot and see that the response amplitude is no longer the 60 nm we initially set (in this case it is near 45 nm!). It means that we have experienced a significant amplitude reduction due to the tip-sample interactions. Besides, with the data acquired we are able to plot a force-distance curve like the one shown in Figure 3, which shows the attractive and repulsive interactions of our probe with the surface. We have arrived at the end of the notebook. I hope you have found it interesting and helpful! REFERENCES 1. García, Ricardo, and Ruben Perez. "Dynamic atomic force microscopy methods." Surface Science Reports 47.6 (2002): 197-301. 2. Derjaguin, B. V., V. M. Muller, and Y. P. Toporov. J. Colloid Interface Sci. 53 (1975): 314. 3. Hertz, H. R. "Ueber die Beruehrung elastischer Koerper" (On Contact Between Elastic Bodies), 1882, in Gesammelte Werke (Collected Works), Vol. 1, Leipzig, Germany, 1895. 4. Van Oss, Carel J., Manoj K. Chaudhury, and Robert J. Good. "Interfacial Lifshitz-van der Waals and polar interactions in macroscopic systems." Chemical Reviews 88.6 (1988): 927-941. 5. López-Guerra, Enrique A., and Santiago D. Solares. "Modeling viscoelasticity through spring-dashpot models in intermittent-contact atomic force microscopy." Beilstein Journal of Nanotechnology 5.1 (2014): 2149-2163. 6. López-Guerra, Enrique A., and Santiago D. Solares. "El microscopio de Fuerza Atómica: Métodos y Aplicaciones." Revista UVG 28 (2013): 14-23. OPTIONAL: Further error analysis based on the L1 norm
print('This cell takes a while to compute') """ERROR ANALYSIS EULER, VERLET AND RK4""" # time-increment array dt_values = numpy.array([8.0e-7, 2.0e-7, 0.5e-7, 1e-8, 0.1e-8]) # array that will contain solution of each grid z_values_E = numpy.zeros_like(dt_values, dtype=numpy.ndarray) z_values_V = numpy.zeros_like(dt_values, dtype=numpy.ndarray) z_values_RK4 = numpy.zeros_like(dt_values, dtype=numpy.ndarray) z_values_an = numpy.zeros_like(dt_values, dtype=numpy.ndarray) for n, dt in enumerate(dt_values): simultime = 100*period timestep = dt N = int(simultime/dt) t = numpy.linspace(0.0, simultime, N) #Initializing variables for Verlet zdoubledot_V = numpy.zeros(N) zdot_V = numpy.zeros(N) z_V = numpy.zeros(N) #Initializing variables for RK4 vdot_RK4 = numpy.zeros(N) v_RK4 = numpy.zeros(N) z_RK4 = numpy.zeros(N) k1v_RK4 = numpy.zeros(N) k2v_RK4 = numpy.zeros(N) k3v_RK4 = numpy.zeros(N) k4v_RK4 = numpy.zeros(N) k1z_RK4 = numpy.zeros(N) k2z_RK4 = numpy.zeros(N) k3z_RK4 = numpy.zeros(N) k4z_RK4 = numpy.zeros(N) #Initial conditions Verlet (started with Euler approximation) z_V[0] = 0.0 zdot_V[0] = 0.0 zdoubledot_V[0] = ( ( -k*z_V[0] - (m*wo/Q)*zdot_V[0] + \ Fd*numpy.cos(wo*t[0]) ) ) / m zdot_V[1] = zdot_V[0] + zdoubledot_V[0]*timestep**2 z_V[1] = z_V[0] + zdot_V[0]*dt zdoubledot_V[1] = ( ( -k*z_V[1] - (m*wo/Q)*zdot_V[1] + \ Fd*numpy.cos(wo*t[1]) ) ) / m #Initial conditions Runge Kutta v_RK4[1] = 0 z_RK4[1] = 0 #Initialization variables for Analytical solution z_an = numpy.zeros(N) # time loop for i in range(2,N): #Verlet z_V[i] = 2*z_V[i-1] - z_V[i-2] + zdoubledot_V[i-1]*dt**2 #Eq 10 zdot_V[i] = (z_V[i]-z_V[i-2])/(2.0*dt) #Eq 11 zdoubledot_V[i] = ( ( -k*z_V[i] - (m*wo/Q)*zdot_V[i] +\ Fd*numpy.cos(wo*t[i]) ) ) / m #from eq 1 #RK4 k1z_RK4[i] = v_RK4[i-1] #k1 Equation 14 k1v_RK4[i] = (( ( -k*z_RK4[i-1] - (m*wo/Q)*v_RK4[i-1] + \ Fd*numpy.cos(wo*t[i-1]) ) ) / m ) #m1 Equation 15 k2z_RK4[i] = ((v_RK4[i-1])+k1v_RK4[i]/2.*dt) #k2 Equation 16 k2v_RK4[i] = (( ( -k*(z_RK4[i-1]+ k1z_RK4[i]/2.*dt) - (m*wo/Q)*\ (v_RK4[i-1] +k1v_RK4[i]/2.*dt) + Fd*\ numpy.cos(wo*(t[i-1] + dt/2.)) ) ) / m ) #m2 Eq 17 k3z_RK4[i] = ((v_RK4[i-1])+k2v_RK4[i]/2.*dt) #k3, Equation 18 k3v_RK4[i] = (( ( -k*(z_RK4[i-1]+ k2z_RK4[i]/2.*dt) - (m*wo/Q)*\ (v_RK4[i-1] +k2v_RK4[i]/2.*dt) + Fd*\ numpy.cos(wo*(t[i-1] + dt/2.)) ) ) / m ) #m3, Eq 19 k4z_RK4[i] = ((v_RK4[i-1])+k3v_RK4[i]*dt) #k4, Equation 20 k4v_RK4[i] = (( ( -k*(z_RK4[i-1] + k3z_RK4[i]*dt) - (m*wo/Q)*\ (v_RK4[i-1] + k3v_RK4[i]*dt) + Fd*\ numpy.cos(wo*(t[i-1] + dt)) ) ) / m )#m4, Equation 21 #Calculation of velocity, Equation 23 v_RK4[i] = v_RK4[i-1] + 1./6*dt*(k1v_RK4[i] + 2.*k2v_RK4[i] +\ 2.*k3v_RK4[i] + k4v_RK4[i] ) #calculation of position, Equation 22 z_RK4 [i] = z_RK4[i-1] + 1./6*dt*(k1z_RK4[i] + 2.*k2z_RK4[i] +\ 2.*k3z_RK4[i] + k4z_RK4[i] ) #Analytical solution A_an = Fo_an*Q/k #when driven at resonance A is simply Fo*Q/k phi = numpy.pi/2 #when driven at resonance the phase is pi/2 z_an[i] = A_an*numpy.cos(wo*t[i] - phi) #Analytical solution eq. 
    #Slicing the full response vector to get the steady state response
    z_steady_V = z_V[int(80*period/timestep):]
    z_an_steady = z_an[int(80*period/timestep):]
    z_steady_RK4 = z_RK4[int(80*period/timestep):]
    time_steady = t[int(80*period/timestep):]

    z_values_V[n] = z_steady_V.copy()       #Verlet steady-state response for this timestep
    z_values_RK4[n] = z_steady_RK4.copy()   #RK4 steady-state response for this timestep
    z_values_an[n] = z_an_steady.copy()     #analytical steady-state response for this timestep

def get_error(z, z_exact, dt):
    #Returns the error with respect to the analytical solution using the L1 norm
    return dt * numpy.sum(numpy.abs(z-z_exact))

#NOW CALCULATE THE ERROR FOR EACH RESPECTIVE DELTA T
error_values_V = numpy.zeros_like(dt_values)
error_values_RK4 = numpy.zeros_like(dt_values)

for i, dt in enumerate(dt_values):
    ### call the function get_error() ###
    error_values_V[i] = get_error(z_values_V[i], z_values_an[i], dt)
    error_values_RK4[i] = get_error(z_values_RK4[i], z_values_an[i], dt)

plt.figure(1)
plt.title('Plot 5 Error analysis Verlet based on L1 norm', fontsize=20)
plt.tick_params(axis='both', labelsize=14)
plt.grid(True)                                  #turn on grid lines
plt.xlabel('$\Delta t$ Verlet', fontsize=16)    #x label
plt.ylabel('Error Verlet', fontsize=16)         #y label
plt.loglog(dt_values, error_values_V, 'go-')    #log-log plot
plt.axis('equal')                               #make axes scale equally

plt.figure(2)
plt.title('Plot 6 Error analysis RK4 based on L1 norm', fontsize=20)
plt.tick_params(axis='both', labelsize=14)
plt.grid(True)                                  #turn on grid lines
plt.xlabel('$\Delta t$ RK4', fontsize=16)       #x label
plt.ylabel('Error RK4', fontsize=16)            #y label
plt.loglog(dt_values, error_values_RK4, 'co-')  #log-log plot
plt.axis('equal')                               #make axes scale equally
jupyter_notebooks/AFM_simulations/IntroductionToSimulations/IntroToAFMSimulations.ipynb
pycroscopy/pycroscopy
mit
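As mentioned above, the phase lag and the steady-state amplitude can also be extracted numerically from the simulated signals. The sketch below is an added illustration, not part of the original notebook: it assumes you have a time array, the steady-state portion of the tip trajectory, and the drive frequency from the earlier cells, and uses a simple quadrature (lock-in style) projection onto the cosine and sine of the drive.

```python
import numpy

def amplitude_and_phase(t, z, wo):
    """Estimate amplitude and phase lag of z(t) relative to cos(wo*t).

    Projects z onto cos(wo*t) and sin(wo*t); works best when t spans an
    integer number of drive periods of the steady-state response.
    """
    I = 2.0 * numpy.mean(z * numpy.cos(wo * t))   # in-phase component, A*cos(phi)
    Q = 2.0 * numpy.mean(z * numpy.sin(wo * t))   # quadrature component, A*sin(phi)
    amplitude = numpy.sqrt(I**2 + Q**2)
    phase_deg = numpy.degrees(numpy.arctan2(Q, I))
    return amplitude, phase_deg

# Self-contained demo with a synthetic steady-state signal: 45 nm amplitude, 65 degree lag
wo_demo = 2.0 * numpy.pi * 50.0e3                          # hypothetical drive frequency (rad/s)
t_demo = numpy.linspace(0.0, 40.0 * 2.0 * numpy.pi / wo_demo, 20000)
z_demo = 45.0e-9 * numpy.cos(wo_demo * t_demo - numpy.radians(65.0))
print(amplitude_and_phase(t_demo, z_demo, wo_demo))        # approximately (4.5e-08, 65.0)
```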
Let us plot the first five examples of the train data (first row) and test data (second row).
%matplotlib inline import pylab as P def plot_example(dat, lab): for i in xrange(5): ax=P.subplot(1,5,i+1) P.title(int(lab[i])) ax.imshow(dat[:,i].reshape((16,16)), interpolation='nearest') ax.set_xticks([]) ax.set_yticks([]) _=P.figure(figsize=(17,6)) P.gray() plot_example(Xtrain, Ytrain) _=P.figure(figsize=(17,6)) P.gray() plot_example(Xtest, Ytest)
doc/ipython-notebooks/multiclass/KNN.ipynb
minxuancao/shogun
gpl-3.0
Then we import shogun components and convert the data to shogun objects:
from modshogun import MulticlassLabels, RealFeatures from modshogun import KNN, EuclideanDistance labels = MulticlassLabels(Ytrain) feats = RealFeatures(Xtrain) k=3 dist = EuclideanDistance() knn = KNN(k, dist, labels) labels_test = MulticlassLabels(Ytest) feats_test = RealFeatures(Xtest) knn.train(feats) pred = knn.apply_multiclass(feats_test) print "Predictions", pred[:5] print "Ground Truth", Ytest[:5] from modshogun import MulticlassAccuracy evaluator = MulticlassAccuracy() accuracy = evaluator.evaluate(pred, labels_test) print "Accuracy = %2.2f%%" % (100*accuracy)
doc/ipython-notebooks/multiclass/KNN.ipynb
minxuancao/shogun
gpl-3.0
Let's plot a few misclassified examples - I guess we all agree that these are notably harder to recognize.
idx=np.where(pred != Ytest)[0] Xbad=Xtest[:,idx] Ybad=Ytest[idx] _=P.figure(figsize=(17,6)) P.gray() plot_example(Xbad, Ybad)
doc/ipython-notebooks/multiclass/KNN.ipynb
minxuancao/shogun
gpl-3.0
Now the question is: is 97.30% accuracy the best we can do? While one would usually re-train KNN with different values of k and perform cross-validation, we use a small trick here that saves a lot of computation time: when we determine the $K\geq k$ nearest neighbors we already know the nearest neighbors for all $k=1...K$, and can thus get the predictions for multiple k's in one step:
knn.set_k(13) multiple_k=knn.classify_for_multiple_k() print multiple_k.shape
doc/ipython-notebooks/multiclass/KNN.ipynb
minxuancao/shogun
gpl-3.0
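Under the hood the trick is simple, and can be illustrated with a few lines of NumPy. This is only an illustration of the idea, not shogun's actual implementation (which may break ties or weight votes differently): once the K nearest neighbours of each test point are known and sorted by distance, the prediction for every k up to K falls out of a single cumulative pass over the neighbour labels.

```python
import numpy as np

def predictions_for_all_k(neighbor_labels, n_classes):
    """neighbor_labels: (n_test, K) int array; column j holds the label of the (j+1)-th nearest neighbour."""
    n_test, K = neighbor_labels.shape
    votes = np.zeros((n_test, n_classes))
    preds = np.zeros((n_test, K), dtype=int)
    for j in range(K):
        votes[np.arange(n_test), neighbor_labels[:, j]] += 1  # add the (j+1)-th neighbour's vote
        preds[:, j] = votes.argmax(axis=1)                    # majority vote when k = j+1
    return preds

demo = np.array([[1, 1, 2, 2, 2],   # first test point: neighbour labels sorted by distance
                 [0, 3, 3, 0, 0]])  # second test point
print(predictions_for_all_k(demo, n_classes=4))
```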
We have the prediction for each of the 13 k's now and can quickly compute the accuracies:
for k in xrange(13): print "Accuracy for k=%d is %2.2f%%" % (k+1, 100*np.mean(multiple_k[:,k]==Ytest))
doc/ipython-notebooks/multiclass/KNN.ipynb
minxuancao/shogun
gpl-3.0
So k=3 seems to have been the optimal choice.
Accelerating KNN
Obviously applying KNN is very costly: for each prediction you have to compare the query object against all training objects. While the implementation in SHOGUN uses all available CPU cores to parallelize this computation, it can still be slow for big data sets. In SHOGUN you can use Cover Trees to speed up the nearest-neighbor search in KNN; this is enabled by selecting the KNN_COVER_TREE solver on the KNN machine (as done via set_knn_solver_type in the code below). We also show the prediction time comparison with and without the Cover Tree in this tutorial. So let's just run a comparison using the data above:
from modshogun import Time, KNN_COVER_TREE, KNN_BRUTE start = Time.get_curtime() knn.set_k(3) knn.set_knn_solver_type(KNN_BRUTE) pred = knn.apply_multiclass(feats_test) print "Standard KNN took %2.1fs" % (Time.get_curtime() - start) start = Time.get_curtime() knn.set_k(3) knn.set_knn_solver_type(KNN_COVER_TREE) pred = knn.apply_multiclass(feats_test) print "Covertree KNN took %2.1fs" % (Time.get_curtime() - start)
doc/ipython-notebooks/multiclass/KNN.ipynb
minxuancao/shogun
gpl-3.0
So we can speed it up significantly. Let's do a more systematic comparison. For that, a helper function is defined to run the evaluation for KNN (it assumes that the number of cross-validation splits Nsplit and the grid of k values all_ks have been defined earlier in the notebook):
def evaluate(labels, feats, use_cover_tree=False): from modshogun import MulticlassAccuracy, CrossValidationSplitting import time split = CrossValidationSplitting(labels, Nsplit) split.build_subsets() accuracy = np.zeros((Nsplit, len(all_ks))) acc_train = np.zeros(accuracy.shape) time_test = np.zeros(accuracy.shape) for i in range(Nsplit): idx_train = split.generate_subset_inverse(i) idx_test = split.generate_subset_indices(i) for j, k in enumerate(all_ks): #print "Round %d for k=%d..." % (i, k) feats.add_subset(idx_train) labels.add_subset(idx_train) dist = EuclideanDistance(feats, feats) knn = KNN(k, dist, labels) knn.set_store_model_features(True) if use_cover_tree: knn.set_knn_solver_type(KNN_COVER_TREE) else: knn.set_knn_solver_type(KNN_BRUTE) knn.train() evaluator = MulticlassAccuracy() pred = knn.apply_multiclass() acc_train[i, j] = evaluator.evaluate(pred, labels) feats.remove_subset() labels.remove_subset() feats.add_subset(idx_test) labels.add_subset(idx_test) t_start = time.clock() pred = knn.apply_multiclass(feats) time_test[i, j] = (time.clock() - t_start) / labels.get_num_labels() accuracy[i, j] = evaluator.evaluate(pred, labels) feats.remove_subset() labels.remove_subset() return {'eout': accuracy, 'ein': acc_train, 'time': time_test}
doc/ipython-notebooks/multiclass/KNN.ipynb
minxuancao/shogun
gpl-3.0
Evaluate KNN with and without Cover Tree. This takes a few seconds:
labels = MulticlassLabels(Ytest) feats = RealFeatures(Xtest) print("Evaluating KNN...") wo_ct = evaluate(labels, feats, use_cover_tree=False) wi_ct = evaluate(labels, feats, use_cover_tree=True) print("Done!")
doc/ipython-notebooks/multiclass/KNN.ipynb
minxuancao/shogun
gpl-3.0
Generate plots with the data collected in the evaluation:
import matplotlib fig = P.figure(figsize=(8,5)) P.plot(all_ks, wo_ct['eout'].mean(axis=0), 'r-*') P.plot(all_ks, wo_ct['ein'].mean(axis=0), 'r--*') P.legend(["Test Accuracy", "Training Accuracy"]) P.xlabel('K') P.ylabel('Accuracy') P.title('KNN Accuracy') P.tight_layout() fig = P.figure(figsize=(8,5)) P.plot(all_ks, wo_ct['time'].mean(axis=0), 'r-*') P.plot(all_ks, wi_ct['time'].mean(axis=0), 'b-d') P.xlabel("K") P.ylabel("time") P.title('KNN time') P.legend(["Plain KNN", "CoverTree KNN"], loc='center right') P.tight_layout()
doc/ipython-notebooks/multiclass/KNN.ipynb
minxuancao/shogun
gpl-3.0
Although simple and elegant, KNN is generally very resource costly. Because all training samples are memorized literally, the memory cost of KNN learning becomes prohibitive when the dataset is huge. Even when there is enough memory to hold all the data, prediction is slow, since the distances between the query point and all training points need to be computed and ranked. The situation gets worse if, in addition, the data samples are very high-dimensional. Leaving computation time aside, k-NN is a very versatile and competitive algorithm: it can be applied to any kind of objects (not just numerical data), as long as one can design a suitable distance function. In practice, k-NN used with bagging can produce improved and more robust results.
Comparison to Multiclass Support Vector Machines
In contrast to KNN, multiclass Support Vector Machines (SVMs) attempt to model the decision function separating each class from the others. They compare examples using similarity measures (so-called kernels) instead of distances as KNN does. At prediction time they are, in Big-O terms, about as expensive as KNN, but they involve an additional (costly) training step. They do not scale very well to a huge number of classes, but usually give favorable results when the number of classes is small. So for reference let us compare how a standard multiclass SVM performs w.r.t. KNN on the MNIST data set from above. Let us first train a multiclass SVM using a Gaussian kernel (roughly the SVM equivalent of the Euclidean distance).
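For reference, the Gaussian (RBF) kernel turns a squared Euclidean distance into a similarity between 0 and 1; a tiny NumPy sketch of the idea is below. Note this is only an illustration: shogun's GaussianKernel follows this general form, but its exact width parameterization should be checked against the shogun documentation.

```python
import numpy as np

def gaussian_kernel(x, y, width=80.0):
    # k(x, y) = exp(-||x - y||^2 / width); a larger width gives a smoother notion of similarity
    return np.exp(-np.sum((x - y) ** 2) / width)

a = np.array([1.0, 2.0, 3.0])
b = np.array([1.5, 2.0, 2.5])
print(gaussian_kernel(a, b))   # close to 1 for nearby points, tends to 0 for distant ones
```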
from modshogun import GaussianKernel, GMNPSVM width=80 C=1 gk=GaussianKernel() gk.set_width(width) svm=GMNPSVM(C, gk, labels) _=svm.train(feats)
doc/ipython-notebooks/multiclass/KNN.ipynb
minxuancao/shogun
gpl-3.0
Let's apply the SVM to the same test data set to compare results:
out=svm.apply(feats_test) evaluator = MulticlassAccuracy() accuracy = evaluator.evaluate(out, labels_test) print "Accuracy = %2.2f%%" % (100*accuracy)
doc/ipython-notebooks/multiclass/KNN.ipynb
minxuancao/shogun
gpl-3.0
Since the SVM performs way better on this task - let's apply it to all data we did not use in training.
Xrem=Xall[:,subset[6000:]] Yrem=Yall[subset[6000:]] feats_rem=RealFeatures(Xrem) labels_rem=MulticlassLabels(Yrem) out=svm.apply(feats_rem) evaluator = MulticlassAccuracy() accuracy = evaluator.evaluate(out, labels_rem) print "Accuracy = %2.2f%%" % (100*accuracy) idx=np.where(out.get_labels() != Yrem)[0] Xbad=Xrem[:,idx] Ybad=Yrem[idx] _=P.figure(figsize=(17,6)) P.gray() plot_example(Xbad, Ybad)
doc/ipython-notebooks/multiclass/KNN.ipynb
minxuancao/shogun
gpl-3.0
So maybe they're useful features (but not very). What about the fact they're magnitudes?
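One natural motivation: astronomical magnitudes are logarithmic measures of flux ($m = -2.5 \log_{10}(F/F_0)$), so exponentiating them recovers quantities proportional to flux, which may interact differently with a linear classifier. The construction actually used for features_exp and features_nlexp lives in earlier cells of this notebook; the snippet below is only a hypothetical sketch of what such a transformation could look like.

```python
import numpy

# Hypothetical sketch: `mags` stands in for magnitude-like feature columns.
mags = numpy.array([[17.2, 18.9], [19.4, 20.1]])
fluxes = 10 ** (-0.4 * mags)                    # m = -2.5 log10(F)  =>  F = 10^(-0.4 m)
features_sketch = numpy.hstack([mags, fluxes])  # keep both representations side by side
print(features_sketch.shape)
```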
lrexp = sklearn.linear_model.LogisticRegression(C=100.0, class_weight='balanced') lrexp.fit(features_exp[x_train], t_train) cm = sklearn.metrics.confusion_matrix(t_test, lrexp.predict(features_exp[x_test])) tp = cm[1, 1] n, p = cm.sum(axis=1) tn = cm[0, 0] ba = (tp / p + tn / n) / 2 print('Exponentiated features, balanced accuracy: {:.02%}'.format(ba)) print(cm) lrnlexp = sklearn.linear_model.LogisticRegression(C=100.0, class_weight='balanced') lrnlexp.fit(features_nlexp[x_train], t_train) cm = sklearn.metrics.confusion_matrix(t_test, lrnlexp.predict(features_nlexp[x_test])) tp = cm[1, 1] n, p = cm.sum(axis=1) tn = cm[0, 0] ba = (tp / p + tn / n) / 2 print('Exponentiated features, balanced accuracy: {:.02%}'.format(ba)) print(cm)
notebooks/56_nonlinear_astro_features.ipynb
chengsoonong/crowdastro
mit
Those are promising results, but we need to re-run this a few times with different training and testing sets to get some error bars.
def balanced_accuracy(lr, x_test, t_test): cm = sklearn.metrics.confusion_matrix(t_test, lr.predict(x_test)) tp = cm[1, 1] n, p = cm.sum(axis=1) tn = cm[0, 0] ba = (tp / p + tn / n) / 2 return ba def test_feature_set(features, x_train, t_train, x_test, t_test): lr = sklearn.linear_model.LogisticRegression(C=100.0, class_weight='balanced') lr.fit(features[x_train], t_train) return balanced_accuracy(lr, features[x_test], t_test) linear_ba = [] nonlinear_ba = [] exp_ba = [] nonlinear_exp_ba = [] n_trials = 10 for trial in range(n_trials): print('Trial {}/{}'.format(trial + 1, n_trials)) x_train, x_test, t_train, t_test = sklearn.cross_validation.train_test_split( numpy.arange(raw_astro_features.shape[0]), labels, test_size=0.2) linear_ba.append(test_feature_set(features_linear, x_train, t_train, x_test, t_test)) nonlinear_ba.append(test_feature_set(features_nonlinear, x_train, t_train, x_test, t_test)) exp_ba.append(test_feature_set(features_exp, x_train, t_train, x_test, t_test)) nonlinear_exp_ba.append(test_feature_set(features_nlexp, x_train, t_train, x_test, t_test)) print('Linear features: ({:.02f} +- {:.02f})%'.format( numpy.mean(linear_ba) * 100, numpy.std(linear_ba) * 100)) print('Nonlinear features: ({:.02f} +- {:.02f})%'.format( numpy.mean(nonlinear_ba) * 100, numpy.std(nonlinear_ba) * 100)) print('Exponentiated features: ({:.02f} +- {:.02f})%'.format( numpy.mean(exp_ba) * 100, numpy.std(exp_ba) * 100)) print('Exponentiated nonlinear features: ({:.02f} +- {:.02f})%'.format( numpy.mean(nonlinear_exp_ba) * 100, numpy.std(nonlinear_exp_ba) * 100))
notebooks/56_nonlinear_astro_features.ipynb
chengsoonong/crowdastro
mit
Load and prepare data
# Download CSV from http://stats.oecd.org/index.aspx?DataSetCode=BLI
oecd_bli = pd.read_csv(datapath+"oecd_bli_2015.csv", thousands=',')
oecd_bli = oecd_bli[oecd_bli["INEQUALITY"]=="TOT"]
oecd_bli = oecd_bli.pivot(index="Country", columns="Indicator", values="Value")
oecd_bli.columns
oecd_bli["Life satisfaction"].head()

# Load and prepare GDP per capita data
# Download data from http://goo.gl/j1MSKe (=> imf.org)
gdp_per_capita = pd.read_csv(datapath+"gdp_per_capita.csv", thousands=',', delimiter='\t',
                             encoding='latin1', na_values="n/a")
gdp_per_capita.rename(columns={"2015": "GDP per capita"}, inplace=True)
gdp_per_capita.set_index("Country", inplace=True)

full_country_stats = pd.merge(left=oecd_bli, right=gdp_per_capita,
                              left_index=True, right_index=True)
full_country_stats.sort_values(by="GDP per capita", inplace=True)  # sort in place (boolean flag)

_ = full_country_stats.plot("GDP per capita", 'Life satisfaction', kind='scatter')
labs/lab1-soln.ipynb
wzxiong/DAVIS-Machine-Learning
mit
Here's the full dataset, and there are other columns. I will subselect a few of them by hand.
xvars = ['Self-reported health','Water quality','Quality of support network','GDP per capita'] X = np.array(full_country_stats[xvars]) y = np.array(full_country_stats['Life satisfaction'])
labs/lab1-soln.ipynb
wzxiong/DAVIS-Machine-Learning
mit
I will define the following helper functions to compute the LOO risk and the empirical risk.
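For reference, with squared-error loss the two quantities computed by these helpers are

$$\hat R_{\text{emp}} = \frac{1}{n}\sum_{i=1}^{n}\big(y_i - \hat f(x_i)\big)^2, \qquad \hat R_{\text{LOO}} = \frac{1}{n}\sum_{i=1}^{n}\big(y_i - \hat f^{(-i)}(x_i)\big)^2,$$

where $\hat f$ is fit on all $n$ observations and $\hat f^{(-i)}$ is fit with the $i$-th observation held out.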
def loo_risk(X,y,regmod): """ Construct the leave-one-out square error risk for a regression model Input: design matrix, X, response vector, y, a regression model, regmod Output: scalar LOO risk """ loo = LeaveOneOut() loo_losses = [] for train_index, test_index in loo.split(X): X_train, X_test = X[train_index], X[test_index] y_train, y_test = y[train_index], y[test_index] regmod.fit(X_train,y_train) y_hat = regmod.predict(X_test) loss = np.sum((y_hat - y_test)**2) loo_losses.append(loss) return np.mean(loo_losses) def emp_risk(X,y,regmod): """ Return the empirical risk for square error loss Input: design matrix, X, response vector, y, a regression model, regmod Output: scalar empirical risk """ regmod.fit(X,y) y_hat = regmod.predict(X) return np.mean((y_hat - y)**2) lin1 = linear_model.LinearRegression(fit_intercept=False) print('LOO Risk: '+ str(loo_risk(X,y,lin1))) print('Emp Risk: ' + str(emp_risk(X,y,lin1)))
labs/lab1-soln.ipynb
wzxiong/DAVIS-Machine-Learning
mit
As you can see, the empirical risk is much less than the leave-one-out risk! This gap can become even more pronounced in higher dimensions.
Nearest neighbor regression
Use the method described here: http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsRegressor.html
I have already imported the necessary module, so you just need to use the regression object (like we used LinearRegression).
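For instance, a minimal usage sketch with the objects already defined above (the value of n_neighbors is arbitrary here):

```python
knn5 = neighbors.KNeighborsRegressor(n_neighbors=5)
print('LOO Risk (k=5): ' + str(loo_risk(X, y, knn5)))
```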
# knn = neighbors.KNeighborsRegressor(n_neighbors=5)
labs/lab1-soln.ipynb
wzxiong/DAVIS-Machine-Learning
mit
Exercise 1
For each k from 1 to 30, compute the empirical risk and the LOO risk of k-nearest-neighbor regression. Plot these as a function of k and reflect on the bias-variance tradeoff here. (Hint: use the previously defined functions.)
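Before looking at the curves, recall the decomposition the exercise is asking about (for squared-error loss at a fixed query point, with noise variance $\sigma^2$):

$$E\big[(y - \hat f_k(x))^2\big] = \sigma^2 + \big(E[\hat f_k(x)] - f(x)\big)^2 + \mathrm{Var}\big[\hat f_k(x)\big].$$

Small k averages over few neighbors (low bias, high variance); large k averages over many, possibly distant, neighbors (lower variance, higher bias).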
LOOs = [] MSEs = [] K=30 Ks = range(1,K+1) for k in Ks: knn = neighbors.KNeighborsRegressor(n_neighbors=k) LOOs.append(loo_risk(X,y,knn)) MSEs.append(emp_risk(X,y,knn)) plt.plot(Ks,LOOs,'r',label="LOO risk") plt.title("Risks for kNN Regression") plt.plot(Ks,MSEs,'b',label="Emp risk") plt.legend() _ = plt.xlabel('k')
labs/lab1-soln.ipynb
wzxiong/DAVIS-Machine-Learning
mit
I decided to see what the performance is for k from 1 to 30. We see that the bias does not dominate until k exceeds 17, and the performance is somewhat better for k around 12. This demonstrates that you can't trust the empirical risk, since it is computed on the training sample. We can compare this LOO risk to that of linear regression (0.348) and see that kNN outperforms linear regression.
Exercise 2
Do the same but for the reduced predictor variables below...
X1 = np.array(full_country_stats[['Self-reported health']]) LOOs = [] MSEs = [] K=30 Ks = range(1,K+1) for k in Ks: knn = neighbors.KNeighborsRegressor(n_neighbors=k) LOOs.append(loo_risk(X1,y,knn)) MSEs.append(emp_risk(X1,y,knn)) plt.plot(Ks,LOOs,'r',label="LOO risk") plt.title("Risks for kNN Regression") plt.plot(Ks,MSEs,'b',label="Emp risk") plt.legend() _ = plt.xlabel('k')
labs/lab1-soln.ipynb
wzxiong/DAVIS-Machine-Learning
mit
Process the English Wiktionary to generate the (default) partition probabilities. Note: this step can take significant time for large dictionaries (~5 min).
## Vignette 1: Build informed partition data from a dictionary, ## and store to local collection def preprocessENwiktionary(): pa = partitioner(informed = True, dictionary = "./dictionaries/enwiktionary.txt") pa.dumpqs(qsname="enwiktionary") preprocessENwiktionary()
tests/partitioner_examples.ipynb
jakerylandwilliams/partitioner
apache-2.0
Perform a few one-off partitions.
## Vignette 2: An informed, one-off partition of a single clause def informedOneOffPartition(clause = "How are you doing today?"): pa = oneoff() print pa.partition(clause) informedOneOffPartition() informedOneOffPartition("Fine, thanks a bunch for asking!")
tests/partitioner_examples.ipynb
jakerylandwilliams/partitioner
apache-2.0
Solve for the informed stochastic expectation partition (given the informed partition probabilities).
## Vignette 3: An informed, stochastic expectation partition of a single clause def informedStochasticPartition(clause = "How are you doing today?"): pa = stochastic() print pa.partition(clause) informedStochasticPartition()
tests/partitioner_examples.ipynb
jakerylandwilliams/partitioner
apache-2.0
Perform a pure random (uniform) one-off partition.
## Vignette 4: An uniform, one-off partition of a single clause def uniformOneOffPartition(informed = False, clause = "How are you doing today?", qunif = 0.25): pa = oneoff(informed = informed, qunif = qunif) print pa.partition(clause) uniformOneOffPartition() uniformOneOffPartition(qunif = 0.75)
tests/partitioner_examples.ipynb
jakerylandwilliams/partitioner
apache-2.0
Solve for the uniform stochastic expectation partition (given the uniform partition probabilities).
## Vignette 5: An uniform, stochastic expectation partition of a single clause def uniformStochasticPartition(informed = False, clause = "How are you doing today?", qunif = 0.25): pa = stochastic(informed = informed, qunif = qunif) print pa.partition(clause) uniformStochasticPartition() uniformStochasticPartition(clause = "Fine, thanks a bunch for asking!")
tests/partitioner_examples.ipynb
jakerylandwilliams/partitioner
apache-2.0
Build a rank-frequency distribution for a text and determine its Zipf/Simon (bag-of-phrase) $R^2$.
## Vignette 6: Use the default partitioning method to partition the main partitioner.py file and compute rsq def testPartitionTextAndFit(): pa = oneoff() pa.partitionText(textfile = pa.home+"/../README.md") pa.testFit() print "R-squared: ",round(pa.rsq,2) print phrases = sorted(pa.counts, key = lambda x: pa.counts[x], reverse = True) for j in range(25): phrase = phrases[j] print phrase, pa.counts[phrase] testPartitionTextAndFit()
tests/partitioner_examples.ipynb
jakerylandwilliams/partitioner
apache-2.0
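For intuition, the kind of rank-frequency fit reported by testFit above can be sketched in a few lines: sort the phrase counts, then regress log-frequency on log-rank and report an R². This is only an illustrative reimplementation; partitioner's own testFit may fit a different functional form (e.g. a Simon model) and will generally not match these numbers exactly.

```python
import numpy as np

def zipf_rsq(counts):
    """Crude R^2 of a straight-line fit to the rank-frequency plot in log-log space."""
    freqs = np.array(sorted(counts.values(), reverse=True), dtype=float)
    ranks = np.arange(1, len(freqs) + 1)
    x, y = np.log(ranks), np.log(freqs)
    slope, intercept = np.polyfit(x, y, 1)          # least-squares line in log-log space
    residuals = y - (slope * x + intercept)
    return 1.0 - np.sum(residuals ** 2) / np.sum((y - y.mean()) ** 2)

print(zipf_rsq({"the": 120, "of": 80, "a": 60, "new york": 35, "deep learning": 20}))
```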
Process some of the other Wiktionaries to generate the partition probabilities. Note: these dictionaries are not as well curated and may contain phrases from other languages (a consequence of how Wiktionaries are constructed). As a result, they hold many more phrases and will take longer to process. However, since the vast majority of the entries are language-correct, the effect on the partitioner and its (coarse) partition probabilities is likely negligible.
## Vignette X1: Build informed partition data from other dictionaries, ## and store to local collection def preprocessOtherWiktionaries(): for lang in ["ru", "pt", "pl", "nl", "it", "fr", "fi", "es", "el", "de", "en"]: print "working on "+lang+"..." pa = partitioner(informed = True, dictionary = "./dictionaries/"+lang+".txt") pa.dumpqs(qsname=lang) preprocessOtherWiktionaries()
tests/partitioner_examples.ipynb
jakerylandwilliams/partitioner
apache-2.0
Test partitioner on some other languages.
from partitioner import partitioner from partitioner.methods import * ## Vignette X2: Use the default partitioning method to partition the main partitioner.py file and compute rsq def testFrPartitionTextAndFit(): for lang in ["ru", "pt", "pl", "nl", "it", "fr", "fi", "es", "el", "de", "en"]: pa = oneoff(qsname = lang) pa.partitionText(textfile = "./tests/test_"+lang+".txt") pa.testFit() print print lang+" R-squared: ",round(pa.rsq,2) print phrases = sorted(pa.counts, key = lambda x: pa.counts[x], reverse = True) for j in range(5): phrase = phrases[j] print phrase, pa.counts[phrase] testFrPartitionTextAndFit()
tests/partitioner_examples.ipynb
jakerylandwilliams/partitioner
apache-2.0
Display Settings
# Display graph in 'retina' format for Mac with retina display. Others, use PNG or SVG format. %config InlineBackend.figure_format = 'retina' #%config InlineBackend.figure_format = 'PNG' #%config InlineBackend.figure_format = 'SVG'
Equations/Dose-Response Relations/Dose-Response Relations.ipynb
ekaakurniawan/3nb
gpl-3.0
Housekeeping Functions
The following function plots the different concentration sweeps used to visualize the dose-response relation.
def plot_concentration(c): fig = plt.figure() sp111 = fig.add_subplot(111) # Display grid sp111.grid(True, which = 'both') # Plot concentration len_c = len(c) sp111.plot(np.linspace(0,len_c-1,len_c), c, color = 'gray', linewidth = 2) # Label sp111.set_ylabel('Concentration (nM)') # Set X axis within different concentration plt.xlim([0, len_c-1]) plt.show()
Equations/Dose-Response Relations/Dose-Response Relations.ipynb
ekaakurniawan/3nb
gpl-3.0
The following function plots the dose-response relation with either a linear (log_flag = False) or a logarithmic (log_flag = True) X axis.
# d : Dose # r1 : First response data # r1_label : First response label # r2 : Second response data # r2_label : Second response label # log_flag : Selection for linear or logarithmic along X axis # - False: Plot linear (default) # - True: Plot logarithmic def plot_dose_response_relation(d, r1, r1_label, r2 = None, r2_label = "", log_flag = False): fig = plt.figure() sp111 = fig.add_subplot(111) # Handle logarithmic along X axis if log_flag: sp111.set_xscale('log') # Display grid sp111.yaxis.set_ticks([0.0, 0.5, 1.0]) sp111.grid(True, which = 'both') # Plot dose-response sp111.plot(d, r1, color = 'blue', label = r1_label, linewidth = 2) if r2 is not None: sp111.plot(d, r2, color = 'red', label = r2_label, linewidth = 2) # Labels sp111.set_ylabel('Response') sp111.set_xlabel('Concentration (nM)') # Legend sp111.legend(loc='upper left') # Set Y axis in between 0 and 1 plt.ylim([0, 1]) plt.show()
Equations/Dose-Response Relations/Dose-Response Relations.ipynb
ekaakurniawan/3nb
gpl-3.0
Dose-Response Relations $^{[1]}$
Generally, a dose-response relation can be written as follows, where the dose is represented by the concentration ($c$) and the formula returns the response ($r$):
$$r = \frac{F \cdot c^{n_H}}{c^{n_H} + EC_{50}^{n_H}}$$
Here $EC_{50}$ is the effective concentration at which 50% of the maximum response is achieved. The efficacy ($F$) is normally normalized to one so that different drugs are easier to compare: if a full agonist is defined to have an efficacy of one, anything lower is treated as a partial agonist. Finally, the Hill coefficient ($n_H$) reflects the number of drug molecules needed to activate the target receptor.
Drug Concentration
Both linearly and logarithmically increased concentrations are used to study dose-response relations.
Linearly increased concentration (c_lin):
c_lin = np.linspace(0,100,101) # Drug concentration in nanomolar (nM) plot_concentration(c_lin)
Equations/Dose-Response Relations/Dose-Response Relations.ipynb
ekaakurniawan/3nb
gpl-3.0
Logarithmically increased concentration (c_log):
c_log = np.logspace(0,5,101) # Drug concentration in nanomolar (nM) plot_concentration(c_log)
Equations/Dose-Response Relations/Dose-Response Relations.ipynb
ekaakurniawan/3nb
gpl-3.0
Agonist Only
To calculate the dose-response relation in the case of an agonist alone, we use the general dose-response relation equation described previously. The function is shown below.
# Calculate dose-response relation (DRR) for agonist only # c : Drug concentration(s) in nanomolar (nM) # EC_50 : 50% effective concentration in nanomolar (nM) # F : Efficacy (unitless) # n_H : Hill coefficients (unitless) def calc_drr(c, EC_50 = 20, F = 1, n_H = 1): r = (F * (c ** n_H) / ((c ** n_H) + (EC_50 ** n_H))) return r
Equations/Dose-Response Relations/Dose-Response Relations.ipynb
ekaakurniawan/3nb
gpl-3.0
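As a quick sanity check of the formula (this cell is an added check, not part of the original notebook): at $c = EC_{50}$ the response equals half of the efficacy, independent of the Hill coefficient.

```python
# At c = EC_50: r = F * EC_50^n_H / (EC_50^n_H + EC_50^n_H) = F / 2 for any n_H
for n_H_test in [1, 2, 4]:
    print(calc_drr(c=20.0, EC_50=20.0, F=1.0, n_H=n_H_test))   # prints 0.5 each time
```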
The following result shows the drug response of the agonist alone to the linearly increased concentrations.
c = c_lin # Drug concentration(s) in nanomolar (nM) EC_50 = 20 # 50% effective concentration in nanomolar (nM) F = 1 # Efficacy (unitless) n_H = 1 # Hill coefficients (unitless) r = calc_drr(c, EC_50, F, n_H) plot_dose_response_relation(c, r, "Agonist")
Equations/Dose-Response Relations/Dose-Response Relations.ipynb
ekaakurniawan/3nb
gpl-3.0
The following result shows the drug response of the agonist alone to the logarithmically increased concentrations.
c = c_log # Drug concentration(s) in nanomolar (nM) EC_50 = 20 # 50% effective concentration in nanomolar (nM) F = 1 # Efficacy (unitless) n_H = 1 # Hill coefficients (unitless) r = calc_drr(c, EC_50, F, n_H) plot_dose_response_relation(c, r, "Agonist", log_flag = True)
Equations/Dose-Response Relations/Dose-Response Relations.ipynb
ekaakurniawan/3nb
gpl-3.0
Agonist Plus Competitive Antagonist
A competitive antagonist, as the name suggests, competes with agonist molecules for the same binding pocket. This makes it harder for the agonist to bind and to trigger activation, so a higher agonist concentration is required to reach both full and partial (e.g. $EC_{50}$) activation. The new $EC_{50}$ value, called $EC_{50}'$ ($EC_{50}$ prime), is calculated with the following formula:
$$EC_{50}' = EC_{50}\left(1 + \frac{c_i}{K_i}\right)$$
It depends on the inhibitor concentration ($c_i$) and the dissociation constant of the inhibitor ($K_i$). For the values used below ($EC_{50}$ = 20 nM, $c_i$ = 25 nM, $K_i$ = 5 nM) this gives $EC_{50}' = 20 \times (1 + 25/5) = 120$ nM, a six-fold rightward shift. The following is a new function to calculate the drug response of an agonist with a competitive antagonist; note the new $EC_{50}$ value (EC_50_prime) replacing the agonist-only $EC_{50}$ value (EC_50).
# Calculate dose-response relation (DRR) for agonist plus competitive antagonist # - Agonist # c : Drug concentration(s) in nanomolar (nM) # EC_50 : 50% effective concentration in nanomolar (nM) # F : Efficacy (unitless) # n_H : Hill coefficients (unitless) # - Antagonist # K_i : Dissociation constant of inhibitor in nanomolar (nM) # c_i : Inhibitor concentration in nanomolar (nM) def calc_drr_agonist_cptv_antagonist(c, EC_50 = 20, F = 1, n_H = 1, K_i = 5, c_i = 25): EC_50_prime = EC_50 * (1 + (c_i / K_i)) r = calc_drr(c, EC_50_prime, F, n_H) return r
Equations/Dose-Response Relations/Dose-Response Relations.ipynb
ekaakurniawan/3nb
gpl-3.0
The following result shows the drug response of the agonist with a competitive antagonist to the linearly increased concentrations.
c = c_lin # Drug concentration(s) in nanomolar (nM) EC_50 = 20 # 50% effective concentration in nanomolar (nM) F = 1 # Efficacy (unitless) n_H = 1 # Hill coefficients (unitless) r_a = calc_drr(c, EC_50, F, n_H) K_i = 5 # Dissociation constant of inhibitor in nanomolar (nM) c_i = 25 # Inhibitor concentration in nanomolar (nM) r_aca = calc_drr_agonist_cptv_antagonist(c, EC_50, F, n_H, K_i, c_i) plot_dose_response_relation(c, r_a, "Agonist Only", r_aca, "Plus Antagonist")
Equations/Dose-Response Relations/Dose-Response Relations.ipynb
ekaakurniawan/3nb
gpl-3.0
The following result shows the drug response of the agonist with a competitive antagonist to the logarithmically increased concentrations.
c = c_log # Drug concentration(s) in nanomolar (nM) EC_50 = 20 # 50% effective concentration in nanomolar (nM) F = 1 # Efficacy (unitless) n_H = 1 # Hill coefficients (unitless) r_a = calc_drr(c, EC_50, F, n_H) K_i = 5 # Dissociation constant of inhibitor in nanomolar (nM) c_i = 25 # Inhibitor concentration in nanomolar (nM) r_aca = calc_drr_agonist_cptv_antagonist(c, EC_50, F, n_H, K_i, c_i) plot_dose_response_relation(c, r_a, "Agonist Only", r_aca, "Plus Antagonist", log_flag = True)
Equations/Dose-Response Relations/Dose-Response Relations.ipynb
ekaakurniawan/3nb
gpl-3.0
Agonist Plus Noncompetitive Antagonist
Unlike a competitive antagonist, a noncompetitive antagonist does not compete for the agonist binding site but acts somewhere else in the subsequent pathway. Instead of altering the effective concentration (e.g. $EC_{50}$), a noncompetitive antagonist reduces the efficacy. The new efficacy value ($F'$) in the presence of a noncompetitive antagonist is calculated as follows:
$$F' = \frac{F}{1 + \frac{c_i}{K_i}}$$
For the values used below ($F$ = 1, $c_i$ = 25 nM, $K_i$ = 5 nM) this gives $F' = 1/6 \approx 0.17$, so the maximal response is capped well below one. The following is a new function to calculate the drug response of an agonist with a noncompetitive antagonist; note the new efficacy value (F_prime) replacing the agonist-only efficacy value (F).
# Calculate dose-response relation (DRR) for agonist plus noncompetitive antagonist # - Agonist # c : Drug concentration(s) in nanomolar (nM) # EC_50 : 50% effective concentration in nanomolar (nM) # F : Efficacy (unitless) # n_H : Hill coefficients (unitless) # - Antagonist # K_i : Dissociation constant of inhibitor in nanomolar (nM) # c_i : Inhibitor concentration in nanomolar (nM) def calc_drr_agonist_non_cptv_antagonist(c, EC_50 = 20, F = 1, n_H = 1, K_i = 5, c_i = 25): F_prime = F / (1 + (c_i / K_i)) r = calc_drr(c, EC_50, F_prime, n_H) return r
Equations/Dose-Response Relations/Dose-Response Relations.ipynb
ekaakurniawan/3nb
gpl-3.0
The following result shows the drug response of the agonist with a noncompetitive antagonist to the linearly increased concentrations.
c = c_lin # Drug concentration(s) in nanomolar (nM) EC_50 = 20 # 50% effective concentration in nanomolar (nM) F = 1 # Efficacy (unitless) n_H = 1 # Hill coefficients (unitless) r_a = calc_drr(c, EC_50, F, n_H) K_i = 5 # Dissociation constant of inhibitor in nanomolar (nM) c_i = 25 # Inhibitor concentration in nanomolar (nM) r_ana = calc_drr_agonist_non_cptv_antagonist(c, EC_50, F, n_H, K_i, c_i) plot_dose_response_relation(c, r_a, "Agonist Only", r_ana, "Plus Antagonist")
Equations/Dose-Response Relations/Dose-Response Relations.ipynb
ekaakurniawan/3nb
gpl-3.0
The following result shows the drug response of the agonist with a noncompetitive antagonist to the logarithmically increased concentrations.
c = c_log # Drug concentration(s) in nanomolar (nM) EC_50 = 20 # 50% effective concentration in nanomolar (nM) F = 1 # Efficacy (unitless) n_H = 1 # Hill coefficients (unitless) r_a = calc_drr(c, EC_50, F, n_H) K_i = 5 # Dissociation constant of inhibitor in nanomolar (nM) c_i = 25 # Inhibitor concentration in nanomolar (nM) r_ana = calc_drr_agonist_non_cptv_antagonist(c, EC_50, F, n_H, K_i, c_i) plot_dose_response_relation(c, r_a, "Agonist Only", r_ana, "Plus Antagonist", log_flag = True)
Equations/Dose-Response Relations/Dose-Response Relations.ipynb
ekaakurniawan/3nb
gpl-3.0
skip-gram
Image('diagrams/skip-gram.png')
.ipynb_checkpoints/TextModels-checkpoint.ipynb
peterfig/keras-deep-learning-course
mit
```python
word1 = Input(shape=(1,), dtype='int64', name='word1')
word2 = Input(shape=(1,), dtype='int64', name='word2')

shared_embedding = Embedding(
    input_dim=VOCAB_SIZE+1,
    output_dim=DENSEVEC_DIM,
    input_length=1,
    embeddings_constraint = unit_norm(),
    name='shared_embedding')

embedded_w1 = shared_embedding(word1)
embedded_w2 = shared_embedding(word2)

w1 = Flatten()(embedded_w1)
w2 = Flatten()(embedded_w2)

dotted = Dot(axes=1, name='dot_product')([w1, w2])

prediction = Dense(1, activation='sigmoid', name='output_layer')(dotted)

sg_model = Model(inputs=[word1, word2], outputs=prediction)
```
fastText
```python
ft_model = Sequential()
ft_model.add(Embedding(
    input_dim = MAX_FEATURES,
    output_dim = EMBEDDING_DIMS,
    input_length= MAXLEN))
ft_model.add(GlobalAveragePooling1D())
ft_model.add(Dense(1, activation='sigmoid'))
```
Models and Training data construction
The first step for CBOW and skip-gram
Our training corpus is a collection of sentences, Tweets, emails, comments, or even longer documents; in short, something composed of words. Each word takes its turn being the "target" word, and we collect the n words before it and the n words after it. This n is referred to as the window size. If our example document is the sentence "I love deep learning" and the window size is 1, we'd get (target → context):
* I → love
* love → I, deep
* deep → love, learning
* learning → deep
Skip-gram model training data
Skip-gram means forming word pairs from the target word and every word in its window. These become the "positive" (1) samples for the skip-gram algorithm. In our "I love deep learning" example we'd get (eliminating repeated pairs):
(I, love) = 1
(love, deep) = 1
(deep, learning) = 1
To create negative samples (0), we pair random vocabulary words with the target word. Yes, it's possible to unluckily pick a negative sample that actually does appear near the target word. For our prediction task, we'll take the dot product of the two word vectors in each pair (a small step away from the cosine similarity). Training keeps tweaking the word vectors to push this product towards one for the positive samples and towards zero for the negative samples. Happily, Keras includes a function for creating skipgrams from text. It even does the negative sampling for us.
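Before calling the helper, here is what the positive-pair construction looks like by hand for a window size of 1 (a minimal sketch added for illustration; Keras' skipgrams additionally shuffles the pairs and generates the negative samples):

```python
def positive_skipgram_pairs(words, window_size=1):
    """All (target, context) pairs within the window; these are the label-1 samples."""
    pairs = []
    for i, target in enumerate(words):
        for j in range(max(0, i - window_size), min(len(words), i + window_size + 1)):
            if j != i:
                pairs.append((target, words[j]))
    return pairs

# Both directions appear, since every word takes its turn as the target.
print(positive_skipgram_pairs("i love deep learning".split()))
```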
from keras.preprocessing.sequence import skipgrams from keras.preprocessing.text import Tokenizer, text_to_word_sequence text1 = "I love deep learning." text2 = "Read Douglas Adams as much as possible." tokenizer = Tokenizer() tokenizer.fit_on_texts([text1, text2]) word2id = tokenizer.word_index word2id.items()
.ipynb_checkpoints/TextModels-checkpoint.ipynb
peterfig/keras-deep-learning-course
mit
Note that the word ids are numbered from 1, not 0. This is also why the Embedding layers in the model sketches above use input_dim=VOCAB_SIZE+1: index 0 is reserved and never assigned to a word.
id2word = { wordid: word for word, wordid in word2id.items()} id2word encoded_text = [word2id[word] for word in text_to_word_sequence(text1)] encoded_text [word2id[word] for word in text_to_word_sequence(text2)] sg = skipgrams(encoded_text, vocabulary_size=len(word2id.keys()), window_size=1) sg for i in range(len(sg[0])): print "({0},{1})={2}".format(id2word[sg[0][i][0]], id2word[sg[0][i][1]], sg[1][i])
.ipynb_checkpoints/TextModels-checkpoint.ipynb
peterfig/keras-deep-learning-course
mit
Model parameters
VOCAB_SIZE = len(word2id.keys()) VOCAB_SIZE DENSEVEC_DIM = 50
.ipynb_checkpoints/TextModels-checkpoint.ipynb
peterfig/keras-deep-learning-course
mit
Model build
import keras from keras.layers.embeddings import Embedding from keras.constraints import unit_norm from keras.layers.merge import Dot from keras.layers.core import Activation from keras.layers.core import Flatten from keras.layers import Input, Dense from keras.models import Model
.ipynb_checkpoints/TextModels-checkpoint.ipynb
peterfig/keras-deep-learning-course
mit