The number of receivers equals the number of discrete points in the $x$ direction. We position these receivers along the $x$ direction, at height $\bar{z}$ = 10m. The corresponding variables are chosen as:
nrec = nptx
nxpos = np.linspace(x0,x1,nrec)
nzpos = hzv
examples/seismic/abc_methods/02_damping.ipynb
opesci/devito
mit
Receivers are generated with the Receiver command. Using the parameters listed above, we create and position the receivers:
rec = Receiver(name='rec',grid=grid,npoint=nrec,time_range=time_range,staggered=NODE,dtype=np.float64)
rec.coordinates.data[:, 0] = nxpos
rec.coordinates.data[:, 1] = nzpos
examples/seismic/abc_methods/02_damping.ipynb
opesci/devito
mit
The displacement field u is second order in time and space and uses non-staggered points. We construct it with the TimeFunction command:
u = TimeFunction(name="u",grid=grid,time_order=2,space_order=2,staggered=NODE,dtype=np.float64)
examples/seismic/abc_methods/02_damping.ipynb
opesci/devito
mit
The velocity field, the source term and receivers are defined as in the previous notebook:
vel0 = Function(name="vel0",grid=grid,space_order=2,staggered=NODE,dtype=np.float64)
vel0.data[:,:] = v0[:,:]
src_term = src.inject(field=u.forward,expr=src*dt**2*vel0**2)
rec_term = rec.interpolate(expr=u)
examples/seismic/abc_methods/02_damping.ipynb
opesci/devito
mit
The next step is to create the structures that reproduce the function $\zeta(x,z)$. Initially, we define the region $\Omega_{0}$, since the damping function uses the limits of that region. We previously defined the limits of the $\Omega$ region to be x0, x1, z0 and z1. Now, we define the limits of the region $\Omega_{0}$ as x0pml and x1pml in the direction $x$, and z0pml and z1pml in the direction $z$. These points satisfy the following relationships with the lengths $L_{x}$ and $L_{z}$:

- x0pml = x0 + $L_{x}$
- x1pml = x1 - $L_{x}$
- z0pml = z0
- z1pml = z1 - $L_{z}$

In terms of program variables, we have the following definitions:
x0pml = x0 + npmlx*hxv
x1pml = x1 - npmlx*hxv
z0pml = z0
z1pml = z1 - npmlz*hzv
examples/seismic/abc_methods/02_damping.ipynb
opesci/devito
mit
Having built the region limits, we create a function, called fdamp, that computationally represents the $\zeta(x,z)$ function. In fdamp we highlight the following elements:

- quibar represents a constant choice for $\bar{\zeta_{1}}(x,z)$ and $\bar{\zeta_{2}}(x,z)$, satisfying $\bar{\zeta_{1}}(x,z)=\bar{\zeta_{2}}(x,z)$;
- adamp denotes the function $\zeta_{1}(x,z)$;
- bdamp denotes the function $\zeta_{2}(x,z)$;
- the terms a and b locate the $(x,z)$ points that are passed as arguments to fdamp.

The fdamp function is defined using the following structure:
def fdamp(x,z):
    quibar = 1.5*np.log(1.0/0.001)/(40)
    cte = 1./vmax
    a = np.where(x<=x0pml,(np.abs(x-x0pml)/lx),np.where(x>=x1pml,(np.abs(x-x1pml)/lx),0.))
    b = np.where(z<=z0pml,(np.abs(z-z0pml)/lz),np.where(z>=z1pml,(np.abs(z-z1pml)/lz),0.))
    adamp = quibar*(a-(1./(2.*np.pi))*np.sin(2.*np.pi*a))/hxv
    bdamp = quibar*(b-(1./(2.*np.pi))*np.sin(2.*np.pi*b))/hzv
    fdamp = cte*(adamp+bdamp)
    return fdamp
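Reading the code above, the damping can be written in formula form roughly as follows (this is a transcription of the code, with $v_{max}$ the maximum velocity and $h_{x}$, $h_{z}$ the grid spacings):

$$\zeta(x,z) = \frac{1}{v_{max}}\left(\zeta_{1}(x) + \zeta_{2}(z)\right), \qquad \zeta_{1}(x) = \frac{\bar{\zeta}_{1}}{h_{x}}\left(a(x) - \frac{\sin\left(2\pi a(x)\right)}{2\pi}\right),$$

where $a(x)$ is the distance of $x$ into the absorbing layer normalized by $L_{x}$ (and $0$ inside $\Omega_{0}$), and $\zeta_{2}(z)$ is defined analogously in terms of $b(z)$, $h_{z}$ and $L_{z}$.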
examples/seismic/abc_methods/02_damping.ipynb
opesci/devito
mit
Having created the damping function, we define an array that stores the damping values over the entire domain $\Omega$. The objective is to assign this array to a Function and use it in the composition of the equations. To generate this array we use the function generatemdamp, which builds a non-staggered grid and evaluates fdamp on it. At the end we obtain an array, called D0, which provides the damping value at each point of $\Omega$. The generatemdamp function is expressed as follows:
def generatemdamp():
    X0 = np.linspace(x0,x1,nptx)
    Z0 = np.linspace(z0,z1,nptz)
    X0grid,Z0grid = np.meshgrid(X0,Z0)
    D0 = np.zeros((nptx,nptz))
    D0 = np.transpose(fdamp(X0grid,Z0grid))
    return D0
examples/seismic/abc_methods/02_damping.ipynb
opesci/devito
mit
Having built the function generatemdamp, we execute it using the command:
D0 = generatemdamp();
examples/seismic/abc_methods/02_damping.ipynb
opesci/devito
mit
Below we include a routine to plot the damping field.
def graph2damp(D):
    plot.figure()
    plot.figure(figsize=(16,8))
    fscale = 1/10**(-3)
    fscale = 10**(-3)
    scale = np.amax(D)
    extent = [fscale*x0,fscale*x1, fscale*z1, fscale*z0]
    fig = plot.imshow(np.transpose(D), vmin=0., vmax=scale, cmap=cm.seismic, extent=extent)
    plot.gca().xaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f km'))
    plot.gca().yaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f km'))
    plot.title('Absorbing Layer Function')
    plot.grid()
    ax = plot.gca()
    divider = make_axes_locatable(ax)
    cax = divider.append_axes("right", size="5%", pad=0.05)
    cbar = plot.colorbar(fig, cax=cax, format='%.2e')
    cbar.set_label('Damping')
    plot.show()
examples/seismic/abc_methods/02_damping.ipynb
opesci/devito
mit
Below we include the plot of the damping field.
# NBVAL_IGNORE_OUTPUT
graph2damp(D0)
examples/seismic/abc_methods/02_damping.ipynb
opesci/devito
mit
Like the velocity function $c(x,z)$, the damping function $\zeta(x,z)$ is constant in time. Therefore, the damping function will be a second-order Function in space, which uses points of the non-staggered type and which we will evaluate with the D0 array. The symbolic name damp will be assigned to this field.
damp = Function(name="damp",grid=grid,space_order=2,staggered=NODE,dtype=np.float64)
damp.data[:,:] = D0
examples/seismic/abc_methods/02_damping.ipynb
opesci/devito
mit
The expressions for the acoustic equation with damping can be separated between the white and blue regions. Translating these expressions into an eq that can be inserted in a Devito code, in the white region the equation takes the form eq1 = u.dt2 - vel0 * vel0 * u.laplace, and in the blue region we have eq2 = u.dt2 + vel0 * vel0 * damp * u.dtc - vel0 * vel0 * u.laplace. Here u.dtc represents the centered derivative with respect to the variable $t$ for the field u. We then set the two PDEs for the two regions:
pde0 = Eq(u.dt2 - u.laplace*vel0**2)
pde1 = Eq(u.dt2 - u.laplace*vel0**2 + vel0**2*damp*u.dtc)
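Written out in standard notation (a sketch consistent with the expressions above, where $c$ is the velocity field vel0 and $\zeta$ the damping function damp), the two regimes read:

$$\frac{\partial^{2} u}{\partial t^{2}} - c^{2}\,\nabla^{2} u = 0 \quad \text{(white region)},$$

$$\frac{\partial^{2} u}{\partial t^{2}} + c^{2}\,\zeta\,\frac{\partial u}{\partial t} - c^{2}\,\nabla^{2} u = 0 \quad \text{(blue region)}.$$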
examples/seismic/abc_methods/02_damping.ipynb
opesci/devito
mit
As in the notebook <a href="introduction.ipynb">Introduction to Acoustic Problem</a>, we define the stencils for each of the PDEs created previously. pde0 is defined only in the white region, which is represented by the subdomain d0. We then define stencil0, which solves pde0 in d0, as follows:
stencil0 = Eq(u.forward, solve(pde0,u.forward),subdomain = grid.subdomains['d0'])
examples/seismic/abc_methods/02_damping.ipynb
opesci/devito
mit
pde1 is applied in the blue region, the union of the subdomains d1, d2 and d3. We therefore create a list called subds comprising these three subdomains, and we are ready to set the corresponding stencils:
subds = ['d1','d2','d3']
stencil1 = [Eq(u.forward, solve(pde1,u.forward),subdomain = grid.subdomains[subds[i]]) for i in range(0,len(subds))]
examples/seismic/abc_methods/02_damping.ipynb
opesci/devito
mit
The boundary conditions of the problem are kept the same as in the notebook <a href="1_introduction.ipynb">Introduction to Acoustic Problem</a>. They are placed in the term bc and have the following form:
bc = [Eq(u[t+1,0,z],0.),Eq(u[t+1,nptx-1,z],0.),Eq(u[t+1,x,nptz-1],0.),Eq(u[t+1,x,0],u[t+1,x,1])]
examples/seismic/abc_methods/02_damping.ipynb
opesci/devito
mit
We then define the operator op, which joins the acoustic equation, source term, boundary conditions and receivers:

- the acoustic wave equation in the d0 region: [stencil0];
- the acoustic wave equation in the d1, d2 and d3 regions: [stencil1];
- source term: src_term;
- boundary conditions: bc;
- receivers: rec_term.
# NBVAL_IGNORE_OUTPUT
op = Operator([stencil0,stencil1] + src_term + bc + rec_term,subs=grid.spacing_map)
examples/seismic/abc_methods/02_damping.ipynb
opesci/devito
mit
We reset the field u:
u.data[:] = 0.
examples/seismic/abc_methods/02_damping.ipynb
opesci/devito
mit
We pass to op the number of time steps it must execute and the size of the time step, via the arguments time and dt, respectively.
# NBVAL_IGNORE_OUTPUT
op(time=nt,dt=dt0)
examples/seismic/abc_methods/02_damping.ipynb
opesci/devito
mit
To view the displacement field at the final time, we use the graph2d routine given by:
def graph2d(U):
    plot.figure()
    plot.figure(figsize=(16,8))
    fscale = 1/10**(3)
    scale = np.amax(U[npmlx:-npmlx,0:-npmlz])/10.
    extent = [fscale*x0pml,fscale*x1pml,fscale*z1pml,fscale*z0pml]
    fig = plot.imshow(np.transpose(U[npmlx:-npmlx,0:-npmlz]), vmin=-scale, vmax=scale, cmap=cm.seismic, extent=extent)
    plot.gca().xaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f km'))
    plot.gca().yaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f km'))
    plot.axis('equal')
    plot.title('Map - Acoustic Problem with Devito')
    plot.grid()
    ax = plot.gca()
    divider = make_axes_locatable(ax)
    cax = divider.append_axes("right", size="5%", pad=0.05)
    cbar = plot.colorbar(fig, cax=cax, format='%.2e')
    cbar.set_label('Displacement [km]')
    plot.draw()
    plot.show()

# NBVAL_IGNORE_OUTPUT
graph2d(u.data[0,:,:])
examples/seismic/abc_methods/02_damping.ipynb
opesci/devito
mit
Note that the solution obtained here shows less noise than the results displayed in the notebook <a href="01_introduction.ipynb">Introduction to Acoustic Problem</a>. To plot the receiver data we use the graph2drec routine.
def graph2drec(rec):
    plot.figure()
    plot.figure(figsize=(16,8))
    fscaled = 1/10**(3)
    fscalet = 1/10**(3)
    scale = np.amax(rec[:,npmlx:-npmlx])/10.
    extent = [fscaled*x0pml,fscaled*x1pml, fscalet*tn, fscalet*t0]
    fig = plot.imshow(rec[:,npmlx:-npmlx], vmin=-scale, vmax=scale, cmap=cm.seismic, extent=extent)
    plot.gca().xaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f km'))
    plot.gca().yaxis.set_major_formatter(mticker.FormatStrFormatter('%.1f s'))
    plot.axis('equal')
    plot.title('Receivers Signal Profile with Damping - Devito')
    ax = plot.gca()
    divider = make_axes_locatable(ax)
    cax = divider.append_axes("right", size="5%", pad=0.05)
    cbar = plot.colorbar(fig, cax=cax, format='%.2e')
    plot.show()

# NBVAL_IGNORE_OUTPUT
graph2drec(rec.data)

assert np.isclose(np.linalg.norm(rec.data), 990, rtol=1)
examples/seismic/abc_methods/02_damping.ipynb
opesci/devito
mit
Finetuning and Training
%cd $DATA_HOME_DIR

# Set path to sample/ path if desired
path = DATA_HOME_DIR + '/' #'/sample/'
test_path = DATA_HOME_DIR + '/test1/' # We use all the test data

# FloydHub
# data needs to be output under /output
# if results_path cannot be created, execute mkdir directly in the terminal
results_path = OUTPUT_HOME_DIR + '/results/'
%mkdir results_path

train_path = path + '/train/'
valid_path = path + '/valid/'
fast.ai/lesson1/dogscats_run.ipynb
kazuhirokomoda/deep_learning
mit
Use a pretrained VGG model with our Vgg16 class
# As large as you can, but no larger than 64 is recommended.
#batch_size = 8
batch_size = 64

no_of_epochs = 3
fast.ai/lesson1/dogscats_run.ipynb
kazuhirokomoda/deep_learning
mit
The original pre-trained Vgg16 class classifies images into one of 1000 categories. This number of categories comes from the dataset Vgg16 was trained on (http://image-net.org/challenges/LSVRC/2014/browse-synsets). In order to classify images into the categories we prepare (2 categories, dogs/cats, in this notebook), the fine-tuning technique is useful. It:

- keeps most of the weights from the pre-trained Vgg16 model, modifying only a small part of them
- changes the dimension of the output layer (from 1000 to 2, in this notebook)

A sketch of this idea is shown below, before the actual training code.
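For intuition only, here is a minimal sketch of the usual fine-tuning recipe for a Keras 1 Sequential model. It illustrates the idea described above and is not necessarily the exact implementation inside Vgg16.finetune(); the function name finetune_for_classes is just an illustrative placeholder.

```python
from keras.layers import Dense
from keras.optimizers import Adam

def finetune_for_classes(model, num_classes=2):
    # Replace the final 1000-way output layer with a fresh num_classes-way softmax,
    # keeping (freezing) all pre-trained layers.
    model.pop()                                   # drop the original output layer
    for layer in model.layers:
        layer.trainable = False                   # keep the pre-trained weights fixed
    model.add(Dense(num_classes, activation='softmax'))  # new 2-way output for dogs/cats
    model.compile(optimizer=Adam(lr=0.001),
                  loss='categorical_crossentropy',
                  metrics=['accuracy'])
    return model
```

In this notebook, vgg.finetune(batches) takes care of this step (and compiles the model internally, as noted in the comment below).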
vgg = Vgg16()

# Grab a few images at a time for training and validation.
batches = vgg.get_batches(train_path, batch_size=batch_size)
val_batches = vgg.get_batches(valid_path, batch_size=batch_size*2)

# Finetune: note that the vgg model is compiled inside the finetune method.
vgg.finetune(batches)

# Fit: note that we are passing in the validation dataset to the fit() method
# For each epoch we test our model against the validation set
latest_weights_filename = None

# FloydHub (Keras1)
for epoch in range(no_of_epochs):
    print("Running epoch: %d" % epoch)
    vgg.fit(batches, val_batches, nb_epoch=1)
    latest_weights_filename = 'ft%d.h5' % epoch
    vgg.model.save_weights(results_path+latest_weights_filename)
print("Completed %s fit operations" % no_of_epochs)

# alternatively, for local (Keras2)
"""
for epoch in range(no_of_epochs):
    print("Running epoch: %d" % epoch)
    vgg.fit(batches, val_batches, batch_size, nb_epoch=1)
    latest_weights_filename = 'ft%d.h5' % epoch
    vgg.model.save_weights(results_path+latest_weights_filename)
print("Completed %s fit operations" % no_of_epochs)
"""
fast.ai/lesson1/dogscats_run.ipynb
kazuhirokomoda/deep_learning
mit
Generate Predictions
# OUTPUT_HOME_DIR, not DATA_HOME_DIR due to FloydHub restriction
%cd $OUTPUT_HOME_DIR
%mkdir -p test1/unknown
%cd $OUTPUT_HOME_DIR/test1
%cp $test_path/*.jpg unknown/

# rewrite test_path
test_path = OUTPUT_HOME_DIR + '/test1/' # We use all the test data

batches, preds = vgg.test(test_path, batch_size = batch_size*2)
print(preds[:5])

filenames = batches.filenames
print(filenames[:5])

# You can verify the column ordering by viewing some images
from PIL import Image
Image.open(test_path + filenames[2])

# Save our test results arrays so we can use them again later
save_array(results_path + 'test_preds.dat', preds)
save_array(results_path + 'filenames.dat', filenames)
fast.ai/lesson1/dogscats_run.ipynb
kazuhirokomoda/deep_learning
mit
Validate Predictions

Calculate predictions on the validation set, so we can find correct and incorrect examples:
vgg.model.load_weights(results_path+latest_weights_filename)
val_batches, probs = vgg.test(valid_path, batch_size = batch_size)

filenames = val_batches.filenames
expected_labels = val_batches.classes # 0 or 1

# Round our predictions to 0/1 to generate labels
our_predictions = probs[:,0]
our_labels = np.round(1-our_predictions)
fast.ai/lesson1/dogscats_run.ipynb
kazuhirokomoda/deep_learning
mit
(TODO) look at data to improve model

Confusion matrix
from sklearn.metrics import confusion_matrix

cm = confusion_matrix(expected_labels, our_labels)
plot_confusion_matrix(cm, val_batches.class_indices)
fast.ai/lesson1/dogscats_run.ipynb
kazuhirokomoda/deep_learning
mit
Submit Predictions to Kaggle!

This section also depends on which dataset you use (and which Kaggle competition you are participating in).
# Load our test predictions from file
preds = load_array(results_path + 'test_preds.dat')
filenames = load_array(results_path + 'filenames.dat')

# Grab the dog prediction column
isdog = preds[:,1]
print("Raw Predictions: " + str(isdog[:5]))
print("Mid Predictions: " + str(isdog[(isdog < .6) & (isdog > .4)]))
print("Edge Predictions: " + str(isdog[(isdog == 1) | (isdog == 0)]))

# sneaky trick to round down our edge predictions
# Swap all ones with .95 and all zeros with .05
isdog = isdog.clip(min=0.05, max=0.95)

# Extract imageIds from the filenames in our test/unknown directory
filenames = batches.filenames
ids = np.array([int(f[8:f.find('.')]) for f in filenames])

subm = np.stack([ids,isdog], axis=1)
subm[:5]

# FloydHub
%cd $OUTPUT_HOME_DIR
# alternatively, for local
#%cd $DATA_HOME_DIR

submission_file_name = 'submission1.csv'
np.savetxt(submission_file_name, subm, fmt='%d,%.5f', header='id,label', comments='')

from IPython.display import FileLink

# FloydHub
%cd $OUTPUT_HOME_DIR
FileLink(submission_file_name)
# alternatively, for local
#%cd $LESSON_HOME_DIR
#FileLink('data/redux/'+submission_file_name)
fast.ai/lesson1/dogscats_run.ipynb
kazuhirokomoda/deep_learning
mit
Steps to use the TF Experiment APIs

1. Define dataset metadata
2. Define data input function to read the data from csv files + feature processing
3. Create TF feature columns based on metadata + extended feature columns
4. Define an estimator (DNNRegressor) creation function with the required feature columns & parameters
5. Define a serving function to export the model
6. Run an Experiment with learn_runner to train, evaluate, and export the model
7. Evaluate the model using test data
8. Perform predictions
MODEL_NAME = 'reg-model-03'

TRAIN_DATA_FILES_PATTERN = 'data/train-*.csv'
VALID_DATA_FILES_PATTERN = 'data/valid-*.csv'
TEST_DATA_FILES_PATTERN = 'data/test-*.csv'

RESUME_TRAINING = False
PROCESS_FEATURES = True
EXTEND_FEATURE_COLUMNS = True
MULTI_THREADING = True
01_Regression/04.0 - TF Regression Model - Dataset Input.ipynb
GoogleCloudPlatform/tf-estimator-tutorials
apache-2.0
1. Define Dataset Metadata

- CSV file header and defaults
- Numeric and categorical feature names
- Target feature name
- Unused columns
HEADER = ['key','x','y','alpha','beta','target']
HEADER_DEFAULTS = [[0], [0.0], [0.0], ['NA'], ['NA'], [0.0]]

NUMERIC_FEATURE_NAMES = ['x', 'y']

CATEGORICAL_FEATURE_NAMES_WITH_VOCABULARY = {'alpha':['ax01', 'ax02'], 'beta':['bx01', 'bx02']}
CATEGORICAL_FEATURE_NAMES = list(CATEGORICAL_FEATURE_NAMES_WITH_VOCABULARY.keys())

FEATURE_NAMES = NUMERIC_FEATURE_NAMES + CATEGORICAL_FEATURE_NAMES

TARGET_NAME = 'target'

UNUSED_FEATURE_NAMES = list(set(HEADER) - set(FEATURE_NAMES) - {TARGET_NAME})

print("Header: {}".format(HEADER))
print("Numeric Features: {}".format(NUMERIC_FEATURE_NAMES))
print("Categorical Features: {}".format(CATEGORICAL_FEATURE_NAMES))
print("Target: {}".format(TARGET_NAME))
print("Unused Features: {}".format(UNUSED_FEATURE_NAMES))
01_Regression/04.0 - TF Regression Model - Dataset Input.ipynb
GoogleCloudPlatform/tf-estimator-tutorials
apache-2.0
2. Define Data Input Function

- Input csv files name pattern
- Use TF Dataset APIs to read and process the data
- Parse CSV lines to feature tensors
- Apply feature processing
- Return (features, target) tensors

a. parsing and preprocessing logic
def parse_csv_row(csv_row):
    columns = tf.decode_csv(csv_row, record_defaults=HEADER_DEFAULTS)
    features = dict(zip(HEADER, columns))
    for column in UNUSED_FEATURE_NAMES:
        features.pop(column)
    target = features.pop(TARGET_NAME)
    return features, target

def process_features(features):
    features["x_2"] = tf.square(features['x'])
    features["y_2"] = tf.square(features['y'])
    features["xy"] = tf.multiply(features['x'], features['y']) # features['x'] * features['y']
    features['dist_xy'] = tf.sqrt(tf.squared_difference(features['x'],features['y']))
    return features
01_Regression/04.0 - TF Regression Model - Dataset Input.ipynb
GoogleCloudPlatform/tf-estimator-tutorials
apache-2.0
b. data pipeline input function
def csv_input_fn(files_name_pattern, mode=tf.estimator.ModeKeys.EVAL,
                 skip_header_lines=0,
                 num_epochs=None,
                 batch_size=200):

    shuffle = True if mode == tf.estimator.ModeKeys.TRAIN else False

    print("")
    print("* data input_fn:")
    print("================")
    print("Input file(s): {}".format(files_name_pattern))
    print("Batch size: {}".format(batch_size))
    print("Epoch Count: {}".format(num_epochs))
    print("Mode: {}".format(mode))
    print("Shuffle: {}".format(shuffle))
    print("================")
    print("")

    file_names = tf.matching_files(files_name_pattern)
    dataset = data.TextLineDataset(filenames=file_names)

    dataset = dataset.skip(skip_header_lines)

    if shuffle:
        dataset = dataset.shuffle(buffer_size=2 * batch_size + 1)

    # useful for distributed training when training on 1 data file, so it can be sharded
    #dataset = dataset.shard(num_workers, worker_index)

    dataset = dataset.batch(batch_size)
    dataset = dataset.map(lambda csv_row: parse_csv_row(csv_row))

    if PROCESS_FEATURES:
        dataset = dataset.map(lambda features, target: (process_features(features), target))

    #dataset = dataset.batch(batch_size) #??? very long time
    dataset = dataset.repeat(num_epochs)
    iterator = dataset.make_one_shot_iterator()

    features, target = iterator.get_next()
    return features, target

features, target = csv_input_fn(files_name_pattern="")
print("Feature read from CSV: {}".format(list(features.keys())))
print("Target read from CSV: {}".format(target))
01_Regression/04.0 - TF Regression Model - Dataset Input.ipynb
GoogleCloudPlatform/tf-estimator-tutorials
apache-2.0
3. Define Feature Columns

The input numeric columns are assumed to be normalized (or to have the same scale). Otherwise, a normalizer_fn, along with the normalization parameters (mean, stdv), should be passed to the tf.feature_column.numeric_column() constructor.
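As an aside, a minimal sketch of passing a normalizer_fn; the mean and standard deviation below are made-up placeholders, in practice they would be computed from the training data:

```python
# Hypothetical pre-computed statistics for feature 'x' (placeholders, not taken from the actual data)
X_MEAN = 0.0
X_STDV = 1.0

scaled_x_column = tf.feature_column.numeric_column(
    'x', normalizer_fn=lambda value: (value - X_MEAN) / X_STDV)  # standardise 'x' on the fly
```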
def extend_feature_columns(feature_columns):

    # crossing, bucketizing, and embedding can be applied here
    feature_columns['alpha_X_beta'] = tf.feature_column.crossed_column(
        [feature_columns['alpha'], feature_columns['beta']], 4)

    return feature_columns

def get_feature_columns():

    CONSTRUCTED_NUMERIC_FEATURES_NAMES = ['x_2', 'y_2', 'xy', 'dist_xy']
    all_numeric_feature_names = NUMERIC_FEATURE_NAMES.copy()

    if PROCESS_FEATURES:
        all_numeric_feature_names += CONSTRUCTED_NUMERIC_FEATURES_NAMES

    numeric_columns = {feature_name: tf.feature_column.numeric_column(feature_name)
                       for feature_name in all_numeric_feature_names}

    categorical_column_with_vocabulary = \
        {item[0]: tf.feature_column.categorical_column_with_vocabulary_list(item[0], item[1])
         for item in CATEGORICAL_FEATURE_NAMES_WITH_VOCABULARY.items()}

    feature_columns = {}

    if numeric_columns is not None:
        feature_columns.update(numeric_columns)

    if categorical_column_with_vocabulary is not None:
        feature_columns.update(categorical_column_with_vocabulary)

    if EXTEND_FEATURE_COLUMNS:
        feature_columns = extend_feature_columns(feature_columns)

    return feature_columns

feature_columns = get_feature_columns()
print("Feature Columns: {}".format(feature_columns))
01_Regression/04.0 - TF Regression Model - Dataset Input.ipynb
GoogleCloudPlatform/tf-estimator-tutorials
apache-2.0
4. Define an Estimator Creation Function

- Get dense (numeric) columns from the feature columns
- Convert categorical columns to indicator columns
- Instantiate a DNNRegressor estimator given dense + indicator feature columns + params
def create_estimator(run_config, hparams):

    feature_columns = list(get_feature_columns().values())

    dense_columns = list(
        filter(lambda column: isinstance(column, feature_column._NumericColumn),
               feature_columns)
    )

    categorical_columns = list(
        filter(lambda column: isinstance(column, feature_column._VocabularyListCategoricalColumn) |
                              isinstance(column, feature_column._BucketizedColumn),
               feature_columns)
    )

    indicator_columns = list(
        map(lambda column: tf.feature_column.indicator_column(column),
            categorical_columns)
    )

    estimator = tf.estimator.DNNRegressor(
        feature_columns= dense_columns + indicator_columns,
        hidden_units= hparams.hidden_units,
        optimizer= tf.train.AdamOptimizer(),
        activation_fn= tf.nn.elu,
        dropout= hparams.dropout_prob,
        config= run_config
    )

    print("")
    print("Estimator Type: {}".format(type(estimator)))
    print("")

    return estimator
01_Regression/04.0 - TF Regression Model - Dataset Input.ipynb
GoogleCloudPlatform/tf-estimator-tutorials
apache-2.0
5. Define Serving Function
def csv_serving_input_fn():

    SERVING_HEADER = ['x','y','alpha','beta']
    SERVING_HEADER_DEFAULTS = [[0.0], [0.0], ['NA'], ['NA']]

    rows_string_tensor = tf.placeholder(dtype=tf.string,
                                        shape=[None],
                                        name='csv_rows')

    receiver_tensor = {'csv_rows': rows_string_tensor}

    row_columns = tf.expand_dims(rows_string_tensor, -1)
    columns = tf.decode_csv(row_columns, record_defaults=SERVING_HEADER_DEFAULTS)
    features = dict(zip(SERVING_HEADER, columns))

    return tf.estimator.export.ServingInputReceiver(
        process_features(features), receiver_tensor)
01_Regression/04.0 - TF Regression Model - Dataset Input.ipynb
GoogleCloudPlatform/tf-estimator-tutorials
apache-2.0
6. Run Experiment

a. Define Experiment Function
def generate_experiment_fn(**experiment_args):

    def _experiment_fn(run_config, hparams):

        train_input_fn = lambda: csv_input_fn(
            files_name_pattern=TRAIN_DATA_FILES_PATTERN,
            mode = tf.contrib.learn.ModeKeys.TRAIN,
            num_epochs=hparams.num_epochs,
            batch_size=hparams.batch_size
        )

        eval_input_fn = lambda: csv_input_fn(
            files_name_pattern=VALID_DATA_FILES_PATTERN,
            mode=tf.contrib.learn.ModeKeys.EVAL,
            num_epochs=1,
            batch_size=hparams.batch_size
        )

        estimator = create_estimator(run_config, hparams)

        return tf.contrib.learn.Experiment(
            estimator,
            train_input_fn=train_input_fn,
            eval_input_fn=eval_input_fn,
            eval_steps=None,
            **experiment_args
        )

    return _experiment_fn
01_Regression/04.0 - TF Regression Model - Dataset Input.ipynb
GoogleCloudPlatform/tf-estimator-tutorials
apache-2.0
b. Set HParam and RunConfig
TRAIN_SIZE = 12000
NUM_EPOCHS = 1000
BATCH_SIZE = 500
NUM_EVAL = 10
CHECKPOINT_STEPS = int((TRAIN_SIZE/BATCH_SIZE) * (NUM_EPOCHS/NUM_EVAL))

hparams = tf.contrib.training.HParams(
    num_epochs = NUM_EPOCHS,
    batch_size = BATCH_SIZE,
    hidden_units=[8, 4],
    dropout_prob = 0.0)

model_dir = 'trained_models/{}'.format(MODEL_NAME)

run_config = tf.contrib.learn.RunConfig(
    save_checkpoints_steps=CHECKPOINT_STEPS,
    tf_random_seed=19830610,
    model_dir=model_dir
)

print(hparams)
print("Model Directory:", run_config.model_dir)
print("")
print("Dataset Size:", TRAIN_SIZE)
print("Batch Size:", BATCH_SIZE)
print("Steps per Epoch:", TRAIN_SIZE/BATCH_SIZE)
print("Total Steps:", (TRAIN_SIZE/BATCH_SIZE)*NUM_EPOCHS)
print("Required Evaluation Steps:", NUM_EVAL)
print("That is 1 evaluation step after each", NUM_EPOCHS/NUM_EVAL, " epochs")
print("Save Checkpoint After", CHECKPOINT_STEPS, "steps")
01_Regression/04.0 - TF Regression Model - Dataset Input.ipynb
GoogleCloudPlatform/tf-estimator-tutorials
apache-2.0
c. Run Experiment via learn_runner
if not RESUME_TRAINING:
    print("Removing previous artifacts...")
    shutil.rmtree(model_dir, ignore_errors=True)
else:
    print("Resuming training...")

tf.logging.set_verbosity(tf.logging.INFO)

time_start = datetime.utcnow()
print("Experiment started at {}".format(time_start.strftime("%H:%M:%S")))
print(".......................................")

learn_runner.run(
    experiment_fn=generate_experiment_fn(
        export_strategies=[make_export_strategy(
            csv_serving_input_fn,
            exports_to_keep=1
        )]
    ),
    run_config=run_config,
    schedule="train_and_evaluate",
    hparams=hparams
)

time_end = datetime.utcnow()
print(".......................................")
print("Experiment finished at {}".format(time_end.strftime("%H:%M:%S")))
print("")
time_elapsed = time_end - time_start
print("Experiment elapsed time: {} seconds".format(time_elapsed.total_seconds()))
01_Regression/04.0 - TF Regression Model - Dataset Input.ipynb
GoogleCloudPlatform/tf-estimator-tutorials
apache-2.0
7. Evaluate the Model
TRAIN_SIZE = 12000
VALID_SIZE = 3000
TEST_SIZE = 5000

train_input_fn = lambda: csv_input_fn(files_name_pattern= TRAIN_DATA_FILES_PATTERN,
                                      mode= tf.estimator.ModeKeys.EVAL,
                                      batch_size= TRAIN_SIZE)

valid_input_fn = lambda: csv_input_fn(files_name_pattern= VALID_DATA_FILES_PATTERN,
                                      mode= tf.estimator.ModeKeys.EVAL,
                                      batch_size= VALID_SIZE)

test_input_fn = lambda: csv_input_fn(files_name_pattern= TEST_DATA_FILES_PATTERN,
                                     mode= tf.estimator.ModeKeys.EVAL,
                                     batch_size= TEST_SIZE)

estimator = create_estimator(run_config, hparams)

train_results = estimator.evaluate(input_fn=train_input_fn, steps=1)
train_rmse = round(math.sqrt(train_results["average_loss"]), 5)
print()
print("############################################################################################")
print("# Train RMSE: {} - {}".format(train_rmse, train_results))
print("############################################################################################")

valid_results = estimator.evaluate(input_fn=valid_input_fn, steps=1)
valid_rmse = round(math.sqrt(valid_results["average_loss"]), 5)
print()
print("############################################################################################")
print("# Valid RMSE: {} - {}".format(valid_rmse, valid_results))
print("############################################################################################")

test_results = estimator.evaluate(input_fn=test_input_fn, steps=1)
test_rmse = round(math.sqrt(test_results["average_loss"]), 5)
print()
print("############################################################################################")
print("# Test RMSE: {} - {}".format(test_rmse, test_results))
print("############################################################################################")
01_Regression/04.0 - TF Regression Model - Dataset Input.ipynb
GoogleCloudPlatform/tf-estimator-tutorials
apache-2.0
8. Prediction
import itertools

predict_input_fn = lambda: csv_input_fn(files_name_pattern=TEST_DATA_FILES_PATTERN,
                                        mode= tf.estimator.ModeKeys.PREDICT,
                                        batch_size= 5)

predictions = estimator.predict(input_fn=predict_input_fn)
values = list(map(lambda item: item["predictions"][0], list(itertools.islice(predictions, 5))))
print()
print("Predicted Values: {}".format(values))
01_Regression/04.0 - TF Regression Model - Dataset Input.ipynb
GoogleCloudPlatform/tf-estimator-tutorials
apache-2.0
Pure Sinusoid Sliding Window

In this first experiment, you will alter the extent of the sliding window of a pure sinusoid and examine how the geometry of a 2-D embedding changes. First, set up and plot a pure sinusoid in NumPy:
# Step 1: Setup the signal
T = 40 # The period in number of samples
NPeriods = 4 # How many periods to go through
N = T*NPeriods # The total number of samples
t = np.linspace(0, 2*np.pi*NPeriods, N+1)[:N] # Sampling indices in time
x = np.cos(t) # The final signal
plt.plot(x);
SlidingWindow1-Basics.ipynb
ctralie/TUMTopoTimeSeries2016
apache-2.0
Sliding Window Code

The code below performs a sliding window embedding on a 1D signal. The parameters are as follows:

| Parameter | Description |
|:-:|---|
|$x$ | The 1-D signal (numpy array) |
|dim | The dimension of the embedding |
|$\tau$ | The skip between samples in a given window |
|$dT$ | The distance to slide from one window to the next |

That is, along the signal given by the array $x$, the first three windows will be

$$\begin{bmatrix} x(0) & x(\tau) & \ldots & x((\mbox{dim}-1)\cdot\tau)\end{bmatrix}, \quad \begin{bmatrix} x(dT) & x(dT+\tau) & \ldots & x(dT+(\mbox{dim}-1)\cdot\tau)\end{bmatrix}, \quad \begin{bmatrix} x(2dT) & x(2dT+\tau) & \ldots & x(2dT+(\mbox{dim}-1)\cdot\tau)\end{bmatrix}$$

Spline interpolation is used to fill in information between signal samples, which is necessary for certain combinations of parameters, such as a non-integer $\tau$ or $dT$. The function getSlidingWindow below creates an array $X$ containing the windows as its rows.
def getSlidingWindow(x, dim, Tau, dT):
    """
    Return a sliding window of a time series, using arbitrary sampling.
    Use spline interpolation to fill in values in windows not on the original grid

    Parameters
    ----------
    x: ndarray(N)
        The original time series
    dim: int
        Dimension of sliding window (number of lags+1)
    Tau: float
        Length between lags, in units of time series
    dT: float
        Length between windows, in units of time series

    Returns
    -------
    X: ndarray(N, dim)
        All sliding windows stacked up
    """
    N = len(x)
    NWindows = int(np.floor((N-dim*Tau)/dT))
    if NWindows <= 0:
        print("Error: Tau too large for signal extent")
        return np.zeros((3, dim))
    X = np.zeros((NWindows, dim))
    spl = InterpolatedUnivariateSpline(np.arange(N), x)
    for i in range(NWindows):
        idxx = dT*i + Tau*np.arange(dim)
        start = int(np.floor(idxx[0]))
        end = int(np.ceil(idxx[-1]))+2
        # Only take windows that are within range
        if end >= len(x):
            X = X[0:i, :]
            break
        X[i, :] = spl(idxx)
    return X
SlidingWindow1-Basics.ipynb
ctralie/TUMTopoTimeSeries2016
apache-2.0
Sliding Window Result

We will now perform a sliding window embedding with various choices of parameters. Principal component analysis will be performed to project the result down to 2D for visualization. The first two eigenvalues computed by PCA will be printed. The closer these eigenvalues are to each other, the rounder (closer to a circle) the 2D projection of the embedding is. A red vertical line will be drawn to show the product of $\tau$ and the dimension, or "extent" (window length).

An important note: we choose to project the results to 2D (or later, to 3D). Nothing in particular tells us that this is the best choice of dimension; we merely make this choice to enable visualization. In general, when doing PCA, we want to choose enough eigenvalues to account for a significant portion of the explained variance.

Exercise: Execute the code. Using the sliders, play around with the parameters of the sliding window embedding and examine the results. Then answer the questions below.
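As an aside (not part of the original notebook), a minimal sketch of checking how much variance a 2-D projection retains, using sklearn's explained_variance_ratio_ on a stand-in data matrix X:

```python
import numpy as np
from sklearn.decomposition import PCA

X = np.random.randn(200, 20)                  # stand-in for the matrix of sliding windows (rows)
pca = PCA(n_components=2).fit(X)
print(pca.explained_variance_ratio_)          # fraction of variance captured by each component
print(pca.explained_variance_ratio_.sum())    # total fraction retained by the 2-D projection
```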
def on_value_change(change):
    execute_computation1()

dimslider = widgets.IntSlider(min=1,max=40,value=20,description='Dimension:',continuous_update=False)
dimslider.observe(on_value_change, names='value')

Tauslider = widgets.FloatSlider(min=0.1,max=5,step=0.1,value=1,description=r'\(\tau :\)',continuous_update=False)
Tauslider.observe(on_value_change, names='value')

dTslider = widgets.FloatSlider(min=0.1,max=5,step=0.1,value=0.5,description='dT: ',continuous_update=False)
dTslider.observe(on_value_change, names='value')

display(widgets.HBox((dimslider,Tauslider,dTslider)))

plt.figure(figsize=(9.5, 3))

def execute_computation1():
    plt.clf()

    # Step 1: Setup the signal again in case x was lost
    T = 40 # The period in number of samples
    NPeriods = 4 # How many periods to go through
    N = T*NPeriods # The total number of samples
    t = np.linspace(0, 2*np.pi*NPeriods, N+1)[0:N] # Sampling indices in time
    x = np.cos(t) # The final signal

    # Get slider values
    dim = dimslider.value
    Tau = Tauslider.value
    dT = dTslider.value

    # Step 2: Do a sliding window embedding
    X = getSlidingWindow(x, dim, Tau, dT)
    extent = Tau*dim

    # Step 3: Perform PCA down to 2D for visualization
    pca = PCA(n_components = 2)
    Y = pca.fit_transform(X)
    eigs = pca.explained_variance_
    print("lambda1 = %g, lambda2 = %g"%(eigs[0], eigs[1]))

    # Step 4: Plot original signal and PCA of the embedding
    ax = plt.subplot(121)
    ax.plot(x)
    ax.set_ylim((-2*max(x), 2*max(x)))
    ax.set_title("Original Signal")
    ax.set_xlabel("Sample Number")
    yr = np.max(x)-np.min(x)
    yr = [np.min(x)-0.1*yr, np.max(x)+0.1*yr]
    ax.plot([extent, extent], yr, 'r')
    ax.plot([0, 0], yr, 'r')
    ax.plot([0, extent], [yr[0]]*2, 'r')
    ax.plot([0, extent], [yr[1]]*2, 'r')

    ax2 = plt.subplot(122)
    ax2.set_title("PCA of Sliding Window Embedding")
    ax2.scatter(Y[:, 0], Y[:, 1])
    ax2.set_aspect('equal', 'datalim')
    plt.tight_layout()

execute_computation1()
SlidingWindow1-Basics.ipynb
ctralie/TUMTopoTimeSeries2016
apache-2.0
Questions

For fixed $\tau$:

- What does varying the dimension do to the extent (the length of the window)?
- What dimensions give eigenvalues nearest each other? (Note: dimensions! Plural!) Explain why this is the case.
- Explain how you might use this information to deduce the period of a signal.

What does varying $dT$ do to the PCA embedding? Explain this in terms of the definition of sliding windows above.

The command `np.random.randn(pts)` generates an array of length pts filled with random values drawn from a standard normal distribution ($\mu=0$, $\sigma=1$). Modify the code above to add random noise to the signal.

- Can you still detect the period visually by inspecting the plot of the signal? Does your method of detecting the period from the first question still work?
- How does adding noise change the geometry of the PCA embedding? Modify the amplitude of the noise (for example, by multiplying the noise-generating command by a constant) and examine the 2D projection. What feature of the 2D projection appears to imply that the signal is periodic? At what noise amplitude does this feature appear to vanish?

Non-Periodic Signal Sliding Window

For a contrasting example, we will now examine the sliding window embedding of a non-periodic signal, which is a linear function plus Gaussian noise. The code below sets up the signal and then does the sliding window embedding, as before.

Exercise: Execute the code. Using the sliders, play around with the parameters of the sliding window embedding and examine the results. Then answer the questions below.
noise = 0.05*np.random.randn(400)

def on_value_change(change):
    execute_computation2()

dimslider = widgets.IntSlider(min=1,max=40,value=20,description='Dimension:',continuous_update=False)
dimslider.observe(on_value_change, names='value')

Tauslider = widgets.FloatSlider(min=0.1,max=5,step=0.1,value=1,description='Tau: ',continuous_update=False)
Tauslider.observe(on_value_change, names='value')

dTslider = widgets.FloatSlider(min=0.1,max=5,step=0.1,value=0.5,description='dT: ',continuous_update=False)
dTslider.observe(on_value_change, names='value')

display(widgets.HBox((dimslider,Tauslider,dTslider)))

plt.figure(figsize=(9.5, 3))

def execute_computation2():
    plt.clf()

    # Step 1: Set up the signal
    x = np.arange(400)
    x = x/float(len(x))
    x = x + noise # Add some noise

    # Get slider values
    dim = dimslider.value
    Tau = Tauslider.value
    dT = dTslider.value

    # Step 2: Do a sliding window embedding
    X = getSlidingWindow(x, dim, Tau, dT)
    extent = Tau*dim

    # Step 3: Perform PCA down to 2D for visualization
    pca = PCA(n_components = 2)
    Y = pca.fit_transform(X)
    eigs = pca.explained_variance_
    print("lambda1 = %g, lambda2 = %g"%(eigs[0], eigs[1]))

    # Step 4: Plot original signal and PCA of the embedding
    gs = gridspec.GridSpec(1, 2)
    ax = plt.subplot(gs[0,0])
    ax.plot(x)
    ax.set_ylim((-2, 2))
    ax.set_title("Original Signal")
    ax.set_xlabel("Sample Number")
    yr = np.max(x)-np.min(x)
    yr = [np.min(x)-0.1*yr, np.max(x)+0.1*yr]
    ax.plot([extent, extent], yr, 'r')
    ax.plot([0, 0], yr, 'r')
    ax.plot([0, extent], [yr[0]]*2, 'r')
    ax.plot([0, extent], [yr[1]]*2, 'r')

    ax2 = plt.subplot(gs[0, 1])
    ax2.set_title("PCA of Sliding Window Embedding")
    ax2.scatter(Y[:, 0], Y[:, 1])
    ax2.set_aspect('equal', 'datalim')

execute_computation2()
SlidingWindow1-Basics.ipynb
ctralie/TUMTopoTimeSeries2016
apache-2.0
Questions

- Notice how changing the window extent doesn't have the same impact as it did in the periodic example above. Why might this be?
- Why is the second eigenvalue always tiny?

Multiple Sines Sliding Window

We will now go back to periodic signals, but this time we will increase the complexity by adding two sines together. If the ratio between the two sinusoids is a rational number, then they are called harmonics of each other. For example, $\sin(t)$ and $\sin(3t)$ are harmonics of each other. By contrast, if the ratio of the two frequencies is irrational, then the sinusoids are called incommensurate. For example, $\sin(t)$ and $\sin(\pi t)$ are incommensurate.

The plots below are initialized with

$$f(t) = \sin(\omega t) + \sin(3\omega t),$$

a sum of two harmonics. This time, the eigenvalues of PCA will be plotted (up to the first 10), in addition to the red line showing the extent of the window. Also, 3D PCA will be displayed instead of 2D PCA, and you can click and drag your mouse to view it from different angles. Colors will be drawn to indicate the position of the window in time, with cool colors (greens and yellows) indicating earlier windows and hot colors (oranges and reds) indicating later windows (using the "jet" colormap).

Exercise: Execute the code. Then play with the sliders, as well as the embedding dimension (note that for the 3-D projection, you can change the view by dragging around. Do!). Then, try changing the second sinusoid to be another multiple of the first. Try both harmonic and incommensurate values. Once you have gotten a feel for the geometries and the eigenvalues, answer the questions below.
def on_value_change(change):
    execute_computation3()

embeddingdimbox = widgets.Dropdown(options=[2, 3],value=3,description='Embedding Dimension:',disabled=False)
embeddingdimbox.observe(on_value_change,names='value')

secondfreq = widgets.Dropdown(options=[2, 3, np.pi],value=3,description='Second Frequency:',disabled=False)
secondfreq.observe(on_value_change,names='value')

noiseampslider = widgets.FloatSlider(min=0,max=6,step=0.5,value=0,description='Noise Amplitude',continuous_update=False)
noiseampslider.observe(on_value_change, names='value')

dimslider = widgets.IntSlider(min=1,max=100,value=30,description='Dimension:',continuous_update=False)
dimslider.observe(on_value_change, names='value')

Tauslider = widgets.FloatSlider(min=0.1,max=5,step=0.1,value=1,description='Tau: ',continuous_update=False)
Tauslider.observe(on_value_change, names='value')

dTslider = widgets.FloatSlider(min=0.1,max=5,step=0.1,value=0.5,description='dT: ',continuous_update=False)
dTslider.observe(on_value_change, names='value')

display(widgets.HBox((secondfreq,embeddingdimbox,noiseampslider)))
display(widgets.HBox((dimslider,Tauslider,dTslider)))

noise = np.random.randn(10000)

fig = plt.figure(figsize=(9.5, 4))

def execute_computation3():
    plt.clf()

    # Step 1: Setup the signal
    T1 = 20 # The period of the first sine in number of samples
    T2 = T1*secondfreq.value # The period of the second sine in number of samples
    NPeriods = 10 # How many periods to go through, relative to the second sinusoid
    N = T2*NPeriods # The total number of samples
    t = np.arange(N) # Time indices
    x = np.cos(2*np.pi*(1.0/T1)*t) # The first sinusoid
    x += np.cos(2*np.pi*(1.0/T2)*t) # Add the second sinusoid
    x += noiseampslider.value*noise[:len(x)]

    # Get widget values
    dim = dimslider.value
    Tau = Tauslider.value
    dT = dTslider.value
    embeddingdim = embeddingdimbox.value

    # Step 2: Do a sliding window embedding
    X = getSlidingWindow(x, dim, Tau, dT)
    extent = Tau*dim

    # Step 3: Perform PCA down to dimension chosen for visualization
    pca = PCA(n_components = 10)
    Y = pca.fit_transform(X)
    eigs = pca.explained_variance_

    # Step 4: Plot original signal and PCA of the embedding
    gs = gridspec.GridSpec(2, 2,width_ratios=[1, 2])

    # Plot the signal
    ax = plt.subplot(gs[0,0])
    ax.plot(x)
    yr = np.max(x)-np.min(x)
    yr = [np.min(x)-0.1*yr, np.max(x)+0.1*yr]
    ax.plot([extent, extent], yr, 'r')
    ax.plot([0, 0], yr, 'r')
    ax.plot([0, extent], [yr[0]]*2, 'r')
    ax.plot([0, extent], [yr[1]]*2, 'r')
    ax.set_title("Original Signal")
    ax.set_xlabel("Sample Number")

    c = plt.get_cmap('jet')
    C = c(np.array(np.round(np.linspace(0, 255, Y.shape[0])), dtype=np.int32))
    C = C[:, 0:3]

    # Plot the PCA embedding
    if embeddingdim == 3:
        ax2 = plt.subplot(gs[:,1],projection='3d')
        ax2.scatter(Y[:, 0], Y[:, 1], Y[:, 2], c=C)
        ax2.set_aspect('equal', 'datalim')
    else:
        ax2 = plt.subplot(gs[:,1])
        ax2.scatter(Y[:, 0], Y[:, 1],c=C)
        ax2.set_title("PCA of Sliding Window Embedding")
        ax2.set_aspect('equal', 'datalim')

    # Plot the eigenvalues as bars
    ax3 = plt.subplot(gs[1,0])
    eigs = eigs[0:min(len(eigs), 10)]
    ax3.bar(np.arange(len(eigs)), eigs)
    ax3.set_xlabel("Eigenvalue Number")
    ax3.set_ylabel("Eigenvalue")
    ax3.set_title("PCA Eigenvalues")

    plt.tight_layout()
    plt.show();

execute_computation3()
SlidingWindow1-Basics.ipynb
ctralie/TUMTopoTimeSeries2016
apache-2.0
Questions

- Comment on the relationship between the eigenvalues and the extent (width) of the window.
- When are the eigenvalues near each other? When are they not?
- Comment on the change in geometry when the second sinusoid is incommensurate to the first. Specifically, comment on the intrinsic dimension of the object in the projection. Can you name the shape in the 3-D projection in the incommensurate case?
- Try adding noise as you did in the single frequency case. Can you distinguish in the projection between the incommensurate case and the noisy, but harmonic one with second frequency 3? Explain. What can you say about the eigenvalues in the two cases? Explain your answer.

It seems reasonable to ask what the ideal dimension to embed into is. While that question may be answerable, it would be better to bypass the question altogether. Similarly, it seems that beyond detecting the largest period, these tools are limited in detecting the secondary ones. Topological tools that we will see beginning in the next lab will allow us to make some progress toward that goal.

Power Spectrum

We saw above that the rather subtle change in frequency obtained by switching the second sinusoid from harmonic to noncommensurate produces a marked change in the geometry. By contrast, the power spectral density functions of the two signals are very close, as shown below. Hence, it appears that geometric tools are more appropriate for telling the difference between these two types of signals.
T = 20 # The period of the first sine in number of samples
NPeriods = 10 # How many periods to go through, relative to the faster sinusoid
N = T*NPeriods*3 # The total number of samples
t = np.arange(N) # Time indices

# Make the harmonic signal cos(t) + cos(3t)
xH = np.cos(2*np.pi*(1.0/T)*t) + np.cos(2*np.pi*(1.0/(3*T)*t))

# Make the incommensurate signal cos(t) + cos(pi*t)
xNC = np.cos(2*np.pi*(1.0/T)*t) + np.cos(2*np.pi*(1.0/(np.pi*T)*t))

plt.figure()
P1 = np.abs(np.fft.fft(xH))**2
P2 = np.abs(np.fft.fft(xNC))**2
plt.plot(np.arange(len(P1)), P1)
plt.plot(np.arange(len(P2)), P2)
plt.xlabel("Frequency Index")
plt.legend({"Harmonic", "Noncommensurate"})
plt.xlim([0, 50])
plt.show();
SlidingWindow1-Basics.ipynb
ctralie/TUMTopoTimeSeries2016
apache-2.0
Render with nupic.frameworks.viz.NetworkVisualizer, which takes as input any nupic.engine.Network instance:
from nupic.frameworks.viz import NetworkVisualizer

# Initialize Network Visualizer
viz = NetworkVisualizer(network)

# Render to dot (stdout)
viz.render()
src/nupic/frameworks/viz/examples/Demo.ipynb
pulinagrawal/nupic
agpl-3.0
That's interesting, but not necessarily useful if you don't understand dot. Let's capture that output and do something else:
from nupic.frameworks.viz import DotRenderer
from io import StringIO

outp = StringIO()
viz.render(renderer=lambda: DotRenderer(outp))
src/nupic/frameworks/viz/examples/Demo.ipynb
pulinagrawal/nupic
agpl-3.0
outp now contains the rendered output; render it to an image with graphviz:
# Render dot to image
from graphviz import Source
from IPython.display import Image

Image(Source(outp.getvalue()).pipe("png"))
src/nupic/frameworks/viz/examples/Demo.ipynb
pulinagrawal/nupic
agpl-3.0
In the example above, each three-columned rectangle is a discrete region, the user-defined name for which is in the middle column. The left-hand and right-hand columns are the inputs and outputs, respectively, the names for which, e.g. "bottomUpIn" and "bottomUpOut", are specific to the region type. The arrows indicate links from the outputs of one region to the inputs of another. I know what you're thinking. That's a cool trick, but nobody cares about your contrived example. I want to see something real! Continuing below, I'll instantiate a CLA model and visualize it. In this case, I'll use one of the "hotgym" examples.
from nupic.frameworks.opf.modelfactory import ModelFactory # Note: parameters copied from examples/opf/clients/hotgym/simple/model_params.py model = ModelFactory.create({'aggregationInfo': {'hours': 1, 'microseconds': 0, 'seconds': 0, 'fields': [('consumption', 'sum')], 'weeks': 0, 'months': 0, 'minutes': 0, 'days': 0, 'milliseconds': 0, 'years': 0}, 'model': 'CLA', 'version': 1, 'predictAheadTime': None, 'modelParams': {'sensorParams': {'verbosity': 0, 'encoders': {'timestamp_timeOfDay': {'type': 'DateEncoder', 'timeOfDay': (21, 1), 'fieldname': u'timestamp', 'name': u'timestamp_timeOfDay'}, u'consumption': {'resolution': 0.88, 'seed': 1, 'fieldname': u'consumption', 'name': u'consumption', 'type': 'RandomDistributedScalarEncoder'}, 'timestamp_weekend': {'type': 'DateEncoder', 'fieldname': u'timestamp', 'name': u'timestamp_weekend', 'weekend': 21}}, 'sensorAutoReset': None}, 'spParams': {'columnCount': 2048, 'spVerbosity': 0, 'spatialImp': 'cpp', 'synPermConnected': 0.1, 'seed': 1956, 'numActiveColumnsPerInhArea': 40, 'globalInhibition': 1, 'inputWidth': 0, 'synPermInactiveDec': 0.005, 'synPermActiveInc': 0.04, 'potentialPct': 0.85, 'boostStrength': 3.0}, 'spEnable': True, 'clParams': {'implementation': 'cpp', 'alpha': 0.1, 'verbosity': 0, 'steps': '1,5', 'regionName': 'SDRClassifierRegion'}, 'inferenceType': 'TemporalMultiStep', 'tmEnable': True, 'tmParams': {'columnCount': 2048, 'activationThreshold': 16, 'pamLength': 1, 'cellsPerColumn': 32, 'permanenceInc': 0.1, 'minThreshold': 12, 'verbosity': 0, 'maxSynapsesPerSegment': 32, 'outputType': 'normal', 'initialPerm': 0.21, 'globalDecay': 0.0, 'maxAge': 0, 'permanenceDec': 0.1, 'seed': 1960, 'newSynapseCount': 20, 'maxSegmentsPerCell': 128, 'temporalImp': 'cpp', 'inputWidth': 2048}, 'trainSPNetOnlyIfRequested': False}})
src/nupic/frameworks/viz/examples/Demo.ipynb
pulinagrawal/nupic
agpl-3.0
Same deal as before, create a NetworkVisualizer instance, render to a buffer, then to an image, and finally display it inline.
# New network, new NetworkVisualizer instance
viz = NetworkVisualizer(model._netInfo.net)

# Render to Dot output to buffer
outp = StringIO()
viz.render(renderer=lambda: DotRenderer(outp))

# Render Dot to image, display inline
Image(Source(outp.getvalue()).pipe("png"))
src/nupic/frameworks/viz/examples/Demo.ipynb
pulinagrawal/nupic
agpl-3.0
The constants module

Before going further in explaining Maybrain's functionality, it is important to briefly introduce the constants module. This module defines constants which can be used elsewhere, rather than writing the values by hand everywhere, which is prone to typos. In further notebooks you will see this module being used in practice, but for now just a normal import is required:
from maybrain import constants as ct

# Printing some of the constants
print(ct.WEIGHT)
print(ct.ANAT_LABEL)
docs/01 - Simple Usage.ipynb
RittmanResearch/maybrain
apache-2.0
The resources package

Maybrain also has another package that can be useful for different things. In essence, it is just a package giving access to files like matrices, properties, etc. When importing this package, you will have access to different variables holding the paths to those files on your system. Further in the documentation you will see this package being used in practice, but for now just a normal import is required:
from maybrain import resources as rr
docs/01 - Simple Usage.ipynb
RittmanResearch/maybrain
apache-2.0
Importing an Adjacency Matrix

First, create a Brain object:
a = mbt.Brain()

print("Nodes: ", a.G.nodes())
print("Edges: ", a.G.edges())
print("Adjacency matrix: ", a.adjMat)
docs/01 - Simple Usage.ipynb
RittmanResearch/maybrain
apache-2.0
This creates a brain object, where a graph (from the package NetworkX) is stored as a.G, initially empty. Then import the adjacency matrix. The import_adj_file() function imports the adjacency matrix to form the nodes of your graph, but does not create any edges (connections), as you can check from the following outputs. Note the use of the resources package. In maybrain you can access a dummy adjacency matrix (500x500) for various reasons; in this case, just for testing purposes.
a.import_adj_file(rr.DUMMY_ADJ_FILE_500)

print("Number of nodes:\n", a.G.number_of_nodes())
print("First 5 nodes (notice labelling starting with 0):\n", list(a.G.nodes())[0:5])
print("Edges:\n", a.G.edges())
print("Size of Adjacency matrix:\n", a.adjMat.shape)
docs/01 - Simple Usage.ipynb
RittmanResearch/maybrain
apache-2.0
If you wish to create a fully connected graph with all the available values in the adjacency matrix, it is necessary to threshold it, which is explained in the next section.

Thresholding

There are a few ways to apply a threshold: either an absolute threshold across the whole graph, preserving a specified number of edges or a percentage of the total possible edges, or a local thresholding that begins with the minimum spanning tree and adds successive n-nearest-neighbour graphs. The advantage of local thresholding is that the graph will always be fully connected, which is necessary to collect some graph measures.

For an absolute threshold you have several possibilities. Note that our adjacency matrix (a.adjMat) always stays the same, so we can apply all the thresholds we want to create our graph (a.G) accordingly. Also notice that in this specific case of an undirected graph we are dealing with a symmetric adjacency matrix, so although a.adjMat will always have the size of 500x500, a.G will not.
# Bring everything from the adjacency matrix to a.G
a.apply_threshold()
print("Number of edges (notice it corresponds to the upper half edges of adjacency matrix):\n", a.G.number_of_edges())
print("Size of Adjacency matrix after 1st threshold:\n", a.adjMat.shape)

# Retain the most strongly connected 1000 edges
a.apply_threshold(threshold_type= "totalEdges", value = 1000)
print("\nNumber of edges after 2nd threshold:\n", a.G.number_of_edges())
print("Size of Adjacency matrix after 2nd threshold:\n", a.adjMat.shape)

# Retain the 5% most connected edges as a percentage of the total possible number of edges
a.apply_threshold(threshold_type = "edgePC", value = 5)
print("\nNumber of edges after 3rd threshold:\n", a.G.number_of_edges())
print("Size of Adjacency matrix after 3rd threshold:\n", a.adjMat.shape)

# Retain edges with a weight greater than 0.3
a.apply_threshold(threshold_type= "tVal", value = 0.3)
print("\nNumber of edges after 4th threshold:\n", a.G.number_of_edges())
print("Size of Adjacency matrix after 4th threshold:\n", a.adjMat.shape)
docs/01 - Simple Usage.ipynb
RittmanResearch/maybrain
apache-2.0
The options for local thresholding are similar. Note that local thresholding always yields a connected graph and, in the case where no arguments are passed, the graph will be the Minimum Spanning Tree. Local thresholding can be very slow for bigger matrices because in each step it adds successive N-nearest-neighbour graphs.
a.local_thresholding()
print("Is the graph connected? ", mbt.nx.is_connected(a.G))

a.local_thresholding(threshold_type="edgePC", value = 5)
print("Is the graph connected? ", mbt.nx.is_connected(a.G))

a.local_thresholding(threshold_type="totalEdges", value = 10000)
print("Is the graph connected? ", mbt.nx.is_connected(a.G))
docs/01 - Simple Usage.ipynb
RittmanResearch/maybrain
apache-2.0
Absolute Thresholding

In a real brain network, an edge with a high negative value is as strong as an edge with a high positive value. So, if you want to threshold in order to get the most strongly connected edges (both negative and positive), you just have to pass the argument use_absolute=True to apply_threshold(). In the case of the brain that we are using in this notebook there are not many negative edges. Thus, we have to threshold the 80% most strongly connected edges in order to see a difference (notice the use of the constants module (ct) to access the weight property of each edge):
# Thresholding the 80% most strongly connected edges
a.apply_threshold(threshold_type="edgePC", value=80)

for e in a.G.edges(data=True):
    # Printing the edges with negative weight
    if e[2][ct.WEIGHT] < 0:
        print(e) # This line is never executed because a negative weighted edge is not strong enough

# Absolute thresholding of the 70% most strongly connected edges
print("Edges with negative weight which belong to the 70% strongest ones:")
a.apply_threshold(threshold_type="edgePC", value=70, use_absolute=True)
for e in a.G.edges(data=True):
    if e[2][ct.WEIGHT] < 0:
        print(e)

# Absolute thresholding of the 80% most strongly connected edges
print("\nEdges with negative weight which belong to the 80% strongest ones:")
a.apply_threshold(threshold_type="edgePC", value=80, use_absolute=True)
for e in a.G.edges(data=True):
    if e[2][ct.WEIGHT] < 0:
        print(e)
docs/01 - Simple Usage.ipynb
RittmanResearch/maybrain
apache-2.0
Binary and Absolute Graphs

If necessary, the graph can be binarised so that weights are removed. You can see that essentially this means that each edge will have a weight of 1.
a.binarise()
print("Do all the edges have weight of 1?", all(e[2][ct.WEIGHT] == 1 for e in a.G.edges(data=True)))
docs/01 - Simple Usage.ipynb
RittmanResearch/maybrain
apache-2.0
Also, you can make all the weights take their absolute value, instead of negative and positive values:
# Applying threshold again because of last changes
a.apply_threshold()
print("Do all the edges have a positive weight before?", all(e[2][ct.WEIGHT] >= 0 for e in a.G.edges(data=True)))

a.make_edges_absolute()
print("Do all the edges have a positive weight?", all(e[2][ct.WEIGHT] >= 0 for e in a.G.edges(data=True)))
docs/01 - Simple Usage.ipynb
RittmanResearch/maybrain
apache-2.0
Importing 3D Spatial Information

You can add spatial information to each node of your graph. You need this information if you want to use the visualisation tools of Maybrain. To do so, provide Maybrain with a file that has 4 columns: an anatomical label, and x, y and z coordinates, e.g.:

0 70.800000 30.600000 53.320000
1 32.064909 62.154158 69.707911
2 59.870968 92.230014 41.552595
3 19.703504 66.398922 52.878706

Ideally these values would be in MNI space (this makes it easier to import background images for plotting and for some other functions), but this is not absolutely necessary. We are using the resources package again to get an already prepared text file with spatial information for a brain with 500 regions in the MNI template:
# Initially, you don't have anatomical/spatial attributes in each node:
print("Attributes: ", mbt.nx.get_node_attributes(a.G, ct.ANAT_LABEL), "/", mbt.nx.get_node_attributes(a.G, ct.XYZ))

# After calling import_spatial_info(), you can see the node's attributes
a.import_spatial_info(rr.MNI_SPACE_COORDINATES_500)
print("Attributes of one node: ", mbt.nx.get_node_attributes(a.G, ct.ANAT_LABEL)[0], "/", mbt.nx.get_node_attributes(a.G, ct.XYZ)[0])
docs/01 - Simple Usage.ipynb
RittmanResearch/maybrain
apache-2.0
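For illustration only (not part of the original notebook), and assuming rr.MNI_SPACE_COORDINATES_500 is the path of a plain whitespace-separated text file in the 4-column layout described above, you could peek at it with pandas before handing it to Maybrain:

import pandas as pd

# One row per region: anatomical label plus x, y, z coordinates.
coords = pd.read_csv(rr.MNI_SPACE_COORDINATES_500, sep=r"\s+", header=None,
                     names=["label", "x", "y", "z"])
print(coords.head())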
Properties in Nodes and Edges We have already seen that nodes can carry spatial properties after calling import_spatial_info(), and edges carry a weight property after applying thresholds. You can also add properties to nodes or edges from a text file. The format should be as follows:
property
node1 value
node2 value2
(...)
node1 node2 value1
node3 node4 value2
(...)
Let's give a specific example. Imagine that you want to add a property about colours. You can use this file, which is transcribed here:
colour
1 red
3 red
6 green
0 blue
1 3 green
1 2 green
1 0 grey
2 3 green
2 0 red
3 0 green
Note that the first line contains the property name. Subsequent lines refer to edges if they contain 3 terms and to nodes if they contain 2. The above gives node 1 the property 'colour' with value 'red' and node 6 the same property with value 'green'. Nodes 0 and 3 also get the property 'colour', with values 'blue' and 'red', respectively. The edge connecting nodes 1 and 3 gets the same property with value 'green', and the other 5 edges get it with their own values. These properties are stored in the G object from networkx. To make the property features easier to see, we will import a shorter adjacency matrix with just 4 nodes (link here). In the following code a warning is printed because we tried to add a property to node 6, which doesn't exist; the other properties are still added. Note that, because the brain is undirected, adding the property to edge (1,0) is the same as adding it to edge (0,1); the same happens with edges (2,0) and (3,0). No property was imported for node 2 because it is not specified in the properties file.
# Creating a new Brain and importing the shorter adjacency matrix
b = mbt.Brain()
b.import_adj_file("data/3d_grid_adj.txt")
b.apply_threshold()

print("Edges and nodes information:")
for e in b.G.edges(data=True):
    print(e)
for n in b.G.nodes(data=True):
    print(n)

# Importing properties and showing again edges and nodes
print("\nImporting properties...")
b.import_properties("data/3d_grid_properties.txt")

print("\nEdges and nodes information after importing properties:")
for e in b.G.edges(data=True):
    print(e)
for n in b.G.nodes(data=True):
    print(n)
docs/01 - Simple Usage.ipynb
RittmanResearch/maybrain
apache-2.0
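To make the file format concrete, here is a rough sketch of the parsing rule described above (an illustration only, not Maybrain's actual implementation): the first line is the property name, two-token lines set node properties, and three-token lines set edge properties.

with open("data/3d_grid_properties.txt") as f:
    prop_name = f.readline().strip()          # first line: the property name
    for line in f:
        tokens = line.split()
        if len(tokens) == 2:                  # node line: node, value
            print("node", int(tokens[0]), prop_name, "=", tokens[1])
        elif len(tokens) == 3:                # edge line: node1, node2, value
            print("edge", (int(tokens[0]), int(tokens[1])), prop_name, "=", tokens[2])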
Notice that if we threshold the brain again, the edges are created from scratch and their properties are lost. The same doesn't happen with nodes, as they are always present in the G object. By default, edge properties are not re-imported every time you threshold the brain; however, you can change that behaviour by setting the field update_props_after_threshold to True.
# Rethresholding the brain, thus losing the edge properties
b.apply_threshold(threshold_type="totalEdges", value=0)
b.apply_threshold()

print("Edges information:")
for e in b.G.edges(data=True):
    print(e)

# Setting the field to allow automatic importing of properties after a threshold
print("\nSetting b.update_props_after_threshold and rethresholding again...")
b.apply_threshold(threshold_type="totalEdges", value=0)
b.update_props_after_threshold = True
b.apply_threshold()  # Now, a warning is thrown just like before

print("\nEdges information again:")
for e in b.G.edges(data=True):
    print(e)
docs/01 - Simple Usage.ipynb
RittmanResearch/maybrain
apache-2.0
You can also import properties from a dictionary, both for nodes and for edges. In the following example, two dictionaries are created with the values of a property named own_property, which is then added to the brain:
nodes_props = {0: "val1", 1: "val2"}
edges_props = {(0, 1): "edge_val1", (2, 3): "edge_val2"}

b.import_edge_props_from_dict("own_property", edges_props)
b.import_node_props_from_dict("own_property", nodes_props)

print("\nEdges information:")
for e in b.G.edges(data=True):
    print(e)
print("\nNodes information:")
for n in b.G.nodes(data=True):
    print(n)
docs/01 - Simple Usage.ipynb
RittmanResearch/maybrain
apache-2.0
Load the Dataset Here, we create a directory called explore under ../data. This directory will hold the dataset that we copy from Google Cloud Storage.
if not os.path.isdir("../data/explore"): os.makedirs("../data/explore")
notebooks/launching_into_ml/labs/supplemental/python.BQ_explore_data.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Next, we copy the Usahousing dataset from Google Cloud Storage.
!gsutil cp gs://cloud-training-demos/feat_eng/housing/housing_pre-proc.csv ../data/explore
notebooks/launching_into_ml/labs/supplemental/python.BQ_explore_data.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Then we use the "ls" command to list files in the directory. This ensures that the dataset was copied.
!ls -l ../data/explore
notebooks/launching_into_ml/labs/supplemental/python.BQ_explore_data.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Next, we read the dataset into a Pandas dataframe.
df_USAhousing = # TODO 1: Your code goes here
notebooks/launching_into_ml/labs/supplemental/python.BQ_explore_data.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
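One plausible way to complete TODO 1 (a sketch, assuming pandas is imported as pd earlier in the notebook, and using the file copied from Cloud Storage above):

df_USAhousing = pd.read_csv("../data/explore/housing_pre-proc.csv")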
Inspect the Data
# Show the first five rows.
df_USAhousing.head()
notebooks/launching_into_ml/labs/supplemental/python.BQ_explore_data.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Let's check for any null values.
df_USAhousing.isnull().sum()

df_stats = df_USAhousing.describe()
df_stats = df_stats.transpose()
df_stats

df_USAhousing.info()
notebooks/launching_into_ml/labs/supplemental/python.BQ_explore_data.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Let's take a quick look at the overall shape of the data: the number of rows and columns, the feature names, and the counts of missing and unique values.
print("Rows     : ", df_USAhousing.shape[0])
print("Columns  : ", df_USAhousing.shape[1])
print("\nFeatures : \n", df_USAhousing.columns.tolist())
print("\nMissing values : ", df_USAhousing.isnull().sum().values.sum())
print("\nUnique values : \n", df_USAhousing.nunique())
notebooks/launching_into_ml/labs/supplemental/python.BQ_explore_data.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Explore the Data Let's create some simple plots to check out the data!
_ = sns.heatmap(df_USAhousing.corr())
notebooks/launching_into_ml/labs/supplemental/python.BQ_explore_data.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
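Depending on your pandas version, df.corr() may raise an error on the non-numeric "ocean_proximity" column; a hedged variant of the cell above restricts the correlation to numeric columns:

# Only needed on pandas versions where corr() no longer silently drops non-numeric columns.
_ = sns.heatmap(df_USAhousing.corr(numeric_only=True))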
Create a distplot showing "median_house_value".
# TODO 2a: Your code goes here

sns.set_style("whitegrid")
df_USAhousing["median_house_value"].hist(bins=30)
plt.xlabel("median_house_value")

x = df_USAhousing["median_income"]
y = df_USAhousing["median_house_value"]
plt.scatter(x, y)
plt.show()
notebooks/launching_into_ml/labs/supplemental/python.BQ_explore_data.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
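A possible completion of TODO 2a (a sketch; recent seaborn releases replace the older distplot with histplot, so the exact call depends on your seaborn version):

# Distribution of "median_house_value" with a kernel density estimate overlaid.
sns.histplot(df_USAhousing["median_house_value"], bins=30, kde=True)
plt.xlabel("median_house_value")
plt.show()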
Create a jointplot showing "median_income" versus "median_house_value".
# TODO 2b: Your code goes here

sns.countplot(x="ocean_proximity", data=df_USAhousing)

# takes numeric only?
# plt.figure(figsize=(20,20))
g = sns.FacetGrid(df_USAhousing, col="ocean_proximity")
_ = g.map(plt.hist, "households")

# takes numeric only?
# plt.figure(figsize=(20,20))
g = sns.FacetGrid(df_USAhousing, col="ocean_proximity")
_ = g.map(plt.hist, "median_income")
notebooks/launching_into_ml/labs/supplemental/python.BQ_explore_data.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
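A possible completion of TODO 2b (a sketch, using the column names from the cells above):

# Joint distribution of income vs. house value.
sns.jointplot(x="median_income", y="median_house_value", data=df_USAhousing, kind="scatter")
plt.show()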
You can see below that this is the state of California!
x = df_USAhousing["latitude"]
y = df_USAhousing["longitude"]
plt.scatter(x, y)
plt.show()
notebooks/launching_into_ml/labs/supplemental/python.BQ_explore_data.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Explore and create ML datasets In this notebook, we will explore data corresponding to taxi rides in New York City to build a Machine Learning model in support of a fare-estimation tool. The idea is to suggest a likely fare to taxi riders so that they are not surprised, and so that they can protest if the charge is much higher than expected. Learning Objectives:
1. Access and explore a public BigQuery dataset on NYC Taxi Cab rides
2. Visualize your dataset using the Seaborn library
<h3> Extract sample data from BigQuery </h3> The dataset that we will use is <a href="https://bigquery.cloud.google.com/table/nyc-tlc:yellow.trips">a BigQuery public dataset</a>. Click on the link and look at the column names. Switch to the Details tab to verify that the number of records is one billion, then switch to the Preview tab to look at a few rows. Let's write a SQL query to pick up interesting fields from the dataset. It's a good idea to get the timestamp in a predictable format.
%%bigquery
SELECT
    FORMAT_TIMESTAMP(
        "%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime,
    pickup_longitude,
    pickup_latitude,
    dropoff_longitude,
    dropoff_latitude,
    passenger_count,
    trip_distance,
    tolls_amount,
    fare_amount,
    total_amount
# TODO 3: Set correct BigQuery public dataset for nyc-tlc yellow taxi cab trips
# Tip: For projects with hyphens '-' be sure to escape with backticks ``
FROM

LIMIT 10
notebooks/launching_into_ml/labs/supplemental/python.BQ_explore_data.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
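One plausible completion of TODO 3 (a sketch): the same public table is spelled out in a later cell of this notebook as nyc-tlc.yellow.trips, and the hyphenated project name needs backticks, so the FROM clause would look like this.

FROM
    `nyc-tlc.yellow.trips`
LIMIT 10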
<h3> Exploring data </h3> Let's explore this dataset and clean it up as necessary. We'll use the Python Seaborn package to visualize graphs and Pandas to do the slicing and filtering.
# TODO 4: Visualize your dataset using the Seaborn library.
# Plot the distance of the trip as X and the fare amount as Y.
ax = sns.regplot(
    x="",
    y="",
    fit_reg=False,
    ci=None,
    truncate=True,
    data=trips,
)
ax.figure.set_size_inches(10, 8)
notebooks/launching_into_ml/labs/supplemental/python.BQ_explore_data.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Hmm ... do you see something wrong with the data that needs addressing? It appears that we have a lot of invalid data that is being coded as zero distance and some fare amounts that are definitely illegitimate. Let's remove them from our analysis. We can do this by modifying the BigQuery query to keep only trips longer than zero miles and fare amounts that are at least the minimum cab fare ($2.50). Note the extra WHERE clauses.
%%bigquery trips
SELECT
    FORMAT_TIMESTAMP(
        "%Y-%m-%d %H:%M:%S %Z", pickup_datetime) AS pickup_datetime,
    pickup_longitude,
    pickup_latitude,
    dropoff_longitude,
    dropoff_latitude,
    passenger_count,
    trip_distance,
    tolls_amount,
    fare_amount,
    total_amount
FROM
    `nyc-tlc.yellow.trips`
WHERE
    ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1
    # TODO 4a: Filter the data to only include non-zero distance trips and fares above $2.50
    AND

print(len(trips))

ax = sns.regplot(
    x="trip_distance",
    y="fare_amount",
    fit_reg=False,
    ci=None,
    truncate=True,
    data=trips,
)
ax.figure.set_size_inches(10, 8)
notebooks/launching_into_ml/labs/supplemental/python.BQ_explore_data.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
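A plausible completion of TODO 4a (a sketch), following the rule stated above: keep only trips with non-zero distance and fares of at least the $2.50 minimum.

AND trip_distance > 0
AND fare_amount >= 2.5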
Traversal: preorder, inorder, postorder
def preorder(tree):
    if tree:
        print(tree.get_root())
        preorder(tree.get_left_child())
        preorder(tree.get_right_child())


def postorder(tree):
    if tree:
        postorder(tree.get_left_child())
        postorder(tree.get_right_child())
        print(tree.get_root())


def inorder(tree):
    if tree:
        inorder(tree.get_left_child())
        print(tree.get_root())
        inorder(tree.get_right_child())


r = BinaryTree('root')
r.insert_left('l1')
r.insert_left('l2')
r.insert_right('r1')
r.insert_right('r2')
r.get_left_child().insert_right('r3')
preorder(r)
algorithms/tree.ipynb
namco1992/algorithms_in_python
mit
Priority Queues with Binary Heaps A binary heap is one way to implement a priority queue. A binary heap can be implemented with a complete binary tree, defined as follows: A complete binary tree is a binary tree in which every level, except possibly the last, is completely filled, and all nodes are as far left as possible. In other words, every level above the leaves is full, and the leaves are filled in from left to right. An important property of a complete binary tree: when it is represented as a list, a node at position p has its left child at position 2p and its right child at position 2p+1. To make this list representation work, the first slot list[0] is filled with 0 and the tree starts at list[1]. Operations:
BinaryHeap() creates a new, empty, binary heap.
insert(k) adds a new item to the heap.
findMin() returns the item with the minimum key value, leaving the item in the heap.
delMin() returns the item with the minimum key value, removing the item from the heap.
isEmpty() returns true if the heap is empty, false otherwise.
size() returns the number of items in the heap.
buildHeap(list) builds a new heap from a list of keys.
class BinHeap(object):
    def __init__(self):
        self.heap_list = [0]
        self.current_size = 0
algorithms/tree.ipynb
namco1992/algorithms_in_python
mit
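The cell above only sets up the empty heap, so here is a rough sketch (an illustration, not the original notebook's code; class and method names are assumptions) of how insert() and findMin() could use the list representation described above, where the parent of position i sits at i // 2:

class MinBinHeapSketch(object):
    def __init__(self):
        self.heap_list = [0]      # index 0 is a placeholder so children sit at 2p and 2p+1
        self.current_size = 0

    def insert(self, k):
        self.heap_list.append(k)
        self.current_size += 1
        self._perc_up(self.current_size)

    def _perc_up(self, i):
        # Swap the new item upwards while it is smaller than its parent at i // 2.
        while i // 2 > 0:
            if self.heap_list[i] < self.heap_list[i // 2]:
                self.heap_list[i], self.heap_list[i // 2] = self.heap_list[i // 2], self.heap_list[i]
            i = i // 2

    def find_min(self):
        return self.heap_list[1]


h = MinBinHeapSketch()
for value in [9, 5, 11, 14, 18, 7]:
    h.insert(value)
print(h.find_min())  # 5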
Binary Search Trees A binary search tree behaves very much like a dictionary (map). Operations:
Map() Create a new, empty map.
put(key,val) Add a new key-value pair to the map. If the key is already in the map then replace the old value with the new value.
get(key) Given a key, return the value stored in the map or None otherwise.
del Delete the key-value pair from the map using a statement of the form del map[key].
len() Return the number of key-value pairs stored in the map.
in Return True for a statement of the form key in map, if the given key is in the map.
class BinarySearchTree(object):
    def __init__(self):
        self.root = None
        self.size = 0

    def length(self):
        return self.size

    def __len__(self):
        return self.size

    def __iter__(self):
        return self.root.__iter__()

    def put(self, key, val):
        if self.root:
            self._put(key, val, self.root)
        else:
            self.root = TreeNode(key, val)
        self.size += 1

    def _put(self, key, val, current_node):
        # Compare against the node's key and walk left or right until a free slot is found.
        if key < current_node.key:
            if current_node.has_left_child():
                self._put(key, val, current_node.left_child)
            else:
                current_node.left_child = TreeNode(key, val, parent=current_node)
        else:
            if current_node.has_right_child():
                self._put(key, val, current_node.right_child)
            else:
                current_node.right_child = TreeNode(key, val, parent=current_node)

    def __setitem__(self, k, v):
        self.put(k, v)


class TreeNode(object):
    def __init__(self, key, val, left=None, right=None, parent=None):
        self.key = key
        self.payload = val
        self.left_child = left
        self.right_child = right
        self.parent = parent

    def has_left_child(self):
        return self.left_child

    def has_right_child(self):
        return self.right_child

    def is_root(self):
        return not self.parent

    def is_leaf(self):
        return not (self.right_child or self.left_child)

    def has_any_children(self):
        return self.right_child or self.left_child

    def has_both_children(self):
        return self.right_child and self.left_child

    def replace_node_data(self, key, value, lc, rc):
        self.key = key
        self.payload = value
        self.left_child = lc
        self.right_child = rc
        if self.has_left_child():
            self.left_child.parent = self
        if self.has_right_child():
            self.right_child.parent = self
algorithms/tree.ipynb
namco1992/algorithms_in_python
mit
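The class above only implements put. As an illustration (not part of the original notebook), a get operation could mirror it by comparing keys and walking left or right until it finds a match or runs out of nodes; here it is sketched as a small subclass so the snippet stays self-contained:

class BinarySearchTreeWithGet(BinarySearchTree):
    def get(self, key):
        if self.root:
            result = self._get(key, self.root)
            return result.payload if result else None
        return None

    def _get(self, key, current_node):
        # Walk down the tree exactly as _put does, but stop when the key matches.
        if not current_node:
            return None
        if current_node.key == key:
            return current_node
        if key < current_node.key:
            return self._get(key, current_node.left_child)
        return self._get(key, current_node.right_child)

    def __getitem__(self, key):
        return self.get(key)


t = BinarySearchTreeWithGet()
t.put(3, "three")
t.put(1, "one")
print(t.get(1), t.get(7))  # "one" None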
An integration engineer might prefer to copy system-of-records data into pandas.DataFrame objects. Note that pandas is itself capable of reading directly from various SQL databases, although it usually needs a supporting package like cx_Oracle.
from pandas import DataFrame

arcs = DataFrame({"Source": ["Denver", "Denver", "Denver", "Detroit", "Detroit", "Detroit"],
                  "Destination": ["Boston", "New York", "Seattle", "Boston", "New York", "Seattle"],
                  "Capacity": [120, 120, 120, 100, 80, 120]})
# PanDatFactory doesn't require the fields to be in order so long as the field names are supplied
arcs = arcs[["Destination", "Source", "Capacity"]]
arcs
examples/amplpy/netflow/netflow_other_data_sources.ipynb
opalytics/opalytics-ticdat
bsd-2-clause
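As a hedged sketch of the remark above (the DSN, credentials, table and column names here are placeholders, not from the original example), pandas can pull the same kind of arc data straight out of an Oracle system of record:

import cx_Oracle
import pandas as pd

# Placeholder connection details -- replace with your own system-of-record settings.
conn = cx_Oracle.connect("scott", "tiger", "dbhost:1521/orclpdb1")
arcs_from_db = pd.read_sql("SELECT source, destination, capacity FROM arcs", conn)
print(arcs_from_db.head())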
Next we create a PanDat input data object from the list-of-lists/DataFrame representations.
%env PATH = PATH:/Users/petercacioppi/ampl/ampl
from netflow import input_schema, solve, solution_schema

dat = input_schema.PanDat(commodities=commodities, nodes=nodes, cost=cost,
                          arcs=arcs, inflow=inflow)
examples/amplpy/netflow/netflow_other_data_sources.ipynb
opalytics/opalytics-ticdat
bsd-2-clause
We now create a PanDat solution data object by calling solve.
sln = solve(dat)
examples/amplpy/netflow/netflow_other_data_sources.ipynb
opalytics/opalytics-ticdat
bsd-2-clause
We now create a list-of-lists representation of the solution data object.
sln_lists = {t: list(map(list, getattr(sln, t).itertuples(index=False))) for t in solution_schema.all_tables}
examples/amplpy/netflow/netflow_other_data_sources.ipynb
opalytics/opalytics-ticdat
bsd-2-clause
Here we demonstrate that sln_lists is a dictionary mapping table name to list-of-lists of solution report data.
import pprint

for sln_table_name, sln_table_data in sln_lists.items():
    print("\n\n**\nSolution Table %s\n**" % sln_table_name)
    pprint.pprint(sln_table_data)
examples/amplpy/netflow/netflow_other_data_sources.ipynb
opalytics/opalytics-ticdat
bsd-2-clause
Of course the solution data object itself contains DataFrames, if that representation is preferred.
sln.flow
examples/amplpy/netflow/netflow_other_data_sources.ipynb
opalytics/opalytics-ticdat
bsd-2-clause
Using ticdat to build robust engines The preceding section demonstrated how we can use ticdat to build modular engines. We now demonstrate how we can use ticdat to build engines that check solve pre-conditions, and are thus robust with respect to data integrity problems. First, let's violate our (somewhat artificial) rule that the commodity volume must be positive.
dat.commodities.loc[dat.commodities["Name"] == "Pencils", "Volume"] = 0
dat.commodities
examples/amplpy/netflow/netflow_other_data_sources.ipynb
opalytics/opalytics-ticdat
bsd-2-clause
The input_schema can not only flag this problem, but also give us a useful data structure to examine.
data_type_failures = input_schema.find_data_type_failures(dat)
data_type_failures

data_type_failures['commodities', 'Volume']
examples/amplpy/netflow/netflow_other_data_sources.ipynb
opalytics/opalytics-ticdat
bsd-2-clause
Next, let's add a Cost record for a non-existent commodity and see how input_schema flags this problem.
dat.cost = dat.cost.append({'Commodity': 'Crayons', 'Source': 'Detroit',
                            'Destination': 'Seattle', 'Cost': 10}, ignore_index=True)

fk_failures = input_schema.find_foreign_key_failures(dat, verbosity="Low")
fk_failures

fk_failures['cost', 'commodities', ('Commodity', 'Name')]
examples/amplpy/netflow/netflow_other_data_sources.ipynb
opalytics/opalytics-ticdat
bsd-2-clause
Create a sampling layer
class Sampling(layers.Layer):
    """Uses (z_mean, z_log_var) to sample z, the vector encoding a digit."""

    def call(self, inputs):
        z_mean, z_log_var = inputs
        batch = tf.shape(z_mean)[0]
        dim = tf.shape(z_mean)[1]
        epsilon = tf.keras.backend.random_normal(shape=(batch, dim))
        return z_mean + tf.exp(0.5 * z_log_var) * epsilon
examples/generative/ipynb/vae.ipynb
keras-team/keras-io
apache-2.0
Build the encoder
latent_dim = 2

encoder_inputs = keras.Input(shape=(28, 28, 1))
x = layers.Conv2D(32, 3, activation="relu", strides=2, padding="same")(encoder_inputs)
x = layers.Conv2D(64, 3, activation="relu", strides=2, padding="same")(x)
x = layers.Flatten()(x)
x = layers.Dense(16, activation="relu")(x)
z_mean = layers.Dense(latent_dim, name="z_mean")(x)
z_log_var = layers.Dense(latent_dim, name="z_log_var")(x)
z = Sampling()([z_mean, z_log_var])
encoder = keras.Model(encoder_inputs, [z_mean, z_log_var, z], name="encoder")
encoder.summary()
examples/generative/ipynb/vae.ipynb
keras-team/keras-io
apache-2.0
Build the decoder
latent_inputs = keras.Input(shape=(latent_dim,))
x = layers.Dense(7 * 7 * 64, activation="relu")(latent_inputs)
x = layers.Reshape((7, 7, 64))(x)
x = layers.Conv2DTranspose(64, 3, activation="relu", strides=2, padding="same")(x)
x = layers.Conv2DTranspose(32, 3, activation="relu", strides=2, padding="same")(x)
decoder_outputs = layers.Conv2DTranspose(1, 3, activation="sigmoid", padding="same")(x)
decoder = keras.Model(latent_inputs, decoder_outputs, name="decoder")
decoder.summary()
examples/generative/ipynb/vae.ipynb
keras-team/keras-io
apache-2.0
Define the VAE as a Model with a custom train_step
class VAE(keras.Model):
    def __init__(self, encoder, decoder, **kwargs):
        super(VAE, self).__init__(**kwargs)
        self.encoder = encoder
        self.decoder = decoder
        self.total_loss_tracker = keras.metrics.Mean(name="total_loss")
        self.reconstruction_loss_tracker = keras.metrics.Mean(
            name="reconstruction_loss"
        )
        self.kl_loss_tracker = keras.metrics.Mean(name="kl_loss")

    @property
    def metrics(self):
        return [
            self.total_loss_tracker,
            self.reconstruction_loss_tracker,
            self.kl_loss_tracker,
        ]

    def train_step(self, data):
        with tf.GradientTape() as tape:
            z_mean, z_log_var, z = self.encoder(data)
            reconstruction = self.decoder(z)
            reconstruction_loss = tf.reduce_mean(
                tf.reduce_sum(
                    keras.losses.binary_crossentropy(data, reconstruction), axis=(1, 2)
                )
            )
            kl_loss = -0.5 * (1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var))
            kl_loss = tf.reduce_mean(tf.reduce_sum(kl_loss, axis=1))
            total_loss = reconstruction_loss + kl_loss
        grads = tape.gradient(total_loss, self.trainable_weights)
        self.optimizer.apply_gradients(zip(grads, self.trainable_weights))
        self.total_loss_tracker.update_state(total_loss)
        self.reconstruction_loss_tracker.update_state(reconstruction_loss)
        self.kl_loss_tracker.update_state(kl_loss)
        return {
            "loss": self.total_loss_tracker.result(),
            "reconstruction_loss": self.reconstruction_loss_tracker.result(),
            "kl_loss": self.kl_loss_tracker.result(),
        }
examples/generative/ipynb/vae.ipynb
keras-team/keras-io
apache-2.0
Train the VAE
(x_train, _), (x_test, _) = keras.datasets.mnist.load_data()
mnist_digits = np.concatenate([x_train, x_test], axis=0)
mnist_digits = np.expand_dims(mnist_digits, -1).astype("float32") / 255

vae = VAE(encoder, decoder)
vae.compile(optimizer=keras.optimizers.Adam())
vae.fit(mnist_digits, epochs=30, batch_size=128)
examples/generative/ipynb/vae.ipynb
keras-team/keras-io
apache-2.0
Display a grid of sampled digits
import matplotlib.pyplot as plt


def plot_latent_space(vae, n=30, figsize=15):
    # display a n*n 2D manifold of digits
    digit_size = 28
    scale = 1.0
    figure = np.zeros((digit_size * n, digit_size * n))
    # linearly spaced coordinates corresponding to the 2D plot
    # of digit classes in the latent space
    grid_x = np.linspace(-scale, scale, n)
    grid_y = np.linspace(-scale, scale, n)[::-1]

    for i, yi in enumerate(grid_y):
        for j, xi in enumerate(grid_x):
            z_sample = np.array([[xi, yi]])
            x_decoded = vae.decoder.predict(z_sample)
            digit = x_decoded[0].reshape(digit_size, digit_size)
            figure[
                i * digit_size : (i + 1) * digit_size,
                j * digit_size : (j + 1) * digit_size,
            ] = digit

    plt.figure(figsize=(figsize, figsize))
    start_range = digit_size // 2
    end_range = n * digit_size + start_range
    pixel_range = np.arange(start_range, end_range, digit_size)
    sample_range_x = np.round(grid_x, 1)
    sample_range_y = np.round(grid_y, 1)
    plt.xticks(pixel_range, sample_range_x)
    plt.yticks(pixel_range, sample_range_y)
    plt.xlabel("z[0]")
    plt.ylabel("z[1]")
    plt.imshow(figure, cmap="Greys_r")
    plt.show()


plot_latent_space(vae)
examples/generative/ipynb/vae.ipynb
keras-team/keras-io
apache-2.0
Display how the latent space clusters different digit classes
def plot_label_clusters(vae, data, labels):
    # display a 2D plot of the digit classes in the latent space
    z_mean, _, _ = vae.encoder.predict(data)
    plt.figure(figsize=(12, 10))
    plt.scatter(z_mean[:, 0], z_mean[:, 1], c=labels)
    plt.colorbar()
    plt.xlabel("z[0]")
    plt.ylabel("z[1]")
    plt.show()


(x_train, y_train), _ = keras.datasets.mnist.load_data()
x_train = np.expand_dims(x_train, -1).astype("float32") / 255

plot_label_clusters(vae, x_train, y_train)
examples/generative/ipynb/vae.ipynb
keras-team/keras-io
apache-2.0
set parameter
# parameter:
searchterm = "big data"  # lowercase!
colorlist = ["#01be70","#586bd0","#c0aa12","#0183e6","#f69234","#0095e9","#bd8600","#007bbe","#bb7300","#63bcfc","#a84a00","#01bedb","#82170e","#00c586","#a22f1f","#3fbe57","#3e4681","#9bc246","#9a9eec","#778f00","#00aad9","#fc9e5e","#01aec1","#832c1e","#55c99a","#dd715b","#017c1c","#ff9b74","#009556","#83392a","#00b39b","#8e5500","#50a7c6","#f4a268","#02aca7","#532b00","#67c4bd","#5e5500","#f0a18f","#007229","#d2b073","#005d3f","#a5be6b","#2a4100","#8cb88c","#2f5c00","#007463","#5b7200","#787c48","#3b7600"]
1-number of papers over time/Creating overview bar-plots.ipynb
MathiasRiechert/BigDataPapers
gpl-3.0
load data from SQL database:
dsn_tns = cx_Oracle.makedsn('127.0.0.1', '6025', service_name='bibliodb01.fiz.karlsruhe')
# Due to licence requirements, access is only allowed for members of the competence center
# of bibliometrics and cooperation partners. You can still continue with the resulting csv below.

# open connection:
db = cx_Oracle.connect(<username>, <password>, dsn_tns)
print(db.version)

#%% define sql-query function:
def read_query(connection, query):
    cursor = connection.cursor()
    try:
        cursor.execute(query)
        names = [x[0] for x in cursor.description]
        rows = cursor.fetchall()
        return pd.DataFrame(rows, columns=names)
    finally:
        if cursor is not None:
            cursor.close()

#%% load paper titles from WOSdb:
database = "wos_B_2016"
command = """SELECT DISTINCT(ARTICLE_TITLE), PUBYEAR
             FROM """ + database + """.KEYWORDS,
                  """ + database + """.ITEMS_KEYWORDS,
                  """ + database + """.ITEMS
             WHERE """ + database + """.ITEMS_KEYWORDS.FK_KEYWORDS=""" + database + """.KEYWORDS.PK_KEYWORDS
             AND """ + database + """.ITEMS.PK_ITEMS=""" + database + """.ITEMS_KEYWORDS.FK_ITEMS
             AND (lower(""" + database + """.KEYWORDS.KEYWORD) LIKE '%""" + searchterm + """%'
                  OR lower(ARTICLE_TITLE) LIKE '%""" + searchterm + """%')
             """
dfWOS = read_query(db, command)
dfWOS['wos'] = True  # to make the source identifiable
dfWOS.to_csv("all_big_data_titles_year_wos.csv", sep=';')

#%% load paper titles from SCOPUSdb:
database = "SCOPUS_B_2016"
command = """SELECT DISTINCT(ARTICLE_TITLE), PUBYEAR
             FROM """ + database + """.KEYWORDS,
                  """ + database + """.ITEMS_KEYWORDS,
                  """ + database + """.ITEMS
             WHERE """ + database + """.ITEMS_KEYWORDS.FK_KEYWORDS=""" + database + """.KEYWORDS.PK_KEYWORDS
             AND """ + database + """.ITEMS.PK_ITEMS=""" + database + """.ITEMS_KEYWORDS.FK_ITEMS
             AND (lower(""" + database + """.KEYWORDS.KEYWORD) LIKE '%""" + searchterm + """%'
                  OR lower(ARTICLE_TITLE) LIKE '%""" + searchterm + """%')
             """
dfSCOPUS = read_query(db, command)
dfSCOPUS['scopus'] = True  # to make the source identifiable
dfSCOPUS.to_csv("all_big_data_titles_year_scopus.csv", sep=';')
# this takes some time, we will work with the exported CSV from here on
1-number of papers over time/Creating overview bar-plots.ipynb
MathiasRiechert/BigDataPapers
gpl-3.0
merging data
dfWOS = pd.read_csv("all_big_data_titles_year_wos.csv", sep=";")
dfSCOPUS = pd.read_csv("all_big_data_titles_year_scopus.csv", sep=";")
df = pd.merge(dfWOS, dfSCOPUS, on='ARTICLE_TITLE', how='outer')

# get PUBYEAR in one column:
df.loc[df['wos'] == 1, 'PUBYEAR_y'] = df['PUBYEAR_x']

# save resulting csv again:
df = df[['ARTICLE_TITLE', 'PUBYEAR_y', 'wos', 'scopus']]
df.to_csv("all_big_data_titles_with_year.csv", sep=';')
df
1-number of papers over time/Creating overview bar-plots.ipynb
MathiasRiechert/BigDataPapers
gpl-3.0
grouping data
grouped = df.groupby(['PUBYEAR_y'])
df2 = grouped.agg('count').reset_index()
df2
1-number of papers over time/Creating overview bar-plots.ipynb
MathiasRiechert/BigDataPapers
gpl-3.0