As another example, the r.mapcalc wrapper for raster algebra allows using long expressions.
gscript.mapcalc("elev_strip = if(elevation > 100 && elevation < 125, elevation, null())")
print(gscript.read_command('r.univar', map='elev_strip', flags='g'))
GSOC/notebooks/Projects/GRASS/python-grass-addons/01_scripting_library.ipynb
OSGeo-live/CesiumWidget
apache-2.0
The g.region wrapper is a convenient way to retrieve the current region settings (i.e., computational region). It returns a dictionary with values converted to appropriate types (floats and ints).
region = gscript.region()
print region

# cell area in map units (in projected Locations)
region['nsres'] * region['ewres']
GSOC/notebooks/Projects/GRASS/python-grass-addons/01_scripting_library.ipynb
OSGeo-live/CesiumWidget
apache-2.0
We can list data stored in a GRASS GIS location with g.list wrappers. With this function, the map layers are grouped by mapsets (in this example, raster layers):
gscript.list_grouped(['raster'])
GSOC/notebooks/Projects/GRASS/python-grass-addons/01_scripting_library.ipynb
OSGeo-live/CesiumWidget
apache-2.0
Here is an example of a different g.list wrapper which structures the output as a list of pairs (name, mapset). We obtain the current mapset with the g.gisenv wrapper.
current_mapset = gscript.gisenv()['MAPSET']
gscript.list_pairs('raster', mapset=current_mapset)
GSOC/notebooks/Projects/GRASS/python-grass-addons/01_scripting_library.ipynb
OSGeo-live/CesiumWidget
apache-2.0
Example with nested JSON/dict-like data, which has been pre-aggregated and pivoted.
df2 = df_from_json(data)
df2 = df2.sort('total', ascending=False)
df2 = df2.head(10)
df2 = pd.melt(df2, id_vars=['abbr', 'name'])

scatter5 = Scatter(df2, x='value', y='name', color='variable',
                   title="x='value', y='name', color='variable'",
                   xlabel="Medals", ylabel="Top 10 Countries", legend='bottom_right')

show(scatter5)
examples/howto/charts/scatter.ipynb
azjps/bokeh
bsd-3-clause
Use the blend operator to "stack" variables.
scatter6 = Scatter(flowers,
                   x=blend('petal_length', 'sepal_length', name='length'),
                   y=blend('petal_width', 'sepal_width', name='width'),
                   color='species',
                   title='x=petal_length+sepal_length, y=petal_width+sepal_width, color=species',
                   legend='top_right')

show(scatter6)
examples/howto/charts/scatter.ipynb
azjps/bokeh
bsd-3-clause
Train an MNIST model.
tf.reset_default_graph()

images, one_hot_labels, _ = data_fn(num_epochs=None, shuffle=True, initializable=False)
loss, predictions = model_fn(images, one_hot_labels)
accuracy = tf.reduce_mean(tf.to_float(tf.equal(
    tf.math.argmax(predictions, axis=1),
    tf.math.argmax(one_hot_labels, axis=1))))

train_op = tf.train.GradientDescentOptimizer(LEARNING_RATE).minimize(loss)
saver = tf.train.Saver(max_to_keep=None)

# Simple training loop that saves the model checkpoint every 1000 steps.
with tf.Session() as sess:
  sess.run(tf.global_variables_initializer())

  for i in range(NUM_TRAIN_STEPS):
    if i % NUM_SUMMARIZE_STEPS == 0:
      saver.save(sess, os.path.join(TRAIN_PATH, 'model.ckpt'), global_step=i)

    outputs = sess.run([loss, train_op])

    if i % NUM_SUMMARIZE_STEPS == 0:
      print 'Step: ', i, 'Loss: ', outputs[0]

  # Save a final checkpoint.
  saver.save(sess, os.path.join(TRAIN_PATH, 'model.ckpt'), global_step=NUM_TRAIN_STEPS)

# Check that the model fits the training data.
with tf.Session() as sess:
  saver.restore(sess, os.path.join(TRAIN_PATH, 'model.ckpt-10000'))

  minibatch_accuracy = 0.0
  for i in range(100):
    minibatch_accuracy += sess.run(accuracy) / 100

  print 'Accuracy on training data:', minibatch_accuracy
tf/mnist_spectral_density.ipynb
google/spectral-density
apache-2.0
Run Lanczos on the MNIST model.
tf.reset_default_graph() checkpoint_to_load = os.path.join(TRAIN_PATH, 'model.ckpt-10000') # For Lanczos, the tf.data pipeline should have some very specific characteristics: # 1. It should stop after a single epoch. # 2. It should be deterministic (i.e., no data augmentation). # 3. It should be initializable (we use it to restart the pipeline for each Lanczos iteration). images, one_hot_labels, init = data_fn(num_epochs=1, shuffle=False, initializable=True) loss, _ = model_fn(images, one_hot_labels) # Setup for Lanczos mode. restore_specs = [ experiment_utils.RestoreSpec(tf.trainable_variables(), checkpoint_to_load)] # This callback is used to restart the tf.data pipeline for each Lanczos # iteration on each worker (the chief has a slightly different callback). You # can check the logs to see the status of the computation: new # phases of Lanczos iteration are indicated by "New phase i", and local steps # per worker are logged with "Local step j". def end_of_input(sess, train_op): try: sess.run(train_op) except tf.errors.OutOfRangeError: sess.run(init) return True return False # This object stores the state for the phases of the Lanczos iteration. experiment = lanczos_experiment.LanczosExperiment( loss, worker=0, # These two flags will change when the number of workers > 1. num_workers=1, save_path=LANCZOS_PATH, end_of_input=end_of_input, lanczos_steps=NUM_LANCZOS_STEPS, num_draws=1, output_address=LANCZOS_PATH) # For distributed training, there are a few options: # Multi-gpu single worker: Partition the tf.data per tower of the model, and pass the aggregate # loss to the LanczosExperiment class. # Multi-gpu multi worker: Set num_workers in LanczosExperiment to be equal to the number of workers. # These have to be ordered. train_op = experiment.get_train_op() saver = experiment.get_saver(checkpoint_to_load, restore_specs) init_fn = experiment.get_init_fn() train_fn = experiment.get_train_fn() local_init_op = tf.group(tf.local_variables_initializer(), init) train_step_kwargs = {} # The LanczosExperiment class is designed with slim in mind since it gives us # very specific control of the main training loop. tf.contrib.slim.learning.train( train_op, train_step_kwargs=train_step_kwargs, train_step_fn=train_fn, logdir=LANCZOS_PATH, is_chief=True, init_fn=init_fn, local_init_op=local_init_op, global_step=tf.zeros([], dtype=tf.int64), # Dummy global step. saver=saver, save_interval_secs=0, # The LanczosExperiment class controls saving. summary_op=None, # DANGER DANGER: Do not change this. summary_writer=None) # This cell takes a little time to run: maybe 7 mins.
tf/mnist_spectral_density.ipynb
google/spectral-density
apache-2.0
Visualize the Hessian eigenvalue density.
# Outputs are saved as numpy saved files. The most interesting ones are
# 'tridiag_1' and 'lanczos_vec_1'.
with open(os.path.join(LANCZOS_PATH, 'tridiag_1'), 'rb') as f:
  tridiagonal = np.load(f)

# For legacy reasons, we need to squeeze tridiagonal.
tridiagonal = np.squeeze(tridiagonal)

# Note that the output shape is [NUM_LANCZOS_STEPS, NUM_LANCZOS_STEPS].
print tridiagonal.shape

# The function tridiag_to_density computes the density (i.e., trace estimator)
# using the standard Gaussian c * exp(-(x - t)**2.0 / 2 sigma**2.0), where t is
# from a uniform grid. Passing a reasonable sigma**2.0 to this function is
# important -- somewhere between 1e-3 and 1e-5 seems to work best.
density, grids = density.tridiag_to_density([tridiagonal])

# We add a small epsilon to make the plot not ugly.
plt.semilogy(grids, density + 1.0e-7)
plt.xlabel('$\lambda$')
plt.ylabel('Density')
plt.title('MNIST hessian eigenvalue density at step 10000')
tf/mnist_spectral_density.ipynb
google/spectral-density
apache-2.0
Read in the list of questions/attributes. There were 13 questions.
# this csv file has only a single row
questions = []
with open('data/SportsDataset_ListOfAttributes.csv','r') as csvfile:
    myreader = csv.reader( csvfile )
    for row in myreader:
        questions = row

Question2Index = {}
for ind, quest in enumerate( questions ):
    Question2Index[quest] = ind
    print('Question #', ind,': ',quest)

# An example usage of the index lookup:
print('The question "', questions[10],'" has 0-based index', Question2Index[questions[10]])
notebooks/20Q/setup_sportsDataset.ipynb
jamesfolberth/jupyterhub_AWS_deployment
bsd-3-clause
Read in the training data. The columns of X correspond to questions, and the rows correspond to data samples. The entries of y are the sport indices. The values of X are 1, -1 or 0 (see YesNoDict for the encoding).
YesNoDict = { "Yes": 1, "No": -1, "Unsure": 0, "": 0 }

# Load from the csv file.
# Note: the file only has "1"s, because blanks mean "No"
X = []
with open('data/SportsDataset_DataAttributes.csv','r') as csvfile:
    myreader = csv.reader(csvfile)
    for row in myreader:
        data = []
        for col in row:
            data.append( col or "-1")
        X.append( list(map(int,data)) ) # integers, not strings

# This data file is listed in the same order as the sports
# The variable "y" contains the index of the sport
y = range(len(sports)) # this doesn't work
y = list( map(int,y) ) # Instead, we need to ask python to really enumerate it!
notebooks/20Q/setup_sportsDataset.ipynb
jamesfolberth/jupyterhub_AWS_deployment
bsd-3-clause
Your turn: train a decision tree classifier
from sklearn import tree # the rest is up to you
notebooks/20Q/setup_sportsDataset.ipynb
jamesfolberth/jupyterhub_AWS_deployment
bsd-3-clause
Use the trained classifier to play a 20 questions game. You may want to use from sklearn.tree import _tree and tree.DecisionTreeClassifier with commands like tree_.children_left[node], tree_.value[node], tree_.feature[node], and tree_.threshold[node].
# up to you
notebooks/20Q/setup_sportsDataset.ipynb
jamesfolberth/jupyterhub_AWS_deployment
bsd-3-clause
graded = 8/8
import requests response = requests.get("http://api.nytimes.com/svc/books/v2/lists.json?list=hardcover-fiction&published-date=2009-05-10&api-key=b577eb5b46ad4bec8ee159c89208e220") best_seller = response.json() print(best_seller.keys()) print(type(best_seller)) print(type(best_seller['results'])) print(len(best_seller['results'])) print(best_seller['results'][0]) mother_best_seller_results_2009 = best_seller['results'] for item in mother_best_seller_results_2009: print("This books ranks #", item['rank'], "on the list") #just to make sure they are in order for book in item['book_details']: print(book['title']) print("The top 3 books in the Hardcover fiction NYT best-sellers on Mother's day 2009 were:") for item in mother_best_seller_results_2009: if item['rank']< 4: #to get top 3 books on the list for book in item['book_details']: print(book['title']) import requests response = requests.get("http://api.nytimes.com/svc/books/v2/lists.json?list=hardcover-fiction&published-date=2010-05-09&api-key=b577eb5b46ad4bec8ee159c89208e220") best_seller_2010 = response.json() print(best_seller.keys()) print(best_seller_2010['results'][0]) mother_best_seller_2010_results = best_seller_2010['results'] print("The top 3 books in the Hardcover fiction NYT best-sellers on Mother's day 2010 were:") for item in mother_best_seller_2010_results: if item['rank']< 4: #to get top 3 books on the list for book in item['book_details']: print(book['title']) import requests response = requests.get("http://api.nytimes.com/svc/books/v2/lists.json?list=hardcover-fiction&published-date=2009-06-21&api-key=b577eb5b46ad4bec8ee159c89208e220") best_seller = response.json() father_best_seller_results_2009 = best_seller['results'] print("The top 3 books in the Hardcover fiction NYT best-sellers on Father's day 2009 were:") for item in father_best_seller_results_2009: if item['rank']< 4: #to get top 3 books on the list for book in item['book_details']: print(book['title']) import requests response = requests.get("http://api.nytimes.com/svc/books/v2/lists.json?list=hardcover-fiction&published-date=2010-06-20&api-key=b577eb5b46ad4bec8ee159c89208e220") best_seller = response.json() father_best_seller_results_2010 = best_seller['results'] print("The top 3 books in the Hardcover fiction NYT best-sellers on Father's day 2010 were:") for item in father_best_seller_results_2010: if item['rank']< 4: #to get top 3 books on the list for book in item['book_details']: print(book['title'])
foundations_hw/05/Homework5_Graded.ipynb
mercybenzaquen/foundations-homework
mit
2) What are all the different book categories the NYT ranked on June 6, 2009? How about June 6, 2015?
import requests

response = requests.get("http://api.nytimes.com/svc/books/v2/lists/names.json?published-date=2009-06-06&api-key=b577eb5b46ad4bec8ee159c89208e220")
best_seller = response.json()
print(best_seller.keys())
print(len(best_seller['results']))

book_categories_2009 = best_seller['results']
for item in book_categories_2009:
    print(item['display_name'])

import requests

response = requests.get("http://api.nytimes.com/svc/books/v2/lists/names.json?published-date=2015-06-06&api-key=b577eb5b46ad4bec8ee159c89208e220")
best_seller = response.json()
print(len(best_seller['results']))

book_categories_2015 = best_seller['results']
for item in book_categories_2015:
    print(item['display_name'])
foundations_hw/05/Homework5_Graded.ipynb
mercybenzaquen/foundations-homework
mit
3) Muammar Gaddafi's name can be transliterated many many ways. His last name is often a source of a million and one versions - Gadafi, Gaddafi, Kadafi, and Qaddafi to name a few. How many times has the New York Times referred to him by each of those names? Tip: Add "Libya" to your search to make sure (-ish) you're talking about the right guy.
import requests response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?q=Gadafi&fq=Libya&api-key=b577eb5b46ad4bec8ee159c89208e220") gadafi = response.json() print(gadafi.keys()) print(gadafi['response']) print(gadafi['response'].keys()) print(gadafi['response']['docs']) #so no results for GADAFI. print('The New York times has not used the name Gadafi to refer to Muammar Gaddafi') import requests response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?q=Gaddafi&fq=Libya&api-key=b577eb5b46ad4bec8ee159c89208e220") gaddafi = response.json() print(gaddafi.keys()) print(gaddafi['response'].keys()) print(type(gaddafi['response']['meta'])) print(gaddafi['response']['meta']) print("'The New York times used the name Gaddafi to refer to Muammar Gaddafi", gaddafi['response']['meta']['hits'], "times") import requests response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?q=Kadafi&fq=Libya&api-key=b577eb5b46ad4bec8ee159c89208e220") kadafi = response.json() print(kadafi.keys()) print(kadafi['response'].keys()) print(type(kadafi['response']['meta'])) print(kadafi['response']['meta']) print("'The New York times used the name Kadafi to refer to Muammar Gaddafi", kadafi['response']['meta']['hits'], "times") import requests response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?q=Qaddafi&fq=Libya&api-key=b577eb5b46ad4bec8ee159c89208e220") qaddafi = response.json() print(qaddafi.keys()) print(qaddafi['response'].keys()) print(type(qaddafi['response']['meta'])) print(qaddafi['response']['meta']) print("'The New York times used the name Qaddafi to refer to Muammar Gaddafi", qaddafi['response']['meta']['hits'], "times")
foundations_hw/05/Homework5_Graded.ipynb
mercybenzaquen/foundations-homework
mit
4) What's the title of the first story to mention the word 'hipster' in 1995? What's the first paragraph?
import requests

response = requests.get("https://api.nytimes.com/svc/search/v2/articlesearch.json?q=hipster&begin_date=19950101&end_date=19951231&sort=oldest&api-key=b577eb5b46ad4bec8ee159c89208e220")
hipster = response.json()
print(hipster.keys())
print(hipster['response'].keys())
print(hipster['response']['docs'][0])

hipster_info = hipster['response']['docs']
print('These articles all had the word hipster in them and were published in 1995')
# ordered from oldest to newest
for item in hipster_info:
    print(item['headline']['main'], item['pub_date'])

for item in hipster_info:
    if item['headline']['main'] == "SOUND":
        print("This is the first article to mention the word hipster in 1995 and was titled:", item['headline']['main'], "and it was published on:", item['pub_date'])
        print("This is the lead paragraph of", item['headline']['main'], item['lead_paragraph'])
foundations_hw/05/Homework5_Graded.ipynb
mercybenzaquen/foundations-homework
mit
5) How many times was gay marriage mentioned in the NYT between 1950-1959, 1960-1969, 1970-1978, 1980-1989, 1990-1999, 2000-2009, and 2010-present?
import requests response = requests.get('https://api.nytimes.com/svc/search/v2/articlesearch.json?q="gay marriage"&begin_date=19500101&end_date=19593112&api-key=b577eb5b46ad4bec8ee159c89208e220') marriage_1959 = response.json() print(marriage_1959.keys()) print(marriage_1959['response'].keys()) print(marriage_1959['response']['meta']) print("___________") print("Gay marriage was mentioned", marriage_1959['response']['meta']['hits'], "between 1950-1959") import requests response = requests.get("https://api.nytimes.com/svc/search/v2/articlesearch.json?q='gay marriage'&begin_date=19600101&end_date=19693112&api-key=b577eb5b46ad4bec8ee159c89208e220") marriage_1969 = response.json() print(marriage_1969.keys()) print(marriage_1969['response'].keys()) print(marriage_1969['response']['meta']) print("___________") print("Gay marriage was mentioned", marriage_1969['response']['meta']['hits'], "between 1960-1969") import requests response = requests.get("https://api.nytimes.com/svc/search/v2/articlesearch.json?q='gay marriage'&begin_date=19700101&end_date=19783112&api-key=b577eb5b46ad4bec8ee159c89208e220") marriage_1978 = response.json() print(marriage_1978.keys()) print(marriage_1978['response'].keys()) print(marriage_1978['response']['meta']) print("___________") print("Gay marriage was mentioned", marriage_1978['response']['meta']['hits'], "between 1970-1978") import requests response = requests.get("https://api.nytimes.com/svc/search/v2/articlesearch.json?q='gay marriage'&begin_date=19800101&end_date=19893112&api-key=b577eb5b46ad4bec8ee159c89208e220") marriage_1989 = response.json() print(marriage_1989.keys()) print(marriage_1989['response'].keys()) print(marriage_1989['response']['meta']) print("___________") print("Gay marriage was mentioned", marriage_1989['response']['meta']['hits'], "between 1980-1989") import requests response = requests.get("https://api.nytimes.com/svc/search/v2/articlesearch.json?q='gay marriage'&begin_date=19900101&end_date=20003112&api-key=b577eb5b46ad4bec8ee159c89208e220") marriage_2000 = response.json() print(marriage_2000.keys()) print(marriage_2000['response'].keys()) print(marriage_2000['response']['meta']) print("___________") print("Gay marriage was mentioned", marriage_2000['response']['meta']['hits'], "between 1990-2000") import requests response = requests.get("https://api.nytimes.com/svc/search/v2/articlesearch.json?q='gay marriage'&begin_date=20000101&end_date=20093112&api-key=b577eb5b46ad4bec8ee159c89208e220") marriage_2009 = response.json() print(marriage_2009.keys()) print(marriage_2009['response'].keys()) print(marriage_2009['response']['meta']) print("___________") print("Gay marriage was mentioned", marriage_2009['response']['meta']['hits'], "between 2000-2009") import requests response = requests.get("https://api.nytimes.com/svc/search/v2/articlesearch.json?q='gay marriage'&begin_date=20100101&end_date=20160609&api-key=b577eb5b46ad4bec8ee159c89208e220") marriage_2016 = response.json() print(marriage_2016.keys()) print(marriage_2016['response'].keys()) print(marriage_2016['response']['meta']) print("___________") print("Gay marriage was mentioned", marriage_2016['response']['meta']['hits'], "between 2010-present")
foundations_hw/05/Homework5_Graded.ipynb
mercybenzaquen/foundations-homework
mit
6) What section talks about motorcycles the most? Tip: You'll be using facets
import requests

response = requests.get("http://api.nytimes.com/svc/search/v2/articlesearch.json?q=motorcycles&facet_field=section_name&api-key=b577eb5b46ad4bec8ee159c89208e220")
motorcycles = response.json()
print(motorcycles.keys())
print(motorcycles['response'].keys())
print(motorcycles['response']['facets']['section_name']['terms'])

motorcycles_info = motorcycles['response']['facets']['section_name']['terms']
print(motorcycles_info)

print("These are the sections that talk the most about motorcycles:")
print("_________________")
for item in motorcycles_info:
    print("The", item['term'], "section mentioned motorcycle", item['count'], "times")

motorcycle_info = motorcycles['response']['facets']['section_name']['terms']
most_motorcycle_section = 0
section_name = ""
for item in motorcycle_info:
    if item['count'] > most_motorcycle_section:
        most_motorcycle_section = item['count']
        section_name = item['term']

print(section_name, "is the section that talks the most about motorcycles, with", most_motorcycle_section, "mentions of the word")
foundations_hw/05/Homework5_Graded.ipynb
mercybenzaquen/foundations-homework
mit
7) How many of the last 20 movies reviewed by the NYT were Critics' Picks? How about the last 40? The last 60? Tip: You really don't want to do this 3 separate times (1-20, 21-40 and 41-60) and add them together. What if, perhaps, you were able to figure out how to combine two lists? Then you could have a 1-20 list, a 1-40 list, and a 1-60 list, and then just run similar code for each of them.
import requests response = requests.get('http://api.nytimes.com/svc/movies/v2/reviews/search.json?api-key=b577eb5b46ad4bec8ee159c89208e220') movies_reviews_20 = response.json() print(movies_reviews_20.keys()) print(movies_reviews_20['results'][0]) critics_pick = 0 not_a_critics_pick = 0 for item in movies_reviews_20['results']: print(item['display_title'], item['critics_pick']) if item['critics_pick'] == 1: print("-------------CRITICS PICK!") critics_pick = critics_pick + 1 else: print("-------------NOT CRITICS PICK!") not_a_critics_pick = not_a_critics_pick + 1 print("______________________") print("There were", critics_pick, "critics picks in the last 20 revies by the NYT") import requests response = requests.get('http://api.nytimes.com/svc/movies/v2/reviews/search.json?offset=20&api-key=b577eb5b46ad4bec8ee159c89208e220') movies_reviews_40 = response.json() print(movies_reviews_40.keys()) import requests response = requests.get('http://api.nytimes.com/svc/movies/v2/reviews/search.json?offset=40&api-key=b577eb5b46ad4bec8ee159c89208e220') movies_reviews_60 = response.json() print(movies_reviews_60.keys()) new_medium_list = movies_reviews_20['results'] + movies_reviews_40['results'] print(len(new_medium_list)) critics_pick = 0 not_a_critics_pick = 0 for item in new_medium_list: print(item['display_title'], item['critics_pick']) if item['critics_pick'] == 1: print("-------------CRITICS PICK!") critics_pick = critics_pick + 1 else: print("-------------NOT CRITICS PICK!") not_a_critics_pick = not_a_critics_pick + 1 print("______________________") print("There were", critics_pick, "critics picks in the last 40 revies by the NYT") new_big_list = movies_reviews_20['results'] + movies_reviews_40['results'] + movies_reviews_60['results'] print(new_big_list[0]) print(len(new_big_list)) critics_pick = 0 not_a_critics_pick = 0 for item in new_big_list: print(item['display_title'], item['critics_pick']) if item['critics_pick'] == 1: print("-------------CRITICS PICK!") critics_pick = critics_pick + 1 else: print("-------------NOT CRITICS PICK!") not_a_critics_pick = not_a_critics_pick + 1 print("______________________") print("There were", critics_pick, "critics picks in the last 60 revies by the NYT")
foundations_hw/05/Homework5_Graded.ipynb
mercybenzaquen/foundations-homework
mit
8) Out of the last 40 movie reviews from the NYT, which critic has written the most reviews?
medium_list = movies_reviews_20['results'] + movies_reviews_40['results']
print(type(medium_list))
print(medium_list[0])

for item in medium_list:
    print(item['byline'])

all_critics = []
for item in medium_list:
    all_critics.append(item['byline'])
print(all_critics)

unique_medium_list = set(all_critics)
print(unique_medium_list)

print("___________________________________________________")
print("This is a list of the authors who have written the NYT last 40 movie reviews, in descending order:")
from collections import Counter
count = Counter(all_critics)
print(count)

print("___________________________________________________")
print("This is a list of the top 3 authors who have written the NYT last 40 movie reviews:")
count.most_common(1)
foundations_hw/05/Homework5_Graded.ipynb
mercybenzaquen/foundations-homework
mit
If we want to figure out the maximum value, we'll obviously need a loop to check each element of the list (which we know how to do), and a variable to store the maximum.
max_val = 0
for element in x:
    # ... now what?
    pass
lectures/L6.ipynb
eds-uga/csci1360e-su16
mit
We also know we can check relative values, like max_val < element. If this evaluates to True, we know we've found a number in the list that's bigger than our current candidate for maximum value. But how do we execute code only under this condition, and this condition alone? Enter if / elif / else statements! (otherwise known as "conditionals") We can use the keyword if, followed by a statement that evaluates to either True or False, to determine whether or not to execute the code. For a straightforward example:
x = 5

if x < 5:
    print("How did this happen?!")  # Spoiler alert: this won't happen.

if x == 5:
    print("Working as intended.")
lectures/L6.ipynb
eds-uga/csci1360e-su16
mit
In conjunction with if, we also have an else clause that we can use to execute whenever the if statement doesn't:
x = 5

if x < 5:
    print("How did this happen?!")  # Spoiler alert: this won't happen.
else:
    print("Correct.")
lectures/L6.ipynb
eds-uga/csci1360e-su16
mit
This is great! We can finally finish computing the maximum element of a list!
x = [51, 65, 56, 19, 11, 49, 81, 59, 45, 73]

max_val = 0
for element in x:
    if max_val < element:
        max_val = element

print("The maximum element is: {}".format(max_val))
lectures/L6.ipynb
eds-uga/csci1360e-su16
mit
We can test conditions! But what if we have multifaceted decisions to make? Let's look at a classic example: assigning letter grades from numerical grades.
student_grades = {
    'Jen': 82,
    'Shannon': 75,
    'Natasha': 94,
    'Benjamin': 48,
}
lectures/L6.ipynb
eds-uga/csci1360e-su16
mit
We know the 90-100 range is an "A", 80-89 is a "B", and so on. We can't do just a standard "if / else", since we have more than two possibilities here. The third and final component of conditionals is the elif statement (short for "else if"). elif allows us to evaluate as many options as we'd like, all within the same conditional context (this is important). So for our grading example, it might look like this:
for student, grade in student_grades.items():
    letter = ''
    if grade >= 90:
        letter = "A"
    elif grade >= 80:
        letter = "B"
    elif grade >= 70:
        letter = "C"
    elif grade >= 60:
        letter = "D"
    else:
        letter = "F"

    print("{}'s letter grade: {}".format(student, letter))
lectures/L6.ipynb
eds-uga/csci1360e-su16
mit
Ok, that's neat. But there's still one more edge case: what happens if we want to enforce multiple conditions simultaneously? To illustrate, let's go back to our example of finding the maximum value in a list, and this time, let's try to find the second-largest value in the list. For simplicity, let's say we've already found the largest value.
x = [51, 65, 56, 19, 11, 49, 81, 59, 45, 73]

max_val = 81  # We've already found it!
second_largest = 0
lectures/L6.ipynb
eds-uga/csci1360e-su16
mit
Here's the rub: we now have two constraints to enforce--the second largest value needs to be larger than pretty much everything in the list, but also needs to be smaller than the maximum value. Not something we can encode using if / elif / else. Instead, we'll use two more keywords integral to conditionals: and and or.
for element in x:
    if second_largest < element and element < max_val:
        second_largest = element

print("The second-largest element is: {}".format(second_largest))
lectures/L6.ipynb
eds-uga/csci1360e-su16
mit
The first condition, second_largest < element, is the same as before: if our current estimate of the second largest element is smaller than the latest element we're looking at, it's definitely a candidate for second-largest. The second condition, element < max_val, is what ensures we don't just pick the largest value again. This enforces the constraint that the current element we're looking at is also less than our known maximum value. The and keyword glues these two conditions together: it requires that they BOTH be True before the code inside the statement is allowed to execute. It would be easy to replicate this with "nested" conditionals:
second_largest = 0

for element in x:
    if second_largest < element:
        if element < max_val:
            second_largest = element

print("The second-largest element is: {}".format(second_largest))
lectures/L6.ipynb
eds-uga/csci1360e-su16
mit
...but your code starts getting a little unwieldy with so many indentations. You can glue as many comparisons as you want together with and; the whole statement will only be True if every single condition evaluates to True. This is what and means: everything must be True. The other side of this coin is or. Like and, you can use it to glue together multiple constraints. Unlike and, the whole statement will evaluate to True as long as at least ONE condition is True. This is far less stringent than and, where ALL conditions had to be True.
numbers = [1, 2, 5, 6, 7, 9, 10]

for num in numbers:
    if num == 2 or num == 4 or num == 6 or num == 8 or num == 10:
        print("{} is an even number.".format(num))
lectures/L6.ipynb
eds-uga/csci1360e-su16
mit
In this contrived example, I've glued together a bunch of constraints. Obviously, these constraints are mutually exclusive; a number can't be equal to both 2 and 4 at the same time, so num == 2 and num == 4 would never evaluate to True. However, using or, only one of them needs to be True for the statement underneath to execute. There's a little bit of intuition to it. "I want this AND this" has the implication of both at once. "I want this OR this" sounds more like either one would be adequate. One other important tidbit, concerning not only conditionals, but also lists and booleans: the not keyword. An often-important task in data science, when you have a list of things, is querying whether or not some new piece of information you just received is already in your list. You could certainly loop through the list, asking "is my new_item == list[item i]?". But, thankfully, there's a better way:
import random

list_of_numbers = [random.randint(1, 100) for i in range(10)]  # Generates 10 random numbers, between 1 and 100.

if 13 not in list_of_numbers:
    print("Aw man, my lucky number isn't here!")
lectures/L6.ipynb
eds-uga/csci1360e-su16
mit
Notice a couple things here-- List comprehensions make an appearance! Can you parse it out? The if statement asks if the number 13 is NOT found in list_of_numbers. When that statement evaluates to True--meaning the number is NOT found--it prints the message. If you omit the not keyword, then the question becomes: "is this number in the list?"
import random

list_of_numbers = [random.randint(1, 2) for i in range(10)]  # Generates 10 random numbers, between 1 and 2. Yep.

if 1 in list_of_numbers:
    print("Sweet, found a 1!")
lectures/L6.ipynb
eds-uga/csci1360e-su16
mit
This works for strings as well: 'some_string' in some_list will return True if that string is indeed found in the list. Be careful with this. Typing issues can hit you full force here: if you ask if 0 in some_list, and it's a list of floats, then this operation will always evaluate to False. Similarly, if you ask if "shannon" in name_list, it will look for the precise string "shannon" and return False even if the string "Shannon" is in the list. With great power, etc etc. Part 2: Error Handling. Yes, errors: plaguing us since Windows 95 (but really, since well before then). By now, I suspect you've likely seen your fair share of Python crashes: NotImplementedError from the homework assignments, TypeError from trying to multiply an integer by a string, KeyError from attempting to access a dictionary key that didn't exist, IndexError from referencing a list beyond its actual length, or any number of other error messages. These are the standard way in which Python (and most other programming languages) handles error messages. The error is known as an Exception. Some other terminology here includes: An exception is raised when such an error occurs. This is why you see the code snippet raise NotImplementedError in your homeworks. In other languages such as Java, an exception is "thrown" instead of "raised", but the meanings are equivalent. When you are writing code that could potentially raise an exception, you can also write code to catch the exception and handle it yourself. When an exception is caught, that means it is handled without crashing the program. Here's a fairly classic example: divide by zero! Let's say we're designing a simple calculator application that divides two numbers. We'll ask the user for two numbers, divide them, and return the quotient. Seems simple enough, right?
def divide(x, y):
    return x / y

divide(11, 0)
lectures/L6.ipynb
eds-uga/csci1360e-su16
mit
D'oh! The user fed us a 0 for the denominator and broke our calculator. Meanie-face. So we know there's a possibility of the user entering a 0. This could be malicious or simply by accident. Since it's only one value that could crash our app, we could in principle have an if statement that checks if the denominator is 0. That would be simple and perfectly valid. But for the sake of this lecture, let's assume we want to try and catch the ZeroDivisionError ourselves and handle it gracefully. To do this, we use something called a try / except block, which is very similar in its structure to if / elif / else blocks. First, put the code that could potentially crash your program inside a try statement. Under that, have an except statement that defines a variable for the error you're catching, and any code that dictates how you want to handle the error.
def divide_safe(x, y):
    quotient = 0
    try:
        quotient = x / y
    except ZeroDivisionError:
        print("You tried to divide by zero. Why would you do that?!")
    return quotient
lectures/L6.ipynb
eds-uga/csci1360e-su16
mit
Now if our user tries to be snarky again--
divide_safe(11, 0)
lectures/L6.ipynb
eds-uga/csci1360e-su16
mit
No error, no crash! Just a "helpful" error message. Like conditionals, you can also create multiple except statements to handle multiple different possible exceptions:
import random  # For generating random exceptions.

num = random.randint(0, 1)
try:
    if num == 1:
        raise NameError("This happens when you use a variable you haven't defined")
    else:
        raise ValueError("This happens when you try to multiply a string")
except NameError:
    print("Caught a NameError!")
except ValueError:
    print("Nope, it was actually a ValueError.")
lectures/L6.ipynb
eds-uga/csci1360e-su16
mit
If you download this notebook or run it with mybinder and re-run the above cell, the exception should flip randomly between the two. Also like conditionals, you can handle multiple errors simultaneously. If, like in the previous example, your code can raise multiple exceptions, but you want to handle them all the same way, you can stack them all in one except statement:
import random  # For generating random exceptions.

num = random.randint(0, 1)
try:
    if num == 1:
        raise NameError("This happens when you use a variable you haven't defined")
    else:
        raise ValueError("This happens when you try to multiply a string")
except (NameError, ValueError):  # MUST have the parentheses!
    print("Caught...well, some kinda error, not sure which.")
lectures/L6.ipynb
eds-uga/csci1360e-su16
mit
If you're like me, and you're writing code that you know could raise one of several errors, but are too lazy to look up specifically what errors are possible, you can create a "catch-all" by just not specifying anything:
import random  # For generating random exceptions.

num = random.randint(0, 1)
try:
    if num == 1:
        raise NameError("This happens when you use a variable you haven't defined")
    else:
        raise ValueError("This happens when you try to multiply a string")
except:
    print("I caught something!")
lectures/L6.ipynb
eds-uga/csci1360e-su16
mit
Finally--and this is really getting into what's known as control flow (quite literally: "controlling the flow" of your program)--you can tack an else statement onto the very end of your exception-handling block to add some final code to the handler. Why? This is code that is only executed if NO exception occurs. Let's go back to our random number example: instead of raising one of two possible exceptions, we'll raise an exception only if we flip a 1.
import random  # For generating random exceptions.

num = random.randint(0, 1)
try:
    if num == 1:
        raise NameError("This happens when you use a variable you haven't defined")
except:
    print("I caught something!")
else:
    print("HOORAY! Lucky coin flip!")
lectures/L6.ipynb
eds-uga/csci1360e-su16
mit
Load data and prep model for estimation
modelname="school_location" from activitysim.estimation.larch import component_model model, data = component_model(modelname, return_data=True)
activitysim/examples/example_estimation/notebooks/02_school_location.ipynb
synthicity/activitysim
agpl-3.0
Review data loaded from the EDB. Next we can review what was read from the EDB, including the coefficients, model settings, utilities specification, and chooser and alternative data. coefficients
data.coefficients
activitysim/examples/example_estimation/notebooks/02_school_location.ipynb
synthicity/activitysim
agpl-3.0
alt_values
data.alt_values
activitysim/examples/example_estimation/notebooks/02_school_location.ipynb
synthicity/activitysim
agpl-3.0
chooser_data
data.chooser_data
activitysim/examples/example_estimation/notebooks/02_school_location.ipynb
synthicity/activitysim
agpl-3.0
landuse
data.landuse
activitysim/examples/example_estimation/notebooks/02_school_location.ipynb
synthicity/activitysim
agpl-3.0
spec
data.spec
activitysim/examples/example_estimation/notebooks/02_school_location.ipynb
synthicity/activitysim
agpl-3.0
size_spec
data.size_spec
activitysim/examples/example_estimation/notebooks/02_school_location.ipynb
synthicity/activitysim
agpl-3.0
Estimate. With the model set up for estimation, the next step is to estimate the model coefficients. Make sure to use a sufficiently large household sample and set of zones to avoid an over-specified model, which does not have a numerically stable likelihood-maximizing solution. Larch has built-in estimation methods including BHHH, and also offers access to more advanced general-purpose non-linear optimizers in the scipy package, including SLSQP, which allows for bounds and constraints on parameters. BHHH is the default and typically runs faster, but does not follow constraints on parameters.
model.estimate(method='BHHH', options={'maxiter':1000})
activitysim/examples/example_estimation/notebooks/02_school_location.ipynb
synthicity/activitysim
agpl-3.0
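If you do need the parameter bounds or constraints mentioned above, the scipy-backed SLSQP optimizer can presumably be selected through the same call; this is only a sketch mirroring the BHHH invocation, not a step from the original notebook:

# Hypothetical alternative to BHHH: the scipy SLSQP optimizer,
# which honors bounds and constraints on parameters but is typically slower.
model.estimate(method='SLSQP', options={'maxiter': 1000})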
Output Estimation Results
from activitysim.estimation.larch import update_coefficients, update_size_spec

result_dir = data.edb_directory/"estimated"
activitysim/examples/example_estimation/notebooks/02_school_location.ipynb
synthicity/activitysim
agpl-3.0
Write updated utility coefficients
update_coefficients( model, data, result_dir, output_file=f"{modelname}_coefficients_revised.csv", );
activitysim/examples/example_estimation/notebooks/02_school_location.ipynb
synthicity/activitysim
agpl-3.0
Write updated size coefficients
update_size_spec( model, data, result_dir, output_file=f"{modelname}_size_terms.csv", )
activitysim/examples/example_estimation/notebooks/02_school_location.ipynb
synthicity/activitysim
agpl-3.0
Write the model estimation report, including coefficient t-statistics and log likelihood
model.to_xlsx( result_dir/f"{modelname}_model_estimation.xlsx", data_statistics=False, );
activitysim/examples/example_estimation/notebooks/02_school_location.ipynb
synthicity/activitysim
agpl-3.0
Next Steps. The final step is to either manually or automatically copy the *_coefficients_revised.csv file and *_size_terms.csv file to the configs folder, rename them to *_coefficients.csv and destination_choice_size_terms.csv, and run ActivitySim in simulation mode. Note that all the location and destination choice models share the same destination_choice_size_terms.csv input file, so if you are updating all these models, you'll need to ensure that the updated sections of this file for each model are joined together correctly.
pd.read_csv(result_dir/f"{modelname}_coefficients_revised.csv")
pd.read_csv(result_dir/f"{modelname}_size_terms.csv")
activitysim/examples/example_estimation/notebooks/02_school_location.ipynb
synthicity/activitysim
agpl-3.0
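For the "manually or automatically copy" step described above, a minimal sketch follows. The configs path is an assumption (point it at your own ActivitySim configs folder), and remember the caveat from the text: destination_choice_size_terms.csv is shared by all location/destination choice models, so overwriting it wholesale is only safe when you are re-estimating every model that contributes to it.

import shutil

# Hypothetical target directory -- adjust to your ActivitySim setup
configs_dir = "configs"

# Copy and rename the revised utility coefficients for this model
shutil.copyfile(result_dir / f"{modelname}_coefficients_revised.csv",
                f"{configs_dir}/{modelname}_coefficients.csv")

# Copy and rename the revised size terms (shared across location/destination models)
shutil.copyfile(result_dir / f"{modelname}_size_terms.csv",
                f"{configs_dir}/destination_choice_size_terms.csv")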
Simple absorbing boundary for 2D acoustic FD modelling. Realistic FD modelling results for surface seismic acquisition geometries require a further modification of the 2D acoustic FD code. Except for the free-surface boundary condition on top of the model, we want to suppress the artificial reflections from the other boundaries. Such absorbing boundaries can be implemented by different approaches. A comprehensive overview is compiled in Gao et al. 2015, Comparison of artificial absorbing boundaries for acoustic wave equation modelling. Before implementing the absorbing boundary frame, we modify some parts of the optimized 2D acoustic FD code:
# Import Libraries
# ----------------
import numpy as np
from numba import jit
import matplotlib
import matplotlib.pyplot as plt
from pylab import rcParams

# Ignore Warning Messages
# -----------------------
import warnings
warnings.filterwarnings("ignore")
from mpl_toolkits.axes_grid1 import make_axes_locatable

# Definition of initial modelling parameters
# ------------------------------------------
xmax = 2000.0   # maximum spatial extension of the 1D model in x-direction (m)
zmax = xmax     # maximum spatial extension of the 1D model in z-direction (m)
dx = 10.0       # grid point distance in x-direction (m)
dz = dx         # grid point distance in z-direction (m)
tmax = 0.75     # maximum recording time of the seismogram (s)
dt = 0.0010     # time step
vp0 = 3000.     # P-wave speed in medium (m/s)

# acquisition geometry
xsrc = 1000.0   # x-source position (m)
zsrc = xsrc     # z-source position (m)
f0 = 100.0      # dominant frequency of the source (Hz)
t0 = 0.1        # source time shift (s)
isnap = 2       # snapshot interval (timesteps)
05_2D_acoustic_FD_modelling/lecture_notebooks/4_fdac2d_absorbing_boundary.ipynb
daniel-koehn/Theory-of-seismic-waves-II
gpl-3.0
In order to modularize the code, we move the 2nd partial derivatives of the wave equation into a function update_d2px_d2pz, so the application of the JIT decorator can be restricted to this function:
@jit(nopython=True) # use JIT for C-performance
def update_d2px_d2pz(p, dx, dz, nx, nz, d2px, d2pz):
    for i in range(1, nx - 1):
        for j in range(1, nz - 1):
            d2px[i,j] = (p[i + 1,j] - 2 * p[i,j] + p[i - 1,j]) / dx**2
            d2pz[i,j] = (p[i,j + 1] - 2 * p[i,j] + p[i,j - 1]) / dz**2

    return d2px, d2pz
05_2D_acoustic_FD_modelling/lecture_notebooks/4_fdac2d_absorbing_boundary.ipynb
daniel-koehn/Theory-of-seismic-waves-II
gpl-3.0
In the FD modelling code FD_2D_acoustic_JIT, a more flexible model definition is introduced by the function model. The block Initalize animation of pressure wavefield before the time loop displays the velocity model and the initial pressure wavefield. During the time loop, the pressure wavefield is updated with image.set_data(p.T) and fig.canvas.draw() every isnap-th timestep:
# FD_2D_acoustic code with JIT optimization # ----------------------------------------- def FD_2D_acoustic_JIT(dt,dx,dz,f0): # define model discretization # --------------------------- nx = (int)(xmax/dx) # number of grid points in x-direction print('nx = ',nx) nz = (int)(zmax/dz) # number of grid points in x-direction print('nz = ',nz) nt = (int)(tmax/dt) # maximum number of time steps print('nt = ',nt) isrc = (int)(xsrc/dx) # source location in grid in x-direction jsrc = (int)(zsrc/dz) # source location in grid in x-direction # Source time function (Gaussian) # ------------------------------- src = np.zeros(nt + 1) time = np.linspace(0 * dt, nt * dt, nt) # 1st derivative of Gaussian src = -2. * (time - t0) * (f0 ** 2) * (np.exp(- (f0 ** 2) * (time - t0) ** 2)) # define clip value: 0.1 * absolute maximum value of source wavelet clip = 0.1 * max([np.abs(src.min()), np.abs(src.max())]) / (dx*dz) * dt**2 # Define model # ------------ vp = np.zeros((nx,nz)) vp = model(nx,nz,vp,dx,dz) vp2 = vp**2 # Initialize empty pressure arrays # -------------------------------- p = np.zeros((nx,nz)) # p at time n (now) pold = np.zeros((nx,nz)) # p at time n-1 (past) pnew = np.zeros((nx,nz)) # p at time n+1 (present) d2px = np.zeros((nx,nz)) # 2nd spatial x-derivative of p d2pz = np.zeros((nx,nz)) # 2nd spatial z-derivative of p # Initalize animation of pressure wavefield # ----------------------------------------- fig = plt.figure(figsize=(7,3.5)) # define figure size plt.tight_layout() extent = [0.0,xmax,zmax,0.0] # define model extension # Plot pressure wavefield movie ax1 = plt.subplot(121) image = plt.imshow(p.T, animated=True, cmap="RdBu", extent=extent, interpolation='nearest', vmin=-clip, vmax=clip) plt.title('Pressure wavefield') plt.xlabel('x [m]') plt.ylabel('z [m]') # Plot Vp-model ax2 = plt.subplot(122) image1 = plt.imshow(vp.T/1000, cmap=plt.cm.viridis, interpolation='nearest', extent=extent) plt.title('Vp-model') plt.xlabel('x [m]') plt.setp(ax2.get_yticklabels(), visible=False) divider = make_axes_locatable(ax2) cax2 = divider.append_axes("right", size="2%", pad=0.1) fig.colorbar(image1, cax=cax2) plt.ion() plt.show(block=False) # Calculate Partial Derivatives # ----------------------------- for it in range(nt): # FD approximation of spatial derivative by 3 point operator d2px, d2pz = update_d2px_d2pz(p, dx, dz, nx, nz, d2px, d2pz) # Time Extrapolation # ------------------ pnew = 2 * p - pold + vp2 * dt**2 * (d2px + d2pz) # Add Source Term at isrc # ----------------------- # Absolute pressure w.r.t analytical solution pnew[isrc,jsrc] = pnew[isrc,jsrc] + src[it] / (dx * dz) * dt ** 2 # Remap Time Levels # ----------------- pold, p = p, pnew # display pressure snapshots if (it % isnap) == 0: image.set_data(p.T) fig.canvas.draw()
05_2D_acoustic_FD_modelling/lecture_notebooks/4_fdac2d_absorbing_boundary.ipynb
daniel-koehn/Theory-of-seismic-waves-II
gpl-3.0
Homogeneous block model without absorbing boundary frame As a reference, we first model the homogeneous block model, defined in the function model, without an absorbing boundary frame:
# Homogeneous model
def model(nx,nz,vp,dx,dz):
    vp += vp0
    return vp
05_2D_acoustic_FD_modelling/lecture_notebooks/4_fdac2d_absorbing_boundary.ipynb
daniel-koehn/Theory-of-seismic-waves-II
gpl-3.0
After defining the modelling parameters, we can run the modified FD code ...
%matplotlib notebook

dx = 5.0     # grid point distance in x-direction (m)
dz = dx      # grid point distance in z-direction (m)
f0 = 100.0   # centre frequency of the source wavelet (Hz)

# calculate dt according to the CFL-criterion
dt = dx / (np.sqrt(2.0) * vp0)

FD_2D_acoustic_JIT(dt,dx,dz,f0)
05_2D_acoustic_FD_modelling/lecture_notebooks/4_fdac2d_absorbing_boundary.ipynb
daniel-koehn/Theory-of-seismic-waves-II
gpl-3.0
Notice the strong, artificial boundary reflections in the wavefield movie. Simple absorbing Sponge boundary. The simplest, and unfortunately least efficient, absorbing boundary was developed by Cerjan et al. (1985). It is based on the simple idea of damping the pressure wavefields $p^n_{i,j}$ and $p^{n+1}_{i,j}$ in an absorbing boundary frame by an exponential function: \begin{equation} f_{abs} = \exp(-a^2(FW-i)^2), \nonumber \end{equation} where $FW$ denotes the thickness of the boundary frame in gridpoints, while the factor $a$ defines the damping variation within the frame. It is important to avoid overlaps of the damping profile in the model corners when defining the absorbing function:
# Define simple absorbing boundary frame based on wavefield damping
# according to Cerjan et al., 1985, Geophysics, 50, 705-708
def absorb(nx,nz):

    FW =    # thickness of absorbing frame (gridpoints)
    a =     # damping variation within the frame

    coeff = np.zeros(FW)

    # define coefficients

    # initialize array of absorbing coefficients
    absorb_coeff = np.ones((nx,nz))

    # compute coefficients for left grid boundaries (x-direction)

    # compute coefficients for right grid boundaries (x-direction)

    # compute coefficients for bottom grid boundaries (z-direction)

    return absorb_coeff
05_2D_acoustic_FD_modelling/lecture_notebooks/4_fdac2d_absorbing_boundary.ipynb
daniel-koehn/Theory-of-seismic-waves-II
gpl-3.0
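One possible completion of the absorb stub above is sketched below; it is not the lecture's reference solution. The frame thickness FW and damping factor a are assumed example values, the exponential profile follows the Cerjan et al. (1985) formula quoted above, and taking the minimum of overlapping coefficients is one simple way to avoid double damping in the corners while leaving the top edge (the free surface) untouched.

# Sketch of a Cerjan-style sponge; FW and a are assumed example values.
def absorb(nx, nz):
    FW = 60       # thickness of absorbing frame (gridpoints), assumed value
    a = 0.0053    # damping variation within the frame, assumed value

    # damping profile f_abs = exp(-a^2 (FW - i)^2): strongest at the outer edge (i = 0)
    coeff = np.zeros(FW)
    for i in range(FW):
        coeff[i] = np.exp(-(a**2 * (FW - i)**2))

    # initialize array of absorbing coefficients
    absorb_coeff = np.ones((nx, nz))

    # left and right grid boundaries (x-direction); min() avoids overlap in the corners
    for i in range(FW):
        for j in range(nz):
            absorb_coeff[i, j] = min(absorb_coeff[i, j], coeff[i])
            absorb_coeff[nx - 1 - i, j] = min(absorb_coeff[nx - 1 - i, j], coeff[i])

    # bottom grid boundary (z-direction); the top edge stays undamped (free surface)
    for j in range(FW):
        for i in range(nx):
            absorb_coeff[i, nz - 1 - j] = min(absorb_coeff[i, nz - 1 - j], coeff[j])

    return absorb_coeff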
This implementation of the Sponge boundary sets a free-surface boundary condition on top of the model, while incident waves at the other boundaries are absorbed:
# Plot absorbing damping profile
# ------------------------------
fig = plt.figure(figsize=(6,4))    # define figure size
extent = [0.0,xmax,0.0,zmax]       # define model extension

# calculate absorbing boundary weighting coefficients
nx = 400
nz = 400
absorb_coeff = absorb(nx,nz)

plt.imshow(absorb_coeff.T)
plt.colorbar()
plt.title('Sponge boundary condition')
plt.xlabel('x [m]')
plt.ylabel('z [m]')
plt.show()
05_2D_acoustic_FD_modelling/lecture_notebooks/4_fdac2d_absorbing_boundary.ipynb
daniel-koehn/Theory-of-seismic-waves-II
gpl-3.0
The FD code itself requires only some small modifications: we have to add the absorb function to define the amount of damping in the boundary frame, and apply the damping function to the pressure wavefields pnew and p.
# FD_2D_acoustic code with JIT optimization # ----------------------------------------- def FD_2D_acoustic_JIT_absorb(dt,dx,dz,f0): # define model discretization # --------------------------- nx = (int)(xmax/dx) # number of grid points in x-direction print('nx = ',nx) nz = (int)(zmax/dz) # number of grid points in x-direction print('nz = ',nz) nt = (int)(tmax/dt) # maximum number of time steps print('nt = ',nt) isrc = (int)(xsrc/dx) # source location in grid in x-direction jsrc = (int)(zsrc/dz) # source location in grid in x-direction # Source time function (Gaussian) # ------------------------------- src = np.zeros(nt + 1) time = np.linspace(0 * dt, nt * dt, nt) # 1st derivative of Gaussian src = -2. * (time - t0) * (f0 ** 2) * (np.exp(- (f0 ** 2) * (time - t0) ** 2)) # define clip value: 0.1 * absolute maximum value of source wavelet clip = 0.1 * max([np.abs(src.min()), np.abs(src.max())]) / (dx*dz) * dt**2 # Define absorbing boundary frame # ------------------------------- # Define model # ------------ vp = np.zeros((nx,nz)) vp = model(nx,nz,vp,dx,dz) vp2 = vp**2 # Initialize empty pressure arrays # -------------------------------- p = np.zeros((nx,nz)) # p at time n (now) pold = np.zeros((nx,nz)) # p at time n-1 (past) pnew = np.zeros((nx,nz)) # p at time n+1 (present) d2px = np.zeros((nx,nz)) # 2nd spatial x-derivative of p d2pz = np.zeros((nx,nz)) # 2nd spatial z-derivative of p # Initalize animation of pressure wavefield # ----------------------------------------- fig = plt.figure(figsize=(7,3.5)) # define figure size plt.tight_layout() extent = [0.0,xmax,zmax,0.0] # define model extension # Plot pressure wavefield movie ax1 = plt.subplot(121) image = plt.imshow(p.T, animated=True, cmap="RdBu", extent=extent, interpolation='nearest', vmin=-clip, vmax=clip) plt.title('Pressure wavefield') plt.xlabel('x [m]') plt.ylabel('z [m]') # Plot Vp-model ax2 = plt.subplot(122) image1 = plt.imshow(vp.T/1000, cmap=plt.cm.viridis, interpolation='nearest', extent=extent) plt.title('Vp-model') plt.xlabel('x [m]') plt.setp(ax2.get_yticklabels(), visible=False) divider = make_axes_locatable(ax2) cax2 = divider.append_axes("right", size="2%", pad=0.1) fig.colorbar(image1, cax=cax2) plt.ion() plt.show(block=False) # Calculate Partial Derivatives # ----------------------------- for it in range(nt): # FD approximation of spatial derivative by 3 point operator d2px, d2pz = update_d2px_d2pz(p, dx, dz, nx, nz, d2px, d2pz) # Time Extrapolation # ------------------ pnew = 2 * p - pold + vp2 * dt**2 * (d2px + d2pz) # Add Source Term at isrc # ----------------------- # Absolute pressure w.r.t analytical solution pnew[isrc,jsrc] = pnew[isrc,jsrc] + src[it] / (dx * dz) * dt ** 2 # Apply absorbing boundary frame to p and pnew # Remap Time Levels # ----------------- pold, p = p, pnew # display pressure snapshots if (it % isnap) == 0: image.set_data(p.T) fig.canvas.draw()
05_2D_acoustic_FD_modelling/lecture_notebooks/4_fdac2d_absorbing_boundary.ipynb
daniel-koehn/Theory-of-seismic-waves-II
gpl-3.0
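The two placeholders in FD_2D_acoustic_JIT_absorb above can be filled in along these lines, assuming an absorb function like the sketch given earlier (the exact variable names are up to you):

# Define absorbing boundary frame (once, before the time loop)
absorb_coeff = absorb(nx, nz)

# ... and inside the time loop, after the source term has been added:
# Apply absorbing boundary frame to p and pnew
p = p * absorb_coeff
pnew = pnew * absorb_coeff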
Let's evaluate the influence of the Sponge boundaries on the artificial boundary reflections:
%matplotlib notebook

dx = 5.0     # grid point distance in x-direction (m)
dz = dx      # grid point distance in z-direction (m)
f0 = 100.0   # centre frequency of the source wavelet (Hz)

# calculate dt according to the CFL-criterion
dt = dx / (np.sqrt(2.0) * vp0)

FD_2D_acoustic_JIT_absorb(dt,dx,dz,f0)
05_2D_acoustic_FD_modelling/lecture_notebooks/4_fdac2d_absorbing_boundary.ipynb
daniel-koehn/Theory-of-seismic-waves-II
gpl-3.0
NumPy. To import NumPy, use: import numpy as np. You can also use: from numpy import *. That avoids having to prefix calls with np., but it imports all of NumPy's modules. To update NumPy, open the command prompt and type: pip install numpy -U
# Importing NumPy
import numpy as np

np.__version__
Cap08/Notebooks/DSA-Python-Cap08-01-NumPy.ipynb
dsacademybr/PythonFundamentos
gpl-3.0
Creating Arrays
# Help
help(np.array)

# Array created from a list:
vetor1 = np.array([0, 1, 2, 3, 4, 5, 6, 7, 8])
print(vetor1)

# An ndarray object is a multidimensional container of items of the same type and size.
type(vetor1)

# Using NumPy array methods
vetor1.cumsum()

# Creating a list. Note how lists and arrays are different objects, with different properties
lst = [0, 1, 2, 3, 4, 5, 6, 7, 8]
lst
type(lst)

# Printing a specific element of the array
vetor1[0]

# Changing an element of the array
vetor1[0] = 100
print(vetor1)

# It is not possible to insert an element of another type
vetor1[0] = 'Novo elemento'

# Checking the shape of the array
print(vetor1.shape)
Cap08/Notebooks/DSA-Python-Cap08-01-NumPy.ipynb
dsacademybr/PythonFundamentos
gpl-3.0
NumPy Functions
# A função arange cria um vetor contendo uma progressão aritmética a partir de um intervalo - start, stop, step vetor2 = np.arange(0., 4.5, .5) print(vetor2) # Verificando o tipo do objeto type(vetor2) # Formato do array np.shape(vetor2) print (vetor2.dtype) x = np.arange(1, 10, 0.25) print(x) print(np.zeros(10)) # Retorna 1 nas posições em diagonal e 0 no restante z = np.eye(3) z # Os valores passados como parâmetro, formam uma diagonal d = np.diag(np.array([1, 2, 3, 4])) d # Array de números complexos c = np.array([1+2j, 3+4j, 5+6*1j]) c # Array de valores booleanos b = np.array([True, False, False, True]) b # Array de strings s = np.array(['Python', 'R', 'Julia']) s # O método linspace (linearly spaced vector) retorna um número de # valores igualmente distribuídos no intervalo especificado np.linspace(0, 10) print(np.linspace(0, 10, 15)) print(np.logspace(0, 5, 10))
Cap08/Notebooks/DSA-Python-Cap08-01-NumPy.ipynb
dsacademybr/PythonFundamentos
gpl-3.0
Creating Matrices
# Criando uma matriz matriz = np.array([[1,2,3],[4,5,6]]) print(matriz) print(matriz.shape) # Criando uma matriz 2x3 apenas com números "1" matriz1 = np.ones((2,3)) print(matriz1) # Criando uma matriz a partir de uma lista de listas lista = [[13,81,22], [0, 34, 59], [21, 48, 94]] # A função matrix cria uma matria a partir de uma sequência matriz2 = np.matrix(lista) matriz2 type(matriz2) # Formato da matriz np.shape(matriz2) matriz2.size print(matriz2.dtype) matriz2.itemsize matriz2.nbytes print(matriz2[2,1]) # Alterando um elemento da matriz matriz2[1,0] = 100 matriz2 x = np.array([1, 2]) # NumPy decide o tipo dos dados y = np.array([1.0, 2.0]) # NumPy decide o tipo dos dados z = np.array([1, 2], dtype=np.float64) # Forçamos um tipo de dado em particular print (x.dtype, y.dtype, z.dtype) matriz3 = np.array([[24, 76], [35, 89]], dtype=float) matriz3 matriz3.itemsize matriz3.nbytes matriz3.ndim matriz3[1,1] matriz3[1,1] = 100 matriz3
Cap08/Notebooks/DSA-Python-Cap08-01-NumPy.ipynb
dsacademybr/PythonFundamentos
gpl-3.0
Using NumPy's random() Method
print(np.random.rand(10))

import matplotlib.pyplot as plt
%matplotlib inline
import matplotlib as mat
mat.__version__

print(np.random.rand(10))

plt.show((plt.hist(np.random.rand(1000))))

print(np.random.randn(5,5))

plt.show(plt.hist(np.random.randn(1000)))

imagem = np.random.rand(30, 30)
plt.imshow(imagem, cmap = plt.cm.hot)
plt.colorbar()
Cap08/Notebooks/DSA-Python-Cap08-01-NumPy.ipynb
dsacademybr/PythonFundamentos
gpl-3.0
Operations with Datasets
import os
filename = os.path.join('iris.csv')
# On Windows use !more iris.csv. On Mac or Linux use !head iris.csv
!head iris.csv
#!more iris.csv
# Loading a dataset into an array
arquivo = np.loadtxt(filename, delimiter=',', usecols=(0,1,2,3), skiprows=1)
print(arquivo)
type(arquivo)
# Generating a plot from a file using NumPy
var1, var2 = np.loadtxt(filename, delimiter=',', usecols=(0,1), skiprows=1, unpack=True)
plt.show(plt.plot(var1, var2, 'o', markersize=8, alpha=0.75))
Cap08/Notebooks/DSA-Python-Cap08-01-NumPy.ipynb
dsacademybr/PythonFundamentos
gpl-3.0
Statistics
# Creating an array
A = np.array([15, 23, 63, 94, 75])
# In statistics, the mean is the value around which the data of a distribution are concentrated.
np.mean(A)
# The standard deviation shows how much variation or "dispersion" exists
# relative to the mean (or expected value).
# A low standard deviation indicates that the data tend to be close to the mean.
# A high standard deviation indicates that the data are spread over a wide range of values.
np.std(A)
# The variance of a random variable is a measure of its statistical
# dispersion, indicating "how far" its values typically are
# from the expected value
np.var(A)
d = np.arange(1, 10)
d
np.sum(d)
# Returns the product of the elements
np.prod(d)
# Cumulative sum of the elements
np.cumsum(d)
a = np.random.randn(400,2)
m = a.mean(0)
print(m, m.shape)
plt.plot(a[:,0], a[:,1], 'o', markersize=5, alpha=0.50)
plt.plot(m[0], m[1], 'ro', markersize=10)
plt.show()
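Beyond mean, standard deviation, and variance, a small add-on example (not in the original notebook, reusing the array A above) showing median and percentiles:

# Median and quartiles of the same array
print(np.median(A))
print(np.percentile(A, 25))   # first quartile
print(np.percentile(A, 75))   # third quartile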
Cap08/Notebooks/DSA-Python-Cap08-01-NumPy.ipynb
dsacademybr/PythonFundamentos
gpl-3.0
Other Array Operations
# Slicing
a = np.diag(np.arange(3))
a
a[1, 1]
a[1]
b = np.arange(10)
b
# [start:end:step]
b[2:9:3]
# Comparison
a = np.array([1, 2, 3, 4])
b = np.array([4, 2, 2, 4])
a == b
np.array_equal(a, b)
a.min()
a.max()
# Adding a scalar to every element of the array (broadcasting)
np.array([1, 2, 3]) + 1.5
# Using the around method
a = np.array([1.2, 1.5, 1.6, 2.5, 3.5, 4.5])
b = np.around(a)
b
# Creating an array
B = np.array([1, 2, 3, 4])
B
# Copying an array
C = B.flatten()
C
# Creating an array
v = np.array([1, 2, 3])
# Adding a dimension to the array
v[:, np.newaxis], v[:,np.newaxis].shape, v[np.newaxis,:].shape
# Repeating the elements of an array
np.repeat(v, 3)
# Tiling (repeating) the whole array
np.tile(v, 3)
# Creating an array
w = np.array([5, 6])
# Concatenating
np.concatenate((v, w), axis=0)
# Copying arrays
r = np.copy(v)
r
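One more indexing idiom worth knowing, added here as a small sketch (not part of the original cell, assuming np is imported as above): a boolean comparison can be used directly as a mask to filter an array.

# Boolean masking: keep only the elements greater than 2
arr = np.array([1, 2, 3, 4, 5])
mask = arr > 2
print(mask)        # [False False  True  True  True]
print(arr[mask])   # [3 4 5]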
Cap08/Notebooks/DSA-Python-Cap08-01-NumPy.ipynb
dsacademybr/PythonFundamentos
gpl-3.0
Figure out how to use np.random.choice to simulate 1,000 tosses of a fair coin
np.random uses a "pseudorandom" number generator to simulate choices:
* a string of numbers that has the same statistical properties as random numbers
* numbers are actually generated deterministically
* numbers look random...
numbers = np.random.random(100000)
plt.hist(numbers)
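For the exercise above, a minimal sketch of one way to simulate 1,000 fair coin tosses with np.random.choice (the "H"/"T" labels and variable names here are just illustrative, not part of the original notebook):

import numpy as np

# 1,000 draws, each equally likely to be "H" or "T"
tosses = np.random.choice(["H", "T"], size=1000)

# Count how many heads came up
num_heads = np.sum(tosses == "H")
print(num_heads, "heads out of 1000 tosses")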
chapters/01_simulation/00_random-sampling.ipynb
harmsm/pythonic-science
unlicense
But numbers are actually deterministic...
def simple_pseudo_random(current_value, multiplier=13110243, divisor=13132):
    return current_value*multiplier % divisor

seed = 10218888
out = []
current = seed
for i in range(1000):
    current = simple_pseudo_random(current)
    out.append(current)

plt.hist(out)
chapters/01_simulation/00_random-sampling.ipynb
harmsm/pythonic-science
unlicense
Python uses the Mersenne Twister to generate pseudorandom numbers.
What does the seed do?
seed = 1021888
out = []
current = seed
for i in range(1000):
    current = simple_pseudo_random(current)
    out.append(current)
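A small sketch (not in the original notebook, reusing the simple generator defined above) that answers the question directly: re-running with the same seed reproduces the same sequence, while a different seed gives a different one.

def run_generator(seed, n=5):
    # Generate n values starting from the given seed
    values = []
    current = seed
    for _ in range(n):
        current = simple_pseudo_random(current)
        values.append(current)
    return values

print(run_generator(1021888))   # some sequence
print(run_generator(1021888))   # identical to the line above
print(run_generator(42))        # a different sequence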
chapters/01_simulation/00_random-sampling.ipynb
harmsm/pythonic-science
unlicense
What will we see if I run this cell twice in a row?
s1 = np.random.random(10)
print(s1)
chapters/01_simulation/00_random-sampling.ipynb
harmsm/pythonic-science
unlicense
What will we see if I run this cell twice in a row?
np.random.seed(5235412)
s1 = np.random.random(10)
print(s1)
chapters/01_simulation/00_random-sampling.ipynb
harmsm/pythonic-science
unlicense
A seed lets you specify which pseudo-random numbers you will use. If you use the same seed, you will get identical samples. If you use a different seed, you will get a different set of samples.
matplotlib.pyplot.hist
numbers = np.random.normal(size=10000)
counts, bins, junk = plt.hist(numbers, range(-10,10))
chapters/01_simulation/00_random-sampling.ipynb
harmsm/pythonic-science
unlicense
Basic histogram plotting syntax
python COUNTS, BIN_EDGES, GRAPHICS_BIT = plt.hist(ARRAY_TO_BIN, BINS_TO_USE)
Figure out how the function works and report back to the class:
* What the function does
* Arguments normal people would care about
* What it returns
np.random.normal
np.random.binomial
np.random.uniform
np.random.poisson
np.random.choice
np.random.shuffle
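As a quick illustration of the functions listed above (the parameter values here are arbitrary examples of my own, not part of the original exercise):

import numpy as np

# Ten draws from a normal distribution with mean 0 and standard deviation 2
print(np.random.normal(loc=0, scale=2, size=10))

# Number of successes in 20 trials with success probability 0.5, repeated 10 times
print(np.random.binomial(n=20, p=0.5, size=10))

# Ten draws uniformly distributed between -1 and 1
print(np.random.uniform(low=-1, high=1, size=10))

# Ten draws from a Poisson distribution with rate 3
print(np.random.poisson(lam=3, size=10))

# Shuffle an array in place
x = np.arange(10)
np.random.shuffle(x)
print(x)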
chapters/01_simulation/00_random-sampling.ipynb
harmsm/pythonic-science
unlicense
Use pandas to read the csv of the monthly-milk-production.csv file and set index_col='Month'
import pandas as pd

milk = pd.read_csv('monthly-milk-production.csv', index_col='Month')
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/03-Time-Series-Exercise-Solutions-Final.ipynb
arcyfelix/Courses
apache-2.0
Check out the head of the dataframe
milk.head()
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/03-Time-Series-Exercise-Solutions-Final.ipynb
arcyfelix/Courses
apache-2.0
Make the index a time series by using: milk.index = pd.to_datetime(milk.index)
milk.index = pd.to_datetime(milk.index)
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/03-Time-Series-Exercise-Solutions-Final.ipynb
arcyfelix/Courses
apache-2.0
Plot out the time series data.
milk.plot()
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/03-Time-Series-Exercise-Solutions-Final.ipynb
arcyfelix/Courses
apache-2.0
Train Test Split
Let's attempt to predict a year's worth of data (12 months, or 12 steps into the future). Create a train/test split using indexing (hint: use .head(), .tail(), or .iloc[]). We don't want a random train/test split; we want to specify that the test set is the last 12 months of data, with everything before it used for training.
milk.info()
train_set = milk.head(156)
test_set = milk.tail(12)
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/03-Time-Series-Exercise-Solutions-Final.ipynb
arcyfelix/Courses
apache-2.0
Scale the Data
Use sklearn.preprocessing to scale the data using the MinMaxScaler. Remember to only fit_transform on the training data, then transform the test data. You shouldn't fit on the test data as well, otherwise you are assuming you would know about future behavior!
from sklearn.preprocessing import MinMaxScaler

scaler = MinMaxScaler()
train_scaled = scaler.fit_transform(train_set)
test_scaled = scaler.transform(test_set)
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/03-Time-Series-Exercise-Solutions-Final.ipynb
arcyfelix/Courses
apache-2.0
Batch Function
We'll need a function that can feed batches of the training data. We'll need to do several things that are listed out as steps in the comments of the function. Remember to reference the previous batch method from the lecture for hints. Try to fill out the function template below, this is a pretty hard step, so feel free to reference the solutions!
def next_batch(training_data, batch_size, steps):
    """
    INPUT: Data, Batch Size, Time Steps per batch
    OUTPUT: A tuple of y time series results. y[:,:-1] and y[:,1:]
    """
    # STEP 1: Use np.random.randint to set a random starting point index for the batch.
    # Remember that each batch needs to have the same number of steps in it.
    # This means you should limit the starting point to len(data)-steps

    # STEP 2: Now that you have a starting index you'll need to index the data from
    # the random start to random start + steps. Then reshape this data to be (1,steps)

    # STEP 3: Return the batches. You'll have two batches to return y[:,:-1] and y[:,1:]
    # You'll need to reshape these into tensors for the RNN. Depending on your indexing it
    # will be either .reshape(-1,steps-1,1) or .reshape(-1,steps,1)


# Solution
import numpy as np

def next_batch(training_data, batch_size, steps):
    # Grab a random starting point for each batch
    rand_start = np.random.randint(0, len(training_data) - steps)
    # Create Y data for time series in the batches
    y_batch = np.array(training_data[rand_start:rand_start+steps+1]).reshape(1, steps+1)
    return y_batch[:, :-1].reshape(-1, steps, 1), y_batch[:, 1:].reshape(-1, steps, 1)
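As a quick sanity check (not part of the original exercise, assuming train_scaled from the scaling step above), you could call the batch function once and confirm the shapes line up with the placeholders defined later:

X_check, y_check = next_batch(train_scaled, batch_size=1, steps=12)
print(X_check.shape)  # expected: (1, 12, 1)
print(y_check.shape)  # expected: (1, 12, 1)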
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/03-Time-Series-Exercise-Solutions-Final.ipynb
arcyfelix/Courses
apache-2.0
Setting Up The RNN Model
Import TensorFlow
import tensorflow as tf
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/03-Time-Series-Exercise-Solutions-Final.ipynb
arcyfelix/Courses
apache-2.0
The Constants
Define the constants in a single cell. You'll need the following (in parentheses are the values I used in my solution, but you can play with some of these):
* Number of Inputs (1)
* Number of Time Steps (12)
* Number of Neurons per Layer (100)
* Number of Outputs (1)
* Learning Rate (0.03)
* Number of Iterations for Training (4000)
* Batch Size (1)
# Just one feature, the time series
num_inputs = 1
# Num of steps in each batch
num_time_steps = 12
# 100 neuron layer, play with this
num_neurons = 100
# Just one output, predicted time series
num_outputs = 1

## You can also try increasing iterations, but decreasing learning rate
# learning rate, you can play with this
learning_rate = 0.03
# how many iterations to go through (training steps), you can play with this
num_train_iterations = 4000
# Size of the batch of data
batch_size = 1
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/03-Time-Series-Exercise-Solutions-Final.ipynb
arcyfelix/Courses
apache-2.0
Create Placeholders for X and y. (You can change the variable names if you want.) With the next_batch function above, each returned X and y sequence already contains num_time_steps points, where y is simply X shifted one step into the future, so the placeholder shapes are [None, num_time_steps, num_inputs] and [None, num_time_steps, num_outputs]. This still trains the RNN to predict one point into the future at each position of the input sequence.
X = tf.placeholder(tf.float32, [None, num_time_steps, num_inputs])
y = tf.placeholder(tf.float32, [None, num_time_steps, num_outputs])
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/03-Time-Series-Exercise-Solutions-Final.ipynb
arcyfelix/Courses
apache-2.0
Now create the RNN layer. You have complete freedom over this; use tf.contrib.rnn and choose anything you want: OutputProjectionWrapper, BasicRNNCell, BasicLSTMCell, MultiRNNCell, GRUCell, etc... Keep in mind not every combination will work well! (If in doubt, the solution used an OutputProjectionWrapper around a basic LSTM cell with relu activation.)
# Also play around with GRUCell
cell = tf.contrib.rnn.OutputProjectionWrapper(
    tf.contrib.rnn.BasicLSTMCell(num_units=num_neurons, activation=tf.nn.relu),
    output_size=num_outputs)
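If you want to try one of the other cells the exercise mentions, a GRU-based variant could look like the sketch below. It is left commented out so the notebook still runs with the LSTM solution above; uncommenting it would replace the cell definition.

# Alternative (not the solution used here): wrap a GRUCell instead of a BasicLSTMCell
# cell = tf.contrib.rnn.OutputProjectionWrapper(
#     tf.contrib.rnn.GRUCell(num_units=num_neurons, activation=tf.nn.relu),
#     output_size=num_outputs)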
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/03-Time-Series-Exercise-Solutions-Final.ipynb
arcyfelix/Courses
apache-2.0
Now pass the cell variable into tf.nn.dynamic_rnn, along with your first placeholder (X)
outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/03-Time-Series-Exercise-Solutions-Final.ipynb
arcyfelix/Courses
apache-2.0
Loss Function and Optimizer
Create a Mean Squared Error loss function and use it to minimize an AdamOptimizer; remember to pass in your learning rate.
loss = tf.reduce_mean(tf.square(outputs - y))  # MSE
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train = optimizer.minimize(loss)
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/03-Time-Series-Exercise-Solutions-Final.ipynb
arcyfelix/Courses
apache-2.0
Initialize the global variables
init = tf.global_variables_initializer()
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/03-Time-Series-Exercise-Solutions-Final.ipynb
arcyfelix/Courses
apache-2.0
Create an instance of tf.train.Saver()
saver = tf.train.Saver()
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/03-Time-Series-Exercise-Solutions-Final.ipynb
arcyfelix/Courses
apache-2.0
Session
Run a tf.Session that trains on the batches created by your next_batch function. Also add a loss evaluation every 100 training iterations. Remember to save your model after you are done training.
gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.9)

with tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) as sess:
    sess.run(init)

    for iteration in range(num_train_iterations):

        X_batch, y_batch = next_batch(train_scaled, batch_size, num_time_steps)
        sess.run(train, feed_dict={X: X_batch, y: y_batch})

        if iteration % 100 == 0:
            mse = loss.eval(feed_dict={X: X_batch, y: y_batch})
            print(iteration, "\tMSE:", mse)

    # Save Model for Later
    saver.save(sess, "./ex_time_series_model")
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/03-Time-Series-Exercise-Solutions-Final.ipynb
arcyfelix/Courses
apache-2.0
Predicting Future (Test Data)
Show the test_set (the last 12 months of your original complete data set)
test_set
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/03-Time-Series-Exercise-Solutions-Final.ipynb
arcyfelix/Courses
apache-2.0
Now we want to attempt to predict these 12 months of data, using only the training data we had. To do this we will feed in a seed training_instance of the last 12 months of the training_set of data to predict 12 months into the future. Then we will be able to compare our generated 12 months to our actual true historical values from the test set!
Generative Session
NOTE: Recall that our model is really only trained to predict 1 time step ahead; asking it to generate 12 steps is a big ask, and technically not what it was trained to do! Think of this more as generating new values based off some previous pattern, rather than trying to directly predict the future. You would need to go back and train the model to predict 12 time steps ahead to really get a higher accuracy on the test data (which has its limits due to the smaller size of our data set).
Fill out the session code below to generate 12 months of data based off the last 12 months of data from the training set. The hardest part about this is adjusting the arrays with their shapes and sizes. Reference the lecture for hints.
with tf.Session() as sess:

    # Use your Saver instance to restore your saved rnn time series model
    saver.restore(sess, "./ex_time_series_model")

    # Create a numpy array for your generative seed from the last 12 months of the
    # training set data. Hint: Just use tail(12) and then pass it to an np.array
    train_seed = list(train_scaled[-12:])

    # Now create a for loop that generates one new point per iteration,
    # feeding in the most recent num_time_steps points each time
    for iteration in range(12):
        X_batch = np.array(train_seed[-num_time_steps:]).reshape(1, num_time_steps, 1)
        y_pred = sess.run(outputs, feed_dict={X: X_batch})
        train_seed.append(y_pred[0, -1, 0])
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/03-Time-Series-Exercise-Solutions-Final.ipynb
arcyfelix/Courses
apache-2.0
Show the result of the predictions.
train_seed
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/03-Time-Series-Exercise-Solutions-Final.ipynb
arcyfelix/Courses
apache-2.0
Grab the portion of the results that are the generated values and apply inverse_transform on them to turn them back into milk production value units (lbs per cow). Also reshape the results to be (12,1) so we can easily add them to the test_set dataframe.
results = scaler.inverse_transform(np.array(train_seed[12:]).reshape(12,1))
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/03-Time-Series-Exercise-Solutions-Final.ipynb
arcyfelix/Courses
apache-2.0
Create a new column on the test_set called "Generated" and set it equal to the generated results. You may get a warning about this, feel free to ignore it.
test_set['Generated'] = results
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/03-Time-Series-Exercise-Solutions-Final.ipynb
arcyfelix/Courses
apache-2.0
View the test_set dataframe.
test_set
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/03-Time-Series-Exercise-Solutions-Final.ipynb
arcyfelix/Courses
apache-2.0
Plot out the two columns for comparison.
test_set.plot()
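If you also want a numeric comparison alongside the plot, here is a rough RMSE sketch of my own (not required by the exercise); it assumes the original milk production values are the first column of test_set and that the 'Generated' column was added as above:

# RMSE between the true test values (first column) and the generated values
diff = test_set.iloc[:, 0].values - test_set['Generated'].values
rmse = np.sqrt(np.mean(diff**2))
print(rmse)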
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/03-Time-Series-Exercise-Solutions-Final.ipynb
arcyfelix/Courses
apache-2.0