Test the converted models To prove these models are still accurate after conversion and quantization, we'll use both of them to make predictions and compare these against our test results:
# Instantiate an interpreter for each model
sine_model = tf.lite.Interpreter('sine_model.tflite')
sine_model_quantized = tf.lite.Interpreter('sine_model_quantized.tflite')

# Allocate memory for each model
sine_model.allocate_tensors()
sine_model_quantized.allocate_tensors()

# Get the input and output tensors so we can feed in values and get the results
sine_model_input = sine_model.tensor(sine_model.get_input_details()[0]["index"])
sine_model_output = sine_model.tensor(sine_model.get_output_details()[0]["index"])
sine_model_quantized_input = sine_model_quantized.tensor(sine_model_quantized.get_input_details()[0]["index"])
sine_model_quantized_output = sine_model_quantized.tensor(sine_model_quantized.get_output_details()[0]["index"])

# Create arrays to store the results
sine_model_predictions = np.empty(x_test.size)
sine_model_quantized_predictions = np.empty(x_test.size)

# Run each model's interpreter for each value and store the results in arrays
for i in range(x_test.size):
    sine_model_input().fill(x_test[i])
    sine_model.invoke()
    sine_model_predictions[i] = sine_model_output()[0]

    sine_model_quantized_input().fill(x_test[i])
    sine_model_quantized.invoke()
    sine_model_quantized_predictions[i] = sine_model_quantized_output()[0]

# See how they line up with the data
plt.clf()
plt.title('Comparison of various models against actual values')
plt.plot(x_test, y_test, 'bo', label='Actual')
plt.plot(x_test, predictions, 'ro', label='Original predictions')
plt.plot(x_test, sine_model_predictions, 'bx', label='Lite predictions')
plt.plot(x_test, sine_model_quantized_predictions, 'gx', label='Lite quantized predictions')
plt.legend()
plt.show()
tensorflow/lite/micro/examples/hello_world/create_sine_model.ipynb
gunan/tensorflow
apache-2.0
We can see from the graph that the predictions for the original model, the converted model, and the quantized model are all close enough to be indistinguishable. This means that our quantized model is ready to use! We can print the difference in file size:
import os

basic_model_size = os.path.getsize("sine_model.tflite")
print("Basic model is %d bytes" % basic_model_size)

quantized_model_size = os.path.getsize("sine_model_quantized.tflite")
print("Quantized model is %d bytes" % quantized_model_size)

difference = basic_model_size - quantized_model_size
print("Difference is %d bytes" % difference)
tensorflow/lite/micro/examples/hello_world/create_sine_model.ipynb
gunan/tensorflow
apache-2.0
Our quantized model is only 16 bytes smaller than the original version, which is only a tiny reduction in size! At around 2.6 kilobytes, this model is already so small that the weights make up only a small fraction of the overall size, meaning quantization has little effect. More complex models have many more weights, meaning the space saving from quantization will be much higher, approaching 4x for the most sophisticated models. Regardless, our quantized model will take less time to execute than the original version, which is important on a tiny microcontroller! Write to a C file The final step in preparing our model for use with TensorFlow Lite for Microcontrollers is to convert it into a C source file. You can see an example of this format in hello_world/sine_model_data.cc. To do so, we can use a command line utility named xxd. The following cell runs xxd on our quantized model and prints the output:
# Install xxd if it is not available
!apt-get -qq install xxd
# Save the file as a C source file
!xxd -i sine_model_quantized.tflite > sine_model_quantized.cc
# Print the source file
!cat sine_model_quantized.cc
tensorflow/lite/micro/examples/hello_world/create_sine_model.ipynb
gunan/tensorflow
apache-2.0
Reshaping DataFrame objects In the context of a single DataFrame, we are often interested in re-arranging the layout of our data. This dataset is from Table 6.9 of Statistical Methods for the Analysis of Repeated Measurements by Charles S. Davis, pp. 161-163 (Springer, 2002). These data are from a multicenter, randomized controlled trial of botulinum toxin type B (BotB) in patients with cervical dystonia (spasmodic torticollis) from nine U.S. sites. Patients were randomized to placebo (N=36), 5000 units of BotB (N=36), or 10,000 units of BotB (N=37). The response variable is the total score on the Toronto Western Spasmodic Torticollis Rating Scale (TWSTRS), measuring severity, pain, and disability of cervical dystonia (high scores mean more impairment). TWSTRS was measured at baseline (week 0) and weeks 2, 4, 8, 12, and 16 after treatment began.
cdystonia = pd.read_csv("../data/cdystonia.csv", index_col=None)
cdystonia.head()
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
Have a peek at the structure of the index of the stacked data (and the data itself). To complement this, unstack pivots from rows back to columns.
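The stack step itself is not shown in this excerpt. As a hedged sketch of how stacked might have been produced (the original notebook may have stacked a differently indexed frame), stack pivots the columns of cdystonia into rows, yielding a Series with a hierarchical index:

# A sketch (assumption): stack the cdystonia columns into rows,
# producing a Series with a hierarchical (row, column-name) index.
stacked = cdystonia.stack()
stacked.head()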
stacked.unstack().head()
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
Exercise Which columns uniquely define a row? Create a DataFrame called cdystonia2 with a hierarchical index based on these columns.
# Write your answer here
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
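If you want to compare against one possible answer, a sketch follows. It assumes that patient and obs together uniquely identify a row (each patient is measured once per observation), which is consistent with how cdystonia2 is used later in this notebook:

# One possible answer (a sketch, not necessarily the intended solution):
# patient and obs together uniquely identify a row.
cdystonia2 = cdystonia.set_index(['patient', 'obs'])
cdystonia2.head()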
If we want to transform this data so that repeated measurements are in columns, we can unstack the twstrs measurements according to obs.
twstrs_wide = cdystonia2['twstrs'].unstack('obs')
twstrs_wide.head()
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
We can now merge these reshaped outcomes data with the other variables to create a wide format DataFrame that consists of one row for each patient.
cdystonia_wide = (cdystonia[['patient','site','id','treat','age','sex']]
                  .drop_duplicates()
                  .merge(twstrs_wide, right_index=True, left_on='patient', how='inner'))
cdystonia_wide.head()
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
This illustrates the two formats for longitudinal data: long and wide formats. It's typically better to store data in long format because additional data can be included as additional rows in the database, while wide format requires that the entire database schema be altered by adding columns to every row as data are collected. The preferable format for analysis depends entirely on what is planned for the data, so it is important to be able to move easily between them. Method chaining In the DataFrame reshaping section above, you probably noticed how several methods were strung together to produce a wide format table:
(cdystonia[['patient','site','id','treat','age','sex']]
 .drop_duplicates()
 .merge(twstrs_wide, right_index=True, left_on='patient', how='inner')
 .head())
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
This approach of sequentially calling methods is called method chaining, and despite the fact that it creates very long runs of code that must be carefully formatted, it allows for the writing of rather concise and readable code. Method chaining is possible because of the pandas convention of returning copies of the results of operations, rather than operating in place. This allows methods from the returned object to be immediately called, as needed, rather than assigning the output to a variable that might not otherwise be used. For example, without method chaining we would have done the following:
cdystonia_subset = cdystonia[['patient','site','id','treat','age','sex']]
cdystonia_complete = cdystonia_subset.drop_duplicates()
cdystonia_merged = cdystonia_complete.merge(twstrs_wide, right_index=True,
                                            left_on='patient', how='inner')
cdystonia_merged.head()
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
This necessitates the creation of a slew of intermediate variables that we really don't need. Let's transform another dataset using method chaining. The measles.csv file contains de-identified cases of measles from an outbreak in Sao Paulo, Brazil in 1997. The file contains rows of individual records:
measles = pd.read_csv("../data/measles.csv", index_col=0, encoding='latin-1', parse_dates=['ONSET'])
measles.head()
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
The goal is to summarize this data by age groups and bi-weekly period, so that we can see how the outbreak affected different ages over the course of the outbreak. The best approach is to build up the chain incrementally. We can begin by generating the age groups (using cut) and grouping by age group and the date (ONSET):
(measles.assign(AGE_GROUP=pd.cut(measles.YEAR_AGE,
                                 [0,5,10,15,20,25,30,35,40,100],
                                 right=False))
        .groupby(['ONSET', 'AGE_GROUP']))
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
What we then want is the number of occurrences in each combination, which we can obtain by checking the size of each grouping:
(measles.assign(AGE_GROUP=pd.cut(measles.YEAR_AGE,
                                 [0,5,10,15,20,25,30,35,40,100],
                                 right=False))
        .groupby(['ONSET', 'AGE_GROUP'])
        .size()).head(10)
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
This results in a hierarchically-indexed Series, which we can pivot into a DataFrame by simply unstacking:
(measles.assign(AGE_GROUP=pd.cut(measles.YEAR_AGE,
                                 [0,5,10,15,20,25,30,35,40,100],
                                 right=False))
        .groupby(['ONSET', 'AGE_GROUP'])
        .size()
        .unstack()).head(5)
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
Now, we fill the missing values with zeros:
(measles.assign(AGE_GROUP=pd.cut(measles.YEAR_AGE,
                                 [0,5,10,15,20,25,30,35,40,100],
                                 right=False))
        .groupby(['ONSET', 'AGE_GROUP'])
        .size()
        .unstack()
        .fillna(0)).head(5)
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
Finally, we want the counts in 2-week intervals, rather than as irregularly-reported days, which yields the table of interest:
case_counts_2w = (measles.assign(AGE_GROUP=pd.cut(measles.YEAR_AGE,
                                                  [0,5,10,15,20,25,30,35,40,100],
                                                  right=False))
                         .groupby(['ONSET', 'AGE_GROUP'])
                         .size()
                         .unstack()
                         .fillna(0)
                         .resample('2W')
                         .sum())
case_counts_2w
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
From this, it is easy to create meaningful plots and conduct analyses:
case_counts_2w.plot(cmap='hot')
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
Pivoting The pivot method allows a DataFrame to be transformed easily between long and wide formats in the same way as a pivot table is created in a spreadsheet. It takes three arguments: index, columns and values, corresponding to the DataFrame index (the row headers), columns and cell values, respectively. For example, we may want the twstrs variable (the response variable) in wide format according to patient, as we saw with the unstacking method above:
cdystonia.pivot(index='patient', columns='obs', values='twstrs').head()
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
Exercise Try pivoting the cdystonia DataFrame without specifying a variable for the cell values:
# Write your answer here
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
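For comparison, one possible answer is sketched below; when values is omitted, pivot spreads every remaining column, producing a hierarchical column index:

# A sketch: with no values argument, all remaining columns are pivoted,
# giving a hierarchical column index.
cdystonia.pivot(index='patient', columns='obs').head()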
Data transformation There are a slew of additional operations for DataFrames that we would collectively refer to as transformations, which include tasks such as removing duplicate values, replacing values, and grouping values. Dealing with duplicates We can easily identify and remove duplicate values from DataFrame objects. For example, say we want to remove ships from our vessels dataset that have the same name:
vessels = pd.read_csv('../data/AIS/vessel_information.csv')
vessels.tail(10)
vessels.duplicated(subset='names').tail(10)
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
These rows can be removed using drop_duplicates
vessels.drop_duplicates(['names']).tail(10)
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
Alternately, if we simply want to replace particular values in a Series or DataFrame, we can use the replace method. An example where replacement is useful is replacing sentinel values with an appropriate numeric value prior to analysis. A large negative number is sometimes used in otherwise positive-valued data to denote missing values.
scores = pd.Series([99, 76, 85, -999, 84, 95])
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
In such situations, we can use replace to substitute nan where the sentinel values occur.
scores.replace(-999, np.nan)
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
Indicator variables For some statistical analyses (e.g. regression models or analyses of variance), categorical or group variables need to be converted into columns of indicators--zeros and ones--to create a so-called design matrix. The Pandas function get_dummies (indicator variables are also known as dummy variables) makes this transformation straightforward. Let's consider the DataFrame containing the ships corresponding to the transit segments on the eastern seaboard. The type variable denotes the class of vessel; we can create a matrix of indicators for this. For simplicity, let's filter the data down to the 5 most common types of ships. Exercise Create a subset of the vessels DataFrame called vessels5 that only contains the 5 most common types of vessels, based on their prevalence in the dataset.
# Write your answer here
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
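One possible approach is sketched below (a sketch, not necessarily the intended solution); vessels5 is then used in the next cell:

# A sketch: keep only the rows whose type is among the 5 most common
# vessel types in the dataset.
top5_types = vessels['type'].value_counts().index[:5]
vessels5 = vessels[vessels['type'].isin(top5_types)]
vessels5['type'].value_counts()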
We can now apply get_dummies to the vessel type to create 5 indicator variables.
pd.get_dummies(vessels5.type).head(10)
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
Discretization Pandas' cut function can be used to group continuous or countable data into bins. Discretization is generally a very bad idea for statistical analysis, so use this function responsibly! Let's say we want to bin the ages of the cervical dystonia patients into a smaller number of groups:
cdystonia.age.describe()
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
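The binning step itself is not shown in this excerpt; a minimal sketch with cut, assuming round-number bin edges that cover the age range reported by describe(), might look like this:

# A sketch (the bin edges are an assumption, not the notebook's exact choice):
pd.cut(cdystonia.age, [20, 30, 40, 50, 60, 70, 80, 90])[:10]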
Alternatively, one can specify custom quantiles to act as cut points:
quantiles = pd.qcut(vessels.max_loa, [0, 0.01, 0.05, 0.95, 0.99, 1])
quantiles[:30]
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
Exercise Use the discretized segment lengths as the input for get_dummies to create 5 indicator variables for segment length:
# Write your answer here
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
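A possible sketch follows, assuming the segment lengths come from the transit segments file that is loaded later in this notebook:

# A sketch (assumption): discretize seg_length into 5 quantile-based bins,
# then expand those bins into indicator columns.
segments = pd.read_csv('../data/AIS/transit_segments.csv')
length_bins = pd.qcut(segments.seg_length, 5)
pd.get_dummies(length_bins).head(10)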
Categorical Variables One of the keys to maximizing performance in pandas is to use the appropriate types for your data wherever possible. In the case of categorical data--either ordered categories like the ones we have just created, or unordered categories like race, gender or country--the use of the categorical dtype to encode string variables as numeric quantities can dramatically improve performance and simplify subsequent analyses. When text data are imported into a DataFrame, they are endowed with an object dtype. This will result in relatively slow computation because this dtype runs at Python speeds, rather than as Cython code that gives much of pandas its speed. We can ameliorate this by employing the categorical dtype on such data.
cdystonia_cat = cdystonia.assign(treatment=cdystonia.treat.astype('category')).drop('treat', axis=1)
cdystonia_cat.dtypes
cdystonia_cat.treatment.head()
cdystonia_cat.treatment.cat.codes
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
This creates an unordered categorical variable. To create an ordinal variable, we can specify ordered=True as an argument to astype:
cdystonia.treat.astype('category', ordered=True).head()
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
However, this is not the correct order; by default, the categories will be sorted alphabetically, which here gives exactly the reverse order that we need. To specify an arbitrary order, we can use the set_categories method, as follows:
cdystonia.treat.astype('category').cat.set_categories(['Placebo', '5000U', '10000U'], ordered=True).head()
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
Notice that we obtained set_categories from the cat attribute of the categorical variable. This is known as the category accessor, and is a device for gaining access to Categorical variables' categories, analogous to the string accessor that we have seen previously from text variables.
cdystonia_cat.treatment.cat
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
Additional categories can be added, even if they do not currently exist in the DataFrame, but are part of the set of possible categories:
cdystonia_cat['treatment'] = (cdystonia.treat.astype('category').cat .set_categories(['Placebo', '5000U', '10000U', '20000U'], ordered=True))
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
To complement this, we can remove categories that we do not wish to retain:
cdystonia_cat.treatment.cat.remove_categories('20000U').head()
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
Or, even more simply:
cdystonia_cat.treatment.cat.remove_unused_categories().head()
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
For larger datasets, there is an appreciable gain in performance, both in terms of speed and memory usage.
vessels_merged = (pd.read_csv('../data/AIS/vessel_information.csv', index_col=0)
                  .merge(pd.read_csv('../data/AIS/transit_segments.csv'),
                         left_index=True, right_on='mmsi'))

vessels_merged['registered'] = vessels_merged.flag.astype('category')

%timeit vessels_merged.groupby('flag').avg_sog.mean().sort_values()
%timeit vessels_merged.groupby('registered').avg_sog.mean().sort_values()

vessels_merged[['flag','registered']].memory_usage()
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
The add_prefix and add_suffix methods can be used to give the columns of the resulting table labels that reflect the transformation:
cdystonia_grouped.mean().add_suffix('_mean').head()
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
Exercise Use the quantile method to generate the median values of the twstrs variable for each patient.
# Write your answer here
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
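A possible sketch follows, assuming cdystonia_grouped is the cdystonia data grouped by patient (the grouping step itself is not shown in this excerpt):

# A sketch (assumption): group by patient, then take the 0.5 quantile,
# i.e. the median, of twstrs for each patient.
cdystonia_grouped = cdystonia.groupby('patient')
cdystonia_grouped['twstrs'].quantile(0.5)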
It is easy to do column selection within groupby operations, if we are only interested in split-apply-combine operations on a subset of columns:
%timeit cdystonia_grouped['twstrs'].mean().head()
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
Or, as a DataFrame:
cdystonia_grouped[['twstrs']].mean().head()
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
By default, groupby groups by row, but we can specify the axis argument to change this. For example, we can group our columns by dtype this way:
dict(list(cdystonia.groupby(cdystonia.dtypes, axis=1)))
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
It's also possible to group by one or more levels of a hierarchical index. Recall cdystonia2, which we created with a hierarchical index:
cdystonia2.head(10)
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
The level argument specifies which level of the index to use for grouping.
cdystonia2.groupby(level='obs', axis=0)['twstrs'].mean()
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
Apply We can generalize the split-apply-combine methodology by using the apply function. This allows us to invoke any function we wish on a grouped dataset and recombine the results into a DataFrame. The function below takes a DataFrame and a column name, sorts by the column, and takes the n largest values of that column. We can use this with apply to return the largest values from every group in a DataFrame in a single call.
def top(df, column, n=5):
    # Sort by the given column and return the n largest values.
    # (sort_index(by=...) is deprecated; sort_values is the current API.)
    return df.sort_values(by=column, ascending=False)[:n]
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
To see this in action, consider the vessel transit segments dataset (which we merged with the vessel information to yield vessels_merged). Say we wanted to return the 3 longest segments travelled by each ship:
top3segments = vessels_merged.groupby('mmsi').apply(top, column='seg_length', n=3)[['names', 'seg_length']]
top3segments.head(15)
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
Notice that additional arguments for the applied function can be passed via apply after the function name. It assumes that the DataFrame is the first argument. Exercise Load the dataset in titanic.xls. It contains data on all the passengers that travelled on the Titanic.
from IPython.core.display import HTML
HTML(filename='../data/titanic.html')
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
Women and children first? Use the groupby method to calculate the proportion of passengers that survived by sex. Calculate the same proportion, but by class and sex. Create age categories: children (under 14 years), adolescents (14-20), adults (21-64), and seniors (65+), and calculate survival proportions by age category, class and sex.
# Write your answer here
notebooks/1.4 - Pandas Best Practices.ipynb
fonnesbeck/ngcm_pandas_2016
cc0-1.0
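One possible sketch follows; the file path and the column names (pclass, sex, age, survived) are assumptions here, since the data dictionary is only shown as HTML above:

# A sketch (assumptions: path ../data/titanic.xls and standard column names).
titanic = pd.read_excel('../data/titanic.xls')

# Proportion surviving by sex
print(titanic.groupby('sex')['survived'].mean())

# Proportion surviving by class and sex
print(titanic.groupby(['pclass', 'sex'])['survived'].mean())

# Proportion surviving by age category, class, and sex
titanic['age_cat'] = pd.cut(titanic['age'], [0, 14, 21, 65, 120],
                            labels=['child', 'adolescent', 'adult', 'senior'],
                            right=False)
titanic.groupby(['age_cat', 'pclass', 'sex'])['survived'].mean()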
Help with commands If you ever need to look up a command, you can bring up the list of shortcuts by pressing H in command mode. The keyboard shortcuts are also available above in the Help menu. Go ahead and try it now. Creating new cells One of the most common commands is creating new cells. You can create a cell above the current cell by pressing A in command mode. Pressing B will create a cell below the currently selected cell. Exercise: Create a cell above this cell using the keyboard command. Exercise: Create a cell below this cell using the keyboard command. Switching between Markdown and code With keyboard shortcuts, it is quick and simple to switch between Markdown and code cells. To change from Markdown to a code cell, press Y. To switch from code to Markdown, press M. Exercise: Switch the cell below between Markdown and code cells.
## Practice here
def fibo(n):  # Recursive Fibonacci sequence!
    if n == 0:
        return 0
    elif n == 1:
        return 1
    return fibo(n-1) + fibo(n-2)
nd101 Deep Learning Nanodegree Foundation/notebooks/1 - playing with jupyter/keyboard-shortcuts.ipynb
anandha2017/udacity
mit
Line numbers A lot of times it is helpful to number the lines in your code for debugging purposes. You can turn on numbers by pressing L (in command mode of course) on a code cell. Exercise: Turn line numbers on and off in the above code cell. Deleting cells Deleting cells is done by pressing D twice in a row, so D, D. This is to prevent accidental deletions: you have to press the key twice! Exercise: Delete the cell below. Saving the notebook Notebooks are autosaved every once in a while, but you'll often want to save your work between those times. To save the notebook, press S. So easy! The Command Palette You can easily access the command palette by pressing Shift + Control/Command + P. Note: This won't work in Firefox and Internet Explorer unfortunately. There is already a keyboard shortcut assigned to those keys in those browsers. However, it does work in Chrome and Safari. This will bring up the command palette where you can search for commands that aren't available through the keyboard shortcuts. For instance, there are buttons on the toolbar that move cells up and down (the up and down arrows), but there aren't corresponding keyboard shortcuts. To move a cell down, you can open up the command palette and type in "move" which will bring up the move commands. Exercise: Use the command palette to move the cell below down one position.
# below this cell # Move this cell down
nd101 Deep Learning Nanodegree Foundation/notebooks/1 - playing with jupyter/keyboard-shortcuts.ipynb
anandha2017/udacity
mit
Next, we will import the data we saved previously using the pickle library.
pickle_file = '-basic_data.pickle'

with open(pickle_file, 'rb') as f:
    save = pickle.load(f)
    X = save['X']
    y = save['y']
    char_to_int = save['char_to_int']
    int_to_char = save['int_to_char']
    del save  # hint to help gc free up memory

print('Training set', X.shape, y.shape)
notebooks/week-6/02-using a pre-trained model with Keras.ipynb
kkkddder/dmc
apache-2.0
Now we need to define the Keras model. Since we will be loading parameters from a pre-trained model, this needs to match exactly the definition from the previous lab section. The only difference is that we will comment out the dropout layer so that the model uses all the hidden neurons when doing the predictions.
# define the LSTM model
model = Sequential()
model.add(LSTM(128, return_sequences=False, input_shape=(X.shape[1], X.shape[2])))
# model.add(Dropout(0.50))
model.add(Dense(y.shape[1], activation='softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam')
notebooks/week-6/02-using a pre-trained model with Keras.ipynb
kkkddder/dmc
apache-2.0
Next we will load the parameters from the model we trained previously, and compile it with the same loss and optimizer function.
# load the parameters from the pretrained model
filename = "-basic_LSTM.hdf5"
model.load_weights(filename)
model.compile(loss='categorical_crossentropy', optimizer='adam')
notebooks/week-6/02-using a pre-trained model with Keras.ipynb
kkkddder/dmc
apache-2.0
We also need to rewrite the sample() and generate() helper functions so that we can use them in our code:
def sample(preds, temperature=1.0):
    preds = np.asarray(preds).astype('float64')
    preds = np.log(preds) / temperature
    exp_preds = np.exp(preds)
    preds = exp_preds / np.sum(exp_preds)
    probas = np.random.multinomial(1, preds, 1)
    return np.argmax(probas)

def generate(sentence, sample_length=50, diversity=0.35):
    generated = sentence
    sys.stdout.write(generated)
    for i in range(sample_length):
        x = np.zeros((1, X.shape[1], X.shape[2]))
        for t, char in enumerate(sentence):
            x[0, t, char_to_int[char]] = 1.
        preds = model.predict(x, verbose=0)[0]
        next_index = sample(preds, diversity)
        next_char = int_to_char[next_index]
        generated += next_char
        sentence = sentence[1:] + next_char
        sys.stdout.write(next_char)
        sys.stdout.flush()
    print()
notebooks/week-6/02-using a pre-trained model with Keras.ipynb
kkkddder/dmc
apache-2.0
Now we can use the generate() function to generate text of any length based on our imported pre-trained model and a seed text of our choice. For best results, the length of the seed text should be the same as the length of training sequences (100 in the previous lab section). In this case, we will test the overfitting of the model by supplying it two seeds: one which comes verbatim from the training text, and one which comes from another earlier speech by Obama. If the model has not overfit our training data, we should expect it to produce reasonable results for both seeds. If it has overfit, it might produce pretty good results for something coming directly from the training set, but perform poorly on a new seed. This means that it has learned to replicate our training text, but cannot generalize to produce text based on other inputs. Since the original article was very short, however, the entire vocabulary of the model might be very limited, which is why as input we use a part of another speech given by Obama, instead of completely random text. Since we have not trained the model for that long, we will also use a lower temperature to get the model to generate more accurate, if less diverse, results. Try running the code a few times with different temperature settings to generate different results.
prediction_length = 500

seed_from_text = "america has shown that progress is possible. last year, income gains were larger for households at t"
seed_original = "and as people around the world began to hear the tale of the lowly colonists who overthrew an empire"

for seed in [seed_from_text, seed_original]:
    generate(seed, prediction_length, .50)
    print("-" * 20)
notebooks/week-6/02-using a pre-trained model with Keras.ipynb
kkkddder/dmc
apache-2.0
Data input We need some data to get started. Luckily, we have jQAssistant at hand. It's integrated into the build process of the Spring PetClinic repository above and scanned the Git repository information automatically with every executed build. So let's query our almighty Neo4j graph database that holds all the structural data about the software project.
graph = py2neo.Graph()

query = """
MATCH (author:Author)-[:COMMITED]->(commit:Commit)
RETURN author.name as name, author.email as email
"""
result = graph.data(query)

# just show the first three entries
result[0:3]
notebooks/Committer Distribution.ipynb
feststelltaste/software-analytics
gpl-3.0
The query returns all commits with their authors and the author's email addresses. We get some nice, tabular data that we put into Pandas's DataFrame.
commits = pd.DataFrame(result)
commits.head()
notebooks/Committer Distribution.ipynb
feststelltaste/software-analytics
gpl-3.0
Familiarization First, I like to check the raw data a little bit. I often do this by first having a look at the data types the data source is returning. It's a good starting point to check that Pandas recognizes the data types accordingly. You can also use this approach to check for skewed data columns very quickly (especially necessary when reading CSV or Excel files): If there should be a column with a specific data type (e. g. because the documentation of the dataset said so), the data type should be recognized automatically as specified. If not, there is a high probability that the imported data source isn't correct (and we have a data quality problem).
commits.dtypes
notebooks/Committer Distribution.ipynb
feststelltaste/software-analytics
gpl-3.0
That's OK for our simple scenario. The two columns with texts are objects – nothing spectacular. In the next step, I always like to get a "feeling" of all the data. Primarily, I want to get a quick impression of the data quality again. It could always be that there is "dirty data" in the dataset or that there are outliers that would screw up the analysis. With such a small amount of data we have, we can simply list all unique values that occur in the columns. I just list the top 10's for both columns.
commits['name'].value_counts()[0:10]
notebooks/Committer Distribution.ipynb
feststelltaste/software-analytics
gpl-3.0
OK, at first glance, something seems awkward. Let's have a look at the email addresses.
commits['email'].value_counts()[0:10]
notebooks/Committer Distribution.ipynb
feststelltaste/software-analytics
gpl-3.0
OK, the bad feeling is strengthening. We might have a problem with multiple authors having multiple email addresses. Let me show you the problem by representing it more clearly. Interlude - begin In the interlude section, I take you on a short, mostly undocumented excursion with probably messy code (don't do this at home!) to make a point. If you like, you can skip that section. Goal: Create a diagram that shows the relationship between the authors and the email addresses. (Note to myself: It's probably better to solve that directly in Neo4j the next time ;-) ) I need a unique index for each name and I have to calculate the number of different email addresses per author.
grouped_by_authors = commits[['name', 'email']]\
    .drop_duplicates().groupby('name').count()\
    .sort_values('email', ascending=False).reset_index().reset_index()
grouped_by_authors.head()
notebooks/Committer Distribution.ipynb
feststelltaste/software-analytics
gpl-3.0
Same procedure for the email addresses.
grouped_by_email = commits[['name', 'email']]\
    .drop_duplicates().groupby('email').count()\
    .sort_values('name', ascending=False).reset_index().reset_index()
grouped_by_email.head()
notebooks/Committer Distribution.ipynb
feststelltaste/software-analytics
gpl-3.0
Then I merge the two DataFrames with a subset of the original data. I get each author and email index as well as the number of occurrences for authors and emails, respectively. I only need the ones that occur multiple times, so I check for > 1.
plot_data = commits.drop_duplicates()\
    .merge(grouped_by_authors, left_on='name', right_on="name",
           suffixes=["", "_from_authors"], how="outer")\
    .merge(grouped_by_email, left_on='email', right_on="email",
           suffixes=["", "_from_emails"], how="outer")

plot_data = plot_data[
    (plot_data['email_from_authors'] > 1) |
    (plot_data['name_from_emails'] > 1)]

plot_data
notebooks/Committer Distribution.ipynb
feststelltaste/software-analytics
gpl-3.0
I just add some nicely normalized indexes for plotting (note: there might be a method that's easier)
from sklearn import preprocessing

le = preprocessing.LabelEncoder()

le.fit(plot_data['index'])
plot_data['normalized_index_name'] = le.transform(plot_data['index']) * 10

le.fit(plot_data['index_from_emails'])
plot_data['normalized_index_email'] = le.transform(plot_data['index_from_emails']) * 10

plot_data.head()
notebooks/Committer Distribution.ipynb
feststelltaste/software-analytics
gpl-3.0
Plot an assignment table with the relationships between authors and email addresses.
fig1 = plt.figure(facecolor='white')
ax1 = plt.axes(frameon=False)
ax1.set_frame_on(False)
ax1.get_xaxis().tick_bottom()
ax1.axes.get_yaxis().set_visible(False)
ax1.axes.get_xaxis().set_visible(False)

# simply plot all the data (imperfection: duplicates will be displayed in bold font)
for data in plot_data.iterrows():
    row = data[1]
    plt.text(0, row['normalized_index_name'], row['name'],
             fontsize=15, horizontalalignment="right")
    plt.text(1, row['normalized_index_email'], row['email'],
             fontsize=15, horizontalalignment="left")
    plt.plot([0, 1],
             [row['normalized_index_name'], row['normalized_index_email']],
             'grey', linewidth=1.0)
notebooks/Committer Distribution.ipynb
feststelltaste/software-analytics
gpl-3.0
Alright! Here we are! We see that multiple authors use multiple email addresses. And I see a pattern that could be used to get better data. Do you, too? Interlude - end If you skipped the interlude section: I just visualized / demonstrated that there are different email addresses per author (and vice versa). Some authors choose to use another email address and some choose a different name for committing to the repositories (and a few did both things). Data Wrangling The situation above is a typical case of a little data messiness and – to demotivate you – absolutely normal. So we have to do some data correction before we start our analysis. Otherwise, we would ignore reality completely and deliver wrong results. This could damage our reputation as a data analyst and is something we have to avoid at all costs! We want to fix the problem of multiple authors having multiple email addresses (but who are the same person). We need a mapping between them. Should we do it manually? That would be kind of crazy. As mentioned above, there is a pattern in the data to fix that. We simply use the name part of the email address as an identifier for a person. Let's give it a try by extracting the name part from the email address with a simple split.
commits['nickname'] = commits['email'].apply(lambda x: x.split("@")[0])
commits.head()
notebooks/Committer Distribution.ipynb
feststelltaste/software-analytics
gpl-3.0
That looks pretty good. Now we want to get only the person's real name instead of the nickname. We use a little heuristic to determine the "best fitting" real name and replace all the others. For this, we need to group by nickname and determine the real names.
def determine_real_name(names):
    real_name = ""
    for name in names:
        # assumption: if there is a whitespace in the name,
        # someone intended it to be a first name and surname
        if " " in name:
            return name
        # else take the longest name
        elif len(name) > len(real_name):
            real_name = name
    return real_name

commits_grouped = commits[['nickname', 'name']].groupby(['nickname']).agg(determine_real_name)
commits_grouped = commits_grouped.rename(columns={'name': 'real_name'})
commits_grouped.head()
notebooks/Committer Distribution.ipynb
feststelltaste/software-analytics
gpl-3.0
That looks great! Now we switch back to our previous DataFrame by joining in the new information.
commits = commits.merge(commits_grouped, left_on='nickname', right_index=True)

# drop duplicates for better displaying
commits.drop_duplicates().head()
notebooks/Committer Distribution.ipynb
feststelltaste/software-analytics
gpl-3.0
That should be enough data cleansing for today! Analysis Now that we have valid data, we can produce some new insights. Top 10 committers Easy tasks first: We simply produce a table with the Top 10 committers. We group by the real name and count every commit by using a subset (only the <tt>email</tt> column) of the DataFrame to get only one column returned. We rename the returned column to <tt>commits</tt> for display purposes (it would otherwise be <tt>email</tt>). Then we just list the top 10 entries after sorting appropriately.
committers = commits.groupby('real_name')[['email']]\
    .count().rename(columns={'email': 'commits'})\
    .sort_values('commits', ascending=False)
committers.head(10)
notebooks/Committer Distribution.ipynb
feststelltaste/software-analytics
gpl-3.0
Committer Distribution Next, we create a pie chart to get a good impression of the committers.
committers['commits'].plot(kind='pie')
notebooks/Committer Distribution.ipynb
feststelltaste/software-analytics
gpl-3.0
Uhh...that looks ugly and kind of weird. Let's first try to fix the mess on the right side that shows all authors with minor changes by summing up their number of commits. We will use a threshold value that makes sense with our data (e. g. the committers that contribute more than 3/4 to the code) to identify them. A nice start is the description of the current data set.
committers_description = committers.describe()
committers_description
notebooks/Committer Distribution.ipynb
feststelltaste/software-analytics
gpl-3.0
OK, we want the 3/4 main contributors...
threshold = committers_description.loc['75%'].values[0]
threshold
notebooks/Committer Distribution.ipynb
feststelltaste/software-analytics
gpl-3.0
...that is > 75% of the commits of all contributors.
minor_committers = committers[committers['commits'] <= threshold]
minor_committers.head()
notebooks/Committer Distribution.ipynb
feststelltaste/software-analytics
gpl-3.0
These are the entries we want to combine into our new "Others" section. But we don't want to lose the number of changes, so we store them for later usage.
others_number_of_changes = minor_committers.sum()
others_number_of_changes
notebooks/Committer Distribution.ipynb
feststelltaste/software-analytics
gpl-3.0
Now we are deleting all authors that are in the <tt>minor_committers</tt> DataFrame. To avoid checking against the threshold value from above again, we reuse the already calculated DataFrame.
main_committers = committers[~committers.isin(minor_committers)]
main_committers.tail()
notebooks/Committer Distribution.ipynb
feststelltaste/software-analytics
gpl-3.0
This gives us missing values in the <tt>commits</tt> column for the contributors with just a few commits, because those values were moved into the <tt>minor_committers</tt> DataFrame. We drop all NaN values to get only the major contributors.
main_committers = main_committers.dropna()
main_committers
notebooks/Committer Distribution.ipynb
feststelltaste/software-analytics
gpl-3.0
We add the "Others" row by appending to the DataFrame
main_committers.loc["Others"] = others_number_of_changes main_committers
notebooks/Committer Distribution.ipynb
feststelltaste/software-analytics
gpl-3.0
Almost there; we redraw with some styling and minor adjustments.
# some configuration for displaying nice diagrams
plt.style.use('fivethirtyeight')
plt.figure(facecolor='white')

ax = main_committers['commits'].plot(
    kind='pie', figsize=(6, 6), title="Main committers",
    autopct='%.2f', fontsize=12)

# get rid of the distracting label for the y-axis
ax.set_ylabel("")
notebooks/Committer Distribution.ipynb
feststelltaste/software-analytics
gpl-3.0
<hr> Signal Processing for Data Scientists Jed Ludlow UnitedHealth Group <hr> Get the code at https://github.com/jedludlow/sp-for-ds Overview Signal processing: Tools to separate the useful information from the nuisance information in a time series. We'll cover three areas today: Fourier analysis in the frequency domain, discrete-time sampling, and digital filtering. Fourier Analysis in the Frequency Domain Fourier Series A periodic signal $s(t)$ can be expressed as a (possibly infinite) sum of simple sinusoids. Usually we approximate it by truncating the series to $N$ terms as $$s_N(t) = \frac{A_0}{2} + \sum_{n=1}^N A_n \sin(\tfrac{2\pi nt}{P}+\phi_n) \quad \scriptstyle \text{for integer}\ N\ \ge\ 1$$ Discrete Fourier Transform If we have a short sample of a periodic signal, the discrete Fourier transform allows us to recover its sinusoidal frequency components. Numerically, the problem of computing the discrete Fourier transform has been studied for many years, and the result is the Fast Fourier Transform (FFT). In 1994, Gilbert Strang described the FFT as "the most important numerical algorithm of our lifetime" and it was included in the Top 10 Algorithms of the 20th Century by the IEEE journal Computing in Science & Engineering. (source: https://en.wikipedia.org/wiki/Fast_Fourier_transform) In Python, this transform is available in the numpy.fft package.
def fft_scaled(x, axis=-1, samp_freq=1.0, remove_mean=True): """ Fully scaled and folded FFT with physical amplitudes preserved. Arguments --------- x: numpy n-d array array of signal information. axis: int array axis along which to compute the FFT. samp_freq: float signal sampling frequency in Hz. remove_mean: boolean remove the mean of each signal prior to taking the FFT so the DC component of the FFT will be zero. Returns -------- (fft_x, freq) where *fft_x* is the full complex FFT, scaled and folded so that only positive frequencies remain, and *freq* is a matching array of positive frequencies. Examples -------- A common use case would present the signals in a 2-D array where each row contains a signal trace. Columns would then represent time sample intervals of the signals. The rows of the returned *fft_x* array would contain the FFT of each signal, and each column would correspond to an entry in the *freq* array. """ # Get length of the requested array axis. n = x.shape[axis] # Use truncating division here since for odd n we want to # round down to the next closest integer. See docs for numpy fft. half_n = n // 2 # Remove the mean if requested if remove_mean: ind = [slice(None)] * x.ndim ind[axis] = np.newaxis x = x - x.mean(axis)[ind] # Compute fft, scale, and fold negative frequencies into positive. def scale_and_fold(x): # Scale by length of original signal x = (1.0 / n) * x[:half_n + 1] # Fold negative frequency x[1:] *= 2.0 return x fft_x = np.fft.fft(x, axis=axis) fft_x = np.apply_along_axis(scale_and_fold, axis, fft_x) # Matching frequency array. The abs takes care of the case where n # is even, and the Nyquist frequency is usually negative. freq = np.fft.fftfreq(n, 1.0 / samp_freq) freq = np.abs(freq[:half_n + 1]) return (fft_x, freq)
sp_for_ds.ipynb
jedludlow/sp-for-ds
mit
1 Hz Square Wave
f_s = 1000.0  # Sampling frequency in Hz
time = np.arange(0.0, 100.0 + 1.0/f_s, 1.0/f_s)
square_wave = signal.square(2 * np.pi * time)

plt.figure(figsize=(9, 5))
plt.plot(time, square_wave), plt.xlabel('time (s)'), plt.ylabel('x(t)'), plt.title('1 Hz Square Wave')
plt.xlim((0, 3)), plt.ylim((-1.1, 1.1));
sp_for_ds.ipynb
jedludlow/sp-for-ds
mit
Fourier Analysis of Square Wave
fft_x, freq_sq = fft_scaled(square_wave, samp_freq=f_s)
f_max = 24.0

plt.figure(figsize=(9, 5)), plt.plot(freq_sq, np.abs(fft_x))
plt.xticks(np.arange(0.0, f_max + 1.0, 1.0))
plt.xlim((0, f_max)), plt.xlabel('Frequency (Hz)'), plt.ylabel('Amplitude')
plt.title('Frequency Spectrum of Square Wave');
sp_for_ds.ipynb
jedludlow/sp-for-ds
mit
Approximate 1 Hz Square Wave Let's synthesize an approximation to a square wave by summing a reduced number of sinusoidal components.
# Set frequency components and amplitudes.
# Square waves contain all the odd harmonics
# of the fundamental frequency.
f_components = [1.0, 3.0, 5.0, 7.0, 9.0, 11.0]
# f_components = [1.0, 3.0, 5.0, 7.0, 9.0, 11.0,
#                 13.0, 15.0, 17.0, 19.0, 21.0]
amplitudes = [1.28 / f for f in f_components]

# Generate the square wave
s_t = np.zeros_like(time)
for f, amp in zip(f_components, amplitudes):
    s_t += amp * np.sin(2 * np.pi * f * time)

plt.figure(figsize=(9, 5)), plt.plot(time, s_t)
plt.xlabel('time (s)'), plt.ylabel('$s(t)$'), plt.xlim((0, 3))
plt.title('Approximate Square Wave');
sp_for_ds.ipynb
jedludlow/sp-for-ds
mit
Fourier Analysis of Approximate Square Wave
freq_spec, freq = fft_scaled(s_t, samp_freq=f_s)
f_max = 12.0

plt.figure(figsize=(9, 5)), plt.plot(freq, np.abs(freq_spec))
plt.xticks(np.arange(0.0, f_max + 1.0, 1.0))
plt.xlim((0, f_max)), plt.xlabel('Frequency (Hz)'), plt.ylabel('Amplitude')
plt.title('Frequency Spectrum of Approximate Square Wave');
sp_for_ds.ipynb
jedludlow/sp-for-ds
mit
Discrete-Time Sampling Nyquist-Shannon Sampling Theorem Consider a continuous signal $x(t)$ with Fourier transform $X(f)$. Assume: A sampled version of the signal is constructed as $$x_k = x(kT), k \in \mathbb{I}$$ $x(t)$ is band-limited such that $$X(f) = 0 \ \forall \ |f| > B$$ <center><img src="images/Bandlimited.svg" width="300"></center> Then $x(t)$ is uniquely recoverable from $x_k$ if $$\frac{1}{T} \triangleq f_s > 2B$$ This critical frequency shows up so frequently that it has its own name, the Nyquist frequency. $$f_N = \frac{f_s}{2}$$ A note about frequency: Most theoretical signal processing work is done using circular frequency $\omega$ in units of rad/sec. This is done to eliminate the factor of $2 \pi$ which shows up in many equations when true ordinary frequency $f$ is used. That said, nearly all practical signal processing is done with ordinary frequency. The relationship between the two frequencies is $$ \omega = 2 \pi f$$ <center><img src="images/ideal_sampler.png" width="800"></center> image credit: MIT OpenCourseWare, Signals and Systems, Oppenheim Practical Realities For complete recoverability, Nyquist requires an ideal sampler and an ideal interpolator. In practice, these are not physically realizable. $$x(t) = \mathrm{IdealInterpolator}_T(\mathrm{IdealSampler}_T(x(t)))$$ Real signals are never perfectly band-limited. There are always some noise components out past the Nyquist sampling rate. You will often be given sampled data but have very little insight into the system that generated the data. In that situation, you really have no guarantees that any estimates of frequency content for the underlying continuous time process are correct. You may be observing alias frequencies. A frequency $f_a$ is an alias of $f$ if $$ f_a = |nf_s - f|, n \in \mathbb{I}$$ Aliasing When your signal contains frequency components that are above the Nyquist frequency, then those high frequency components show up at lower frequencies. These lower frequencies are called aliases of the higher frequencies.
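As a quick numeric illustration of the alias formula (a sketch, not part of the original notebook): an 18 Hz sine sampled at 20 Hz appears at |1*20 - 18| = 2 Hz.

# A sketch: compute the apparent (alias) frequency of a tone for a
# given sampling rate, i.e. fold the frequency into [0, f_s / 2].
def alias_frequency(f, f_s):
    f_folded = f % f_s
    return min(f_folded, f_s - f_folded)

print(alias_frequency(18.0, 20.0))  # -> 2.0 Hz
print(alias_frequency(25.0, 20.0))  # -> 5.0 Hz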
def scale_and_fold(x): n = len(x) half_n = n // 2 # Scale by length of original signal x = (1.0 / n) * x[:half_n + 1] # Fold negative frequency x[1:] *= 2.0 return x def aliasing_demo(): f_c = 1000.0 # Hz f_s = 20.0 # Hz f_end = 25.0 # Hz f = 1.0 # Hz time_c = np.arange(0.0, 10.0 + 1.0/f_c, 1/f_c) time_s = np.arange(0.0, 10.0 + 1.0/f_s, 1/f_s) freq_c = np.fft.fftfreq(len(time_c), 1.0 / f_c) freq_c = np.abs(freq_c[:len(time_c) // 2 + 1]) freq_s = np.fft.fftfreq(len(time_s), 1.0 / f_s) freq_s = np.abs(freq_s[:len(time_s) // 2 + 1]) f=widgets.FloatSlider(value=1.0, min=0.0, max=f_end, step=0.1, description='Frequency (Hz)') phi = widgets.FloatSlider(value=0.0, min=0.0, max=2.0*np.pi, step=0.1, description="Phase (rad)") x_c = np.sin(2 * np.pi * f.value * time_c + phi.value) x_s = np.sin(2 * np.pi * f.value * time_s + phi.value) fig, ax = plt.subplots(2, 1, figsize=(9, 6)) fig.subplots_adjust(hspace=0.3) line1 = ax[0].plot(time_c, x_c, alpha=0.9, lw=2.0)[0] line2 = ax[0].plot(time_s, x_s, marker='o', color='r', ls=':')[0] ax[0].set_xlabel("Time (s)") ax[0].set_ylabel("$x$") ax[0].set_title('Sine Wave Sampled at {} Hz'.format(int(f_s))) ax[0].set_ylim((-1, 1)) ax[0].set_xlim((0, 1)) window_c = 2 * np.hanning(len(time_c)) window_s = 2 * np.hanning(len(time_s)) fft_c = scale_and_fold(np.fft.fft(x_c * window_c)) fft_s = scale_and_fold(np.fft.fft(x_s * window_s)) line3 = ax[1].plot(freq_c, np.abs(fft_c), alpha=0.5, lw=2)[0] line4 = ax[1].plot(freq_s, np.abs(fft_s), 'r:', lw=2)[0] line5 = ax[1].axvline(f_s / 2.0, color='0.75', ls='--') plt.axvline(f_s, color='0.75') ax[1].text(1.02 * f_s / 2, 0.93, '$f_N$', {'size':14}) ax[1].text(1.01 * f_s, 0.93, '$f_s$', {'size':14}) ax[1].set_xlabel("Frequency (Hz)") ax[1].set_ylabel("$X(f)$") ax[1].set_xlim((0, f_end)) def on_slider(s): x_c = np.sin(2 * np.pi * f.value * time_c + phi.value) x_s = np.sin(2 * np.pi * f.value * time_s + phi.value) fft_c = scale_and_fold(np.fft.fft(x_c * window_c)) fft_s = scale_and_fold(np.fft.fft(x_s * window_s)) # line1.set_xdata(time_c) line1.set_ydata(x_c) # line2.set_xdata(time_s) line2.set_ydata(x_s) line3.set_ydata(np.abs(fft_c)) line4.set_ydata(np.abs(fft_s)) plt.draw() f.on_trait_change(on_slider) phi.on_trait_change(on_slider) display(f) display(phi) aliasing_demo()
sp_for_ds.ipynb
jedludlow/sp-for-ds
mit
Avoiding Aliasing If you have control over the sampling process, specify a sampling frequency that is at least twice the highest frequency component of your signal. If you really want to preserve high fidelity, specify a sampling frequency that is ten times the highest frequency component in your signal. Digital Filtering Reshaping the Signal So far we've discussed analysis techniques for characterizing the frequency content of a signal. Now we discuss how to modify the frequency content of the signal to emphasize some of the information in it while removing other aspects. We generally accomplish this using digital filters. Moving Average as a Digital Filter Let's express a moving average of five in the language of digital filtering. The output $y$ at the $k$-th sample is a function of the last five inputs $x$. $$y_k = \frac{x_k + x_{k-1} + x_{k-2} + x_{k-3} + x_{k-4}}{5}$$ More generally, this looks like $$y_k = b_0 x_k + b_1 x_{k-1} + b_2 x_{k-2} + b_3 x_{k-3} + b_4 x_{k-4}$$ where all the $b_i = 0.2$. But they don't have to be equal. We could select each of the $b_i$ independently to be whatever we want. Then the filter looks like a weighted average. Using Previous Outputs Even more generally, the current output can be a function of previous outputs as well as inputs if we desire. $$y_k = \frac{1}{a_0} \left(b_0 x_k + b_1 x_{k-1} + b_2 x_{k-2} + b_3 x_{k-3} + b_4 x_{k-4} + \cdots - a_1 y_{k-1} - a_2 y_{k-2} - a_3 y_{k-3} - a_4 y_{k-4} - \cdots \right)$$ But how do we choose the $b_i$ and the $a_i$ to get a filter with a particular desired behavior? Standard Digital Filter Designs Luckily, standard filter designs already exist to create filters that have certain response characteristics, either in the time domain or the frequency domain. Butterworth Chebyshev Elliptic Bessel When in doubt, use the Butterworth filter since it's a great general purpose filter and is easier to specify. All of these filter designs are available in scipy.signal.
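To make the connection concrete, here is a small sketch (not from the original notebook) that applies the five-point moving average as a digital filter with scipy.signal.lfilter, i.e. b = [0.2]*5 and a = [1.0]:

import numpy as np
from scipy import signal

x = np.random.randn(100)  # some noisy input signal

# Five-point moving average expressed as an FIR filter:
# y_k = 0.2*x_k + 0.2*x_{k-1} + ... + 0.2*x_{k-4}
b = np.ones(5) / 5.0      # numerator (input) coefficients
a = np.array([1.0])       # denominator (output) coefficients

y = signal.lfilter(b, a, x)

# The first few outputs are a startup transient while the filter state fills.
print(y[:10])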
def butter_filt(x, sampling_freq_hz, corner_freq_hz=4.0, lowpass=True, filtfilt=False): """ Smooth data with a low-pass or high-pass filter. Apply a 2nd order Butterworth filter. Note that if filtfilt is True the applied filter is effectively a 4th order Butterworth. Parameters ---------- x: 1D numpy array Array containing the signal to be smoothed. sampling_freq_hz: float Sampling frequency of the signal in Hz. corner_freq_hz: float Corner frequency of the Butterworth filter in Hz. lowpass: bool If True (default), apply a low-pass filter. If False, apply a high-pass filter. filtfilt: bool If True, apply the filter forward and then backward to elminate delay. If False (default), apply the filter only in the forward direction. Returns ------- filtered: 1D numpy array Array containing smoothed signal b, a: 1D numpy arrays Polynomial coefficients of the smoothing filter as returned from the Butterworth design function. """ nyquist = sampling_freq_hz / 2.0 f_c = np.array([corner_freq_hz, ], dtype=np.float64) # Hz # Normalize by Nyquist f_c /= nyquist # Second order Butterworth filter at corner frequency btype = 'low' if lowpass else 'high' b, a = signal.butter(2, f_c, btype=btype) # Apply the filter either in forward direction or forward-back. if filtfilt: filtered = signal.filtfilt(b, a, x) else: filtered = signal.lfilter(b, a, x) return (filtered, b, a) f_c_low = 2.0 # Corner frequency in Hz s_filtered, b, a = butter_filt(s_t, f_s, f_c_low) w, h = signal.freqz(b, a, 2048) w *= (f_s / (2 * np.pi)) fig, ax = plt.subplots(2, 1, sharex=True, figsize=(9, 5)) ax[0].plot(w, abs(h)), plt.xlim((0, 12)), ax[1].plot(w, np.angle(h, deg=True)) ax[0].set_ylabel('Attenuation Factor'), ax[1].set_ylabel('Phase Angle (deg)') ax[1].set_xlabel('Frequency (Hz)') ax[0].set_title('Filter Frequency Response - 2nd Order Butterworth Low-Pass'); plt.figure(figsize=(9, 5)) plt.plot(time, s_t, label='Original'), plt.plot(time, s_filtered, 'r-', label='Filtered') plt.xlim((0, 3)) plt.xlabel('Time (s)'), plt.ylabel('Signal'), plt.legend(), plt.title('Low-Pass, Forward Filtering'); s_filtered, b, a = butter_filt(s_t, f_s, f_c_low, filtfilt=True) plt.figure(figsize=(9, 5)) plt.plot(time, s_t, label='Original'), plt.plot(time, s_filtered, 'r-', label='Filtered') plt.xlim((0, 3)) plt.xlabel('Time (s)'), plt.ylabel('Signal'), plt.legend(), plt.title('Low-Pass, Forward-Backward Filtering'); f_c_high = 6.0 # Corner frequency in Hz s_filtered, b, a = butter_filt(s_t, f_s, f_c_high, lowpass=False, filtfilt=True) w, h = signal.freqz(b, a, 2048) w *= (f_s / (2 * np.pi)) fig, ax = plt.subplots(2, 1, sharex=True, figsize=(9, 5)) ax[0].plot(w, abs(h)), plt.xlim((0, 12)), ax[1].plot(w[1:], np.angle(h, deg=True)[1:]) ax[0].set_ylabel('Attenuation Factor'), ax[1].set_ylabel('Phase Angle (deg)') ax[1].set_xlabel('Frequency (Hz)') ax[0].set_title('Filter Frequency Response - 2nd Order Butterworth High-Pass'); s_filtered, b, a = butter_filt(s_t, f_s, f_c_high, lowpass=False, filtfilt=True) plt.figure(figsize=(9, 5)) plt.plot(time, s_t, label='Original'), plt.plot(time, s_filtered, 'r-', label='Filtered') plt.xlim((0, 3)) plt.xlabel('Time (s)'), plt.ylabel('Signal'), plt.legend(), plt.title('High-Pass, Forward-Backward Filtering');
sp_for_ds.ipynb
jedludlow/sp-for-ds
mit
Comparing surrogate models Tim Head, July 2016. Reformatted by Holger Nahrstaedt 2020 .. currentmodule:: skopt Bayesian optimization or sequential model-based optimization uses a surrogate model to model the expensive-to-evaluate function func. There are several choices for what kind of surrogate model to use. This notebook compares the performance of Gaussian processes, extra trees, and random forests as surrogate models. A purely random optimization strategy is also used as a baseline.
print(__doc__)

import numpy as np
np.random.seed(123)

import matplotlib.pyplot as plt
0.7/notebooks/auto_examples/strategy-comparison.ipynb
scikit-optimize/scikit-optimize.github.io
bsd-3-clause
Toy model We will use the :class:benchmarks.branin function as toy model for the expensive function. In a real world application this function would be unknown and expensive to evaluate.
from skopt.benchmarks import branin as _branin def branin(x, noise_level=0.): return _branin(x) + noise_level * np.random.randn() from matplotlib.colors import LogNorm def plot_branin(): fig, ax = plt.subplots() x1_values = np.linspace(-5, 10, 100) x2_values = np.linspace(0, 15, 100) x_ax, y_ax = np.meshgrid(x1_values, x2_values) vals = np.c_[x_ax.ravel(), y_ax.ravel()] fx = np.reshape([branin(val) for val in vals], (100, 100)) cm = ax.pcolormesh(x_ax, y_ax, fx, norm=LogNorm(vmin=fx.min(), vmax=fx.max())) minima = np.array([[-np.pi, 12.275], [+np.pi, 2.275], [9.42478, 2.475]]) ax.plot(minima[:, 0], minima[:, 1], "r.", markersize=14, lw=0, label="Minima") cb = fig.colorbar(cm) cb.set_label("f(x)") ax.legend(loc="best", numpoints=1) ax.set_xlabel("X1") ax.set_xlim([-5, 10]) ax.set_ylabel("X2") ax.set_ylim([0, 15]) plot_branin()
0.7/notebooks/auto_examples/strategy-comparison.ipynb
scikit-optimize/scikit-optimize.github.io
bsd-3-clause
This shows the value of the two-dimensional branin function and the three minima. Objective The objective of this example is to find one of these minima in as few iterations as possible. One iteration is defined as one call to the :class:benchmarks.branin function. We will evaluate each model several times using a different seed for the random number generator. Then compare the average performance of these models. This makes the comparison more robust against models that get "lucky".
from functools import partial
from skopt import gp_minimize, forest_minimize, dummy_minimize

func = partial(branin, noise_level=2.0)
bounds = [(-5.0, 10.0), (0.0, 15.0)]
n_calls = 60

def run(minimizer, n_iter=5):
    return [minimizer(func, bounds, n_calls=n_calls, random_state=n)
            for n in range(n_iter)]

# Random search
dummy_res = run(dummy_minimize)

# Gaussian processes
gp_res = run(gp_minimize)

# Random forest
rf_res = run(partial(forest_minimize, base_estimator="RF"))

# Extra trees
et_res = run(partial(forest_minimize, base_estimator="ET"))
0.7/notebooks/auto_examples/strategy-comparison.ipynb
scikit-optimize/scikit-optimize.github.io
bsd-3-clause
Note that this can take a few minutes.
from skopt.plots import plot_convergence

plot = plot_convergence(("dummy_minimize", dummy_res),
                        ("gp_minimize", gp_res),
                        ("forest_minimize('rf')", rf_res),
                        ("forest_minimize('et')", et_res),
                        true_minimum=0.397887, yscale="log")

plot.legend(loc="best", prop={'size': 6}, numpoints=1)
0.7/notebooks/auto_examples/strategy-comparison.ipynb
scikit-optimize/scikit-optimize.github.io
bsd-3-clause
Now, let’s import Marvin:
import marvin
docs/sphinx/jupyter/first-steps.ipynb
sdss/marvin
bsd-3-clause
Let's see what release we're using. Releases can be either MPLs (e.g. MPL-5) or DRs (e.g. DR13), however DRs are currently disabled in Marvin.
marvin.config.release
docs/sphinx/jupyter/first-steps.ipynb
sdss/marvin
bsd-3-clause
On initial import, Marvin will set the default data release to use the latest MPL available, currently MPL-6. You can change the version of MaNGA data using the Marvin Config.
from marvin import config
config.setRelease('MPL-5')

print('MPL:', config.release)
docs/sphinx/jupyter/first-steps.ipynb
sdss/marvin
bsd-3-clause
But let's work with MPL-6:
config.setRelease('MPL-6')

# check designated version
config.release
docs/sphinx/jupyter/first-steps.ipynb
sdss/marvin
bsd-3-clause
My First Cube Now let’s play with a Marvin Cube! Import the Marvin-Tools Cube class:
from marvin.tools.cube import Cube
docs/sphinx/jupyter/first-steps.ipynb
sdss/marvin
bsd-3-clause
Let's load a cube from a local file. Start by specifying the full path and name of the file, such as: /Users/Brian/Work/Manga/redux/v2_3_1/8485/stack/manga-8485-1901-LOGCUBE.fits.gz EDIT Next Cell
#----- EDIT THIS CELL -----#
# filename = '/Users/Brian/Work/Manga/redux/v1_5_1/8485/stack/manga-8485-1901-LOGCUBE.fits.gz'
filename = 'path/to/manga/cube/manga-8485-1901-LOGCUBE.fits.gz'
filename = '/Users/andrews/manga/spectro/redux/v2_3_1/8485/stack/manga-8485-1901-LOGCUBE.fits.gz'
filename = '/Users/Brian/Work/Manga/redux/v2_3_1/8485/stack/manga-8485-1901-LOGCUBE.fits.gz'
docs/sphinx/jupyter/first-steps.ipynb
sdss/marvin
bsd-3-clause
Create a Cube object:
cc = Cube(filename=filename)
docs/sphinx/jupyter/first-steps.ipynb
sdss/marvin
bsd-3-clause
Now we have a Cube object:
print(cc)
docs/sphinx/jupyter/first-steps.ipynb
sdss/marvin
bsd-3-clause
How about we look at some meta-data
cc.ra, cc.dec, cc.header['SRVYMODE']
docs/sphinx/jupyter/first-steps.ipynb
sdss/marvin
bsd-3-clause
...and the quality and target bits
cc.target_flags
cc.quality_flag
docs/sphinx/jupyter/first-steps.ipynb
sdss/marvin
bsd-3-clause
Get a Spaxel Cubes have several functions currently available: getSpaxel, getMaps, getAperture. Let's look at spaxels. We can retrieve spaxels from a cube easily via indexing. In this manner, spaxels are 0-indexed from the lower left corner. Let's get spaxel (x=10, y=10):
spax = cc[10, 10]

# print the spaxel to see the x,y coord from the lower left, and the coords relative to the cube center, x_cen/y_cen
spax
docs/sphinx/jupyter/first-steps.ipynb
sdss/marvin
bsd-3-clause
Spaxels have a spectrum associated with them, containing the wavelengths and fluxes of each spectral channel: Alternatively, grab a spaxel with getSpaxel. Use the xyorig keyword to set the coordinate origin point: 'lower' or 'center'. The default is "center"
# let's grab the central spaxel
spax = cc.getSpaxel(x=0, y=0)
spax

spax.flux.wavelength
spax.flux
docs/sphinx/jupyter/first-steps.ipynb
sdss/marvin
bsd-3-clause