1b. Book prices

Calculate book prices for the following scenarios: Suppose the price of a book is 24.95 EUR, but if the book is bought by a bookstore, they get a 30 percent discount (as opposed to customers buying from an online store). Shipping costs 3 EUR for the first copy and 75 cents for each additional copy. Shipping costs always apply (the books also have to be shipped to the bookstore). Write a program that can calculate the total costs for any number of copies for both bookstores and other customers. Use variables with clear names for your calculations and print the result using a full sentence. The program should use variables which indicate whether the customer is a bookstore or not and how many books are bought. You can simply assign values to the variables in your code or use the input function (both are accepted). Tip: Start small and add things in steps. For instance, start by calculating the price minus the discount. Then add the additional steps. Also, it helps to start by simply assuming a specific number of books (start with 1 and make sure it works with any other number). Do not forget to test your program!
# complete the code below
n_books =
customer_is_bookstore =
# your book price calculations here
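One possible solution sketch (illustrative only; the variable names and example values below are assumptions, not the required answer):

```python
# Sketch of one possible solution (example values are assumptions)
n_books = 60
customer_is_bookstore = True

book_price = 24.95
if customer_is_bookstore:
    book_price = book_price * 0.7  # 30 percent discount for bookstores

shipping = 3 + 0.75 * (n_books - 1)  # 3 EUR for the first copy, 75 cents per additional copy
total = n_books * book_price + shipping

print(f"The total cost for {n_books} books is {total:.2f} EUR.")
```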
Assignments/ASSIGNMENT-1.ipynb
evanmiltenburg/python-for-text-analysis
apache-2.0
1c. The modulus operator There is one operator (like the ones for multiplication and subtraction) that we did not discuss yet, namely the modulus operator %. Could you figure out by yourself what it does when you place it between two numbers (e.g. 113 % 9)? (PS: Try to figure it out by yourself first, by trying multiple combinations of numbers. If you do not manage, it's OK to use online resources...) You don't need this operator all that often, but when you do, it comes in really handy! Also, it is important to learn how to figure out what operators/methods/functions you have never seen before do by playing with code, googling or reading documentation.
# try out the modulus operator!
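For instance, experimenting with a few pairs of numbers (the values below are just arbitrary examples) and comparing with ordinary division quickly reveals the pattern:

```python
# try a few combinations and compare with integer division
print(113 % 9, 113 // 9)
print(10 % 3, 10 // 3)
print(8 % 2, 8 // 2)
```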
Assignments/ASSIGNMENT-1.ipynb
evanmiltenburg/python-for-text-analysis
apache-2.0
Help the cashier

Can you use the modulus operator you just learned about to solve the following task? Imagine you want to help cashiers to return change in a convenient way. This means you do not want to return hands full of small coins, but rather use bills and as few coins as possible. Write code that classifies a given amount of money into smaller monetary units. Given a specific amount of dollars, your program should report the maximum number of dollar bills, quarters, dimes, nickels, and pennies. Set the amount variable to 11.67. Your code should output a report listing the monetary equivalent in dollars, quarters, dimes, nickels, and pennies (one quarter is equivalent to 25 cents; one dime to 10 cents; one nickel to 5 cents; and a penny to 1 cent). Your program should report the maximum number of dollars, then the number of quarters, dimes, nickels, and pennies, in this order, to result in the minimum number of coins. Here are the steps in developing the program: Convert the amount (11.67) into cents (1167). First get the amount of cents that you would get after subtracting the maximum amount of dollars (100 cents) using the modulus operator (67 cents). Then subtract the remainder (67 cents) from the total amount of cents (1167 cents) and divide this by 100 to find the number of dollars. Use the modulus operator again to find the remainder after subtracting the maximum amount of quarters (17 cents). Subtract this remainder (17 cents) from the previous remainder (67 cents) and divide this by 25 to find the number of quarters. Follow the same steps for the dimes, nickels and pennies. Display the result for your cashier (the amount of dollars, quarters, dimes, nickels and pennies that (s)he would have to give back)!
# cashier code
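A sketch of the computation following the steps above (the amount is converted to integer cents first, which is an assumption made to avoid floating-point surprises):

```python
amount = 11.67
cents = round(amount * 100)           # 1167

remainder = cents % 100               # 67
dollars = (cents - remainder) // 100  # 11
cents = remainder

remainder = cents % 25                # 17
quarters = (cents - remainder) // 25  # 2
cents = remainder

remainder = cents % 10                # 7
dimes = (cents - remainder) // 10     # 1
cents = remainder

remainder = cents % 5                 # 2
nickels = (cents - remainder) // 5    # 1
pennies = remainder                   # 2

print(f"Give back {dollars} dollars, {quarters} quarters, {dimes} dimes, "
      f"{nickels} nickels and {pennies} pennies.")
```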
Assignments/ASSIGNMENT-1.ipynb
evanmiltenburg/python-for-text-analysis
apache-2.0
Exercise 2: Printing and user input

2a. Difference between "," and "+"

What is the difference between using + and , in a print statement? Illustrate by using both in each of the following: calling the print() function with multiple strings; printing combinations of strings and integers; concatenating multiple strings and assigning the result to a single variable; concatenating strings and integers and assigning the result to a single variable.

2b. Small Talk

Write a program to have a little conversation with someone. First ask them for their name and their age, and then say something about your own age compared to theirs. Your code should result in a conversation following this example: Hello there! What is your name? -- Emily. Nice to meet you, Emily. How old are you? -- 23 I'm 25 years old, so I'm 2 years older than you. Also account for situations where the other person is older or the same age. You will need to use if-else-statements!
name = input("Hello there! What is your name? ") # finish this code
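One way to finish this (a sketch; the exact wording of the replies and the assumed age of 25 are illustrative choices):

```python
name = input("Hello there! What is your name? ")
print(f"Nice to meet you, {name}.")
their_age = int(input("How old are you? "))

my_age = 25  # assumed age, as in the example conversation
if my_age > their_age:
    print(f"I'm {my_age} years old, so I'm {my_age - their_age} years older than you.")
elif my_age < their_age:
    print(f"I'm {my_age} years old, so I'm {their_age - my_age} years younger than you.")
else:
    print(f"I'm {my_age} years old too, so we are the same age.")
```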
Assignments/ASSIGNMENT-1.ipynb
evanmiltenburg/python-for-text-analysis
apache-2.0
Exercise 3: String Art

3a. Drawing figures

We start with some repetition of the theory about strings:

| Topic | Explanation |
|-----------|--------|
| quotes | A string is delimited by single quotes ('...') or double quotes ("...") |
| special characters | Certain special characters can be used, such as "\n" (for a newline) and "\t" (for a tab) |
| printing special characters | To print the special characters literally, they must be preceded by a backslash (\) |
| continue on next line | A backslash (\) at the end of a line is used to continue a string on the next line |
| multi-line strings | A multi-line print statement should be enclosed by three double or three single quotes ("""...""" or '''...''') |

Please run the code snippet below and observe what happens:
print('hello\n')
print('To print a newline use \\n')
print('She said: \'hello\'')
print('\tThis is indented')
print('This is a very, very, very, very, very, very \
long print statement')
print('''
This is a multi-line print statement
First line
Second line
''')
Assignments/ASSIGNMENT-1.ipynb
evanmiltenburg/python-for-text-analysis
apache-2.0
Now write a Python script that prints the following figure using only one line of code! (so don't use triple quotes)

| | |
@ @
 u
|"""|
# your code here
Assignments/ASSIGNMENT-1.ipynb
evanmiltenburg/python-for-text-analysis
apache-2.0
3b. Colors

We start again with some repetition of the theory:

| Topic | Explanation |
|-----------|--------|
| a = b + c | if b and c are strings: concatenate b and c to form a new string a |
| a = b * c | if b is an integer and c is a string: c is repeated b times to form a new string a |
| a[0] | the first character of string a |
| len(a) | the number of characters in string a |
| min(a) | the smallest element in string a (alphabetically first) |
| max(a) | the largest element in string a (alphabetically last) |

Please run the code snippet below and observe what happens:
b = 'the'
c = 'cat'
d = ' is on the mat'
a = b + ' ' + c + d
print(a)
a = b * 5
print(a)
print('The first character of', c, 'is', c[0])
print('The word c has', len(c), 'characters')
Assignments/ASSIGNMENT-1.ipynb
evanmiltenburg/python-for-text-analysis
apache-2.0
Now write a program that asks users for their favorite color. Create the following output (assuming "red" is the chosen color). Use "+" and "*". It should work with any color name though. red red red red red red red red red red red red red red red red red red red red red red red red
color = input('what is your favorite color? ')
print(color)
print(color)
print(color)
print(color)
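The expected pattern above has lost its line breaks here, so the exact figure is unclear; purely as an illustration of combining "+" and "*" on the chosen color, a sketch might look like:

```python
color = input('what is your favorite color? ')
# '*' repeats a string, '+' concatenates strings
print(color * 3)           # e.g. redredred
print((color + ' ') * 3)   # e.g. red red red
print(color + '!' * 3)     # e.g. red!!!
```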
Assignments/ASSIGNMENT-1.ipynb
evanmiltenburg/python-for-text-analysis
apache-2.0
Exercise 4: String methods Remember that you can see all methods of the class str by using dir(). You can ignore all methods that start with one or two underscores.
dir(str)
Assignments/ASSIGNMENT-1.ipynb
evanmiltenburg/python-for-text-analysis
apache-2.0
To see the explanation for a method of this class, you can use help(str.method). For example:
help(str.upper)
Assignments/ASSIGNMENT-1.ipynb
evanmiltenburg/python-for-text-analysis
apache-2.0
4a. Counting vowels Count how many of each vowel (a,e,i,o,u) there are in the text string in the next cell. Print the count for each vowel with a single formatted string. Remember that vowels can be both lower and uppercase.
text = """But I must explain to you how all this mistaken idea of denouncing pleasure and praising pain was born and I will give you a complete account of the system, and expound the actual teachings of the great explorer of the truth, the master-builder of human happiness. No one rejects, dislikes, or avoids pleasure itself, because it is pleasure, but because those who do not know how to pursue pleasure rationally encounter consequences that are extremely painful. Nor again is there anyone who loves or pursues or desires to obtain pain of itself, because it is pain, but because occasionally circumstances occur in which toil and pain can procure him some great pleasure. To take a trivial example, which of us ever undertakes laborious physical exercise, except to obtain some advantage from it? But who has any right to find fault with a man who chooses to enjoy a pleasure that has no annoying consequences, or one who avoids a pain that produces no resultant pleasure? On the other hand, we denounce with righteous indignation and dislike men who are so beguiled and demoralized by the charms of pleasure of the moment, so blinded by desire, that they cannot foresee the pain and trouble that are bound to ensue; and equal blame belongs to those who fail in their duty through weakness of will, which is the same as saying through shrinking from toil and pain.""" # your code here
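One way to do this with str.lower() and str.count() (a sketch; it reuses the text variable defined in the cell above and reports all counts in a single formatted string):

```python
lowered = text.lower()  # lowercase once so 'A' and 'a' are counted together
counts = {vowel: lowered.count(vowel) for vowel in 'aeiou'}
print(f"a: {counts['a']}, e: {counts['e']}, i: {counts['i']}, "
      f"o: {counts['o']}, u: {counts['u']}")
```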
Assignments/ASSIGNMENT-1.ipynb
evanmiltenburg/python-for-text-analysis
apache-2.0
4b. Printing the lexicon Have a good look at the internal representation of the string below. Use a combination of string methods (you will need at least 3 different ones and some will have to be used multiple times) in the correct order to remove punctuation and redundant whitespaces, and print each word in lowercase characters on a new line. The result should look like: the quick brown fox jumps etc.
text = """ The quick, brown fox jumps over a lazy dog.\tDJs flock by when MTV ax quiz prog. Junk MTV quiz graced by fox whelps.\tBawds jog, flick quartz, vex nymphs. Waltz, bad nymph, for quick jigs vex!\tFox nymphs grab quick-jived waltz. Brick quiz whangs jumpy veldt fox. """
print(text)
print()
print(repr(text))

text =  # your code here
print(text)
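A sketch of one possible chain of string methods (remove punctuation with replace(), lowercase, split on whitespace, then re-join with newlines; the exact set of punctuation to strip is an assumption):

```python
cleaned = text.replace(',', '').replace('.', '').replace('!', '').replace('-', ' ')
cleaned = cleaned.lower()
words = cleaned.split()        # split() with no argument also removes tabs and extra spaces
text = '\n'.join(words)
print(text)
```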
Assignments/ASSIGNMENT-1.ipynb
evanmiltenburg/python-for-text-analysis
apache-2.0
4c. Passwords Write a program that asks a user for a password and checks some simple requirements of a password. If necessary, print out the following warnings (use if-statements): Your password should contain at least 6 characters. Your password should contain no more than 12 characters. Your password only contains alphabetic characters! Please also use digits and/or special characters. Your password only contains digits! Please also use alphabetic and/or special characters. Your password should contain at least one special character. Your password contains only lowercase letters! Please also use uppercase letters. Your password contains only uppercase letters! Please also use lowercase letters.
# your code here
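A sketch using the string methods isalpha(), isdigit(), isalnum(), islower() and isupper() for the checks (what counts as a "special character" is kept loose here: anything that is not alphanumeric):

```python
password = input("Please enter a password: ")

if len(password) < 6:
    print("Your password should contain at least 6 characters.")
if len(password) > 12:
    print("Your password should contain no more than 12 characters.")
if password.isalpha():
    print("Your password only contains alphabetic characters! "
          "Please also use digits and/or special characters.")
if password.isdigit():
    print("Your password only contains digits! "
          "Please also use alphabetic and/or special characters.")
if password.isalnum():
    print("Your password should contain at least one special character.")
if password.islower():
    print("Your password contains only lowercase letters! Please also use uppercase letters.")
if password.isupper():
    print("Your password contains only uppercase letters! Please also use lowercase letters.")
```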
Assignments/ASSIGNMENT-1.ipynb
evanmiltenburg/python-for-text-analysis
apache-2.0
Exercise 5: Boolean Logic and Conditions 5a. Speeding Write code to solve the following scenario: You are driving a little too fast, and a police officer stops you. Write code to compute and print the result, encoded as a string: 'no ticket', 'small ticket', 'big ticket'. If speed is 60 or less, the result is 'no ticket'. If speed is between 61 and 80 inclusive, the result is 'small ticket'. If speed is 81 or more, the result is 'big ticket'. Unless it is your birthday -- on that day, your speed can be 5 higher in all cases.
# your code here
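For example (a sketch; speed and is_birthday are simply assigned here, but input() would also work):

```python
speed = 73
is_birthday = False

limit_adjustment = 5 if is_birthday else 0  # speed can be 5 higher on your birthday

if speed <= 60 + limit_adjustment:
    result = 'no ticket'
elif speed <= 80 + limit_adjustment:
    result = 'small ticket'
else:
    result = 'big ticket'

print(result)
```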
Assignments/ASSIGNMENT-1.ipynb
evanmiltenburg/python-for-text-analysis
apache-2.0
5b. Alarm clock

Write code to set your alarm clock! Given the day of the week and information about whether you are currently on vacation or not, your code should print the time you want to be woken up following these constraints: On weekdays, the alarm should be "7:00" and on the weekend it should be "10:00". Unless we are on vacation -- then on weekdays it should be "10:00" and on weekends it should be "off". Encode the week days as ints in the following way: 0=Sun, 1=Mon, 2=Tue, ...6=Sat. Encode the vacation information as a boolean. Your code should assign the correct time to a variable as a string (following this format: "7:00") and print it. Note: Encoding the days as integers helps you with defining conditions. You can check whether the week day is in a certain interval (instead of writing code for every single day).
# your code here
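A sketch (day and vacation are assigned directly here, using the 0=Sun ... 6=Sat encoding described above):

```python
day = 6          # 0=Sun, 1=Mon, ..., 6=Sat
vacation = False

is_weekend = day == 0 or day == 6

if vacation:
    alarm = "off" if is_weekend else "10:00"
else:
    alarm = "10:00" if is_weekend else "7:00"

print(alarm)
```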
Assignments/ASSIGNMENT-1.ipynb
evanmiltenburg/python-for-text-analysis
apache-2.0
5c. Parcel delivery

The required postage for an international parcel delivery service is calculated based on item weight and country of destination:

| Tariff zone | 0 - 2 kg | 2 - 5 kg | 5 - 10 kg | 10 - 20 kg | 20 - 30 kg |
|-------------|----------|----------|-----------|------------|------------|
| EUR 1 | € 13.00 | € 19.50 | € 25.00 | € 34.00 | € 45.00 |
| EUR 2 | € 18.50 | € 25.00 | € 31.00 | € 40.00 | € 55.00 |
| World | € 24.30 | € 34.30 | € 58.30 | € 105.30 | - |

Ask a user for the weight and zone. Use (nested) if-statements to find the required postage based on these variables. Assign the result to a variable postage and print the result using a full sentence: The price of sending a [...] kg parcel to the [...] zone is € [...].
# your code here
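A sketch of the lookup (the zone spellings and the handling of the boundary weights are assumptions based on the table above; "World" parcels above 20 kg have no tariff):

```python
weight = float(input("What is the weight of the parcel in kg? "))
zone = input("What is the destination zone (EUR 1, EUR 2 or World)? ")

if zone == "EUR 1":
    prices = [13.00, 19.50, 25.00, 34.00, 45.00]
elif zone == "EUR 2":
    prices = [18.50, 25.00, 31.00, 40.00, 55.00]
else:
    prices = [24.30, 34.30, 58.30, 105.30, None]  # no World tariff above 20 kg

if weight <= 2:
    postage = prices[0]
elif weight <= 5:
    postage = prices[1]
elif weight <= 10:
    postage = prices[2]
elif weight <= 20:
    postage = prices[3]
else:
    postage = prices[4]

print(f"The price of sending a {weight} kg parcel to the {zone} zone is € {postage}.")
```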
Assignments/ASSIGNMENT-1.ipynb
evanmiltenburg/python-for-text-analysis
apache-2.0
As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
%matplotlib inline

import phoebe
from phoebe import u  # units
import numpy as np
import matplotlib.pyplot as plt

logger = phoebe.logger()

b = phoebe.default_binary()
2.1/tutorials/spots.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Adding Spots Let's add one spot to each of our stars in the binary. A spot is a feature, and needs to be attached directly to a component upon creation. Providing a tag for 'feature' is entirely optional - if one is not provided it will be created automatically.
b.add_feature('spot', component='primary', feature='spot01')
2.1/tutorials/spots.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
As a shortcut, we can also call add_spot directly.
b.add_spot(component='secondary', feature='spot02')
2.1/tutorials/spots.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Relevant Parameters A spot is defined by the colatitude (where 0 is defined as the North (spin) Pole) and longitude (where 0 is defined as pointing towards the other star for a binary, or to the observer for a single star) of its center, its angular radius, and the ratio of temperature of the spot to the local intrinsic value.
print(b['spot01'])

b.set_value(qualifier='relteff', feature='spot01', value=0.9)
b.set_value(qualifier='radius', feature='spot01', value=30)
b.set_value(qualifier='colat', feature='spot01', value=45)
b.set_value(qualifier='long', feature='spot01', value=90)
2.1/tutorials/spots.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
To see the spot, add a mesh dataset and plot it.
b.add_dataset('mesh', times=[0, 0.25, 0.5, 0.75, 1.0], columns=['teffs'])
b.run_compute()
afig, mplfig = b.filter(component='primary', time=0.75).plot(fc='teffs', show=True)
2.1/tutorials/spots.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Spot Corotation The positions (colat, long) of a spot are defined at t0 (note: t0@system, not necessarily t0_perpass or t0_supconj). If the stars are not synchronous, then the spots will corotate with the star. To illustrate this, let's set the syncpar > 1 and plot the mesh at three different phases from above.
b.set_value('syncpar@primary', 1.5)
b.run_compute(irrad_method='none')
2.1/tutorials/spots.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
At time=t0=0, we can see that the spot is where it was defined: 45 degrees south of the north pole and at 90 degrees longitude (where a longitude of 0 is defined as pointing towards the companion star at t0).
print("t0 = {}".format(b.get_value('t0', context='system')))

afig, mplfig = b.plot(time=0, y='ws', fc='teffs', ec='None', show=True)
2.1/tutorials/spots.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
At a later time, the spot is still technically at the same coordinates, but a longitude of 0 no longer corresponds to pointing to the companion star. The coordinate system has rotated along with the asynchronous rotation of the star.
afig, mplfig = b.plot(time=0.25, y='ws', fc='teffs', facecmap='YlOrRd', ec='None', show=True)
afig, mplfig = b.plot(time=0.5, y='ws', fc='teffs', facecmap='YlOrRd', ec='None', show=True)
ax, artists = b.plot(time=0.75, y='ws', fc='teffs', facecmap='YlOrRd', ec='None', show=True)
2.1/tutorials/spots.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Since the syncpar was set to 1.5, one full orbit later the star (and the spot) has made an extra half-rotation.
ax, artists = b.plot(time=1.0, y='ws', fc='teffs', facecmap='YlOrRd', ec='None', show=True)
2.1/tutorials/spots.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Find Peaks

Use the method from P. Du, W. A. Kibbe, S. M. Lin, Bioinformatics 2006, 22, 2059, the same as in scipy.signal.find_peaks_cwt() and baselineWavelet.

Wavelet transform
import os

import numpy as np
import matplotlib.pyplot as plt
from scipy import signal

data1 = np.genfromtxt(os.path.join('..', 'tests', 'data', 'raman-785nm.txt'))
x = data1[:, 0]
y = data1[:, 1]
plt.plot(x, y)
notebooks/find_background.ipynb
rohanisaac/spectra
gpl-3.0
Find ridge lines
widths = np.arange(1, 71)
cwtmat = signal.cwt(y, signal.ricker, widths)
plt.imshow(cwtmat, aspect='auto', cmap='PRGn')

# Find local maxima
# make a binary array containing the local maxima of the transform, with the same shape
lmax = np.zeros(cwtmat.shape)
for i in range(cwtmat.shape[0]):
    lmax[i, signal.argrelextrema(cwtmat[i, :], np.greater)] = 1

fig, ax = plt.subplots(figsize=(15, 4))
ax.imshow(lmax, aspect='auto', cmap='gray_r')

# allocate memory
# initial location assigned to each peak from the first row
peak_pos_start = np.where(lmax[0, :] == 1)[0]
# current position of the ridge
peak_ridge = np.copy(peak_pos_start)  # full copy
n_peaks = peak_pos_start.size
# length of the ridge
peak_len = np.ones(n_peaks)
# use the max of the ridge line to find the width of the peaks
peak_pos = np.zeros(n_peaks, dtype='int')
peak_width = np.ones(n_peaks)
peak_width_max = np.zeros(n_peaks)

# Link local maxima (find ridges)
w = 3
# for each row starting at the second
for i in range(1, lmax.shape[0]):
    # for each peak
    for j in range(n_peaks):
        # assume it doesn't extend, and then check
        extends = False
        p = peak_ridge[j]
        if lmax[i, p] == 1:
            # if there is one below, it is part of the same ridge
            extends = True
        else:
            # if not, search around the peak
            for k in range(1, w):
                if lmax[i, p - k] == 1:
                    extends = True
                    peak_ridge[j] -= k
                    break
                elif lmax[i, p + k] == 1:
                    extends = True
                    peak_ridge[j] += k
                    break
        # if it extends
        if extends:
            # it is longer
            peak_len[j] += 1
            # find the width by comparing the max vs. the previous one
            if cwtmat[i, p] > peak_width_max[j]:
                peak_width_max[j] = cwtmat[i, p]
                peak_width[j] = i
                peak_pos[j] = p

print(peak_pos[:20])
print(peak_width[:20])

# generate a simulated spectrum of sorts, with peak positions and the length of the ridge lines
ypeaks = np.zeros(y.shape)
ypeaks[peak_pos] = peak_len * peak_width
fig, ax = plt.subplots(figsize=(15, 4))
ax.plot(x, ypeaks)

# find peaks using scipy's find_peaks_cwt as well as the filtered ridge positions
peaks = signal.find_peaks_cwt(y, wavelet=signal.ricker, widths=widths)
peaks_2 = peak_pos[np.all(((peak_width > 0), (peak_len > 5)), axis=0)]
print(peaks, peaks_2)
notebooks/find_background.ipynb
rohanisaac/spectra
gpl-3.0
For now use scipy.signal.find_peaks_cwt(), compare with my own implementation
fig, ax = plt.subplots(24, figsize=(10, 10))
for w in range(3):
    for l in range(2, 10):
        a = ax[w * 8 + (l - 2)]
        peaks = peak_pos[np.all(((peak_width > w), (peak_len > l)), axis=0)]
        a.plot(x, y)
        a.plot(x[peaks], y[peaks], 'rx', label='w%i, l%i' % (w, l))
        # a.legend()

# find peaks using scipy's find_peaks_cwt and compare with the filtered ridge positions
peaks = signal.find_peaks_cwt(y, wavelet=signal.ricker, widths=widths)
peaks_2 = peak_pos[np.all(((peak_width > 1), (peak_len > 5)), axis=0)]

fig, ax = plt.subplots(figsize=(15, 5))
ax.semilogy(x, y)
ax.semilogy(x[peaks], y[peaks], 'kv', alpha=0.8)
ax.semilogy(x[peaks_2], y[peaks_2], 'rd', alpha=0.8, label='filtered width')
# ax.plot(x[peaks_3], y[peaks_3], 'bx', label='filtered length')
ax.set_ylim(200000, 600000)
ax.legend()

# repeat with stricter filtering on ridge width and length
peaks = signal.find_peaks_cwt(y, wavelet=signal.ricker, widths=widths)
peaks_2 = peak_pos[np.all(((peak_width > 5), (peak_len > 20)), axis=0)]

fig, ax = plt.subplots(figsize=(15, 5))
ax.plot(x, y)
ax.plot(x[peaks], y[peaks], 'kv', alpha=0.8, label='scipy')
ax.plot(x[peaks_2], y[peaks_2], 'rd', alpha=0.8, label='filtered length and width')
# ax.plot(x[peaks_3], y[peaks_3], 'bx', label='filtered length')
ax.set_ylim(200000, 520000)
ax.legend()
notebooks/find_background.ipynb
rohanisaac/spectra
gpl-3.0
Estimate Peak Widths

Procedure from Zhang et al.:

1. Perform a CWT with the Haar wavelet, using the same scales as for peak finding. The result is an M x N matrix.
2. Take the absolute value of all entries.
3. For each peak from peak detection there are two parameters: index and scale.
   a. The row corresponding to the peak's scale is taken out.
   b. Search for local minima out to three times the peak scale or to the next peak index.
4. If local minima do not exist, the peak start or end point is min(3 x peak scale, next peak index); otherwise the peak boundaries are the minima and min(...).
5. Repeat for all peaks.
# analyze the ricker wavelet to help build the haar wavelet
points = 100
for a in range(2, 11, 2):
    wave = signal.ricker(points, a)
    plt.plot(wave)
# note, all integrate to 0


# make a haar mother wavelet
def haar2(points, a):
    """Return a haar mother wavelet.

            1   if 0 <= t < 1/2
    h(t) = -1   if 1/2 <= t < 1
            0   otherwise

    Numpy version, not accurate right now
    """
    x = np.arange(0, points) - (points - 1.0) / 2
    wave = np.zeros(x.shape)
    amp = 2 / a
    wave[np.where(np.logical_and(0 <= x, x < 0.5 * a))[0]] = 1
    wave[np.where(np.logical_and(-0.5 * a <= x, x < 1))[0]] = -1
    return wave * amp


# make a haar mother wavelet
def haar(points, a):
    """Return a haar mother wavelet.

            1   if 0 <= t < 1/2
    h(t) = -1   if 1/2 <= t < 1
            0   otherwise
    """
    vec = np.arange(0, points) - (points - 1.0) / 2
    wave = np.zeros(vec.shape)
    amp = 2 / a
    for i, x in enumerate(vec):
        if 0 <= x < 0.5 * a:
            wave[i] = 1
        elif -0.5 * a <= x < 1:
            wave[i] = -1
    return wave * amp


points = 100
for a in range(2, 11, 2):
    wave = haar(points, a)
    plt.step(np.arange(points), wave)

hw = signal.cwt(y, haar, widths=widths)
plt.imshow(hw, aspect='auto', cmap='PRGn')

ahw = np.abs(hw)
plt.imshow(ahw, aspect='auto', cmap='PRGn')
notebooks/find_background.ipynb
rohanisaac/spectra
gpl-3.0
Search for local minima in the row corresponding to the peak's scale, within 3x the peak scale or up to the next peak index.
for p in peak_pos:
    print(p)
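The cell above only prints the detected peak positions; the boundary search described in the procedure is not implemented here. A rough sketch of that step, assuming `ahw` is the absolute Haar CWT matrix from the previous cell and `peak_width` holds each peak's scale index from the ridge linking:

```python
# Sketch: estimate left/right boundaries for each peak (variable names are assumptions)
boundaries = []
for pos, scale in zip(peak_pos, peak_width.astype(int)):
    row = ahw[scale, :]                       # row at the peak's scale
    window = int(3 * scale)                   # search up to 3x the peak scale
    lo, hi = max(pos - window, 0), min(pos + window, row.size - 1)

    left_minima = signal.argrelextrema(row[lo:pos], np.less)[0]
    right_minima = signal.argrelextrema(row[pos:hi], np.less)[0]

    # fall back to the window edge if no local minimum exists
    left = lo + left_minima[-1] if left_minima.size else lo
    right = pos + right_minima[0] if right_minima.size else hi
    boundaries.append((left, right))

print(boundaries[:10])
```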
notebooks/find_background.ipynb
rohanisaac/spectra
gpl-3.0
Open questions/issues

Should we be recording other observing meta-data? How about SFR, M*, etc.?

DEIMOS Targeting

Pull mask target info from Mask files :: parse_deimos_mask_file
Pull other target info from SExtractor output (requires a yaml file describing the target criteria and the SExtractor output file).

Sample output of MULTI_OBJ file:

| INSTR | MASK_NAME | MASK_RA | MASK_DEC | MASK_EPOCH | MASK_PA | DATE_OBS | DISPERSER | TEXP | CONDITIONS |
|-------|-----------|---------|----------|------------|---------|----------|-----------|------|------------|
| DEIMOS | PG1407_may_early | 14:09:34.10 | 26:18:45.1 | 2000.0 | -96.1 | 23-Jul-2015 | G600 | 3600.0 | POOR_SEEING,CLOUDS |
| DEIMOS | PG1407_may_early | 14:09:34.10 | 26:18:45.1 | 2000.0 | -96.1 | 24-Jul-2015 | G600 | 3600.0 | CLEAR |
#### Sample of target file
from astropy.table import Table
import numpy as np

fil = '/Users/xavier/CASBAH/Galaxies/PG1407+265/PG1407+265_targets.fits'
targ = Table.read(fil)
#
mt = np.where(targ['MASK_NAME'] != 'N/A')[0]
targ[mt[0:5]]
xastropy/casbah/CASBAH_galaxy_database.ipynb
profxj/xastropy
bsd-3-clause
Testing
fil = '/Users/xavier/CASBAH/Galaxies/PG1407+265/PG1407+265_targets.fits'
tmp = Table.read(fil, fill_values=[('N/A', '0', 'MASK_NAME')], format='fits')
xastropy/casbah/CASBAH_galaxy_database.ipynb
profxj/xastropy
bsd-3-clause
Loading text data with tf.data

<table class="tfo-notebook-buttons" align="left"> <td> <a target="_blank" href="https://tensorflow.google.cn/tutorials/load_data/text"><img src="https://tensorflow.google.cn/images/tf_logo_32px.png" />View on TensorFlow.org</a> </td> <td> <a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/load_data/text.ipynb"><img src="https://tensorflow.google.cn/images/colab_logo_32px.png" />Run in Google Colab</a> </td> <td> <a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/zh-cn/tutorials/load_data/text.ipynb"><img src="https://tensorflow.google.cn/images/GitHub-Mark-32px.png" />View source on GitHub</a> </td> <td> <a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/zh-cn/tutorials/load_data/text.ipynb"><img src="https://tensorflow.google.cn/images/download_logo_32px.png" />Download notebook</a> </td> </table>

Note: This notebook is a community translation of the official English documentation. Community translations are best-effort and may not reflect the latest official docs. To suggest improvements, submit a pull request to the tensorflow/docs GitHub repository; to volunteer to write or review translations, join the [email protected] Google Group.

This tutorial provides an example of how to use tf.data.TextLineDataset to load text files. TextLineDataset is typically used to build a dataset from text files, where each line of the original file becomes one example. This is suitable for most line-based text data (for example, poetry or error logs). Below we will use three different English translations of the same work (Homer's Iliad) and train a model to identify the translator from a single line of text.

Setup
import tensorflow as tf

import tensorflow_datasets as tfds
import os
site/zh-cn/tutorials/load_data/text.ipynb
tensorflow/docs-l10n
apache-2.0
The three translations are by: William Cowper — text; Edward, Earl of Derby — text; Samuel Butler — text. The text files used in this tutorial have already gone through some typical preprocessing, mainly removing document headers and footers, line numbers, and chapter titles. Please download these lightly modified files.
DIRECTORY_URL = 'https://storage.googleapis.com/download.tensorflow.org/data/illiad/'
FILE_NAMES = ['cowper.txt', 'derby.txt', 'butler.txt']

for name in FILE_NAMES:
    text_dir = tf.keras.utils.get_file(name, origin=DIRECTORY_URL + name)

parent_dir = os.path.dirname(text_dir)

parent_dir
site/zh-cn/tutorials/load_data/text.ipynb
tensorflow/docs-l10n
apache-2.0
Load the text into datasets

Iterate through the files, loading each file into its own dataset. Each example needs to be individually labeled, so use tf.data.Dataset.map to apply a labeler function to each one. This will iterate over every example in the dataset, returning (example, label) pairs.
def labeler(example, index):
    return example, tf.cast(index, tf.int64)

labeled_data_sets = []

for i, file_name in enumerate(FILE_NAMES):
    lines_dataset = tf.data.TextLineDataset(os.path.join(parent_dir, file_name))
    labeled_dataset = lines_dataset.map(lambda ex: labeler(ex, i))
    labeled_data_sets.append(labeled_dataset)
site/zh-cn/tutorials/load_data/text.ipynb
tensorflow/docs-l10n
apache-2.0
Combine these labeled datasets into a single dataset, and shuffle it.
BUFFER_SIZE = 50000
BATCH_SIZE = 64
TAKE_SIZE = 5000

all_labeled_data = labeled_data_sets[0]
for labeled_dataset in labeled_data_sets[1:]:
    all_labeled_data = all_labeled_data.concatenate(labeled_dataset)

all_labeled_data = all_labeled_data.shuffle(
    BUFFER_SIZE, reshuffle_each_iteration=False)
site/zh-cn/tutorials/load_data/text.ipynb
tensorflow/docs-l10n
apache-2.0
You can use tf.data.Dataset.take together with print to see what the (example, label) pairs look like. The numpy property shows each Tensor's value.
for ex in all_labeled_data.take(5):
    print(ex)
site/zh-cn/tutorials/load_data/text.ipynb
tensorflow/docs-l10n
apache-2.0
Encode text lines as numbers

Machine learning models work on numbers, not text, so the strings need to be converted into lists of numbers. To do that, we build a one-to-one mapping between tokens and integers.

Build the vocabulary

First, build a vocabulary by tokenizing the text into a collection of individual unique words. There are several ways to do this in both TensorFlow and Python. For this tutorial: Iterate over each example's numpy value. Use tfds.features.text.Tokenizer to split it into tokens. Collect these tokens into a Python set to remove duplicates. Get the size of the vocabulary for later use.
tokenizer = tfds.features.text.Tokenizer()

vocabulary_set = set()
for text_tensor, _ in all_labeled_data:
    some_tokens = tokenizer.tokenize(text_tensor.numpy())
    vocabulary_set.update(some_tokens)

vocab_size = len(vocabulary_set)
vocab_size
site/zh-cn/tutorials/load_data/text.ipynb
tensorflow/docs-l10n
apache-2.0
Encode examples

Create an encoder by passing vocabulary_set to tfds.features.text.TokenTextEncoder. The encoder's encode method takes a line of text and returns a list of integers.
encoder = tfds.features.text.TokenTextEncoder(vocabulary_set)
site/zh-cn/tutorials/load_data/text.ipynb
tensorflow/docs-l10n
apache-2.0
You can try running this on a single line to see what the output looks like.
example_text = next(iter(all_labeled_data))[0].numpy()
print(example_text)

encoded_example = encoder.encode(example_text)
print(encoded_example)
site/zh-cn/tutorials/load_data/text.ipynb
tensorflow/docs-l10n
apache-2.0
Now run the encoder on the dataset by wrapping it in tf.py_function and passing that to the dataset's map method.
def encode(text_tensor, label):
    encoded_text = encoder.encode(text_tensor.numpy())
    return encoded_text, label

def encode_map_fn(text, label):
    # py_func doesn't set the shape of the returned tensors.
    encoded_text, label = tf.py_function(encode,
                                         inp=[text, label],
                                         Tout=(tf.int64, tf.int64))

    # `tf.data.Datasets` work best if all components have a shape set
    # so set the shapes manually:
    encoded_text.set_shape([None])
    label.set_shape([])

    return encoded_text, label

all_encoded_data = all_labeled_data.map(encode_map_fn)
site/zh-cn/tutorials/load_data/text.ipynb
tensorflow/docs-l10n
apache-2.0
Split the dataset into test and train batches

Use tf.data.Dataset.take and tf.data.Dataset.skip to create a small test dataset and a larger training set. Before being passed into the model, the datasets need to be batched. Typically, the examples inside a batch need to have the same size and shape, but the examples in these datasets are not all the same size (each line of text has a different number of words). So use tf.data.Dataset.padded_batch (instead of batch) to pad the examples to the same size.
train_data = all_encoded_data.skip(TAKE_SIZE).shuffle(BUFFER_SIZE)
train_data = train_data.padded_batch(BATCH_SIZE)

test_data = all_encoded_data.take(TAKE_SIZE)
test_data = test_data.padded_batch(BATCH_SIZE)
site/zh-cn/tutorials/load_data/text.ipynb
tensorflow/docs-l10n
apache-2.0
Now, test_data and train_data are not collections of (example, label) pairs, but collections of batches. Each batch is a pair of (many examples, many labels) represented as arrays.
sample_text, sample_labels = next(iter(test_data))

sample_text[0], sample_labels[0]
site/zh-cn/tutorials/load_data/text.ipynb
tensorflow/docs-l10n
apache-2.0
Since we have introduced a new token for encoding (the zero used for padding), the vocabulary size has increased by one.
vocab_size += 1
site/zh-cn/tutorials/load_data/text.ipynb
tensorflow/docs-l10n
apache-2.0
Build the model
model = tf.keras.Sequential()
site/zh-cn/tutorials/load_data/text.ipynb
tensorflow/docs-l10n
apache-2.0
The first layer converts integer representations to dense vector embeddings. See the Word Embeddings tutorial for more details.
model.add(tf.keras.layers.Embedding(vocab_size, 64))
site/zh-cn/tutorials/load_data/text.ipynb
tensorflow/docs-l10n
apache-2.0
The next layer is an LSTM layer, which lets the model interpret words using the context around them. A bidirectional wrapper on the LSTM helps the model learn about a data point's relationship to the data points that came before and after it.
model.add(tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)))
site/zh-cn/tutorials/load_data/text.ipynb
tensorflow/docs-l10n
apache-2.0
Finally, we have one or more densely connected layers, the last of which is the output layer. The output layer produces the probability of the example belonging to each label; the label with the highest probability is the model's final prediction.
# One or more dense layers.
# Edit the list in the `for` line to experiment with layer sizes.
for units in [64, 64]:
    model.add(tf.keras.layers.Dense(units, activation='relu'))

# Output layer. The first argument is the number of labels.
model.add(tf.keras.layers.Dense(3, activation='softmax'))
site/zh-cn/tutorials/load_data/text.ipynb
tensorflow/docs-l10n
apache-2.0
Finally, compile the model. For a softmax categorization model, sparse_categorical_crossentropy is the usual loss function. You can try other optimizers, but adam is the most common.
model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy'])
site/zh-cn/tutorials/load_data/text.ipynb
tensorflow/docs-l10n
apache-2.0
Train the model

A model trained on this data achieves decent accuracy (around 83%).
model.fit(train_data, epochs=3, validation_data=test_data)

eval_loss, eval_acc = model.evaluate(test_data)

print('\nEval loss: {}, Eval accuracy: {}'.format(eval_loss, eval_acc))
site/zh-cn/tutorials/load_data/text.ipynb
tensorflow/docs-l10n
apache-2.0
Getting the data

The planet dataset isn't available on the fastai dataset page due to copyright restrictions. You can download it from Kaggle however. Let's see how to do this by using the Kaggle API, as it's going to be pretty useful to you if you want to join a competition or use other Kaggle datasets later on. First, install the Kaggle API by uncommenting the following line and executing it, or by executing it in your terminal (depending on your platform you may need to modify this slightly, either adding source activate fastai or similar, or prefixing pip with a path; have a look at how conda install is called for your platform in the appropriate Returning to work section of https://course.fast.ai/). Depending on your environment, you may also need to append "--user" to the command.
# ! {sys.executable} -m pip install kaggle --upgrade
nbs/dl1/lesson3-planet.ipynb
fastai/course-v3
apache-2.0
Then you need to upload your credentials from Kaggle on your instance. Login to kaggle and click on your profile picture on the top left corner, then 'My account'. Scroll down until you find a button named 'Create New API Token' and click on it. This will trigger the download of a file named 'kaggle.json'. Upload this file to the directory this notebook is running in, by clicking "Upload" on your main Jupyter page, then uncomment and execute the next two commands (or run them in a terminal). For Windows, uncomment the last two commands.
# ! mkdir -p ~/.kaggle/
# ! mv kaggle.json ~/.kaggle/

# For Windows, uncomment these two commands
# ! mkdir %userprofile%\.kaggle
# ! move kaggle.json %userprofile%\.kaggle
nbs/dl1/lesson3-planet.ipynb
fastai/course-v3
apache-2.0
You're all set to download the data from planet competition. You first need to go to its main page and accept its rules, and run the two cells below (uncomment the shell commands to download and unzip the data). If you get a 403 forbidden error it means you haven't accepted the competition rules yet (you have to go to the competition page, click on Rules tab, and then scroll to the bottom to find the accept button).
path = Config.data_path()/'planet'
path.mkdir(parents=True, exist_ok=True)
path

# ! kaggle competitions download -c planet-understanding-the-amazon-from-space -f train-jpg.tar.7z -p {path}
# ! kaggle competitions download -c planet-understanding-the-amazon-from-space -f train_v2.csv -p {path}
# ! unzip -q -n {path}/train_v2.csv.zip -d {path}
nbs/dl1/lesson3-planet.ipynb
fastai/course-v3
apache-2.0
To extract the content of this file, we'll need 7zip, so uncomment the following line if you need to install it (or run sudo apt install p7zip-full in your terminal).
# ! conda install --yes --prefix {sys.prefix} -c haasad eidl7zip
nbs/dl1/lesson3-planet.ipynb
fastai/course-v3
apache-2.0
And now we can unpack the data (uncomment to run - this might take a few minutes to complete).
# ! 7za -bd -y -so x {path}/train-jpg.tar.7z | tar xf - -C {path.as_posix()}
nbs/dl1/lesson3-planet.ipynb
fastai/course-v3
apache-2.0
Multiclassification Contrary to the pets dataset studied in last lesson, here each picture can have multiple labels. If we take a look at the csv file containing the labels (in 'train_v2.csv' here) we see that each 'image_name' is associated to several tags separated by spaces.
df = pd.read_csv(path/'train_v2.csv')
df.head()
nbs/dl1/lesson3-planet.ipynb
fastai/course-v3
apache-2.0
To put this in a DataBunch while using the data block API, we then need to use ImageList (and not ImageDataBunch). This will make sure the model created has the proper loss function to deal with the multiple classes.
tfms = get_transforms(flip_vert=True, max_lighting=0.1, max_zoom=1.05, max_warp=0.)
nbs/dl1/lesson3-planet.ipynb
fastai/course-v3
apache-2.0
We use parentheses around the data block pipeline below, so that we can use a multiline statement without needing to add '\'.
np.random.seed(42)
src = (ImageList.from_csv(path, 'train_v2.csv', folder='train-jpg', suffix='.jpg')
       .split_by_rand_pct(0.2)
       .label_from_df(label_delim=' '))

data = (src.transform(tfms, size=128)
        .databunch().normalize(imagenet_stats))
nbs/dl1/lesson3-planet.ipynb
fastai/course-v3
apache-2.0
show_batch still works, and shows us the different labels separated by ;.
data.show_batch(rows=3, figsize=(12,9))
nbs/dl1/lesson3-planet.ipynb
fastai/course-v3
apache-2.0
To create a Learner we use the same function as in lesson 1. Our base architecture is resnet50 again, but the metrics are a little bit different: we use accuracy_thresh instead of accuracy. In lesson 1, we determined the prediction for a given class by picking the final activation that was the biggest, but here, each activation can be 0. or 1. accuracy_thresh selects the ones that are above a certain threshold (0.5 by default) and compares them to the ground truth. As for Fbeta, it's the metric that was used by Kaggle on this competition. See here for more details.
arch = models.resnet50

acc_02 = partial(accuracy_thresh, thresh=0.2)
f_score = partial(fbeta, thresh=0.2)
learn = cnn_learner(data, arch, metrics=[acc_02, f_score])
nbs/dl1/lesson3-planet.ipynb
fastai/course-v3
apache-2.0
We use the LR Finder to pick a good learning rate.
learn.lr_find()
learn.recorder.plot()
nbs/dl1/lesson3-planet.ipynb
fastai/course-v3
apache-2.0
Then we can fit the head of our network.
lr = 0.01
learn.fit_one_cycle(5, slice(lr))
learn.save('stage-1-rn50')
nbs/dl1/lesson3-planet.ipynb
fastai/course-v3
apache-2.0
...And fine-tune the whole model:
learn.unfreeze()
learn.lr_find()
learn.recorder.plot()
learn.fit_one_cycle(5, slice(1e-5, lr/5))
learn.save('stage-2-rn50')

data = (src.transform(tfms, size=256)
        .databunch().normalize(imagenet_stats))
learn.data = data
data.train_ds[0][0].shape

learn.freeze()
learn.lr_find()
learn.recorder.plot()

lr = 1e-2/2
learn.fit_one_cycle(5, slice(lr))
learn.save('stage-1-256-rn50')

learn.unfreeze()
learn.fit_one_cycle(5, slice(1e-5, lr/5))
learn.recorder.plot_losses()
learn.save('stage-2-256-rn50')
nbs/dl1/lesson3-planet.ipynb
fastai/course-v3
apache-2.0
You won't really know how you're going until you submit to Kaggle, since the leaderboard isn't using the same subset as we have for training. But as a guide, 50th place (out of 938 teams) on the private leaderboard was a score of 0.930.
learn.export()
nbs/dl1/lesson3-planet.ipynb
fastai/course-v3
apache-2.0
fin (This section will be covered in part 2 - please don't ask about it just yet! :) )
#! kaggle competitions download -c planet-understanding-the-amazon-from-space -f test-jpg.tar.7z -p {path}
#! 7za -bd -y -so x {path}/test-jpg.tar.7z | tar xf - -C {path}
#! kaggle competitions download -c planet-understanding-the-amazon-from-space -f test-jpg-additional.tar.7z -p {path}
#! 7za -bd -y -so x {path}/test-jpg-additional.tar.7z | tar xf - -C {path}

test = ImageList.from_folder(path/'test-jpg').add(ImageList.from_folder(path/'test-jpg-additional'))
len(test)

learn = load_learner(path, test=test)
preds, _ = learn.get_preds(ds_type=DatasetType.Test)

thresh = 0.2
labelled_preds = [' '.join([learn.data.classes[i] for i, p in enumerate(pred) if p > thresh]) for pred in preds]
labelled_preds[:5]

fnames = [f.name[:-4] for f in learn.data.test_ds.items]
df = pd.DataFrame({'image_name': fnames, 'tags': labelled_preds}, columns=['image_name', 'tags'])
df.to_csv(path/'submission.csv', index=False)

! kaggle competitions submit planet-understanding-the-amazon-from-space -f {path/'submission.csv'} -m "My submission"
nbs/dl1/lesson3-planet.ipynb
fastai/course-v3
apache-2.0
The reactGS folder "mimics" the actual react-groundstation github repository, only copying the file directory structure, but the source code itself (which is a lot) isn't completely copied over. I wanted to keep these scripts/notebooks/files built on top of that github repository to be separate from the actual working code.
import os

wherepodCommandsis = os.getcwd() + '/reactGS/server/udp/'
print(wherepodCommandsis)
packetDef/podCommands.ipynb
ernestyalumni/servetheloop
mit
node.js/(JavaScript) to json, i.e. node.js/(JavaScript) $\to$ json

Make a copy of server/udp/podCommands.js. In this copy, comment out var chalk = require('chalk') (this is the only thing you have to do manually). Run this in the directory containing your copy of podCommands.js:

node traverse_podCommands.js

This should generate a json file, podCmds_lst.json.

Available podCommands as a Python list; json to Python list, i.e. json $\to$ Python list
import json

f_podCmds_json = open(wherepodCommandsis + 'podCmds_lst.json', 'rb')
rawjson_podCmds = f_podCmds_json.read()
f_podCmds_json.close()
print(type(rawjson_podCmds))

podCmds_lst = json.loads(rawjson_podCmds)
print(type(podCmds_lst))
print(len(podCmds_lst))  # there are 104 available commands for the pod!

for cmd in podCmds_lst:
    print(cmd)
packetDef/podCommands.ipynb
ernestyalumni/servetheloop
mit
Dirty parsing of podCommands.js and the flight control parameters
f_podCmds = open(wherepodCommandsis + 'podCommands.js', 'rb')
raw_podCmds = f_podCmds.read()
f_podCmds.close()
print(type(raw_podCmds))
print(len(raw_podCmds))

# get the name of the functions
cmdnameslst = [func[:func.find("(")].strip() for func in raw_podCmds.split("function ")]

funcparamslst = [func[func.find("(")+1:func.find(")")] if func[func.find("(")+1:func.find(")")] is not '' else None
                 for func in raw_podCmds.split("function ")]

#raw_podCmds.split("function ")[3][ raw_podCmds.split("function ")[3].find("(")+1:raw_podCmds.split("function ")[3].find(")")]

# more parsing of this list of strings
funcparamslst_cleaned = []
for param in funcparamslst:
    if param is None:
        funcparamslst_cleaned.append(None)
    else:
        funcparamslst_cleaned.append(param.strip().split(','))

print(len(raw_podCmds.split("function ")))  # 106 commands

# get the index value (e.g. starts at position 22) of where "udp.tx.transmitPodCommand" starts, treating it as a string
#whereisudptransmit = [func.find("udp.tx.transmitPodCommand(") for func in raw_podCmds.split("function ")]
whereisudptransmit = []
for func in raw_podCmds.split("function "):
    val = func.find("udp.tx.transmitPodCommand(")
    if val is not -1:
        if func.find("// ", val-4) is not -1 or func.find("// udp", val-4) is not -1:
            whereisudptransmit.append(None)
        else:
            whereisudptransmit.append(val)
    else:
        whereisudptransmit.append(None)

#whereisudptransmit = [func.find("udp.tx.transmitPodCommand(") for func in raw_podCmds.split("function ")]
# remove -1 values
#whereisudptransmit = filter(lambda x : x != -1, whereisudptransmit)

rawParams = [funcstr[funcstr.find("(", val)+1:funcstr.find(")", val)] if val is not None else None
             for funcstr, val in zip(raw_podCmds.split("function "), whereisudptransmit)]

funcparamslst_cleaned[:10]

raw_podCmds.split("function ")[4].find("// ", 116-4);

# more parsing of this list of strings
cleaningParams = []
for rawparam in rawParams:
    if rawparam is None:
        cleaningParams.append(None)
    else:
        cleanParam = []
        cleanParam.append(rawparam.split(',')[0].strip("'"))
        for strval in rawparam.split(',')[1:]:
            strval2 = strval.strip()
            try:
                strval2 = int(strval2, 16)
                strval2 = hex(strval2)
            except ValueError:
                strval2
            cleanParam.append(strval2)
        cleaningParams.append(cleanParam)

cleaningParams[:10]

# get the name of the functions
#[func[:func.find("(")]
# if func.find("()") is not -1 else None for func in raw_podCmds.split("function ")];
cmdnameslst = [func[:func.find("(")].strip() for func in raw_podCmds.split("function ")]

# each node js function has its arguments; do that first
podfunclst = zip(cmdnameslst, funcparamslst_cleaned)
print(len(podfunclst))
podfunclst[:10];

# pair each function (with its arguments) with its cleaned udp transmit parameters
podCommandparams = zip(podfunclst, cleaningParams)
print(len(podCommandparams))
podCommandparams[-2]
packetDef/podCommands.ipynb
ernestyalumni/servetheloop
mit
So the structure of our result is as follows: Python tuples (each of size 2):

"""
(
 (Name of pod command as a string,
  None if there are no function parameters, or a Python list of function arguments),
 Python list [Subsystem name as a string,
              parameter1 as a hex value, parameter2 as a hex value,
              parameter3 as a hex value, parameter4 as a hex value]
)
"""

Notice that in the original code there are some TODOs still left (eek!), so some udp.tx.transmitPodCommand calls are commented out or left as TODO, and some depend on arguments of the enclosing function (and so will change; the parameter is a variable).
podCommandparams[:10]

try:
    import cPickle as pickle
except ImportError:
    import pickle

podCommandparamsfile = open("podCommandparams.pkl", 'wb')
pickle.dump(podCommandparams, podCommandparamsfile)
podCommandparamsfile.close()

# open up a pickle file like so:
podCommandparamsfile_recover = open("podCommandparams.pkl", 'rb')
podCommandparams_recover = pickle.load(podCommandparamsfile_recover)
podCommandparamsfile_recover.close()

podCommandparams_recover[:10]
packetDef/podCommands.ipynb
ernestyalumni/servetheloop
mit
Going to .csv @nuttwerx and @ernestyalumni decided upon separating the multiple entries in a field by the semicolon ";":
tocsv = []
for cmd in podCommandparams_recover:
    name = cmd[0][0]
    funcparam = cmd[0][1]
    if funcparam is None:
        fparam = None
    else:
        fparam = ";".join(funcparam)
    udpparam = cmd[1]
    if udpparam is None:
        uname = None
        uparam = None
    else:
        uname = udpparam[0]
        uparam = ";".join(udpparam[1:])
    tocsv.append([name, fparam, uname, uparam])
packetDef/podCommands.ipynb
ernestyalumni/servetheloop
mit
Add the headers in manually: 1 = Command name; 2 = Function args; 3 = Pod Node; 4 = Command Args
header = ["Command name", "Function args", "Pod Node", "Command Args"]
tocsv.insert(0, header)
packetDef/podCommands.ipynb
ernestyalumni/servetheloop
mit
The csv field format is as follows: (function name), (function arguments, or None if there are none), (UDP transmit name, or None if there is no udp transmit command), (UDP transmit parameters, 4 of them, separated by semicolons, or None if there is no udp transmit command).
import csv

f_podCommands_tocsv = open("podCommands.csv", 'w')
tocsv_writer = csv.writer(f_podCommands_tocsv)
tocsv_writer.writerows(tocsv)
f_podCommands_tocsv.close()

# tocsv.insert(0, header)  # no need
# tocsv[:10]  # no need
packetDef/podCommands.ipynb
ernestyalumni/servetheloop
mit
Compute source power spectral density (PSD) of VectorView and OPM data

Here we compute the resting-state PSD from raw data recorded using a Neuromag VectorView system and a custom OPM system. The pipeline is meant to mostly follow the Brainstorm [1] OMEGA resting tutorial pipeline. The steps we use are:

Filtering: downsample heavily.
Artifact detection: use SSP for EOG and ECG.
Source localization: dSPM, depth weighting, cortically constrained.
Frequency: power spectral density (Welch), 4 sec window, 50% overlap.
Standardize: normalize by relative power for each source.

Preprocessing
# Authors: Denis Engemann <[email protected]>
#          Luke Bloy <[email protected]>
#          Eric Larson <[email protected]>
#
# License: BSD (3-clause)

import os.path as op

from mne.filter import next_fast_len

import mne

print(__doc__)

data_path = mne.datasets.opm.data_path()
subject = 'OPM_sample'

subjects_dir = op.join(data_path, 'subjects')
bem_dir = op.join(subjects_dir, subject, 'bem')
bem_fname = op.join(subjects_dir, subject, 'bem',
                    subject + '-5120-5120-5120-bem-sol.fif')
src_fname = op.join(bem_dir, '%s-oct6-src.fif' % subject)
vv_fname = data_path + '/MEG/SQUID/SQUID_resting_state.fif'
vv_erm_fname = data_path + '/MEG/SQUID/SQUID_empty_room.fif'
vv_trans_fname = data_path + '/MEG/SQUID/SQUID-trans.fif'
opm_fname = data_path + '/MEG/OPM/OPM_resting_state_raw.fif'
opm_erm_fname = data_path + '/MEG/OPM/OPM_empty_room_raw.fif'
opm_trans_fname = None
opm_coil_def_fname = op.join(data_path, 'MEG', 'OPM', 'coil_def.dat')
0.21/_downloads/6035dcef33422511928bd2247a3d092d/plot_source_power_spectrum_opm.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Load data, resample. We will store the raw objects in dicts with entries "vv" and "opm" to simplify housekeeping and simplify looping later.
raws = dict()
raw_erms = dict()
new_sfreq = 90.  # Nyquist frequency (45 Hz) < line noise freq (50 Hz)

raws['vv'] = mne.io.read_raw_fif(vv_fname, verbose='error')  # ignore naming
raws['vv'].load_data().resample(new_sfreq)
raws['vv'].info['bads'] = ['MEG2233', 'MEG1842']
raw_erms['vv'] = mne.io.read_raw_fif(vv_erm_fname, verbose='error')
raw_erms['vv'].load_data().resample(new_sfreq)
raw_erms['vv'].info['bads'] = ['MEG2233', 'MEG1842']

raws['opm'] = mne.io.read_raw_fif(opm_fname)
raws['opm'].load_data().resample(new_sfreq)
raw_erms['opm'] = mne.io.read_raw_fif(opm_erm_fname)
raw_erms['opm'].load_data().resample(new_sfreq)

# Make sure our assumptions later hold
assert raws['opm'].info['sfreq'] == raws['vv'].info['sfreq']
0.21/_downloads/6035dcef33422511928bd2247a3d092d/plot_source_power_spectrum_opm.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Do some minimal artifact rejection just for VectorView data
titles = dict(vv='VectorView', opm='OPM')

ssp_ecg, _ = mne.preprocessing.compute_proj_ecg(
    raws['vv'], tmin=-0.1, tmax=0.1, n_grad=1, n_mag=1)
raws['vv'].add_proj(ssp_ecg, remove_existing=True)
# due to how compute_proj_eog works, it keeps the old projectors, so
# the output contains both projector types (and also the original empty-room
# projectors)
ssp_ecg_eog, _ = mne.preprocessing.compute_proj_eog(
    raws['vv'], n_grad=1, n_mag=1, ch_name='MEG0112')
raws['vv'].add_proj(ssp_ecg_eog, remove_existing=True)
raw_erms['vv'].add_proj(ssp_ecg_eog)

fig = mne.viz.plot_projs_topomap(raws['vv'].info['projs'][-4:],
                                 info=raws['vv'].info)
fig.suptitle(titles['vv'])
fig.subplots_adjust(0.05, 0.05, 0.95, 0.85)
0.21/_downloads/6035dcef33422511928bd2247a3d092d/plot_source_power_spectrum_opm.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Explore data
kinds = ('vv', 'opm')
n_fft = next_fast_len(int(round(4 * new_sfreq)))
print('Using n_fft=%d (%0.1f sec)' % (n_fft, n_fft / raws['vv'].info['sfreq']))
for kind in kinds:
    fig = raws[kind].plot_psd(n_fft=n_fft, proj=True)
    fig.suptitle(titles[kind])
    fig.subplots_adjust(0.1, 0.1, 0.95, 0.85)
0.21/_downloads/6035dcef33422511928bd2247a3d092d/plot_source_power_spectrum_opm.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Alignment and forward
# Here we use a reduced size source space (oct5) just for speed
src = mne.setup_source_space(
    subject, 'oct5', add_dist=False, subjects_dir=subjects_dir)
# This line removes source-to-source distances that we will not need.
# We only do it here to save a bit of memory, in general this is not required.
del src[0]['dist'], src[1]['dist']

bem = mne.read_bem_solution(bem_fname)
fwd = dict()
trans = dict(vv=vv_trans_fname, opm=opm_trans_fname)

# check alignment and generate forward
with mne.use_coil_def(opm_coil_def_fname):
    for kind in kinds:
        dig = True if kind == 'vv' else False
        fig = mne.viz.plot_alignment(
            raws[kind].info, trans=trans[kind], subject=subject,
            subjects_dir=subjects_dir, dig=dig, coord_frame='mri',
            surfaces=('head', 'white'))
        mne.viz.set_3d_view(figure=fig, azimuth=0, elevation=90,
                            distance=0.6, focalpoint=(0., 0., 0.))
        fwd[kind] = mne.make_forward_solution(
            raws[kind].info, trans[kind], src, bem, eeg=False, verbose=True)
del trans, src, bem
0.21/_downloads/6035dcef33422511928bd2247a3d092d/plot_source_power_spectrum_opm.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Compute and apply inverse to PSD estimated using multitaper + Welch. Group into frequency bands, then normalize each source point and sensor independently. This makes the value of each sensor point and source location in each frequency band the percentage of the PSD accounted for by that band.
freq_bands = dict(
    delta=(2, 4), theta=(5, 7), alpha=(8, 12), beta=(15, 29), gamma=(30, 45))
topos = dict(vv=dict(), opm=dict())
stcs = dict(vv=dict(), opm=dict())

snr = 3.
lambda2 = 1. / snr ** 2
for kind in kinds:
    noise_cov = mne.compute_raw_covariance(raw_erms[kind])
    inverse_operator = mne.minimum_norm.make_inverse_operator(
        raws[kind].info, forward=fwd[kind], noise_cov=noise_cov, verbose=True)
    stc_psd, sensor_psd = mne.minimum_norm.compute_source_psd(
        raws[kind], inverse_operator, lambda2=lambda2,
        n_fft=n_fft, dB=False, return_sensor=True, verbose=True)
    topo_norm = sensor_psd.data.sum(axis=1, keepdims=True)
    stc_norm = stc_psd.sum()  # same operation on MNE object, sum across freqs
    # Normalize each source point by the total power across freqs
    for band, limits in freq_bands.items():
        data = sensor_psd.copy().crop(*limits).data.sum(axis=1, keepdims=True)
        topos[kind][band] = mne.EvokedArray(
            100 * data / topo_norm, sensor_psd.info)
        stcs[kind][band] = \
            100 * stc_psd.copy().crop(*limits).sum() / stc_norm.data
    del inverse_operator
del fwd, raws, raw_erms
0.21/_downloads/6035dcef33422511928bd2247a3d092d/plot_source_power_spectrum_opm.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Now we can make some plots of each frequency band. Note that the OPM head coverage is only over right motor cortex, so only localization of beta is likely to be worthwhile. Theta
def plot_band(kind, band):
    """Plot activity within a frequency band on the subject's brain."""
    title = "%s %s\n(%d-%d Hz)" % ((titles[kind], band,) + freq_bands[band])
    topos[kind][band].plot_topomap(
        times=0., scalings=1., cbar_fmt='%0.1f', vmin=0, cmap='inferno',
        time_format=title)
    brain = stcs[kind][band].plot(
        subject=subject, subjects_dir=subjects_dir, views='cau', hemi='both',
        time_label=title, title=title, colormap='inferno',
        clim=dict(kind='percent', lims=(70, 85, 99)), smoothing_steps=10)
    brain.show_view(dict(azimuth=0, elevation=0), roll=0)
    return fig, brain


fig_theta, brain_theta = plot_band('vv', 'theta')
0.21/_downloads/6035dcef33422511928bd2247a3d092d/plot_source_power_spectrum_opm.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Alpha
fig_alpha, brain_alpha = plot_band('vv', 'alpha')
0.21/_downloads/6035dcef33422511928bd2247a3d092d/plot_source_power_spectrum_opm.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Beta Here we also show OPM data, which shows a profile similar to the VectorView data beneath the sensors.
fig_beta, brain_beta = plot_band('vv', 'beta') fig_beta_opm, brain_beta_opm = plot_band('opm', 'beta')
0.21/_downloads/6035dcef33422511928bd2247a3d092d/plot_source_power_spectrum_opm.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Gamma
fig_gamma, brain_gamma = plot_band('vv', 'gamma')
0.21/_downloads/6035dcef33422511928bd2247a3d092d/plot_source_power_spectrum_opm.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
The experiment takes participants through two tasks: a congruent task and an incongruent task. In the congruent task the word and its font color agree, while in the incongruent task the text and its font color differ. Both tasks require the participants to say out loud the font color of the words being displayed and to press the 'Finish' button to see how much time they took. The control group is the congruent task, while the experiment group is the incongruent task.

The independent variable is what differs between the congruent and incongruent tasks, namely the words being displayed. Participants are asked to say the font color of the words, which is the same for both the control and experiment groups; but while the displayed text agrees with the color in the congruent task, in the incongruent task it is the other way around. The dependent variable is the time participants take to complete the task. The time depends on whether the text agrees with the font color being displayed.

We can see from the data that, on average, the time participants took for the incongruent task is different from the time they took for the congruent task. We will use a statistical test to check whether the difference is significant.

So what kind of question should we ask of this paired data? We know that, in general, the incongruent task takes longer than the congruent task. So for the confidence interval we can ask for the interval by which the incongruent task takes more seconds than the congruent one, and for the hypothesis test we can ask whether the incongruent task results in a significantly different time than the congruent task.

Our sample size is less than 30, which means our sampling distribution won't be normal. We are faced with two options, a t-test or bootstrapping; in this case we will use a t-test. And since this is an experiment (assuming random assignment), we can draw a causal conclusion. The instructions do not state anywhere how the participants were recruited. There might be a convenience bias (only participants who know about the experiment), a location bias (the city/country where the experiment was performed), or a volunteer bias. Assuming participants were randomly sampled without any bias at all, the result of this experiment can be generalized to the world population.

We design the hypothesis test as follows:

H0: $ \mu_\mathbf{congruent} = \mu_\mathbf{incongruent}$ The time the population takes to solve the congruent task and the incongruent task is the same, on average.

HA: $\mu_\mathbf{congruent} \neq \mu_\mathbf{incongruent}$ The time the population takes to solve the congruent task and the incongruent task is different, on average.

We're going to use a two-sided t-statistic. This is an experiment where we have limited data and samples, and we want to test our hypothesis about the population parameters.
df.describe()
p1-statistics/project.ipynb
napjon/ds-nd
mit
The measure of central tendency used here is the mean, and the measure of variability is the standard deviation.
df.plot.scatter(x='Congruent',y='Incongruent');
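To put a rough number on the relationship visible in the scatter plot, one optional check (assuming the same df DataFrame) is the Pearson correlation coefficient:

# Sketch: correlation between the two conditions
r = df['Congruent'].corr(df['Incongruent'])  # Pearson correlation by default
print(r)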
p1-statistics/project.ipynb
napjon/ds-nd
mit
The scatter plot shows a moderately weak correlation between the Congruent and Incongruent variables.
(df.Incongruent - df.Congruent).plot.hist();
p1-statistics/project.ipynb
napjon/ds-nd
mit
We can see that the distribution of the differences is right-skewed. This makes sense: since the congruent task is easier, no participant should solve the incongruent task faster than the congruent task, and the larger the extra time needed for the incongruent task, the fewer participants we should expect to see. Hypothesis Testing
%%R n = 24 mu = 7.964792 s = 4.864827 CL = 0.95 SE = s/sqrt(n) t = mu/SE t_crit = round(qt((1-CL)/2,df=n-1),digits=3) c(t,c(-t_crit,t_crit))
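As an informal cross-check of the manual computation above (not part of the original notebook), the same paired t-test can be run in Python with scipy, assuming df is still available:

# Sketch: paired (dependent-samples) t-test with scipy
from scipy import stats
t_stat, p_value = stats.ttest_rel(df['Incongruent'], df['Congruent'])
print(t_stat, p_value)  # t_stat should be close to the hand-computed t above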
p1-statistics/project.ipynb
napjon/ds-nd
mit
Since our t-statistic, 8.02, is higher than the t critical value, we conclude that the data provide convincing evidence that the time participants take for the incongruent task is significantly different from the time they take for the congruent task. Confidence Interval
%%R ME = abs(t_crit)*SE c(mu-ME,mu+ME)
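For comparison, a sketch of the same 95% confidence interval computed in Python with scipy (assumption: the paired differences are recomputed from df rather than hard-coded):

# Sketch: 95% CI for the mean paired difference
import numpy as np
from scipy import stats
diff = df['Incongruent'] - df['Congruent']
n = len(diff)
se = diff.std(ddof=1) / np.sqrt(n)
ci = stats.t.interval(0.95, n - 1, loc=diff.mean(), scale=se)
print(ci)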
p1-statistics/project.ipynb
napjon/ds-nd
mit
Load data Similar to previous exercises, we will load CIFAR-10 data from disk.
from cs231n.features import color_histogram_hsv, hog_feature def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000): # Load the raw CIFAR-10 data cifar10_dir = 'cs231n/datasets/cifar-10-batches-py' X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir) # Subsample the data mask = range(num_training, num_training + num_validation) X_val = X_train[mask] y_val = y_train[mask] mask = range(num_training) X_train = X_train[mask] y_train = y_train[mask] mask = range(num_test) X_test = X_test[mask] y_test = y_test[mask] return X_train, y_train, X_val, y_val, X_test, y_test X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
assignment1/features.ipynb
zlpure/CS231n
mit
Extract Features For each image we will compute a Histogram of Oriented Gradients (HOG) as well as a color histogram using the hue channel in HSV color space. We form our final feature vector for each image by concatenating the HOG and color histogram feature vectors. Roughly speaking, HOG should capture the texture of the image while ignoring color information, and the color histogram represents the color of the input image while ignoring texture. As a result, we expect that using both together ought to work better than using either alone. Verifying this assumption would be a good thing to try for the bonus section. The hog_feature and color_histogram_hsv functions both operate on a single image and return a feature vector for that image. The extract_features function takes a set of images and a list of feature functions and evaluates each feature function on each image, storing the results in a matrix where each column is the concatenation of all feature vectors for a single image.
from cs231n.features import * num_color_bins = 10 # Number of bins in the color histogram feature_fns = [hog_feature, lambda img: color_histogram_hsv(img, nbin=num_color_bins)] X_train_feats = extract_features(X_train, feature_fns, verbose=True) X_val_feats = extract_features(X_val, feature_fns) X_test_feats = extract_features(X_test, feature_fns) # Preprocessing: Subtract the mean feature mean_feat = np.mean(X_train_feats, axis=0, keepdims=True) X_train_feats -= mean_feat X_val_feats -= mean_feat X_test_feats -= mean_feat # Preprocessing: Divide by standard deviation. This ensures that each feature # has roughly the same scale. std_feat = np.std(X_train_feats, axis=0, keepdims=True) X_train_feats /= std_feat X_val_feats /= std_feat X_test_feats /= std_feat # Preprocessing: Add a bias dimension X_train_feats = np.hstack([X_train_feats, np.ones((X_train_feats.shape[0], 1))]) X_val_feats = np.hstack([X_val_feats, np.ones((X_val_feats.shape[0], 1))]) X_test_feats = np.hstack([X_test_feats, np.ones((X_test_feats.shape[0], 1))])
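As an illustration of the feature-function interface described above (a sketch, not required by the assignment), any callable that maps a single H x W x C image to a 1-D feature vector could be appended to feature_fns:

# Sketch: a toy per-image feature function compatible with extract_features
def mean_channel_feature(img):
    # returns one value per color channel: that channel's mean intensity
    return img.reshape(-1, img.shape[-1]).mean(axis=0)

# e.g. feature_fns = [hog_feature,
#                     lambda img: color_histogram_hsv(img, nbin=num_color_bins),
#                     mean_channel_feature]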
assignment1/features.ipynb
zlpure/CS231n
mit
Train SVM on features Using the multiclass SVM code developed earlier in the assignment, train SVMs on top of the features extracted above; this should achieve better results than training SVMs directly on top of raw pixels.
# Use the validation set to tune the learning rate and regularization strength from cs231n.classifiers.linear_classifier import LinearSVM learning_rates = [1e-9, 1e-8, 1e-7] regularization_strengths = [1e5, 1e6, 1e7] results = {} best_val = -1 best_svm = None pass ################################################################################ # TODO: # # Use the validation set to set the learning rate and regularization strength. # # This should be identical to the validation that you did for the SVM; save # # the best trained classifer in best_svm. You might also want to play # # with different numbers of bins in the color histogram. If you are careful # # you should be able to get accuracy of near 0.44 on the validation set. # ################################################################################ for i in learning_rates: for j in regularization_strengths: svm=LinearSVM() svm.train(X_train_feats, y_train, learning_rate=i, reg=j,num_iters=5000, verbose=False) y_pred=svm.predict(X_train_feats) y_val_pred=svm.predict(X_val_feats) train_accuracy=np.mean(y_pred==y_train) val_accuracy=np.mean(y_val_pred==y_val) print train_accuracy, val_accuracy results[(i,j)]=(train_accuracy,val_accuracy) if val_accuracy>best_val: best_val=val_accuracy best_svm=svm ################################################################################ # END OF YOUR CODE # ################################################################################ # Print out results. for lr, reg in sorted(results): train_accuracy, val_accuracy = results[(lr, reg)] print 'lr %e reg %e train accuracy: %f val accuracy: %f' % ( lr, reg, train_accuracy, val_accuracy) print 'best validation accuracy achieved during cross-validation: %f' % best_val # Evaluate your trained SVM on the test set y_test_pred = best_svm.predict(X_test_feats) test_accuracy = np.mean(y_test == y_test_pred) print test_accuracy # An important way to gain intuition about how an algorithm works is to # visualize the mistakes that it makes. In this visualization, we show examples # of images that are misclassified by our current system. The first column # shows images that our system labeled as "plane" but whose true label is # something other than "plane". examples_per_class = 8 classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'] for cls, cls_name in enumerate(classes): idxs = np.where((y_test != cls) & (y_test_pred == cls))[0] idxs = np.random.choice(idxs, examples_per_class, replace=False) for i, idx in enumerate(idxs): plt.subplot(examples_per_class, len(classes), i * len(classes) + cls + 1) plt.imshow(X_test[idx].astype('uint8')) plt.axis('off') if i == 0: plt.title(cls_name) plt.show()
assignment1/features.ipynb
zlpure/CS231n
mit
Inline question 1: Describe the misclassification results that you see. Do they make sense? Neural Network on image features Earlier in this assignment we saw that training a two-layer neural network on raw pixels achieved better classification performance than linear classifiers on raw pixels. In this notebook we have seen that linear classifiers on image features outperform linear classifiers on raw pixels. For completeness, we should also try training a neural network on image features. This approach should outperform all previous approaches: you should easily be able to achieve over 55% classification accuracy on the test set; our best model achieves about 60% classification accuracy.
print X_train_feats.shape from cs231n.classifiers.neural_net import TwoLayerNet input_dim = X_train_feats.shape[1] hidden_dim = 500 num_classes = 10 net = TwoLayerNet(input_dim, hidden_dim, num_classes) best_net = None ################################################################################ # TODO: Train a two-layer neural network on image features. You may want to # # cross-validate various parameters as in previous sections. Store your best # # model in the best_net variable. # ################################################################################ maxn=20 best_val=0 for i in xrange(maxn): net_exp=net.train(X_train_feats, y_train, X_val_feats, y_val, learning_rate=1e-2, learning_rate_decay=0.95, reg=1e-5, num_iters=1000, batch_size=200, verbose=False) acc_val=np.mean(net.predict(X_val_feats)==y_val) print acc_val if acc_val>best_val: best_val=acc_val best_net=net ################################################################################ # END OF YOUR CODE # ################################################################################ # Run your neural net classifier on the test set. You should be able to # get more than 55% accuracy. test_acc = (best_net.predict(X_test_feats) == y_test).mean() print test_acc
assignment1/features.ipynb
zlpure/CS231n
mit
By default, autofig uses the z dimension just to assign z-order (so that positive z appears "on top")
autofig.reset() autofig.plot(x, y, z, i=t, xlabel='x', ylabel='y', zlabel='z') mplfig = autofig.draw()
docs/tutorials/3d.ipynb
kecnry/autofig
gpl-3.0
To instead plot using a projected 3d axes, simply pass projection='3d'
autofig.reset() autofig.plot(x, y, z, i=t, xlabel='x', ylabel='y', zlabel='z', projection='3d') mplfig = autofig.draw()
docs/tutorials/3d.ipynb
kecnry/autofig
gpl-3.0
If the projection is set to 3d, you can also set the elevation ('elev') and azimuth ('azim') of the viewing angle. These are provided in degrees and can be either a float (fixed) or a list (changes as a function of the current value of i).
autofig.reset() autofig.plot(x, y, z, i=t, xlabel='x', ylabel='y', zlabel='z', projection='3d', elev=0, azim=0) mplfig = autofig.draw()
docs/tutorials/3d.ipynb
kecnry/autofig
gpl-3.0
When provided as an array, the set viewing angle is determined as follows: if no i is passed, the median values of 'elev' and 'azim' are used; if i is passed, then linear interpolation is used across the i dimension of all calls attached to that axes. Therefore, passing an array (or list or tuple) with two items will simply set the lower and upper bounds. If you want the axes to rotate more than once, simply provide angles above 360.
autofig.reset() autofig.plot(x, y, z, i=t, xlabel='x', ylabel='y', zlabel='z', projection='3d', elev=0, azim=[0, 180]) mplfig = autofig.draw(i=3) anim = autofig.animate(i=t, save='3d_azim_2.gif', save_kwargs={'writer': 'imagemagick'})
docs/tutorials/3d.ipynb
kecnry/autofig
gpl-3.0
We can then achieve an "accelerating" rotation by passing finer detail on the azimuth as a function of 'i'.
autofig.reset() autofig.plot(x, y, z, i=t, xlabel='x', ylabel='y', zlabel='z', projection='3d', elev=0, azim=[0, 20, 30, 50, 150, 180]) anim = autofig.animate(i=t, save='3d_azim_6.gif', save_kwargs={'writer': 'imagemagick'})
docs/tutorials/3d.ipynb
kecnry/autofig
gpl-3.0
Place your images in a folder such as dirname = '/Users/Someone/Desktop/ImagesFromTheInternet'. We'll then use the os package to load them and crop/resize them to a standard size of 100 x 100 pixels. <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
# You need to find 100 images from the web/create them yourself # or find a dataset that interests you (e.g. I used celeb faces # in the course lecture...) # then store them all in a single directory. # With all the images in a single directory, you can then # perform the following steps to create a 4-d array of: # N x H x W x C dimensions as 100 x 100 x 100 x 3. dirname = ... # Load every image file in the provided directory filenames = [os.path.join(dirname, fname) for fname in os.listdir(dirname)] # Make sure we have exactly 100 image files! filenames = filenames[:100] assert(len(filenames) == 100) # Read every filename as an RGB image imgs = [plt.imread(fname)[..., :3] for fname in filenames] # Crop every image to a square imgs = [utils.imcrop_tosquare(img_i) for img_i in imgs] # Then resize the square image to 100 x 100 pixels imgs = [resize(img_i, (100, 100)) for img_i in imgs] # Finally make our list of 3-D images a 4-D array with the first dimension the number of images: imgs = np.array(imgs).astype(np.float32) # Plot the resulting dataset: # Make sure you "run" this cell after you create your `imgs` variable as a 4-D array! # Make sure we have a 100 x 100 x 100 x 3 dimension array assert(imgs.shape == (100, 100, 100, 3)) plt.figure(figsize=(10, 10)) plt.imshow(utils.montage(imgs, saveto='dataset.png'))
session-1/session-1.ipynb
goddoe/CADL
apache-2.0
<a name="part-two---compute-the-mean"></a> Part Two - Compute the Mean <a name="instructions-1"></a> Instructions First use Tensorflow to define a session. Then use Tensorflow to create an operation which takes your 4-d array and calculates the mean color image (100 x 100 x 3) using the function tf.reduce_mean. Have a look at the documentation for this function to see how it works in order to get the mean of every pixel and get an image of (100 x 100 x 3) as a result. You'll then calculate the mean image by running the operation you create with your session (e.g. <code>sess.run(...)</code>). Finally, plot the mean image, save it, and then include this image in your zip file as <b>mean.png</b>. <a name="code-1"></a> Code <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
# First create a tensorflow session sess = ... # Now create an operation that will calculate the mean of your images mean_img_op = ... # And then run that operation using your session mean_img = sess.run(mean_img_op) # Then plot the resulting mean image: # Make sure the mean image is the right size! assert(mean_img.shape == (100, 100, 3)) plt.figure(figsize=(10, 10)) plt.imshow(mean_img) plt.imsave(arr=mean_img, fname='mean.png')
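One possible way to fill in the blanks above (a sketch in the TensorFlow 1.x style this notebook uses, assuming tensorflow has been imported as tf earlier; your own solution may differ):

# Sketch: possible completion of the TODO above
sess = tf.Session()
mean_img_op = tf.reduce_mean(imgs, axis=0)  # average over the N images -> 100 x 100 x 3
mean_img = sess.run(mean_img_op)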
session-1/session-1.ipynb
goddoe/CADL
apache-2.0
Once you have seen the mean image of your dataset, how does it relate to your own expectations of the dataset? Did you expect something different? Was there something more "regular" or "predictable" about your dataset that the mean image did or did not reveal? If your mean image looks a lot like something recognizable, it's a good sign that there is a lot of predictability in your dataset. If your mean image looks like nothing at all, a gray blob where not much seems to stand out, then it's pretty likely that there isn't very much in common between your images. Neither is a bad scenario. Though, it is more likely that having some predictability in your mean image, e.g. something recognizable, that there are representations worth exploring with deeper networks capable of representing them. However, we're only using 100 images so it's a very small dataset to begin with. <a name="part-three---compute-the-standard-deviation"></a> Part Three - Compute the Standard Deviation <a name="instructions-2"></a> Instructions Now use tensorflow to calculate the standard deviation and upload the standard deviation image averaged across color channels as a "jet" heatmap of the 100 images. This will be a little more involved as there is no operation in tensorflow to do this for you. However, you can do this by calculating the mean image of your dataset as a 4-D array. To do this, you could write e.g. mean_img_4d = tf.reduce_mean(imgs, axis=0, keep_dims=True) to give you a 1 x H x W x C dimension array calculated on the N x H x W x C images variable. The axis parameter is saying to calculate the mean over the 0th dimension, meaning for every possible H, W, C, or for every pixel, you will have a mean composed over the N possible values it could have had, or what that pixel was for every possible image. This way, you can write images - mean_img_4d to give you a N x H x W x C dimension variable, with every image in your images array having been subtracted by the mean_img_4d. If you calculate the square root of the expected squared differences of this resulting operation, you have your standard deviation! In summary, you'll need to write something like: subtraction = imgs - tf.reduce_mean(imgs, axis=0, keep_dims=True), then reduce this operation using tf.sqrt(tf.reduce_mean(subtraction * subtraction, axis=0)) to get your standard deviation then include this image in your zip file as <b>std.png</b> <a name="code-2"></a> Code <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
# Create a tensorflow operation to give you the standard deviation # First compute the difference of every image with a # 4 dimensional mean image shaped 1 x H x W x C mean_img_4d = ... subtraction = imgs - mean_img_4d # Now compute the standard deviation by calculating the # square root of the expected squared differences std_img_op = tf.sqrt(tf.reduce_mean(subtraction * subtraction, axis=0)) # Now calculate the standard deviation using your session std_img = sess.run(std_img_op) # Then plot the resulting standard deviation image: # Make sure the std image is the right size! assert(std_img.shape == (100, 100) or std_img.shape == (100, 100, 3)) plt.figure(figsize=(10, 10)) std_img_show = std_img / np.max(std_img) plt.imshow(std_img_show) plt.imsave(arr=std_img_show, fname='std.png')
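A possible value for the remaining blank, taken directly from the expression given in the instructions for this part (TF 1.x keep_dims argument assumed):

# Sketch: 4-D mean image, shape 1 x 100 x 100 x 3
mean_img_4d = tf.reduce_mean(imgs, axis=0, keep_dims=True)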
session-1/session-1.ipynb
goddoe/CADL
apache-2.0
Once you have plotted your dataset's standard deviation per pixel, what does it reveal about your dataset? Like with the mean image, you should consider what is predictable and not predictable about this image. <a name="part-four---normalize-the-dataset"></a> Part Four - Normalize the Dataset <a name="instructions-3"></a> Instructions Using tensorflow, we'll attempt to normalize your dataset using the mean and standard deviation. <a name="code-3"></a> Code <h3><font color='red'>TODO! COMPLETE THIS SECTION!</font></h3>
norm_imgs_op = ... norm_imgs = sess.run(norm_imgs_op) print(np.min(norm_imgs), np.max(norm_imgs)) print(imgs.dtype) # Then plot the resulting normalized dataset montage: # Make sure we have a 100 x 100 x 100 x 3 dimension array assert(norm_imgs.shape == (100, 100, 100, 3)) plt.figure(figsize=(10, 10)) plt.imshow(utils.montage(norm_imgs, 'normalized.png'))
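One possible normalization operation for the blank above (a sketch: subtract the 4-D mean and divide by the per-pixel standard deviation computed earlier; other normalizations are equally valid):

# Sketch: z-score-style normalization using the mean and std from the previous parts
norm_imgs_op = (imgs - mean_img_4d) / std_img  # consider a small epsilon if std contains zeros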
session-1/session-1.ipynb
goddoe/CADL
apache-2.0