Plot the spectrum!
# turn on interactive plotting
%matplotlib notebook
spax.flux.plot()
docs/sphinx/jupyter/first-steps.ipynb
sdss/marvin
bsd-3-clause
Save plot to Downloads directory:
# To save the plot, we need to draw it in the same cell as the save command.
spax.flux.plot()
import os
plt.savefig(os.getenv('HOME') + '/Downloads/my-first-spectrum.png')
# NOTE - if you are using the latest version of iPython and Jupyter notebooks, then interactive matplotlib plots
# should be enabled. You can save the figure with the save icon in the interactive toolbar.
docs/sphinx/jupyter/first-steps.ipynb
sdss/marvin
bsd-3-clause
Read variables and units We assume the data file is present in the following directory:
datafile = "~/CMEMS_INSTAC/INSITU_MED_NRT_OBSERVATIONS_013_035/history/mooring/IR_TS_MO_61198.nc"
PythonNotebooks/PlatformPlots/Plot_TimeSeries1.ipynb
ctroupin/CMEMS_INSTAC_Training
mit
We use the os module to expand the ~.
import os
datafile = os.path.expanduser(datafile)
with netCDF4.Dataset(datafile, 'r') as ds:
    time_values = ds.variables['TIME'][:]
    temperature_values = ds.variables['TEMP'][:]
    temperatureQC = ds.variables['TEMP_QC'][:]
    time_units = ds.variables['TIME'].units
    temperature_units = ds.variables['TEMP'].units
time2 = netCDF4.num2date(time_values, time_units)
PythonNotebooks/PlatformPlots/Plot_TimeSeries1.ipynb
ctroupin/CMEMS_INSTAC_Training
mit
We also mask the temperature values that have quality flag not equal to 1.
temperature_values = np.ma.masked_where(temperatureQC != 1, temperature_values)
PythonNotebooks/PlatformPlots/Plot_TimeSeries1.ipynb
ctroupin/CMEMS_INSTAC_Training
mit
Basic plot We create the simplest possible plot, without any additional options.
fig = plt.figure()
plt.plot(time2, temperature_values)
plt.ylabel(temperature_units)
PythonNotebooks/PlatformPlots/Plot_TimeSeries1.ipynb
ctroupin/CMEMS_INSTAC_Training
mit
Main problems:
* The figure is not large enough.
* The labels are too small.

Improved plot With some commands the previous plot can be improved:
* The figure size is increased
* The font size is set to 20 (pts)
* The year labels are rotated 45°
mpl.rcParams.update({'font.size': 20})
fig = plt.figure(figsize=(15, 8))
ax = fig.add_subplot(111)
plt.plot(time2, temperature_values, linewidth=0.5)
plt.ylabel(temperature_units)
plt.xlabel('Year')
fig.autofmt_xdate()
plt.grid()
PythonNotebooks/PlatformPlots/Plot_TimeSeries1.ipynb
ctroupin/CMEMS_INSTAC_Training
mit
Final version We want to add a title containing the coordinates of the station. Longitude and latitude are both stored as vectors, but we will only keep the mean position to be included in the title. LaTeX syntax can be used, as in this example, with the degree symbol.
with netCDF4.Dataset(datafile, 'r') as ds:
    lon = ds.variables['LONGITUDE'][:]
    lat = ds.variables['LATITUDE'][:]

figure_title = 'Temperature evolution at\n%s$^\\circ$E, %s$^\\circ$N' % (lon.mean(), lat.mean())
print(figure_title)
PythonNotebooks/PlatformPlots/Plot_TimeSeries1.ipynb
ctroupin/CMEMS_INSTAC_Training
mit
The units for the temperature are also changed:
temperature_units2 = '($^{\circ}$C)'
fig = plt.figure(figsize=(15, 8))
ax = fig.add_subplot(111)
ax.xaxis.set_major_locator(dates.YearLocator(base=2))
ax.xaxis.set_minor_locator(dates.YearLocator())
plt.plot(time2, temperature_values, linewidth=0.5)
plt.ylabel(temperature_units2, rotation=0., horizontalalignment='right')
plt.title(figure_title)
plt.xlabel('Year')
fig.autofmt_xdate()
plt.grid()
PythonNotebooks/PlatformPlots/Plot_TimeSeries1.ipynb
ctroupin/CMEMS_INSTAC_Training
mit
Let's take a look at unique values for some of the columns:
litigation['Boro'].unique()
litigation.groupby(by = ['Boro','CaseJudgement']).count()
src/bryan analyses/Hack for Heat #1.ipynb
heatseeknyc/data-science
mit
The above table tells us that Manhattan has the lowest proportion of cases that receive judgement (about 1 in 80), whereas Staten Island has the highest (about 1 in 12). It may be something worth looking into, but it's also important to note that many cases settle out of court, and landlords in Manhattan may be more willing (or able) to do so.
litigation['CaseType'].unique()
litigation.groupby(by = ['CaseType', 'CaseJudgement']).count()
src/bryan analyses/Hack for Heat #1.ipynb
heatseeknyc/data-science
mit
The table above shows the same case judgement proportions, but conditioned on what type of case it was. Unhelpfully, the documentation does not specify what the difference between Access Warrant - Lead and Non-Lead is. It could be one of two possibilities: the first is whether the warrants have to do with lead-based paint, which is a common problem, but perhaps still too idiosyncratic to have its own warrant type. The second, perhaps more likely possibility is whether or not HPD was the lead party in the case. We'll probably end up using these data by aggregating them and examining how complaints change over time, perhaps as a function of what type they are. There's also the possibility of looking up specific buildings' complaints and tying them to landlords. There's probably also an easy way to join this dataset with another, by converting the address information into something standardized, like borough-block-lot (BBL; http://www1.nyc.gov/nyc-resources/service/1232/borough-block-lot-bbl-lookup). HPD complaints Next, we're going to look at a dataset of HPD complaints.
hpdcomp = pd.read_csv('Housing_Maintenance_Code_Complaints.csv')
hpdcomp.head()
len(hpdcomp)
src/bryan analyses/Hack for Heat #1.ipynb
heatseeknyc/data-science
mit
This dataset is less useful on its own. It doesn't tell us what the type of complaint was, only the date it was received and whether or not the complaint is still open. However, it may be useful in conjunction with the earlier dataset. For example, we might be interested in how many of these complaints end up in court (or at least, have some sort of legal action taken). HPD violations The following dataset tracks HPD violations.
hpdviol = pd.read_csv('Housing_Maintenance_Code_Violations.csv')
hpdviol.head()
len(hpdviol)
src/bryan analyses/Hack for Heat #1.ipynb
heatseeknyc/data-science
mit
These datasets all have different lengths, but that's not surprising, given they come from different years. One productive initial step would be to convert the date strings into something numerical. HPD complaint problems database
hpdcompprob = pd.read_csv('Complaint_Problems.csv')
hpdcompprob.head()
src/bryan analyses/Hack for Heat #1.ipynb
heatseeknyc/data-science
mit
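As a quick illustration of the date-conversion step mentioned above, here is a minimal sketch; the column name 'ReceivedDate' and the use of the complaints dataframe are assumptions and may need adjusting to the actual CSV schema.

```python
# Hypothetical sketch: turn date strings into datetimes for later grouping.
# 'ReceivedDate' is an assumed column name; adjust to the real schema.
import pandas as pd

hpdcomp['ReceivedDate'] = pd.to_datetime(hpdcomp['ReceivedDate'], errors='coerce')
hpdcomp['ReceivedDate'].dt.year.value_counts().sort_index()
```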
You can grab any part of the datetime object you want
my_date.day
my_date_time.hour
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/05-Pandas-with-Time-Series/01 - Datetime Index.ipynb
arcyfelix/Courses
apache-2.0
Pandas with Datetime Index You'll usually deal with time series as an index when working with pandas dataframes obtained from some sort of financial API. Fortunately pandas has a lot of functions and methods to work with time series!
# Create an example datetime list/array
first_two = [datetime(2016, 1, 1), datetime(2016, 1, 2)]
first_two

# Converted to an index
dt_ind = pd.DatetimeIndex(first_two)
dt_ind

# Attached to some random data
data = np.random.randn(2, 2)
print(data)
cols = ['A','B']
df = pd.DataFrame(data, dt_ind, cols)
df
df.index

# Latest Date Location
df.index.argmax()
df.index.max()

# Earliest Date Index Location
df.index.argmin()
df.index.min()
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/05-Pandas-with-Time-Series/01 - Datetime Index.ipynb
arcyfelix/Courses
apache-2.0
Summarize How do you access elements in a list? Predict what this code does.
some_list = [10,20,30,40]
print(some_list[1:3])

some_list = [10,20,30]
print(some_list[:3])
chapters/00_inductive-python/05_lists.ipynb
harmsm/pythonic-science
unlicense
Summarize What does the ":" symbol do? Modify Change the cell below so it prints the second through fourth elements in the list.
some_list = [0,10,20,30,40,50,60,70]
print(some_list[2:4])
chapters/00_inductive-python/05_lists.ipynb
harmsm/pythonic-science
unlicense
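One possible answer to the "Modify" prompt above, reading "second through fourth elements" as indices 1 through 3:

```python
some_list = [0,10,20,30,40,50,60,70]
print(some_list[1:4])   # prints the second, third, and fourth elements
```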
Setting values in lists Predict what this code does.
some_list = [10,20,30]
some_list[0] = 50
print(some_list)
chapters/00_inductive-python/05_lists.ipynb
harmsm/pythonic-science
unlicense
Predict what this code does.
some_list = []
for i in range(5):
    some_list.append(i)
print(some_list)
chapters/00_inductive-python/05_lists.ipynb
harmsm/pythonic-science
unlicense
Predict what this code does.
some_list = [1,2,3]
some_list.insert(2,5)
print(some_list)

some_list = [10,20,30]
some_list.pop(1)
print(some_list)

some_list = [10,20,30]
some_list.remove(30)
print(some_list)
chapters/00_inductive-python/05_lists.ipynb
harmsm/pythonic-science
unlicense
Summarize How can you change entries in a list? Implement Write a program that creates a list with all integers from 0 to 9 and then replaces the 5 with the number 423. Miscellaneous List Stuff
# You can put anything in a list
some_list = ["test",1,1.52323,print]

# You can even put a list in a list
some_list = [[1,2,3],[4,5,6],[7,8,9]]  # a list of three lists!

# You can get the length of a list with len(some_list)
some_list = [10,20,30]
print(len(some_list))
chapters/00_inductive-python/05_lists.ipynb
harmsm/pythonic-science
unlicense
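A sketch of one possible solution to the "Implement" prompt above (build 0 through 9, then replace the 5 with 423):

```python
some_list = list(range(10))   # [0, 1, ..., 9]
some_list[5] = 423            # the value 5 happens to sit at index 5
print(some_list)
```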
Copying lists (a confusing point for python programmers) Predict what this code does.
some_list = [10,20,30]
another_list = some_list
some_list[0] = 50
print(some_list)
print(another_list)
chapters/00_inductive-python/05_lists.ipynb
harmsm/pythonic-science
unlicense
Predict what this code does.
import copy
some_list = [10,20,30]
another_list = copy.deepcopy(some_list)
some_list[0] = 50
print(some_list)
print(another_list)
chapters/00_inductive-python/05_lists.ipynb
harmsm/pythonic-science
unlicense
FIR Filter Design Both floating-point and fixed-point FIR filters are the objective here. We will also need a means to export the filter coefficients to header files. Header export functions for float32_t and int16_t formats are provided below. The next step is to actually design some filters using functions found in scipy.signal. To support both of these activities the Python modules fir_design_helper.py and coeff2header.py are available. Note: The MATLAB signal processing toolbox is extremely comprehensive in its support of digital filter design. The use of Python is adequate for this, but do not ignore the power available in MATLAB. Windowed (Kaiser window) and Equal-Ripple FIR Filter Design The module fir_design_helper.py contains custom filter design code built on top of functions found in scipy.signal. Functions are available for windowed FIR design using a Kaiser window function and for equal-ripple FIR design; both types have linear phase. Example: Lowpass with $f_s = 1$ Hz For this 31 tap filter we choose the cutoff frequency to be $F_c = F_s/8$, or in normalized form $f_c = 1/8$.
b_k = fir_d.firwin_kaiser_lpf(1/8,1/6,50,1.0)
b_r = fir_d.fir_remez_lpf(1/8,1/6,0.2,50,1.0)

fir_d.freqz_resp_list([b_k,b_r],[[1],[1]],'dB',fs=1)
ylim([-80,5])
title(r'Kaiser vs Equal Ripple Lowpass')
ylabel(r'Filter Gain (dB)')
xlabel(r'Frequency in kHz')
legend((r'Kaiser: %d taps' % len(b_k), r'Remez: %d taps' % len(b_r)), loc='best')
grid();
tutorial_part1/FIR Filter Design and C Headers.ipynb
mwickert/SP-Comm-Tutorial-using-scikit-dsp-comm
bsd-2-clause
A Highpass Design
b_k_hp = fir_d.firwin_kaiser_hpf(1/8,1/6,50,1.0)
b_r_hp = fir_d.fir_remez_hpf(1/8,1/6,0.2,50,1.0)

fir_d.freqz_resp_list([b_k_hp,b_r_hp],[[1],[1]],'dB',fs=1)
ylim([-80,5])
title(r'Kaiser vs Equal Ripple Highpass')
ylabel(r'Filter Gain (dB)')
xlabel(r'Frequency in kHz')
legend((r'Kaiser: %d taps' % len(b_k_hp), r'Remez: %d taps' % len(b_r_hp)), loc='best')
grid();
tutorial_part1/FIR Filter Design and C Headers.ipynb
mwickert/SP-Comm-Tutorial-using-scikit-dsp-comm
bsd-2-clause
Plot a Pole-Zero Map for the Equal-Ripple Design
ss.zplane(b_r_hp,[1]) # the b and a coefficient arrays
tutorial_part1/FIR Filter Design and C Headers.ipynb
mwickert/SP-Comm-Tutorial-using-scikit-dsp-comm
bsd-2-clause
A Bandpass Design
b_k_bp = fir_d.firwin_kaiser_bpf(7000,8000,14000,15000,50,48000)
b_r_bp = fir_d.fir_remez_bpf(7000,8000,14000,15000,0.2,50,48000)

fir_d.freqz_resp_list([b_k_bp,b_r_bp],[[1],[1]],'dB',fs=48)
ylim([-80,5])
title(r'Kaiser vs Equal Ripple Bandpass')
ylabel(r'Filter Gain (dB)')
xlabel(r'Frequency in kHz')
legend((r'Kaiser: %d taps' % len(b_k_bp),
        r'Remez: %d taps' % len(b_r_bp)),
       loc='lower right')
grid();
tutorial_part1/FIR Filter Design and C Headers.ipynb
mwickert/SP-Comm-Tutorial-using-scikit-dsp-comm
bsd-2-clause
Exporting Coefficients to Header Files Once a filter design is complete it can be exported as a C header file using FIR_header() for floating-point designs and FIR_fix_header() for 16-bit fixed-point designs.

Float Header Export
```python
def FIR_header(fname_out, h):
    """
    Write FIR Filter Header Files
    """
```

16 Bit Signed Integer Header Export
```python
def FIR_fix_header(fname_out, h):
    """
    Write FIR Fixed-Point Filter Header Files
    """
```

These functions are available in coeff2header.py, which was imported as c2h above. Write a Header File for the Bandpass Equal-Ripple
# Write a C header file
c2h.FIR_header('remez_8_14_bpf_f32.h', b_r_bp)
tutorial_part1/FIR Filter Design and C Headers.ipynb
mwickert/SP-Comm-Tutorial-using-scikit-dsp-comm
bsd-2-clause
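For a 16-bit fixed-point target, the companion helper mentioned above can be called the same way; this sketch is based on the quoted signature, so check coeff2header.py for any additional scaling arguments.

```python
# Write a fixed-point (int16_t) C header for the same bandpass design.
c2h.FIR_fix_header('remez_8_14_bpf_i16.h', b_r_bp)
```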
The header file, remez_8_14_bpf_f32.h, written above takes the form:

```c
// define a FIR coefficient Array
#include <stdint.h>

#ifndef M_FIR
#define M_FIR 101
#endif

/************************************************************************/
/*                      FIR Filter Coefficients                         */
float32_t h_FIR[M_FIR] = {-0.001475936747, 0.000735580994, 0.004771062558,
                           0.001254178712,-0.006176846780,-0.001755945520,
                           0.003667323660, 0.001589634576, 0.000242520766,
                           0.002386316353,-0.002699251419,-0.006927087152,
                           0.002072374590, 0.006247819434,-0.000017122009,
                           0.000544273776, 0.001224920394,-0.008238424843,
                          -0.005846603175, 0.009688130613, 0.007237935594,
                          -0.003554185785, 0.000423864572,-0.002894644665,
                          -0.013460012489, 0.002388684318, 0.019352295029,
                           0.002144732872,-0.009232278407, 0.000146728997,
                          -0.010111394762,-0.013491956909, 0.020872121644,
                           0.025104278030,-0.013643042233,-0.015018451283,
                          -0.000068299117,-0.019644863999, 0.000002861510,
                           0.052822261169, 0.015289946639,-0.049012297911,
                          -0.016642744836,-0.000164469072,-0.032121234463,
                           0.059953731027, 0.133383985599,-0.078819553619,
                          -0.239811117665, 0.036017541207, 0.285529343096,
                           0.036017541207,-0.239811117665,-0.078819553619,
                           0.133383985599, 0.059953731027,-0.032121234463,
                          -0.000164469072,-0.016642744836,-0.049012297911,
                           0.015289946639, 0.052822261169, 0.000002861510,
                          -0.019644863999,-0.000068299117,-0.015018451283,
                          -0.013643042233, 0.025104278030, 0.020872121644,
                          -0.013491956909,-0.010111394762, 0.000146728997,
                          -0.009232278407, 0.002144732872, 0.019352295029,
                           0.002388684318,-0.013460012489,-0.002894644665,
                           0.000423864572,-0.003554185785, 0.007237935594,
                           0.009688130613,-0.005846603175,-0.008238424843,
                           0.001224920394, 0.000544273776,-0.000017122009,
                           0.006247819434, 0.002072374590,-0.006927087152,
                          -0.002699251419, 0.002386316353, 0.000242520766,
                           0.001589634576, 0.003667323660,-0.001755945520,
                          -0.006176846780, 0.001254178712, 0.004771062558,
                           0.000735580994,-0.001475936747};
/************************************************************************/
```

This file can be included in the main module of an ARM Cortex M4 microcontroller project, e.g., using the Cypress FM4 $50 dev kit.
f_AD, Mag_AD, Phase_AD = loadtxt('BPF_8_14_101tap_48k.csv',
                                 delimiter=',', skiprows=6, unpack=True)
fir_d.freqz_resp_list([b_r_bp],[[1]],'dB',fs=48)
ylim([-80,5])
plot(f_AD/1e3, Mag_AD+.5)
title(r'Equal Ripple Bandpass Theory vs Measured')
ylabel(r'Filter Gain (dB)')
xlabel(r'Frequency in kHz')
legend((r'Equiripple Theory: %d taps' % len(b_r_bp),
        r'AD Measured (0.5dB correct)'), loc='lower right', fontsize='medium')
grid();
tutorial_part1/FIR Filter Design and C Headers.ipynb
mwickert/SP-Comm-Tutorial-using-scikit-dsp-comm
bsd-2-clause
FIR Design Problem Now it's time to design and implement your own FIR filter using the filter design tools of fir_design_helper.py. The assignment here is to complete a design using a sampling rate of 48 kHz having an equiripple FIR lowpass response with a 1 dB cutoff frequency at 5 kHz, a passband ripple of 1 dB, and stopband attenuation of 60 dB starting at 6.5 kHz. See Figure 9 for a graphical depiction of these amplitude response requirements.
Image('images/FIR_LPF_Design.png',width='100%')
tutorial_part1/FIR Filter Design and C Headers.ipynb
mwickert/SP-Comm-Tutorial-using-scikit-dsp-comm
bsd-2-clause
We can test this filter in Lab3 using PyAudio for real-time DSP.
# Design the filter here
tutorial_part1/FIR Filter Design and C Headers.ipynb
mwickert/SP-Comm-Tutorial-using-scikit-dsp-comm
bsd-2-clause
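A sketch of one possible design, mirroring the lowpass call used earlier in this notebook; the argument order follows fir_remez_lpf(f_pass, f_stop, d_pass, d_stop, fs) as used above and should be checked against the fir_design_helper docstring.

```python
# Equiripple lowpass: 1 dB cutoff at 5 kHz, 60 dB stopband from 6.5 kHz, fs = 48 kHz.
b_lpf = fir_d.fir_remez_lpf(5000, 6500, 1.0, 60, 48000)
print('Number of taps: %d' % len(b_lpf))
```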
Plot the magnitude response and phase response, and the pole-zero plot Using the freqz_resp_list ```Python def freqz_resp_list(b,a=np.array([1]),mode = 'dB',fs=1.0,Npts = 1024,fsize=(6,4)): """ A method for displaying a list filter frequency responses in magnitude, phase, and group delay. A plot is produced using matplotlib freqz_resp([b],[a],mode = 'dB',Npts = 1024,fsize=(6,4)) b = ndarray of numerator coefficients a = ndarray of denominator coefficents mode = display mode: 'dB' magnitude, 'phase' in radians, or 'groupdelay_s' in samples and 'groupdelay_t' in sec, all versus frequency in Hz Npts = number of points to plot; default is 1024 fsize = figure size; defult is (6,4) inches """ ```
# fill in the plotting details
tutorial_part1/FIR Filter Design and C Headers.ipynb
mwickert/SP-Comm-Tutorial-using-scikit-dsp-comm
bsd-2-clause
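One way to fill in the plots, reusing the helpers shown above; b_lpf is the (assumed) coefficient array from the design step.

```python
# Magnitude response
fir_d.freqz_resp_list([b_lpf], [[1]], 'dB', fs=48)
ylim([-80, 5]); grid()
title(r'Equiripple Lowpass Design')
ylabel(r'Filter Gain (dB)'); xlabel(r'Frequency in kHz')

# Phase response ('phase' mode per the docstring quoted above)
fir_d.freqz_resp_list([b_lpf], [[1]], 'phase', fs=48)

# Pole-zero plot, as in the earlier equal-ripple example
ss.zplane(b_lpf, [1])
```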
You can experiment with these parameters:
PLOT_TYPE_TEXT = False   # If you'd like to see indices
PLOT_VECTORS = True      # If you'd like to see your original features in P.C.-Space
Module5/Module5 - Lab4.ipynb
authman/DAT210x
mit
Some Convenience Functions
def drawVectors(transformed_features, components_, columns, plt):
    num_columns = len(columns)

    # This function will project your *original* feature (columns)
    # onto your principal component feature-space, so that you can
    # visualize how "important" each one was in the
    # multi-dimensional scaling

    # Scale the principal components by the max value in
    # the transformed set belonging to that component
    xvector = components_[0] * max(transformed_features[:,0])
    yvector = components_[1] * max(transformed_features[:,1])

    ## Visualize projections

    # Sort each column by its length. These are your *original*
    # columns, not the principal components.
    important_features = { columns[i] : math.sqrt(xvector[i]**2 + yvector[i]**2) for i in range(num_columns) }
    important_features = sorted(zip(important_features.values(), important_features.keys()), reverse=True)
    print("Projected Features by importance:\n", important_features)

    ax = plt.axes()

    for i in range(num_columns):
        # Use an arrow to project each original feature as a
        # labeled vector on your principal component axes
        plt.arrow(0, 0, xvector[i], yvector[i], color='b', width=0.0005,
                  head_width=0.02, alpha=0.75, zorder=600000)
        plt.text(xvector[i]*1.2, yvector[i]*1.2, list(columns)[i], color='b',
                 alpha=0.75, zorder=600000)

    return ax


def doPCA(data, dimensions=2):
    model = PCA(n_components=dimensions, svd_solver='randomized', random_state=7)
    model.fit(data)
    return model


def doKMeans(data, num_clusters=0):
    # TODO: Do the KMeans clustering here, passing in the # of clusters parameter
    # and fit it against your data. Then, return a tuple containing the cluster
    # centers and the labels.
    #
    # Hint: Just like with doPCA above, you will have to create a variable called
    # `model`, which will be a SKLearn K-Means model for this to work.

    # .. your code here ..

    return model.cluster_centers_, model.labels_
Module5/Module5 - Lab4.ipynb
authman/DAT210x
mit
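A sketch of how the doKMeans TODO above could be completed; it assumes KMeans is imported from sklearn.cluster, which the lab may already do elsewhere.

```python
from sklearn.cluster import KMeans

def doKMeans(data, num_clusters=0):
    # Fit a K-Means model and hand back the cluster centers and per-sample labels.
    model = KMeans(n_clusters=num_clusters)
    model.fit(data)
    return model.cluster_centers_, model.labels_
```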
Load up the dataset. It may or may not have nans in it. Make sure you catch them and destroy them, by setting them to 0. This is valid for this dataset, since if the value is missing, you can assume no money was spent on it.
# .. your code here ..
Module5/Module5 - Lab4.ipynb
authman/DAT210x
mit
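A minimal sketch for this step; the CSV path is an assumption and should be replaced with wherever the course stores the wholesale customers file.

```python
df = pd.read_csv('Datasets/Wholesale customers data.csv')  # assumed path
df = df.fillna(0)  # a missing value means no money was spent on that category
```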
As instructed, get rid of the Channel and Region columns, since you'll be investigating as if this were a single location wholesaler, rather than a national / international one. Leaving these fields in here would cause KMeans to examine and give weight to them:
# .. your code here ..
Module5/Module5 - Lab4.ipynb
authman/DAT210x
mit
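One way to drop the two columns, assuming df was loaded as above:

```python
df = df.drop(labels=['Channel', 'Region'], axis=1)
```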
Before unitizing / standardizing / normalizing your data in preparation for K-Means, it's a good idea to get a quick peek at it. You can do this using the .describe() method, or even by using the built-in pandas df.plot.hist():
# .. your code here ..
Module5/Module5 - Lab4.ipynb
authman/DAT210x
mit
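For example, either of the following gives a quick look at the distributions before any scaling:

```python
print(df.describe())
df.plot.hist()
```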
Having checked out your data, you may have noticed there's a pretty big gap between the top customers in each feature category and the rest. Some feature scaling algorithms won't get rid of outliers for you, so it's a good idea to handle that manually---particularly if your goal is NOT to determine the top customers. After all, you can do that with a simple Pandas .sort_values() and not a machine learning clustering algorithm. From a business perspective, you're probably more interested in clustering your +/- 2 standard deviation customers, rather than the top and bottom customers. Remove top 5 and bottom 5 samples for each column:
drop = {}
for col in df.columns:
    # Bottom 5
    sort = df.sort_values(by=col, ascending=True)
    if len(sort) > 5: sort = sort[:5]
    for index in sort.index: drop[index] = True  # Just store the index once

    # Top 5
    sort = df.sort_values(by=col, ascending=False)
    if len(sort) > 5: sort = sort[:5]
    for index in sort.index: drop[index] = True  # Just store the index once
Module5/Module5 - Lab4.ipynb
authman/DAT210x
mit
Drop rows by index. We do this all at once in case there is a collision. This way, we don't end up dropping more rows than we have to, if there is a single row that satisfies the drop for multiple columns. Since there are 6 columns, if we end up dropping fewer than 6*5*2 = 60 rows, that means there indeed were collisions:
print("Dropping {0} Outliers...".format(len(drop)))
df.drop(inplace=True, labels=drop.keys(), axis=0)
df.describe()
Module5/Module5 - Lab4.ipynb
authman/DAT210x
mit
What are you interested in? Depending on what you're interested in, you might take a different approach to normalizing/standardizing your data. You should note that all columns left in the dataset are of the same unit. You might ask yourself, do I even need to normalize / standardize the data? The answer depends on what you're trying to accomplish. For instance, although all the units are the same (generic money unit), the price per item in your store isn't. There may be some cheap items and some expensive ones. If your goal is to find out what items people tend to buy together, but you didn't "unitize" properly before running KMeans, the contribution of the lesser-priced item would be dwarfed by the more expensive item. This is an issue of scale.

For a great overview on a few of the normalization methods supported in SKLearn, please check out: https://stackoverflow.com/questions/30918781/right-function-for-normalizing-input-of-sklearn-svm

Suffice to say, at the end of the day, you're going to have to know what question you want answered and what data you have available in order to select the best method for your purpose. Luckily, SKLearn's interfaces are easy to switch out, so in the meantime you can experiment with all of them and see how they alter your results.

5-sec summary before you dive deeper online:

Normalization: Let's say your users spend a LOT. Normalization divides each item by the average overall amount of spending. Stated differently, your new feature is the contribution of overall spending going into that particular item: \$spent on feature / \$overall spent by sample.

MinMax: What % in the overall range of \$spent by all users on THIS particular feature is the current sample's feature at? When you're dealing with all the same units, this will produce a near face-value amount. Be careful though: if you have even a single outlier, it can cause all your data to get squashed up in lower percentages. Imagine your buyers usually spend \$100 on wholesale milk, but today only spent \$20. This is the relationship you're trying to capture with MinMax. NOTE: MinMax doesn't standardize (std. dev.); it only normalizes / unitizes your feature, in the mathematical sense. MinMax can be used as an alternative to zero mean, unit variance scaling. [(sampleFeatureValue - min) / (max - min)] * (max - min) + min, where min and max are for the overall feature values for all samples.

Back to The Assignment: Un-comment just ONE of the lines at a time and see how it alters your results. Pay attention to the direction of the arrows, as well as their LENGTHS:
#T = preprocessing.StandardScaler().fit_transform(df)
#T = preprocessing.MinMaxScaler().fit_transform(df)
#T = preprocessing.MaxAbsScaler().fit_transform(df)
#T = preprocessing.Normalizer().fit_transform(df)
T = df  # No Change
Module5/Module5 - Lab4.ipynb
authman/DAT210x
mit
Sometimes people perform PCA before doing KMeans, so that KMeans only operates on the most meaningful features. In our case, there are so few features that doing PCA ahead of time isn't really necessary, and you can do KMeans in feature space. But keep in mind you have the option to transform your data to bring down its dimensionality. If you take that route, then your Clusters will already be in PCA-transformed feature space, and you won't have to project them again for visualization.
# Do KMeans
n_clusters = 3
centroids, labels = doKMeans(T, n_clusters)
Module5/Module5 - Lab4.ipynb
authman/DAT210x
mit
Print out your centroids. They're currently in feature-space, which is good. Print them out before you transform them into PCA space for viewing
# .. your code here ..
Module5/Module5 - Lab4.ipynb
authman/DAT210x
mit
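A minimal sketch; wrapping the centroid array in a DataFrame with the feature names (assuming T kept df's column order) makes it easier to read.

```python
print(centroids)
print(pd.DataFrame(centroids, columns=df.columns))
```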
Now that we've run KMeans on our data, let's do PCA, using it as a tool to visualize the results. Project the centroids as well as the samples into the new 2D feature space for visualization purposes:
display_pca = doPCA(T)
T = display_pca.transform(T)
CC = display_pca.transform(centroids)
Module5/Module5 - Lab4.ipynb
authman/DAT210x
mit
Visualize all the samples. Give them the color of their cluster label
fig = plt.figure()
ax = fig.add_subplot(111)

if PLOT_TYPE_TEXT:
    # Plot the index of the sample, so you can further investigate it in your dset
    for i in range(len(T)):
        ax.text(T[i,0], T[i,1], df.index[i], color=c[labels[i]], alpha=0.75, zorder=600000)
    ax.set_xlim(min(T[:,0])*1.2, max(T[:,0])*1.2)
    ax.set_ylim(min(T[:,1])*1.2, max(T[:,1])*1.2)
else:
    # Plot a regular scatter plot
    sample_colors = [ c[labels[i]] for i in range(len(T)) ]
    ax.scatter(T[:, 0], T[:, 1], c=sample_colors, marker='o', alpha=0.2)
Module5/Module5 - Lab4.ipynb
authman/DAT210x
mit
Plot the Centroids as X's, and label them
ax.scatter(CC[:, 0], CC[:, 1], marker='x', s=169, linewidths=3, zorder=1000, c=c)
for i in range(len(centroids)):
    ax.text(CC[i, 0], CC[i, 1], str(i), zorder=500010, fontsize=18, color=c[i])

# Display feature vectors for investigation:
if PLOT_VECTORS:
    drawVectors(T, display_pca.components_, df.columns, plt)

# Add the cluster label back into the dataframe and display it:
df['label'] = pd.Series(labels, index=df.index)
df

plt.show()
Module5/Module5 - Lab4.ipynb
authman/DAT210x
mit
2. Querying image: matrix, sub-matrices, ROI
print('-----------------------------------------------------------------------')
print('Image shape is', imageFromWeb.shape, 'and type is', type(imageFromWeb))
print('Min =', imageFromWeb.min(), ",Mean =", imageFromWeb.mean(), ',Max = ', imageFromWeb.max())
print('dtype = ', imageFromWeb.dtype)
print('-----------------------------------------------------------------------')

# Cropping an image
facecolor = imageFromWeb[50:115, 95:140]
plt.imshow(facecolor)
plt.title('Holly')
code/ZeissMicroscopyCenter2017_DaniUshizima_lecture.ipynb
dani-lbnl/2017_ucberkeley_course
gpl-3.0
3. Image transformations
import numpy as np import matplotlib.pyplot as plt from skimage.color import rgb2gray from skimage.filters import sobel from skimage.filters.rank import mean, equalize from skimage.morphology import disk from skimage import exposure from skimage.morphology import reconstruction from skimage import img_as_ubyte, img_as_float # Turn color image into grayscale representation face = rgb2gray(facecolor) face = img_as_ubyte(face) #this generates the warning hist = np.histogram(face, bins=np.arange(0, 256)) fig, ax = plt.subplots(ncols=2, figsize=(10, 5)) ax[0].imshow(face, interpolation='nearest', cmap=plt.cm.gray) ax[0].axis('off') ax[1].plot(hist[1][:-1], hist[0], lw=1) ax[1].set_title('Histogram of gray values') plt.tight_layout() # Smoothing smoothed = img_as_ubyte(mean(face, disk(2))) #smoothPill = ndi.median_filter(edgesPill.astype(np.uint16), 3) # Global equalization equalized = exposure.equalize_hist(face) # Extract edges edge_sobel = sobel(face) # Masking mask = face < 80 facemask = face.copy() # Set to "white" (255) pixels where mask is True facemask[mask] = 255 #facemask = img_as_uint(facemask) fig, ax = plt.subplots(ncols=5, sharex=True, sharey=True, figsize=(10, 4)) ax[0].imshow(face, cmap='gray') ax[0].set_title('Original') ax[1].imshow(smoothed, cmap='gray') ax[1].set_title('Smoothing') ax[2].imshow(equalized, cmap='gray') ax[2].set_title('Equalized') ax[3].imshow(edge_sobel, cmap='gray') ax[3].set_title('Sobel Edge Detection') ax[4].imshow(facemask, cmap='gray') ax[4].set_title('Masked <50') for a in ax: a.axis('off') plt.tight_layout() plt.show()
code/ZeissMicroscopyCenter2017_DaniUshizima_lecture.ipynb
dani-lbnl/2017_ucberkeley_course
gpl-3.0
4. Immunohistochemistry example from scikit-image More at: http://scikit-image.org/docs/dev/api/skimage.data.html#skimage.data.immunohistochemistry
imgMicro = data.immunohistochemistry()
plt.imshow(imgMicro)
code/ZeissMicroscopyCenter2017_DaniUshizima_lecture.ipynb
dani-lbnl/2017_ucberkeley_course
gpl-3.0
5. Segmentation and feature extraction
import matplotlib.patches as mpatches from skimage import data from skimage.filters import threshold_otsu from skimage.segmentation import clear_border from skimage.measure import label, regionprops from skimage.morphology import closing, square from skimage.color import label2rgb # create a subimage for tests image = imgMicro[300:550, 200:400, 2] # apply threshold thresh = threshold_otsu(image) bw = closing(image > thresh, square(3)) # remove artifacts connected to image border cleared = clear_border(bw) # label image regions label_image = label(cleared) image_label_overlay = label2rgb(label_image, image=image) fig, ax = plt.subplots(figsize=(10, 6)) ax.imshow(image_label_overlay) for region in regionprops(label_image): # take regions with large enough areas if region.area >= 50: # draw rectangle around segmented coins minr, minc, maxr, maxc = region.bbox rect = mpatches.Rectangle((minc, minr), maxc - minc, maxr - minr, fill=False, edgecolor='red', linewidth=2) ax.add_patch(rect) ax.set_axis_off() plt.tight_layout() plt.show() #plt.imshow(bw,cmap=plt.cm.gray)
code/ZeissMicroscopyCenter2017_DaniUshizima_lecture.ipynb
dani-lbnl/2017_ucberkeley_course
gpl-3.0
6. Save information as a xls file
# Calculate regions properties from label_image regions = regionprops(label_image) for i in range(len(regions)): all_props = {p:regions[i][p] for p in regions[i] if p not in ('image','convex_image','filled_image')} for p, v in list(all_props.items()): if isinstance(v,np.ndarray): if(len(v.shape)>1): del all_props[p] for p, v in list(all_props.items()): try: L = len(v) except: L = 1 if L>1: del all_props[p] for n,entry in enumerate(v): all_props[p + str(n)] = entry k = ", ".join(all_props.keys()) v = ", ".join([str(f) for f in all_props.values()]) #notice you need to convert numbers to strings if(i==0): with open('cellsProps.csv','w') as f: #f.write(k) f.writelines([k,'\n',v,'\n']) else: with open('cellsProps.csv','a') as f: #f.write(k) f.writelines([v,'\n'])
code/ZeissMicroscopyCenter2017_DaniUshizima_lecture.ipynb
dani-lbnl/2017_ucberkeley_course
gpl-3.0
7. Simulating 2D images - "cells"
# Test from skimage.draw import circle img = np.zeros((50, 50), dtype=np.uint8) rr, cc = circle(25, 25, 5) img[rr, cc] = 1 plt.imshow(img,cmap='gray') %matplotlib inline import numpy as np import random import math from matplotlib import pyplot as plt import matplotlib.patches as mpatches from skimage import data, io from skimage.draw import circle def createMyCells(width, height, r, num_cells): image = np.zeros((width,height),dtype=np.uint8) imgx, imgy = image.shape nx = [] ny = [] ng = [] #Creates a synthetic set of points for i in range(num_cells): nx.append(random.randrange(imgx)) ny.append(random.randrange(imgy)) ng.append(random.randrange(256)) #Uses points as centers of circles for i in range(num_cells): rr, cc = circle(ny[i], nx[i], radius) if valid(ny[i],r,imgy) & valid(nx[i],r,imgx): image[rr, cc] = ng[i] return image def valid(v,radius,dim): if v<radius: return False else: if v>=dim-radius: return False else: return True width = 200 height = 200 radius = 5 num_cells = 50 image = createMyCells(width, height, radius, num_cells) plt.imshow(image)
code/ZeissMicroscopyCenter2017_DaniUshizima_lecture.ipynb
dani-lbnl/2017_ucberkeley_course
gpl-3.0
8. Simulate particles with Scikit-learn -> sklearn
from sklearn.cluster import MeanShift, estimate_bandwidth from sklearn.datasets.samples_generator import make_blobs n = 1000 clusterSD = 10 #proportional to the pool size centers = [[50,50], [100, 100], [100, 200], [150,150], [200, 100], [200,200]] X, _ = make_blobs(n_samples=n, centers=centers, cluster_std=clusterSD) image = np.zeros(shape=(300,300), dtype=np.uint8) for i in X: x,y=i.astype(np.uint8) #print(x,',',y) image[x,y]=255 plt.imshow(image,cmap=plt.cm.gray) myquantile=0.15 #Change this parameter (smaller numbers will produce smaller clusters and more numerous) bandwidth = estimate_bandwidth(X, quantile=myquantile, n_samples=500) ms = MeanShift(bandwidth=bandwidth, bin_seeding=True) ms.fit(X) labels = ms.labels_ cluster_centers = ms.cluster_centers_ labels_unique = np.unique(labels) n_clusters_ = len(labels_unique) print("number of estimated clusters : %d" % n_clusters_)
code/ZeissMicroscopyCenter2017_DaniUshizima_lecture.ipynb
dani-lbnl/2017_ucberkeley_course
gpl-3.0
9. Check particle neighborhood: groups (clustering algorithms)
import matplotlib.pyplot as plt from itertools import cycle plt.figure(1) plt.clf() colors = cycle('bgrcmykbgrcmykbgrcmykbgrcmyk') for k, col in zip(range(n_clusters_), colors): my_members = labels == k cluster_center = cluster_centers[k] plt.plot(X[my_members, 0], X[my_members, 1], col + '.') plt.plot(cluster_center[0], cluster_center[1], 'o', markerfacecolor=col, markeredgecolor='k', markersize=14) plt.title('Estimated number of clusters: %d' % n_clusters_) plt.show()
code/ZeissMicroscopyCenter2017_DaniUshizima_lecture.ipynb
dani-lbnl/2017_ucberkeley_course
gpl-3.0
10. Pandas and seaborn Pandas: http://pandas.pydata.org/ Seaborn: http://seaborn.pydata.org/
import numpy as np
import pandas as pd
from scipy import stats, integrate
import matplotlib.pyplot as plt
import seaborn as sns

df = pd.DataFrame(X, columns=["x", "y"])
# Kernel density estimation
sns.jointplot(x="x", y="y", data=df, kind="kde");
code/ZeissMicroscopyCenter2017_DaniUshizima_lecture.ipynb
dani-lbnl/2017_ucberkeley_course
gpl-3.0
s1: load raw data from AmazonReviews datasets
product_name = 'B00000JFIF'
reviewJsonFile = product_name + '.json'
product = Product(name=product_name)
product.loadReviewsFromJsonFile('../data/trainingFiles/AmazonReviews/cameras/' + reviewJsonFile)
ipynbs/Dynamic_Aspect_Extraction_Part_B.ipynb
MachineLearningStudyGroup/Smart_Review_Summarization
mit
s2: define aspect patterns
aspectPatterns = []

# define an aspect pattern1
pattern_name = 'adj_nn'
pattern_structure ="""
adj_nn:{<JJ><NN.?>}
"""
aspectTagIndices = [1]
aspectPattern = AspectPattern(name='adj_nn', structure=pattern_structure, aspectTagIndices=aspectTagIndices)
aspectPatterns.append(aspectPattern)

# define an aspect pattern2
pattern_name = 'nn_nn'
pattern_structure ="""
nn_nn:{<NN.?><NN.?>}
"""
aspectTagIndices = [0,1]
aspectPattern = AspectPattern(name='nn_nn', structure=pattern_structure, aspectTagIndices=aspectTagIndices)
aspectPatterns.append(aspectPattern)
ipynbs/Dynamic_Aspect_Extraction_Part_B.ipynb
MachineLearningStudyGroup/Smart_Review_Summarization
mit
s3: match sentence to pattern to extract aspects
# pos tagging
for review in product.reviews:
    for sentence in review.sentences:
        sentence.pos_tag()
        sentence.matchDaynamicAspectPatterns(aspectPatterns)
ipynbs/Dynamic_Aspect_Extraction_Part_B.ipynb
MachineLearningStudyGroup/Smart_Review_Summarization
mit
s4: statistic analysis on aspects extracted across all reviews
word_dict = {}
for review in product.reviews:
    for sentence in review.sentences:
        for aspect in sentence.dynamic_aspects:
            if aspect in word_dict:
                word_dict[aspect] += 1
            else:
                word_dict[aspect] = 1

word_sorted = sorted(word_dict.items(), key=lambda tup: -tup[1])
word_sorted[:15]
ipynbs/Dynamic_Aspect_Extraction_Part_B.ipynb
MachineLearningStudyGroup/Smart_Review_Summarization
mit
s5: save most frequent dynamic aspects
import json
word_output = open('../data/word_list/{0}_wordlist.txt'.format(product_name), 'w')
json.dump(word_sorted[:15], word_output)
word_output.close()
ipynbs/Dynamic_Aspect_Extraction_Part_B.ipynb
MachineLearningStudyGroup/Smart_Review_Summarization
mit
s6: stemming analysis
from nltk.stem import SnowballStemmer
stemmer = SnowballStemmer('english')

# collect words with the same stem
stemmedWord_dict = {}
for word in word_dict:
    stemmedWord = stemmer.stem(word)
    if stemmedWord in stemmedWord_dict:
        stemmedWord_dict[stemmedWord] += word_dict[word]
    else:
        stemmedWord_dict[stemmedWord] = word_dict[word]

# frequency ranking
stemmedWord_sorted = sorted(stemmedWord_dict.items(), key=lambda tup: -tup[1])
stemmedWord_sorted[:15]

# save most frequent stemmed words
stemmedWord_output = open('../data/word_list/{0}_stemmedwordlist.txt'.format(product_name), 'w')
json.dump(stemmedWord_sorted[:15], stemmedWord_output)
stemmedWord_output.close()
ipynbs/Dynamic_Aspect_Extraction_Part_B.ipynb
MachineLearningStudyGroup/Smart_Review_Summarization
mit
Let us save this channel library for posterity:
cl.save_as("NoSidebanding")
doc/examples/Example-Channel-Lib.ipynb
BBN-Q/Auspex
apache-2.0
Now we adjust some parameters and save another version of the channel library
cl["q1"].measure_chan.frequency = 50e6
cl.commit()
cl.save_as("50MHz-Sidebanding")
doc/examples/Example-Channel-Lib.ipynb
BBN-Q/Auspex
apache-2.0
Maybe we forgot to change something. No worries! We can just update the parameter and create a new copy.
cl["q1"].pulse_params['length'] = 400e-9
cl.commit()
cl.save_as("50MHz-Sidebanding")
cl.ls()
doc/examples/Example-Channel-Lib.ipynb
BBN-Q/Auspex
apache-2.0
We see the various versions of the channel library here. Note that the user is always modifying the working version of the database: all other versions are archival, but they can be restored to the current working version as shown below. Loading Channel Library Versions Let us load a previous version of the channel library, noting that the former value of our parameter is restored in the working copy. CRUCIAL POINT: do not use the old reference q1, which is no longer pointing to the database since the working db has been replaced with the saved version. Instead use dictionary access cl["q1"] on the channel library to return the first qubit:
cl.load("NoSidebanding")
cl["q1"].measure_chan.frequency
doc/examples/Example-Channel-Lib.ipynb
BBN-Q/Auspex
apache-2.0
Now let's load the second oldest version of the 50MHz-sidebanding library:
cl.load("50MHz-Sidebanding", -1)
cl["q1"].pulse_params['length'], cl["q1"].measure_chan.frequency
# q1 = QubitFactory("q1")
plot_pulse_files(RabiAmp(cl["q1"], np.linspace(-1, 1, 11)), time=True)
doc/examples/Example-Channel-Lib.ipynb
BBN-Q/Auspex
apache-2.0
cl.ls()
cl.rm("NoSidebanding")
cl.ls()
cl.rm("50MHz-Sidebanding")
cl.ls()
doc/examples/Example-Channel-Lib.ipynb
BBN-Q/Auspex
apache-2.0
Simulating Games: Chaos VS Defect
# Create agents and play the game for 10000 iteratations agent1 = c.Chaos() agent2 = d.Defect() game = PrisonersDilemma(agent1, agent2) game.play(10000) # Grab Data agent1_util_vals = Counter(game.data['A']) agent2_util_vals = Counter(game.data['B']) a1_total_score = sum(game.data['A']) a2_total_score = sum(game.data['B']) # Plot the results x1, y1, x2, y2 = [], [], [], [] for i, j in zip(agent1_util_vals, agent2_util_vals): x1.append(i) y1.append(agent1_util_vals[i]) x2.append(j) y2.append(agent2_util_vals[j]) fig, ax = plt.subplots(figsize=(12,6)) width = 0.35 a1 = ax.bar(x1, y1, width, color='#8A9CEF') a2 = ax.bar(np.asarray(x2)+width, y2, width, color='orange') _ = ax.set_title('Chaos Agent Vs Defect Agent') _ = ax.set_ylabel('Number of Games') _ = ax.set_xlabel('Utility Values') ax.set_xticks(np.add([1,2,5],width-.05)) _ = ax.set_xticklabels(('1', '2', '5')) _ = ax.legend((a1[0], a2[0]), ('Chaos Agent\nTotal Utility Score: {}'.format(str(a1_total_score)), 'Defect Agent\nTotal Utility Score: {}'.format(str(a2_total_score))), loc=1, bbox_to_anchor=(1.35, 1)) plt.show()
basicGames/Basic Games.ipynb
ikegwukc/INFO597-DeepLearning-GameTheory
mit
In this scenario defecting is the dominant strategy: the agent is better off defecting no matter what other agents do. Grim VS Pavlov
# play the game agent1 = g.Grim() agent2 = p.Pavlov() game = PrisonersDilemma(agent1, agent2) game.play(10000) # get data from game agent1_util_vals = Counter(game.data['A']) agent2_util_vals = Counter(game.data['B']) a1_total_score = sum(game.data['A']) a2_total_score = sum(game.data['B']) # Plot the results x1, y1, x2, y2 = [], [], [], [] for i, j in zip(agent1_util_vals, agent2_util_vals): x1.append(i) y1.append(agent1_util_vals[i]) x2.append(j) y2.append(agent2_util_vals[j]) fig, ax = plt.subplots(figsize=(12,6)) width = 0.35 a1 = ax.bar(x1, y1, width, color='#8A9CEF') a2 = ax.bar(np.asarray(x2)+width, y2, width, color='orange') _ = ax.set_title('Grim Agent Vs Pavlov Agent') _ = ax.set_ylabel('Number of Games') _ = ax.set_xlabel('Utility Values') ax.set_xticks(np.add([0,4,5],width/2)) _ = ax.set_xticklabels(('0', '4', '5')) _ = ax.legend((a1[0], a2[0]), ('Grim Agent\nTotal Utility Score: {}'.format(str(a1_total_score)), 'Pavlov Agent\nTotal Utility Score: {}'.format(str(a2_total_score))), loc=1, bbox_to_anchor=(1.35, 1)) plt.show()
basicGames/Basic Games.ipynb
ikegwukc/INFO597-DeepLearning-GameTheory
mit
Both strategies start out cooperating. Grim never defects because Pavlov never defects, and Pavlov never loses a round, so it never changes its strategy. Q-Learning VS Pavlov
# Play the Game agent1 = ml.QLearn() agent2 = p.Pavlov() game = PrisonersDilemma(agent1, agent2) game.play(10000) # Get Data from Game agent1_util_vals = Counter(game.data['A']) agent2_util_vals = Counter(game.data['B']) a1_total_score = sum(game.data['A']) a2_total_score = sum(game.data['B']) # Plot the results x1, y1, x2, y2 = [], [], [], [] for i, j in zip(agent1_util_vals, agent2_util_vals): x1.append(i) y1.append(agent1_util_vals[i]) x2.append(j) y2.append(agent2_util_vals[j]) fig, ax = plt.subplots(figsize=(12,6)) width = 0.35 a1 = ax.bar(x1, y1, width, color='#8A9CEF') a2 = ax.bar(np.asarray(x2)+width, y2, width, color='orange') _ = ax.set_title('QLearning Agent Vs Pavlov Agent') _ = ax.set_ylabel('Number of Games') _ = ax.set_xlabel('Utility Values') ax.set_xticks(np.add([1,2,4,5],width/2)) _ = ax.set_xticklabels(('1', '2', '4', '5')) _ = ax.legend((a1[0], a2[0]), ('QLearning Agent\nTotal Utility Score: {}'.format(str(a1_total_score)), 'Pavlov Agent\nTotal Utility Score: {}'.format(str(a2_total_score))), loc=1, bbox_to_anchor=(1.35, 1)) plt.show()
basicGames/Basic Games.ipynb
ikegwukc/INFO597-DeepLearning-GameTheory
mit
Pavlov's simple rules outperform Q-Learning here, which is interesting.
print(agent1_util_vals, agent2_util_vals)
basicGames/Basic Games.ipynb
ikegwukc/INFO597-DeepLearning-GameTheory
mit
Q-Learning VS Chaos
# Play the Game N = 10000 agent1 = ml.QLearn() agent2 = c.Chaos() game = PrisonersDilemma(agent1, agent2) game.play(N) # Get Data from Game agent1_util_vals = Counter(game.data['A']) agent2_util_vals = Counter(game.data['B']) a1_total_score = sum(game.data['A']) a2_total_score = sum(game.data['B']) # Plot the results x1, y1, x2, y2 = [], [], [], [] for i, j in zip(agent1_util_vals, agent2_util_vals): x1.append(i) y1.append(agent1_util_vals[i]) x2.append(j) y2.append(agent2_util_vals[j]) fig, ax = plt.subplots(figsize=(12,6)) width = 0.35 a1 = ax.bar(x1, y1, width, color='#8A9CEF') a2 = ax.bar(np.asarray(x2)+width, y2, width, color='orange') _ = ax.set_title('QLearning Agent Vs Chaos Agent') _ = ax.set_ylabel('Number of Games') _ = ax.set_xlabel('Utility Values') ax.set_xticks(np.add(x2,width/2)) _ = ax.set_xticklabels(('1', '2', '4', '5')) _ = ax.legend((a1[0], a2[0]), ('QLearning Agent\nTotal Utility Score: {}'.format(str(a1_total_score)), 'Chaos Agent\nTotal Utility Score: {}'.format(str(a2_total_score))), loc=1, bbox_to_anchor=(1.35, 1)) plt.show()
basicGames/Basic Games.ipynb
ikegwukc/INFO597-DeepLearning-GameTheory
mit
Q Learning significantly outperforms the Chaos Agent because the Q Learning Agent learns pretty quickly that defecting yields the highest expected utility (talked about more in appendix). Q Learning VS Q Learning
# Play the Game N = 10000 agent1 = ml.QLearn() agent2 = ml.QLearn() game = PrisonersDilemma(agent1, agent2) game.play(N) # Get Data from Game agent1_util_vals = Counter(game.data['A']) agent2_util_vals = Counter(game.data['B']) a1_total_score = sum(game.data['A']) a2_total_score = sum(game.data['B']) # Plot the results x1, y1, x2, y2 = [], [], [], [] for i, j in zip(agent1_util_vals, agent2_util_vals): x1.append(i) y1.append(agent1_util_vals[i]) x2.append(j) y2.append(agent2_util_vals[j]) fig, ax = plt.subplots(figsize=(12,6)) width = 0.35 a1 = ax.bar(x1, y1, width, color='#8A9CEF') a2 = ax.bar(np.asarray(x2)+width, y2, width, color='orange') _ = ax.set_title('QLearning Agent Vs QLearning Agent') _ = ax.set_ylabel('Number of Games') _ = ax.set_xlabel('Utility Values') ax.set_xticks(np.add(x2,width/2)) _ = ax.set_xticklabels(('1', '2', '4', '5')) _ = ax.legend((a1[0], a2[0]), ('QLearning Agent\nTotal Utility Score: {}'.format(str(a1_total_score)), 'QLearning Agent\nTotal Utility Score: {}'.format(str(a2_total_score))), loc=1, bbox_to_anchor=(1.35, 1)) plt.show()
basicGames/Basic Games.ipynb
ikegwukc/INFO597-DeepLearning-GameTheory
mit
Here both QLearning Agents tend to mirror each other. I assume this is because they have the same initial parameters, which will yield the same expected utility. QLearning Vs QLearning (Longer Game; Different Starting Parameters)
# Play the Game N = 200000 # Play a longer game # agent 1's parameters are bit more short sighted agent1 = ml.QLearn(decay=0.4, lr=0.03, explore_period=30000, explore_random_prob=0.4, exploit_random_prob=0.2) # agent 2's parameters think more about the future agent2 = ml.QLearn(decay=0.6, lr=0.2, explore_period=40000, explore_random_prob=0.4, exploit_random_prob=0.1) game = PrisonersDilemma(agent1, agent2) game.play(N) # Get Data from Game agent1_util_vals = Counter(game.data['A']) agent2_util_vals = Counter(game.data['B']) a1_total_score = sum(game.data['A']) a2_total_score = sum(game.data['B']) # Plot the results x1, y1, x2, y2 = [], [], [], [] for i, j in zip(agent1_util_vals, agent2_util_vals): x1.append(i) y1.append(agent1_util_vals[i]) x2.append(j) y2.append(agent2_util_vals[j]) fig, ax = plt.subplots(figsize=(12,6)) width = 0.35 a1 = ax.bar(x1, y1, width, color='#8A9CEF') a2 = ax.bar(np.asarray(x2)+width, y2, width, color='orange') _ = ax.set_title('QLearning Agent Vs QLearning Agent') _ = ax.set_ylabel('Number of Games') _ = ax.set_xlabel('Utility Values') ax.set_xticks(np.add(x2,width/2)) _ = ax.set_xticklabels(('1', '2', '4', '5')) _ = ax.legend((a1[0], a2[0]), ('QLearning Agent\nTotal Utility Score: {}'.format(str(a1_total_score)), 'QLearning Agent\nTotal Utility Score: {}'.format(str(a2_total_score))), loc=1, bbox_to_anchor=(1.35, 1)) plt.show()
basicGames/Basic Games.ipynb
ikegwukc/INFO597-DeepLearning-GameTheory
mit
(I haven't had the time to look through the actions of both agents, but one is short-sighted and the other is not, which yields the orange QLearning agent a higher total utility score.)
print(agent1_util_vals, agent2_util_vals)
basicGames/Basic Games.ipynb
ikegwukc/INFO597-DeepLearning-GameTheory
mit
Iterated Coordination Game Scenario - Choosing Movies In this scenario Vincent and Maghav want to see different movies. Vincent wants to see Guardians of the Galaxy 2 and Maghav wants to see Wonder Woman. They are willing to see the movie they don't really care for, but neither wants to see a movie alone. They both have two choices: to defect (go see the other person's movie), or to cooperate (go see the movie they want). The payoff matrix is below: Chaos VS Defect
# Create agents and play the game for 10000 iteratations agent1 = c.Chaos() agent2 = d.Defect() game = Coordination(agent1, agent2) game.play(10000) # Grab Data agent1_util_vals = Counter(game.data['A']) agent2_util_vals = Counter(game.data['B']) a1_total_score = sum(game.data['A']) a2_total_score = sum(game.data['B']) # Plot the results x1, y1, x2, y2 = [], [], [], [] for i, j in zip(agent1_util_vals, agent2_util_vals): x1.append(i) y1.append(agent1_util_vals[i]) x2.append(j) y2.append(agent2_util_vals[j]) fig, ax = plt.subplots(figsize=(12,6)) width = 0.35 a1 = ax.bar(x1, y1, width, color='#8A9CEF') a2 = ax.bar(np.asarray(x2)+width, y2, width, color='orange') _ = ax.set_title('Chaos Agent Vs Defect Agent') _ = ax.set_ylabel('Number of Games') _ = ax.set_xlabel('Utility Values') ax.set_xticks(np.add([0,1, 2],width-.05)) _ = ax.set_xticklabels(('0','1','2')) _ = ax.legend((a1[0], a2[0]), ('Chaos Agent\nTotal Utility Score: {}'.format(str(a1_total_score)), 'Defect Agent\nTotal Utility Score: {}'.format(str(a2_total_score))), loc=1, bbox_to_anchor=(1.35, 1)) plt.show()
basicGames/Basic Games.ipynb
ikegwukc/INFO597-DeepLearning-GameTheory
mit
Here Defect isn't a dominant strategy. The Defect agent only receives a non-zero utility value if the Chaos agent goes to the movie the Defect agent intended to see. A mixed strategy is needed.
print(agent1_util_vals,agent2_util_vals)
basicGames/Basic Games.ipynb
ikegwukc/INFO597-DeepLearning-GameTheory
mit
Grim VS Pavlov
# play the game agent1 = g.Grim() agent2 = p.Pavlov() game = Coordination(agent1, agent2) game.play(10000) # get data from game agent1_util_vals = Counter(game.data['A']) agent2_util_vals = Counter(game.data['B']) a1_total_score = sum(game.data['A']) a2_total_score = sum(game.data['B']) # Plot the results x1, y1, x2, y2 = [], [], [], [] for i, j in zip(agent1_util_vals, agent2_util_vals): x1.append(i) y1.append(agent1_util_vals[i]) x2.append(j) y2.append(agent2_util_vals[j]) fig, ax = plt.subplots(figsize=(12,6)) width = 0.35 a1 = ax.bar(x1, y1, width, color='#8A9CEF') a2 = ax.bar(np.asarray(x2)+width, y2, width, color='orange') _ = ax.set_title('Grim Agent Vs Pavlov Agent') _ = ax.set_ylabel('Number of Games') _ = ax.set_xlabel('Utility Values') ax.set_xticks(np.add([0,1,2],width/2)) _ = ax.set_xticklabels(('0', '1', '2')) _ = ax.legend((a1[0], a2[0]), ('Grim Agent\nTotal Utility Score: {}'.format(str(a1_total_score)), 'Pavlov Agent\nTotal Utility Score: {}'.format(str(a2_total_score))), loc=1, bbox_to_anchor=(1.35, 1)) plt.show()
basicGames/Basic Games.ipynb
ikegwukc/INFO597-DeepLearning-GameTheory
mit
Grim loses in the first round and then always goes to the other movie. The Pavlov agent even won a round where they both ended up at the same movie, and it never changed its strategy.
print(agent1_util_vals, agent2_util_vals)
basicGames/Basic Games.ipynb
ikegwukc/INFO597-DeepLearning-GameTheory
mit
Q-Learning Vs Chaos
# Play the Game N = 10000 agent1 = ml.QLearn() agent2 = c.Chaos() game = Coordination(agent1, agent2) game.play(N) # Get Data from Game agent1_util_vals = Counter(game.data['A']) agent2_util_vals = Counter(game.data['B']) a1_total_score = sum(game.data['A']) a2_total_score = sum(game.data['B']) # Plot the results x1, y1, x2, y2 = [], [], [], [] for i, j in zip(agent1_util_vals, agent2_util_vals): x1.append(i) y1.append(agent1_util_vals[i]) x2.append(j) y2.append(agent2_util_vals[j]) fig, ax = plt.subplots(figsize=(12,6)) width = 0.35 a1 = ax.bar(x1, y1, width, color='#8A9CEF') a2 = ax.bar(np.asarray(x2)+width, y2, width, color='orange') _ = ax.set_title('QLearning Agent Vs Chaos Agent') _ = ax.set_ylabel('Number of Games') _ = ax.set_xlabel('Utility Values') ax.set_xticks(np.add(x2,width/2)) _ = ax.set_xticklabels(('0', '1', '2')) _ = ax.legend((a1[0], a2[0]), ('QLearning Agent\nTotal Utility Score: {}'.format(str(a1_total_score)), 'Chaos Agent\nTotal Utility Score: {}'.format(str(a2_total_score))), loc=1, bbox_to_anchor=(1.35, 1)) plt.show()
basicGames/Basic Games.ipynb
ikegwukc/INFO597-DeepLearning-GameTheory
mit
This is different from the Prisoner's Dilemma: the QLearning agent is trying to cooperate with the Chaos agent but can never predict which movie it will go to. QLearning Vs QLearning
# Play the Game N = 10000 agent1 = ml.QLearn() agent2 = ml.QLearn() game = Coordination(agent1, agent2) game.play(N) # Get Data from Game agent1_util_vals = Counter(game.data['A']) agent2_util_vals = Counter(game.data['B']) a1_total_score = sum(game.data['A']) a2_total_score = sum(game.data['B']) # Plot the results x1, y1, x2, y2 = [], [], [], [] for i, j in zip(agent1_util_vals, agent2_util_vals): x1.append(i) y1.append(agent1_util_vals[i]) x2.append(j) y2.append(agent2_util_vals[j]) fig, ax = plt.subplots(figsize=(12,6)) width = 0.35 a1 = ax.bar(x1, y1, width, color='#8A9CEF') a2 = ax.bar(np.asarray(x2)+width, y2, width, color='orange') _ = ax.set_title('QLearning Agent Vs QLearning Agent') _ = ax.set_ylabel('Number of Games') _ = ax.set_xlabel('Utility Values') ax.set_xticks(np.add(x2,width/2)) _ = ax.set_xticklabels(('0','1', '2')) _ = ax.legend((a1[0], a2[0]), ('QLearning Agent\nTotal Utility Score: {}'.format(str(a1_total_score)), 'QLearning Agent\nTotal Utility Score: {}'.format(str(a2_total_score))), loc=1, bbox_to_anchor=(1.35, 1)) plt.show()
basicGames/Basic Games.ipynb
ikegwukc/INFO597-DeepLearning-GameTheory
mit
Still playing around with this one, but both do pretty badly here.
print(agent1_util_vals, agent2_util_vals)
basicGames/Basic Games.ipynb
ikegwukc/INFO597-DeepLearning-GameTheory
mit
Downloading Data We'll start by downloading the data (available on seattle.gov).
from urllib import request FREMONT_URL = 'https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD' request.urlretrieve(FREMONT_URL, 'Fremont.csv') # magic function to show the content of the file %more Fremont.csv import pandas as pd df = pd.read_csv('Fremont.csv') # use read_csv to load the data into dataframe df.head() # Let's see the type of the data df.dtypes # change the Date column to datetime data type df['Date'] = pd.to_datetime(df['Date']) df.head() df.dtypes # Set the index to Date df.set_index('Date', inplace=True) df.head() df.apply(lambda x: sum(x.isnull())) # clear the data by delete the non-numeric df.dropna(inplace=True) df.apply(lambda x: sum(x.isnull())) df.columns df.plot() df.resample('W').sum().plot() df.columns=['West', 'East'] df.resample('w').sum().plot() # To see whether there is any annual trend of the number of rides df.resample('D').sum().rolling(365).sum().plot() # each point is the sum of the number of rides in the previuos 365 days # The y coordinate is not from 0 ax = df.resample('D').sum().rolling(365).sum().plot() ax.set_ylim(0, None) # DateimeIndex.time return numpy array of datetime.time, the time part of the Timestamps df.groupby(df.index.time).mean().plot() # plot the average of rides at each hours of the day # Create the pivoted table to investigate the pattern in each day df['Total'] = df['West'] + df['East'] pivoted = df.pivot_table(values='Total', index=df.index.time, columns=df.index.date) pivoted.head() pivoted.shape # delete the date with non-numeric pivoted.dropna(axis=1, inplace=True) pivoted.shape pivoted.plot(legend=False) # add transparent parameter alpha pivoted.plot(legend=False, alpha=0.01)
example_bridge_bike_counter.ipynb
rongchuhe2/workshop_data_analysis_python
mit
Principal Component Analysis
# Get X with hours as mearsurement and date as observations X = pivoted.T.values X.shape X from sklearn.decomposition import PCA X2 = PCA(2, svd_solver='full').fit_transform(X) X2 X2.shape plt.scatter(X2[:, 0], X2[:, 1]) # use cluster algorithm Gaussian mixture model from sklearn.mixture import GaussianMixture gmm = GaussianMixture(2) gmm.fit(X) labels = gmm.predict(X) labels # plt.scatter(X2[:, 0], X2[:, 1], c=labels, cmap='rainbow') # plt.colorbar() plt.scatter(X2[:, 0], X2[:, 1], c=labels) plt.colorbar() labels # so labels == 1 represents the weekday pivoted.T[labels == 1].T.plot(legend=False, alpha=0.01) # labels == 0 represents the weekend or holiday pivoted.T[labels == 0].T.plot(legend=False, alpha=0.1)
example_bridge_bike_counter.ipynb
rongchuhe2/workshop_data_analysis_python
mit
Comparing with Day of Week
pd.DatetimeIndex(pivoted.columns)
# The DatetimeIndex.dayofweek attribute gives the day of the week
dayofweek = pd.DatetimeIndex(pivoted.columns).dayofweek
dayofweek
# Then we colour the points by day of the week
plt.scatter(X2[:, 0], X2[:, 1], c=dayofweek)
plt.colorbar()
# grab the days in cluster 0 (label == 0) that are not weekends
dates = pd.DatetimeIndex(pivoted.columns)
dates[(labels == 0) & (dayofweek < 5)]
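The weekday dates that land in cluster 0 are typically public holidays. A possible cross-check (a sketch, not part of the original notebook) compares them against the US federal holiday calendar that ships with pandas:

from pandas.tseries.holiday import USFederalHolidayCalendar

cal = USFederalHolidayCalendar()
holidays = cal.holidays(start=dates.min(), end=dates.max())
# True for each "weekend-like weekday" that coincides with a federal holiday.
print(dates[(labels == 0) & (dayofweek < 5)].isin(holidays))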
example_bridge_bike_counter.ipynb
rongchuhe2/workshop_data_analysis_python
mit
Then, let's start a new thread, passing the opaque pointer of the Csound instance as argument:
pt = ctcsound.CsoundPerformanceThread(cs.csound()) pt.play()
cookbook/03-threading.ipynb
fggp/ctcsound
lgpl-2.1
Now, we can send messages to the performance thread:
pt.scoreEvent(False, 'i', (1, 0, 1, 0.5, 8.06, 0.05, 0.3, 0.5)) pt.scoreEvent(False, 'i', (1, 0.5, 1, 0.5, 9.06, 0.05, 0.3, 0.5))
cookbook/03-threading.ipynb
fggp/ctcsound
lgpl-2.1
When we're done, we stop the performance thread and reset the csound instance:
pt.stop() pt.join() cs.reset()
cookbook/03-threading.ipynb
fggp/ctcsound
lgpl-2.1
Note that we can still access the csound instance with other methods, like controlChannel() or setControlChannel():
csd = ''' <CsoundSynthesizer> <CsOptions> -odac </CsOptions> <CsInstruments> sr = 44100 ksmps = 64 nchnls = 2 0dbfs = 1 seed 0 instr 1 iPch random 60, 72 chnset iPch, "pch" kPch init iPch kNewPch chnget "new_pitch" if kNewPch > 0 then kPch = kNewPch endif aTone poscil .2, mtof(kPch) out aTone, aTone endin </CsInstruments> <CsScore> i 1 0 600 </CsScore> </CsoundSynthesizer> ''' cs.compileCsdText(csd) cs.start() pt = ctcsound.CsoundPerformanceThread(cs.csound()) pt.play()
cookbook/03-threading.ipynb
fggp/ctcsound
lgpl-2.1
We can ask for the values in the Csound instance ...
print(cs.controlChannel('pch'))
cookbook/03-threading.ipynb
fggp/ctcsound
lgpl-2.1
... or we can set our own values to the Csound instance:
cs.setControlChannel('new_pitch',73)
cookbook/03-threading.ipynb
fggp/ctcsound
lgpl-2.1
At the end, stop and reset as usual:
pt.stop() pt.join() cs.reset()
cookbook/03-threading.ipynb
fggp/ctcsound
lgpl-2.1
Exercise 1, Sample Solution 2 We need to find a string that uniquely pinpoints the Celsius temperature. " F " is such a string (there is a space on either side of F).
def NOAA_temperature(s): d = s.find(" F ") print(s[d+4: d+6] + " C") NOAA_temperature(NOAA_string())
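To see why the offsets d+4 and d+6 are used, here is a self-contained toy example. The fragment below is made up, but it follows the same "&lt;Fahrenheit&gt; F (&lt;Celsius&gt; C)" layout the solution assumes; the real string comes from NOAA_string(), defined earlier in the lab.

# d points at the space before "F"; the two Celsius digits start 4 characters later.
sample = "Temperature: 59.0 F (15.0 C)"
d = sample.find(" F ")
print(sample[d+4: d+6] + " C")   # prints "15 C"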
ref_materials/excs/Lab-07.ipynb
liganega/Gongsu-DataSci
gpl-3.0
μ—°μŠ΅ 2 ν…μŠ€νŠΈ νŒŒμΌμ— μ €μž₯된 λ¬Έμž₯μ—μ„œ νŠΉμ • λ‹¨μ–΄μ˜ μΆœν˜„ 횟수λ₯Ό ν™•μΈν•΄μ£ΌλŠ” ν•¨μˆ˜ wc_sub(filename, s) ν•¨μˆ˜λ₯Ό μž‘μ„±ν•˜λΌ. wcλŠ” Word Count의 μ€„μž„λ§μ΄λ‹€. 힌트: count λ©”μ†Œλ“œλ₯Ό ν™œμš©ν•œλ‹€. 예제 1: data.txt 파일 λ‚΄μš©μ΄ μ•„λž˜μ™€ 같을 경우 One Two wc_sub('data.txt', 'One')λŠ” 1λ₯Ό λ¦¬ν„΄ν•œλ‹€. 예제 2: data.txt 파일 λ‚΄μš©μ΄ μ•„λž˜μ™€ 같을 경우 One Two Three Four Five wc_sub('data.txt', 'o')λŠ” 2λ₯Ό λ¦¬ν„΄ν•œλ‹€. wc_sub ν•¨μˆ˜λ₯Ό μ΄μš©ν•˜μ—¬ μ΄μƒν•œ λ‚˜λΌμ˜ μ•¨λ¦¬μŠ€ μ›μž‘μ— 'Alice'와 'alice'λž€ 단어가 각각 λͺ‡ 번 μ–ΈκΈ‰λ˜λŠ”μ§€ ν™•μΈν•˜λΌ. μ΄μƒν•œ λ‚˜λΌμ˜ μ•¨λ¦¬μŠ€ μ›μž‘μ€ μ•„λž˜ λ§ν¬μ—μ„œ λ‹€μš΄ 받을 수 μžˆλ‹€. http://www.gutenberg.org/files/28885/28885-8.txt μœ„ 링크λ₯Ό λˆ„λ₯΄λ©΄ λœ¨λŠ” ν™”λ©΄μ—μ„œ Plain Text UTF-8 νŒŒμΌμ„ λ‹€μš΄λ‘œλ“œ λ°›μœΌλ©΄ λœλ‹€. μ•„λ§ˆλ„ λͺ‡ 만 단어가 μ‚¬μš©λ˜μ—ˆμ„ 것이닀. 단, filename에 ν•΄λ‹Ήν•˜λŠ” 파일이 열리지 μ•Šμ„ 경우 -1을 λ¦¬ν„΄ν•˜λ„λ‘ 였λ₯˜μ²˜λ¦¬λ₯Ό ν•΄μ•Ό ν•œλ‹€. μ—°μŠ΅ 2 κ²¬λ³Έλ‹΅μ•ˆ
def wc_sub(filename, s):
    # Return -1 if the file cannot be opened, as required by the exercise.
    try:
        with open(filename, 'r') as f:
            f_content = f.read()
    except IOError:
        return -1
    return f_content.count(s)

print("The word 'Alice' occurs {} times.".format(wc_sub('Alice.txt', 'Alice')))
print("The word 'alice' occurs {} times.".format(wc_sub('Alice.txt', 'alice')))
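Note that str.count is case-sensitive, which is why 'Alice' and 'alice' give different totals. A tiny self-contained check:

# count() matches exact substrings, so case matters.
s = "Alice alice ALICE"
print(s.count('Alice'), s.count('alice'), s.lower().count('alice'))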
ref_materials/excs/Lab-07.ipynb
liganega/Gongsu-DataSci
gpl-3.0
Exercise 3 Define a function filtering(f, xs) that takes a function f and a list of numbers xs as arguments and returns only those values x for which f(x) is greater than 0. Example: In [1]: def f1(x): ...: return x * 3 In [2]: filtering(f1, [1, -2, 2, -1, 3, 5]) Out[2]: [1, 2, 3, 5] In [3]: filtering(f1, [-1, -2, -3, -4, -5]) Out[3]: [] Exercise 3 Sample Solution
def filtering(f, xs): L = [] for x in xs: if f(x) > 0: L.append(x) return L def f1(x): return x * 3 filtering(f1, [1, -2, 2, -1, 3, 5])
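For comparison with the note that follows about the built-in filter: the same result can be obtained with filter, which in Python 3 returns an iterator, so it is wrapped in list() here.

# Equivalent to filtering(f1, [1, -2, 2, -1, 3, 5]) using the built-in filter.
print(list(filter(lambda x: f1(x) > 0, [1, -2, 2, -1, 3, 5])))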
ref_materials/excs/Lab-07.ipynb
liganega/Gongsu-DataSci
gpl-3.0
Note: among Python's built-in functions, filter does something similar. It is worth checking how it differs. Exercise 4 Define a function sum_list(f, xs) that takes a function f and a list of numbers xs = [x1, ..., x_n] as arguments and returns the sum of the values f(x1), ..., f(x_n). If xs = [], it returns 0. Example: In [4]: def f2(x): ...: return x ** 2 In [5]: sum_list(f2, [1, -2, 2, -3,]) Out[5]: 18 In [6]: sum_list(f1, [-1, -2, -3, -4, -5]) Out[6]: -45 Exercise 4 Sample Solution
def sum_list(f, xs): L = 0 for x in xs: L = L + f(x) return L def f2(x): return x ** 2 print(sum_list(f2, [1, -2, 2, -3])) print(sum_list(f1, [-1, -2, -3, -4, -5]))
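Similarly, and anticipating the note that follows, the built-in sum together with a generator expression gives the same results.

# Equivalent to sum_list(f2, [1, -2, 2, -3]) and sum_list(f1, [-1, -2, -3, -4, -5]).
print(sum(f2(x) for x in [1, -2, 2, -3]))
print(sum(f1(x) for x in [-1, -2, -3, -4, -5]))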
ref_materials/excs/Lab-07.ipynb
liganega/Gongsu-DataSci
gpl-3.0
Note: among Python's built-in functions, sum does something similar. It is worth checking how it differs. Exercise 5 Write a function triangle_area(a, h) that returns the area of a triangle with base length a and height h. The height h must use 5 as its default value. Hint: use a keyword argument. Example: In [7]: triangle_area(3) Out[7]: 7.5 In [8]: triangle_area(3, 7) Out[8]: 10.5 Exercise 5 Sample Solution
def triangle_area(a, height=5): return 1.0/2 * a * height print(triangle_area(3)) print(triangle_area(3, 7))
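The default height can also be overridden by name, which is what the hint about keyword arguments refers to:

# Passing the height explicitly as a keyword argument gives the same result as triangle_area(3, 7).
print(triangle_area(3, height=7))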
ref_materials/excs/Lab-07.ipynb
liganega/Gongsu-DataSci
gpl-3.0
Exercise 6 Define a function fun_2_fun(f) that takes a function f as input and returns a function that behaves as described below. fun_2_fun(f)(2) = (f(2)) ** 2 fun_2_fun(f)(3) = (f(3)) ** 3 fun_2_fun(f)(4) = (f(4)) ** 4 ... Caution: it must take a function as input and return a function. Hint: inside a function, a new function can be defined with the def keyword. That function becomes a local function. Exercise 6 Sample Solution 1
def fun_2_fun(f): def f_exp(n): return (f(n)) ** n return f_exp print(f1(2)) print(fun_2_fun(f1)(2))
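To make the behaviour concrete with a different argument: f1(3) = 9, so fun_2_fun(f1)(3) should be 9 ** 3 = 729. A quick check:

# (f1(3)) ** 3 = 9 ** 3
print(fun_2_fun(f1)(3))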
ref_materials/excs/Lab-07.ipynb
liganega/Gongsu-DataSci
gpl-3.0
Key point The key point of this problem is that a function is not merely used as an argument but is also used as a return value. In other words, we must implement a function that, when called with some argument, returns another function. Since the return value is a function, it can in turn be called with a suitable argument. For example, let's define a function g as follows.
def exp2(x): return x ** 2 g = fun_2_fun(exp2)
ref_materials/excs/Lab-07.ipynb
liganega/Gongsu-DataSci
gpl-3.0
Then we can check that g is indeed a function.
type(g)
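Since g is a function, it can also be applied to an argument: exp2(3) = 9, so g(3) should give 9 ** 3 = 729. A quick check:

print(g(3))   # expected: 729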
ref_materials/excs/Lab-07.ipynb
liganega/Gongsu-DataSci
gpl-3.0