True and snapped crime locations
crimes_df = spgh.element_as_gdf(ntw, pp_name='crimes', snapped=False)
snapped_crimes_df = spgh.element_as_gdf(ntw, pp_name='crimes', snapped=True)
notebooks/explore/spaghetti/Spaghetti_Pointpatterns_Empirical.ipynb
weikang9009/pysal
bsd-3-clause
Create geopandas.GeoDataFrame objects of the vertices and arcs
# network nodes and edges
vertices_df, arcs_df = spgh.element_as_gdf(ntw, vertices=True, arcs=True)
notebooks/explore/spaghetti/Spaghetti_Pointpatterns_Empirical.ipynb
weikang9009/pysal
bsd-3-clause
Plotting geopandas.GeoDataFrame objects
# legend patches
arcs = mlines.Line2D([], [], color='k', label='Network Arcs', alpha=.5)
vtxs = mlines.Line2D([], [], color='k', linewidth=0, markersize=2.5, marker='o', label='Network Vertices', alpha=1)
schl = mlines.Line2D([], [], color='k', linewidth=0, markersize=25, marker='X', label='School Locations', alpha=1)
snp_schl = mlines.Line2D([], [], color='k', linewidth=0, markersize=12, marker='o', label='Snapped Schools', alpha=1)
crme = mlines.Line2D([], [], color='r', linewidth=0, markersize=7, marker='x', label='Crime Locations', alpha=.75)
snp_crme = mlines.Line2D([], [], color='r', linewidth=0, markersize=3, marker='o', label='Snapped Crimes', alpha=.75)
patches = [arcs, vtxs, schl, snp_schl, crme, snp_crme]

# plot figure
base = arcs_df.plot(color='k', alpha=.25, figsize=(12,12), zorder=0)
vertices_df.plot(ax=base, color='k', markersize=5, alpha=1)
crimes_df.plot(ax=base, color='r', marker='x', markersize=50, alpha=.5, zorder=1)
snapped_crimes_df.plot(ax=base, color='r', markersize=20, alpha=.5, zorder=1)
schools_df.plot(ax=base, cmap='tab20', column='id', marker='X', markersize=500, alpha=.5, zorder=2)
snapped_schools_df.plot(ax=base, cmap='tab20', column='id', markersize=200, alpha=.5, zorder=2)

# add legend
plt.legend(handles=patches, fancybox=True, framealpha=0.8, scatterpoints=1, fontsize="xx-large", bbox_to_anchor=(1.04, .6))
notebooks/explore/spaghetti/Spaghetti_Pointpatterns_Empirical.ipynb
weikang9009/pysal
bsd-3-clause
Imports
import multicell
import numpy as np
examples/06 - Growth and divisions.ipynb
jldinh/multicell
mit
Problem definition Simulation and tissue structure
sim = multicell.simulation_builder.generate_cell_grid_sim(20, 20, 1, 1e-3)
examples/06 - Growth and divisions.ipynb
jldinh/multicell
mit
Tissue growth We first enable growth and specify the number of growth steps to be applied over the duration of the simulation. Growth steps are spaced evenly. Note: this should not be used in conjunction with set_time_steps, as the two settings would otherwise conflict.
sim.enable_growth(n_steps=11)
examples/06 - Growth and divisions.ipynb
jldinh/multicell
mit
We then register the growth method we would like to apply. In this case, it is linear_growth, which requires a coefficient parameter specifying the scaling to be applied at each time step, along each axis.
sim.register_growth_method(multicell.growth.linear_growth, {"coefficient": [1.1, 1.05, 1.]})
examples/06 - Growth and divisions.ipynb
jldinh/multicell
mit
Cell divisions We first enable cell divisions and register the method we would like to use. In this case, we use a method called symmetrical_division, which divides a cell through its centroid, perpendicularly to its longest axis.
sim.enable_division()
sim.register_division_method(multicell.division.symmetrical_division)
examples/06 - Growth and divisions.ipynb
jldinh/multicell
mit
We also register the division trigger, which is used to check if a cell needs to be divided. Here, it is a volume-related trigger, which requires a threshold.
sim.register_division_trigger(multicell.division.volume_trigger, {"volume_threshold": 2.})
examples/06 - Growth and divisions.ipynb
jldinh/multicell
mit
Rendering
sim.register_renderer(multicell.rendering.MatplotlibRenderer, None, {"view_size": 60, "view": (90, -90), "axes": False})
examples/06 - Growth and divisions.ipynb
jldinh/multicell
mit
Visualization of the initial state
sim.renderer.display()
examples/06 - Growth and divisions.ipynb
jldinh/multicell
mit
Simulation As the tissue grows, it maintains its rectangular shape. Cells grow in a uniform manner (they all grow by the same amount) and all divide at the same time when they reach the volume threshold.
sim.simulate()
examples/06 - Growth and divisions.ipynb
jldinh/multicell
mit
Load the usual COMMIT structure
from commit import trk2dictionary

trk2dictionary.run(
    filename_trk     = 'LausanneTwoShell/fibers.trk',
    path_out         = 'LausanneTwoShell/CommitOutput',
    filename_peaks   = 'LausanneTwoShell/peaks.nii.gz',
    filename_mask    = 'LausanneTwoShell/WM.nii.gz',
    fiber_shift      = 0.5,
    peaks_use_affine = True
)

import commit
mit = commit.Evaluation( '.', 'LausanneTwoShell' )
mit.load_data( 'DWI.nii', 'DWI.scheme' )

mit.set_model( 'StickZeppelinBall' )
d_par  = 1.7E-3             # Parallel diffusivity [mm^2/s]
ICVFs  = [ 0.7 ]            # Intra-cellular volume fraction(s) [0..1]
d_ISOs = [ 1.7E-3, 3.0E-3 ] # Isotropic diffusivitie(s) [mm^2/s]
mit.model.set( d_par, ICVFs, d_ISOs )

mit.generate_kernels( regenerate=True )
mit.load_kernels()

mit.load_dictionary( 'CommitOutput' )
mit.set_threads()
mit.build_operator()
doc/tutorials/AdvancedSolvers/tutorial_solvers.ipynb
barakovic/COMMIT
gpl-3.0
Perform clustering of the streamlines You will need dipy, which is among the requirements of COMMIT, hence there should be no problem. The threshold parameter has to be tuned for each brain. Do not consider our choice as a standard one.
from nibabel import trackvis as tv
fname = 'LausanneTwoShell/fibers.trk'
streams, hdr = tv.read(fname)
streamlines = [i[0] for i in streams]

from dipy.segment.clustering import QuickBundles
threshold = 15.0
qb = QuickBundles(threshold=threshold)
clusters = qb.cluster(streamlines)

import numpy as np
structureIC = np.array([c.indices for c in clusters])
weightsIC = np.array([1.0/np.sqrt(len(c)) for c in structureIC])
doc/tutorials/AdvancedSolvers/tutorial_solvers.ipynb
barakovic/COMMIT
gpl-3.0
Notice that we defined structureIC as a numpy.array containing a list of lists with the indices associated to each group. We know it sounds a little bit bizarre, but it is computationally convenient. Define the regularisation term. Each compartment must be regularised separately. The user can choose among the following penalties:

* $\sum_{g\in G}w_g\|x_g\|_k$ : commit.solvers.group_sparsity with $k\in \{2, \infty\}$ (only for the IC compartment)
* $\|x\|_1$ : commit.solvers.norm1
* $\|x\|_2$ : commit.solvers.norm2
* $\iota_{\ge 0}(x)$ : commit.solvers.non_negative (default for all compartments)

If the chosen regularisation for the IC compartment is $\sum_{g\in G}w_g\|x_g\|_k$, we can define $k$ via the group_norm field, which must be one of:

* $\|x\|_2$ : commit.solvers.norm2 (default)
* $\|x\|_\infty$ : commit.solvers.norminf

In this example we consider the following penalties: Intracellular: group sparsity with the 2-norm of each group. Extracellular: 2-norm. Isotropic: 1-norm.
regnorms = [commit.solvers.group_sparsity, commit.solvers.norm2, commit.solvers.norm1]
group_norm = 2  # each group is penalised with its 2-norm
doc/tutorials/AdvancedSolvers/tutorial_solvers.ipynb
barakovic/COMMIT
gpl-3.0
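To make the group-sparsity penalty concrete, here is a small NumPy-only sketch (not part of the COMMIT tutorial) that evaluates $\sum_{g\in G}w_g\|x_g\|_2$ for a toy coefficient vector, using the same convention as structureIC/weightsIC above (a list of index arrays plus one weight per group); all names below are illustrative.

import numpy as np

# Toy coefficient vector and two illustrative groups of indices
x = np.array([0.2, 0.0, 0.5, 0.1, 0.0, 0.3])
structure = [np.array([0, 1, 2]), np.array([3, 4, 5])]
weights = np.array([1.0 / np.sqrt(len(g)) for g in structure])

# Group-sparsity penalty: sum over groups of w_g * ||x_g||_2
penalty = sum(w * np.linalg.norm(x[g]) for g, w in zip(structure, weights))
print(penalty)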
The regularisation parameters are specified within the lambdas field. Again, do not consider our choice as a standard one.
lambdas = [10.,10.,10.]
doc/tutorials/AdvancedSolvers/tutorial_solvers.ipynb
barakovic/COMMIT
gpl-3.0
Call the constructor of the data structure
regterm = commit.solvers.init_regularisation(
    mit,
    regnorms    = regnorms,
    structureIC = structureIC,
    weightsIC   = weightsIC,
    group_norm  = group_norm,
    lambdas     = lambdas
)
doc/tutorials/AdvancedSolvers/tutorial_solvers.ipynb
barakovic/COMMIT
gpl-3.0
Call the fit function to perform the optimisation
mit.fit(regularisation=regterm, max_iter=1000)
doc/tutorials/AdvancedSolvers/tutorial_solvers.ipynb
barakovic/COMMIT
gpl-3.0
Save the results
suffix = 'IC'+str(regterm[0])+'EC'+str(regterm[1])+'ISO'+str(regterm[2])
mit.save_results(path_suffix=suffix)
doc/tutorials/AdvancedSolvers/tutorial_solvers.ipynb
barakovic/COMMIT
gpl-3.0
We plot both the diameter and the puller speed on the same graph
#datos.ix[:, "Diametro X":"Diametro Y"].plot(secondary_y=['VELOCIDAD'],figsize=(16,10),ylim=(0.5,3)).hlines([1.85,1.65],0,3500,colors='r')
datos[columns].plot(secondary_y=['VELOCIDAD'], figsize=(10,5), title='Modelo matemático del sistema').hlines([1.6, 1.8], 0, 2000, colors='r')
#datos['RPM TRAC'].plot(secondary_y='RPM TRAC')
datos.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
medidas/20072015/FILAEXTRUDER/.ipynb_checkpoints/Analisis-checkpoint.ipynb
darkomen/TFG
cc0-1.0
With this second approach we have managed to stabilise the data. We will now try to lower that percentage. As a fourth approach, we will modify the puller speeds. The proposed speed range is 1.5 to 5.3, keeping the expert-system increments as in the current test. Comparison of Diametro X against Diametro Y to see the filament ratio
plt.scatter(x=datos['Diametro X'], y=datos['Diametro Y'], marker='.')
medidas/20072015/FILAEXTRUDER/.ipynb_checkpoints/Analisis-checkpoint.ipynb
darkomen/TFG
cc0-1.0
Data filtering. Samples with $d_x < 0.9$ or $d_y < 0.9$ are assumed to be sensor errors, so we filter them out and keep only the measurements with both diameters at or above 0.9.
datos_filtrados = datos[(datos['Diametro X'] >= 0.9) & (datos['Diametro Y'] >= 0.9)]
#datos_filtrados.ix[:, "Diametro X":"Diametro Y"].boxplot(return_type='axes')
medidas/20072015/FILAEXTRUDER/.ipynb_checkpoints/Analisis-checkpoint.ipynb
darkomen/TFG
cc0-1.0
Plot of X against Y
plt.scatter(x=datos_filtrados['Diametro X'], y=datos_filtrados['Diametro Y'], marker='.')
medidas/20072015/FILAEXTRUDER/.ipynb_checkpoints/Analisis-checkpoint.ipynb
darkomen/TFG
cc0-1.0
We analyse the ratio data
ratio = datos_filtrados['Diametro X'] / datos_filtrados['Diametro Y']
ratio.describe()

rolling_mean = pd.rolling_mean(ratio, 50)
rolling_std = pd.rolling_std(ratio, 50)
rolling_mean.plot(figsize=(12,6))
# plt.fill_between(ratio, y1=rolling_mean+rolling_std, y2=rolling_mean-rolling_std, alpha=0.5)
ratio.plot(figsize=(12,6), alpha=0.6, ylim=(0.5,1.5))
medidas/20072015/FILAEXTRUDER/.ipynb_checkpoints/Analisis-checkpoint.ipynb
darkomen/TFG
cc0-1.0
Quality limits. We count the number of times the quality limits are exceeded. $Th^+ = 1.85$ and $Th^- = 1.65$
Th_u = 1.85
Th_d = 1.65

data_violations = datos[(datos['Diametro X'] > Th_u) | (datos['Diametro X'] < Th_d) |
                        (datos['Diametro Y'] > Th_u) | (datos['Diametro Y'] < Th_d)]
data_violations.describe()
data_violations.plot(subplots=True, figsize=(12,12))
medidas/20072015/FILAEXTRUDER/.ipynb_checkpoints/Analisis-checkpoint.ipynb
darkomen/TFG
cc0-1.0
How would we go about understanding the trends from the data on global temperature? The first step in analyzing unknown data is to generate some simple plots. We are going to look at the temperature-anomaly history, contained in a file, and make our first plot to explore this data. We are going to smooth the data and then we'll fit a line to it to find a trend, plotting along the way to see how it all looks. Let's get started! The first thing to do is to load our favorite library: the NumPy library for array operations.
import numpy
module00_Introduction_to_Python/01_Lesson01_Playing_with_data.ipynb
barbagroup/JITcode-MechE
mit
Make sure you have studied the introduction to JITcode in Python to know a bit about this library and why we need it. Step 1: Read a data file The data is contained in the file: GlobalTemperatureAnomaly-1958-2008.csv with the year on the first column and 12 monthly averages of temperature anomaly listed sequentially on the second column. We will read the file, then make an initial plot to see what it looks like. To load the file, we use a function from the NumPy library called loadtxt(). To tell Python where to look for this function, we precede the function name with the library name, and use a dot between the two names. This is how it works:
numpy.loadtxt(fname='./resources/GlobalTemperatureAnomaly-1958-2008.csv', delimiter=',')
module00_Introduction_to_Python/01_Lesson01_Playing_with_data.ipynb
barbagroup/JITcode-MechE
mit
Note that we called the function with two parameters: the file name and path, and the delimiter that separates each value on a line (a comma). Both parameters are strings (made up of characters) and we put them in single quotes. As the output of the function, we get an array. Because it's rather big, Python shows only a few rows and columns of the array. So far, so good. Now, what if we want to manipulate this data? Or plot it? We need to refer to it with a name. We've only just read the file, but we did not assign the array any name! Let's try again.
T=numpy.loadtxt(fname='./resources/GlobalTemperatureAnomaly-1958-2008.csv', delimiter=',')
module00_Introduction_to_Python/01_Lesson01_Playing_with_data.ipynb
barbagroup/JITcode-MechE
mit
That's interesting. Now, we don't see any output from the function call. Why? It's simply that the output was stored into the variable T, so to see it, we can do:
print(T)
module00_Introduction_to_Python/01_Lesson01_Playing_with_data.ipynb
barbagroup/JITcode-MechE
mit
Ah, there it is! Let's find out how big the array is. For that, we use a cool NumPy function called shape():
numpy.shape(T)
module00_Introduction_to_Python/01_Lesson01_Playing_with_data.ipynb
barbagroup/JITcode-MechE
mit
Again, we've told Python where to find the function shape() by attaching it to the library name with a dot. However, NumPy arrays also happen to have a property shape that will return the same value, so we can get the same result another way:
T.shape
module00_Introduction_to_Python/01_Lesson01_Playing_with_data.ipynb
barbagroup/JITcode-MechE
mit
It's just shorter. The array T holding our temperature-anomaly data has two columns and 612 rows. Since we said we had monthly data, how many years is that?
612/12
module00_Introduction_to_Python/01_Lesson01_Playing_with_data.ipynb
barbagroup/JITcode-MechE
mit
That's right: from 1958 through 2008. Step 2: Plot the data We will display the data in two ways: as a time series of the monthly temperature anomalies versus time, and as a histogram. To be fancy, we'll put both plots in one figure. Let's first load our plotting library, called matplotlib. To get the plots inside the notebook (rather than as popups), we use a special command, %matplotlib inline:
from matplotlib import pyplot
%matplotlib inline
module00_Introduction_to_Python/01_Lesson01_Playing_with_data.ipynb
barbagroup/JITcode-MechE
mit
What's this from business about? matplotlib is a pretty big (and awesome!) library. All that we need is a subset of the library for creating 2D plots, so we ask for the pyplot module of the matplotlib library. Plotting the time series of temperature is as easy as calling the function plot() from the module pyplot. But remember the shape of T? It has two columns and the temperature-anomaly values are in the second column. We extract the values of the second column by specifying 1 as the second index (the first column has index 0) and using the colon notation : to mean all rows. Check it out:
pyplot.plot(T[:,1])
module00_Introduction_to_Python/01_Lesson01_Playing_with_data.ipynb
barbagroup/JITcode-MechE
mit
You can add a semicolon at the end of the plotting command to avoid that stuff that appeared on top of the figure, that Out[x]: [&lt; ...&gt;] ugliness. Try it. Do you see a trend in the data? The plot above is certainly useful, but wouldn't it be nicer if we could look at the data relative to the year, instead of the location of the data in the array? The plot function can take another input; let's get the year displayed as well.
pyplot.plot(T[:,0],T[:,1]);
module00_Introduction_to_Python/01_Lesson01_Playing_with_data.ipynb
barbagroup/JITcode-MechE
mit
The temperature anomaly certainly seems to show an increasing trend. But we're not going to stop there, of course. It's not that easy to convince people that the planet is warming, as you know. Plotting a histogram is as easy as calling the function hist(). Why should it be any harder?
pyplot.hist(T[:,1]);
module00_Introduction_to_Python/01_Lesson01_Playing_with_data.ipynb
barbagroup/JITcode-MechE
mit
What does this plot tell you about the data? It's more interesting than just an increasing trend, that's for sure. You might want to look at more statistics now: mean, median, standard deviation ... NumPy makes that easy for you:
meanT = numpy.mean(T[:,1])
medianT = numpy.median(T[:,1])
print(meanT, medianT)
module00_Introduction_to_Python/01_Lesson01_Playing_with_data.ipynb
barbagroup/JITcode-MechE
mit
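The standard deviation mentioned above can be computed the same way; this one-liner is an addition for completeness rather than part of the original lesson:

stdT = numpy.std(T[:,1])
print(stdT)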
You can control several parameters of the hist() plot. Learn more by reading the manual page (yes, you have to read the manual sometimes!). The first option is the number of bins—the default is 10—but you can also change the appearance (color, transparency). Try some things out.
pyplot.hist(T[:,1], 20, normed=1, facecolor='g', alpha=0.55);
module00_Introduction_to_Python/01_Lesson01_Playing_with_data.ipynb
barbagroup/JITcode-MechE
mit
This is fun. Finally, we'll put both plots on the same figure using the subplot() function, which creates a grid of plots. The argument tells this function how many rows and columns of sub-plots we want, and where in the grid each plot will go. To help you see what each plotting command is doing, we added comments, which in Python follow the # symbol.
pyplot.figure(figsize=(12,4))    # the size of the figure area
pyplot.subplot(121)              # creates a grid of 1 row, 2 columns and selects the first plot
pyplot.plot(T[:,0], T[:,1], 'g') # our time series, but now green
pyplot.xlim(1958, 2008)          # set the x-axis limits
pyplot.subplot(122)              # prepares for the second plot
pyplot.hist(T[:,1], 20, normed=1, facecolor='g', alpha=0.55);
module00_Introduction_to_Python/01_Lesson01_Playing_with_data.ipynb
barbagroup/JITcode-MechE
mit
Step 3: Smooth the data and do regression You see a lot of fluctuations on the time series, so you might be asking yourself "How can I smooth it out?" No? Let's do it anyway. One possible approach to smooth the data (there are others) is using a moving average, also known as a sliding-window average. This is defined as: $$\hat{x}_{i,n} = \frac{1}{n} \sum_{j=1}^{n} x_{i-j}$$ The only parameter to the moving average is the value $n$. As you can see, the moving average smooths the set of data points by creating a new data set consisting of local averages (of the $n$ previous data points) at each point in the new set. A moving average is technically a convolution, and luckily NumPy has a built-in function for that, convolve(). We use it like this:
N = 12
window = numpy.ones(N)/N
smooth = numpy.convolve(T[:,1], window, 'same')
pyplot.figure(figsize=(10, 4))
pyplot.plot(T[:,0], smooth, 'r')
pyplot.xlim(1958, 2008);
module00_Introduction_to_Python/01_Lesson01_Playing_with_data.ipynb
barbagroup/JITcode-MechE
mit
Did you notice the function ones()? It creates an array filled with ... you guessed it: ones! We use a window of 12 data points, meaning that the plot shows the average temperature over the last 12 months. Looking at the plot, we can still see a trend, but the range of values is smaller. Let's plot the original time series together with the smoothed version:
pyplot.figure(figsize=(10, 4))
pyplot.plot(T[:,0], T[:,1], 'g', linewidth=1)  # we specify the line width here ...
pyplot.plot(T[:,0], smooth, 'r', linewidth=2)  # making the smoothed data a thicker line
pyplot.xlim(1958, 2008);
module00_Introduction_to_Python/01_Lesson01_Playing_with_data.ipynb
barbagroup/JITcode-MechE
mit
That is interesting! The smoothed data follows the trend nicely but has much less noise. Well, that is what filtering data is all about. Let's now fit a straight line through the temperature-anomaly data, to see the trends. We need to perform a least-squares linear regression to find the slope and intercept of a line $$y = mx+b$$ that fits our data. Thankfully, Python and NumPy are here to help with the polyfit() function. The function takes three arguments: the two array variables $x$ and $y$, and the order of the polynomial for the fit (in this case, 1 for linear regression).
year = T[:,0]  # it's time to use a more friendly name for column 1 of our data
m, b = numpy.polyfit(year, T[:,1], 1)

pyplot.figure(figsize=(10, 4))
pyplot.plot(year, T[:,1], 'g', linewidth=1)
pyplot.plot(year, m * year + b, 'k--', linewidth=2)
pyplot.xlim(1958, 2008);
module00_Introduction_to_Python/01_Lesson01_Playing_with_data.ipynb
barbagroup/JITcode-MechE
mit
There is more than one way to do this. Another of the favorite Python libraries is SciPy, and it has a linregress(x,y) function that will work as well. But let's not get carried away. Step 4: Checking for auto-correlation in the data We won't go into details, but you will learn more about all this if you take a course on experimental methods—for example, at GW, the Mechanical and Aerospace Engineering department offers "Methods of Engineering Experimentation" (MAE-3120). The fact is that in time series (like global temperature anomaly, stock values, etc.), the fluctuations in the data are not random: adjacent data points are not independent. We say that there is auto-correlation in the data. The problem with auto-correlation is that various techniques in statistical analysis rely on the assumption that scatter (or error) is random. If you apply these techniques willy-nilly, you can get false trends, overestimate uncertainties or exaggerate the goodness of a fit. All bad things! For the global temperature anomaly, this discussion is crucial: many critics claim that since there is auto-correlation in the data, no reliable trends can be obtained. As a well-educated engineering student who cares about the planet, you will appreciate this: we can estimate the trend for the global temperature anomalies taking into account that the data points are not independent. We just need to use more advanced techniques of data analysis. To finish off this lesson, your first in data analysis with Python, we'll put all our nice plots in one figure frame, and add the residual. Because the residual is not random "white" noise, you can conclude that there is auto-correlation in this time series. Finally, we'll save the plot to an image file using the savefig() command of Pyplot—this will be useful to you when you have to prepare reports for your engineering courses!
pyplot.figure(figsize=(10, 8))   # the size of the figure area

pyplot.subplot(311)              # creates a grid of 3 rows, 1 column and selects the first plot
pyplot.plot(year, T[:,1], 'g', linewidth=1)  # we specify the line width here ...
pyplot.plot(year, smooth, 'r', linewidth=2)  # making the smoothed data a thicker line
pyplot.xlim(1958, 2008)

pyplot.subplot(312)
pyplot.plot(year, T[:,1], 'g', linewidth=1)
pyplot.plot(year, m * year + b, 'k--', linewidth=2)
pyplot.xlim(1958, 2008)

pyplot.subplot(313)
pyplot.plot(year, T[:,1] - (m * year + b), 'o', linewidth=2)  # residual = data minus the linear fit
pyplot.xlim(1958, 2008)

pyplot.savefig("TemperatureAnomaly.png")
module00_Introduction_to_Python/01_Lesson01_Playing_with_data.ipynb
barbagroup/JITcode-MechE
mit
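As a quick, hedged check of the auto-correlation claim (not part of the original lesson), the lines below compute the lag-1 autocorrelation of the residuals from the linear fit, reusing T, year, m and b defined above; a value far from zero means consecutive residuals are not independent.

# Residuals of the linear fit (data minus the fitted line)
residuals = T[:,1] - (m * year + b)

# Lag-1 autocorrelation: correlation of each residual with the next one
r1 = numpy.corrcoef(residuals[:-1], residuals[1:])[0, 1]
print("lag-1 autocorrelation of the residuals:", r1)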
Step 5: Generating useful output Here, we'll use our linear fit to project the temperature into the future. We'll also save some image files that we could later add to a document or report based on our findings. First, let's create an expectation of the temperature difference up to the year 2100.
spacing = (2008 + 11 / 12 - 1958) / 612
length = (2100 - 1958) / spacing
length = int(length)  # we'll need an integer for the length of our array

years = numpy.linspace(1958, 2100, num=length)
temp = m * years + b  # use our linear regression to estimate future temperature change

pyplot.figure(figsize=(10, 4))
pyplot.plot(years, temp)
pyplot.xlim(1958, 2100)

out = (years, temp)       # create a tuple out of years and temperature we can output
out = numpy.array(out).T  # form an array and transpose it
module00_Introduction_to_Python/01_Lesson01_Playing_with_data.ipynb
barbagroup/JITcode-MechE
mit
Ok, that estimation looks reasonable. Let's save the data that describes it back to a .csv file, like the one we originally imported.
numpy.savetxt('./resources/GlobalTemperatureEstimate-1958-2100.csv', out, delimiter=",")
module00_Introduction_to_Python/01_Lesson01_Playing_with_data.ipynb
barbagroup/JITcode-MechE
mit
Now, let's make a nicer picture that we can show to back up some of our information. We can plot the linear regression as well as the original data and then save the figure.
pyplot.figure(figsize=(10, 4))
pyplot.plot(year, T[:,1], 'g')
pyplot.plot(years, temp, 'k--')
pyplot.xlim(1958, 2100)
pyplot.savefig('./resources/GlobalTempPlot.png')
module00_Introduction_to_Python/01_Lesson01_Playing_with_data.ipynb
barbagroup/JITcode-MechE
mit
Nice! Now we've got some stuff that we could use in a report, or show to someone unfamiliar with coding. Remember to play with our settings; I'm sure you could get an even nicer-looking plot if you try!

Dig Deeper & Think
* How is the global temperature anomaly calculated? What does it mean and why is it employed instead of the global mean temperature to quantify global warming?
* Why is it important to check that the residuals are independent and random when performing linear regression? In this particular case, is it possible to still estimate a trend with confidence?
* What is your best estimate of the global temperature by the end of the 22nd century?

What did we learn? You should have played around with the embedded code in this notebook, and also written your own version of all the code in a separate Python script, to learn:
* how to read data from a comma-separated file
* how to plot the data
* how to do some basic analysis on the data
* how to write to a file
from IPython.core.display import HTML

def css_styling():
    styles = open("../styles/custom.css", "r").read()
    return HTML(styles)

css_styling()
module00_Introduction_to_Python/01_Lesson01_Playing_with_data.ipynb
barbagroup/JITcode-MechE
mit
<img src="figures/unsupervised_workflow.svg" width=100%>
from sklearn.datasets import load_digits
from sklearn.cross_validation import train_test_split
import numpy as np
np.set_printoptions(suppress=True)

digits = load_digits()
X, y = digits.data, digits.target
X_train, X_test, y_train, y_test = train_test_split(X, y)
Unsupervised Transformers.ipynb
amueller/nyu_ml_lectures
bsd-2-clause
Removing mean and scaling variance
from sklearn.preprocessing import StandardScaler
Unsupervised Transformers.ipynb
amueller/nyu_ml_lectures
bsd-2-clause
1) Instantiate the model
scaler = StandardScaler()
Unsupervised Transformers.ipynb
amueller/nyu_ml_lectures
bsd-2-clause
2) Fit using only the data.
scaler.fit(X_train)
Unsupervised Transformers.ipynb
amueller/nyu_ml_lectures
bsd-2-clause
3) transform the data (not predict).
X_train_scaled = scaler.transform(X_train)
X_train.shape
X_train_scaled.shape
Unsupervised Transformers.ipynb
amueller/nyu_ml_lectures
bsd-2-clause
The transformed version of the data has the mean removed:
X_train_scaled.mean(axis=0)
X_train_scaled.std(axis=0)
X_test_transformed = scaler.transform(X_test)
Unsupervised Transformers.ipynb
amueller/nyu_ml_lectures
bsd-2-clause
Principal Component Analysis 0) Import the model
from sklearn.decomposition import PCA
Unsupervised Transformers.ipynb
amueller/nyu_ml_lectures
bsd-2-clause
1) Instantiate the model
pca = PCA(n_components=2)
Unsupervised Transformers.ipynb
amueller/nyu_ml_lectures
bsd-2-clause
2) Fit to training data
pca.fit(X)
Unsupervised Transformers.ipynb
amueller/nyu_ml_lectures
bsd-2-clause
3) Transform to lower-dimensional representation
print(X.shape)
X_pca = pca.transform(X)
X_pca.shape
Unsupervised Transformers.ipynb
amueller/nyu_ml_lectures
bsd-2-clause
Visualize
plt.figure()
plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y)

pca.components_.shape

plt.matshow(pca.components_[0].reshape(8, 8), cmap="gray")
plt.colorbar()
plt.matshow(pca.components_[1].reshape(8, 8), cmap="gray")
plt.colorbar()
Unsupervised Transformers.ipynb
amueller/nyu_ml_lectures
bsd-2-clause
Manifold Learning
from sklearn.manifold import Isomap

isomap = Isomap()
X_isomap = isomap.fit_transform(X)
plt.scatter(X_isomap[:, 0], X_isomap[:, 1], c=y)
Unsupervised Transformers.ipynb
amueller/nyu_ml_lectures
bsd-2-clause
Exercises Visualize the digits dataset using the TSNE algorithm from the sklearn.manifold module (it runs for a couple of seconds). Extract non-negative components from the digits dataset using NMF. Visualize the resulting components. The interface of NMF is identical to the PCA one. What qualitative difference can you find compared to PCA?
# %load solutions/digits_unsupervised.py
from sklearn.manifold import TSNE
from sklearn.decomposition import NMF

# Compute TSNE embedding
tsne = TSNE()
X_tsne = tsne.fit_transform(X)

# Visualize TSNE results
plt.title("All classes")
plt.figure()
plt.scatter(X_tsne[:, 0], X_tsne[:, 1], c=y)

# build an NMF factorization of the digits dataset
nmf = NMF(n_components=16).fit(X)

# visualize the components
fig, axes = plt.subplots(4, 4)
for ax, component in zip(axes.ravel(), nmf.components_):
    ax.imshow(component.reshape(8, 8), cmap="gray", interpolation="nearest")
Unsupervised Transformers.ipynb
amueller/nyu_ml_lectures
bsd-2-clause
Next, let's define a vertical coordinate system that minimises missing data values, and gives good resolution at the (orographic) surface. To achieve this we invent a scheme where the "bottom" of the model closely follows the orography/bathymetry, and as we reach the "top" of the model we get levels of approximately constant height.
nz = 9
model_levels = np.arange(nz)
model_top = 5000  # m

# The proportion of orographic influence on the model altitude. In this case,
# we define this as a log progression from full influence to no influence.
sigma = 1.1 - np.logspace(-1, np.log10(1.1), nz)

# Broadcast sigma so that when we multiply the orography we get a 3D array of z, y, x.
sigma = sigma[:, np.newaxis, np.newaxis]

# Combine sigma with the orography and model top value to
# produce 3d (z, y, x) altitude data for our "model levels".
altitude = (orography * sigma) + (model_top * (1 - sigma))
INTRO.ipynb
pelson/python-stratify
bsd-3-clause
Our new 3d array now represents altitude (height above sea surface) at each of our "model levels". Let's look at a cross-section of the data to see how these levels are arranged:
plt.fill_between(np.arange(6), np.zeros(6), orography[1, :],
                 color='green', linewidth=2, label='Orography')
plt.plot(np.zeros(nx), color='blue', linewidth=1.2, label='Sea level')

for i in range(9):
    plt.plot(altitude[i, 1, :], color='gray', linestyle='--',
             label='Model levels' if i == 0 else None)

plt.ylabel('altitude / m')
plt.margins(0.1)
plt.legend()
plt.show()
INTRO.ipynb
pelson/python-stratify
bsd-3-clause
To recap, we now have a model vertical coordinate system that maximises the number of grid-point locations close to the orography. In addition, we have a 3d array of "altitudes" so that we can relate any phenomenon measured on this grid to useful vertical coordinate information. Let's now define the temperature at each of our x, y, z points. We use the International Standard Atmosphere lapse rate of $ -6.5\ ^{\circ}C\ /\ km $ combined with our sea level standard temperature as an approximate model for our temperature profile.
lapse = -6.5 / 1000  # degC / m
temperature = sea_level_temp + lapse * altitude

from matplotlib.colors import LogNorm

fig = plt.figure(figsize=(6, 6))
norm = plt.Normalize(vmin=temperature.min(), vmax=temperature.max())

for i in range(nz):
    plt.subplot(3, 3, i + 1)
    qm = plt.pcolormesh(temperature[i], cmap='viridis', norm=norm)

plt.subplots_adjust(right=0.84, wspace=0.3, hspace=0.3)
cax = plt.axes([0.85, 0.1, 0.03, 0.8])
plt.colorbar(cax=cax)
plt.suptitle('Temperature (K) at each "model level"')
plt.show()
INTRO.ipynb
pelson/python-stratify
bsd-3-clause
Restratification / vertical interpolation

Our data is in the form:
* 1d "model level" vertical coordinate (z axis)
* 2 x 1d horizontal coordinates (x, y)
* 3d "altitude" variable (x, y, z)
* 3d "temperature" variable (x, y, z)

Suppose we now want to change the vertical coordinate system of our variables so that they are on levels of constant altitude, not levels of constant "model levels":
target_altitudes = np.linspace(700, 5500, 5) # m
INTRO.ipynb
pelson/python-stratify
bsd-3-clause
If we visualise this, we can see that we need to consider the behaviour for a number of situations, including what should happen when we are sampling below the orography, and when we are above the model top.
plt.figure(figsize=(7, 5))
plt.fill_between(np.arange(6), np.zeros(6), orography[1, :],
                 color='green', linewidth=2, label='Orography')

for i in range(9):
    plt.plot(altitude[i, 1, :], color='gray', lw=1.2,
             label=None if i > 0 else 'Source levels \n(model levels)')

for i, target in enumerate(target_altitudes):
    plt.plot(np.repeat(target, 6), color='gray', linestyle='--', lw=1.4, alpha=0.6,
             label=None if i > 0 else 'Target levels \n(altitude)')

plt.ylabel('height / m')
plt.margins(top=0.1)
plt.legend()
plt.savefig('summary.png')
plt.show()
INTRO.ipynb
pelson/python-stratify
bsd-3-clause
The default behaviour depends on the scheme, but for linear interpolation we receive NaNs both below the orography and above the model top:
import stratify

target_nz = 20
target_altitudes = np.linspace(400, 5200, target_nz)  # m

new_temperature = stratify.interpolate(target_altitudes, altitude, temperature, axis=0)
INTRO.ipynb
pelson/python-stratify
bsd-3-clause
With some work, we can visualise this result to compare a cross-section before and after. In particular this will allow us to see precisely what the interpolator has done at the extremes of our target levels:
ax1 = plt.subplot(1, 2, 1)
plt.fill_between(np.arange(6), np.zeros(6), orography[1, :],
                 color='green', linewidth=2, label='Orography')
cs = plt.contourf(np.tile(np.arange(6), nz).reshape(nz, 6),
                  altitude[:, 1], temperature[:, 1])
plt.scatter(np.tile(np.arange(6), nz).reshape(nz, 6),
            altitude[:, 1], c=temperature[:, 1])

plt.subplot(1, 2, 2, sharey=ax1)
plt.fill_between(np.arange(6), np.zeros(6), orography[1, :],
                 color='green', linewidth=2, label='Orography')
plt.contourf(np.arange(6), target_altitudes,
             np.ma.masked_invalid(new_temperature[:, 1]),
             cmap=cs.cmap, norm=cs.norm)
plt.scatter(np.tile(np.arange(nx), target_nz).reshape(target_nz, nx),
            np.repeat(target_altitudes, nx).reshape(target_nz, nx),
            c=new_temperature[:, 1])
plt.scatter(np.tile(np.arange(nx), target_nz).reshape(target_nz, nx),
            np.repeat(target_altitudes, nx).reshape(target_nz, nx),
            s=np.isnan(new_temperature[:, 1]) * 15, marker='x')

plt.suptitle('Temperature cross-section before and after restratification')
plt.show()
INTRO.ipynb
pelson/python-stratify
bsd-3-clause
Select live births, then make a CDF of <tt>totalwgt_lb</tt>.
import thinkstats2 as ts

live = preg[preg.outcome == 1]
wgt_cdf = ts.Cdf(live.totalwgt_lb, label='weight')
code/chap04ex.ipynb
John-Keating/ThinkStats2
gpl-3.0
Display the CDF.
import thinkplot as tp

tp.Cdf(wgt_cdf, label='weight')
tp.Show()
code/chap04ex.ipynb
John-Keating/ThinkStats2
gpl-3.0
Find out how much you weighed at birth, if you can, and compute CDF(x). If you are a first child, look up your birth weight in the CDF of first children; otherwise use the CDF of other children.
* Compute the percentile rank of your birth weight.
* Compute the median birth weight by looking up the value associated with p=0.5.
* Compute the interquartile range (IQR) by computing percentiles corresponding to 25 and 75.
* Make a random selection from <tt>cdf</tt>.
* Draw a random sample from <tt>cdf</tt>.
* Draw a random sample from <tt>cdf</tt>, then compute the percentile rank for each value, and plot the distribution of the percentile ranks.
* Generate 1000 random values using <tt>random.random()</tt> and plot their PMF.
import random
random.random?

thousand = [random.random() for x in range(1000)]
thousand_pmf = ts.Pmf(thousand, label='rando')
tp.Pmf(thousand_pmf, linewidth=0.1)
tp.Show()

t_hist = ts.Hist(thousand)
tp.Hist(t_hist, label="rando")
tp.Show()
code/chap04ex.ipynb
John-Keating/ThinkStats2
gpl-3.0
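A minimal sketch for the exercises above, assuming the thinkstats2 Cdf interface (PercentileRank, Percentile, Sample) and reusing wgt_cdf; the 8.5 lb birth weight is a hypothetical placeholder, not a value from the data.

# Hypothetical birth weight in pounds -- replace with your own value
my_weight = 8.5

# Percentile rank of that weight in the live-birth CDF
print(wgt_cdf.PercentileRank(my_weight))

# Median (value at p=0.5) and interquartile range (25th to 75th percentiles)
median_wgt = wgt_cdf.Percentile(50)
iqr = (wgt_cdf.Percentile(25), wgt_cdf.Percentile(75))
print(median_wgt, iqr)

# Random sample drawn from the CDF, and the percentile ranks of the sampled values
sample = wgt_cdf.Sample(100)
ranks = [wgt_cdf.PercentileRank(x) for x in sample]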
Assuming that the PMF doesn't work very well, try plotting the CDF instead.
thousand_cdf = ts.Cdf(thousand, label='rando')
tp.Cdf(thousand_cdf)
tp.Show()

import scipy.stats
scipy.stats?
code/chap04ex.ipynb
John-Keating/ThinkStats2
gpl-3.0
<a id='tree'></a> Tree Let's play with a toy example and write our own decision "stamp" (a regression stump, i.e. a one-split tree). First, consider the following toy dataset:
X_train = np.linspace(0, 1, 100)
X_test = np.linspace(0, 1, 1000)

@np.vectorize
def target(x):
    return x > 0.5

Y_train = target(X_train) + np.random.randn(*X_train.shape) * 0.1
Y_test = target(X_test) + np.random.randn(*X_test.shape) * 0.1

plt.figure(figsize=(16, 9));
plt.scatter(X_train, Y_train, s=50);
plt.title('Train dataset');
plt.xlabel('X');
plt.ylabel('Y');
_posts/Seminar+5+Trees+Bagging+%28with+Solutions%29.ipynb
evgeniiegorov/evgeniiegorov.github.io
mit
<a id='stamp'></a> Task 1 To define a tree (even this simple one), we need to define the following functions:

Loss function. For regression it can be MSE, MAE, etc. We will use MSE:
$$
\begin{aligned}
& y \in \mathbb{R}^N \\
& \text{MSE}(\hat{y}, y) = \dfrac{1}{N}\|\hat{y} - y\|_2^2
\end{aligned}
$$
Note that for MSE the optimal prediction is just the mean value of the target.

Gain function. We need to choose among different splits by comparing their gain values. It is also reasonable to take into account the number of points in each region of the split:
$$
\begin{aligned}
& R_i := \text{region } i;\ c = \text{current},\ l = \text{left},\ r = \text{right} \\
& Gain(R_c, R_l, R_r) = Loss(R_c) - \left(\frac{|R_l|}{|R_c|}Loss(R_l) + \frac{|R_r|}{|R_c|}Loss(R_r)\right)
\end{aligned}
$$
Also, for efficiency, we should not try all the x values, but only thresholds taken from a histogram of x. <img src="stamp.jpg" alt="Stamp Algo" style="height: 700px;"/> Also, don't forget to return the left and right leaf predictions. Implement the algorithm and, please, put your loss rounded to 3 decimals at the form: https://goo.gl/forms/AshZ8gyirm0Zftz53
def loss_mse(predict, true):
    return np.mean((predict - true) ** 2)

def stamp_fit(x, y):
    root_prediction = np.mean(y)
    root_loss = loss_mse(root_prediction, y)
    gain = []
    _, thresholds = np.histogram(x)
    thresholds = thresholds[1:-1]
    for i in thresholds:
        left_predict = np.mean(y[x < i])
        left_weight = np.sum(x < i) / x.shape[0]
        right_predict = np.mean(y[x >= i])
        right_weight = np.sum(x >= i) / x.shape[0]
        loss = left_weight * loss_mse(left_predict, y[x < i]) + right_weight * loss_mse(right_predict, y[x >= i])
        gain.append(root_loss - loss)
    threshold = thresholds[np.argmax(gain)]
    left_predict = np.mean(y[x < threshold])
    right_predict = np.mean(y[x >= threshold])
    return threshold, left_predict, right_predict

@np.vectorize
def stamp_predict(x, threshold, predict_l, predict_r):
    prediction = predict_l if x < threshold else predict_r
    return prediction

predict_params = stamp_fit(X_train, Y_train)
prediction = stamp_predict(X_test, *predict_params)
loss_mse(prediction, Y_test)

plt.figure(figsize=(16, 9));
plt.scatter(X_test, Y_test, s=50);
plt.plot(X_test, prediction, 'r');
plt.title('Test dataset');
plt.xlabel('X');
plt.ylabel('Y');
_posts/Seminar+5+Trees+Bagging+%28with+Solutions%29.ipynb
evgeniiegorov/evgeniiegorov.github.io
mit
<a id='lim'></a> Limitations Now let's discuss some limitations of decision trees. Consider another toy example. Our target is the distance between the origin $(0;0)$ and the data point $(x_1, x_2)$.
from sklearn.tree import DecisionTreeRegressor

def get_grid(data):
    x_min, x_max = data[:, 0].min() - 1, data[:, 0].max() + 1
    y_min, y_max = data[:, 1].min() - 1, data[:, 1].max() + 1
    return np.meshgrid(np.arange(x_min, x_max, 0.01), np.arange(y_min, y_max, 0.01))

data_x = np.random.normal(size=(100, 2))
data_y = (data_x[:, 0] ** 2 + data_x[:, 1] ** 2) ** 0.5

plt.figure(figsize=(8, 8));
plt.scatter(data_x[:, 0], data_x[:, 1], c=data_y, s=100, cmap='spring');
_posts/Seminar+5+Trees+Bagging+%28with+Solutions%29.ipynb
evgeniiegorov/evgeniiegorov.github.io
mit
Sensitivity with respect to the subsample Let's see how the predictions and the structure of the tree change if we fit it on a random $90\%$ subset of the data.
plt.figure(figsize=(20, 6))

for i in range(3):
    clf = DecisionTreeRegressor(random_state=42)
    indecies = np.random.randint(data_x.shape[0], size=int(data_x.shape[0] * 0.9))
    clf.fit(data_x[indecies], data_y[indecies])

    xx, yy = get_grid(data_x)
    predicted = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)

    plt.subplot2grid((1, 3), (0, i))
    plt.pcolormesh(xx, yy, predicted, cmap='winter')
    plt.scatter(data_x[:, 0], data_x[:, 1], c=data_y, s=30, cmap='winter', edgecolor='k')
_posts/Seminar+5+Trees+Bagging+%28with+Solutions%29.ipynb
evgeniiegorov/evgeniiegorov.github.io
mit
Sensitivity with respect to the hyperparameters
plt.figure(figsize=(14, 14))

for i, max_depth in enumerate([2, 4, None]):
    for j, min_samples_leaf in enumerate([15, 5, 1]):
        clf = DecisionTreeRegressor(max_depth=max_depth, min_samples_leaf=min_samples_leaf)
        clf.fit(data_x, data_y)

        xx, yy = get_grid(data_x)
        predicted = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)

        plt.subplot2grid((3, 3), (i, j))
        plt.pcolormesh(xx, yy, predicted, cmap='spring')
        plt.scatter(data_x[:, 0], data_x[:, 1], c=data_y, s=30, cmap='spring', edgecolor='k')
        plt.title('max_depth=' + str(max_depth) + ', min_samples_leaf: ' + str(min_samples_leaf))
_posts/Seminar+5+Trees+Bagging+%28with+Solutions%29.ipynb
evgeniiegorov/evgeniiegorov.github.io
mit
To overcome these disadvantages, we will consider bagging, or bootstrap aggregation. <a id='bootbag'></a> Bagging <a id='bootbag'></a> Bootstrap Usually, we apply the following approach to an ML problem: we have a finite sample $X=\{x_i\}_{i=1}^{N}$, $x_i\in\mathbb{R}^{d}$ from an unknown, complex distribution $F$, and we fit some machine learning algorithm $T = T(x_1,\dots,x_N)$. However, if we want to study statistical properties of the algorithm, we are in trouble. For the variance:
$$
\mathbb{V}T = \int_{\text{range } x}(T(x))^2dF(x) - \left(\int_{\text{range } x}T(x)dF(x)\right)^2
$$
Troubles:
* We do not have the true distribution $F(x)$
* We cannot analytically integrate over a complex ML algorithm $T$ such as a tree, or even the median

Solutions:
* Model $F(y)$ with the empirical density $p_e(y)$:
$$
p_{e}(y) = \sum\limits_{i=1}^{N}\frac{1}{N}\delta(y-x_i)
$$
* Estimate any integral of the form $\int f(T(x))dF(x)\approx \int f(T(x))dF_{e}(x)$ via Monte Carlo:
$$
\int f(T(x))dF(x)\approx \int f(T(x))dF_{e}(x) \approx \frac{1}{B}\sum\limits_{j=1}^{B}f(T_j),\text{ where } T_j = T(X^j),\ X^j\sim F_e
$$
Note that sampling from $p_e(y)$ is just sampling with replacement from $X=\{x_i\}_{i=1}^{N}$, so it is a cheap and simple procedure.

Let's play with a model example and estimate the variance of the algorithm:
$$
\begin{aligned}
& x_i \in \mathbb{R} \\
& T(X) = \text{median } X
\end{aligned}
$$
Task 1 For this example let's simulate data from a Cauchy distribution.
def median(X):
    return np.median(X)

def make_sample_cauchy(n_samples):
    sample = np.random.standard_cauchy(size=n_samples)
    return sample

X = make_sample_cauchy(int(1e2))
plt.hist(X, bins=int(1e1));
_posts/Seminar+5+Trees+Bagging+%28with+Solutions%29.ipynb
evgeniiegorov/evgeniiegorov.github.io
mit
So, our model median will be:
med = median(X)
med
_posts/Seminar+5+Trees+Bagging+%28with+Solutions%29.ipynb
evgeniiegorov/evgeniiegorov.github.io
mit
The exact variance formula for the sample Cauchy median (with $n = 2k+1$ observations) is the following:
$$
\mathbb{V}\,\text{med}(X_n) = \dfrac{2n!}{(k!)^2\pi^n}\int\limits_{0}^{\pi/2}x^k(\pi-x)^k(\text{cot}\,x)^2dx
$$
So hard! We will find it by the bootstrap method instead. Now, please apply the bootstrap algorithm to calculate its variance. First, you need to write a bootstrap sampler. This will be useful: https://docs.scipy.org/doc/numpy-1.13.0/reference/generated/numpy.random.choice.html#numpy.random.choice
def make_sample_bootstrap(X):
    size = X.shape[0]
    idx_range = range(size)
    new_idx = np.random.choice(idx_range, size, replace=True)
    return X[new_idx]
_posts/Seminar+5+Trees+Bagging+%28with+Solutions%29.ipynb
evgeniiegorov/evgeniiegorov.github.io
mit
Second, you should estimate the median on $K$ bootstrap samples: make K=500 samples; for each sample, estimate the median on it and save it in the median_boot_samples array.
K = 500
median_boot_samples = []
for i in range(K):
    boot_sample = make_sample_bootstrap(X)
    meadian_boot_sample = median(boot_sample)
    median_boot_samples.append(meadian_boot_sample)
median_boot_samples = np.array(median_boot_samples)
_posts/Seminar+5+Trees+Bagging+%28with+Solutions%29.ipynb
evgeniiegorov/evgeniiegorov.github.io
mit
Now we can obtain the mean and variance from median_boot_samples as we usually do in statistics.
mean = np.mean(median_boot_samples)
std = np.std(median_boot_samples)
print(mean, std)
_posts/Seminar+5+Trees+Bagging+%28with+Solutions%29.ipynb
evgeniiegorov/evgeniiegorov.github.io
mit
Please, put your estimation of std rounded to the 3 decimals at the form: https://goo.gl/forms/Qgs4O7U1Yvs5csnM2
plt.hist(median_boot_samples, bins=int(50));
_posts/Seminar+5+Trees+Bagging+%28with+Solutions%29.ipynb
evgeniiegorov/evgeniiegorov.github.io
mit
<a id='rf'></a> Tree + Bootstrap = Random Forest We want to build many different trees and then aggregate their predictions. So we need to specify what makes the trees different and how to aggregate them.

How to aggregate. For base algorithms $b_1(x),\dots, b_N(x)$:
* For a classification task => majority vote: $a(x) = \text{arg}\max_{y}\sum_{i=1}^N[b_i(x) = y]$
* For a regression task => averaging: $a(x) = \frac{1}{N}\sum_{i=1}^{N}b_i(x)$

Different trees. Note that the more the trees differ, the lower the covariance between their predictions, and hence the more we gain from aggregation.
* One source of difference: bootstrap samples, as considered above
* Another one: selecting a random subset of features for fitting each $b_i(x)$

Let's see how it works on our toy task.
from sklearn.ensemble import RandomForestRegressor

clf = RandomForestRegressor(n_estimators=100)
clf.fit(data_x, data_y)

xx, yy = get_grid(data_x)
predicted = clf.predict(np.c_[xx.ravel(), yy.ravel()]).reshape(xx.shape)

plt.figure(figsize=(8, 8));
plt.pcolormesh(xx, yy, predicted, cmap='spring');
plt.scatter(data_x[:, 0], data_x[:, 1], c=data_y, s=100, cmap='spring', edgecolor='k');
_posts/Seminar+5+Trees+Bagging+%28with+Solutions%29.ipynb
evgeniiegorov/evgeniiegorov.github.io
mit
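To connect the aggregation formula above with the forest, here is a hedged sketch (not part of the original seminar) of bagging done by hand: fit several DecisionTreeRegressor instances on bootstrap samples of (data_x, data_y) and average their predictions. RandomForestRegressor does essentially this, plus random feature selection at each split.

n_trees = 25
trees = []
for _ in range(n_trees):
    # Bootstrap sample: draw row indices with replacement
    idx = np.random.randint(0, data_x.shape[0], size=data_x.shape[0])
    trees.append(DecisionTreeRegressor().fit(data_x[idx], data_y[idx]))

# Regression aggregation: average the individual tree predictions
bagged_prediction = np.mean([t.predict(data_x) for t in trees], axis=0)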
Note that all the decision boundaries become much smoother. Now we will compare methods on the Boston dataset.
from sklearn.datasets import load_boston

data = load_boston()
X = data.data
y = data.target
_posts/Seminar+5+Trees+Bagging+%28with+Solutions%29.ipynb
evgeniiegorov/evgeniiegorov.github.io
mit
Task 1 Get the cross-validation score for a variety of algorithms: BaggingRegressor and RandomForestRegressor with different parameters. For example, for a simple decision tree:
from sklearn.model_selection import KFold, cross_val_score

cv = KFold(shuffle=True, random_state=1011)
regr = DecisionTreeRegressor()
print(cross_val_score(regr, X, y, cv=cv, scoring='r2').mean())
_posts/Seminar+5+Trees+Bagging+%28with+Solutions%29.ipynb
evgeniiegorov/evgeniiegorov.github.io
mit
Find the best parameters with CV. Please put your score at https://goo.gl/forms/XZ7xHR54Fjk5cBy92
from sklearn.ensemble import BaggingRegressor
from sklearn.ensemble import RandomForestRegressor

# usual cv code
_posts/Seminar+5+Trees+Bagging+%28with+Solutions%29.ipynb
evgeniiegorov/evgeniiegorov.github.io
mit
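A hedged sketch of the comparison, reusing the cv object defined above and looping over a few illustrative parameter settings (not a definitive grid; tune further as needed):

from sklearn.ensemble import BaggingRegressor, RandomForestRegressor

models = {
    'bagging_10':  BaggingRegressor(n_estimators=10, random_state=1011),
    'bagging_100': BaggingRegressor(n_estimators=100, random_state=1011),
    'rf_10':       RandomForestRegressor(n_estimators=10, random_state=1011),
    'rf_100':      RandomForestRegressor(n_estimators=100, random_state=1011),
}
for name, model in models.items():
    score = cross_val_score(model, X, y, cv=cv, scoring='r2').mean()
    print(name, round(score, 3))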
Create synthetic dataset 1. For the first technology, where "JDBC" was used, create the committed lines.
import numpy as np
import pandas as pd

np.random.seed(0)

# adding period
added_lines = [int(np.random.normal(30,50)) for i in range(0,600)]

# deleting period
added_lines.extend([int(np.random.normal(-50,100)) for i in range(0,200)])
added_lines.extend([int(np.random.normal(-2,20)) for i in range(0,200)])
added_lines.extend([int(np.random.normal(-3,10)) for i in range(0,200)])

df_jdbc = pd.DataFrame()
df_jdbc['lines'] = added_lines
df_jdbc.head()
notebooks/Generating Synthetic Data based on a Git Log.ipynb
feststelltaste/software-analytics
gpl-3.0
Add timestamp
times = pd.timedelta_range("00:00:00", "23:59:59", freq="s")
times = pd.Series(times)
times.head()

dates = pd.date_range('2013-05-15', '2017-07-23')
dates = pd.to_datetime(dates)
dates = dates[~dates.dayofweek.isin([5,6])]
dates = pd.Series(dates)
dates = dates.add(times.sample(len(dates), replace=True).values)
dates.head()

df_jdbc['timestamp'] = dates.sample(len(df_jdbc), replace=True).sort_values().reset_index(drop=True)
df_jdbc = df_jdbc.sort_index()
df_jdbc.head()
notebooks/Generating Synthetic Data based on a Git Log.ipynb
feststelltaste/software-analytics
gpl-3.0
Treat first commit separately Set a fixed value because we have to start with some code at the beginning
df_jdbc.loc[0, 'lines'] = 250
df_jdbc.head()
notebooks/Generating Synthetic Data based on a Git Log.ipynb
feststelltaste/software-analytics
gpl-3.0
Add file names Sample file names including their paths from an existing dataset
df_jdbc['file'] = log[log['type'] == 'jdbc']['file'].sample(len(df_jdbc), replace=True).values
notebooks/Generating Synthetic Data based on a Git Log.ipynb
feststelltaste/software-analytics
gpl-3.0
Check dataset
%matplotlib inline
df_jdbc.lines.hist()
notebooks/Generating Synthetic Data based on a Git Log.ipynb
feststelltaste/software-analytics
gpl-3.0
Sum up the data and check if it was created as wanted.
df_jdbc_timed = df_jdbc.set_index('timestamp')
df_jdbc_timed['count'] = df_jdbc_timed.lines.cumsum()
df_jdbc_timed['count'].plot()

last_non_zero_timestamp = df_jdbc_timed[df_jdbc_timed['count'] >= 0].index.max()
last_non_zero_timestamp

df_jdbc = df_jdbc[df_jdbc.timestamp <= last_non_zero_timestamp]
df_jdbc.head()
notebooks/Generating Synthetic Data based on a Git Log.ipynb
feststelltaste/software-analytics
gpl-3.0
Create synthetic dataset 2
df_jpa = pd.DataFrame([int(np.random.normal(20,50)) for i in range(0,600)], columns=['lines'])
df_jpa.loc[0,'lines'] = 150
df_jpa['timestamp'] = pd.DateOffset(years=2) + dates.sample(len(df_jpa), replace=True).sort_values().reset_index(drop=True)
df_jpa = df_jpa.sort_index()
df_jpa['file'] = log[log['type'] == 'jpa']['file'].sample(len(df_jpa), replace=True).values
df_jpa.head()
notebooks/Generating Synthetic Data based on a Git Log.ipynb
feststelltaste/software-analytics
gpl-3.0
Check dataset
df_jpa.lines.hist()

df_jpa_timed = df_jpa.set_index('timestamp')
df_jpa_timed['count'] = df_jpa_timed.lines.cumsum()
df_jpa_timed['count'].plot()
notebooks/Generating Synthetic Data based on a Git Log.ipynb
feststelltaste/software-analytics
gpl-3.0
Add some noise
dates_other = pd.date_range(df_jdbc.timestamp.min(), df_jpa.timestamp.max())
dates_other = pd.to_datetime(dates_other)
dates_other = dates_other[~dates_other.dayofweek.isin([5,6])]
dates_other = pd.Series(dates_other)
dates_other = dates_other.add(times.sample(len(dates_other), replace=True).values)
dates_other.head()

df_other = pd.DataFrame([int(np.random.normal(5,100)) for i in range(0,40000)], columns=['lines'])
df_other['timestamp'] = dates_other.sample(len(df_other), replace=True).sort_values().reset_index(drop=True)
df_other = df_other.sort_index()
df_other['file'] = log[log['type'] == 'other']['file'].sample(len(df_other), replace=True).values
df_other.head()
notebooks/Generating Synthetic Data based on a Git Log.ipynb
feststelltaste/software-analytics
gpl-3.0
Check dataset
df_other.lines.hist()

df_other_timed = df_other.set_index('timestamp')
df_other_timed['count'] = df_other_timed.lines.cumsum()
df_other_timed['count'].plot()
notebooks/Generating Synthetic Data based on a Git Log.ipynb
feststelltaste/software-analytics
gpl-3.0
Concatenate all datasets
df = pd.concat([df_jpa, df_jdbc, df_other], ignore_index=True).sort_values(by='timestamp')

# split the signed line counts into separate additions and deletions columns
df.loc[df.lines > 0, 'additions'] = df.lines
df.loc[df.lines < 0, 'deletions'] = df.lines * -1
df = df.fillna(0).reset_index(drop=True)
df = df[['additions', 'deletions', 'file', 'timestamp']]

# the rows sharing the very first timestamp may only add lines, never delete them
df.loc[(df.deletions > 0) & (df.loc[0].timestamp == df.timestamp), 'additions'] = df.deletions
df.loc[df.loc[0].timestamp == df.timestamp, 'deletions'] = 0

df['additions'] = df.additions.astype(int)
df['deletions'] = df.deletions.astype(int)
df = df.sort_values(by='timestamp', ascending=False)
df.head()
notebooks/Generating Synthetic Data based on a Git Log.ipynb
feststelltaste/software-analytics
gpl-3.0
Truncate data until fixed date
df = df[df.timestamp < pd.Timestamp('2018-01-01')]
df.head()
notebooks/Generating Synthetic Data based on a Git Log.ipynb
feststelltaste/software-analytics
gpl-3.0
Export the data
df.to_csv("datasets/git_log_refactoring.gz", index=None, compression='gzip')
notebooks/Generating Synthetic Data based on a Git Log.ipynb
feststelltaste/software-analytics
gpl-3.0
Check loaded data
df_loaded = pd.read_csv("datasets/git_log_refactoring.gz")
df_loaded.head()
df_loaded.info()
notebooks/Generating Synthetic Data based on a Git Log.ipynb
feststelltaste/software-analytics
gpl-3.0