Each record below has five fields: markdown (cell text), code (cell source), path (notebook path), repo_name, and license.
Q8. Compute softmax cross entropy between logits and labels.
logits = tf.random_normal(shape=[2, 5, 10])
labels = tf.convert_to_tensor(np.random.randint(0, 10, size=[2, 5]), tf.int32)
labels = tf.one_hot(labels, depth=10)
output = tf.nn....
with tf.Session() as sess:
    print(sess.run(output))
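One possible way to fill in the blank, assuming the TensorFlow 1.x API used throughout these exercises, is the softmax cross-entropy op that accepts one-hot labels (treat this as a sketch of a solution, not the only valid one):

# possible answer: per-position cross entropy between one-hot labels and logits
output = tf.nn.softmax_cross_entropy_with_logits(labels=labels, logits=logits)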
programming/Python/tensorflow/exercises/Neural_Network_Part2.ipynb
diegocavalca/Studies
cc0-1.0
Embeddings Q9. Map tensor x to the embedding.
tf.reset_default_graph()
x = tf.constant([0, 2, 1, 3, 4], tf.int32)
embedding = tf.constant([0, 0.1, 0.2, 0.3, 0.4], tf.float32)
output = tf.nn....
with tf.Session() as sess:
    print(sess.run(output))
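A likely intended answer is an embedding lookup, which gathers entries of the embedding tensor at the given indices (offered here as a sketch):

# possible answer: look up embedding values at the indices in x
output = tf.nn.embedding_lookup(embedding, x)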
programming/Python/tensorflow/exercises/Neural_Network_Part2.ipynb
diegocavalca/Studies
cc0-1.0
Choose most probable B-events
_, take_indices = numpy.unique(data[event_id_column], return_index=True)

figure(figsize=[15, 5])
subplot(1, 2, 1)
hist(data.Bmass.values[take_indices], bins=100)
title('B mass hist')
xlabel('mass')

subplot(1, 2, 2)
hist(data.N_sig_sw.values[take_indices], bins=100, normed=True)
title('sWeights hist')
xlabel('signal sWeights')
# plt.savefig('img/Bmass_less_PID.png', format='png')
Stefania_files/track-based-tagging-track-sign-usage.ipynb
tata-antares/tagging_LHCb
apache-2.0
Define B-like events for training. Events with a low sWeight will still be used, but only to test quality.
sweight_threshold = 1.
data_sw_passed = data[data.N_sig_sw > sweight_threshold]
data_sw_not_passed = data[data.N_sig_sw <= sweight_threshold]
get_events_statistics(data_sw_passed)

_, take_indices = numpy.unique(data_sw_passed[event_id_column], return_index=True)

figure(figsize=[15, 5])
subplot(1, 2, 1)
hist(data_sw_passed.Bmass.values[take_indices], bins=100)
title('B mass hist for sWeight > 1 selection')
xlabel('mass')

subplot(1, 2, 2)
hist(data_sw_passed.N_sig_sw.values[take_indices], bins=100, normed=True)
title('sWeights hist for sWeight > 1 selection')
xlabel('signal sWeights')
# plt.savefig('img/Bmass_selected_less_PID.png', format='png')

hist(data_sw_passed.diff_pt.values, bins=100)
pass
Stefania_files/track-based-tagging-track-sign-usage.ipynb
tata-antares/tagging_LHCb
apache-2.0
Main idea: find tracks which, given the track sign, help reconstruct the sign of the B.

label = signB * signTrack

* the highest output means the B has the same sign as the track
* the lowest output means the B has the opposite sign to the track

Define features
features = list(set(data.columns) - {'index', 'run', 'event', 'i', 'signB', 'N_sig_sw', 'Bmass', 'mult',
                                     'PIDNNp', 'PIDNNpi', 'label', 'thetaMin', 'Dist_phi', event_id_column,
                                     'mu_cut', 'e_cut', 'K_cut', 'ID', 'diff_phi', 'group_column'})
features
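For reference, the label described above could be built along these lines; this is only an illustrative sketch and assumes the DataFrame has signB and signTrack columns taking values of plus or minus one:

# +1 when the B and the track have the same sign, -1 otherwise
data['label'] = data.signB.values * data.signTrack.values
# a 0/1 classifier target could be obtained with (data.label > 0).astype(int)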
Stefania_files/track-based-tagging-track-sign-usage.ipynb
tata-antares/tagging_LHCb
apache-2.0
PID pair scatter plots
figure(figsize=[15, 16])
bins = 60
step = 3
for i, (feature1, feature2) in enumerate(combinations(['PIDNNk', 'PIDNNm', 'PIDNNe', 'PIDNNp', 'PIDNNpi'], 2)):
    subplot(4, 3, i + 1)
    Z, (x, y) = numpy.histogramdd(data_sw_passed[[feature1, feature2]].values, bins=bins, range=([0, 1], [0, 1]))
    pcolor(numpy.log(Z).T, vmin=0)
    xlabel(feature1)
    ylabel(feature2)
    xticks(numpy.arange(bins, step), x[::step]), yticks(numpy.arange(bins, step), y[::step])
# plt.savefig('img/PID_selected_less_PID.png', format='png')
Stefania_files/track-based-tagging-track-sign-usage.ipynb
tata-antares/tagging_LHCb
apache-2.0
count of tracks
_, n_tracks = numpy.unique(data_sw_passed[event_id_column], return_counts=True)
hist(n_tracks, bins=100)
title('Number of tracks')
# plt.savefig('img/tracks_number_less_PID.png', format='png')
Stefania_files/track-based-tagging-track-sign-usage.ipynb
tata-antares/tagging_LHCb
apache-2.0
DT
from rep.estimators import XGBoostClassifier xgb_base = XGBoostClassifier(n_estimators=100, colsample=0.7, eta=0.01, nthreads=12, subsample=0.1, max_depth=6) xgb_folding = FoldingGroupClassifier(xgb_base, n_folds=2, random_state=11, train_features=features, group_feature='group_column') %time xgb_folding.fit_lds(data_sw_passed_lds) pass comparison_report = ClassificationReport({'tt': xgb_folding}, data_sw_passed_lds) comparison_report.compute_metric(RocAuc()) lc = comparison_report.learning_curve(RocAuc(), steps=1) lc tt_base = DecisionTrainClassifier(learning_rate=0.02, n_estimators=3000, depth=6, pretransform_needed=True, max_features=15, loss=LogLossFunction(regularization=100), n_threads=8) tt_folding = FoldingGroupClassifier(SklearnClassifier(tt_base), n_folds=2, random_state=11, train_features=features, group_feature='group_column') %time tt_folding.fit_lds(data_sw_passed_lds) pass import cPickle with open('models/dt_full_group_sign.pkl', 'w') as f: cPickle.dump(tt_folding, f) comparison_report = ClassificationReport({'tt': tt_folding}, data_sw_passed_lds) comparison_report.compute_metric(RocAuc()) comparison_report.roc() lc = comparison_report.learning_curve(RocAuc(), steps=1) lc comparison_report.feature_importance()
Stefania_files/track-based-tagging-track-sign-usage.ipynb
tata-antares/tagging_LHCb
apache-2.0
Triangular mesh generation In the last lesson we learned how to create a quad mesh by Transfinite Interpolation to accurately approximate the strong topography of a sea dike. We can use this mesh for spectral element modelling. But what should we do if we need a triangular mesh, for example for Finite Element modelling? Yigma Tepe - another problem with strong topography You might think that the sea dike topography was already quite complex. Well, here is another problem we are currently working on at the "Applied Geophysics" group at Christian-Albrechts University Kiel: <img src="images/yigmatepe_1.jpg" width="100%"> This is Yigma Tepe, a tumulus located in Pergamon (Turkey) which might be the burial of an Attalid king. Extensive geophysical prospecting and archaeological excavations revealed small-scale near-surface structures, which makes Yigma Tepe an ideal target to further develop 2D SH and 3D seismic full waveform inversion in the field of archaeological prospection. A critical part is the correct discretization of the complex free-surface topography, crucial for accurate modelling of surface waves. Let's take a look at a 2D quad mesh created by Transfinite Interpolation for an SH profile along the tumulus.
# Import Libraries
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt

# Here, I introduce a new library, which is useful
# to define the fonts and size of a figure in a notebook
from pylab import rcParams

# Get rid of a Matplotlib deprecation warning
import warnings
warnings.filterwarnings("ignore")

# Load Yigma Tepe quad mesh created by TFI
X = np.loadtxt('data/yigma_tepe_TFI_mesh_X.dat', delimiter=' ', skiprows=0, unpack='True')
Z = np.loadtxt('data/yigma_tepe_TFI_mesh_Z.dat', delimiter=' ', skiprows=0, unpack='True')

# number of grid points in each spatial direction
NZ, NX = np.shape(X)
print("NX = ", NX)
print("NZ = ", NZ)

# Define figure size
rcParams['figure.figsize'] = 10, 7

# Plot Yigma Tepe TFI mesh
plt.plot(X, Z, 'k')
plt.plot(X.T, Z.T, 'k')
plt.plot(X, Z, 'bo', markersize=4)
plt.title("Yigma Tepe TFI mesh")
plt.xlabel("x [m]")
plt.ylabel("z [m]")
plt.axes().set_aspect('equal')
#plt.savefig('yigma_tepe_TFI.pdf', bbox_inches='tight', format='pdf')
plt.show()
02_Mesh_generation/4_Tri_mesh_delaunay_yigma_tepe.ipynb
daniel-koehn/Theory-of-seismic-waves-II
gpl-3.0
This quad mesh is already able to accurately describe the free-surface topography. Triangular mesh generation If we need a triangular mesh, for example for finite element or finite volume modelling, we could apply Delaunay triangulation to the node point distribution of the Yigma Tepe TFI mesh. For further details related to Delaunay triangulation, I refer to Computational Geometry in Python by Francisco Blanco-Silva. In a first step, we assemble the x- and z-vectors in a (NX*NZ x 2) matrix.
# Reshape X and Z vector
x = X.flatten()
z = Z.flatten()

# Assemble x and z vector into NX*NZ x 2 matrix
points = np.vstack([x, z]).T
02_Mesh_generation/4_Tri_mesh_delaunay_yigma_tepe.ipynb
daniel-koehn/Theory-of-seismic-waves-II
gpl-3.0
Next, we compute the Voronoi diagram for the mesh points. This describes the partitioning of a plane with n points into convex polygons such that each polygon contains exactly one generating point and every point in a given polygon is closer to its generating point than to any other.
# calculate and plot Voronoi diagram for mesh points
from scipy.spatial import Voronoi, voronoi_plot_2d

vor = Voronoi(points)

plt.figure(figsize=(12, 6))
ax = plt.subplot(111, aspect='equal')
voronoi_plot_2d(vor, ax=ax)
plt.title("Part of Yigma Tepe (Voronoi diagram)")
plt.xlabel("x [m]")
plt.ylabel("z [m]")
plt.xlim(25, 75)
plt.ylim(10, 35)
plt.show()
02_Mesh_generation/4_Tri_mesh_delaunay_yigma_tepe.ipynb
daniel-koehn/Theory-of-seismic-waves-II
gpl-3.0
The Delaunay triangulation creates triangles by connecting the points in neighbouring Voronoi cells.
# Apply Delaunay triangulation to the quad mesh node points
from scipy.spatial import Delaunay

tri = Delaunay(points)

plt.figure(figsize=(12, 6))
ax = plt.subplot(111, aspect='equal')
voronoi_plot_2d(vor, ax=ax)
plt.triplot(points[:, 0], points[:, 1], tri.simplices.copy(), linewidth=3, color='b')
plt.title("Part of Yigma Tepe (Voronoi diagram & Delaunay triangulation)")
plt.xlabel("x [m]")
plt.ylabel("z [m]")
plt.xlim(25, 75)
plt.ylim(10, 35)
plt.show()
02_Mesh_generation/4_Tri_mesh_delaunay_yigma_tepe.ipynb
daniel-koehn/Theory-of-seismic-waves-II
gpl-3.0
Let's take a look at the final mesh for the Yigma Tepe model
# Plot triangular mesh
plt.triplot(points[:, 0], points[:, 1], tri.simplices.copy())
plt.title("Yigma Tepe Delaunay mesh")
plt.xlabel("x [m]")
plt.ylabel("z [m]")
plt.axes().set_aspect('equal')
plt.show()
02_Mesh_generation/4_Tri_mesh_delaunay_yigma_tepe.ipynb
daniel-koehn/Theory-of-seismic-waves-II
gpl-3.0
The regular triangulation within the tumulus looks reasonable. However, the Delaunay triangulation also added unwanted triangles above the topography. To solve this problem we have to use constrained Delaunay triangulation in order to restrict the triangulation to the model below the free-surface topography. Unfortunately, constrained Delaunay triangulation is not available in SciPy. However, there is a Python wrapper for the mesh generator Triangle by Richard Shewchuk. The wrapper can be installed with

git clone --depth=1 https://github.com/drufat/triangle.git
cd triangle
python setup.py install

A detailed documentation of the Triangle Python wrapper by Dzhelil Rufat can be found here.
# import triangulate library
from triangle import triangulate, show_data, plot as tplot
import triangle
02_Mesh_generation/4_Tri_mesh_delaunay_yigma_tepe.ipynb
daniel-koehn/Theory-of-seismic-waves-II
gpl-3.0
In order to use the constrained Delaunay triangulation, we obviously have to define the constraining vertex points lying on the boundaries of our model. In this case it is quite easy, because the TFI mesh is regular. OK, perhaps not so easy, because we have to be sure that no redundant points are in the final list and the points are sequentially defined in clockwise direction.
# Estimate boundary points

# surface topography
surf = np.vstack([X[9, :-2], Z[9, :-2]]).T

# right model boundary
right = np.vstack([X[1:, 69], Z[1:, 69]]).T

# bottom model boundary
bottom = np.vstack([X[0, 1:], Z[0, 1:]]).T

# left model boundary
left = np.vstack([X[:-2, 0], Z[:-2, 0]]).T

# assemble model boundary
model_stack = np.vstack([surf, np.flipud(right)])
model_stack1 = np.vstack([model_stack, np.flipud(bottom)])
model_bound = np.vstack([model_stack1, left])
02_Mesh_generation/4_Tri_mesh_delaunay_yigma_tepe.ipynb
daniel-koehn/Theory-of-seismic-waves-II
gpl-3.0
The above code looks a little chaotic, but you can check that the points in the resulting array model_bound are correctly sorted and contain no redundant points.
plt.plot(model_bound[:, 0], model_bound[:, 1], 'bo')
plt.title("Yigma Tepe model boundary")
plt.xlabel("x [m]")
plt.ylabel("z [m]")
plt.axes().set_aspect('equal')
plt.show()
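If you prefer a programmatic check to the visual one, something along these lines (assuming a NumPy version that supports the axis argument of np.unique) confirms that no boundary point appears twice:

# sanity check: every boundary point should be unique
n_unique = len(np.unique(model_bound, axis=0))
print(n_unique == len(model_bound))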
02_Mesh_generation/4_Tri_mesh_delaunay_yigma_tepe.ipynb
daniel-koehn/Theory-of-seismic-waves-II
gpl-3.0
Good, now we have defined the model boundary points. Time for some constrained Delaunay triangulation ...
# define vertices (no redundant points)
vert = model_bound

# apply Delaunay triangulation to vertices
tri = triangle.delaunay(vert)

# define vertex markers
vertm = np.array(np.zeros((len(vert), 1)), dtype='int32')

# define how the vertices are connected, e.g. point 0 is connected to point 1,
# point 1 to point 2 and so on ...
points1 = np.arange(len(vert))
points2 = np.arange(len(vert)) + 1

# last point is connected to the first point
points2[-1] = 0

# define connectivity of boundary polygon
seg = np.array(np.vstack([points1, points2]).T, dtype='int32')

# define marker for boundary polygon
segm = np.array(np.ones((len(seg), 1)), dtype='int32')

# assemble dictionary for triangle optimisation
A = dict(vertices=vert, vertex_markers=vertm, segments=seg, segment_markers=segm, triangles=tri)

# Optimise initial triangulation
cndt = triangle.triangulate(A, 'pD')

ax = plt.subplot(111, aspect='equal')
tplot.plot(ax, **cndt)
02_Mesh_generation/4_Tri_mesh_delaunay_yigma_tepe.ipynb
daniel-koehn/Theory-of-seismic-waves-II
gpl-3.0
Very good, compared to the SciPy Delaunay triangulation, no triangles are added above the topography. However, most triangles have very small minimum angles, which would lead to serious numerical issues in later finite element modelling runs. So in the next step we restrict the minimum angle to 20° using the option q20.
cncfq20dt = triangulate(A, 'pq20D')

ax = plt.subplot(111, aspect='equal')
tplot.plot(ax, **cncfq20dt)
02_Mesh_generation/4_Tri_mesh_delaunay_yigma_tepe.ipynb
daniel-koehn/Theory-of-seismic-waves-II
gpl-3.0
Finally, we want a more even distribution of the triangle sizes. This can be achieved by imposing a maximum area on the triangles with the option a20.
cncfq20adt = triangulate(A, 'pq20a20D')

ax = plt.subplot(111, aspect='equal')
tplot.plot(ax, **cncfq20adt)
02_Mesh_generation/4_Tri_mesh_delaunay_yigma_tepe.ipynb
daniel-koehn/Theory-of-seismic-waves-II
gpl-3.0
All hypotheses discussed herein will be expressed with Gaussian / normal distributions. Let's look at the properties of this distribution. Start by plotting it. We'll set the mean to 0 and the width to 1... the standard normal distribution.
x = np.arange(-10, 10, 0.001)
plt.plot(x, norm.pdf(x, 0, 1))  # final arguments are mean and width
error_statistics-101/error_stats_and_severity.ipynb
gitreset/Data-Science-45min-Intros
unlicense
Now look at the cumulative distribution function of the standard normal, which integrates from negative infinity up to the function argument, on a unit-normalized distribution.
norm.cdf(0)
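As a quick sanity check on the CDF, the probability mass within one standard deviation of the mean should come out near 0.68:

# P(-1 < X < 1) for the standard normal, roughly 0.6827
norm.cdf(1) - norm.cdf(-1)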
error_statistics-101/error_stats_and_severity.ipynb
gitreset/Data-Science-45min-Intros
unlicense
The function also accepts a list.
norm.cdf([-1., 0, 1])
error_statistics-101/error_stats_and_severity.ipynb
gitreset/Data-Science-45min-Intros
unlicense
Now let's be more explicit about the parameters of the distribution.
mu = 0
sigma = 1
norm(loc=mu, scale=sigma)
norm.cdf([-1., 0, 1])

sigma = 2
mu = 0
n = norm(loc=mu, scale=sigma)
n.cdf([-1., 0, 1])
error_statistics-101/error_stats_and_severity.ipynb
gitreset/Data-Science-45min-Intros
unlicense
In addition to exploring properties of the exact function, we can sample points from it.
[normal() for _ in range(5)]
error_statistics-101/error_stats_and_severity.ipynb
gitreset/Data-Science-45min-Intros
unlicense
We can also approximate the exact distribution by sampling a large number of points from it.
size = 1000000
num_bins = 300
plt.hist([normal() for _ in range(size)], num_bins)
plt.xlim([-10, 10])
error_statistics-101/error_stats_and_severity.ipynb
gitreset/Data-Science-45min-Intros
unlicense
Data samples If we have a sample of points, we can summarize them in a model-nonspecific way by calculating the mean. Here, we draw them from a Gaussian for convenience.
n = 10
my_sample = [normal() for _ in range(n)]
my_sample_mean = np.mean(my_sample)
print(my_sample_mean)
error_statistics-101/error_stats_and_severity.ipynb
gitreset/Data-Science-45min-Intros
unlicense
Now let's generate a large number of data samples and plot the corresponding distribution of sample means.
n = 10
means_10 = []
for _ in range(10000):
    my_sample = [normal() for _ in range(n)]
    my_sample_mean = np.mean(my_sample)
    means_10.append(my_sample_mean)
plt.hist(means_10, 100)
plt.xlim([-1.5, 1.5])
plt.xlabel("P(mean(X))")
plt.show()

n = 100
means_100 = []
for _ in range(10000):
    my_sample = [normal() for _ in range(n)]
    my_sample_mean = np.mean(my_sample)
    means_100.append(my_sample_mean)
plt.hist(means_100, 100)
plt.xlim([-1.5, 1.5])
plt.xlabel("P(mean(X))")
plt.show()

# show 1/sqrt(n) scaling of deviation
n_s = []
std_100 = []
for i in range(1, 1000, 50):
    means_100 = []
    for _ in range(5000):
        my_sample = [normal() for _ in range(i)]
        my_sample_mean = np.mean(my_sample)
        means_100.append(my_sample_mean)
    my_sample_std = np.std(means_100)
    std_100.append(1. / (my_sample_std * my_sample_std))
    n_s.append(i)
plt.scatter(n_s, std_100)
plt.xlim([0, 1000])
plt.ylabel("std(mean(X;sample))")
plt.xlabel("sample")
plt.show()
error_statistics-101/error_stats_and_severity.ipynb
gitreset/Data-Science-45min-Intros
unlicense
Note that by increasing the number of data points, the variation on the mean decreases. Notation: the variable containing all possible n-sized sets of samples is called $X$. A specific $X$, like the one actually observed in an experiment, is called $X_0$. What can we say about the data? Are the data consistent with having been sampled from a certain distribution? If not, what distribution are they consistent with?

Hypotheses In our tutorial, a hypothesis is expressed as a distribution from which the data may have been drawn. Our goal is to provide a procedure for rejecting the null hypothesis and, in the case of rejecting the null, to provide warranted inference of one or more alternate hypotheses. Simplification: the hypothesis space is defined as all normal distributions with variable mean $\mu$ and fixed variance. Generalizing this assumption changes almost nothing. Corollary: the hypothesis space is one-dimensional, and the logical not of a hypothesis is simple to comprehend.

A Test Statistic To relate observed data to hypotheses, we need to define a test statistic, which summarizes a particular experimental result. This statistic is also a function of the hypothesis, and will have different sampling distributions under different hypotheses. $d(X;H_\mu) = (\bar X - \mu)/(\sigma/\sqrt n)$, where $\bar X$ is the mean of $X$. For Gaussian hypotheses, $d$ is distributed as a unit normal.
def d(X=[0], mu=0, sigma=1):
    X_bar = np.mean(X)
    return (X_bar - mu) / sigma * np.sqrt(len(X))

n = 10
my_sample = [normal() for _ in range(n)]
d(my_sample)
error_statistics-101/error_stats_and_severity.ipynb
gitreset/Data-Science-45min-Intros
unlicense
Let's numerically determine the sampling distribution under the hypothesis: $H_0$: $\mu = 0, \sigma = 1$
size = 100000
n = 10
d_sample = []
for _ in range(size):
    my_sample = [normal() for _ in range(n)]  # get a sample of size n
    d_sample.append(d(my_sample))  # add test statistic for this sample to the list
plt.hist(d_sample, 100)
plt.xlabel("P(d(X);H0)")
error_statistics-101/error_stats_and_severity.ipynb
gitreset/Data-Science-45min-Intros
unlicense
With this sampling distribution (which can be calculated exactly), we know exactly how likely a particular result $d(X_0)$ is. We also know how likely it is to observe a result that is even less probable than $d(X_0)$: $P(d(X) > d(X_0); \mu)$.

Rejecting the null This probability is the famous p-value. When the p-value for a particular experimental outcome is less than some pre-determined amount (usually called $\alpha$), we can: infer that $H_0$ is falsified at level $\alpha$; take the action that has been specified for this situation; infer that $X_0$ indicates something about an alternate hypothesis. If $H_0$ corresponds to $\mu = \mu_0$, then we infer that $\mu > \mu_0 + \delta$. If $H_0$ is rejected, we can now also begin to speak about statistical properties of $H_1$, where $H_1 \neq H_0$.

Neyman-Pearson digression The traditional frequentist procedure (due to Neyman and Pearson) is to construct a test that fixes the probability of rejecting $H_0$ when it's true, and maximizes the power: the probability of statistical similarity with $H_1$ when it is true. In other words, for a fixed probability of rejecting $H_0$ when it's true, maximize the probability of accepting $H_1$ when it's true. The N-P construction is fixed before $X_0$ is observed. We wish to extend this and, when $H_0$ is rejected, infer regions of alternate parameter space that are severely tested by the outcome $X_0$.

Inference of an alternate hypothesis When the null hypothesis is rejected, we are interested in ranges of alternate hypotheses that, if not true, are highly likely to have produced a test statistic less significant than $d(X_0)$. We say these ranges of parameter space, which can be thought of as composite hypotheses, have been severely tested. We call the level of testing severity, and it is a function of the observed data ($X_0$), the range of alternate hypotheses ($H$), and the test construction itself. This is the point of the tutorial: we are warranted to infer ranges of hypothesis space when that range has been severely tested.
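Before moving on, here is one way the p-value itself could be computed, both by Monte Carlo from the d_sample generated above and exactly from the unit normal; this is an illustrative sketch that assumes the variables from the previous cells (d, my_sample, d_sample, norm) are still in scope:

d_obs = d(my_sample)                              # test statistic for the observed sample
p_value_mc = np.mean(np.array(d_sample) > d_obs)  # Monte-Carlo estimate of P(d(X) > d(X0); H0)
p_value_exact = 1 - norm.cdf(d_obs)               # exact value, since d is unit-normal under H0
print(p_value_mc, p_value_exact)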
# look at the distributions of sample means for two hypotheses
def make_histograms(mu0=0, mu1=1, num_samples=10000, n=100, sigma=1):
    #d0_sample = []
    #d1_sample = []
    m0_sample = []
    m1_sample = []
    for _ in range(num_samples):
        H0_sample = [normal(loc=mu0, scale=sigma) for _ in range(n)]  # get a sample of size n from H0
        H1_sample = [normal(loc=mu1, scale=sigma) for _ in range(n)]  # get a sample of size n from H1
        m0_sample.append(np.mean(H0_sample))  # add mean for this sample to the m0 list
        m1_sample.append(np.mean(H1_sample))  # add mean for this sample to the m1 list
        # remember that the test statistic is unit-normal-distributed for Gaussian hypotheses,
        # so these distributions should be identical
        #d0_sample.append( d(H0_sample,mu0,sigma) )  # add test statistic for this sample to the d0 list
        #d1_sample.append( d(H1_sample,mu1,sigma) )  # add test statistic for this sample to the d1 list
    plt.hist(m0_sample, 100, label="H0")
    plt.hist(m1_sample, 100, label="H1")
    plt.xlabel(r"$\bar{X}$")
    plt.legend()

num_samples = 10000
n = 100
mu0 = 0
mu1 = 1
sigma = 2
make_histograms(mu0=mu0, mu1=mu1, num_samples=num_samples, n=n, sigma=sigma)
error_statistics-101/error_stats_and_severity.ipynb
gitreset/Data-Science-45min-Intros
unlicense
Now, imagine that we observe $\bar X_0 = 0.4$. The probability of $\bar X > 0.4$ is about $2\%$ under $H_0$, so let's say we've rejected $H_0$. Question: what regions of $\mu$ (defined as $\mu > \mu_1$) have been severely tested? $SEV(\mu>\mu_1) = P(d(X)<d(X_0); \lnot(\mu>\mu_1)) = P(d(X)<d(X_0); \mu \le \mu_1) \rightarrow P(d(X)<d(X_0); \mu = \mu_1)$ So we only need to calculate the probability of a result less anomalous than $d(X_0)$, given $\mu_1$.
# severity for the interval: mu > mu_1
# note that we calculate the probability in terms of the _lower bound_ of the interval,
# since it will provide the _lowest_ severity
def severity(mu_1=0, x=[0], mu0=0, sigma=sigma, n=100):
    # find the mean of the observed data
    x_bar = np.mean(x)
    # calculate the test statistic w.r.t. mu_1
    dx = (x_bar - mu_1) / sigma * np.sqrt(n)
    # the test statistic is distributed as a unit normal
    n = norm()
    return n.cdf(dx)
error_statistics-101/error_stats_and_severity.ipynb
gitreset/Data-Science-45min-Intros
unlicense
Calculate the severity of an outcome that is rather unlike (i.e., lower than) the lower bound of a range of alternate hypotheses ($\mu > \mu_1$).
sigma = 2
mu_1 = 0.2
x = [0.4]
severity(mu_1=mu_1, x=x, sigma=sigma)

num_samples = 10000
n = 100
mu0 = 0
mu1 = 0.2
sigma = 2
make_histograms(mu0=mu0, mu1=mu1, num_samples=num_samples, n=n, sigma=sigma)
error_statistics-101/error_stats_and_severity.ipynb
gitreset/Data-Science-45min-Intros
unlicense
Calculate the severity for a set of observations.
x_bar_values = [[0.4], [0.6], [1.]]
color_indices = ["b", "k", "r"]
for x, color_idx in zip(x_bar_values, color_indices):
    mu_values = scipy.linspace(0, 1, 100)
    sev = [severity(mu_1=mu_1, x=x, sigma=sigma) for mu_1 in mu_values]
    plt.plot(mu_values, sev, color_idx, label=x)
plt.ylim(0, 1.1)
plt.ylabel("severity for $H: \mu > \mu_1$")
plt.legend(loc="lower left")
plt.xlabel(r"$\mu_1$")
error_statistics-101/error_stats_and_severity.ipynb
gitreset/Data-Science-45min-Intros
unlicense
Create a mock light curve
lc = MockLC(SimulationSetup('M', 0.1, 0.0, 0.0, 'short_transit', cteff=5500, know_orbit=True))
lc.create(wnsigma=[0.001, 0.001, 0.001, 0.001], rnsigma=0.00001, rntscale=0.5, nights=1);
lc.plot();
notebooks/contamination/example_1b.ipynb
hpparvi/PyTransit
gpl-2.0
If we want to do any "feature engineering" like creating new features or adjusting existing ones, we should do this directly using the SFrames as seen in the other Week 2 notebook. For this notebook, however, we will work with the existing features. Convert to Numpy Array Although SFrames offer a number of benefits to users (especially when using Big Data and built-in graphlab functions), in order to understand the details of the implementation of algorithms it's important to work with a library that allows for direct (and optimized) matrix operations. Numpy is a Python solution for working with matrices (or any multi-dimensional "array"). Recall that the predicted value given the weights and the features is just the dot product between the feature and weight vectors. Similarly, if we put all of the features row-by-row in a matrix then the predicted values for all the observations can be computed by right multiplying the "feature matrix" by the "weight vector". First we need to take the SFrame of our data and convert it into a 2D numpy array (also called a matrix). To do this we use graphlab's built-in .to_dataframe() which converts the SFrame into a Pandas (another Python library) dataframe. We can then use Pandas' .as_matrix() to convert the dataframe into a numpy matrix.
print sales.head()

import numpy as np  # note this allows us to refer to numpy as np instead
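The helper functions get_numpy_data and predict_output used in the cells below are not shown in this excerpt; a rough sketch of what they might look like, based on the description above (the exact assignment code may differ), is:

def get_numpy_data(data_sframe, features, output):
    # prepend a constant column so that w[0] acts as the intercept term
    data_sframe['constant'] = 1
    features = ['constant'] + features
    # SFrame -> pandas DataFrame -> numpy matrix, as described above
    feature_matrix = data_sframe[features].to_dataframe().as_matrix()
    output_array = np.array(data_sframe[output])
    return (feature_matrix, output_array)

def predict_output(feature_matrix, weights):
    # predictions are the dot product of the feature matrix and the weight vector
    return np.dot(feature_matrix, weights)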
Machine-Learning-Specialization/machine_learning_regression/week2/multiple-regression-assignment-2.ipynb
subhankarb/Machine-Learning-PlayGround
apache-2.0
Computing the Derivative We are now going to move to computing the derivative of the regression cost function. Recall that the cost function is the sum over the data points of the squared difference between an observed output and a predicted output. Since the derivative of a sum is the sum of the derivatives, we can compute the derivative for a single data point and then sum over data points. We can write the squared difference between the observed output and predicted output for a single point as follows: (w[0]*[CONSTANT] + w[1]*[feature_1] + ... + w[i]*[feature_i] + ... + w[k]*[feature_k] - output)^2 Where we have k features and a constant. So the derivative with respect to weight w[i] by the chain rule is: 2*(w[0]*[CONSTANT] + w[1]*[feature_1] + ... + w[i]*[feature_i] + ... + w[k]*[feature_k] - output)*[feature_i] The term inside the parentheses is just the error (difference between prediction and output). So we can re-write this as: 2*error*[feature_i] That is, the derivative for the weight for feature i is the sum (over data points) of 2 times the product of the error and the feature itself. In the case of the constant this is just twice the sum of the errors! Recall that twice the sum of the product of two vectors is just twice the dot product of the two vectors. Therefore the derivative for the weight for feature_i is just two times the dot product between the values of feature_i and the current errors. With this in mind complete the following derivative function which computes the derivative of the weight given the value of the feature (over all data points) and the errors (over all data points).
def feature_derivative(errors, feature):
    # Assume that errors and feature are both numpy arrays of the same length (number of data points)
    # compute twice the dot product of these vectors as 'derivative' and return the value
    derivative = 2 * np.dot(errors, feature)
    return derivative
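As a quick sanity check of the statement above (for the constant feature, the derivative is just twice the sum of the errors), you could run something like the following; the variable names here are made up for the illustration:

test_errors = np.array([1., -2., 3.])
test_constant = np.ones(len(test_errors))
# twice the dot product with a vector of ones equals twice the sum of the errors
print feature_derivative(test_errors, test_constant), 2 * test_errors.sum()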
Machine-Learning-Specialization/machine_learning_regression/week2/multiple-regression-assignment-2.ipynb
subhankarb/Machine-Learning-PlayGround
apache-2.0
Gradient Descent Now we will write a function that performs a gradient descent. The basic premise is simple. Given a starting point we update the current weights by moving in the negative gradient direction. Recall that the gradient is the direction of increase and therefore the negative gradient is the direction of decrease, and we're trying to minimize a cost function. The amount by which we move in the negative gradient direction is called the 'step size'. We stop when we are 'sufficiently close' to the optimum. We define this by requiring that the magnitude (length) of the gradient vector be smaller than a fixed 'tolerance'. With this in mind, complete the following gradient descent function below using your derivative function above. For each step in the gradient descent we update the weight for each feature before computing our stopping criteria.
from math import sqrt  # recall that the magnitude/length of a vector [g[0], g[1], g[2]] is sqrt(g[0]^2 + g[1]^2 + g[2]^2)

def regression_gradient_descent(feature_matrix, output, initial_weights, step_size, tolerance):
    converged = False
    weights = np.array(initial_weights)  # make sure it's a numpy array
    count = 0
    while not converged:
        print 'weights in ', count, 'iteration is: ', weights
        # compute the predictions based on feature_matrix and weights using your predict_output() function
        predictions = np.dot(feature_matrix, weights)
        # compute the errors as predictions - output
        errors = predictions - output
        gradient_sum_squares = 0  # initialize the gradient sum of squares
        # while we haven't reached the tolerance yet, update each feature's weight
        for i in range(len(weights)):  # loop over each weight
            # Recall that feature_matrix[:, i] is the feature column associated with weights[i]
            # compute the derivative for weight[i]:
            derivative = feature_derivative(errors, feature_matrix[:, i])
            # add the squared value of the derivative to the gradient sum of squares (for assessing convergence)
            gradient_sum_squares = gradient_sum_squares + (derivative * derivative)
            # subtract the step size times the derivative from the current weight
            weights[i] = weights[i] - (step_size * derivative)
        # compute the square-root of the gradient sum of squares to get the gradient magnitude:
        gradient_magnitude = sqrt(gradient_sum_squares)
        # print 'gradient_magnitude: ', gradient_magnitude, 'and tolerance: ', tolerance
        if gradient_magnitude < tolerance:
            converged = True
        count = count + 1
    return(weights)
Machine-Learning-Specialization/machine_learning_regression/week2/multiple-regression-assignment-2.ipynb
subhankarb/Machine-Learning-PlayGround
apache-2.0
Now compute your predictions using test_simple_feature_matrix and your weights from above.
predictions = predict_output(test_simple_feature_matrix, simple_weights)
print predictions
Machine-Learning-Specialization/machine_learning_regression/week2/multiple-regression-assignment-2.ipynb
subhankarb/Machine-Learning-PlayGround
apache-2.0
Quiz Question: What is the predicted price for the 1st house in the TEST data set for model 1 (round to nearest dollar)?
print predictions[0]
Machine-Learning-Specialization/machine_learning_regression/week2/multiple-regression-assignment-2.ipynb
subhankarb/Machine-Learning-PlayGround
apache-2.0
Now that you have the predictions on test data, compute the RSS on the test data set. Save this value for comparison later. Recall that RSS is the sum of the squared errors (difference between prediction and output).
rss = ((predictions - test_output) ** 2).sum()
print rss
Machine-Learning-Specialization/machine_learning_regression/week2/multiple-regression-assignment-2.ipynb
subhankarb/Machine-Learning-PlayGround
apache-2.0
Running a multiple regression Now we will use more than one actual feature. Use the following code to produce the weights for a second model with the following parameters:
model_features = ['sqft_living', 'sqft_living15']  # sqft_living15 is the average squarefeet for the nearest 15 neighbors.
my_output = 'price'
(feature_matrix, multi_output) = get_numpy_data(train_data, model_features, my_output)
initial_weights = np.array([-100000., 1., 1.])
step_size = 4e-12
tolerance = 1e9
Machine-Learning-Specialization/machine_learning_regression/week2/multiple-regression-assignment-2.ipynb
subhankarb/Machine-Learning-PlayGround
apache-2.0
Use the above parameters to estimate the model weights. Record these values for your quiz.
multi_weights = regression_gradient_descent(feature_matrix, multi_output, initial_weights, step_size, tolerance)
print multi_weights
Machine-Learning-Specialization/machine_learning_regression/week2/multiple-regression-assignment-2.ipynb
subhankarb/Machine-Learning-PlayGround
apache-2.0
Use your newly estimated weights and the predict_output function to compute the predictions on the TEST data. Don't forget to create a numpy array for these features from the test set first!
(test_multi_feature_matrix, multi_output) = get_numpy_data(test_data, model_features, my_output)
multi_predictions = predict_output(test_multi_feature_matrix, multi_weights)
print multi_predictions
Machine-Learning-Specialization/machine_learning_regression/week2/multiple-regression-assignment-2.ipynb
subhankarb/Machine-Learning-PlayGround
apache-2.0
Quiz Question: What is the predicted price for the 1st house in the TEST data set for model 2 (round to nearest dollar)?
print multi_predictions[0]
Machine-Learning-Specialization/machine_learning_regression/week2/multiple-regression-assignment-2.ipynb
subhankarb/Machine-Learning-PlayGround
apache-2.0
Quiz Question: Which estimate was closer to the true price for the 1st house on the Test data set, model 1 or model 2? Now use your predictions and the output to compute the RSS for model 2 on TEST data.
print 'prediction from first model is $356134 and prediction from 2nd model is $366651'
Machine-Learning-Specialization/machine_learning_regression/week2/multiple-regression-assignment-2.ipynb
subhankarb/Machine-Learning-PlayGround
apache-2.0
Quiz Question: Which model (1 or 2) has lowest RSS on all of the TEST data?
rss = ((multi_predictions - multi_output) ** 2).sum()
print rss
print 'RSS from first model is 2.75400047593e+14 and RSS from 2nd model is 2.70263446465e+14'
Machine-Learning-Specialization/machine_learning_regression/week2/multiple-regression-assignment-2.ipynb
subhankarb/Machine-Learning-PlayGround
apache-2.0
Again, none of these are beautiful, but for mean and standard deviation I think that magnetic_field_y and magnetic_field_z will be the most helpful. That gives us a "who made the cut" feature list:

attitude_roll
attitude_pitch
attitude_yaw
rotation_rate_x
rotation_rate_y
gravity_z
user_acc_y
user_acc_z
magnetic_field_y
magnetic_field_z

Still way too many! My next cut could be based on which waveforms drift up or down a lot, since drifting would mess up mean and standard deviation. I'll go with attitude_roll, rotation_rate_x, and user_acc_z. Next step: chunk up the data
# http://stackoverflow.com/questions/17315737/split-a-large-pandas-dataframe
# input - df: a Dataframe, chunkSize: the chunk size
# output - a list of DataFrame
# purpose - splits the DataFrame into smaller of max size chunkSize (last may be smaller)
def splitDataFrameIntoSmaller(df, chunkSize=1000):
    listOfDf = list()
    numberChunks = len(df) // chunkSize + 1
    for i in range(numberChunks):
        listOfDf.append(df[i*chunkSize:(i+1)*chunkSize])
    return listOfDf

# Set up the 10-second chunks
walk_chunked = splitDataFrameIntoSmaller(walk_raw)
for idx, df in enumerate(walk_chunked):
    walk_chunked[idx] = pd.DataFrame(df)

drive_chunked = splitDataFrameIntoSmaller(drive_raw)
for idx, df in enumerate(drive_chunked):
    drive_chunked[idx] = pd.DataFrame(df)

static_chunked = splitDataFrameIntoSmaller(static_raw)
for idx, df in enumerate(static_chunked):
    static_chunked[idx] = pd.DataFrame(df)

upstairs_chunked = splitDataFrameIntoSmaller(upstairs_raw)
for idx, df in enumerate(upstairs_chunked):
    upstairs_chunked[idx] = pd.DataFrame(df)

run_chunked = splitDataFrameIntoSmaller(run_raw)
for idx, df in enumerate(run_chunked):
    run_chunked[idx] = pd.DataFrame(df)
.ipynb_checkpoints/.ipynb_checkpoints/A3-checkpoint.ipynb
eherold/PersonalInformatics
mit
Now it's time to add those features
# This is where the feature data will go. The array for each activity will have length 30. walk_featured = [] drive_featured = [] static_featured = [] upstairs_featured = [] run_featured = [] # Populate the features for df in walk_chunked: features = df.mean()[['attitude_roll','rotation_rate_x','user_acc_z','user_acc_x']].values.tolist() + df.std()[['attitude_roll','rotation_rate_x','user_acc_z','user_acc_x']].values.tolist() walk_featured.append(features) for df in drive_chunked: features = df.mean()[['attitude_roll','rotation_rate_x','user_acc_z','user_acc_x']].values.tolist() + df.std()[['attitude_roll','rotation_rate_x','user_acc_z','user_acc_x']].values.tolist() drive_featured.append(features) for df in static_chunked: features = df.mean()[['attitude_roll','rotation_rate_x','user_acc_z','user_acc_x']].values.tolist() + df.std()[['attitude_roll','rotation_rate_x','user_acc_z','user_acc_x']].values.tolist() static_featured.append(features) for df in upstairs_chunked: features = df.mean()[['attitude_roll','rotation_rate_x','user_acc_z','user_acc_x']].values.tolist() + df.std()[['attitude_roll','rotation_rate_x','user_acc_z','user_acc_x']].values.tolist() upstairs_featured.append(features) for df in run_chunked: features = df.mean()[['attitude_roll','rotation_rate_x','user_acc_z','user_acc_x']].values.tolist() + df.std()[['attitude_roll','rotation_rate_x','user_acc_z','user_acc_x']].values.tolist() run_featured.append(features) # Combine all of the feature sets into one big one. Along the way, generate my target array. all_featured = walk_featured + drive_featured + static_featured + upstairs_featured + run_featured target = [] + [0] * len(walk_featured) target = target + [1] * len(drive_featured) target = target + [2] * len(static_featured) target = target + [3] * len(upstairs_featured) target = target + [4] * len(run_featured) # If I accidentally didn't add the right numbers to the target array, throw an error! if target.count(0) != 30 or target.count(1) != 30 or target.count(2) != 30 or target.count(3) != 30 or target.count(4) != 30: raise ValueError('Target is corrupt')
.ipynb_checkpoints/.ipynb_checkpoints/A3-checkpoint.ipynb
eherold/PersonalInformatics
mit
Running Cross-Validation
# Create and run cross-validation on a K-Nearest Neighbors classifier
knn = KNeighborsClassifier()
knn_scores = cross_val_score(knn, all_featured, target, cv=5)
print 'K-NEAREST NEIGHBORS CLASSIFIER'
print knn_scores

# Create and run cross-validation on a Logistic Regression classifier
lr = LogisticRegression()
lr_scores = cross_val_score(lr, all_featured, target, cv=5)
print 'LOGISTIC REGRESSION CLASSIFIER'
print lr_scores

# Create and run cross-validation on a Support Vector Machine classifier
svc = svm.SVC(kernel='linear')
svc_scores = cross_val_score(svc, all_featured, target, cv=5)
print 'SUPPORT VECTOR MACHINE CLASSIFIER'
print svc_scores

# Create and run cross-validation on a Decision Tree classifier
dtree = tree.DecisionTreeClassifier()
dtree_scores = cross_val_score(dtree, all_featured, target, cv=5)
print 'DECISION TREE CLASSIFIER'
print dtree_scores

# How I started figuring out features:
print walk_raw[['attitude_yaw']].describe()[2:3]
print run_raw[['attitude_yaw']].describe()[2:3]
print static_raw[['attitude_yaw']].describe()[2:3]
print upstairs_raw[['attitude_yaw']].describe()[2:3]
print drive_raw[['attitude_yaw']].describe()[2:3]
.ipynb_checkpoints/.ipynb_checkpoints/A3-checkpoint.ipynb
eherold/PersonalInformatics
mit
What if I don't know how to use a function? You can access the documentation with ? <module>.<function>. Let's look at the documentation of math.log10.
?? math.log10
notebooks/python_intro.ipynb
mined-gatech/pymks_overview
mit
Use the cell below to manipulate the array we just created.
B + B
notebooks/python_intro.ipynb
mined-gatech/pymks_overview
mit
Let's do some simple matrix multiplication using np.dot. $$ \mathbf{A} \overrightarrow{x} = \overrightarrow{y}$$ First, check out the documentation of np.dot.
? np.dot

N = 5
A = np.eye(N) * 2
x = np.arange(N)

print('A =')
print(A)
print('x =')
print(x)

y = np.dot(A, x)
print('y =')
print(y)
notebooks/python_intro.ipynb
mined-gatech/pymks_overview
mit
Use the cell below to call another function from NumPy. Scikit-Learn Scikit-Learn, a.k.a. sklearn, is a scientific toolkit (there are many others) for machine learning and is built on SciPy and NumPy. Below is an example from scikit-learn for linear regression. This example also uses the plotting library matplotlib to display the results.
%matplotlib inline
# Code source: Jaques Grobler
# License: BSD 3 clause

import matplotlib.pyplot as plt
from sklearn import datasets, linear_model

# Load the diabetes dataset
diabetes = datasets.load_diabetes()

# Use only one feature
diabetes_X = diabetes.data[:, np.newaxis]
diabetes_X_temp = diabetes_X[:, :, 2]

# Split the data into training/testing sets
diabetes_X_train = diabetes_X_temp[:-20]
diabetes_X_test = diabetes_X_temp[-20:]

# Split the targets into training/testing sets
diabetes_y_train = diabetes.target[:-20]
diabetes_y_test = diabetes.target[-20:]

# Create linear regression object
regr = linear_model.LinearRegression()

# Train the model using the training sets
regr.fit(diabetes_X_train, diabetes_y_train)

# Predict result
y = regr.predict(diabetes_X_test)

# The coefficients
print('Coefficients: \n', regr.coef_)
# The mean square error
print("Residual sum of squares: %.2f" % np.mean((y - diabetes_y_test) ** 2))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % regr.score(diabetes_X_test, diabetes_y_test))

# Plot outputs
plt.scatter(diabetes_X_test, diabetes_y_test, color='black')
plt.plot(diabetes_X_test, y, color='blue', linewidth=3)
plt.xticks(())
plt.yticks(())
plt.show()
notebooks/python_intro.ipynb
mined-gatech/pymks_overview
mit
Series Pandas' Series class extends NumPy's ndarray with a labelled index. The key to using Series is to understand how to use its index.
# Create a Series with auto-generated indices
pd.Series(data=[100, 101, 110, 111], dtype=np.int8)

# Create a Series with custom indices
pd.Series(data=[100, 101, 110, 111], index=['a', 'b', 'c', 'd'], dtype=np.int8)

# Create a Series using a dictionary
d = {'a': 100, 'b': 101, 'c': 110, 'd': 111}
pd.Series(data=d, dtype=np.int8)
pandas.ipynb
sheikhomar/ml
mit
Arithmetic
day1 = pd.Series(data=[400, 600, 400], index=['breakfast', 'lunch', 'dinner'], dtype=np.int16)
day1

day2 = pd.Series(data=[350, 500, 150], index=['breakfast', 'lunch', 'snack'], dtype=np.int16)
day2

# Note that only values of matched indices are added together.
day1 + day2
pandas.ipynb
sheikhomar/ml
mit
DataFrame A DataFrame is a container for tabular data. Basically, a DataFrame is just a collection of Series that share the same index.
def init_df():
    return pd.DataFrame(data=np.arange(1, 17).reshape(4, 4),
                        index='w x y z'.split(),
                        columns='A B C D'.split())

df = init_df()
df
pandas.ipynb
sheikhomar/ml
mit
Creating and deleting
# Create a new column based on another column
df['E'] = df['A'] ** 2
df

# Create a new DataFrame, where certain columns are excluded.
df.drop(['A', 'E'], axis=1)

# Remove a column permanently
df.drop('E', axis=1, inplace=True)
df
pandas.ipynb
sheikhomar/ml
mit
Querying
# Select column 'A' df['A'] # Note that all columns are stored as Series objects type(df['A']) # Selecting multiple columns, we get a new DataFrame object df[['A', 'D']] # Select a row by its label df.loc['x'] # Select a row by its numerical index position df.iloc[0] # Select the value of the first cell df.loc['w', 'A'] # Select a subset of the DataFrame df.loc[['x', 'y'], ['B', 'C']] # Conditional selection df[df > 10] # Note that the conditional selection only # returns cells whose boolean value is True # in the following DataFrame df > 10 # Select the rows where column A is larger or equal to 9 df[df['A'] >= 9] # Note that we use `&` as conjunction since Python's `and` operator # can only deal with single Boolean values e.g. `True and True` df[(df['A'] >= 9) & (df['C'] == 11)] df[(df['A'] >= 9) | (df['C'] == 3)]
pandas.ipynb
sheikhomar/ml
mit
Indices
# Reset the index to a numerical value
# Note that the old index will become
# a column in our DataFrame.
df.reset_index()

# Set a new index.
df['Country'] = 'CA DE DK NO'.split()
df.set_index('Country')
# To override the old index use the following line instead:
# df.set_index('Country', inplace=True)
pandas.ipynb
sheikhomar/ml
mit
Hierarchical indexing
outside = 'p p p q q q'.split()
inside = [1, 2, 3, 1, 2, 3]
hierarchical_index = list(zip(outside, inside))
multi_index = pd.MultiIndex.from_tuples(hierarchical_index, names='outside inside'.split())
multi_index

df = pd.DataFrame(data=np.random.randn(6, 2), index=multi_index, columns=['Column 1', 'Column 2'])
df

# Select using the outer index
df.loc['p']

# Select using the inside index
df.loc['p'].loc[2]

# Select a specific cell
df.loc['p'].loc[2]['Column 1']

# Rename index names
df.index.names = ['O', 'I']
df
pandas.ipynb
sheikhomar/ml
mit
Cross section is used when we need to select data at a particular level.
# Select rows whose inside index is equal 1 df.xs(1, level='I')
pandas.ipynb
sheikhomar/ml
mit
Dealing with missing data
d = {'A': [1, 2, np.nan], 'B': [1, np.nan, np.nan], 'C': [1, 2, 3]}
df = pd.DataFrame(d)
df

# Drop any rows with missing values
df.dropna()

# Keep only the rows with at least 2 non-na values:
df.dropna(thresh=2)
pandas.ipynb
sheikhomar/ml
mit
The subset parameter can be used to specify which columns an action should apply to instead of all columns. For instance, if we want to drop rows with missing values, subset specifies the list of columns to consider. For example, df.dropna(thresh=1, subset=['A','B']) will drop all rows with fewer than 1 non-NA value in columns A and B only (rather than considering all columns for thresh=1). The line df.dropna(how='all', subset=['A','B']) will drop all rows whose values in both A and B are NA. A short illustration follows the next cell.
# Drop any columns with missing values
df.dropna(axis=1)

# Replace missing values
df.fillna(0)

# Replace missing values with the mean of the column
df['A'].fillna(value=df['A'].mean())
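To make the subset behaviour concrete, using the df defined above:

# keep rows with at least 1 non-NA value when only columns A and B are considered
df.dropna(thresh=1, subset=['A', 'B'])

# drop rows where both A and B are NA
df.dropna(how='all', subset=['A', 'B'])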
pandas.ipynb
sheikhomar/ml
mit
Grouping
columns = 'Id EmployeeName JobTitle TotalPay Year'.split() salaries_df = pd.read_csv('data/sf-salaries-subset.csv', index_col='Id', usecols=columns) salaries_df.head() # Group by job title salaries_by_job_df = salaries_df.groupby('JobTitle') # Get some statistics on the TotalPay column salaries_by_job_df['TotalPay'].describe() # Get some statistics on all numeric columns salaries_by_job_df.describe() # Present statistics in a different way salaries_by_job_df.describe().transpose() # Count number of rows in each group salaries_by_job_df.count() # Find the mean of numeric columns salaries_by_job_df.mean() # Get the highest pay salaries_df['TotalPay'].max() # Get the position of the highest pay salaries_df['TotalPay'].argmax() # Get the person with the highest pay salaries_df.iloc[salaries_df['TotalPay'].argmax()]
pandas.ipynb
sheikhomar/ml
mit
Combining DataFrames
df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],
                    'B': ['B0', 'B1', 'B2', 'B3'],
                    'C': ['C0', 'C1', 'C2', 'C3'],
                    'D': ['D0', 'D1', 'D2', 'D3']},
                   index=[0, 1, 2, 3])
df2 = pd.DataFrame({'A': ['A4', 'A5', 'A6', 'A7'],
                    'B': ['B4', 'B5', 'B6', 'B7'],
                    'C': ['C4', 'C5', 'C6', 'C7'],
                    'D': ['D4', 'D5', 'D6', 'D7']},
                   index=[4, 5, 6, 7])
df3 = pd.DataFrame({'A': ['A8', 'A9', 'A10', 'A11'],
                    'B': ['B8', 'B9', 'B10', 'B11'],
                    'C': ['C8', 'C9', 'C10', 'C11'],
                    'D': ['D8', 'D9', 'D10', 'D11']},
                   index=[8, 9, 10, 11])

# Combine along the rows
pd.concat([df1, df2, df3])

# Combine along the columns
# Note that Pandas assigns NaN to cells that do not align correctly
pd.concat([df1, df2, df3], axis=1)
pandas.ipynb
sheikhomar/ml
mit
The merge function is useful if we want to combine DataFrames like we join tables using SQL.
left = pd.DataFrame({'key1': ['K0', 'K0', 'K1', 'K2'],
                     'key2': ['K0', 'K1', 'K0', 'K1'],
                     'A': ['A0', 'A1', 'A2', 'A3'],
                     'B': ['B0', 'B1', 'B2', 'B3']})
right = pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'],
                      'key2': ['K0', 'K0', 'K0', 'K0'],
                      'C': ['C0', 'C1', 'C2', 'C3'],
                      'D': ['D0', 'D1', 'D2', 'D3']})

pd.merge(left, right, how='inner', on=['key1', 'key2'])
pandas.ipynb
sheikhomar/ml
mit
The join function is used to combine the columns of DataFrames that may have different indices. It works exactly like the merge function except the keys that we join on are on the indices instead of the columns.
left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
                     'B': ['B0', 'B1', 'B2']},
                    index=['K0', 'K1', 'K2'])
right = pd.DataFrame({'C': ['C0', 'C2', 'C3'],
                      'D': ['D0', 'D2', 'D3']},
                     index=['K0', 'K2', 'K3'])

left.join(right)
left.join(right, how='outer')
pandas.ipynb
sheikhomar/ml
mit
Operations
df = pd.DataFrame({'col1':[1,2,3,4], 'col2':[444,555,666,444], 'col3':['abc','def','ghi','xyz']}) df.head() # Find the unique values in col2 df['col2'].unique() # Find the number of unique values in col2 df['col2'].nunique() # Find the unique values in col2 df['col2'].value_counts() # The value_counts() can be used to find top X row most common value df['col2'].value_counts().head(1) # Apply custom function to each element of a column df['col1'].apply(lambda element_value: element_value**2) # Find the names of all the columns in the DataFrame df.columns # Sort data df.sort_values(by='col2') # Find null values df.isnull()
pandas.ipynb
sheikhomar/ml
mit
Reading data from HTML
data = pd.read_html('https://borsen.dk/kurser/danske_aktier/c20_cap.html', thousands='.', decimal=',')
df = data[0]

# Show information about the data
df.info()
df.columns

df.columns = ['Akie', '%', '+/-', 'Kurs', 'ATD%', 'Bud', 'Udbud', 'Omsætning']
df.info()
df['Omsætning'][0]
pandas.ipynb
sheikhomar/ml
mit
Loading up the raw data
mldb.put('/v1/procedures/import_reddit', {
    "type": "import.text",
    "params": {
        "dataFileUrl": "file://mldb/mldb_test_data/reddit.csv.zst",
        'delimiter': '',
        'quoteChar': '',
        'outputDataset': 'reddit_raw',
        'runOnCreation': True
    }
})
container_files/demos/Mapping Reddit.ipynb
mldbai/mldb
apache-2.0
And here is what our raw dataset looks like. The lineText column will need to be parsed: it's comma-delimited, with the first token being a user ID and the remaining tokens being the set of subreddits that user contributed to.
mldb.query("select * from reddit_raw limit 5")
container_files/demos/Mapping Reddit.ipynb
mldbai/mldb
apache-2.0
Transforming the raw data into a sparse matrix We will create and run a Procedure of type transform. The tokenize function will project out the subreddit names into columns.
mldb.put('/v1/procedures/reddit_import', {
    "type": "transform",
    "params": {
        "inputData": "select tokenize(lineText, {offset: 1, value: 1}) as * from reddit_raw",
        "outputDataset": "reddit_dataset",
        "runOnCreation": True
    }
})
container_files/demos/Mapping Reddit.ipynb
mldbai/mldb
apache-2.0
Here is the resulting dataset: it's a sparse matrix with a row per user and a column per subreddit, where the cells are 1 if the row's user was a contributor to the column's subreddit, and null otherwise.
mldb.query("select * from reddit_dataset limit 5")
container_files/demos/Mapping Reddit.ipynb
mldbai/mldb
apache-2.0
Dimensionality Reduction with Singular Value Decomposition (SVD) We will create and run a Procedure of type svd.train.
mldb.put('/v1/procedures/reddit_svd', {
    "type": "svd.train",
    "params": {
        "trainingData": """
            SELECT
                COLUMN EXPR (AS columnName() ORDER BY rowCount() DESC, columnName() LIMIT 4000)
            FROM reddit_dataset
        """,
        "columnOutputDataset": "reddit_svd_embedding",
        "runOnCreation": True
    }
})
container_files/demos/Mapping Reddit.ipynb
mldbai/mldb
apache-2.0
The result of this operation is a new dataset with a row per subreddit for the 4000 most-active subreddits and columns representing coordinates for that subreddit in a 100-dimensional space. Note: the row names are the subreddit names followed by ".numberEquals.1" because the SVD training procedure interpreted the input matrix as categorical rather than numerical.
mldb.query("select * from reddit_svd_embedding limit 5")
container_files/demos/Mapping Reddit.ipynb
mldbai/mldb
apache-2.0
Clustering with K-Means We will create and run a Procedure of type kmeans.train.
mldb.put('/v1/procedures/reddit_kmeans', {
    "type": "kmeans.train",
    "params": {
        "trainingData": "select * from reddit_svd_embedding",
        "outputDataset": "reddit_kmeans_clusters",
        "numClusters": 20,
        "runOnCreation": True
    }
})
container_files/demos/Mapping Reddit.ipynb
mldbai/mldb
apache-2.0
The result of this operation is a simple dataset which associates each row in the input (i.e. each subreddit) to one of 20 clusters.
mldb.query("select * from reddit_kmeans_clusters limit 5")
container_files/demos/Mapping Reddit.ipynb
mldbai/mldb
apache-2.0
2-d Dimensionality Reduction with t-SNE We will create and run a Procedure of type tsne.train.
mldb.put('/v1/procedures/reddit_tsne', {
    "type": "tsne.train",
    "params": {
        "trainingData": "select * from reddit_svd_embedding",
        "rowOutputDataset": "reddit_tsne_embedding",
        "runOnCreation": True
    }
})
container_files/demos/Mapping Reddit.ipynb
mldbai/mldb
apache-2.0
The result is similar to the SVD step above: we get a row per subreddit and the columns are coordinates, but this time in a 2-dimensional space appropriate for visualization.
mldb.query("select * from reddit_tsne_embedding limit 5")
container_files/demos/Mapping Reddit.ipynb
mldbai/mldb
apache-2.0
Counting the number of users per subreddit We will create and run a Procedure of type transform on the transpose of the original input dataset.
mldb.put('/v1/procedures/reddit_count_users', {
    "type": "transform",
    "params": {
        "inputData": "select columnCount() as numUsers from transpose(reddit_dataset)",
        "outputDataset": "reddit_user_counts",
        "runOnCreation": True
    }
})
container_files/demos/Mapping Reddit.ipynb
mldbai/mldb
apache-2.0
We appended "|1" to the row names in this dataset to allow the merge operation below to work well.
mldb.query("select * from reddit_user_counts limit 5")
container_files/demos/Mapping Reddit.ipynb
mldbai/mldb
apache-2.0
Querying and Visualizating the output We'll use the Query API to get the data into a Pandas DataFrame and then use Bokeh to visualize it. In the query below we renamed the rows to get rid of the "|1" which the SVD appended to each subreddit name and we filter out rows where cluster is null because we only clustered the 4000 most-active subreddits.
df = mldb.query(""" select c.* as *, m.* as *, quantize(m.x, 7) as grid_x, quantize(m.y, 7) as grid_y named c.rowName() from merge(reddit_tsne_embedding, reddit_kmeans_clusters) as m join reddit_user_counts as c on c.rowName() = m.rowPathElement(0) where m.cluster is not null order by c.numUsers desc """) df.head() import numpy as np colormap = np.array([ "#1f77b4", "#aec7e8", "#ff7f0e", "#ffbb78", "#2ca02c", "#98df8a", "#d62728", "#ff9896", "#9467bd", "#c5b0d5", "#8c564b", "#c49c94", "#e377c2", "#f7b6d2", "#7f7f7f", "#c7c7c7", "#bcbd22", "#dbdb8d", "#17becf", "#9edae5" ]) import bokeh.plotting as bp from bokeh.models import HoverTool #this line must be in its own cell bp.output_notebook() x = bp.figure(plot_width=900, plot_height=700, title="Subreddit Map by t-SNE", tools=[HoverTool( tooltips=[ ("/r/", "@subreddit") ] )], toolbar_location=None, x_axis_type=None, y_axis_type=None, min_border=1) x.scatter( x = df.x.values, y=df.y.values, color=colormap[df.cluster.astype(int).values], alpha=0.6, radius=(df.numUsers.values ** .3)/15, source=bp.ColumnDataSource({"subreddit": df.index.values}) ) labels = df.reset_index().groupby(['grid_x', 'grid_y'], as_index=False).first() labels = labels[labels["numUsers"] > 10000] x.text( x = labels.x.values, y = labels.y.values, text = labels._rowName.values, text_align="center", text_baseline="middle", text_font_size="8pt", text_font_style="bold", text_color="#333333" ) bp.show(x)
container_files/demos/Mapping Reddit.ipynb
mldbai/mldb
apache-2.0
Create a soil layer, which defines the median value.
soil_type = pysra.site.DarendeliSoilType(18.0, plas_index=0, ocr=1, stress_mean=50)
examples/example-05.ipynb
arkottke/pysra
mit
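Before simulating any variations we can inspect the median curves the soil type defines. This is a sketch that assumes the median nonlinear properties expose .strains and .values arrays, as the simulated ones do in the plotting code further below:
for prop in ["mod_reduc", "damping"]:
    curve = getattr(soil_type, prop)  # nonlinear property holding the median curve
    print(prop, "first strains:", curve.strains[:3], "first values:", curve.values[:3])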
Create the simulated nonlinear curves
n = 10 correlation = 0 simulated = [] for name, model in zip( ["Darendeli (2001)", "EPRI SPID (2014)"], [ pysra.variation.DarendeliVariation(correlation), pysra.variation.SpidVariation(correlation), ], ): simulated.append((name, [model(soil_type) for _ in range(n)]))
examples/example-05.ipynb
arkottke/pysra
mit
Compare the uncertainty models.
fig, axes = plt.subplots(2, 2, sharex=True, sharey="row", subplot_kw={"xscale": "log"}) for i, (name, sims) in enumerate(simulated): for j, prop in enumerate(["mod_reduc", "damping"]): axes[j, i].plot( getattr(soil_type, prop).strains, np.transpose([getattr(s, prop).values for s in sims]), linewidth=0.5, color="C0", alpha=0.8, ) if j == 0: axes[j, i].set_title(name) axes[0, 0].set_ylabel("$G/G_{max}$") axes[1, 0].set_ylabel("$D$ (%)") plt.setp(axes[1, :], xlabel=r"Strain, $\gamma$ (%)") fig.tight_layout();
examples/example-05.ipynb
arkottke/pysra
mit
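To quantify the scatter rather than just eyeball it, the standard deviation of the simulated curves can be computed at each strain level; this is a minimal sketch reusing the simulated list built above:
for name, sims in simulated:
    # Stack the simulated G/G_max curves and measure their spread across realizations
    mod_reduc_std = np.std([s.mod_reduc.values for s in sims], axis=0)
    print(name, "largest std of G/G_max across strains:", mod_reduc_std.max())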
1. Inference detect.py runs YOLOv3 inference on a variety of sources, downloading models automatically from the latest YOLOv3 release, and saving results to runs/detect. Example inference sources are: <img src="https://user-images.githubusercontent.com/26833433/114307955-5c7e4e80-9ae2-11eb-9f50-a90e39bee53f.png" width="900">
!python detect.py --weights yolov3.pt --img 640 --conf 0.25 --source data/images/ Image(filename='runs/detect/exp/zidane.jpg', width=600)
components/detection/trafficMonitoringInOutdoorEnv/yolov3/tutorial.ipynb
robocomp/robocomp-robolab
gpl-3.0
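detect.py also accepts other source types and output flags. As a hedged sketch (exact flag names can vary slightly between releases), inference can be run on a single image URL with the detections additionally saved as text label files:
# Run inference on a remote image and save label files alongside the rendered output
!python detect.py --weights yolov3.pt --img 640 --conf 0.25 \
  --source https://ultralytics.com/images/zidane.jpg --save-txt --save-conf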
2. Test Test a model's accuracy on COCO val or test-dev datasets. Models are downloaded automatically from the latest YOLOv3 release. To show results by class use the --verbose flag. Note that pycocotools metrics may be ~1% better than the equivalent repo metrics, as is visible below, due to slight differences in mAP computation. COCO val2017 Download COCO val 2017 dataset (1GB - 5000 images), and test model accuracy.
# Download COCO val2017 torch.hub.download_url_to_file('https://github.com/ultralytics/yolov5/releases/download/v1.0/coco2017val.zip', 'tmp.zip') !unzip -q tmp.zip -d ../ && rm tmp.zip # Run YOLOv3 on COCO val2017 !python test.py --weights yolov3.pt --data coco.yaml --img 640 --iou 0.65
components/detection/trafficMonitoringInOutdoorEnv/yolov3/tutorial.ipynb
robocomp/robocomp-robolab
gpl-3.0
COCO test-dev2017 Download COCO test2017 dataset (7GB - 40,000 images), to test model accuracy on test-dev set (20,000 images, no labels). Results are saved to a *.json file which should be zipped and submitted to the evaluation server at https://competitions.codalab.org/competitions/20794.
# Download COCO test-dev2017 torch.hub.download_url_to_file('https://github.com/ultralytics/yolov5/releases/download/v1.0/coco2017labels.zip', 'tmp.zip') !unzip -q tmp.zip -d ../ && rm tmp.zip # unzip labels !f="test2017.zip" && curl http://images.cocodataset.org/zips/$f -o $f && unzip -q $f && rm $f # 7GB, 41k images %mv ./test2017 ../coco/images # move to /coco # Run YOLOv3 on COCO test-dev2017 using --task test !python test.py --weights yolov3.pt --data coco.yaml --task test
components/detection/trafficMonitoringInOutdoorEnv/yolov3/tutorial.ipynb
robocomp/robocomp-robolab
gpl-3.0
3. Train Download COCO128, a small 128-image tutorial dataset, start tensorboard and train YOLOv3 from a pretrained checkpoint for 3 epochs (note actual training is typically much longer, around 300-1000 epochs, depending on your dataset).
# Download COCO128 torch.hub.download_url_to_file('https://github.com/ultralytics/yolov5/releases/download/v1.0/coco128.zip', 'tmp.zip') !unzip -q tmp.zip -d ../ && rm tmp.zip
components/detection/trafficMonitoringInOutdoorEnv/yolov3/tutorial.ipynb
robocomp/robocomp-robolab
gpl-3.0
Train a YOLOv3 model on COCO128 with --data coco128.yaml, starting from pretrained --weights yolov3.pt, or from randomly initialized --weights '' --cfg yolov3.yaml. Models are downloaded automatically from the latest YOLOv3 release, and COCO, COCO128, and VOC datasets are downloaded automatically on first use. All training results are saved to runs/train/ with incrementing run directories, i.e. runs/train/exp2, runs/train/exp3 etc.
# Tensorboard (optional) %load_ext tensorboard %tensorboard --logdir runs/train # Weights & Biases (optional) %pip install -q wandb import wandb wandb.login() # Train YOLOv3 on COCO128 for 3 epochs !python train.py --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov3.pt --nosave --cache
components/detection/trafficMonitoringInOutdoorEnv/yolov3/tutorial.ipynb
robocomp/robocomp-robolab
gpl-3.0
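If a run is interrupted, training can usually be picked up from the most recent checkpoint; a hedged sketch, assuming the --resume flag behaves as in the upstream Ultralytics repos:
# Resume the most recently logged training run from its last saved checkpoint
!python train.py --resume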
4. Visualize Weights & Biases Logging 🌟 NEW Weights & Biases (W&B) is now integrated with YOLOv3 for real-time visualization and cloud logging of training runs. This allows for better run comparison and introspection, as well as improved visibility and collaboration for teams. To enable W&B, run pip install wandb and then train normally (you will be guided through setup on first use). During training you will see live updates at https://wandb.ai/home, and you can create and share detailed Reports of your results. For more information see the YOLOv5 Weights & Biases Tutorial. <img src="https://user-images.githubusercontent.com/26833433/98184457-bd3da580-1f0a-11eb-8461-95d908a71893.jpg" width="800"> Local Logging All results are logged by default to runs/train, with a new experiment directory created for each new training as runs/train/exp2, runs/train/exp3, etc. View train and test jpgs to see mosaics, labels, predictions and augmentation effects. Note a Mosaic Dataloader is used for training (shown below), a new concept developed by Ultralytics and first featured in YOLOv4.
Image(filename='runs/train/exp/train_batch0.jpg', width=800) # train batch 0 mosaics and labels Image(filename='runs/train/exp/test_batch0_labels.jpg', width=800) # test batch 0 labels Image(filename='runs/train/exp/test_batch0_pred.jpg', width=800) # test batch 0 predictions
components/detection/trafficMonitoringInOutdoorEnv/yolov3/tutorial.ipynb
robocomp/robocomp-robolab
gpl-3.0
<img src="https://user-images.githubusercontent.com/26833433/83667642-90fcb200-a583-11ea-8fa3-338bbf7da194.jpeg" width="750"> train_batch0.jpg shows train batch 0 mosaics and labels <img src="https://user-images.githubusercontent.com/26833433/83667626-8c37fe00-a583-11ea-997b-0923fe59b29b.jpeg" width="750"> test_batch0_labels.jpg shows test batch 0 labels <img src="https://user-images.githubusercontent.com/26833433/83667635-90641b80-a583-11ea-8075-606316cebb9c.jpeg" width="750"> test_batch0_pred.jpg shows test batch 0 predictions Training losses and performance metrics are also logged to Tensorboard and a custom results.txt logfile which is plotted as results.png (below) after training completes. Here we show YOLOv3 trained on COCO128 to 300 epochs, starting from scratch (blue), and from pretrained --weights yolov3.pt (orange).
from utils.plots import plot_results plot_results(save_dir='runs/train/exp') # plot all results*.txt as results.png Image(filename='runs/train/exp/results.png', width=800)
components/detection/trafficMonitoringInOutdoorEnv/yolov3/tutorial.ipynb
robocomp/robocomp-robolab
gpl-3.0
<img src="https://user-images.githubusercontent.com/26833433/97808309-8182b180-1c66-11eb-8461-bffe1a79511d.png" width="800"> Environments YOLOv3 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled): Google Colab and Kaggle notebooks with free GPU: <a href="https://colab.research.google.com/github/ultralytics/yolov5/blob/master/tutorial.ipynb"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"></a> <a href="https://www.kaggle.com/ultralytics/yolov5"><img src="https://kaggle.com/static/images/open-in-kaggle.svg" alt="Open In Kaggle"></a> Google Cloud Deep Learning VM. See GCP Quickstart Guide Amazon Deep Learning AMI. See AWS Quickstart Guide Docker Image. See Docker Quickstart Guide <a href="https://hub.docker.com/r/ultralytics/yolov5"><img src="https://img.shields.io/docker/pulls/ultralytics/yolov5?logo=docker" alt="Docker Pulls"></a> Status If this badge is green, all YOLOv3 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv3 training (train.py), testing (test.py), inference (detect.py) and export (export.py) on MacOS, Windows, and Ubuntu every 24 hours and on every commit. Appendix Optional extras below. Unit tests validate repo functionality and should be run on any PRs submitted.
# Re-clone repo %cd .. %rm -rf yolov3 && git clone https://github.com/ultralytics/yolov3 %cd yolov3 # Reproduce for x in 'yolov3', 'yolov3-spp', 'yolov3-tiny': !python test.py --weights {x}.pt --data coco.yaml --img 640 --conf 0.25 --iou 0.45 # speed !python test.py --weights {x}.pt --data coco.yaml --img 640 --conf 0.001 --iou 0.65 # mAP # PyTorch Hub import torch # Model model = torch.hub.load('ultralytics/yolov3', 'yolov3') # or 'yolov3_spp', 'yolov3_tiny' # Images dir = 'https://ultralytics.com/images/' imgs = [dir + f for f in ('zidane.jpg', 'bus.jpg')] # batch of images # Inference results = model(imgs) results.print() # or .show(), .save() # Unit tests %%shell export PYTHONPATH="$PWD" # to run *.py. files in subdirectories rm -rf runs # remove runs/ for m in yolov3; do # models python train.py --weights $m.pt --epochs 3 --img 320 --device 0 # train pretrained python train.py --weights '' --cfg $m.yaml --epochs 3 --img 320 --device 0 # train scratch for d in 0 cpu; do # devices python detect.py --weights $m.pt --device $d # detect official python detect.py --weights runs/train/exp/weights/best.pt --device $d # detect custom python test.py --weights $m.pt --device $d # test official python test.py --weights runs/train/exp/weights/best.pt --device $d # test custom done python hubconf.py # hub python models/yolo.py --cfg $m.yaml # inspect python models/export.py --weights $m.pt --img 640 --batch 1 # export done # Profile from utils.torch_utils import profile m1 = lambda x: x * torch.sigmoid(x) m2 = torch.nn.SiLU() profile(x=torch.randn(16, 3, 640, 640), ops=[m1, m2], n=100) # Evolve !python train.py --img 640 --batch 64 --epochs 100 --data coco128.yaml --weights yolov3.pt --cache --noautoanchor --evolve !d=runs/train/evolve && cp evolve.* $d && zip -r evolve.zip $d && gsutil mv evolve.zip gs://bucket # upload results (optional) # VOC for b, m in zip([64, 48, 32, 16], ['yolov3', 'yolov3-spp', 'yolov3-tiny']): # zip(batch_size, model) !python train.py --batch {b} --weights {m}.pt --data voc.yaml --epochs 50 --cache --img 512 --nosave --hyp hyp.finetune.yaml --project VOC --name {m}
components/detection/trafficMonitoringInOutdoorEnv/yolov3/tutorial.ipynb
robocomp/robocomp-robolab
gpl-3.0
Import Data Cerebral Cortex provides a set of predefined data import routines that fit typical use cases. The most common is the CSV data parser, csv_data_parser. These parsers are easy to write and can be extended to support most types of data. Additionally, the data importer, import_data, needs to be brought into the notebook to start the data import process. The import_data method requires several parameters, discussed below; for this demo, however, we load the sample CSV directly with CC.read_csv, which only needs the file path, a stream name, and the column names. - cc_config: The path to the configuration files for Cerebral Cortex; this is the same folder that you would utilize for the Kernel initialization - input_data_dir: The path to where the data to be imported is located; in this example, sample_data is available in the file/folder browser on the left and you should explore the files located inside of it - user_id: The universally unique identifier (UUID) that owns the data to be imported into the system - data_file_extension: The type of files to be considered for import - data_parser: The parser method (such as csv_data_parser, or a custom one) that defines how to interpret the data samples on a per-line basis - gen_report: A simple True/False value that controls if a report is printed to the screen when complete
iot_stream = CC.read_csv(file_path="sample_data/data.csv", stream_name="some-sample-iot-stream", column_names=["timestamp", "some_vals", "version", "user"])
jupyter_demo/import_and_analyse_data.ipynb
MD2Korg/CerebralCortex
bsd-2-clause
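For completeness, the generic importer described above would be driven roughly as in the sketch below; the module paths and the UUID are illustrative assumptions, so check the CerebralCortex documentation for the exact import locations.
# Hypothetical sketch of the generic importer; module paths and the user UUID are assumptions
from cerebralcortex.data_importer import import_data
from cerebralcortex.data_importer.data_parsers import csv_data_parser

import_data(cc_config="/path/to/cc/config/",
            input_data_dir="sample_data/",
            user_id="00000000-0000-0000-0000-000000000000",
            data_file_extension=".csv",
            data_parser=csv_data_parser,
            gen_report=True)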
View Imported Data
iot_stream.show(4)
jupyter_demo/import_and_analyse_data.ipynb
MD2Korg/CerebralCortex
bsd-2-clause
Document Data
stream_metadata = Metadata() stream_metadata.set_name("iot-data-stream").set_description("This is randomly generated data for demo purposes.") \ .add_dataDescriptor( DataDescriptor().set_name("timestamp").set_type("datetime").set_attribute("description", "UTC timestamp of data point collection.")) \ .add_dataDescriptor( DataDescriptor().set_name("some_vals").set_type("float").set_attribute("description", \ "Random values").set_attribute("range", \ "Data is between 0 and 1.")) \ .add_dataDescriptor( DataDescriptor().set_name("version").set_type("int").set_attribute("description", "version of the data")) \ .add_dataDescriptor( DataDescriptor().set_name("user").set_type("string").set_attribute("description", "user id")) \ .add_module(ModuleMetadata().set_name("cerebralcortex.data_importer").set_attribute("url", "hhtps://md2k.org").set_author( "Nasir Ali", "[email protected]")) iot_stream.metadata = stream_metadata
jupyter_demo/import_and_analyse_data.ipynb
MD2Korg/CerebralCortex
bsd-2-clause
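With the metadata attached, the documented stream can be written back to storage. This is a minimal sketch, assuming the kernel exposes a save_stream method that accepts a DataStream:
# Persist the documented stream (method name assumed from the CC kernel API)
CC.save_stream(iot_stream)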
View Metadata
iot_stream.metadata
jupyter_demo/import_and_analyse_data.ipynb
MD2Korg/CerebralCortex
bsd-2-clause
How to write an algorithm This section provides an example of how to write a simple smoothing algorithm and apply it to the data that was just imported. Import the necessary modules
from pyspark.sql.functions import pandas_udf, PandasUDFType from pyspark.sql.types import StructField, StructType, StringType, FloatType, TimestampType, IntegerType from pyspark.sql.functions import minute, second, mean, window from pyspark.sql import functions as F import numpy as np
jupyter_demo/import_and_analyse_data.ipynb
MD2Korg/CerebralCortex
bsd-2-clause
Define the Schema This schema defines what the computation module will return to the execution context for each row or window in the datastream.
from pyspark.sql.types import StructField, StructType, StringType, DoubleType, IntegerType, TimestampType schema = StructType([ StructField("timestamp", TimestampType()), StructField("some_vals", DoubleType()), StructField("version", IntegerType()), StructField("user", StringType()) ])
jupyter_demo/import_and_analyse_data.ipynb
MD2Korg/CerebralCortex
bsd-2-clause
Write a user defined function The user-defined function (UDF) is one of two mechanisms available for distributed data processing within the Apache Spark framework. The pandas_udf decorator, used here with PandasUDFType.GROUPED_MAP, attaches the recently defined schema as the return type of the UDF. The method, smooth_algo, receives a pandas DataFrame, df, containing one group of rows, and any Python-based operations can be run over this data to produce the result defined in the schema. In this case, we divide each value by the group mean as a simple smoothing step.
@pandas_udf(schema, PandasUDFType.GROUPED_MAP) def smooth_algo(df): some_vals_mean = df["some_vals"].mean() df["some_vals"] = df["some_vals"]/some_vals_mean return df
jupyter_demo/import_and_analyse_data.ipynb
MD2Korg/CerebralCortex
bsd-2-clause
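Finally, the grouped-map UDF can be applied by grouping the stream's underlying Spark DataFrame and handing each group to smooth_algo; this sketch assumes the DataStream exposes its Spark DataFrame as .data:
# Apply the grouped-map pandas UDF per user/version group (DataStream.data is assumed)
smoothed_df = iot_stream.data.groupby("user", "version").apply(smooth_algo)
smoothed_df.show(4)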