markdown: stringlengths 0 to 1.02M
code: stringlengths 0 to 832k
output: stringlengths 0 to 1.02M
license: stringlengths 3 to 36
path: stringlengths 6 to 265
repo_name: stringlengths 6 to 127
Binary image

A binary image is an image whose pixels take only two possible values: black and white. Depending on whether the value range is real (float32) or integer (uint8), these values are {0, 1} or {0, 255}.

In a binary image we typically separate what matters to us (**foreground**) from what does not (**background**). More formally, this process of separating the relevant from the irrelevant parts of an image is called **segmentation**.

The most common way to obtain a binary image is to use a threshold: if a pixel value is greater than the given threshold, that pixel gets the value 1, otherwise 0. There are several types of thresholding:
1. Global threshold - the same threshold is applied to all pixels
2. Local threshold - different thresholds for different parts of the image
3. Adaptive threshold - the threshold is not set manually (by a human) but computed by some procedure; it can be either global or local.

Global threshold

How can we extract, for example, only the face?
img_tr = img_gray > 127  # all pixels greater than 127 become True (i.e. 1), the rest become False (0)
plt.imshow(img_tr, 'gray')
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
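The cell above notes that a binary image can use either a float {0, 1} or a uint8 {0, 255} convention. A minimal NumPy sketch of both, assuming a grayscale array such as `img_gray` (the small random array below is only a stand-in, not the notebook's image):

```python
import numpy as np

img_gray = np.random.randint(0, 256, size=(4, 4), dtype=np.uint8)  # stand-in for a real grayscale image

mask = img_gray > 127                       # boolean foreground/background mask
binary_float = mask.astype(np.float32)      # values in {0.0, 1.0}
binary_uint8 = mask.astype(np.uint8) * 255  # values in {0, 255}, the usual OpenCV convention
print(binary_float, binary_uint8, sep="\n")
```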
OpenCV provides the threshold method: the first parameter is the image to binarize, the second is the binarization threshold, the third is the value assigned to a resulting pixel if it exceeds the threshold (255 = white), and the last parameter is the threshold type (in this case plain binarization).
ret, image_bin = cv2.threshold(img_gray, 100, 255, cv2.THRESH_BINARY)  # ret is the threshold value, image_bin is the binary image
print(ret)
plt.imshow(image_bin, 'gray')
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
Otsu threshold

Otsu's method is used to automatically find the threshold value for thresholding an image.
ret, image_bin = cv2.threshold(img_gray, 0, 255, cv2.THRESH_OTSU)  # ret is the computed threshold value, image_bin is the binary image
print("Otsu's threshold: " + str(ret))
plt.imshow(image_bin, 'gray')
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
Adaptive threshold

In some cases applying a global threshold does not give good results. A good example is an image with varying illumination, where a global threshold practically destroys the parts of the image that are too bright or too dark.

Adaptive thresholding is a different approach, where a separate threshold is computed for every pixel, based on its neighboring pixels.

Example
image_ada = cv2.imread('images/sonnet.png')
image_ada = cv2.cvtColor(image_ada, cv2.COLOR_BGR2GRAY)
plt.imshow(image_ada, 'gray')

ret, image_ada_bin = cv2.threshold(image_ada, 100, 255, cv2.THRESH_BINARY)
plt.imshow(image_ada_bin, 'gray')
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
The global threshold gives poor results. We improve them by using an adaptive threshold. The second-to-last parameter of the adaptiveThreshold method is key, because it is the size of the block of neighboring pixels (e.g. 15x15) from which the local threshold is computed (the last parameter is a constant subtracted from that local mean).
# adaptive threshold where the threshold = mean of the neighboring pixels
image_ada_bin = cv2.adaptiveThreshold(image_ada, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 15, 5)
plt.figure()  # needed when several images are shown in one cell
plt.imshow(image_ada_bin, 'gray')

# adaptive threshold where the threshold = weighted sum of the neighboring pixels, with Gaussian weights
image_ada_bin = cv2.adaptiveThreshold(image_ada, 255, cv2.ADAPTIVE_THRESH_GAUSSIAN_C, cv2.THRESH_BINARY, 15, 5)
plt.figure()
plt.imshow(image_ada_bin, 'gray')
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
Histogram

We can use a **histogram**, which gives us information about the distribution of pixel intensities. It is very useful when we need to choose the threshold for a global threshold (a vectorized alternative to the loop below is sketched after this cell).

Pseudo-code of a histogram for a grayscale image:
```code
initialize a zero vector of 256 elements
for every pixel in the image:
    read the pixel's intensity
    increment the count of pixels with that intensity by 1
plot the histogram
```
def hist(image):
    height, width = image.shape[0:2]
    x = range(0, 256)
    y = np.zeros(256)
    for i in range(0, height):
        for j in range(0, width):
            pixel = image[i, j]
            y[pixel] += 1
    return (x, y)

x, y = hist(img_gray)
plt.plot(x, y, 'b')
plt.show()
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
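The loop above follows the pseudo-code directly. As a side note (not part of the original notebook), the same 256-bin histogram can be computed in one vectorized NumPy call, assuming a uint8 grayscale image:

```python
import numpy as np

def hist_vectorized(image):
    # counts[i] = number of pixels with intensity i, for i in 0..255
    counts = np.bincount(image.ravel(), minlength=256)
    return range(256), counts

# usage: x, y = hist_vectorized(img_gray)
```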
Using matplotlib:
plt.hist(img_gray.ravel(), 255, [0, 255])
plt.show()
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
Using OpenCV:
hist_full = cv2.calcHist([img_gray], [0], None, [255], [0, 255])
plt.plot(hist_full)
plt.show()
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
Let's assume that the face pixels have values between 100 and 200.
img_tr = (img_gray > 100) * (img_gray < 200)
plt.imshow(img_tr, 'gray')
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
Converting from grayscale to RGB

This is actually a trivial operation that copies the original grayscale image into each color channel (R, G and B). It is handy when something computed in the grayscale model needs to be used together with the RGB image.
img_tr_rgb = cv2.cvtColor(img_tr.astype('uint8'), cv2.COLOR_GRAY2RGB)
plt.imshow(img * img_tr_rgb)  # multiply the original RGB image by the image with the extracted face pixels
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
Morphological operations

A large family of digital image processing operations based on shapes, i.e. **structuring elements**. In a morphological operation, the value of each pixel of the resulting image is based on a comparison of the corresponding pixel of the original image with its neighborhood. The size and shape of this neighborhood constitute the structuring element.
kernel = np.ones((3, 3))  # 3x3 block structuring element
print(kernel)
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
Erosion

Morphological erosion sets the pixel of the resulting image at coordinates ```(i,j)``` to the **minimum** value of all pixels in the neighborhood of the ```(i,j)``` pixel of the original image.

In essence, erosion shrinks regions of white pixels and grows regions of black pixels. It is often used to remove noise (in the form of tiny regions of white pixels).![images/erosion.gif](images/erosion.gif)
plt.imshow(cv2.erode(image_bin, kernel, iterations=1), 'gray')
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
Dilation

Morphological dilation sets the pixel of the resulting image at coordinates ```(i,j)``` to the **maximum** value of all pixels in the neighborhood of the ```(i,j)``` pixel of the original image.

In essence, dilation grows regions of white pixels and shrinks regions of black pixels. Handy for emphasizing regions of interest.![images/dilation.gif](images/dilation.gif)
# a different structuring element
kernel = cv2.getStructuringElement(cv2.MORPH_CROSS, (5, 5))  # MORPH_ELLIPSE, MORPH_RECT...
print(kernel)
plt.imshow(cv2.dilate(image_bin, kernel, iterations=5), 'gray')  # 5 iterations
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
Opening and closing

**```opening = erosion + dilation```** - removes noise with erosion and restores the original shape with dilation.

**```closing = dilation + erosion```** - closes small holes among the white pixels.

(A sketch using OpenCV's built-in morphologyEx follows this cell.)
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
print(kernel)

img_ero = cv2.erode(image_bin, kernel, iterations=1)
img_open = cv2.dilate(img_ero, kernel, iterations=1)
plt.imshow(img_open, 'gray')

img_dil = cv2.dilate(image_bin, kernel, iterations=1)
img_close = cv2.erode(img_dil, kernel, iterations=1)
plt.imshow(img_close, 'gray')
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
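The cell above composes opening and closing manually from erode and dilate; OpenCV also ships them as single calls. A short sketch assuming the same `image_bin` and `kernel` as above:

```python
import cv2

img_open = cv2.morphologyEx(image_bin, cv2.MORPH_OPEN, kernel)    # erosion followed by dilation
img_close = cv2.morphologyEx(image_bin, cv2.MORPH_CLOSE, kernel)  # dilation followed by erosion
```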
An example of edge detection on a binary image using dilation and erosion:
kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
image_edges = cv2.dilate(image_bin, kernel, iterations=1) - cv2.erode(image_bin, kernel, iterations=1)
plt.imshow(image_edges, 'gray')
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
Blurring (blur)

An image is blurred by taking, for every pixel, the mean value of its neighboring pixels as the new value, for example in a 5 x 5 neighborhood. The kernel k below is a uniform (box) blur kernel. This is a simpler version of Gaussian blur. (OpenCV equivalents are sketched after this cell.)
from scipy import signal

k_size = 5
k = (1. / (k_size * k_size)) * np.ones((k_size, k_size))  # uniform (box) blur kernel that sums to 1
image_blur = signal.convolve2d(img_gray, k)
plt.imshow(image_blur, 'gray')
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
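The cell above builds the box-blur kernel by hand and convolves with SciPy; OpenCV provides the same operations directly. A brief sketch assuming `img_gray` from earlier:

```python
import cv2

box_blur = cv2.blur(img_gray, (5, 5))               # uniform (box) blur over a 5x5 neighborhood
gauss_blur = cv2.GaussianBlur(img_gray, (5, 5), 0)  # Gaussian blur; sigma is derived from the kernel size when 0
```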
Regions and region extraction

Put simply, a region is a set of mutually connected white pixels. Connected means that they are in each other's immediate neighborhood. Two kinds of connectivity are distinguished: so-called **4-connectivity** and **8-connectivity**:![images/48connectivity.png](images/48connectivity.png)

The procedure that extracts/labels regions is called **connected components labelling** (a small sketch follows this cell). We will apply this to the problem of extracting a barcode.
# load the image and convert it to RGB
img_barcode = cv2.cvtColor(cv2.imread('images/barcode.jpg'), cv2.COLOR_BGR2RGB)
plt.imshow(img_barcode)
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
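The markdown above introduces 4- vs 8-connectivity and connected components labelling, while the following cells switch to contours. A small self-contained sketch (not part of the original notebook) of OpenCV's labelling itself, showing how the connectivity choice changes the result on a toy binary image:

```python
import cv2
import numpy as np

# Two white pixels that touch only diagonally.
toy = np.zeros((3, 3), dtype=np.uint8)
toy[0, 0] = 255
toy[1, 1] = 255

n4, labels4 = cv2.connectedComponents(toy, connectivity=4)  # diagonal neighbors are separate regions
n8, labels8 = cv2.connectedComponents(toy, connectivity=8)  # diagonal neighbors belong to one region
print(n4 - 1, n8 - 1)  # number of foreground regions: 2 vs 1 (the count includes the background label)
```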
Say we want to extract only the barcode lines from the image. To start, let's perform some standard operations, such as converting to grayscale and applying an adaptive threshold.
img_barcode_gs = cv2.cvtColor(img_barcode, cv2.COLOR_RGB2GRAY)  # convert to grayscale
plt.imshow(img_barcode_gs, 'gray')

# ret, image_barcode_bin = cv2.threshold(img_barcode_gs, 80, 255, cv2.THRESH_BINARY)
image_barcode_bin = cv2.adaptiveThreshold(img_barcode_gs, 255, cv2.ADAPTIVE_THRESH_MEAN_C, cv2.THRESH_BINARY, 35, 10)
plt.imshow(image_barcode_bin, 'gray')
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
Finding contours/regions

Contours, i.e. regions in the image, are roughly speaking groups of black pixels. The OpenCV method findContours finds all such groups of pixels, i.e. regions. The returned value contours is the list of contours found in the image.

These contours can then be drawn with the drawContours method, where the first parameter is the image on which to draw the found contours, the second parameter is the list of contours to draw, the third parameter selects which contour in the list to draw (-1 means draw all contours), the fourth parameter is the color used to mark the contour, and the last parameter is the line thickness.
contours, hierarchy = cv2.findContours(image_barcode_bin, cv2.RETR_LIST, cv2.CHAIN_APPROX_SIMPLE)
img = img_barcode.copy()
cv2.drawContours(img, contours, -1, (255, 0, 0), 1)
plt.imshow(img)
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
Region properties

Every found region has characteristic properties: area, perimeter, convex hull, convexity, bounding rectangle, angle... These properties can be extremely useful when only certain regions that exhibit some property need to be extracted from the image. For all the properties, see this and this. (A short sketch of a few of these properties follows this cell.)

We extract only the barcode from the image.
contours_barcode = []  # will contain only the contours that belong to the barcode

for contour in contours:  # for each contour
    center, size, angle = cv2.minAreaRect(contour)  # find the minimum-area rectangle that encloses the whole contour
    width, height = size
    if width > 3 and width < 30 and height > 300 and height < 400:  # condition for a contour to belong to the barcode
        contours_barcode.append(contour)  # this contour belongs to the barcode

img = img_barcode.copy()
cv2.drawContours(img, contours_barcode, -1, (255, 0, 0), 1)
plt.imshow(img)

print('Total number of regions: %d' % len(contours_barcode))
_____no_output_____
MIT
v1-uvod/sc-siit-v1-cv-basics.ipynb
ftn-ai-lab/sc-2019-siit
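The markdown above lists several region properties but the code only uses the minimum-area rectangle. A short sketch of a few other OpenCV contour properties, assuming the `contours` list produced in the earlier cell:

```python
for contour in contours[:5]:
    area = cv2.contourArea(contour)            # region area
    perimeter = cv2.arcLength(contour, True)   # perimeter of the closed contour
    x, y, w, h = cv2.boundingRect(contour)     # upright bounding rectangle
    hull = cv2.convexHull(contour)             # convex hull points
    print(area, perimeter, (x, y, w, h), len(hull))
```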
Ensemble models from machine learning: an example of wave runup and coastal dune erosion

Tomas Beuzen¹, Evan B. Goldstein², Kristen D. Splinter¹

¹Water Research Laboratory, School of Civil and Environmental Engineering, UNSW Sydney, NSW, Australia
²Department of Geography, Environment, and Sustainability, University of North Carolina at Greensboro, Greensboro, NC, USA

This notebook contains the code required to develop the Gaussian Process (GP) runup predictor developed in the manuscript "*Ensemble models from machine learning: an example of wave runup and coastal dune erosion*" by Beuzen et al.

**Citation:** Beuzen, T., Goldstein, E.B., Splinter, K.S. (In Review). Ensemble models from machine learning: an example of wave runup and coastal dune erosion, Natural Hazards and Earth System Sciences, SI Advances in computational modeling of geoprocesses and geohazards.

Table of Contents:
1. [Imports](#bullet-0)
2. [Load and Visualize Data](#bullet-1)
3. [Develop GP Runup Predictor](#bullet-2)
4. [Test GP Runup Predictor](#bullet-3)
5. [Explore GP Prediction Uncertainty](#bullet-4)

1. Imports
# Required imports

# Standard computing packages
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt

# Gaussian Process tools
from sklearn.metrics import mean_squared_error
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

# Notebook functionality
%matplotlib inline
_____no_output_____
MIT
paper_code/Beuzen_et_al_2019_code.ipynb
TomasBeuzen/BeuzenEtAl_2019_NHESS_GP_runup_model
2. Load and Visualize Data

In this section, we will load and visualise the wave, beach slope, and runup data we will use to develop the Gaussian process (GP) runup predictor.
# Read in .csv data file as a pandas dataframe
df = pd.read_csv('../data_repo_temporary/lidar_runup_data_for_GP_training.csv', index_col=0)

# Print the size and head of the dataframe
print('Data size:', df.shape)
df.head()

# This cell plots histograms of the data

# Initialize the figure and axes
fig, axes = plt.subplots(2, 2, figsize=(6, 6))
plt.tight_layout(w_pad=0.1, h_pad=3)

# Subplot (0,0): Hs
ax = axes[0, 0]
ax.hist(df.Hs, 28, color=(0.6, 0.6, 0.6), edgecolor='k', lw=0.5)  # Plot histogram
ax.set_xlabel('H$_s$ (m)')  # Format plot
ax.set_ylabel('Frequency')
ax.set_xticks((0, 1.5, 3, 4.5))
ax.set_xlim((0, 4.5))
ax.set_ylim((0, 50))
ax.grid(lw=0.5, alpha=0.7)
ax.text(-1.1, 52, 'A)', fontsize=12)
ax.tick_params(direction='in')
ax.set_axisbelow(True)

# Subplot (0,1): Tp
ax = axes[0, 1]
ax.hist(df.Tp, 20, color=(0.6, 0.6, 0.6), edgecolor='k', lw=0.5)  # Plot histogram
ax.set_xlabel('T$_p$ (s)')  # Format plot
ax.set_xticks((0, 6, 12, 18))
ax.set_xlim((0, 18))
ax.set_ylim((0, 50))
ax.set_yticklabels([])
ax.grid(lw=0.5, alpha=0.7)
ax.text(-2.1, 52, 'B)', fontsize=12)
ax.tick_params(direction='in')
ax.set_axisbelow(True)

# Subplot (1,0): beta
ax = axes[1, 0]
ax.hist(df.beach_slope, 20, color=(0.6, 0.6, 0.6), edgecolor='k', lw=0.5)  # Plot histogram
ax.set_xlabel(r'$\beta$')  # Format plot
ax.set_ylabel('Frequency')
ax.set_xticks((0, 0.1, 0.2, 0.3))
ax.set_xlim((0, 0.3))
ax.set_ylim((0, 50))
ax.grid(lw=0.5, alpha=0.7)
ax.text(-0.073, 52, 'C)', fontsize=12)
ax.tick_params(direction='in')
ax.set_axisbelow(True)

# Subplot (1,1): R2
ax = axes[1, 1]
ax.hist(df.runup, 24, color=(0.9, 0.2, 0.2), edgecolor='k', lw=0.5)  # Plot histogram
ax.set_xlabel('R$_2$ (m)')  # Format plot
ax.set_xticks((0, 1, 2, 3))
ax.set_xlim((0, 3))
ax.set_ylim((0, 50))
ax.set_yticklabels([])
ax.grid(lw=0.5, alpha=0.7)
ax.text(-0.35, 52, 'D)', fontsize=12)
ax.tick_params(direction='in')
ax.set_axisbelow(True);
_____no_output_____
MIT
paper_code/Beuzen_et_al_2019_code.ipynb
TomasBeuzen/BeuzenEtAl_2019_NHESS_GP_runup_model
3. Develop GP Runup Predictor

In this section we will develop the GP runup predictor.

We standardize the data for use in the GP by removing the mean and scaling to unit variance. This does not really affect GP performance but improves computational efficiency (see the sklearn documentation for more information).

A kernel must be specified to develop the GP. Many kernels were trialled in initial GP development. The final kernel is a combination of the RBF and WhiteKernel. See **Section 2.1** and **Section 2.2** of the manuscript for further discussion.
# Define features and response data
X = df.drop(columns=df.columns[-1])  # Drop the last column to retain input features (Hs, Tp, slope)
y = df[[df.columns[-1]]]  # The last column is the predictand (R2)
_____no_output_____
MIT
paper_code/Beuzen_et_al_2019_code.ipynb
TomasBeuzen/BeuzenEtAl_2019_NHESS_GP_runup_model
Standardize data for use in the GP (this uses `StandardScaler`, i.e. `from sklearn.preprocessing import StandardScaler`):
scaler = StandardScaler()
scaler.fit(X)  # Fit the scaler to the training data
X_scaled = scaler.transform(X)  # Scale training data
# Specify the kernel to use in the GP
kernel = RBF(0.1, (1e-2, 1e2)) + WhiteKernel(1, (1e-2, 1e2))

# Train GP model on training dataset
gp = GaussianProcessRegressor(kernel=kernel, n_restarts_optimizer=9, normalize_y=True, random_state=123)
gp.fit(X, y);
_____no_output_____
MIT
paper_code/Beuzen_et_al_2019_code.ipynb
TomasBeuzen/BeuzenEtAl_2019_NHESS_GP_runup_model
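As a quick check (not part of the original notebook), scikit-learn's `GaussianProcessRegressor` can also return a per-point predictive standard deviation directly, which complements the `sample_y` ensembles used later. A minimal sketch assuming `gp`, `X` and `y` from the cells above:

```python
# Mean prediction and predictive standard deviation from the fitted GP on the training data.
y_mean, y_std = gp.predict(X, return_std=True)
print('Train RMSE:', float(np.sqrt(mean_squared_error(y, y_mean))))
print('Typical predictive std:', float(y_std.mean()))
```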
4. Test GP Runup Predictor

This section now shows how the GP runup predictor can be used to predict 50 test samples not previously used in training.
# Read in .csv test data file as a pandas dataframe
df_test = pd.read_csv('../data_repo_temporary/lidar_runup_data_for_GP_testing.csv', index_col=0)

# Print the size and head of the dataframe
print('Data size:', df_test.shape)
df_test.head()

# Predict the data
X_test = df_test.drop(columns=df.columns[-1])  # Drop the last column to retain input features (Hs, Tp, slope)
y_test = df_test[[df_test.columns[-1]]]  # The last column is the predictand (R2)
y_test_predictions = gp.predict(X_test)
print('GP RMSE on test data =', format(np.sqrt(mean_squared_error(y_test, y_test_predictions)), '.2f'))

# This cell plots a figure comparing GP predictions to observations for the testing dataset
# Similar to Figure 4 in the manuscript

# Initialize the figure and axes
fig, axes = plt.subplots(figsize=(6, 6))
plt.tight_layout(pad=2.2)

# Plot and format
axes.scatter(y_test, y_test_predictions, s=20, c='b', marker='.')
axes.plot([0, 4], [0, 4], 'k--')
axes.set_ylabel('Predicted R$_2$ (m)')
axes.set_xlabel('Observed R$_2$ (m)')
axes.grid(lw=0.5, alpha=0.7)
axes.set_xlim(0, 1.5)
axes.set_ylim(0, 1.5)

# Print some statistics
print('GP RMSE on test data =', format(np.sqrt(mean_squared_error(y_test, y_test_predictions)), '.2f'))
print('GP bias on test data =', format(np.mean(y_test_predictions - y_test.values), '.2f'))
GP RMSE on test data = 0.22
GP bias on test data = 0.07
MIT
paper_code/Beuzen_et_al_2019_code.ipynb
TomasBeuzen/BeuzenEtAl_2019_NHESS_GP_runup_model
5. Explore GP Prediction Uncertainty

This section explores how we can draw random samples from the GP to explain scatter in the runup predictions. We randomly draw 100 samples from the GP and calculate how much of the scatter in the runup predictions is captured by the ensemble envelope for different ensemble sizes. The process is repeated 100 times for robustness. See **Section 3.3** of the manuscript for further discussion.

We then plot the predictions with their prediction uncertainty to help visualize.
# Draw 100 samples from the GP model using the testing dataset
GP_draws = gp.sample_y(X_test, n_samples=100, random_state=123).squeeze()  # Draw 100 random samples from the GP

# Initialize result arrays
perc_ens = np.zeros((100, 100))  # Initialize ensemble capture array
perc_err = np.zeros((100,))  # Initialise arbitrary error array

# Loop to get results
for i in range(0, perc_ens.shape[0]):
    # Calculate capture % in envelope created by adding arbitrary, uniform error to mean GP prediction
    lower = y_test_predictions * (1 - i / 100)  # Lower bound
    upper = y_test_predictions * (1 + i / 100)  # Upper bound
    perc_err[i] = sum((np.squeeze(y_test) >= np.squeeze(lower)) & (np.squeeze(y_test) <= np.squeeze(upper))) / y_test.shape[0]  # Store percent capture
    for j in range(0, perc_ens.shape[1]):
        ind = np.random.randint(0, perc_ens.shape[0], i + 1)  # Determine i random integers
        lower = np.min(GP_draws[:, ind], axis=1)  # Lower bound of ensemble of i random members
        upper = np.max(GP_draws[:, ind], axis=1)  # Upper bound of ensemble of i random members
        perc_ens[i, j] = sum((np.squeeze(y_test) >= lower) & (np.squeeze(y_test) <= upper)) / y_test.shape[0]  # Store percent capture

# This cell plots a figure showing how samples from the GP can help to capture uncertainty in predictions
# Similar to Figure 5 from the manuscript

# Initialize the figure and axes
fig, axes = plt.subplots(1, 2, figsize=(9, 4))
plt.tight_layout()
lim = 0.95  # Desired limit to test

# Plot ensemble results
ax = axes[0]
perc_ens_mean = np.mean(perc_ens, axis=1)
ax.plot(perc_ens_mean * 100, 'k-', lw=2)
ind = np.argmin(abs(perc_ens_mean - lim))  # Find where the capture rate > lim
ax.plot([ind, ind], [0, perc_ens_mean[ind] * 100], 'r--')
ax.plot([0, ind], [perc_ens_mean[ind] * 100, perc_ens_mean[ind] * 100], 'r--')
ax.set_xlabel('# Draws from GP')
ax.set_ylabel('Observations captured \n within ensemble range (%)')
ax.grid(lw=0.5, alpha=0.7)
ax.minorticks_on()
ax.set_xlim(0, 100); ax.set_ylim(0, 100);
ax.text(-11.5, 107, 'A)', fontweight='bold', fontsize=12)
print('# draws needed for ' + format(lim * 100, '.0f') + '% capture = ' + str(ind))
print('Mean/Min/Max for ' + str(ind) + ' draws = '
      + format(np.mean(perc_ens[ind, :]) * 100, '.1f') + '%/'
      + format(np.min(perc_ens[ind, :]) * 100, '.1f') + '%/'
      + format(np.max(perc_ens[ind, :]) * 100, '.1f') + '%')

# Plot arbitrary error results
ax = axes[1]
ax.plot(perc_err * 100, 'k-', lw=2)
ind = np.argmin(abs(perc_err - lim))  # Find where the capture rate > lim
ax.plot([ind, ind], [0, perc_err[ind] * 100], 'r--')
ax.plot([0, ind], [perc_err[ind] * 100, perc_err[ind] * 100], 'r--')
ax.set_xlabel('% Error added to mean GP estimate')
ax.grid(lw=0.5, alpha=0.7)
ax.minorticks_on()
ax.set_xlim(0, 100); ax.set_ylim(0, 100);
ax.text(-11.5, 107, 'B)', fontweight='bold', fontsize=12)
print('% added error needed for ' + format(lim * 100, '.0f') + '% capture = ' + str(ind) + '%')

# This cell plots predictions for all 50 test samples with prediction uncertainty from 12 ensemble members.
# In the cell above, 12 members was identified as optimal for capturing 95% of observations.
# Initialize the figure and axes
fig, axes = plt.subplots(1, 1, figsize=(10, 6))

# Make some data for plotting
x = np.arange(1, len(y_test) + 1)
lower = np.min(GP_draws[:, :12], axis=1)  # Lower bound of ensemble of 12 random members
upper = np.max(GP_draws[:, :12], axis=1)  # Upper bound of ensemble of 12 random members

# Plot
axes.plot(x, y_test, 'o', linestyle='-', color='C0', mfc='C0', mec='k', zorder=10, label='Observed')
axes.plot(x, y_test_predictions, 'k', marker='o', color='C1', mec='k', label='GP Ensemble Mean')
axes.fill_between(x, lower, upper, alpha=0.2, facecolor='C1', label='GP Ensemble Range')

# Formatting
axes.set_xlim(0, 50)
axes.set_ylim(0, 2.5)
axes.set_xlabel('Observation')
axes.set_ylabel('R2 (m)')
axes.grid()
axes.legend(framealpha=1)
_____no_output_____
MIT
paper_code/Beuzen_et_al_2019_code.ipynb
TomasBeuzen/BeuzenEtAl_2019_NHESS_GP_runup_model
Let's go through this bit by bit.

```python
class Network(nn.Module):
```

Here we're inheriting from `nn.Module`. Combined with `super().__init__()` this creates a class that tracks the architecture and provides a lot of useful methods and attributes. It is mandatory to inherit from `nn.Module` when you're creating a class for your network. The name of the class itself can be anything.

```python
self.hidden = nn.Linear(784, 256)
```

This line creates a module for a linear transformation, $x\mathbf{W} + b$, with 784 inputs and 256 outputs and assigns it to `self.hidden`. The module automatically creates the weight and bias tensors which we'll use in the `forward` method. You can access the weight and bias tensors once the network (`net`) is created with `net.hidden.weight` and `net.hidden.bias`.

```python
self.output = nn.Linear(256, 10)
```

Similarly, this creates another linear transformation with 256 inputs and 10 outputs.

```python
self.sigmoid = nn.Sigmoid()
self.softmax = nn.Softmax(dim=1)
```

Here I defined operations for the sigmoid activation and softmax output. Setting `dim=1` in `nn.Softmax(dim=1)` calculates softmax across the columns (a small demonstration follows this cell).

```python
def forward(self, x):
```

PyTorch networks created with `nn.Module` must have a `forward` method defined. It takes in a tensor `x` and passes it through the operations you defined in the `__init__` method.

```python
x = self.hidden(x)
x = self.sigmoid(x)
x = self.output(x)
x = self.softmax(x)
```

Here the input tensor `x` is passed through each operation and reassigned to `x`. We can see that the input tensor goes through the hidden layer, then a sigmoid function, then the output layer, and finally the softmax function. It doesn't matter what you name the variables here, as long as the inputs and outputs of the operations match the network architecture you want to build. The order in which you define things in the `__init__` method doesn't matter, but you'll need to sequence the operations correctly in the `forward` method.

Now we can create a `Network` object.
# Text version of model architecture
model = Network()
model

# # Common way to define model using PyTorch
# import torch.nn.functional as F

# class Network(nn.Module):
#     def __init__(self):
#         super().__init__()
#         # Inputs to hidden layer linear transformation
#         self.hidden = nn.Linear(784, 256)
#         # Output layer, 10 units - one for each digit
#         self.output = nn.Linear(256, 10)

#     def forward(self, x):
#         # Hidden layer with sigmoid activation
#         x = F.sigmoid(self.hidden(x))
#         # Output layer with softmax activation
#         x = F.softmax(self.output(x), dim=1)
#         return x

from torch import nn
import torch.nn.functional as F

# Create a network with 784 input units, a hidden layer with 128 units and a ReLU activation,
# then a hidden layer with 64 units and a ReLU activation,
# and finally an output layer with a softmax activation as shown above.
class Network(nn.Module):
    def __init__(self):
        super().__init__()
        # Defining the layers, 128, 64, 10 units each
        self.fc1 = nn.Linear(784, 128)
        self.fc2 = nn.Linear(128, 64)
        # Output layer, 10 units - one for each digit
        self.fc3 = nn.Linear(64, 10)

    def forward(self, x):
        ''' Forward pass through the network, returns the output logits '''
        x = self.fc1(x)
        x = F.relu(x)
        x = self.fc2(x)
        x = F.relu(x)
        x = self.fc3(x)
        x = F.softmax(x, dim=1)
        return x

model1 = Network()
model1

# Import necessary packages
%matplotlib inline
%config InlineBackend.figure_format = 'retina'

import numpy as np
import torch
import helper
import matplotlib.pyplot as plt

# Hyperparameters for our network
input_size = 784
hidden_sizes = [128, 64]
output_size = 10

# Build a feed-forward network
model = nn.Sequential(nn.Linear(input_size, hidden_sizes[0]),
                      nn.ReLU(),
                      nn.Linear(hidden_sizes[0], hidden_sizes[1]),
                      nn.ReLU(),
                      nn.Linear(hidden_sizes[1], output_size),
                      nn.Softmax(dim=1))
print(model)

# Forward pass through the network and display output
images, labels = next(iter(trainloader))
images.resize_(images.shape[0], 1, 784)
ps = model.forward(images[0, :])
helper.view_classify(images[0].view(1, 28, 28), ps)
Sequential(
  (0): Linear(in_features=784, out_features=128, bias=True)
  (1): ReLU()
  (2): Linear(in_features=128, out_features=64, bias=True)
  (3): ReLU()
  (4): Linear(in_features=64, out_features=10, bias=True)
  (5): Softmax()
)
MIT
NN using PyTorch.ipynb
Spurryag/PyTorch-Scholarship-Programme-Solutions
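To make the `dim=1` point above concrete, here is a tiny sketch (not from the original notebook) showing that `nn.Softmax(dim=1)` normalizes across the columns, so each row of a batch sums to 1:

```python
import torch
from torch import nn

logits = torch.randn(2, 3)         # a batch of 2 samples with 3 "class" scores each
probs = nn.Softmax(dim=1)(logits)  # softmax across the columns (per row)
print(probs.sum(dim=1))            # tensor([1., 1.]) up to floating-point error
```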
Subplots
%matplotlib notebook

import matplotlib.pyplot as plt
import numpy as np

plt.subplot?

plt.figure()
# subplot with 1 row, 2 columns, and current axis is 1st subplot axes
plt.subplot(1, 2, 1)

linear_data = np.array([1,2,3,4,5,6,7,8])

plt.plot(linear_data, '-o')

exponential_data = linear_data**2

# subplot with 1 row, 2 columns, and current axis is 2nd subplot axes
plt.subplot(1, 2, 2)
plt.plot(exponential_data, '-o')

# plot exponential data on 1st subplot axes
plt.subplot(1, 2, 1)
plt.plot(exponential_data, '-x')

plt.figure()
ax1 = plt.subplot(1, 2, 1)
plt.plot(linear_data, '-o')
# pass sharey=ax1 to ensure the two subplots share the same y axis
ax2 = plt.subplot(1, 2, 2, sharey=ax1)
plt.plot(exponential_data, '-x')

plt.figure()
# the right hand side is equivalent shorthand syntax
plt.subplot(1,2,1) == plt.subplot(121)

# create a 3x3 grid of subplots
fig, ((ax1,ax2,ax3), (ax4,ax5,ax6), (ax7,ax8,ax9)) = plt.subplots(3, 3, sharex=True, sharey=True)
# plot the linear_data on the 5th subplot axes
ax5.plot(linear_data, '-')

# set inside tick labels to visible
for ax in plt.gcf().get_axes():
    for label in ax.get_xticklabels() + ax.get_yticklabels():
        label.set_visible(True)

# necessary on some systems to update the plot
plt.gcf().canvas.draw()
_____no_output_____
MIT
AppliedDataScienceWithPython/Week3.ipynb
MikeBeaulieu/coursework
Histograms
# create 2x2 grid of axis subplots
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, sharex=True)
axs = [ax1, ax2, ax3, ax4]

# draw n = 10, 100, 1000, and 10000 samples from the normal distribution and plot corresponding histograms
for n in range(0, len(axs)):
    sample_size = 10**(n+1)
    sample = np.random.normal(loc=0.0, scale=1.0, size=sample_size)
    axs[n].hist(sample)
    axs[n].set_title('n={}'.format(sample_size))

# repeat with number of bins set to 100
fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2, sharex=True)
axs = [ax1, ax2, ax3, ax4]

for n in range(0, len(axs)):
    sample_size = 10**(n+1)
    sample = np.random.normal(loc=0.0, scale=1.0, size=sample_size)
    axs[n].hist(sample, bins=100)
    axs[n].set_title('n={}'.format(sample_size))

plt.figure()
Y = np.random.normal(loc=0.0, scale=1.0, size=10000)
X = np.random.random(size=10000)
plt.scatter(X, Y)

# use gridspec to partition the figure into subplots
import matplotlib.gridspec as gridspec

plt.figure()
gspec = gridspec.GridSpec(3, 3)

top_histogram = plt.subplot(gspec[0, 1:])
side_histogram = plt.subplot(gspec[1:, 0])
lower_right = plt.subplot(gspec[1:, 1:])

Y = np.random.normal(loc=0.0, scale=1.0, size=10000)
X = np.random.random(size=10000)
lower_right.scatter(X, Y)
top_histogram.hist(X, bins=100)
s = side_histogram.hist(Y, bins=100, orientation='horizontal')

# clear the histograms and plot normed histograms
top_histogram.clear()
top_histogram.hist(X, bins=100, normed=True)
side_histogram.clear()
side_histogram.hist(Y, bins=100, orientation='horizontal', normed=True)
# flip the side histogram's x axis
side_histogram.invert_xaxis()

# change axes limits
for ax in [top_histogram, lower_right]:
    ax.set_xlim(0, 1)
for ax in [side_histogram, lower_right]:
    ax.set_ylim(-5, 5)

%%HTML
<img src='http://educationxpress.mit.edu/sites/default/files/journal/WP1-Fig13.jpg' />
_____no_output_____
MIT
AppliedDataScienceWithPython/Week3.ipynb
MikeBeaulieu/coursework
Box and Whisker Plots
import pandas as pd

normal_sample = np.random.normal(loc=0.0, scale=1.0, size=10000)
random_sample = np.random.random(size=10000)
gamma_sample = np.random.gamma(2, size=10000)

df = pd.DataFrame({'normal': normal_sample,
                   'random': random_sample,
                   'gamma': gamma_sample})

df.describe()

plt.figure()
# create a boxplot of the normal data, assign the output to a variable to suppress output
_ = plt.boxplot(df['normal'], whis='range')

# clear the current figure
plt.clf()
# plot boxplots for all three of df's columns
_ = plt.boxplot([df['normal'], df['random'], df['gamma']], whis='range')

plt.figure()
_ = plt.hist(df['gamma'], bins=100)

import mpl_toolkits.axes_grid1.inset_locator as mpl_il

plt.figure()
plt.boxplot([df['normal'], df['random'], df['gamma']], whis='range')
# overlay axis on top of another
ax2 = mpl_il.inset_axes(plt.gca(), width='60%', height='40%', loc=2)
ax2.hist(df['gamma'], bins=100)
ax2.margins(x=0.5)

# switch the y axis ticks for ax2 to the right side
ax2.yaxis.tick_right()

# if `whis` argument isn't passed, boxplot defaults to showing 1.5*interquartile (IQR) whiskers with outliers
plt.figure()
_ = plt.boxplot([df['normal'], df['random'], df['gamma']])
_____no_output_____
MIT
AppliedDataScienceWithPython/Week3.ipynb
MikeBeaulieu/coursework
Heatmaps
plt.figure()

Y = np.random.normal(loc=0.0, scale=1.0, size=10000)
X = np.random.random(size=10000)
_ = plt.hist2d(X, Y, bins=25)

plt.figure()
_ = plt.hist2d(X, Y, bins=100)

# add a colorbar legend
plt.colorbar()
_____no_output_____
MIT
AppliedDataScienceWithPython/Week3.ipynb
MikeBeaulieu/coursework
Animations
import matplotlib.animation as animation

n = 100
x = np.random.randn(n)

# create the function that will do the plotting, where curr is the current frame
def update(curr):
    # check if animation is at the last frame, and if so, stop the animation a
    if curr == n:
        a.event_source.stop()
    plt.cla()
    bins = np.arange(-4, 4, 0.5)
    plt.hist(x[:curr], bins=bins)
    plt.axis([-4, 4, 0, 30])
    plt.gca().set_title('Sampling the Normal Distribution')
    plt.gca().set_ylabel('Frequency')
    plt.gca().set_xlabel('Value')
    plt.annotate('n = {}'.format(curr), [3, 27])

fig = plt.figure()
a = animation.FuncAnimation(fig, update, interval=1000)
_____no_output_____
MIT
AppliedDataScienceWithPython/Week3.ipynb
MikeBeaulieu/coursework
Interactivity
plt.figure()
data = np.random.rand(10)
plt.plot(data)

def onclick(event):
    plt.cla()
    plt.plot(data)
    plt.gca().set_title('Event at pixels {},{} \nand data {},{}'.format(event.x, event.y, event.xdata, event.ydata))

# tell mpl_connect we want to pass a 'button_press_event' into onclick when the event is detected
plt.gcf().canvas.mpl_connect('button_press_event', onclick)

from random import shuffle

origins = ['China', 'Brazil', 'India', 'USA', 'Canada', 'UK', 'Germany', 'Iraq', 'Chile', 'Mexico']

shuffle(origins)

df = pd.DataFrame({'height': np.random.rand(10),
                   'weight': np.random.rand(10),
                   'origin': origins})
df

plt.figure()
# picker=5 means the mouse doesn't have to click directly on an event, but can be up to 5 pixels away
plt.scatter(df['height'], df['weight'], picker=5)
plt.gca().set_ylabel('Weight')
plt.gca().set_xlabel('Height')

def onpick(event):
    origin = df.iloc[event.ind[0]]['origin']
    plt.gca().set_title('Selected item came from {}'.format(origin))

# tell mpl_connect we want to pass a 'pick_event' into onpick when the event is detected
plt.gcf().canvas.mpl_connect('pick_event', onpick)
_____no_output_____
MIT
AppliedDataScienceWithPython/Week3.ipynb
MikeBeaulieu/coursework
Question 1

Assume that $f(\cdot)$ is an infinitely smooth and continuous scalar function. Suppose that $a\in \mathbb{R}$ is a given constant in the domain of the function $f$ and that $h>0$ is a given parameter assumed to be small. Consider the following numerical approximation of a first derivative,
$$ f'(a) \approx c_h(a) = \frac{f(a+h) - f(a - h)}{2h}.$$
A. Use a Taylor's series expansion of the function $f$ around $a$ to show that the approximation error is $O(h^2)$ provided that $f'''(a) \neq 0$.
B. What happens to the error if $f'''(a) = 0$?

-------------------------------------

Solution

A. The absolute error is
$$\mathcal{E}_{\rm abs} = \left \vert\frac{f(a+h) - f(a - h)}{2h} - f'(a) \right \vert. $$
To derive the error, we expand our function in a Taylor's series, with
$$ f(a \pm h) = f(a) \pm h f'(a) + \frac{h^2}{2}f''(a) \pm \frac{h^3}{6} f'''(a) + O(h^4). $$
Substituting the Taylor's series into the absolute error yields
\begin{align*}
\mathcal{E}_{\rm abs} &= \left \vert\frac{1}{2h}\left(hf'(a) + \frac{h^2}{2}f''(a) + \frac{h^3}{6} f'''(a) + O(h^4) + hf'(a) - \frac{h^2}{2}f''(a) + \frac{h^3}{6} f'''(a) - O(h^4)\right) - f'(a) \right \vert \\
&= \left \vert f'(a) + \frac{h^2}{6}f'''(a) + O(h^4) - f'(a)\right \vert \\
&= \left \vert \frac{h^2}{6}f'''(a) + O(h^4) \right \vert \\
&= \frac{h^2}{6}\left \vert f'''(a)\right \vert + O(h^4)
\end{align*}

B. The next nonzero term in the Taylor's series expansion of the error is $O(h^4)$, namely
$$ \frac{h^4}{5!}f^{(5)}(a). $$
Note that the $O(h^3)$ term cancels out.

Question 2

Use Example 2 in the Week 2 Jupyter notebook as a starting point. Copy the code and paste it into a new cell (you should be using a copy of the Week 2 notebook or a new notebook).

A. Compute the derivative approximation derived in Q1 for the function $f(x) = \sin(x)$ at the point $x=1.2$ for a range of values $10^{-20} \leq h \leq 10^{-1}$.
B. Compute the absolute error between the approximation and the exact derivative for a range of values $10^{-20} \leq h \leq 10^{-1}$. (For parts A and B, turn in a screen shot of your code.)
C. Create a plot of the absolute error. Add a plot of the discretization error that you derived in Q1. Is the derivative approximation that you derived in Q1 more accurate than the approximation used in Example 2?
x0 = 1.2
f0 = sin(x0)
fp = cos(x0)
fpp = -sin(x0)
fppp = -cos(x0)

i = linspace(-20, 0, 40)
h = 10.0**i

fp_approx = (sin(x0 + h) - f0)/h
fp_center_diff_approx = (sin(x0 + h) - sin(x0 - h))/(2*h)

err = absolute(fp - fp_approx)
err2 = absolute(fp - fp_center_diff_approx)
d_err = h/2*absolute(fpp)
d2_err = h**2/6*absolute(fppp)

figure(1, [7, 5])
loglog(h, err, '-*')
loglog(h, err2, '-*')
loglog(h, d_err, 'r-', label=r'$\frac{h}{2}\vert f^{\prime\prime}(x) \vert $')
loglog(h, d2_err, label=r'$\frac{h^2}{6}\vert f^{\prime\prime\prime}(x) \vert $')
xlabel('h', fontsize=20)
ylabel('absolute error', fontsize=20)
ylim(1e-15, 1)
legend(fontsize=24);
_____no_output_____
Apache-2.0
Homework 2 Solutions.ipynb
newby-jay/MATH381-Fall2021-JupyterNotebooks
Building and using data schemas for computer vision

This tutorial illustrates how to use raymon profiling to guard image quality in your production system. The image data is taken from [Kaggle](https://www.kaggle.com/ravirajsinh45/real-life-industrial-dataset-of-casting-product) and is courtesy of PILOT TECHNOCAST, Shapar, Rajkot. Commercial use of this data is not permitted, but we have received permission to use this data in our tutorials.

Note that some outputs may not work when viewing on Github since they are shown in iframes. We recommend cloning this repo and executing the notebooks locally.
%load_ext autoreload
%autoreload 2

from PIL import Image
from pathlib import Path
The autoreload extension is already loaded. To reload it, use: %reload_ext autoreload
MIT
examples/2-profiling-vision.ipynb
raymon-ai/raymon
First, let's load some data. In this tutorial, we'll take the example of quality inspection in manufacturing. The purpose of our system may be to determine whether a manufactured part passes the required quality checks. These checks may measure the roundness of the part, the smoothness of the edges, the smoothness of the part overall, etc... let's assume you have automated those checks with an ML based system.

What we demonstrate here is how you can easily set up quality checks on the incoming data, like whether the image is sharp enough and whether it is similar enough to the data the model was trained on. Doing checks like this may be important because people's actions, periodic maintenance, and wear and tear may have an impact on what data exactly is sent to your system. If your data changes, your system may keep running but will suffer from reduced performance, resulting in lower business value.
DATA_PATH = Path("../raymon/tests/sample_data/castinginspection/ok_front/")
LIM = 150

def load_data(dpath, lim):
    files = dpath.glob("*.jpeg")
    images = []
    for n, fpath in enumerate(files):
        if n == lim:
            break
        img = Image.open(fpath)
        images.append(img)
    return images

loaded_data = load_data(dpath=DATA_PATH, lim=LIM)
loaded_data[0]
_____no_output_____
MIT
examples/2-profiling-vision.ipynb
raymon-ai/raymon
Constructing and building a profile

For this tutorial, we'll construct a profile that checks the image sharpness and calculates an outlier score on the image. This way, we hope to get alerted when something seems off with the input data.

Just like in the case of structured data, we need to start by specifying a profile and its components.
from raymon import ModelProfile, InputComponent
from raymon.profiling.extractors.vision import Sharpness, DN2AnomalyScorer

profile = ModelProfile(
    name="casting-inspection",
    version="0.0.1",
    components=[
        InputComponent(name="sharpness", extractor=Sharpness()),
        InputComponent(name="outlierscore", extractor=DN2AnomalyScorer(k=16))
    ],
)
profile.build(input=loaded_data)

## Inspect the schema
profile.view(poi=loaded_data[-1], mode="external")
_____no_output_____
MIT
examples/2-profiling-vision.ipynb
raymon-ai/raymon
Use the profile to check new data

We can save the schema to JSON, load it again (in your production system), and use it to validate incoming data.
profile.save(".")
profile = ModelProfile.load("[email protected]")
tags = profile.validate_input(loaded_data[-1])
tags
_____no_output_____
MIT
examples/2-profiling-vision.ipynb
raymon-ai/raymon
As you can see, all the extracted feature values are returned. This is useful when you want to track feature distributions on your monitoring backend (which is what happens on the Raymon.ai platform). Also note that these features are not necessarily the ones going into your ML model.

Corrupting inputs

Let's see what happens when we blur an image.
from PIL import ImageFilter

img_blur = loaded_data[-1].copy().filter(ImageFilter.GaussianBlur(radius=5))
img_blur

profile.validate_input(img_blur)
_____no_output_____
MIT
examples/2-profiling-vision.ipynb
raymon-ai/raymon
As can be seen, every feature extractor now gives rise to 2 tags: one being the feature and one being a schema error, indicating that the data has failed both sanity checks. Awesome.

We can visualize this datum while inspecting the profile.
profile.view(poi=img_blur, mode="external")
_____no_output_____
MIT
examples/2-profiling-vision.ipynb
raymon-ai/raymon
Hierarchical clustering
# necessary imports
from sklearn.cluster import AgglomerativeClustering
from scipy.cluster.hierarchy import dendrogram

# Implement hierarchical clustering
modelo = AgglomerativeClustering(distance_threshold=0, n_clusters=None, linkage="single")
modelo.fit_predict(x)

# clusters.children_

# Plotting the dendrogram
def plot_dendrogram(modelo, **kwargs):
    counts = np.zeros(modelo.children_.shape[0])
    n_samples = len(modelo.labels_)
    for i, merge in enumerate(modelo.children_):
        current_count = 0
        for child_index in merge:
            if child_index < n_samples:
                current_count += 1
            else:
                current_count += counts[child_index - n_samples]
        counts[i] = current_count

    linkage_matrix = np.column_stack([modelo.children_, modelo.distances_, counts]).astype(float)
    dendrogram(linkage_matrix, **kwargs)

plot_dendrogram(modelo, truncate_mode='level', p=12)
plt.show()

# DBSCAN
# https://scikit-learn.org/stable/modules/generated/sklearn.cluster.dbscan.html?highlight=dbscan#sklearn.cluster.dbscan
from sklearn.cluster import DBSCAN

dbscan = DBSCAN(eps=.5, min_samples=15).fit(x)

# I was not able to finish this form of clustering (see the sketch after this cell)
_____no_output_____
MIT
clustering/k_means.ipynb
JVBravoo/Learning-Machine-Learning
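Since the last comment above notes that the DBSCAN part was left unfinished, here is a hedged sketch of how the fitted model's labels are usually inspected (following the scikit-learn documentation; `x` is assumed to be the same feature matrix used for the agglomerative model):

```python
labels = dbscan.labels_  # one cluster index per sample; -1 marks noise points
n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
n_noise = int((labels == -1).sum())
print(f"Estimated clusters: {n_clusters}, noise points: {n_noise}")
```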
This notebook demonstrates how to perform regression analysis using scikit-learn and the watson-machine-learning-client package. Some familiarity with Python is helpful. This notebook is compatible with Python 3.7.

You will use the sample data set, **sklearn.datasets.load_boston**, which is available in scikit-learn, to predict house prices.

Learning goals

In this notebook, you will learn how to:
- Load a sample data set from ``scikit-learn``
- Explore data
- Prepare data for training and evaluation
- Create a scikit-learn pipeline
- Train and evaluate a model
- Store a model in the Watson Machine Learning (WML) repository
- Deploy a model as Core ML

Contents
1. [Set up the environment](#setup)
2. [Load and explore data](#load)
3. [Build a scikit-learn linear regression model](#model)
4. [Set up the WML instance and save the model in the WML repository](#upload)
5. [Deploy the model via Core ML](#deploy)
6. [Clean up](#cleanup)
7. [Summary and next steps](#summary)

1. Set up the environment

Before you use the sample code in this notebook, you must perform the following setup task:
- Contact your IBM Cloud Pak for Data administrator and ask for your account credentials

Connection to WML

Authenticate the Watson Machine Learning service on IBM Cloud Pak for Data. You need to provide the platform `url`, your `username` and `password`.
username = 'PASTE YOUR USERNAME HERE'
password = 'PASTE YOUR PASSWORD HERE'
url = 'PASTE THE PLATFORM URL HERE'

wml_credentials = {
    "username": username,
    "password": password,
    "url": url,
    "instance_id": 'openshift',
    "version": '3.5'
}
_____no_output_____
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
Install and import the `ibm-watson-machine-learning` package

**Note:** `ibm-watson-machine-learning` documentation can be found here.
!pip install -U ibm-watson-machine-learning

from ibm_watson_machine_learning import APIClient

client = APIClient(wml_credentials)
2020-12-08 12:44:04,591 - root - WARNING - scikit-learn version 0.23.2 is not supported. Minimum required version: 0.17. Maximum required version: 0.19.2. Disabling scikit-learn conversion API.
2020-12-08 12:44:04,653 - root - WARNING - Keras version 2.2.5 detected. Last version known to be fully compatible of Keras is 2.2.4 .
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
Working with spaces

First of all, you need to create a space that will be used for your work. If you do not have a space already created, you can use `{PLATFORM_URL}/ml-runtime/spaces?context=icp4data` to create one.

- Click New Deployment Space
- Create an empty space
- Go to the space `Settings` tab
- Copy `space_id` and paste it below

**Tip**: You can also use the SDK to prepare the space for your work (a hedged sketch follows this cell). More information can be found [here](https://github.com/IBM/watson-machine-learning-samples/blob/master/cpd3.5/notebooks/python_sdk/instance-management/Space%20management.ipynb).

**Action**: Assign space ID below
space_id = 'PASTE YOUR SPACE ID HERE'
_____no_output_____
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
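A hedged sketch of the SDK route mentioned in the tip above, creating a space programmatically with `client.spaces.store`; the exact metadata fields required can differ by platform version, so treat the names below as assumptions to verify against the linked Space management sample:

```python
space_metadata = {
    client.spaces.ConfigurationMetaNames.NAME: "my_deployment_space",        # hypothetical space name
    client.spaces.ConfigurationMetaNames.DESCRIPTION: "Space for this demo"  # optional description
}
space_details = client.spaces.store(meta_props=space_metadata)
space_id = client.spaces.get_id(space_details)
```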
You can use the `list` method to print all existing spaces.
client.spaces.list(limit=10)
_____no_output_____
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
To be able to interact with all resources available in Watson Machine Learning, you need to set the **space** which you will be using.
client.set.default_space(space_id)
_____no_output_____
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
2. Load and explore data

The sample data set contains Boston house prices. The data set can be found here.

In this section, you will learn how to:
- [2.1 Explore Data](#dataset)
- [2.2 Check the correlations between predictors and the target](#corr)

2.1 Explore data

In this subsection, you will perform exploratory data analysis of the Boston house prices data set.
!pip install --upgrade scikit-learn==0.23.1 seaborn

import sklearn
from sklearn import datasets
import pandas as pd

boston_data = datasets.load_boston()
_____no_output_____
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
Let's check the names of the predictors.
print(boston_data.feature_names)
['CRIM' 'ZN' 'INDUS' 'CHAS' 'NOX' 'RM' 'AGE' 'DIS' 'RAD' 'TAX' 'PTRATIO' 'B' 'LSTAT']
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
**Tip:** Run `print(boston_data.DESCR)` to view a detailed description of the data set.
print(boston_data.DESCR)
.. _boston_dataset: Boston house prices dataset --------------------------- **Data Set Characteristics:** :Number of Instances: 506 :Number of Attributes: 13 numeric/categorical predictive. Median Value (attribute 14) is usually the target. :Attribute Information (in order): - CRIM per capita crime rate by town - ZN proportion of residential land zoned for lots over 25,000 sq.ft. - INDUS proportion of non-retail business acres per town - CHAS Charles River dummy variable (= 1 if tract bounds river; 0 otherwise) - NOX nitric oxides concentration (parts per 10 million) - RM average number of rooms per dwelling - AGE proportion of owner-occupied units built prior to 1940 - DIS weighted distances to five Boston employment centres - RAD index of accessibility to radial highways - TAX full-value property-tax rate per $10,000 - PTRATIO pupil-teacher ratio by town - B 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town - LSTAT % lower status of the population - MEDV Median value of owner-occupied homes in $1000's :Missing Attribute Values: None :Creator: Harrison, D. and Rubinfeld, D.L. This is a copy of UCI ML housing dataset. https://archive.ics.uci.edu/ml/machine-learning-databases/housing/ This dataset was taken from the StatLib library which is maintained at Carnegie Mellon University. The Boston house-price data of Harrison, D. and Rubinfeld, D.L. 'Hedonic prices and the demand for clean air', J. Environ. Economics & Management, vol.5, 81-102, 1978. Used in Belsley, Kuh & Welsch, 'Regression diagnostics ...', Wiley, 1980. N.B. Various transformations are used in the table on pages 244-261 of the latter. The Boston house-price data has been used in many machine learning papers that address regression problems. .. topic:: References - Belsley, Kuh & Welsch, 'Regression diagnostics: Identifying Influential Data and Sources of Collinearity', Wiley, 1980. 244-261. - Quinlan,R. (1993). Combining Instance-Based and Model-Based Learning. In Proceedings on the Tenth International Conference of Machine Learning, 236-243, University of Massachusetts, Amherst. Morgan Kaufmann.
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
Create a pandas DataFrame and display some descriptive statistics.
boston_pd = pd.DataFrame(boston_data.data)
boston_pd.columns = boston_data.feature_names
boston_pd['PRICE'] = boston_data.target
_____no_output_____
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
The describe method generates summary statistics of numerical predictors.
boston_pd.describe()
_____no_output_____
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
2.2 Check the correlations between predictors and the target
import seaborn as sns
%matplotlib inline

corr_coeffs = boston_pd.corr()
sns.heatmap(corr_coeffs, xticklabels=corr_coeffs.columns, yticklabels=corr_coeffs.columns);
_____no_output_____
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
3. Build a scikit-learn linear regression model

In this section, you will learn how to:
- [3.1 Split data](#prep)
- [3.2 Create a scikit-learn pipeline](#pipe)
- [3.3 Train the model](#train)

3.1 Split data

In this subsection, you will split the data set into:
- Train data set
- Test data set
from sklearn.model_selection import train_test_split

X = boston_pd.drop('PRICE', axis=1)
y = boston_pd['PRICE']

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=5)

print('Number of training records: ' + str(X_train.shape[0]))
print('Number of test records: ' + str(X_test.shape[0]))
Number of training records: 339
Number of test records: 167
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
Your data has been successfully split into two data sets:
- The train data set, which is the largest group, is used for training.
- The test data set will be used for model evaluation and to test the model.

3.2 Create a scikit-learn pipeline

In this subsection, you will create a scikit-learn pipeline.

First, import the scikit-learn machine learning packages that are needed in the subsequent steps.
from sklearn.pipeline import Pipeline
from sklearn import preprocessing
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
_____no_output_____
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
Standardize the features by removing the mean and by scaling to unit variance.
scaler = preprocessing.StandardScaler()
_____no_output_____
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
Next, define the regressor you want to use. This notebook uses the Linear Regression model.
lr = LinearRegression()
_____no_output_____
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
Build the pipeline. A pipeline consists of a transformer (Standard Scaler) and an estimator (Linear Regression model).
pipeline = Pipeline([('scaler', scaler), ('lr', lr)])
_____no_output_____
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
3.3 Train the model

Now, you can use the **pipeline** and **train data** you defined previously to train your linear regression model.
model = pipeline.fit(X_train, y_train)
_____no_output_____
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
Check the model quality.
y_pred = model.predict(X_test)
mse = sklearn.metrics.mean_squared_error(y_test, y_pred)
print('MSE: ' + str(mse))
MSE: 28.530458765974625
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
Plot the scatter plot of prices vs. predicted prices.
import matplotlib.pyplot as plt

plt.style.use('ggplot')
plt.title('Predicted prices vs prices')
plt.ylabel('Prices')
plt.xlabel('Predicted prices')
plot = plt.scatter(y_pred, y_test)
_____no_output_____
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
**Note:** You can tune your model to achieve better accuracy. To keep this example simple, the tuning section is omitted.

4. Save the model in the WML repository

In this section, you will learn how to use the common Python client to manage your model in the WML repository.
sofware_spec_uid = client.software_specifications.get_id_by_name("default_py3.7")

metadata = {
    client.repository.ModelMetaNames.NAME: 'Boston house price',
    client.repository.ModelMetaNames.TYPE: 'scikit-learn_0.23',
    client.repository.ModelMetaNames.SOFTWARE_SPEC_UID: sofware_spec_uid
}

published_model = client.repository.store_model(
    model=model,
    meta_props=metadata,
    training_data=X_train,
    training_target=y_train)

model_uid = client.repository.get_model_uid(published_model)
_____no_output_____
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
Get information about all of the models in the WML repository.
models_details = client.repository.list_models()
_____no_output_____
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
5. Deploy the model via Core ML

In this section, you will learn how to use the WML client to create a **virtual** deployment via `Core ML`. You will also learn how to use `download_url` to download a Core ML model for your Xcode project.

- [5.1 Create a virtual deployment for the model](#create)
- [5.2 Download the Core ML file from the deployment](#getdeploy)
- [5.3 Test the CoreML model](#testcoreML)

5.1 Create a virtual deployment for the model
metadata = {
    client.deployments.ConfigurationMetaNames.NAME: "Virtual deployment of Boston model",
    client.deployments.ConfigurationMetaNames.VIRTUAL: {"export_format": "coreml"}
}

created_deployment = client.deployments.create(model_uid, meta_props=metadata)
#######################################################################################

Synchronous deployment creation for uid: '9b319604-4b55-4a86-8728-51572eeeb761' started

#######################################################################################

initializing.........................
ready

------------------------------------------------------------------------------------------------
Successfully finished deployment creation, deployment_uid='29ebee31-849b-4ef1-b201-259dbddeb158'
------------------------------------------------------------------------------------------------
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
Now, you can list the deployments and locate the virtual deployment you just created; its download endpoint is what you use to download the Core ML model. 5.2 Download the `Core ML` file from the deployment
client.deployments.list()
_____no_output_____
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
Download the virtual deployment content: Core ML model.
deployment_uid = client.deployments.get_uid(created_deployment) deployment_content = client.deployments.download(deployment_uid)
---------------------------------------------------------- Successfully downloaded deployment file: mlartifact.tar.gz ----------------------------------------------------------
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
Use the code in the cell below to create the download link.
from ibm_watson_machine_learning.utils import create_download_link create_download_link(deployment_content)
_____no_output_____
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
**Note:** You can use Xcode to preview the model's metadata (after unzipping). 5.3 Test the `Core ML` model Use the following steps to run a test against the downloaded Core ML model.
!pip install --upgrade coremltools
_____no_output_____
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
Use ``coremltools`` to load the model and check some basic metadata. First, extract the model from the downloaded archive.
from ibm_watson_machine_learning.utils import extract_mlmodel_from_archive extracted_model_path = extract_mlmodel_from_archive('mlartifact.tar.gz', model_uid)
_____no_output_____
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
Load the model and check the description.
import coremltools loaded_model = coremltools.models.MLModel(extracted_model_path) print(loaded_model.get_spec())
specificationVersion: 1 description { input { name: "input" type { multiArrayType { shape: 13 dataType: DOUBLE } } } output { name: "prediction" type { doubleType { } } } predictedFeatureName: "prediction" metadata { shortDescription: "\'description\'" userDefined { key: "coremltoolsVersion" value: "3.4" } } } pipelineRegressor { pipeline { models { specificationVersion: 1 description { input { name: "input" type { multiArrayType { shape: 13 dataType: DOUBLE } } } output { name: "__feature_vector__" type { multiArrayType { shape: 13 dataType: DOUBLE } } } metadata { userDefined { key: "coremltoolsVersion" value: "3.4" } } } scaler { shiftValue: -3.5107058407079643 shiftValue: -11.233038348082596 shiftValue: -10.946755162241887 shiftValue: -0.061946902654867256 shiftValue: -0.5524333333333333 shiftValue: -6.2900589970501475 shiftValue: -67.4339233038348 shiftValue: -3.7929982300884952 shiftValue: -9.587020648967552 shiftValue: -404.9882005899705 shiftValue: -18.456342182890854 shiftValue: -359.3829498525074 shiftValue: -12.5223598820059 scaleValue: 0.11919939314854636 scaleValue: 0.04472688940586527 scaleValue: 0.14990467420150372 scaleValue: 4.14836050491109 scaleValue: 8.709158768395854 scaleValue: 1.4339791968637936 scaleValue: 0.035440297674478205 scaleValue: 0.49360314158376223 scaleValue: 0.11485019638452705 scaleValue: 0.005946475916321702 scaleValue: 0.4634385504439216 scaleValue: 0.011393122149179365 scaleValue: 0.14172437454317954 } } models { specificationVersion: 1 description { input { name: "__feature_vector__" type { multiArrayType { shape: 13 dataType: DOUBLE } } } output { name: "prediction" type { doubleType { } } } predictedFeatureName: "prediction" metadata { userDefined { key: "coremltoolsVersion" value: "3.4" } } } glmRegressor { weights { value: -1.311930314912692 value: 0.8618774463035517 value: -0.16719286609046674 value: 0.1895784329617395 value: -1.4865858389370386 value: 2.7913156462931568 value: -0.3273770336805285 value: -2.7720409347134205 value: 2.9756754908489014 value: -2.2727548977084533 value: -2.133758688598611 value: 1.058429930547136 value: -3.3349540749442603 } offset: 22.537168141592925 } } } }
Apache-2.0
cpd3.5/notebooks/python_sdk/deployments/coreml/Use Core ML to predict Boston house prices.ipynb
muthukumarbala07/watson-machine-learning-samples
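As an optional end-to-end test (a sketch, not taken from the original notebook), one could compare a Core ML prediction with the scikit-learn prediction for the same record. Note that `coremltools` can usually only run predictions on macOS, and the input/output names `input` and `prediction` are read from the spec printed above.
# Optional prediction test -- typically only works on macOS
import numpy as np

sample = np.asarray(X_test[0], dtype=np.float64)       # one 13-feature Boston record
coreml_out = loaded_model.predict({'input': sample})   # returns a dict keyed by the output name
print('Core ML prediction    :', coreml_out['prediction'])
print('scikit-learn predicts :', model.predict([X_test[0]])[0])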
Copyright Netherlands eScience Center ** Function : Computing AMET with Surface & TOA flux** ** Author : Yang Liu ** ** First Built : 2019.08.09 ** ** Last Update : 2019.09.09 ** Description : This notebook aims to compute AMET with TOA/surface flux fields from the MPI-ESM model. The MPI-ESM runs are provided in Blue Action Work Package 3 as coordinated experiments for joint analysis. It contributes to Deliverable 3.1. Return Values : netCDF4 Caveat : The fields used here are post-processed monthly mean fields. Hence there is no accumulation that needs to be taken into account. The **positive sign** convention for each variable is:* Latent heat flux (LHF) - downward * Sensible heat flux (SHF) - downward * Net solar radiation flux at TOA (NTopSol & UTopSol) - downward * Net solar radiation flux at surface (NSurfSol) - downward * Net longwave radiation flux at surface (NSurfTherm) - downward * Net longwave radiation flux at TOA (OLR) - downward
%matplotlib inline import numpy as np import sys sys.path.append("/home/ESLT0068/NLeSC/Computation_Modeling/Bjerknes/Scripts/META") import scipy as sp import pygrib import time as tttt from netCDF4 import Dataset,num2date import os import meta.statistics import meta.visualizer # constants constant = {'g' : 9.80616, # gravititional acceleration [m / s2] 'R' : 6371009, # radius of the earth [m] 'cp': 1004.64, # heat capacity of air [J/(Kg*K)] 'Lv': 2264670, # Latent heat of vaporization [J/Kg] 'R_dry' : 286.9, # gas constant of dry air [J/(kg*K)] 'R_vap' : 461.5, # gas constant for water vapour [J/(kg*K)] } ################################ Input zone ###################################### # specify starting and ending time start_year = 1979 end_year = 2013 # specify data path datapath = '/home/ESLT0068/WorkFlow/Core_Database_BlueAction_WP3/MPIESM_MPI' # specify output path for figures output_path = '/home/ESLT0068/WorkFlow/Core_Database_BlueAction_WP3/AMET_netCDF' # ensemble number ensemble = 10 # experiment number exp = 4 # example file #datapath_example = os.path.join(datapath, 'SHF', 'Amon2d_amip_bac_rg_1_SHF_1979-2013.grb') #datapath_example = os.path.join(datapath, 'LHF', 'Amon2d_amip_bac_rg_1_LHF_1979-2013.grb') #datapath_example = os.path.join(datapath, 'NSurfSol', 'Amon2d_amip_bac_rg_1_NSurfSol_1979-2014.grb') #datapath_example = os.path.join(datapath, 'NTopSol', 'Amon2d_amip_bac_rg_1_DTopSol_1979-2014.grb') #datapath_example = os.path.join(datapath, 'UTopSol', 'Amon2d_amip_bac_rg_1_UTopSol_1979-2014.grb') #datapath_example = os.path.join(datapath, 'NSurfTherm', 'Amon2d_amip_bac_rg_1_NSurfTherm_1979-2014.grb') datapath_example = os.path.join(datapath, 'OLR', 'Amon2d_amip_bac_rg_1_OLR_1979-2014.grb') #################################################################################### def var_key_retrieve(datapath, exp_num, ensemble_num): # get the path to each datasets print ("Start retrieving datasets of experiment {} ensemble number {}".format(exp_num+1, ensemble_num)) # get data path if exp_num == 0 : # exp 1 datapath_slhf = os.path.join(datapath, 'LHF', 'Amon2d_amip_bac_rg_{}_LHF_1979-2013.grb'.format(ensemble_num)) datapath_sshf = os.path.join(datapath, 'SHF', 'Amon2d_amip_bac_rg_{}_SHF_1979-2013.grb'.format(ensemble_num)) datapath_ssr = os.path.join(datapath, 'NSurfSol', 'Amon2d_amip_bac_rg_{}_NSurfSol_1979-2014.grb'.format(ensemble_num)) datapath_str = os.path.join(datapath, 'NSurfTherm', 'Amon2d_amip_bac_rg_{}_NSurfTherm_1979-2014.grb'.format(ensemble_num)) datapath_tsr_in = os.path.join(datapath, 'NTopSol', 'Amon2d_amip_bac_rg_{}_DTopSol_1979-2014.grb'.format(ensemble_num)) datapath_tsr_out = os.path.join(datapath, 'UTopSol', 'Amon2d_amip_bac_rg_{}_UTopSol_1979-2014.grb'.format(ensemble_num)) datapath_ttr = os.path.join(datapath, 'OLR', 'Amon2d_amip_bac_rg_{}_OLR_1979-2014.grb'.format(ensemble_num)) elif exp_num == 1: datapath_slhf = os.path.join(datapath, 'LHF', 'Amon2d_amip_bac_exp{}_rg_{}_LHF_1979-2013.grb'.format(exp_num+1, ensemble_num)) datapath_sshf = os.path.join(datapath, 'SHF', 'Amon2d_amip_bac_exp{}_rg_{}_SHF_1979-2013.grb'.format(exp_num+1, ensemble_num)) datapath_ssr = os.path.join(datapath, 'NSurfSol', 'Amon2d_amip_bac_exp{}_rg_{}_NSurfSol_1979-2014.grb'.format(exp_num+1, ensemble_num)) datapath_str = os.path.join(datapath, 'NSurfTherm', 'Amon2d_amip_bac_exp{}_rg_{}_NSurfTherm_1979-2014.grb'.format(exp_num+1, ensemble_num)) datapath_tsr_in = os.path.join(datapath, 'NTopSol', 'Amon2d_amip_bac_exp{}_rg_{}_DTopSol_1979-2014.grb'.format(exp_num+1, ensemble_num)) 
datapath_tsr_out = os.path.join(datapath, 'UTopSol', 'Amon2d_amip_bac_exp{}_rg_{}_UTopSol_1979-2014.grb'.format(exp_num+1, ensemble_num)) datapath_ttr = os.path.join(datapath, 'OLR', 'Amon2d_amip_bac_exp{}_rg_{}_OLR_1979-2014.grb'.format(exp_num+1, ensemble_num)) else: datapath_slhf = os.path.join(datapath, 'LHF', 'Amon2d_amip_bac_exp{}_rg_{}_LHF_1979-2013.grb'.format(exp_num+1, ensemble_num)) datapath_sshf = os.path.join(datapath, 'SHF', 'Amon2d_amip_bac_exp{}_rg_{}_SHF_1979-2013.grb'.format(exp_num+1, ensemble_num)) datapath_ssr = os.path.join(datapath, 'NSurfSol', 'Amon2d_amip_bac_exp{}_rg_{}_NSurfSol_1979-2013.grb'.format(exp_num+1, ensemble_num)) datapath_str = os.path.join(datapath, 'NSurfTherm', 'Amon2d_amip_bac_exp{}_rg_{}_NSurfTherm_1979-2013.grb'.format(exp_num+1, ensemble_num)) datapath_tsr_in = os.path.join(datapath, 'NTopSol', 'Amon2d_amip_bac_exp{}_rg_{}_DTopSol_1979-2013.grb'.format(exp_num+1, ensemble_num)) datapath_tsr_out = os.path.join(datapath, 'UTopSol', 'Amon2d_amip_bac_exp{}_rg_{}_UTopSol_1979-2013.grb'.format(exp_num+1, ensemble_num)) datapath_ttr = os.path.join(datapath, 'OLR', 'Amon2d_amip_bac_exp{}_rg_{}_OLR_1979-2013.grb'.format(exp_num+1, ensemble_num)) # get the variable keys grbs_slhf = pygrib.open(datapath_slhf) grbs_sshf = pygrib.open(datapath_sshf) grbs_ssr = pygrib.open(datapath_ssr) grbs_str = pygrib.open(datapath_str) grbs_tsr_in = pygrib.open(datapath_tsr_in) grbs_tsr_out = pygrib.open(datapath_tsr_out) grbs_ttr = pygrib.open(datapath_ttr) print ("Retrieving datasets successfully and return the variable key!") return grbs_slhf, grbs_sshf, grbs_ssr, grbs_str, grbs_tsr_in, grbs_tsr_out, grbs_ttr def amet(grbs_slhf, grbs_sshf, grbs_ssr, grbs_str, grbs_tsr_in, grbs_tsr_out, grbs_ttr, period_1979_2013, lat, lon): # get all the varialbes # make sure we know the sign of all the input variables!!! 
# ascending lat var_slhf = np.zeros((len(period_1979_2013)*12,len(lat),len(lon)),dtype=float) # surface latent heat flux W/m2 var_sshf = np.zeros((len(period_1979_2013)*12,len(lat),len(lon)),dtype=float) # surface sensible heat flux W/m2 var_ssr = np.zeros((len(period_1979_2013)*12,len(lat),len(lon)),dtype=float) var_str = np.zeros((len(period_1979_2013)*12,len(lat),len(lon)),dtype=float) var_tsr_in = np.zeros((len(period_1979_2013)*12,len(lat),len(lon)),dtype=float) var_tsr_out = np.zeros((len(period_1979_2013)*12,len(lat),len(lon)),dtype=float) var_ttr = np.zeros((len(period_1979_2013)*12,len(lat),len(lon)),dtype=float) # load data counter = 1 for i in np.arange(len(period_1979_2013)*12): key_slhf = grbs_slhf.message(counter) key_sshf = grbs_sshf.message(counter) key_ssr = grbs_ssr.message(counter) key_str = grbs_str.message(counter) key_tsr_in = grbs_tsr_in.message(counter) key_tsr_out = grbs_tsr_out.message(counter) key_ttr = grbs_ttr.message(counter) var_slhf[i,:,:] = key_slhf.values var_sshf[i,:,:] = key_sshf.values var_ssr[i,:,:] = key_ssr.values var_str[i,:,:] = key_str.values var_tsr_in[i,:,:] = key_tsr_in.values var_tsr_out[i,:,:] = key_tsr_out.values var_ttr[i,:,:] = key_ttr.values # counter update counter +=1 #size of the grid box dx = 2 * np.pi * constant['R'] * np.cos(2 * np.pi * lat / 360) / len(lon) dy = np.pi * constant['R'] / len(lat) # calculate total net energy flux at TOA/surface net_flux_surf = var_slhf + var_sshf + var_ssr + var_str net_flux_toa = var_tsr_in + var_tsr_out + var_ttr net_flux_surf_area = np.zeros(net_flux_surf.shape, dtype=float) # unit W net_flux_toa_area = np.zeros(net_flux_toa.shape, dtype=float) grbs_slhf.close() grbs_sshf.close() grbs_ssr.close() grbs_str.close() grbs_tsr_in.close() grbs_tsr_out.close() grbs_ttr.close() for i in np.arange(len(lat)): # change the unit to terawatt net_flux_surf_area[:,i,:] = net_flux_surf[:,i,:]* dx[i] * dy / 1E+12 net_flux_toa_area[:,i,:] = net_flux_toa[:,i,:]* dx[i] * dy / 1E+12 # take the zonal integral of flux net_flux_surf_int = np.sum(net_flux_surf_area,2) / 1000 # PW net_flux_toa_int = np.sum(net_flux_toa_area,2) / 1000 # AMET as the residual of net flux at TOA & surface AMET_res_ERAI = np.zeros(net_flux_surf_int.shape) for i in np.arange(len(lat)): AMET_res_ERAI[:,i] = -(np.sum(net_flux_toa_int[:,0:i+1],1) - np.sum(net_flux_surf_int[:,0:i+1],1)) AMET_res_ERAI = AMET_res_ERAI.reshape(-1,12,len(lat)) return AMET_res_ERAI def create_netcdf_point (pool_amet, lat, output_path, exp): print ('*******************************************************************') print ('*********************** create netcdf file*************************') print ('*******************************************************************') #logging.info("Start creating netcdf file for the 2D fields of ERAI at each grid point.") # get the basic dimensions ens, year, month, _ = pool_amet.shape # wrap the datasets into netcdf file # 'NETCDF3_CLASSIC', 'NETCDF3_64BIT', 'NETCDF4_CLASSIC', and 'NETCDF4' data_wrap = Dataset(os.path.join(output_path, 'amet_MPIESM_MPI_exp{}.nc'.format(exp+1)),'w',format = 'NETCDF4') # create dimensions for netcdf data ens_wrap_dim = data_wrap.createDimension('ensemble', ens) year_wrap_dim = data_wrap.createDimension('year', year) month_wrap_dim = data_wrap.createDimension('month', month) lat_wrap_dim = data_wrap.createDimension('latitude', len(lat)) # create coordinate variable ens_wrap_var = data_wrap.createVariable('ensemble',np.int32,('ensemble',)) year_wrap_var = 
data_wrap.createVariable('year',np.int32,('year',)) month_wrap_var = data_wrap.createVariable('month',np.int32,('month',)) lat_wrap_var = data_wrap.createVariable('latitude',np.float32,('latitude',)) # create the actual 4d variable amet_wrap_var = data_wrap.createVariable('amet',np.float64,('ensemble','year','month','latitude'),zlib=True) # global attributes data_wrap.description = 'Monthly mean atmospheric meridional energy transport' # variable attributes lat_wrap_var.units = 'degree_north' amet_wrap_var.units = 'PW' amet_wrap_var.long_name = 'atmospheric meridional energy transport' # writing data ens_wrap_var[:] = np.arange(ens) month_wrap_var[:] = np.arange(month)+1 year_wrap_var[:] = np.arange(year)+1979 lat_wrap_var[:] = lat amet_wrap_var[:] = pool_amet # close the file data_wrap.close() print ("The generation of netcdf files is complete!!") if __name__=="__main__": #################################################################### ###### Create time namelist matrix for variable extraction ####### #################################################################### # date and time arrangement # namelist of month and days for file manipulation namelist_month = ['01','02','03','04','05','06','07','08','09','10','11','12'] ensemble_list = ['01','02','03','04','05','06','07','08','09','10', '11','12','13','14','15','16','17','18','19','20', '21','22','23','24','25','26','27','28','29','30',] # index of months period_1979_2013 = np.arange(start_year,end_year+1,1) index_month = np.arange(1,13,1) #################################################################### ###### Extract invariant and calculate constants ####### #################################################################### # get basic dimensions from sample file grbs_example = pygrib.open(datapath_example) key_example = grbs_example.message(1) lats, lons = key_example.latlons() lat = lats[:,0] lon = lons[0,:] grbs_example.close() # get invariant from benchmark file Dim_year_1979_2013 = len(period_1979_2013) Dim_month = len(index_month) Dim_latitude = len(lat) Dim_longitude = len(lon) ############################################# ##### Create space for stroing data ##### ############################################# # loop for calculation for i in range(exp): pool_amet = np.zeros((ensemble,Dim_year_1979_2013,Dim_month,Dim_latitude),dtype = float) for j in range(ensemble): # get variable keys grbs_slhf, grbs_sshf, grbs_ssr, grbs_str, grbs_tsr_in,\ grbs_tsr_out, grbs_ttr = var_key_retrieve(datapath, i, j) # compute amet pool_amet[j,:,:,:] = amet(grbs_slhf, grbs_sshf, grbs_ssr, grbs_str, grbs_tsr_in,\ grbs_tsr_out, grbs_ttr, period_1979_2013, lat, lon) #################################################################### ###### Data Wrapping (NetCDF) ####### #################################################################### # save netcdf create_netcdf_point(pool_amet, lat, output_path, i) print ('Packing AMET is complete!!!') print ('The output is in sleep, safe and sound!!!') ############################################################################ ############################################################################ # first check grbs_example = pygrib.open(datapath_example) key_example = grbs_example.message(1) lats, lons = key_example.latlons() lat = lats[:,0] lon = lons[0,:] print(lat) print(lon) #k = key_example.values #print(k[30:40,330:340]) #print(key_example.unit) # print all the credentials #for i in grbs_example: # print(i) grbs_example.close() # index of months period_1979_2013 = 
np.arange(start_year,end_year+1,1) values = np.zeros((len(period_1979_2013)*12,len(lat),len(lon)),dtype=float) counter = 1 grbs_example = pygrib.open(datapath_example) for i in np.arange(len(period_1979_2013)*12): key = grbs_example.message(counter) values[i,:,:] = key.values counter +=1 value_max = np.amax(values) value_min = np.amin(values) print(value_max) print(value_min)
_____no_output_____
Apache-2.0
Packing/AMET_MPIESM_MPI.ipynb
geek-yang/JointAnalysis
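For reference, the residual computation in the cells above can be summarised compactly: with all fluxes defined positive downward, the meridional energy transport at a given latitude is minus the area integral of the TOA–surface imbalance over the cap bounded by that latitude (the orientation follows the latitude ordering of the grid), with the monthly tendency of atmospheric energy content neglected, which is the usual caveat of the residual method. A sketch of that relation, with $R$ the Earth's radius, is:

$$\mathrm{AMET}(\phi)\;\approx\;-\int_{0}^{2\pi}\!\int_{\mathrm{cap}(\phi)}\big(F^{\downarrow}_{\mathrm{TOA}}-F^{\downarrow}_{\mathrm{sfc}}\big)\,R^{2}\cos\phi'\,\mathrm{d}\phi'\,\mathrm{d}\lambda .$$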
Romania* Homepage of project: https://oscovida.github.io* Plots are explained at http://oscovida.github.io/plots.html* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Romania.ipynb)
import datetime import time start = datetime.datetime.now() print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}") %config InlineBackend.figure_formats = ['svg'] from oscovida import * overview("Romania", weeks=5); overview("Romania"); compare_plot("Romania", normalise=True); # load the data cases, deaths = get_country_data("Romania") # get population of the region for future normalisation: inhabitants = population("Romania") print(f'Population of "Romania": {inhabitants} people') # compose into one table table = compose_dataframe_summary(cases, deaths) # show tables with up to 1000 rows pd.set_option("max_rows", 1000) # display the table table
_____no_output_____
CC-BY-4.0
ipynb/Romania.ipynb
oscovida/oscovida.github.io
Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Romania.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyter.org for more details on how to use Jupyter Notebook Acknowledgements:- Johns Hopkins University provides data for countries- Robert Koch Institute provides data for within Germany- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)- Open source and scientific computing community for the data tools- Github for hosting repository and html files- Project Jupyter for the Notebook and binder service- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))--------------------
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and " f"deaths at {fetch_deaths_last_execution()}.") # to force a fresh download of data, run "clear_cache()" print(f"Notebook execution took: {datetime.datetime.now()-start}")
_____no_output_____
CC-BY-4.0
ipynb/Romania.ipynb
oscovida/oscovida.github.io
Demonstration that GMVRFIT reduces to GMVPFIT (or equivalent) for polynomial cases Development of a fitting function (greedy + linear, based on mvpolyfit and gmvpfit) that handles rational functions
# Low-level import from numpy import * from numpy.linalg import pinv,lstsq # Setup ipython environment %load_ext autoreload %autoreload 2 %matplotlib inline # Setup plotting backend import matplotlib as mpl mpl.rcParams['lines.linewidth'] = 0.8 mpl.rcParams['font.family'] = 'serif' mpl.rcParams['font.size'] = 12 mpl.rcParams['axes.labelsize'] = 20 mpl.rcParams['axes.titlesize'] = 20 from mpl_toolkits.mplot3d import Axes3D from matplotlib.pyplot import * from positive import *  # provides mvrfit, gmvrfit and ndflatten used below
_____no_output_____
MIT
factory/gmvrfit_reduce_to_gmvpfit_example.ipynb
llondon6/koalas
Package Development (positive/learning.py) Setup test data
################################################################################ h = 3 Q = 25 x = h*linspace(-1,1,Q) y = h*linspace(-1,1,Q) X,Y = meshgrid(x,y) # X += np.random.random( X.shape )-0.5 # Y += np.random.random( X.shape )-0.5 zfun = lambda xx,yy: 50 + (1.0 + 0.5*xx*yy + xx**2 + yy**2 ) numerator_symbols, denominator_symbols = ['01','00','11'],[] np.random.seed(42) ns = 0.1*(np.random.random( X.shape )-0.5) Z = zfun(X,Y) + ns domain,scalar_range = ndflatten( [X,Y], Z ) ################################################################################
_____no_output_____
MIT
factory/gmvrfit_reduce_to_gmvpfit_example.ipynb
llondon6/koalas
Initiate class object for fitting
foo = mvrfit( domain, scalar_range, numerator_symbols, denominator_symbols, verbose=True )
_____no_output_____
MIT
factory/gmvrfit_reduce_to_gmvpfit_example.ipynb
llondon6/koalas
Plot using class method
foo.plot()
_____no_output_____
MIT
factory/gmvrfit_reduce_to_gmvpfit_example.ipynb
llondon6/koalas
Generate python string for fit model
print foo.__str_python__(precision=8)
f = lambda x0,x1: 5.74999156e+01 + 4.41113329e+00 * ( 2.26778466e-01*(x0*x0) + 1.13422851e-01*(x0*x1) + 2.26544539e-01*(x1*x1) + -1.47329976e+00 )
MIT
factory/gmvrfit_reduce_to_gmvpfit_example.ipynb
llondon6/koalas
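As a quick optional sanity check (not in the original notebook), the printed expression above can be evaluated directly against the noiseless `zfun` used to build the test data; the deviation should be on the order of the injected noise.
# Evaluate the fitted expression printed above against the noiseless zfun
f_fit = lambda x0,x1: 5.74999156e+01 + 4.41113329e+00 * ( 2.26778466e-01*(x0*x0) + 1.13422851e-01*(x0*x1) + 2.26544539e-01*(x1*x1) + -1.47329976e+00 )
print( 'max abs deviation from noiseless zfun: %g' % abs( f_fit(X,Y) - zfun(X,Y) ).max() )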
Use greedy algorithm
star = gmvrfit( domain, scalar_range, verbose=True ) star.plot() star.bin['pgreedy_result'].plot() star.bin['ngreedy_result'].plot()
_____no_output_____
MIT
factory/gmvrfit_reduce_to_gmvpfit_example.ipynb
llondon6/koalas
Commands for plotting These are used so that the usual "plot" will use matplotlib.
# commands for plotting, "plot" works with matplotlib def mesh2triang(mesh): xy = mesh.coordinates() return tri.Triangulation(xy[:, 0], xy[:, 1], mesh.cells()) def mplot_cellfunction(cellfn): C = cellfn.array() tri = mesh2triang(cellfn.mesh()) return plt.tripcolor(tri, facecolors=C) def mplot_function(f): mesh = f.function_space().mesh() if (mesh.geometry().dim() != 2): raise AttributeError('Mesh must be 2D') # DG0 cellwise function if f.vector().size() == mesh.num_cells(): C = f.vector().array() return plt.tripcolor(mesh2triang(mesh), C) # Scalar function, interpolated to vertices elif f.value_rank() == 0: C = f.compute_vertex_values(mesh) return plt.tripcolor(mesh2triang(mesh), C, shading='gouraud') # Vector function, interpolated to vertices elif f.value_rank() == 1: w0 = f.compute_vertex_values(mesh) if (len(w0) != 2*mesh.num_vertices()): raise AttributeError('Vector field must be 2D') X = mesh.coordinates()[:, 0] Y = mesh.coordinates()[:, 1] U = w0[:mesh.num_vertices()] V = w0[mesh.num_vertices():] return plt.quiver(X,Y,U,V) # Plot a generic dolfin object (if supported) def plot(obj): plt.gca().set_aspect('equal') if isinstance(obj, Function): return mplot_function(obj) elif isinstance(obj, CellFunctionSizet): return mplot_cellfunction(obj) elif isinstance(obj, CellFunctionDouble): return mplot_cellfunction(obj) elif isinstance(obj, CellFunctionInt): return mplot_cellfunction(obj) elif isinstance(obj, Mesh): if (obj.geometry().dim() != 2): raise AttributeError('Mesh must be 2D') return plt.triplot(mesh2triang(obj), color='#808080') raise AttributeError('Failed to plot %s'%type(obj)) # end of commands for plotting
_____no_output_____
MIT
.ipynb_checkpoints/Annulus_Simple_Matplotlib-checkpoint.ipynb
brettavedisian/Liquid-Crystals
Annulus We compute the field in an annulus: we specify boundary conditions on the inner and outer circles and solve the problem.
r1 = 1 # inner circle radius r2 = 10 # outer circle radius # shapes of inner/outer boundaries are circles c1 = Circle(Point(0.0, 0.0), r1) c2 = Circle(Point(0.0, 0.0), r2) domain = c2 - c1 # solve between circles res = 20 mesh = generate_mesh(domain, res) class outer_boundary(SubDomain): def inside(self, x, on_boundary): tol = 1e-2 return on_boundary and (abs(sqrt(x[0]*x[0] + x[1]*x[1])) - r2) < tol class inner_boundary(SubDomain): def inside(self, x, on_boundary): tol = 1e-2 return on_boundary and (abs(sqrt(x[0]*x[0] + x[1]*x[1])) - r1) < tol outerradius = outer_boundary() innerradius = inner_boundary() boundaries = FacetFunction("size_t", mesh) boundaries.set_all(0) outerradius.mark(boundaries,2) innerradius.mark(boundaries,1) V = FunctionSpace(mesh,'Lagrange',1) n = Constant(10.0) bcs = [DirichletBC(V, 0, boundaries, 2), DirichletBC(V, n, boundaries, 1)] # DirichletBC(V, nx, boundaries, 1)] u = TrialFunction(V) v = TestFunction(V) f = Constant(0.0) a = inner(nabla_grad(u), nabla_grad(v))*dx L = f*v*dx u = Function(V) solve(a == L, u, bcs)
_____no_output_____
MIT
.ipynb_checkpoints/Annulus_Simple_Matplotlib-checkpoint.ipynb
brettavedisian/Liquid-Crystals
Plotting with matplotlibNow the usual "plot" commands will work for plotting the mesh and the function.
plot(mesh) # usual Fenics command, will use matplotlib plot(u) # usual Fenics command, will use matplotlib
_____no_output_____
MIT
.ipynb_checkpoints/Annulus_Simple_Matplotlib-checkpoint.ipynb
brettavedisian/Liquid-Crystals
If you want to do the usual matplotlib operations, you still need the "plt." prefix on the commands.
plt.figure() plt.subplot(1,2,1) plot(mesh) plt.xlabel('x') plt.ylabel('y') plt.subplot(1,2,2) plot(u) plt.title('annulus solution')
_____no_output_____
MIT
.ipynb_checkpoints/Annulus_Simple_Matplotlib-checkpoint.ipynb
brettavedisian/Liquid-Crystals
Plotting along a line It turns out that the solution "u" is a function that can be evaluated at a point. So in the next cell we loop along a line and make a vector of points for plotting. You just need to give it coordinates: $u(x,y)$.
y = np.linspace(r1,r2*0.99,100) uu = [] np.array(uu) for i in range(len(y)): yy = y[i] uu.append(u(0.0,yy)) #evaluate u along y axis plt.figure() plt.plot(y,uu) plt.grid(True) plt.xlabel('y') plt.ylabel('V') u
_____no_output_____
MIT
.ipynb_checkpoints/Annulus_Simple_Matplotlib-checkpoint.ipynb
brettavedisian/Liquid-Crystals
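Because the boundary values set above are u = 10 on the inner circle and u = 0 on the outer circle, the exact solution of Laplace's equation in the annulus is u(r) = 10·ln(r2/r)/ln(r2/r1), so an optional check is to overlay this analytic curve on the line plot; the sketch below reuses `y`, `uu`, `r1` and `r2` from the cells above.
# Compare the FEniCS solution along the line with the analytic annulus solution
import numpy as np
import matplotlib.pyplot as plt

u_exact = 10.0*np.log(r2/np.asarray(y, dtype=float))/np.log(float(r2)/float(r1))
plt.figure()
plt.plot(y, uu, label='FEniCS')
plt.plot(y, u_exact, '--', label='analytic')
plt.xlabel('r')
plt.ylabel('V')
plt.legend()
plt.grid(True)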
Handwritten Digits Recognition 02 - TensorFlow From the table below, we see that the MNIST database is much larger than the scikit-learn database, which we modelled in the previous notebook. Both the number of samples and the size of each sample are significantly higher. The good news is that, with TensorFlow and Keras, we can build neural networks that are powerful enough to handle the MNIST database! In this notebook, we are going to use a Convolutional Neural Network (CNN) to perform image recognition.

| | Scikit-learn database | MNIST database |
|-----------|-----------------------|----------------|
| Samples | 1797 | 70,000 |
| Dimensions | 64 (8x8) | 784 (28x28) |

1. More information about the Scikit-learn Database: https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_digits.html
2. More information about the MNIST Database: https://en.wikipedia.org/wiki/MNIST_database

Loading MNIST database We are going to load the MNIST database using utilities provided by TensorFlow. When importing TensorFlow, I always first check if it is using the GPU.
import tensorflow as tf from tensorflow import keras import numpy as np import matplotlib.pyplot as plt print("TensorFlow Version", tf.__version__) if tf.test.is_gpu_available(): print("Device:", tf.test.gpu_device_name())
TensorFlow Version 2.1.0 Device: /device:GPU:0
MIT
Handwritten Digits Recognition 02 - TensorFlow.ipynb
kevin-linps/Handwritten-digits-recognition
Now, load the MNIST database using TensorFlow. From the output, we can see that the images are 28x28 pixels. The database contains 60,000 training and 10,000 testing images. There are no missing entries.
(X_train, y_train), (X_test, y_test) = tf.keras.datasets.mnist.load_data() print(X_train.shape, y_train.shape) print(X_test.shape, y_test.shape)
(60000, 28, 28) (60000,) (10000, 28, 28) (10000,)
MIT
Handwritten Digits Recognition 02 - TensorFlow.ipynb
kevin-linps/Handwritten-digits-recognition
Before we get our hands dirty with all the hard work, let's take a moment and look at some digits in the dataset. The digits displayed are the first eight in the set. We can see that the image quality is quite high, significantly better than the images in the scikit-learn digits set.
fig, axes = plt.subplots(2, 4) for i, ax in zip(range(8), axes.flatten()): ax.imshow(X_train[i], cmap=plt.cm.gray_r, interpolation='nearest') ax.set_title("Number %d" % y_train[i]) ax.set_axis_off() fig.suptitle("Image of Digits in MNIST Database") plt.show()
_____no_output_____
MIT
Handwritten Digits Recognition 02 - TensorFlow.ipynb
kevin-linps/Handwritten-digits-recognition
Training a convolutional neural network with TensorFlow Each pixel in the images is stored as an integer ranging from 0 to 255. For the CNN we normalize the values to lie between 0 and 1 and add a channel dimension so that the images can be fed into the convolutional layers. We also convert the labels (*y_train, y_test*) to one-hot encoding since we are classifying images into categories.
# Normalize and flatten the images x_train = X_train.reshape((60000, 28, 28, 1)).astype('float32') / 255 x_test = X_test.reshape((10000, 28, 28, 1)).astype('float32') / 255 # Convert to one-hot encoding from keras.utils import np_utils y_train = np_utils.to_categorical(y_train) y_test = np_utils.to_categorical(y_test)
Using TensorFlow backend.
MIT
Handwritten Digits Recognition 02 - TensorFlow.ipynb
kevin-linps/Handwritten-digits-recognition
This is the structure of the convolutional neural network. We have two convolution layers to extract features, along with two pooling layers to reduce the spatial dimensions of the feature maps. The dropout layer randomly discards 20% of the activations during training to prevent overfitting. The multi-dimensional data is then flattened into vectors. The two dense layers with 128 neurons each perform the classification, and the final dense layer with 10 neurons outputs the class probabilities.
model = keras.Sequential([ keras.layers.Conv2D(32, (5,5), activation = 'relu'), keras.layers.MaxPool2D(pool_size = (2,2)), keras.layers.Conv2D(32, (5,5), activation = 'relu'), keras.layers.MaxPool2D(pool_size = (2,2)), keras.layers.Dropout(rate = 0.2), keras.layers.Flatten(), keras.layers.Dense(units = 128, activation = 'relu'), keras.layers.Dense(units = 128, activation = 'relu'), keras.layers.Dense(units = 10, activation = 'softmax') ]) model.compile(loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy']) model.fit(x_train, y_train, epochs=10) # Test the accuracy of the model on the testing set test_loss, test_acc = model.evaluate(x_test, y_test, verbose = 2) print() print('Test accuracy:', test_acc)
10000/10000 - 1s - loss: 0.0290 - accuracy: 0.9921 Test accuracy: 0.9921
MIT
Handwritten Digits Recognition 02 - TensorFlow.ipynb
kevin-linps/Handwritten-digits-recognition
The training accuracy of the CNN is 99.46% and its accuracy on the test set is 99.21%, so there is essentially no overfitting. We have a robust model! Saving the trained model Below is the summary of the model. It is remarkable that we have trained 109,930 parameters! Now, save this model so we don't have to train it again in the future.
# Show the model architecture model.summary()
Model: "sequential" _________________________________________________________________ Layer (type) Output Shape Param # ================================================================= conv2d (Conv2D) multiple 832 _________________________________________________________________ max_pooling2d (MaxPooling2D) multiple 0 _________________________________________________________________ conv2d_1 (Conv2D) multiple 25632 _________________________________________________________________ max_pooling2d_1 (MaxPooling2 multiple 0 _________________________________________________________________ dropout (Dropout) multiple 0 _________________________________________________________________ flatten (Flatten) multiple 0 _________________________________________________________________ dense (Dense) multiple 65664 _________________________________________________________________ dense_1 (Dense) multiple 16512 _________________________________________________________________ dense_2 (Dense) multiple 1290 ================================================================= Total params: 109,930 Trainable params: 109,930 Non-trainable params: 0 _________________________________________________________________
MIT
Handwritten Digits Recognition 02 - TensorFlow.ipynb
kevin-linps/Handwritten-digits-recognition
Just like in the previous notebook, we can save this model as well.
model.save("CNN_model.h5")
_____no_output_____
MIT
Handwritten Digits Recognition 02 - TensorFlow.ipynb
kevin-linps/Handwritten-digits-recognition
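As a quick optional check that the saved file is usable later, the sketch below reloads `CNN_model.h5` and re-evaluates it on the test set; it assumes `x_test` and the one-hot `y_test` from above are still in memory and should reproduce the ~99.2% test accuracy.
# Reload the saved model and verify it still scores the same on the test set
reloaded = keras.models.load_model("CNN_model.h5")
loss, acc = reloaded.evaluate(x_test, y_test, verbose=2)
print("Reloaded test accuracy:", acc)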
Let's play with a fun fake dataset. It contains a few features and a dependent variable that says whether we are ever going to graduate or not. Importing a few libraries
from sklearn import datasets,model_selection import matplotlib.pyplot as plt import pandas as pd import seaborn as sns import numpy as np from sklearn.metrics import accuracy_score from sklearn.linear_model import LogisticRegression from ipywidgets import interactive from sklearn.preprocessing import MinMaxScaler from sklearn import model_selection
_____no_output_____
MIT
Transparent Model Interpretability.ipynb
iamollas/Informatics-Cafe-XAI-IML-Tutorial
Then, we will load our fake dataset and split it into two parts, one for training and one for testing.
student = pd.read_csv('LionForests-Bot/students2.csv') feature_names = list(student.columns)[:-1] class_names=["Won't graduate",'Will graduate (eventually)'] X = student.iloc[:, 0:-1].values y = student.iloc[:, -1].values x_train, x_test, y_train, y_test = model_selection.train_test_split(X, y, test_size=0.3,random_state=0) fig, (ax1, ax2, ax3, ax4, ax5) = plt.subplots(1, 5, figsize=(20,4), dpi=200) ax1.hist(X[:,0:1], bins='auto') ax1.set(xlabel='Years in school') ax2.hist(X[:,1:2], bins='auto') ax2.set(xlabel='# of courses completed') ax3.hist(X[:,2:3], bins='auto') ax3.set(xlabel='Attending class per week') ax4.hist(X[:,3:4], bins='auto') ax4.set(xlabel='Owns car') ax5.hist(X[:,4:], bins='auto') ax5.set(xlabel='# of roomates') plt.show()
_____no_output_____
MIT
Transparent Model Interpretability.ipynb
iamollas/Informatics-Cafe-XAI-IML-Tutorial
We also scale our data to the range [0,1] so that the feature weights (and hence the interpretations) are comparable later.
scaler = MinMaxScaler() scaler.fit(x_train) x_train = scaler.transform(x_train) x_test = scaler.transform(x_test)
_____no_output_____
MIT
Transparent Model Interpretability.ipynb
iamollas/Informatics-Cafe-XAI-IML-Tutorial
Now, we will train a linear model, logistic regression, on our dataset and evaluate its performance.
#lin_model = LogisticRegression(solver="newton-cg",penalty='l2',max_iter=1000,C=100,random_state=0) lin_model = LogisticRegression(solver="liblinear",penalty='l1',max_iter=1000,C=10,random_state=0) lin_model.fit(x_train, y_train) predicted_train = lin_model.predict(x_train) predicted_test = lin_model.predict(x_test) predicted_proba_test = lin_model.predict_proba(x_test) print("Logistic Regression Model Performance:") print("Accuracy in Train Set",accuracy_score(y_train, predicted_train)) print("Accuracy in Test Set",accuracy_score(y_test, predicted_test))
Logistic Regression Model Performance: Accuracy in Train Set 0.8414285714285714 Accuracy in Test Set 0.85
MIT
Transparent Model Interpretability.ipynb
iamollas/Informatics-Cafe-XAI-IML-Tutorial
To interpret this model globally, we will plot the weight of each variable/feature.
weights = lin_model.coef_ model_weights = pd.DataFrame({ 'features': list(feature_names),'weights': list(weights[0])}) #model_weights = model_weights.sort_values(by='weights', ascending=False) #Normal sort model_weights = model_weights.reindex(model_weights['weights'].abs().sort_values(ascending=False).index) #Sort by absolute value model_weights = model_weights[(model_weights["weights"] != 0)] print("Number of features:",len(model_weights.values)) plt.figure(num=None, figsize=(8, 6), dpi=100, facecolor='w', edgecolor='k') sns.barplot(x="weights", y="features", data=model_weights) plt.title("Intercept (Bias): "+str(lin_model.intercept_[0]),loc='right') plt.xticks(rotation=90) plt.show()
Number of features: 5
MIT
Transparent Model Interpretability.ipynb
iamollas/Informatics-Cafe-XAI-IML-Tutorial
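Because the model is linear, the same weights also explain individual predictions: each feature contributes weight × (scaled value), and the sum plus the intercept is the log-odds of the positive class. A minimal sketch, assuming the fitted `lin_model`, the scaled `x_test` and `feature_names` from above:
# Local explanation for one test instance: per-feature contribution = weight * scaled value
instance = x_test[0]
contributions = lin_model.coef_[0] * instance
for name, value, contrib in zip(feature_names, instance, contributions):
    print("{:+.3f}  <- {} (scaled value {:.2f})".format(contrib, name, value))
print("log-odds =", contributions.sum() + lin_model.intercept_[0])
print("predicted label:", lin_model.predict([instance])[0])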
import tensorflow as tf import matplotlib.pyplot as plt fashion = tf.keras.datasets.fashion_mnist (train_data, train_lable), (test_data, test_lable)= fashion.load_data() train_data= train_data/255 test_data=test_data/255 model = tf.keras.Sequential([ tf.keras.layers.Flatten(), tf.keras.layers.Dense(128, input_shape=(28,28), activation='relu'), tf.keras.layers.Dense(64,activation='relu' ), tf.keras.layers.Dense(64,activation='relu' ), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=0.001), loss='sparse_categorical_crossentropy', metrics=['accuracy']) history=model.fit(train_data, train_lable, epochs=20, verbose=1) model.evaluate(test_data, test_lable)
313/313 [==============================] - 1s 2ms/step - loss: 0.3598 - accuracy: 0.8886
MIT
fashion.ipynb
rajeevak40/Course_AWS_Certified_Machine_Learning
Using CNN
(train_data1, train_lable1), (test_data1, test_lable1)= fashion.load_data() train_data1=train_data1.reshape(60000, 28,28,1) test_data1= test_data1.reshape(10000,28,28,1) train_data1= train_data1/255 test_data1=test_data1/255 model = tf.keras.Sequential([ tf.keras.layers.Conv2D(128,(3,3), activation='relu', input_shape=(28,28,1)), tf.keras.layers.MaxPool2D(2,2), tf.keras.layers.Conv2D(128,(3,3), activation='relu'), tf.keras.layers.MaxPool2D(2,2), tf.keras.layers.Flatten(), tf.keras.layers.Dense(128, activation='relu'), tf.keras.layers.Dense(64,activation='relu' ), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) model.fit(train_data1, train_lable, epochs=20) model.evaluate(test_data1, test_lable)
_____no_output_____
MIT
fashion.ipynb
rajeevak40/Course_AWS_Certified_Machine_Learning
Linear Regression Implementation from Scratch:label:`sec_linear_scratch`Now that you understand the key ideas behind linear regression,we can begin to work through a hands-on implementation in code.In this section, (**we will implement the entire method from scratch,including the data pipeline, the model,the loss function, and the minibatch stochastic gradient descent optimizer.**)While modern deep learning frameworks can automate nearly all of this work,implementing things from scratch is the only wayto make sure that you really know what you are doing.Moreover, when it comes time to customize models,defining our own layers or loss functions,understanding how things work under the hood will prove handy.In this section, we will rely only on tensors and auto differentiation.Afterwards, we will introduce a more concise implementation,taking advantage of bells and whistles of deep learning frameworks.
%matplotlib inline import random import tensorflow as tf from d2l import tensorflow as d2l
_____no_output_____
MIT
d2l/tensorflow/chapter_linear-networks/linear-regression-scratch.ipynb
nilesh-patil/dive-into-deeplearning
Generating the DatasetTo keep things simple, we will [**construct an artificial datasetaccording to a linear model with additive noise.**]Our task will be to recover this model's parametersusing the finite set of examples contained in our dataset.We will keep the data low-dimensional so we can visualize it easily.In the following code snippet, we generate a datasetcontaining 1000 examples, each consisting of 2 featuressampled from a standard normal distribution.Thus our synthetic dataset will be a matrix$\mathbf{X}\in \mathbb{R}^{1000 \times 2}$.(**The true parameters generating our dataset will be$\mathbf{w} = [2, -3.4]^\top$ and $b = 4.2$,and**) our synthetic labels will be assigned accordingto the following linear model with the noise term $\epsilon$:(**$$\mathbf{y}= \mathbf{X} \mathbf{w} + b + \mathbf\epsilon.$$**)You could think of $\epsilon$ as capturing potentialmeasurement errors on the features and labels.We will assume that the standard assumptions hold and thusthat $\epsilon$ obeys a normal distribution with mean of 0.To make our problem easy, we will set its standard deviation to 0.01.The following code generates our synthetic dataset.
def synthetic_data(w, b, num_examples): #@save """Generate y = Xw + b + noise.""" X = tf.zeros((num_examples, w.shape[0])) X += tf.random.normal(shape=X.shape) y = tf.matmul(X, tf.reshape(w, (-1, 1))) + b y += tf.random.normal(shape=y.shape, stddev=0.01) y = tf.reshape(y, (-1, 1)) return X, y true_w = tf.constant([2, -3.4]) true_b = 4.2 features, labels = synthetic_data(true_w, true_b, 1000)
_____no_output_____
MIT
d2l/tensorflow/chapter_linear-networks/linear-regression-scratch.ipynb
nilesh-patil/dive-into-deeplearning
Note that [**each row in `features` consists of a 2-dimensional data exampleand that each row in `labels` consists of a 1-dimensional label value (a scalar).**]
print('features:', features[0],'\nlabel:', labels[0])
features: tf.Tensor([ 0.8627048 -0.8168014], shape=(2,), dtype=float32) label: tf.Tensor([8.699112], shape=(1,), dtype=float32)
MIT
d2l/tensorflow/chapter_linear-networks/linear-regression-scratch.ipynb
nilesh-patil/dive-into-deeplearning
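Since the labels were generated linearly from the features, an optional visual sanity check is to scatter-plot one feature against the labels; with the true weight of the second feature being -3.4, a clear negative linear trend should appear. A sketch using the plotting helpers re-exported by `d2l` (assuming `d2l.set_figsize` and `d2l.plt` are available, as in the package imported above):
# The second feature and the labels should show a negative linear relation (true w[1] = -3.4)
d2l.set_figsize()
d2l.plt.scatter(features[:, 1].numpy(), labels.numpy(), 1)
d2l.plt.xlabel('feature x2')
d2l.plt.ylabel('label y')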