Dataset columns: markdown (stringlengths 0–37k), code (stringlengths 1–33.3k), path (stringlengths 8–215), repo_name (stringlengths 6–77), license (stringclasses, 15 values)
Correlation plots
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import correlate

def autocorr(x):
    xunbiased = x - np.mean(x)
    xnorm = np.sum(xunbiased**2)
    acor = np.correlate(xunbiased, xunbiased, "same")/xnorm
    #result = correlate(x, x, mode='full')
    #result /= result[result.argmax()]
    acor = acor[len(acor)//2:]  # integer division so this also works in Python 3
    return acor  #result[result.size//2:]

# L_t0 and L_t1 are the chains simulated in the earlier cells
cov_t0 = autocorr(L_t0)
cov_t1 = autocorr(L_t1)

plt.plot(cov_t0)
plt.ylabel('Autocorrelation')
plt.xlabel('N_steps')
plt.title('Autocorrelation of L_i for T=0.05')
plt.show()

plt.plot(cov_t1)
plt.ylabel('Autocorrelation')
plt.xlabel('N_steps')
plt.title('Autocorrelation of L_i for T=10')
plt.show()
2015_Fall/MATH-578B/Homework2/Homework2.ipynb
saketkc/hatex
mit
Result

The autocorrelation remains high even for large values of $N_{step}$, for both temperature values. I expected the higher $T$ to yield lower autocorrelations.

Problem 1

Let the state space be $S = \{\phi, \alpha, \beta, \alpha+\beta, \mathrm{pol}, \dagger\}$.

Definitions:
1. $\tau_a = \min\{ n \geq 0 : X_n = a\}$
2. $N = \sum_{k=0}^{\tau_{\phi}} I_{\{X_k=\dagger\}}$
3. $u(a) = E[N \mid X_0 = a] \quad \forall a \in S$

Then

$u(a) = \sum_{k=0}^{\tau_{\phi}} P(X_k=\dagger \mid X_0=a) = \sum_{b \neq a, \dagger} P(X_1=b \mid X_0=a)\, P(X_k=\dagger \mid X_0=b)$

$\implies u(a) = \sum_{b \neq a, \dagger} P_{ab}\, u(b)$,

and hence $u$ solves the linear system $u = (I-P_{-})^{-1} v$, where $v = (0,0,0,1)^T$ in this case and $P_{-}$ is $P$ with the first and last rows and columns (the $\phi$ and $\dagger$ states) removed.
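Read off from the transition matrix defined in the next cell, with the reduced state ordering $(\alpha, \beta, \alpha+\beta, \mathrm{pol})$, the system written out is

$$
Q = P_{-} = \begin{pmatrix}
1-k_a-k_b & 0 & k_b & 0\\
0 & 1-k_a-k_b & k_a & 0\\
k_b & k_a & 1-k_a-k_b-k_p & k_p\\
0 & 0 & 0 & 0
\end{pmatrix},
\qquad
u = (I-Q)^{-1}\begin{pmatrix}0\\0\\0\\1\end{pmatrix}.
$$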
import numpy as np

k_a = 0.2
k_b = 0.2
k_p = 0.5

P = np.matrix([[1-k_a-k_b, k_a, k_b, 0, 0, 0],
               [k_a, 1-k_a-k_b, 0, k_b, 0, 0],
               [k_b, 0, 1-k_a-k_b, k_a, 0, 0],
               [0, k_b, k_a, 1-k_a-k_b-k_p, k_p, 0],
               [0, 0, 0, 0, 0, 1],
               [0, 0, 0, 1, 0, 0]])
Q = P[1:5, 1:5]
iq = np.eye(4) - Q
iqi = np.linalg.inv(iq)
print(iq)
print(iqi)
print('U={}'.format(iqi[:, -1]))
u = iqi[:, -1]

PP = {}
states = ['phi', 'alpha', 'beta', 'ab', 'pol', 'd']
PP['phi'] = [1-k_a-k_b, k_a, k_b, 0, 0, 0]
PP['alpha'] = [k_a, 1-k_a-k_b, 0, k_b, 0, 0]
PP['beta'] = [k_b, 0, 1-k_a-k_b, k_a, 0, 0]
PP['ab'] = [0, k_b, k_a, 1-k_a-k_b-k_p, k_p, 0]
PP['pol'] = [0, 0, 0, 0, 0, 1]
PP['d'] = [0, 0, 0, 1, 0, 0]

def h(x):
    """Simulate the chain from state x; return (mean state index, mean hitting time of 'd')."""
    s = 0
    ht = 0
    cc = 0
    for j in range(1, 100):
        new_state = x
        for i in range(1, 10000):
            old_state = new_state
            probs = PP[old_state]
            z = np.random.choice(6, 1, p=probs)
            new_state = states[z[0]]
            s += z[0]
            if new_state == 'd':
                ht += i
                cc += 1
                break
    return s/1000, ht/cc
2015_Fall/MATH-578B/Homework2/Homework2.ipynb
saketkc/hatex
mit
$\alpha$
print('Simulation: {}\t Calculation: {}'.format(h('alpha')[1],u[0]))
2015_Fall/MATH-578B/Homework2/Homework2.ipynb
saketkc/hatex
mit
$\beta$
print('Simulation: {}\t Calculation: {}'.format(h('beta')[1],u[1]))
2015_Fall/MATH-578B/Homework2/Homework2.ipynb
saketkc/hatex
mit
$\alpha+\beta$
print('Simulation: {}\t Calculation: {}'.format(h('ab')[1],u[2]))
2015_Fall/MATH-578B/Homework2/Homework2.ipynb
saketkc/hatex
mit
pol
print('Simulation: {}\t Calculation: {}'.format(h('pol')[1],u[3]))
2015_Fall/MATH-578B/Homework2/Homework2.ipynb
saketkc/hatex
mit
We can compute a derivative symbolically, but it is of course horrendous (see below). Think of how much worse it would be if we chose a function with products, more dimensions, or iterated more than 20 times.
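The symbolic-differentiation cell below calls a function func that is defined earlier in the notebook and not shown here; a minimal sketch of what it is assumed to look like, consistent with the 30-iteration sine loop used in funcad further down (an assumption, not the notebook's exact definition):

```python
def func(x):
    # repeatedly compose y <- sin(x + y); `sin` is resolved from whichever
    # package was imported last (sympy below, algopy in the AD cell)
    y = x
    for i in range(30):
        y = sin(x + y)
    return y
```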
from __future__ import print_function
from sympy import diff, Symbol, sin

x = Symbol('x')
dexp = diff(func(x), x)  # symbolically differentiate func (defined above)
print(dexp)
SymbolicVsAD.ipynb
BYUFLOWLab/MDOnotebooks
mit
We can now evaluate the expression.
xpt = 0.1 dfdx = dexp.subs(x, xpt) print('dfdx =', dfdx)
SymbolicVsAD.ipynb
BYUFLOWLab/MDOnotebooks
mit
Let's compare with automatic differentiation using operator overloading:
from algopy import UTPM, sin x_algopy = UTPM.init_jacobian(xpt) y_algopy = func(x_algopy) dfdx = UTPM.extract_jacobian(y_algopy) print('dfdx =', dfdx)
SymbolicVsAD.ipynb
BYUFLOWLab/MDOnotebooks
mit
Let's also compare to AD using a source code transformation method (I used Tapenade in Fortran)
from math import sin, cos

def funcad(x):
    # hand-written derivative code, as produced by source transformation (Tapenade)
    xd = 1.0
    yd = xd
    y = x
    for i in range(30):
        yd = (xd + yd)*cos(x + y)
        y = sin(x + y)
    return yd

dfdx = funcad(xpt)
print('dfdx =', dfdx)
SymbolicVsAD.ipynb
BYUFLOWLab/MDOnotebooks
mit
Linear Regression Algorithm in TensorFlow
import tensorflow as tf import numpy as np import matplotlib.pyplot as plt learning_rate = 0.01 training_epochs = 100 x_train = np.linspace(-1,1,101) y_train = 2 * x_train + np.random.randn(*x_train.shape) * 0.33 X = tf.placeholder("float") Y = tf.placeholder("float") def model(X,w): return tf.multiply(X,w) w = tf.Variable(0.0, name="weights") y_model = model(X,w) cost = tf.square(Y-y_model) train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost) sess = tf.Session() init = tf.global_variables_initializer() sess.run(init) for epoch in range(training_epochs): for (x,y) in zip(x_train, y_train): sess.run(train_op, feed_dict={X:x, Y:y}) w_val = sess.run(w) sess.close() plt.scatter(x_train, y_train) y_learned = x_train*w_val plt.plot(x_train, y_learned, 'r') plt.show()
TensorFlow/02_Linear_Regression.ipynb
josdaza/deep-toolbox
mit
Linear Regression with Polynomials of Degree N
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt

learning_rate = 0.01
training_epochs = 40

trX = np.linspace(-1, 1, 101)
num_coeffs = 6
trY_coeffs = [1, 2, 3, 4, 5, 6]
trY = 0
# Build pseudo-random polynomial data to test the algorithm
for i in range(num_coeffs):
    trY += trY_coeffs[i] * np.power(trX, i)
trY += np.random.randn(*trX.shape) * 1.5
plt.scatter(trX, trY)
plt.show()

# Build the TensorFlow graph
X = tf.placeholder("float")
Y = tf.placeholder("float")

def model(X, w):
    terms = []
    for i in range(num_coeffs):
        term = tf.multiply(w[i], tf.pow(X, i))
        terms.append(term)
    return tf.add_n(terms)

w = tf.Variable([0.] * num_coeffs, name="parameters")
y_model = model(X, w)
cost = (tf.pow(Y-y_model, 2))
train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost)

# Run the algorithm in TensorFlow
sess = tf.Session()
init = tf.global_variables_initializer()
sess.run(init)
for epoch in range(training_epochs):
    for (x, y) in zip(trX, trY):
        sess.run(train_op, feed_dict={X: x, Y: y})
w_val = sess.run(w)
print(w_val)
sess.close()

# Show the fitted model
plt.scatter(trX, trY)
trY2 = 0
for i in range(num_coeffs):
    trY2 += w_val[i] * np.power(trX, i)
plt.plot(trX, trY2, 'r')
plt.show()
TensorFlow/02_Linear_Regression.ipynb
josdaza/deep-toolbox
mit
Regularization

To better control the impact that outliers have on our model (and thus keep the model from producing overly complicated curves and from overfitting), there is the notion of regularization, defined as:

$$ Cost(X,Y) = Loss(X,Y) + \lambda \lVert w \rVert $$

where $\lVert w \rVert$ is the norm of the weight vector (the distance of the vector from the origin; see the topic of norms elsewhere, e.g. the L1 or L2 norm), used as the penalty term, and $\lambda$ is a parameter that tunes how strongly the penalty is applied. The larger $\lambda$ is, the heavier the penalty; if $\lambda$ is 0 we recover the original model with no regularization applied. To obtain an optimal value of $\lambda$, the dataset has to be split and...
import tensorflow as tf import numpy as np import matplotlib.pyplot as plt def split_dataset(x_dataset, y_dataset, ratio): arr = np.arange(x_dataset.size) np.random.shuffle(arr) num_train = int(ratio* x_dataset.size) x_train = x_dataset[arr[0:num_train]] y_train = y_dataset[arr[0:num_train]] x_test = x_dataset[arr[num_train:x_dataset.size]] y_test = y_dataset[arr[num_train:x_dataset.size]] return x_train, x_test, y_train, y_test learning_rate = 0.001 training_epochs = 1000 reg_lambda = 0. x_dataset = np.linspace(-1, 1, 100) num_coeffs = 9 y_dataset_params = [0.] * num_coeffs y_dataset_params[2] = 1 y_dataset = 0 for i in range(num_coeffs): y_dataset += y_dataset_params[i] * np.power(x_dataset, i) y_dataset += np.random.randn(*x_dataset.shape) * 0.3 (x_train, x_test, y_train, y_test) = split_dataset(x_dataset, y_dataset, 0.7) X = tf.placeholder("float") Y = tf.placeholder("float") def model(X, w): terms = [] for i in range(num_coeffs): term = tf.multiply(w[i], tf.pow(X,i)) terms.append(term) return tf.add_n(terms) w = tf.Variable([0.] * num_coeffs, name="parameters") y_model = model(X, w) cost = tf.div(tf.add(tf.reduce_sum(tf.square(Y-y_model)), tf.multiply(reg_lambda, tf.reduce_sum(tf.square(w)))), 2*x_train.size) train_op = tf.train.GradientDescentOptimizer(learning_rate).minimize(cost) sess = tf.Session() init = tf.global_variables_initializer() sess.run(init) i,stop_iters = 0,15 for reg_lambda in np.linspace(0,1,100): i += 1 for epoch in range(training_epochs): sess.run(train_op, feed_dict={X: x_train, Y: y_train}) final_cost = sess.run(cost, feed_dict={X: x_test, Y:y_test}) print('reg lambda', reg_lambda) print('final cost', final_cost) if i > stop_iters: break sess.close()
TensorFlow/02_Linear_Regression.ipynb
josdaza/deep-toolbox
mit
Takes too long!
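The cell below relies on helpers defined earlier in the notebook (the houses 屋子, all orderings 所有順序, the positions 第一間 and 中間, and the predicates 在右邊 and 隔壁); a minimal sketch of what they are assumed to look like, following the classic formulation of the puzzle:

```python
from itertools import permutations

屋子 = [1, 2, 3, 4, 5]               # the five houses, numbered left to right
第一間, _, 中間, _, _ = 屋子           # the first and the middle house
所有順序 = list(permutations(屋子))    # every ordering of the five houses

def 在右邊(h1, h2):
    # h1 is immediately to the right of h2
    return h1 - h2 == 1

def 隔壁(h1, h2):
    # h1 and h2 are next to each other
    return abs(h1 - h2) == 1
```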
def zebra_puzzle(): return [locals() for (紅, 綠, 白, 黃, 藍) in 所有順序 if 在右邊(綠, 白) #6 for (英國人, 西班牙人, 烏克蘭人, 日本人, 挪威人) in 所有順序 if 英國人 is 紅 #2 if 挪威人 is 第一間 #10 if 隔壁(挪威人, 藍) #15 for (咖啡, 茶, 牛奶, 橘子汁, 水) in 所有順序 if 咖啡 is 綠 #4 if 烏克蘭人 is 茶 #5 if 牛奶 is 中間 #9 for (OldGold, Kools, Chesterfields, LuckyStrike, Parliaments) in 所有順序 if Kools is 黃 #8 if LuckyStrike is 橘子汁 #13 if 日本人 is Parliaments #14 for (狗, 蝸牛, 狐狸, 馬, 斑馬) in 所有順序 if 西班牙人 is 狗 #3 if OldGold is 蝸牛 #7 if 隔壁(Chesterfields, 狐狸) #11 if 隔壁(Kools, 馬) #12 ] zebra_puzzle() def result(d): return {i:[k for k,v in d.items() if v == i] for i in 屋子} def zebra_puzzle(): return [result(locals()) for (紅, 綠, 白, 黃, 藍) in 所有順序 if 在右邊(綠, 白) for (英國人, 西班牙人, 烏克蘭人, 日本人, 挪威人) in 所有順序 if 英國人 is 紅 if 挪威人 is 第一間 if 隔壁(挪威人, 藍) for (咖啡, 茶, 牛奶, 橘子汁, 水) in 所有順序 if 咖啡 is 綠 if 烏克蘭人 is 茶 if 牛奶 is 中間 for (OldGold, Kools, Chesterfields, LuckyStrike, Parliaments) in 所有順序 if Kools is 黃 if LuckyStrike is 橘子汁 if 日本人 is Parliaments for (狗, 蝸牛, 狐狸, 馬, 斑馬) in 所有順序 if 西班牙人 is 狗 if OldGold is 蝸牛 if 隔壁(Chesterfields, 狐狸) if 隔壁(Kools, 馬) ] zebra_puzzle()[0]
Tutorial 3.ipynb
tjwei/PythonTutorial
mit
Outline

Preliminaries: Setup & introduction

Beam dynamics
* Tutorial N1. Linear optics. Web version. Linear optics. DBA.
* Tutorial N2. Tracking. Web version. Linear optics of the European XFEL Injector. Tracking. First and second order.
* Tutorial N3. Space Charge. Web version. Tracking with SC effects.
* Tutorial N4. Wakefields. Web version. Tracking with Wakefields.

FEL calculation
* Tutorial N5: Genesis preprocessor. Web version.
* Tutorial N6. Genesis postprocessor. Web version.

All IPython (Jupyter) notebooks (.ipynb) have analogues in the form of Python scripts (.py). All these notebooks, as well as additional files (beam distribution, wakes, ...), can be downloaded here.

Preliminaries

The tutorial includes 4 simple examples dedicated to beam dynamics. However, you should have a basic understanding of computer programming terminology. A basic understanding of the Python language is a plus.

This tutorial requires the following packages:
* Python version 2.7 or 3.4-3.5
* numpy version 1.8 or later: http://www.numpy.org/
* scipy version 0.15 or later: http://www.scipy.org/
* matplotlib version 1.5 or later: http://matplotlib.org/
* ipython version 2.4 or later, with notebook support: http://ipython.org

The easiest way to get these is to download and install the (very large) Anaconda software distribution. Alternatively, you can download and install miniconda. The following command will install all required packages:

$ conda install numpy scipy matplotlib ipython-notebook

Ocelot installation
* Download the zip file from GitHub.
* Unzip ocelot-master.zip to your working folder ../your_working_dir/.
* Rename the folder ../your_working_dir/ocelot-master to ../your_working_dir/ocelot.
* Add ../your_working_dir/ to PYTHONPATH.
  * Windows 7: go to Control Panel -> System and Security -> System -> Advanced System Settings -> Environment Variables, and in User variables add ../your_working_dir/ to PYTHONPATH. If the variable PYTHONPATH does not exist, create it: Variable name: PYTHONPATH, Variable value: ../your_working_dir/
  * Linux: $ export PYTHONPATH=../your_working_dir/:$PYTHONPATH

To launch "ipython notebook" or "jupyter notebook" from the command line, run one of the following commands:

$ ipython notebook

or

$ ipython notebook --notebook-dir="path_to_your_directory"

or

$ jupyter notebook --notebook-dir="path_to_your_directory"

Checking your installation

You can run the following code to check the versions of the packages on your system (in an IPython notebook, press Shift and Return together to execute the contents of a cell):
import IPython print('IPython:', IPython.__version__) import numpy print('numpy:', numpy.__version__) import scipy print('scipy:', scipy.__version__) import matplotlib print('matplotlib:', matplotlib.__version__) import ocelot print('ocelot:', ocelot.__version__)
1_introduction.ipynb
sergey-tomin/workshop
mit
<a id="tutorial1"></a> Tutorial N1. Double Bend Achromat. We designed a simple lattice to demonstrate the basic concepts and syntax of the optics functions calculation. Also, we chose DBA to demonstrate the periodic solution for the optical functions calculation.
from __future__ import print_function # the output of plotting commands is displayed inline within frontends, # directly below the code cell that produced it %matplotlib inline # import from Ocelot main modules and functions from ocelot import * # import from Ocelot graphical modules from ocelot.gui.accelerator import *
1_introduction.ipynb
sergey-tomin/workshop
mit
Creating a lattice. Ocelot has the following elements: Drift, Quadrupole, Sextupole, Octupole, Bend, SBend, RBend, Edge, Multipole, Hcor, Vcor, Solenoid, Cavity, Monitor, Marker, Undulator.
# defining of the drifts D1 = Drift(l=2.) D2 = Drift(l=0.6) D3 = Drift(l=0.3) D4 = Drift(l=0.7) D5 = Drift(l=0.9) D6 = Drift(l=0.2) # defining of the quads Q1 = Quadrupole(l=0.4, k1=-1.3) Q2 = Quadrupole(l=0.8, k1=1.4) Q3 = Quadrupole(l=0.4, k1=-1.7) Q4 = Quadrupole(l=0.5, k1=1.3) # defining of the bending magnet B = Bend(l=2.7, k1=-.06, angle=2*pi/16., e1=pi/16., e2=pi/16.) # defining of the sextupoles SF = Sextupole(l=0.01, k2=1.5) #random value SD = Sextupole(l=0.01, k2=-1.5) #random value # cell creating cell = (D1, Q1, D2, Q2, D3, Q3, D4, B, D5, SD, D5, SF, D6, Q4, D6, SF, D5, SD, D5, B, D4, Q3, D3, Q2, D2, Q1, D1)
1_introduction.ipynb
sergey-tomin/workshop
mit
Hint: to see a short description of a function, put the cursor inside its parentheses and press Shift-Tab, or type ? before the function name. To expand the dialog window, press +.

The cell is a list of simple objects which contain the physical information of the lattice elements, such as length, strength, voltage and so on. In order to create a transport map for every element and bind it to a lattice object, we have to create a new Ocelot object - MagneticLattice() - which does these things automatically. MagneticLattice(sequence, start=None, stop=None, method=MethodTM()):
* sequence - list of the elements; the other parameters will be considered in tutorial N2.
lat = MagneticLattice(cell)

# print the total length of the lattice
print("length of the cell: ", lat.totalLen, "m")
1_introduction.ipynb
sergey-tomin/workshop
mit
Optical function calculation

Uses:
* the twiss() function, and
* the Twiss() object, which contains the twiss parameters and other information at one particular position (s) of the lattice.

To calculate the twiss parameters you have to run the twiss(lattice, tws0=None, nPoints=None) function. If you want a periodic solution, leave tws0 at its default. You can change the number of points over the cell; if nPoints=None, the twiss parameters are calculated at the end of each element. The twiss() function returns a list of Twiss() objects. You will see that the Twiss object contains more information than just the twiss parameters.
tws = twiss(lat)

# to see twiss parameters at the beginning of the cell, uncomment the next line
# print(tws[0])

# to see twiss parameters at the end of the cell, uncomment the next line
print(tws[-1])

len(tws)

# plot optical functions
plot_opt_func(lat, tws, top_plot=["Dx", "Dy"], legend=False, font_size=10)
plt.show()

# you can also use standard matplotlib functions for plotting
#s = [tw.s for tw in tws]
#bx = [tw.beta_x for tw in tws]
#plt.plot(s, bx)
#plt.show()

# you can play with the quadrupole strength and try to make an achromat
Q4.k1 = 1.18

# to make an achromat, uncomment the next line
# Q4.k1 = 1.18543769836

# to use the matching function, please see ocelot/demos/ebeam/dba.py

# update the transfer maps after changing element parameters
lat.update_transfer_maps()

# recalculate the twiss parameters
tws = twiss(lat, nPoints=1000)
plot_opt_func(lat, tws, legend=False)
plt.show()
1_introduction.ipynb
sergey-tomin/workshop
mit
<h3>How many facilities have accurate records online?</h3> Those that have no offline records.
df[(df['offline'].isnull())].count()[0]
notebooks/analysis/.ipynb_checkpoints/facilities_analysis-checkpoint.ipynb
TheOregonian/long-term-care-db
mit
<h3>How many facilities have inaccurate records online?</h3> Those that have offline records.
df[(df['offline'].notnull())].count()[0]
notebooks/analysis/.ipynb_checkpoints/facilities_analysis-checkpoint.ipynb
TheOregonian/long-term-care-db
mit
<h3>How many facilities had more than double the number of complaints shown online?</h3>
df[(df['offline']>df['online']) & (df['online'].notnull())].count()[0]
notebooks/analysis/.ipynb_checkpoints/facilities_analysis-checkpoint.ipynb
TheOregonian/long-term-care-db
mit
<h3>How many facilities show zero complaints online but have complaints offline?</h3>
df[(df['online'].isnull()) & (df['offline'].notnull())].count()[0]
notebooks/analysis/.ipynb_checkpoints/facilities_analysis-checkpoint.ipynb
TheOregonian/long-term-care-db
mit
<h3>How many facilities have complaints and are accurate online?</h3>
df[(df['online'].notnull()) & (df['offline'].isnull())].count()[0]
notebooks/analysis/.ipynb_checkpoints/facilities_analysis-checkpoint.ipynb
TheOregonian/long-term-care-db
mit
<h3>How many facilities have complaints?</h3>
df[(df['online'].notnull()) | df['offline'].notnull()].count()[0]
notebooks/analysis/.ipynb_checkpoints/facilities_analysis-checkpoint.ipynb
TheOregonian/long-term-care-db
mit
<h3>What percent of facilities have accurate records online?</h3>
df[(df['offline'].isnull())].count()[0]/df.count()[0]*100
notebooks/analysis/.ipynb_checkpoints/facilities_analysis-checkpoint.ipynb
TheOregonian/long-term-care-db
mit
<h3>What is the total capacity of all facilities with inaccurate records?</h3>
df[df['offline'].notnull()].sum()['fac_capacity']
df[df['fac_capacity'].isnull()]
#df['fac_capacity'].sum()
notebooks/analysis/.ipynb_checkpoints/facilities_analysis-checkpoint.ipynb
TheOregonian/long-term-care-db
mit
Artifact Correction with SSP
import numpy as np import mne from mne.datasets import sample from mne.preprocessing import compute_proj_ecg, compute_proj_eog # getting some data ready data_path = sample.data_path() raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif' raw = mne.io.read_raw_fif(raw_fname, preload=True) raw.set_eeg_reference() raw.pick_types(meg=True, ecg=True, eog=True, stim=True)
0.14/_downloads/plot_artifacts_correction_ssp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Compute SSP projections
projs, events = compute_proj_ecg(raw, n_grad=1, n_mag=1, average=True) print(projs) ecg_projs = projs[-2:] mne.viz.plot_projs_topomap(ecg_projs) # Now for EOG projs, events = compute_proj_eog(raw, n_grad=1, n_mag=1, average=True) print(projs) eog_projs = projs[-2:] mne.viz.plot_projs_topomap(eog_projs)
0.14/_downloads/plot_artifacts_correction_ssp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Apply SSP projections

MNE handles projections at the level of the measurement info, so to register them you populate the list found in the 'projs' field:
raw.info['projs'] += eog_projs + ecg_projs
0.14/_downloads/plot_artifacts_correction_ssp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Yes, this was it. Now MNE will apply the projections on demand at any later stage, so watch out for the proj parameters in functions, or apply them explicitly with the .apply_proj method. Demonstrate SSP cleaning on some evoked data
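A minimal sketch of applying the projectors explicitly (assuming the raw object with the projectors registered above; apply_proj modifies the data in place, so a copy is used here):

```python
# apply the registered SSP projectors explicitly, on a copy of the data
raw_proj = raw.copy().apply_proj()
```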
events = mne.find_events(raw, stim_channel='STI 014') reject = dict(grad=4000e-13, mag=4e-12, eog=150e-6) # this can be highly data dependent event_id = {'auditory/left': 1} epochs_no_proj = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.5, proj=False, baseline=(None, 0), reject=reject) epochs_no_proj.average().plot(spatial_colors=True) epochs_proj = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.5, proj=True, baseline=(None, 0), reject=reject) epochs_proj.average().plot(spatial_colors=True)
0.14/_downloads/plot_artifacts_correction_ssp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Looks cool, right? However, it is often not clear how many components you should take, and unfortunately this can have bad consequences, as can be seen interactively using the delayed SSP mode:
evoked = mne.Epochs(raw, events, event_id, tmin=-0.2, tmax=0.5, proj='delayed', baseline=(None, 0), reject=reject).average() # set time instants in seconds (from 50 to 150ms in a step of 10ms) times = np.arange(0.05, 0.15, 0.01) evoked.plot_topomap(times, proj='interactive')
0.14/_downloads/plot_artifacts_correction_ssp.ipynb
mne-tools/mne-tools.github.io
bsd-3-clause
Show a bunch of 4s
fig, ax = plt.subplots(nrows=5, ncols=5, sharex=True, sharey=True,) ax = ax.flatten() for i in range(25): img = X_train[y_train == 4][i].reshape(28, 28) ax[i].imshow(img, cmap='Greys', interpolation='nearest') ax[0].set_xticks([]) ax[0].set_yticks([]) plt.tight_layout() plt.show()
python-ml-book/ch12/ch12.ipynb
krosaen/ml-study
mit
Classifying with tree based models Let's see how well some other models do before we get to the neural net.
from sklearn.tree import DecisionTreeClassifier from sklearn.ensemble import RandomForestClassifier tree10 = DecisionTreeClassifier(criterion='entropy', max_depth=10, random_state=0) tree100 = DecisionTreeClassifier(criterion='entropy', max_depth=100, random_state=0) rf10 = RandomForestClassifier(criterion='entropy', n_estimators=10, random_state=1) rf100 = RandomForestClassifier(criterion='entropy', n_estimators=100, random_state=1) labeled_models = [ ('decision tree depth 10', tree10), ('decision tree depth 100', tree100), ('random forest 10 estimators', rf10), ('random forest 100 estimators', rf100), ] import time import subprocess def say_done(label): subprocess.call("say 'done with {}'".format(label), shell=True) for label, model in labeled_models: before = time.time() model.fit(X_train, y_train) after = time.time() print("{} fit the dataset in {:.1f} seconds".format(label, after - before)) say_done(label) from sklearn.metrics import accuracy_score for label, model in labeled_models: print("{} training fit: {:.3f}".format(label, accuracy_score(y_train, model.predict(X_train)))) print("{} test accuracy: {:.3f}".format(label, accuracy_score(y_test, model.predict(X_test))))
python-ml-book/ch12/ch12.ipynb
krosaen/ml-study
mit
degree centrality for a node v is the fraction of nodes it is connected to
# the type of degree centrality is a dictionary
type(nx.degree_centrality(graph))

# get all the values of the dictionary (the centrality scores; wrapped in
# list() for Python 3), turn them into a numpy array, and take the mean
np.array(list(nx.degree_centrality(graph).values())).mean()
to_do/vax_temp/test.ipynb
gloriakang/vax-sentiment
mit
closeness centrality of a node u is the reciprocal of the sum of the shortest path distances from u to all n-1 other nodes. Since the sum of distances depends on the number of nodes in the graph, closeness is normalized by the sum of minimum possible distances n-1. Notice that higher values of closeness indicate higher centrality.
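As with degree centrality above, the per-node scores can also be averaged; a small sketch assuming the same graph, np, and nx objects used in the other cells:

```python
# mean closeness centrality over all nodes (list() for Python 3)
np.array(list(nx.closeness_centrality(graph).values())).mean()
```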
nx.closeness_centrality(graph)
to_do/vax_temp/test.ipynb
gloriakang/vax-sentiment
mit
betweenness centrality of a node v is the sum of the fraction of all-pairs shortest paths that pass through v
nx.betweenness_centrality(graph)

# mean betweenness centrality over all nodes (list() for Python 3)
np.array(list(nx.betweenness_centrality(graph).values())).mean()
to_do/vax_temp/test.ipynb
gloriakang/vax-sentiment
mit
degree assortativity coefficient Assortativity measures the similarity of connections in the graph with respect to the node degree.
nx.degree_assortativity_coefficient(graph)
to_do/vax_temp/test.ipynb
gloriakang/vax-sentiment
mit
Now we can load up two GeoDataFrames containing (multi)polygon geometries...
%matplotlib inline from shapely.geometry import Point from geopandas import datasets, GeoDataFrame, read_file from geopandas.tools import overlay # NYC Boros zippath = datasets.get_path('nybb') polydf = read_file(zippath) # Generate some circles b = [int(x) for x in polydf.total_bounds] N = 10 polydf2 = GeoDataFrame([ {'geometry': Point(x, y).buffer(10000), 'value1': x + y, 'value2': x - y} for x, y in zip(range(b[0], b[2], int((b[2] - b[0]) / N)), range(b[1], b[3], int((b[3] - b[1]) / N)))])
examples/overlays.ipynb
ozak/geopandas
bsd-3-clause
The first dataframe contains multipolygons of the NYC boros
polydf.plot()
examples/overlays.ipynb
ozak/geopandas
bsd-3-clause
And the second GeoDataFrame is a sequentially generated set of circles in the same geographic space. We'll plot these with a different color palette.
polydf2.plot(cmap='tab20b')
examples/overlays.ipynb
ozak/geopandas
bsd-3-clause
The geopandas.tools.overlay function takes three arguments: df1, df2, and how, where how can be one of: ['intersection', 'union', 'identity', 'symmetric_difference', 'difference'] So let's identify the areas (and attributes) where both dataframes intersect using the overlay tool.
from geopandas.tools import overlay newdf = overlay(polydf, polydf2, how="intersection") newdf.plot(cmap='tab20b')
examples/overlays.ipynb
ozak/geopandas
bsd-3-clause
And take a look at the attributes; we see that the attributes from both of the original GeoDataFrames are retained.
polydf.head() polydf2.head() newdf.head()
examples/overlays.ipynb
ozak/geopandas
bsd-3-clause
Now let's look at the other how operations:
newdf = overlay(polydf, polydf2, how="union") newdf.plot(cmap='tab20b') newdf = overlay(polydf, polydf2, how="identity") newdf.plot(cmap='tab20b') newdf = overlay(polydf, polydf2, how="symmetric_difference") newdf.plot(cmap='tab20b') newdf = overlay(polydf, polydf2, how="difference") newdf.plot(cmap='tab20b')
examples/overlays.ipynb
ozak/geopandas
bsd-3-clause
Prepare Dataset
import csv
import numpy as np
# obtain_data_matrix is a helper defined elsewhere in this project

with open('data_w1w4.csv', 'r') as f:
    reader = csv.reader(f)
    data = list(reader)

matrix = obtain_data_matrix(data)
samples = len(matrix)
print("Number of samples: " + str(samples))

Y = matrix[:, [8]]
X = matrix[:, [9]]
S = matrix[:, [11]]
Linear Regression.ipynb
tlkh/Generating-Inference-from-3D-Printing-Jobs
mit
Use the model (LinearRegression)
from sklearn import linear_model

# Create linear regression object
regr = linear_model.LinearRegression()

# Train the model using the training sets
regr.fit(X, Y)

# Make predictions using the testing set
Y_pred = regr.predict(X)
Linear Regression.ipynb
tlkh/Generating-Inference-from-3D-Printing-Jobs
mit
Plot the data
import matplotlib.pyplot as plt
from sklearn.metrics import mean_squared_error, r2_score

fig = plt.figure(1, figsize=(10, 4))
plt.scatter([X], [Y], color='blue', edgecolor='k')
plt.plot(X, Y_pred, color='red', linewidth=1)
plt.xticks(())
plt.yticks(())
print('Coefficients: ', regr.coef_)
plt.show()

# The mean squared error
print("Mean squared error: %.2f" % mean_squared_error(Y, Y_pred))
# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % r2_score(Y, Y_pred))
Linear Regression.ipynb
tlkh/Generating-Inference-from-3D-Printing-Jobs
mit
Bootstrap to find parameter confidence intervals
from sklearn.utils import resample bootstrap_resamples = 5000 intercepts = [] coefs = [] for k in range(bootstrap_resamples): #resample population with replacement samples_resampled = resample(X,Y,replace=True,n_samples=len(X)) ## Fit model to resampled data # Create linear regression object regr = linear_model.LinearRegression() # Train the model using the training sets regr.fit(samples_resampled[0], samples_resampled[1]) coefs.append(regr.coef_[0][0]) intercepts.append(regr.intercept_[0])
Linear Regression.ipynb
tlkh/Generating-Inference-from-3D-Printing-Jobs
mit
Calculate confidence interval
alpha = 0.95 p_lower = ((1-alpha)/2.0) * 100 p_upper = (alpha + ((1-alpha)/2.0)) * 100 coefs_lower = np.percentile(coefs,p_lower) coefs_upper = np.percentile(coefs,p_upper) intercepts_lower = np.percentile(intercepts,p_lower) intercepts_upper = np.percentile(intercepts,p_upper) print('Coefs %.0f%% CI = %.5f - %.5f' % (alpha*100,coefs_lower,coefs_upper)) print('Intercepts %.0f%% CI = %.5f - %.5f' % (alpha*100,intercepts_lower,intercepts_upper))
Linear Regression.ipynb
tlkh/Generating-Inference-from-3D-Printing-Jobs
mit
Visualize frequency distributions of bootstrapped parameters
plt.hist(coefs)
plt.xlabel('Coefficient X0')
plt.title('Frequency Distribution of Coefficient X0')
plt.show()

plt.hist(intercepts)
plt.xlabel('Intercept')
plt.title('Frequency Distribution of Intercepts')
plt.show()
Linear Regression.ipynb
tlkh/Generating-Inference-from-3D-Printing-Jobs
mit
Vertex AI: Vertex AI Migration: AutoML Image Classification <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/ai-platform-samples/blob/master/vertex-ai-samples/tree/master/notebooks/official/migration/UJ1%20Vertex%20SDK%20AutoML%20Image%20Classification.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://github.com/GoogleCloudPlatform/ai-platform-samples/blob/master/vertex-ai-samples/tree/master/notebooks/official/migration/UJ1%20Vertex%20SDK%20AutoML%20Image%20Classification.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> </table> <br/><br/><br/> Dataset The dataset used for this tutorial is the Flowers dataset from TensorFlow Datasets. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the type of flower an image is from a class of five flowers: daisy, dandelion, rose, sunflower, or tulip. Costs This tutorial uses billable components of Google Cloud: Vertex AI Cloud Storage Learn about Vertex AI pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage. Set up your local development environment If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step. Otherwise, make sure your environment meets this notebook's requirements. You need the following: The Cloud Storage SDK Git Python 3 virtualenv Jupyter notebook running in a virtual environment with Python 3 The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions: Install and initialize the SDK. Install Python 3. Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment. To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell. To launch Jupyter, run jupyter notebook on the command-line in a terminal shell. Open this notebook in the Jupyter Notebook Dashboard. Installation Install the latest version of Vertex SDK for Python.
import os # Google Cloud Notebook if os.path.exists("/opt/deeplearning/metadata/env_version"): USER_FLAG = "--user" else: USER_FLAG = "" ! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
notebooks/official/migration/UJ1 Vertex SDK AutoML Image Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Install the latest GA version of google-cloud-storage library as well.
! pip3 install -U google-cloud-storage $USER_FLAG if os.getenv("IS_TESTING"): ! pip3 install --upgrade tensorflow $USER_FLAG
notebooks/official/migration/UJ1 Vertex SDK AutoML Image Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Location of Cloud Storage training data. Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.
IMPORT_FILE = ( "gs://cloud-samples-data/vision/automl_classification/flowers/all_data_v2.csv" )
notebooks/official/migration/UJ1 Vertex SDK AutoML Image Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
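The cells that follow reference aip, REGION, and TIMESTAMP without defining them in this excerpt; a minimal sketch of the setup they assume (the project ID and region below are placeholders, not values from the notebook):

```python
from datetime import datetime

from google.cloud import aiplatform as aip

PROJECT_ID = "your-project-id"  # placeholder
REGION = "us-central1"          # placeholder
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")

# initialize the Vertex AI SDK for Python
aip.init(project=PROJECT_ID, location=REGION)
```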
Create a dataset datasets.create-dataset-api Create the Dataset Next, create the Dataset resource using the create method for the ImageDataset class, which takes the following parameters: display_name: The human readable name for the Dataset resource. gcs_source: A list of one or more dataset index files to import the data items into the Dataset resource. import_schema_uri: The data labeling schema for the data items. This operation may take several minutes.
dataset = aip.ImageDataset.create( display_name="Flowers" + "_" + TIMESTAMP, gcs_source=[IMPORT_FILE], import_schema_uri=aip.schema.dataset.ioformat.image.single_label_classification, ) print(dataset.resource_name)
notebooks/official/migration/UJ1 Vertex SDK AutoML Image Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example Output: INFO:google.cloud.aiplatform.datasets.dataset:Creating ImageDataset INFO:google.cloud.aiplatform.datasets.dataset:Create ImageDataset backing LRO: projects/759209241365/locations/us-central1/datasets/2940964905882222592/operations/1941426647739662336 INFO:google.cloud.aiplatform.datasets.dataset:ImageDataset created. Resource name: projects/759209241365/locations/us-central1/datasets/2940964905882222592 INFO:google.cloud.aiplatform.datasets.dataset:To use this ImageDataset in another session: INFO:google.cloud.aiplatform.datasets.dataset:ds = aiplatform.ImageDataset('projects/759209241365/locations/us-central1/datasets/2940964905882222592') INFO:google.cloud.aiplatform.datasets.dataset:Importing ImageDataset data: projects/759209241365/locations/us-central1/datasets/2940964905882222592 INFO:google.cloud.aiplatform.datasets.dataset:Import ImageDataset data backing LRO: projects/759209241365/locations/us-central1/datasets/2940964905882222592/operations/8100099138168815616 INFO:google.cloud.aiplatform.datasets.dataset:ImageDataset data imported. Resource name: projects/759209241365/locations/us-central1/datasets/2940964905882222592 projects/759209241365/locations/us-central1/datasets/2940964905882222592 Train a model training.automl-api Create and run training pipeline To train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline. Create training pipeline An AutoML training pipeline is created with the AutoMLImageTrainingJob class, with the following parameters: display_name: The human readable name for the TrainingJob resource. prediction_type: The type task to train the model for. classification: An image classification model. object_detection: An image object detection model. multi_label: If a classification task, whether single (False) or multi-labeled (True). model_type: The type of model for deployment. CLOUD: Deployment on Google Cloud CLOUD_HIGH_ACCURACY_1: Optimized for accuracy over latency for deployment on Google Cloud. CLOUD_LOW_LATENCY_: Optimized for latency over accuracy for deployment on Google Cloud. MOBILE_TF_VERSATILE_1: Deployment on an edge device. MOBILE_TF_HIGH_ACCURACY_1:Optimized for accuracy over latency for deployment on an edge device. MOBILE_TF_LOW_LATENCY_1: Optimized for latency over accuracy for deployment on an edge device. base_model: (optional) Transfer learning from existing Model resource -- supported for image classification only. The instantiated object is the DAG (directed acyclic graph) for the training job.
dag = aip.AutoMLImageTrainingJob( display_name="flowers_" + TIMESTAMP, prediction_type="classification", multi_label=False, model_type="CLOUD", base_model=None, ) print(dag)
notebooks/official/migration/UJ1 Vertex SDK AutoML Image Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: &lt;google.cloud.aiplatform.training_jobs.AutoMLImageTrainingJob object at 0x7f806a6116d0&gt; Run the training pipeline Next, you run the DAG to start the training job by invoking the method run, with the following parameters: dataset: The Dataset resource to train the model. model_display_name: The human readable name for the trained model. training_fraction_split: The percentage of the dataset to use for training. test_fraction_split: The percentage of the dataset to use for test (holdout data). validation_fraction_split: The percentage of the dataset to use for validation. budget_milli_node_hours: (optional) Maximum training time specified in unit of millihours (1000 = hour). disable_early_stopping: If True, training may be completed before using the entire budget if the service believes it cannot further improve on the model objective measurements. The run method, when completed, returns the Model resource. The execution of the training pipeline will take up to 20 minutes.
model = dag.run( dataset=dataset, model_display_name="flowers_" + TIMESTAMP, training_fraction_split=0.8, validation_fraction_split=0.1, test_fraction_split=0.1, budget_milli_node_hours=8000, disable_early_stopping=False, )
notebooks/official/migration/UJ1 Vertex SDK AutoML Image Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: INFO:google.cloud.aiplatform.training_jobs:View Training: https://console.cloud.google.com/ai/platform/locations/us-central1/training/2109316300865011712?project=759209241365 INFO:google.cloud.aiplatform.training_jobs:AutoMLImageTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/2109316300865011712 current state: PipelineState.PIPELINE_STATE_RUNNING INFO:google.cloud.aiplatform.training_jobs:AutoMLImageTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/2109316300865011712 current state: PipelineState.PIPELINE_STATE_RUNNING INFO:google.cloud.aiplatform.training_jobs:AutoMLImageTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/2109316300865011712 current state: PipelineState.PIPELINE_STATE_RUNNING INFO:google.cloud.aiplatform.training_jobs:AutoMLImageTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/2109316300865011712 current state: PipelineState.PIPELINE_STATE_RUNNING INFO:google.cloud.aiplatform.training_jobs:AutoMLImageTrainingJob projects/759209241365/locations/us-central1/trainingPipelines/2109316300865011712 current state: PipelineState.PIPELINE_STATE_RUNNING ... INFO:google.cloud.aiplatform.training_jobs:AutoMLImageTrainingJob run completed. Resource name: projects/759209241365/locations/us-central1/trainingPipelines/2109316300865011712 INFO:google.cloud.aiplatform.training_jobs:Model available at projects/759209241365/locations/us-central1/models/1284590221056278528 Evaluate the model projects.locations.models.evaluations.list Review model evaluation scores After your model has finished training, you can review the evaluation scores for it. First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable you created when you deployed the model or you can list all of the models in your project.
# Get model resource ID models = aip.Model.list(filter="display_name=flowers_" + TIMESTAMP) # Get a reference to the Model Service client client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"} model_service_client = aip.gapic.ModelServiceClient(client_options=client_options) model_evaluations = model_service_client.list_model_evaluations( parent=models[0].resource_name ) model_evaluation = list(model_evaluations)[0] print(model_evaluation)
notebooks/official/migration/UJ1 Vertex SDK AutoML Image Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: name: "projects/759209241365/locations/us-central1/models/623915674158235648/evaluations/4280507618583117824" metrics_schema_uri: "gs://google-cloud-aiplatform/schema/modelevaluation/classification_metrics_1.0.0.yaml" metrics { struct_value { fields { key: "auPrc" value { number_value: 0.9891107 } } fields { key: "confidenceMetrics" value { list_value { values { struct_value { fields { key: "precision" value { number_value: 0.2 } } fields { key: "recall" value { number_value: 1.0 } } } } Make batch predictions predictions.batch-prediction Get test item(s) Now do a batch prediction to your Vertex model. You will use arbitrary examples out of the dataset as a test items. Don't be concerned that the examples were likely used in training the model -- we just want to demonstrate how to make a prediction.
test_items = !gsutil cat $IMPORT_FILE | head -n2 if len(str(test_items[0]).split(",")) == 3: _, test_item_1, test_label_1 = str(test_items[0]).split(",") _, test_item_2, test_label_2 = str(test_items[1]).split(",") else: test_item_1, test_label_1 = str(test_items[0]).split(",") test_item_2, test_label_2 = str(test_items[1]).split(",") print(test_item_1, test_label_1) print(test_item_2, test_label_2)
notebooks/official/migration/UJ1 Vertex SDK AutoML Image Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
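The excerpt jumps from fetching the test items to waiting on batch_predict_job; a minimal sketch of the batch prediction request that the output below corresponds to (the bucket name and the construction of the JSONL input file are assumptions, not the notebook's exact code):

```python
# assumed: BUCKET_NAME is a Cloud Storage bucket you own, and a JSONL file
# listing the test items (one {"content": ..., "mimeType": ...} per line)
# has already been written to gcs_input_uri
gcs_input_uri = "gs://" + BUCKET_NAME + "/test.jsonl"

batch_predict_job = model.batch_predict(
    job_display_name="flowers_" + TIMESTAMP,
    gcs_source=gcs_input_uri,
    gcs_destination_prefix="gs://" + BUCKET_NAME + "/batch_output/",
    sync=False,  # set sync=True to block until the job completes
)
print(batch_predict_job)
```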
Example output: INFO:google.cloud.aiplatform.jobs:Creating BatchPredictionJob &lt;google.cloud.aiplatform.jobs.BatchPredictionJob object at 0x7f806a6112d0&gt; is waiting for upstream dependencies to complete. INFO:google.cloud.aiplatform.jobs:BatchPredictionJob created. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296 INFO:google.cloud.aiplatform.jobs:To use this BatchPredictionJob in another session: INFO:google.cloud.aiplatform.jobs:bpj = aiplatform.BatchPredictionJob('projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296') INFO:google.cloud.aiplatform.jobs:View Batch Prediction Job: https://console.cloud.google.com/ai/platform/locations/us-central1/batch-predictions/5110965452507447296?project=759209241365 INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/5110965452507447296 current state: JobState.JOB_STATE_RUNNING Wait for completion of batch prediction job Next, wait for the batch job to complete. Alternatively, one can set the parameter sync to True in the batch_predict() method to block until the batch prediction job is completed.
batch_predict_job.wait()
notebooks/official/migration/UJ1 Vertex SDK AutoML Image Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example Output: INFO:google.cloud.aiplatform.jobs:BatchPredictionJob created. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 INFO:google.cloud.aiplatform.jobs:To use this BatchPredictionJob in another session: INFO:google.cloud.aiplatform.jobs:bpj = aiplatform.BatchPredictionJob('projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328') INFO:google.cloud.aiplatform.jobs:View Batch Prediction Job: https://console.cloud.google.com/ai/platform/locations/us-central1/batch-predictions/181835033978339328?project=759209241365 INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state: JobState.JOB_STATE_RUNNING INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state: JobState.JOB_STATE_RUNNING INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state: JobState.JOB_STATE_RUNNING INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state: JobState.JOB_STATE_RUNNING INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state: JobState.JOB_STATE_RUNNING INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state: JobState.JOB_STATE_RUNNING INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state: JobState.JOB_STATE_RUNNING INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state: JobState.JOB_STATE_RUNNING INFO:google.cloud.aiplatform.jobs:BatchPredictionJob projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 current state: JobState.JOB_STATE_SUCCEEDED INFO:google.cloud.aiplatform.jobs:BatchPredictionJob run completed. Resource name: projects/759209241365/locations/us-central1/batchPredictionJobs/181835033978339328 Get the predictions Next, get the results from the completed batch prediction job. The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file contains one or more prediction requests in a JSON format: content: The prediction request. prediction: The prediction response. ids: The internal assigned unique identifiers for each prediction request. displayNames: The class names for each class label. confidences: The predicted confidence, between 0 and 1, per class label.
import json import tensorflow as tf bp_iter_outputs = batch_predict_job.iter_outputs() prediction_results = list() for blob in bp_iter_outputs: if blob.name.split("/")[-1].startswith("prediction"): prediction_results.append(blob.name) tags = list() for prediction_result in prediction_results: gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}" with tf.io.gfile.GFile(name=gfile_name, mode="r") as gfile: for line in gfile.readlines(): line = json.loads(line) print(line) break
notebooks/official/migration/UJ1 Vertex SDK AutoML Image Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example Output: {'instance': {'content': 'gs://andy-1234-221921aip-20210802180634/100080576_f52e8ee070_n.jpg', 'mimeType': 'image/jpeg'}, 'prediction': {'ids': ['3195476558944927744', '1636105187967893504', '7400712711002128384', '2789026692574740480', '5501319568158621696'], 'displayNames': ['daisy', 'dandelion', 'roses', 'sunflowers', 'tulips'], 'confidences': [0.99998736, 8.222247e-06, 3.6782617e-06, 5.3231275e-07, 2.6960555e-07]}} Make online predictions predictions.deploy-model-api Deploy the model Next, deploy your model for online prediction. To deploy the model, you invoke the deploy method.
endpoint = model.deploy()
notebooks/official/migration/UJ1 Vertex SDK AutoML Image Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: INFO:google.cloud.aiplatform.models:Creating Endpoint INFO:google.cloud.aiplatform.models:Create Endpoint backing LRO: projects/759209241365/locations/us-central1/endpoints/4867177336350441472/operations/4087251132693348352 INFO:google.cloud.aiplatform.models:Endpoint created. Resource name: projects/759209241365/locations/us-central1/endpoints/4867177336350441472 INFO:google.cloud.aiplatform.models:To use this Endpoint in another session: INFO:google.cloud.aiplatform.models:endpoint = aiplatform.Endpoint('projects/759209241365/locations/us-central1/endpoints/4867177336350441472') INFO:google.cloud.aiplatform.models:Deploying model to Endpoint : projects/759209241365/locations/us-central1/endpoints/4867177336350441472 INFO:google.cloud.aiplatform.models:Deploy Endpoint model backing LRO: projects/759209241365/locations/us-central1/endpoints/4867177336350441472/operations/1691336130932244480 INFO:google.cloud.aiplatform.models:Endpoint model deployed. Resource name: projects/759209241365/locations/us-central1/endpoints/4867177336350441472 predictions.online-prediction-automl Get test item You will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction.
test_item = !gsutil cat $IMPORT_FILE | head -n1 if len(str(test_item[0]).split(",")) == 3: _, test_item, test_label = str(test_item[0]).split(",") else: test_item, test_label = str(test_item[0]).split(",") print(test_item, test_label)
notebooks/official/migration/UJ1 Vertex SDK AutoML Image Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Make the prediction Now that your Model resource is deployed to an Endpoint resource, you can do online predictions by sending prediction requests to the Endpoint resource. Request Since in this example your test item is in a Cloud Storage bucket, you open and read the contents of the image using tf.io.gfile.Gfile(). To pass the test data to the prediction service, you encode the bytes into base64 -- which makes the content safe from modification while transmitting binary data over the network. The format of each instance is: { 'content': { 'b64': base64_encoded_bytes } } Since the predict() method can take multiple items (instances), send your single test item as a list of one test item. Response The response from the predict() call is a Python dictionary with the following entries: ids: The internal assigned unique identifiers for each prediction request. displayNames: The class names for each class label. confidences: The predicted confidence, between 0 and 1, per class label. deployed_model_id: The Vertex AI identifier for the deployed Model resource which did the predictions.
import base64 import tensorflow as tf with tf.io.gfile.GFile(test_item, "rb") as f: content = f.read() # The format of each instance should conform to the deployed model's prediction input schema. instances = [{"content": base64.b64encode(content).decode("utf-8")}] prediction = endpoint.predict(instances=instances) print(prediction)
notebooks/official/migration/UJ1 Vertex SDK AutoML Image Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Example output: Prediction(predictions=[{'ids': ['3195476558944927744', '5501319568158621696', '1636105187967893504', '2789026692574740480', '7400712711002128384'], 'displayNames': ['daisy', 'tulips', 'dandelion', 'sunflowers', 'roses'], 'confidences': [0.999987364, 2.69604527e-07, 8.2222e-06, 5.32310196e-07, 3.6782335e-06]}], deployed_model_id='5949545378826158080', explanations=None) Undeploy the model When you are done doing predictions, you undeploy the model from the Endpoint resource. This deprovisions all compute resources and ends billing for the deployed model.
endpoint.undeploy_all()
notebooks/official/migration/UJ1 Vertex SDK AutoML Image Classification.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Sunspots Data
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import statsmodels.api as sm

print(sm.datasets.sunspots.NOTE)

dta = sm.datasets.sunspots.load_pandas().data
dta.index = pd.Index(pd.date_range("1700", end="2009", freq="A-DEC"))
del dta["YEAR"]

dta.plot(figsize=(12,4));

fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(dta.values.squeeze(), lags=40, ax=ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(dta, lags=40, ax=ax2)

arma_mod20 = sm.tsa.statespace.SARIMAX(dta, order=(2,0,0), trend='c').fit(disp=False)
print(arma_mod20.params)
arma_mod30 = sm.tsa.statespace.SARIMAX(dta, order=(3,0,0), trend='c').fit(disp=False)
print(arma_mod20.aic, arma_mod20.bic, arma_mod20.hqic)
print(arma_mod30.params)
print(arma_mod30.aic, arma_mod30.bic, arma_mod30.hqic)
v0.12.1/examples/notebooks/generated/statespace_arma_0.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
Does our model obey the theory?
from scipy import stats
from statsmodels.graphics.api import qqplot

sm.stats.durbin_watson(arma_mod30.resid)

fig = plt.figure(figsize=(12,4))
ax = fig.add_subplot(111)
ax = plt.plot(arma_mod30.resid)

resid = arma_mod30.resid
stats.normaltest(resid)

fig = plt.figure(figsize=(12,4))
ax = fig.add_subplot(111)
fig = qqplot(resid, line='q', ax=ax, fit=True)

fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(resid, lags=40, ax=ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(resid, lags=40, ax=ax2)

r, q, p = sm.tsa.acf(resid, fft=True, qstat=True)
data = np.c_[range(1,41), r[1:], q, p]
table = pd.DataFrame(data, columns=['lag', "AC", "Q", "Prob(>Q)"])
print(table.set_index('lag'))
v0.12.1/examples/notebooks/generated/statespace_arma_0.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
This indicates a lack of fit. In-sample dynamic prediction. How good does our model do?
predict_sunspots = arma_mod30.predict(start='1990', end='2012', dynamic=True) fig, ax = plt.subplots(figsize=(12, 8)) dta.loc['1950':].plot(ax=ax) predict_sunspots.plot(ax=ax, style='r'); def mean_forecast_err(y, yhat): return y.sub(yhat).mean() mean_forecast_err(dta.SUNACTIVITY, predict_sunspots)
v0.12.1/examples/notebooks/generated/statespace_arma_0.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
Explore the Data The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc. Each batch contains the labels and images that are one of the following: * airplane * automobile * bird * cat * deer * dog * frog * horse * ship * truck Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for an image and label pair in the batch. Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
%matplotlib inline %config InlineBackend.figure_format = 'retina' import helper import numpy as np # Explore the dataset batch_id = 1 sample_id = 2 helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
image-classification/dlnd_image_classification.ipynb
yuanotes/deep-learning
mit
Implement Preprocess Functions Normalize In the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.
def normalize(x): """ Normalize a list of sample image data in the range of 0 to 1 : x: List of image data. The image shape is (32, 32, 3) : return: Numpy array of normalize data """ # TODO: Implement Function return x / 255.0 """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_normalize(normalize)
image-classification/dlnd_image_classification.ipynb
yuanotes/deep-learning
mit
One-hot encode Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function. Hint: Don't reinvent the wheel.
def one_hot_encode(x): """ One hot encode a list of sample labels. Return a one-hot encoded vector for each label. : x: List of sample Labels : return: Numpy array of one-hot encoded labels """ x = np.asarray(x) result = np.zeros((x.shape[0], 10)) result[np.arange(x.shape[0]), x] = 1 return result """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_one_hot_encode(one_hot_encode)
image-classification/dlnd_image_classification.ipynb
yuanotes/deep-learning
mit
Build the network For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project. Note: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup. However, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d. Let's begin! Input The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions * Implement neural_net_image_input * Return a TF Placeholder * Set the shape using image_shape with batch size set to None. * Name the TensorFlow placeholder "x" using the TensorFlow name parameter in the TF Placeholder. * Implement neural_net_label_input * Return a TF Placeholder * Set the shape using n_classes with batch size set to None. * Name the TensorFlow placeholder "y" using the TensorFlow name parameter in the TF Placeholder. * Implement neural_net_keep_prob_input * Return a TF Placeholder for dropout keep probability. * Name the TensorFlow placeholder "keep_prob" using the TensorFlow name parameter in the TF Placeholder. These names will be used at the end of the project to load your saved model. Note: None for shapes in TensorFlow allow for a dynamic size.
import tensorflow as tf

def neural_net_image_input(image_shape):
    """
    Return a Tensor for a batch of image input
    : image_shape: Shape of the images
    : return: Tensor for image input.
    """
    # TODO: Implement Function
    return tf.placeholder(tf.float32, shape=(None, ) + image_shape, name="x")


def neural_net_label_input(n_classes):
    """
    Return a Tensor for a batch of label input
    : n_classes: Number of classes
    : return: Tensor for label input.
    """
    # TODO: Implement Function
    # The labels are compared against float32 logits in the cross-entropy loss,
    # so the placeholder should be float32 rather than an integer type.
    return tf.placeholder(tf.float32, shape=(None, n_classes), name="y")


def neural_net_keep_prob_input():
    """
    Return a Tensor for keep probability
    : return: Tensor for keep probability.
    """
    # TODO: Implement Function
    return tf.placeholder(tf.float32, name="keep_prob")


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tf.reset_default_graph()
tests.test_nn_image_inputs(neural_net_image_input)
tests.test_nn_label_inputs(neural_net_label_input)
tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
image-classification/dlnd_image_classification.ipynb
yuanotes/deep-learning
mit
Convolution and Max Pooling Layer

Convolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:
* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.
* Apply a convolution to x_tensor using weight and conv_strides.
  * We recommend you use same padding, but you're welcome to use any padding.
* Add bias
* Add a nonlinear activation to the convolution.
* Apply Max Pooling using pool_ksize and pool_strides.
  * We recommend you use same padding, but you're welcome to use any padding.

Note: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides):
    """
    Apply convolution then max pooling to x_tensor
    :param x_tensor: TensorFlow Tensor
    :param conv_num_outputs: Number of outputs for the convolutional layer
    :param conv_ksize: kernel size 2-D Tuple for the convolutional layer
    :param conv_strides: Stride 2-D Tuple for convolution
    :param pool_ksize: kernel size 2-D Tuple for pool
    :param pool_strides: Stride 2-D Tuple for pool
    : return: A tensor that represents convolution and max pooling of x_tensor
    """
    # TODO: Implement Function
    input_depth = x_tensor.get_shape().as_list()[-1]
    conv_strides = (1,) + conv_strides + (1, )
    pool_ksize = (1,) + pool_ksize + (1, )
    pool_strides = (1,) + pool_strides + (1, )

    weights = tf.Variable(tf.random_normal(list(conv_ksize) + [input_depth, conv_num_outputs]))
    bias = tf.Variable(tf.zeros([conv_num_outputs]))

    x = tf.nn.conv2d(x_tensor, weights, conv_strides, 'SAME')
    x = tf.nn.bias_add(x, bias)
    x = tf.nn.relu(x)
    x = tf.nn.max_pool(x, pool_ksize, pool_strides, 'SAME')
    return x


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_con_pool(conv2d_maxpool)
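As an aside (not in the original notebook): with 'SAME' padding the spatial output size depends only on the stride, which makes shape sanity checks easy. The helper below is a hypothetical illustration:

import math

def same_padding_out(in_dim, stride):
    # TensorFlow's 'SAME' padding gives an output size of ceil(input / stride),
    # regardless of the kernel size
    return int(math.ceil(float(in_dim) / stride))

# a 32x32 CIFAR-10 image after a stride-2 convolution, then a stride-2 max pool
print(same_padding_out(32, 2))                       # 16
print(same_padding_out(same_padding_out(32, 2), 2))  # 8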
image-classification/dlnd_image_classification.ipynb
yuanotes/deep-learning
mit
Flatten Layer Implement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
from tensorflow.contrib.layers.python import layers

def flatten(x_tensor):
    """
    Flatten x_tensor to (Batch Size, Flattened Image Size)
    : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.
    : return: A tensor of size (Batch Size, Flattened Image Size).
    """
    return layers.flatten(x_tensor)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_flatten(flatten)
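For the "more of a challenge" route mentioned above, a minimal sketch of flatten without the contrib package might look like this (an added illustration, not the notebook's implementation; it assumes the non-batch dimensions are known at graph-construction time):

import numpy as np
import tensorflow as tf

def flatten_manual(x_tensor):
    # multiply the non-batch dimensions to get the flattened image size;
    # -1 lets TensorFlow infer the batch size at run time
    flat_size = int(np.prod(x_tensor.get_shape().as_list()[1:]))
    return tf.reshape(x_tensor, [-1, flat_size])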
image-classification/dlnd_image_classification.ipynb
yuanotes/deep-learning
mit
Fully-Connected Layer Implement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.
from tensorflow.contrib.layers.python import layers

def fully_conn(x_tensor, num_outputs):
    """
    Apply a fully connected layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    # TODO: Implement Function
    x = layers.fully_connected(x_tensor, num_outputs)
    return x


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_fully_conn(fully_conn)
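Likewise, a hedged sketch of a fully connected layer built only from low-level TensorFlow ops (added for illustration; the function name and the stddev=0.05 initialization are arbitrary choices, not the notebook's):

import tensorflow as tf

def fully_conn_manual(x_tensor, num_outputs):
    # the weight matrix maps the input width to num_outputs, followed by a ReLU
    in_size = x_tensor.get_shape().as_list()[1]
    weights = tf.Variable(tf.truncated_normal([in_size, num_outputs], stddev=0.05))
    bias = tf.Variable(tf.zeros([num_outputs]))
    return tf.nn.relu(tf.add(tf.matmul(x_tensor, weights), bias))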
image-classification/dlnd_image_classification.ipynb
yuanotes/deep-learning
mit
Output Layer Implement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages. Note: Activation, softmax, or cross entropy should not be applied to this.
def output(x_tensor, num_outputs):
    """
    Apply an output layer to x_tensor using weight and bias
    : x_tensor: A 2-D tensor where the first dimension is batch size.
    : num_outputs: The number of output that the new tensor should be.
    : return: A 2-D tensor where the second dimension is num_outputs.
    """
    # TODO: Implement Function
    return layers.fully_connected(x_tensor, num_outputs, activation_fn=None)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_output(output)
image-classification/dlnd_image_classification.ipynb
yuanotes/deep-learning
mit
Create Convolutional Model

Implement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:
* Apply 1, 2, or 3 Convolution and Max Pool layers
* Apply a Flatten Layer
* Apply 1, 2, or 3 Fully Connected Layers
* Apply an Output Layer
* Return the output
* Apply TensorFlow's Dropout to one or more layers in the model using keep_prob.
def conv_net(x, keep_prob):
    """
    Create a convolutional neural network model
    : x: Placeholder tensor that holds image data.
    : keep_prob: Placeholder tensor that holds dropout keep probability.
    : return: Tensor that represents logits
    """
    # TODO: Apply 1, 2, or 3 Convolution and Max Pool layers
    #    Play around with different number of outputs, kernel size and stride
    # Function Definition from Above:
    #    conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides)
    conv_ksize = (2, 2)
    conv_strides = (2, 2)
    pool_ksize = (2, 2)
    pool_strides = (2, 2)
    conv_output = 32
    x = conv2d_maxpool(x, conv_output, conv_ksize, conv_strides, pool_ksize, pool_strides)
    # x = conv2d_maxpool(x, conv_output, conv_ksize, conv_strides, pool_ksize, pool_strides)
    # x = conv2d_maxpool(x, conv_output, conv_ksize, conv_strides, pool_ksize, pool_strides)

    # TODO: Apply a Flatten Layer
    # Function Definition from Above:
    #   flatten(x_tensor)
    x = flatten(x)

    # TODO: Apply 1, 2, or 3 Fully Connected Layers
    #    Play around with different number of outputs
    # Function Definition from Above:
    #   fully_conn(x_tensor, num_outputs)
    x = fully_conn(x, 4096)
    # x = tf.nn.relu(x)
    x = tf.nn.dropout(x, keep_prob)

    # TODO: Apply an Output Layer
    #    Set this to the number of classes
    # Function Definition from Above:
    #   output(x_tensor, num_outputs)
    num_outputs = 10
    x = output(x, num_outputs)
    return x


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""

##############################
## Build the Neural Network ##
##############################

# Remove previous weights, bias, inputs, etc..
tf.reset_default_graph()

# Inputs
x = neural_net_image_input((32, 32, 3))
y = neural_net_label_input(10)
keep_prob = neural_net_keep_prob_input()

# Model
logits = conv_net(x, keep_prob)

# Name logits Tensor, so that it can be loaded from disk after training
logits = tf.identity(logits, name='logits')

# Loss and Optimizer
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))
optimizer = tf.train.AdamOptimizer().minimize(cost)

# Accuracy
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')

tests.test_conv_net(conv_net)
image-classification/dlnd_image_classification.ipynb
yuanotes/deep-learning
mit
Train the Neural Network

Single Optimization

Implement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:
* x for image input
* y for labels
* keep_prob for keep probability for dropout

This function will be called for each batch, so tf.global_variables_initializer() has already been called.

Note: Nothing needs to be returned. This function is only optimizing the neural network.
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):
    """
    Optimize the session on a batch of images and labels
    : session: Current TensorFlow session
    : optimizer: TensorFlow optimizer function
    : keep_probability: keep probability
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    """
    # TODO: Implement Function
    session.run(optimizer, feed_dict={x: feature_batch,
                                      y: label_batch,
                                      keep_prob: keep_probability})


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_train_nn(train_neural_network)
image-classification/dlnd_image_classification.ipynb
yuanotes/deep-learning
mit
Show Stats Implement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.
def print_stats(session, feature_batch, label_batch, cost, accuracy):
    """
    Print information about loss and validation accuracy
    : session: Current TensorFlow session
    : feature_batch: Batch of Numpy image data
    : label_batch: Batch of Numpy label data
    : cost: TensorFlow cost function
    : accuracy: TensorFlow accuracy function
    """
    # loss is computed on the training batch; accuracy on the validation set
    cost_val = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})
    accuracy_val = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})
    print('Cost: %f, Validation Accuracy: %.2f%%' % (cost_val, accuracy_val * 100))
image-classification/dlnd_image_classification.ipynb
yuanotes/deep-learning
mit
Hyperparameters

Tune the following parameters:
* Set epochs to the number of iterations until the network stops learning or starts overfitting
* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:
  * 64
  * 128
  * 256
  * ...
* Set keep_probability to the probability of keeping a node using dropout
# TODO: Tune Parameters
epochs = 10
batch_size = 256
keep_probability = 0.5
image-classification/dlnd_image_classification.ipynb
yuanotes/deep-learning
mit
Checkpoint

The model has been saved to disk.

Test Model

Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
""" DON'T MODIFY ANYTHING IN THIS CELL """ %matplotlib inline %config InlineBackend.figure_format = 'retina' import tensorflow as tf import pickle import helper import random # Set batch size if not already set try: if batch_size: pass except NameError: batch_size = 64 save_model_path = './image_classification' n_samples = 4 top_n_predictions = 3 def test_model(): """ Test the saved model against the test dataset """ test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb')) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load model loader = tf.train.import_meta_graph(save_model_path + '.meta') loader.restore(sess, save_model_path) # Get Tensors from loaded model loaded_x = loaded_graph.get_tensor_by_name('x:0') loaded_y = loaded_graph.get_tensor_by_name('y:0') loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') loaded_logits = loaded_graph.get_tensor_by_name('logits:0') loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0') # Get accuracy in batches for memory limitations test_batch_acc_total = 0 test_batch_count = 0 for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size): test_batch_acc_total += sess.run( loaded_acc, feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0}) test_batch_count += 1 print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count)) # Print Random Samples random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples))) random_test_predictions = sess.run( tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions), feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0}) helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions) test_model()
image-classification/dlnd_image_classification.ipynb
yuanotes/deep-learning
mit
Matrix Factorization without Side Information (BPMF) As a first example we can run SMURFF without side information. The method used here is BPMF. The input matrix Y is a sparse scipy matrix (either coo_matrix, csr_matrix or csc_matrix). The test matrix Ytest also needs to be a sparse matrix of the same size as Y. Here we have used a burn-in of 20 samples for the Gibbs sampler and then collected 80 samples from the model. We use 16 latent dimensions in the model. For good results you will need to run more sampling and burn-in iterations (>= 1000) and maybe more latent dimensions. We create a trainSession, and the run method returns the predictions of the Ytest matrix. predictions is a list of type Prediction.
trainSession = smurff.BPMFSession(
    Ytrain     = ic50_train,
    Ytest      = ic50_test,
    num_latent = 16,
    burnin     = 20,
    nsamples   = 80,
    verbose    = 0,)

predictions = trainSession.run()
print("First prediction element: ", predictions[0])

rmse = smurff.calc_rmse(predictions)
print("RMSE =", rmse)
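The ic50_train and ic50_test matrices are prepared earlier in the notebook. As a toy illustration of the kind of sparse scipy matrix SMURFF expects (made-up values, not ChEMBL data):

import numpy as np
import scipy.sparse

# five observed (row, column) activity values in a 4 x 3 matrix;
# unobserved cells are simply absent from the triplet lists
rows = np.array([0, 0, 1, 2, 3])
cols = np.array([0, 2, 1, 2, 0])
vals = np.array([5.2, 6.1, 4.8, 7.0, 5.5])
Y_toy = scipy.sparse.coo_matrix((vals, (rows, cols)), shape=(4, 3))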
docs/notebooks/different_methods.ipynb
ExaScience/smurff
mit
Matrix Factorization with Side Information (Macau) If we want to use the compound features we can use the Macau algorithm. The parameter side_info = [ecfp, None] sets the side information for rows and columns, respectively. In this example we only use side information for the compounds (rows of the matrix). Since the ecfp side information is sparse and large, we use the CG solver from Macau to reduce the memory footprint and speed up the computation.
predictions = smurff.MacauSession(
    Ytrain     = ic50_train,
    Ytest      = ic50_test,
    side_info  = [ecfp, None],
    direct     = False,  # use CG solver instead of Cholesky decomposition
    num_latent = 16,
    burnin     = 40,
    nsamples   = 100).run()

smurff.calc_rmse(predictions)
docs/notebooks/different_methods.ipynb
ExaScience/smurff
mit
Macau univariate sampler SMURFF also includes an option to use a very fast univariate sampler, i.e., instead of sampling blocks of variables jointly it samples each variable individually. An example:
predictions = smurff.MacauSession(
    Ytrain     = ic50_train,
    Ytest      = ic50_test,
    side_info  = [ecfp, None],
    direct     = True,
    univariate = True,
    num_latent = 32,
    burnin     = 500,
    nsamples   = 3500,
    verbose    = 0,).run()

smurff.calc_rmse(predictions)
docs/notebooks/different_methods.ipynb
ExaScience/smurff
mit
CSV to List
# The rb flag opens file for reading
with open('data/fileops/vehicles.csv', 'rb') as csv_file:
    rdr = csv.reader(csv_file, delimiter=',', quotechar='"')
    for row in rdr:
        print '\t'.join(row)
.ipynb_checkpoints/python-data-files-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
Dictionary to CSV
# Dictionary data structures can be used to represent rows
game1_scores = {'Game':'Quarter', 'Team A': 45, 'Team B': 90}
game2_scores = {'Game':'Semi', 'Team A': 80, 'Team B': 32}
game3_scores = {'Game':'Final', 'Team A': 70, 'Team B': 68}

headers = ['Game', 'Team A', 'Team B']

# Create CSV from dictionaries
with open('data/fileops/game-scores.csv', 'wb') as df:
    dict_wtr = csv.DictWriter(df, fieldnames=headers)
    dict_wtr.writeheader()
    dict_wtr.writerow(game1_scores)
    dict_wtr.writerow(game2_scores)
    dict_wtr.writerow(game3_scores)

print(check_output(["ls", "data/fileops"]).decode("utf8"))
.ipynb_checkpoints/python-data-files-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
CSV to Dictionary
# Read CSV into dictionary data structure
with open('data/fileops/game-scores.csv', 'rb') as df:
    dict_rdr = csv.DictReader(df)
    for row in dict_rdr:
        print('\t'.join([row['Game'], row['Team A'], row['Team B']]))

print('\t'.join(row.keys()))
.ipynb_checkpoints/python-data-files-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
Pandas for CSV file operations Pandas' goal is to become the most powerful and flexible open source data analysis / manipulation tool available in any language. Pandas includes file operation capabilities for CSV, among other formats. CSV operations in Pandas are much faster than in native Python. DataFrame to CSV
import pandas as pd

# Create a DataFrame
df = pd.DataFrame({
    'Name' : ['Josh', 'Eli', 'Ram', 'Bil'],
    'Sales' : [34.32, 12.1, 4.77, 31.63],
    'Region' : ['North', 'South', 'West', 'East'],
    'Product' : ['PC', 'Phone', 'SW', 'Cloud']})

df

# DataFrame to CSV
df.to_csv('data/fileops/sales.csv', index=False)

print(check_output(["ls", "data/fileops"]).decode("utf8"))
.ipynb_checkpoints/python-data-files-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
CSV to DataFrame
# CSV to DataFrame
df2 = pd.read_csv('data/fileops/sales.csv')
df2
.ipynb_checkpoints/python-data-files-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
DataFrame to Excel
# DataFrame to XLSX Excel file
df.to_excel('data/fileops/sales.xlsx', index=False)

print(check_output(["ls", "data/fileops"]).decode("utf8"))
.ipynb_checkpoints/python-data-files-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
Excel to DataFrame
# Excel to DataFrame
df3 = pd.read_excel('data/fileops/sales.xlsx')
df3
.ipynb_checkpoints/python-data-files-checkpoint.ipynb
Startupsci/data-science-notebooks
mit
<a id=weo></a> WEO data on government debt We use the IMF's data on government debt again, specifically its World Economic Outlook database, commonly referred to as the WEO. We focus on government debt expressed as a percentage of GDP, variable code GGXWDG_NGDP. The central question here is how the debt of Argentina, which defaulted in 2001, compared to other countries. Was it a matter of too much debt or something else? Load data First step: load the data and extract a single variable: government debt (code GGXWDG_NGDP) expressed as a percentage of GDP.
url1 = "http://www.imf.org/external/pubs/ft/weo/2016/02/weodata/"
url2 = "WEOOct2016all.xls"
url = url1 + url2

weo = pd.read_csv(url,
                  sep='\t',
                  usecols=[1,2] + list(range(19,46)),
                  thousands=',',
                  na_values=['n/a', '--'])

print('Variable dtypes:\n', weo.dtypes.head(6), sep='')
Code/notebooks/bootcamp_pandas-summarize.ipynb
NYUDataBootcamp/Materials
mit
Clean and shape Second step: select the variable we want and generate the two dataframes.
# select debt variable
variables = ['GGXWDG_NGDP']
db = weo[weo['WEO Subject Code'].isin(variables)]

# drop variable code column (they're all the same)
db = db.drop('WEO Subject Code', axis=1)

# set index to country code
db = db.set_index('ISO')

# name columns
db.columns.name = 'Year'

# transpose
dbt = db.T

# see what we have
dbt.head()
Code/notebooks/bootcamp_pandas-summarize.ipynb
NYUDataBootcamp/Materials
mit
Example. Let's try a simple graph of the dataframe dbt. The goal is to put Argentina in perspective by plotting it along with many other countries.
fig, ax = plt.subplots()
dbt.plot(ax=ax,
         legend=False, color='blue', alpha=0.3,
         ylim=(0,150))
ax.set_ylabel('Percent of GDP')
ax.set_xlabel('')
ax.set_title('Government debt', fontsize=14, loc='left')
dbt['ARG'].plot(ax=ax, color='black', linewidth=1.5)
Code/notebooks/bootcamp_pandas-summarize.ipynb
NYUDataBootcamp/Materials
mit
Exercise. What do you take away from this graph? What would you change to make it look better? To make it more informative? To put Argentina's debt in context? Exercise. Do the same graph with Greece (GRC) as the country of interest. How does it differ? Why do you think that is? <a id=describe></a> Describing numerical data Let's step back a minute. What we're trying to do is compare Argentina to other countries. What's the best way to do that? This isn't a question with an obvious best answer, but we can try some things, see how they look. One thing we could do is compare Argentina to the mean or median. Or to some other feature of the distribution. We work up to this by looking first at some features of the distribution of government debt numbers across countries. Some of this we've seen, some is new. What's (not) there? Let's check out the data first. How many non-missing values do we have at each date? We can do that with the count method. The argument axis=1 says to do this by date, counting across columns (axis number 1).
dbt.shape

# count non-missing values
dbt.count(axis=1).plot()
Code/notebooks/bootcamp_pandas-summarize.ipynb
NYUDataBootcamp/Materials
mit
Describing series Let's take the data for 2001 -- the year of Argentina's default -- and see how Argentina compares. Was its debt high compared to other countries? Which leads to more questions. How would we compare? Compare Argentina to the mean or median? Something else? Let's see how that works.
# 2001 data
db01 = db['2001']

db01['ARG']

db01.mean()

db01.median()

db01.describe()

db01.quantile(q=[0.25, 0.5, 0.75])
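One concrete way to answer the comparison question (an added illustration, not part of the original) is Argentina's percentile rank in the 2001 cross section:

# percentile rank of Argentina among all countries with data in 2001
# (0 = lowest debt ratio, 1 = highest)
db01.rank(pct=True)['ARG']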
Code/notebooks/bootcamp_pandas-summarize.ipynb
NYUDataBootcamp/Materials
mit
Comment. If we add enough quantiles, we might as well plot the whole distribution. The easiest way to do this is with a histogram.
fig, ax = plt.subplots()
db01.hist(bins=15, ax=ax, alpha=0.35)
ax.set_xlabel('Government Debt (Percent of GDP)')
ax.set_ylabel('Number of Countries')

ymin, ymax = ax.get_ylim()
ax.vlines(db01['ARG'], ymin, ymax, color='blue', lw=2)
Code/notebooks/bootcamp_pandas-summarize.ipynb
NYUDataBootcamp/Materials
mit
Comment. Compared to the whole sample of countries in 2001, it doesn't seem that Argentina had particularly high debt. Describing dataframes We can compute the same statistics for dataframes. Here we have a choice: we can compute (say) the mean down rows (axis=0) or across columns (axis=1). If we use the dataframe dbt, computing the mean across countries (columns) calls for axis=1.
# here we compute the mean across countries at every date
dbt.mean(axis=1).head()

# or we could do the median
dbt.median(axis=1).head()

# or a bunch of stats at once
# NB: db not dbt (there's no axis argument here)
db.describe()

# the other way
dbt.describe()
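To make the axis choice concrete, here is the same calculation in both directions (an added illustration, not in the original):

# axis=0: average over time for each country (down the rows)
dbt.mean(axis=0).head()

# axis=1: average across countries at each date (across the columns)
dbt.mean(axis=1).head()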
Code/notebooks/bootcamp_pandas-summarize.ipynb
NYUDataBootcamp/Materials
mit
Example. Let's add the mean to our graph. We make it a dashed line with linestyle='dashed'.
fig, ax = plt.subplots()
dbt.plot(ax=ax,
         legend=False, color='blue', alpha=0.2,
         ylim=(0,200))
dbt['ARG'].plot(ax=ax, color='black', linewidth=1.5)
ax.set_ylabel('Percent of GDP')
ax.set_xlabel('')
ax.set_title('Government debt', fontsize=14, loc='left')
dbt.mean(axis=1).plot(ax=ax, color='black', linewidth=2, linestyle='dashed')
Code/notebooks/bootcamp_pandas-summarize.ipynb
NYUDataBootcamp/Materials
mit
Question. Do you think this looks better when the mean varies with time, or when we use a constant mean? Let's try it and see.
dbar = dbt.mean().mean()
dbar

fig, ax = plt.subplots()
dbt.plot(ax=ax,
         legend=False, color='blue', alpha=0.3,
         ylim=(0,150))
dbt['ARG'].plot(ax=ax, color='black', linewidth=1.5)
ax.set_ylabel('Percent of GDP')
ax.set_xlabel('')
ax.set_title('Government debt', fontsize=14, loc='left')
xmin, xmax = ax.get_xlim()
ax.hlines(dbar, xmin, xmax, linewidth=2, linestyle='dashed')
Code/notebooks/bootcamp_pandas-summarize.ipynb
NYUDataBootcamp/Materials
mit
Exercise. Which do we like better? Exercise. Replace the (constant) mean with the (constant) median? Which do you prefer? <a id=value-counts></a> Describing categorical data A categorical variable is one that takes on a small number of values. States take on one of fifty values. University students are either grad or undergrad. Students select majors and concentrations. We're going to do two things with categorical data: In this section, we count the number of observations in each category using the value_counts method. This is a series method, we apply it to one series/variable at a time. In the next section, we go on to describe how other variables differ across catagories. How do students who major in finance differ from those who major in English? And so on. We start with the combined MovieLens data we constructed in the previous notebook.
url = 'http://pages.stern.nyu.edu/~dbackus/Data/mlcombined.csv'
ml = pd.read_csv(url, index_col=0, encoding="ISO-8859-1")
print('Dimensions:', ml.shape)

# fix up the dates
ml["timestamp"] = pd.to_datetime(ml["timestamp"], unit="s")
ml.head(10)

# which movies have the most ratings?
ml['title'].value_counts().head(10)

ml['title'].value_counts().head(10).plot.barh(alpha=0.5)

# which people have rated the most movies?
ml['userId'].value_counts().head(10)
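A small addition (not in the original): value_counts can also report shares of the total rather than raw counts by passing normalize=True:

# same tally of the most-rated movies, expressed as a share of all ratings
ml['title'].value_counts(normalize=True).head(10)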
Code/notebooks/bootcamp_pandas-summarize.ipynb
NYUDataBootcamp/Materials
mit