markdown | code | path | repo_name | license
---|---|---|---|---|
As you can see, the usos_suelo variable already contains all of our variables of interest; what we need now is, for each row of our GeoDataFrame, to know which polygons are its neighbors.
For this we will use the PySal library, which provides a set of spatial analysis methods. In particular, we are interested in its functionality for building spatial weights matrices.
PySal is built to work together with GeoPandas, so we can request the weights matrix directly from the GeoDataFrame and examine the object it returns: | import pysal
w = pysal.weights.Queen.from_dataframe(usos_suelo)
print(w.n)
print(w.weights[0])
print(w.neighbors[0])
print(w.neighbors[5])
print(w.histogram) | vecindades_python/vecindades_2.ipynb | CentroGeo/geoinformatica | gpl-3.0 |
The first thing we did was import the PySal library. We then computed the weights matrix w using Queen-type contiguity (the PySal documentation describes the different neighborhood types and the data sources you can use).
w.n tells us the number of rows in the matrix
w.weights[0] gives the weights corresponding to the neighbors of element 0
w.neighbors[0] gives the list of neighbors of element 0
w.histogram gives the histogram of the adjacency matrix, that is, how many elements have x neighbors (a quick check, including polygons with no neighbors, is sketched below)
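As a quick sanity check of the histogram (and a preview of a cleanup step we will need later), PySal also exposes the observations that have no neighbors at all. A minimal sketch, assuming `w` is the Queen weights object built above:

```
print(w.islands)        # ids of polygons with no neighbors at all
print(w.histogram[:3])  # first few (number-of-neighbors, count) pairs
```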
As a quick exercise we will plot this histogram, only this time, instead of using matplotlib directly, we will use seaborn, a library for plotting statistical data. Besides producing nicer plots than matplotlib with little effort, seaborn has a syntax similar to R's ggplot2.
First we convert the histogram that PySal gives us into a DataFrame: | freqs = pd.DataFrame(w.histogram, columns=['vecinos', 'cuenta'])
freqs.head() | vecindades_python/vecindades_2.ipynb | CentroGeo/geoinformatica | gpl-3.0 |
And then we plot it: | %matplotlib inline
import seaborn as sns
sns.barplot(x='vecinos', y='cuenta', data=freqs) | vecindades_python/vecindades_2.ipynb | CentroGeo/geoinformatica | gpl-3.0 |
Intensity
After this interlude, we can finally do our first neighborhood computation. We will start with intensity.
Intensity is simply the number of activities in a given area. In our case, we will compute the total number of activities (of any kind) in the immediate neighborhood of each AGEB (if you think about it, this is quite similar to blur filters in image processing).
To compute the intensity, we need to iterate over the elements of the GeoDataFrame and, for each element, get its list of neighbors, take their variables, and add them up.
Before computing, we will drop the element that has no neighbors, re-index the data, and recompute the weights (so that the indices of the weights matrix and of the DataFrame match): | usos_suelo = usos_suelo.drop(usos_suelo.index[[1224]])
usos_suelo.reset_index(drop=True, inplace=True)
w = pysal.weights.Queen.from_dataframe(usos_suelo) | vecindades_python/vecindades_2.ipynb | CentroGeo/geoinformatica | gpl-3.0 |
Now we iterate over the list of neighbors and compute the intensity for each element: | usos_suelo.iloc[[0]][['clase_comercio', 'clase_ocio', 'clase_oficinas']].as_matrix()
import numpy as np
intensidad =[]
for i in range(0, w.n):
vecinos = w.neighbors[i]
total = 0.0
suma = np.zeros((3),dtype=np.float)
valores = usos_suelo.iloc[[i]][['clase_comercio', 'clase_ocio', 'clase_oficinas']].as_matrix()
for j in range(0,len(vecinos)):
        data = usos_suelo.iloc[[vecinos[j]]][['clase_comercio', 'clase_ocio', 'clase_oficinas']].as_matrix()  # values of the j-th neighbor (indexed by its id, not by the loop counter)
suma = suma + data
total += sum(data)
intensidad.append((i, sum(total)))
print(intensidad[0:10]) | vecindades_python/vecindades_2.ipynb | CentroGeo/geoinformatica | gpl-3.0 |
It may look like what we are doing is very complicated; however, once again, if we look at it carefully it is relatively simple:
First we define an empty list, intensidad, that we will use to store the results
Then, in the outer for loop, we traverse the adjacency matrix, so the loop index is the identifier of each polygon
We initialize a 3-entry array of zeros (this will hold the running sum for each land use)
With iloc we take the corresponding row of the DataFrame, and as_matrix() converts the column values into an array
In the inner for loop we traverse the neighbors of each element and take their values as an array
We add the arrays entry by entry (this is not really necessary here, but it will be useful later when we do a more complex computation)
After both loops finish, we append to the intensidad list a tuple with the index and the intensity value (a more compact, vectorized alternative using PySal's spatial lag is sketched below)
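As an aside, the same neighborhood sum can be written much more compactly with PySal's spatial lag, which for binary (non-row-standardized) Queen weights is exactly the sum of each polygon's neighbors' values. A minimal sketch, assuming the re-indexed usos_suelo and the recomputed w from above:

```
# spatial lag = sum over each polygon's neighbors of the neighbors' values
total_actividades = (usos_suelo['clase_comercio']
                     + usos_suelo['clase_ocio']
                     + usos_suelo['clase_oficinas']).values
intensidad_lag = pysal.lag_spatial(w, total_actividades)
print(intensidad_lag[:10])
```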
We can then turn the intensidad list into a DataFrame and join it with our data: | intensidad_df = pd.DataFrame(intensidad, columns=['gid', 'intensidad'])
datos_intensidad = usos_suelo.merge(intensidad_df, left_index=True, right_on='gid', how='inner')
datos_intensidad.head() | vecindades_python/vecindades_2.ipynb | CentroGeo/geoinformatica | gpl-3.0 |
Exercise
Make a map that highlights the differences in intensity.
Entropy
Entropy is a measure of the mix of land uses; it is based on the way entropy is computed in statistical mechanics:
$$ E = \sum_{j}\frac{p_{j}\,\ln(p_{j})}{\ln(J)} $$
where $p_{j}$ is the proportion of the $j$-th land use relative to the total and $J$ is the number of land uses considered. Values close to 0 indicate little mix of land uses, and values close to -1 indicate a balanced mix.
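A quick numerical check of the formula helps build intuition: with $J = 3$ land uses, a perfectly balanced mix ($p_j = 1/3$ for every use) gives $E = -1$, while a single dominant use gives $E = 0$ (taking $0 \cdot \ln 0 = 0$). A minimal sketch of the balanced case:

```
import numpy as np

p = np.array([1.0/3, 1.0/3, 1.0/3])        # perfectly balanced mix of J = 3 uses
print(np.sum(p * np.log(p)) / np.log(3))   # -> -1.0
```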
To compute the entropy, then, it is enough to slightly modify the for loop we used to compute the intensity: | entropia =[]
for i in range(0, w.n):
vecinos = w.neighbors[i]
total = 0.0
suma = np.zeros((3),dtype=np.float)
valores = usos_suelo.iloc[[i]][['clase_comercio', 'clase_ocio', 'clase_oficinas']].as_matrix()
for j in range(0,len(vecinos)):
        data = usos_suelo.iloc[[vecinos[j]]][['clase_comercio', 'clase_ocio', 'clase_oficinas']].as_matrix()  # values of the j-th neighbor
suma = suma + data
total += np.sum(data)
p = np.nan_to_num(suma/total)
lp = np.select([p == 0,p > 0],[p, np.log(p)])
    entropia.append((i, np.sum(p*lp) / np.log(3)))  # divide by ln(J) (J = 3 land uses) to match the formula above
print(entropia[0:10]) | vecindades_python/vecindades_2.ipynb | CentroGeo/geoinformatica | gpl-3.0 |
We'll be running the provided LeNet example (make sure you've downloaded the data and created the databases, as below). | # Download and prepare data
!data/mnist/get_mnist.sh
!examples/mnist/create_mnist.sh | examples/01-learning-lenet.ipynb | wangg12/caffe | bsd-2-clause |
We need two external files to help out:
* the net prototxt, defining the architecture and pointing to the train/test data
* the solver prototxt, defining the learning parameters
We start with the net. We'll write the net in a succinct and natural way as Python code that serializes to Caffe's protobuf model format.
This network expects to read from pregenerated LMDBs, but reading directly from ndarrays is also possible using MemoryDataLayer. | from caffe import layers as L
from caffe import params as P
def lenet(lmdb, batch_size):
# our version of LeNet: a series of linear and simple nonlinear transformations
n = caffe.NetSpec()
n.data, n.label = L.Data(batch_size=batch_size, backend=P.Data.LMDB, source=lmdb,
transform_param=dict(scale=1./255), ntop=2)
n.conv1 = L.Convolution(n.data, kernel_size=5, num_output=20, weight_filler=dict(type='xavier'))
n.pool1 = L.Pooling(n.conv1, kernel_size=2, stride=2, pool=P.Pooling.MAX)
n.conv2 = L.Convolution(n.pool1, kernel_size=5, num_output=50, weight_filler=dict(type='xavier'))
n.pool2 = L.Pooling(n.conv2, kernel_size=2, stride=2, pool=P.Pooling.MAX)
n.ip1 = L.InnerProduct(n.pool2, num_output=500, weight_filler=dict(type='xavier'))
n.relu1 = L.ReLU(n.ip1, in_place=True)
n.ip2 = L.InnerProduct(n.relu1, num_output=10, weight_filler=dict(type='xavier'))
n.loss = L.SoftmaxWithLoss(n.ip2, n.label)
return n.to_proto()
with open('examples/mnist/lenet_auto_train.prototxt', 'w') as f:
f.write(str(lenet('examples/mnist/mnist_train_lmdb', 64)))
with open('examples/mnist/lenet_auto_test.prototxt', 'w') as f:
f.write(str(lenet('examples/mnist/mnist_test_lmdb', 100))) | examples/01-learning-lenet.ipynb | wangg12/caffe | bsd-2-clause |
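As mentioned above, the net can also read directly from ndarrays instead of LMDB. Below is a minimal sketch of what the data layer might look like with a MemoryData layer; it reuses the caffe and L imports from the cell above, and the shapes and variable names are assumptions for MNIST-sized inputs, not part of this example:

```
# Hypothetical alternative data layer: read from in-memory ndarrays instead of LMDB.
# Shapes assume MNIST-sized inputs (1 x 28 x 28); the remaining layers (conv1, pool1, ...)
# would be defined exactly as in lenet() above.
n = caffe.NetSpec()
n.data, n.label = L.MemoryData(batch_size=64, channels=1, height=28, width=28, ntop=2)
# At run time the arrays are attached to the instantiated net with something like:
#   net.set_input_arrays(images, labels)   # both float32; images shaped (N, 1, 28, 28)
```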
The net has been written to disk in a more verbose but human-readable serialization format using Google's protobuf library. You can read, write, and modify this description directly. Let's take a look at the train net. | !cat examples/mnist/lenet_auto_train.prototxt | examples/01-learning-lenet.ipynb | wangg12/caffe | bsd-2-clause |
Now let's see the learning parameters, which are also written as a prototxt file. We're using SGD with momentum, weight decay, and a specific learning rate schedule. | !cat examples/mnist/lenet_auto_solver.prototxt | examples/01-learning-lenet.ipynb | wangg12/caffe | bsd-2-clause |
Let's pick a device and load the solver. We'll use SGD (with momentum), but Adagrad and Nesterov's accelerated gradient are also available. | caffe.set_device(0)
caffe.set_mode_gpu()
solver = caffe.SGDSolver('examples/mnist/lenet_auto_solver.prototxt') | examples/01-learning-lenet.ipynb | wangg12/caffe | bsd-2-clause |
To get an idea of the architecture of our net, we can check the dimensions of the intermediate features (blobs) and parameters (these will also be useful to refer to when manipulating data later). | # each output is (batch size, feature dim, spatial dim)
[(k, v.data.shape) for k, v in solver.net.blobs.items()]
# just print the weight sizes (not biases)
[(k, v[0].data.shape) for k, v in solver.net.params.items()] | examples/01-learning-lenet.ipynb | wangg12/caffe | bsd-2-clause |
Before taking off, let's check that everything is loaded as we expect. We'll run a forward pass on the train and test nets and check that they contain our data. | solver.net.forward() # train net
solver.test_nets[0].forward() # test net (there can be more than one)
# we use a little trick to tile the first eight images
imshow(solver.net.blobs['data'].data[:8, 0].transpose(1, 0, 2).reshape(28, 8*28), cmap='gray')
print solver.net.blobs['label'].data[:8]
imshow(solver.test_nets[0].blobs['data'].data[:8, 0].transpose(1, 0, 2).reshape(28, 8*28), cmap='gray')
print solver.test_nets[0].blobs['label'].data[:8] | examples/01-learning-lenet.ipynb | wangg12/caffe | bsd-2-clause |
Both train and test nets seem to be loading data, and to have correct labels.
Let's take one step of (minibatch) SGD and see what happens. | solver.step(1) | examples/01-learning-lenet.ipynb | wangg12/caffe | bsd-2-clause |
Do we have gradients propagating through our filters? Let's see the updates to the first layer, shown here as a $4 \times 5$ grid of $5 \times 5$ filters. | imshow(solver.net.params['conv1'][0].diff[:, 0].reshape(4, 5, 5, 5)
.transpose(0, 2, 1, 3).reshape(4*5, 5*5), cmap='gray') | examples/01-learning-lenet.ipynb | wangg12/caffe | bsd-2-clause |
Something is happening. Let's run the net for a while, keeping track of a few things as it goes.
Note that this process will be the same as if training through the caffe binary. In particular:
* logging will continue to happen as normal
* snapshots will be taken at the interval specified in the solver prototxt (here, every 5000 iterations)
* testing will happen at the interval specified (here, every 500 iterations)
Since we have control of the loop in Python, we're free to compute additional things as we go, as we show below. We can do many other things as well, for example:
* write a custom stopping criterion (a sketch is given after the training loop below)
* change the solving process by updating the net in the loop | %%time
niter = 200
test_interval = 25
# losses will also be stored in the log
train_loss = zeros(niter)
test_acc = zeros(int(np.ceil(niter / test_interval)))
output = zeros((niter, 8, 10))
# the main solver loop
for it in range(niter):
solver.step(1) # SGD by Caffe
# store the train loss
train_loss[it] = solver.net.blobs['loss'].data
# store the output on the first test batch
# (start the forward pass at conv1 to avoid loading new data)
solver.test_nets[0].forward(start='conv1')
output[it] = solver.test_nets[0].blobs['ip2'].data[:8]
# run a full test every so often
# (Caffe can also do this for us and write to a log, but we show here
# how to do it directly in Python, where more complicated things are easier.)
if it % test_interval == 0:
print 'Iteration', it, 'testing...'
correct = 0
for test_it in range(100):
solver.test_nets[0].forward()
correct += sum(solver.test_nets[0].blobs['ip2'].data.argmax(1)
== solver.test_nets[0].blobs['label'].data)
test_acc[it // test_interval] = correct / 1e4 | examples/01-learning-lenet.ipynb | wangg12/caffe | bsd-2-clause |
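As an example of the flexibility mentioned above, here is a minimal sketch of a custom stopping criterion; the threshold and iteration cap are arbitrary values chosen for illustration and are not part of the original example:

```
# Keep stepping until the training loss falls below a threshold (sketch only).
loss_threshold = 0.05    # assumed value
max_extra_iters = 1000   # assumed safety cap
for extra_it in range(max_extra_iters):
    solver.step(1)
    if float(solver.net.blobs['loss'].data) < loss_threshold:
        print 'loss below threshold after', extra_it + 1, 'extra iterations'
        break
```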
Let's plot the train loss and test accuracy. | _, ax1 = subplots()
ax2 = ax1.twinx()
ax1.plot(arange(niter), train_loss)
ax2.plot(test_interval * arange(len(test_acc)), test_acc, 'r')
ax1.set_xlabel('iteration')
ax1.set_ylabel('train loss')
ax2.set_ylabel('test accuracy') | examples/01-learning-lenet.ipynb | wangg12/caffe | bsd-2-clause |
The loss seems to have dropped quickly and converged (except for stochasticity), while the accuracy rose correspondingly. Hooray!
Since we saved the results on the first test batch, we can watch how our prediction scores evolved. We'll plot time on the $x$ axis and each possible label on the $y$, with lightness indicating confidence. | for i in range(8):
figure(figsize=(2, 2))
imshow(solver.test_nets[0].blobs['data'].data[i, 0], cmap='gray')
figure(figsize=(10, 2))
imshow(output[:50, i].T, interpolation='nearest', cmap='gray')
xlabel('iteration')
ylabel('label') | examples/01-learning-lenet.ipynb | wangg12/caffe | bsd-2-clause |
We started with little idea about any of these digits, and ended up with correct classifications for each. If you've been following along, you'll see the last digit is the most difficult, a slanted "9" that's (understandably) most confused with "4".
Note that these are the "raw" output scores rather than the softmax-computed probability vectors. The latter, shown below, make it easier to see the confidence of our net (but harder to see the scores for less likely digits). | for i in range(8):
figure(figsize=(2, 2))
imshow(solver.test_nets[0].blobs['data'].data[i, 0], cmap='gray')
figure(figsize=(10, 2))
imshow(exp(output[:50, i].T) / exp(output[:50, i].T).sum(0), interpolation='nearest', cmap='gray')
xlabel('iteration')
ylabel('label') | examples/01-learning-lenet.ipynb | wangg12/caffe | bsd-2-clause |
'y' is travel time in seconds.
Extract Features
Represent time as a decimal fraction of a day, so that we can more easily use it for prediction. | def frac_day(time):
"""
Convert time to fraction of a day (0.0 to 1.0)
Can also pass this function a datetime object
"""
return time.hour*(1./24) + time.minute*(1./(24*60)) + time.second*(1./(24*60*60)) | jupyter_notebooks/traveltime_lineartime.ipynb | anjsimmo/simple-ml-pipeline | mit |
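A quick check of the conversion (the times below are illustrative values):

```
import datetime

print(frac_day(datetime.time(12, 0, 0)))   # noon -> 0.5
print(frac_day(datetime.time(18, 0, 0)))   # 6 pm -> 0.75
```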
We create the features $time^1$, $time^2$, ... in order to allow the regression algorithm to find polynomial fits. | def extract_features(data):
# Turn list into a n*1 design matrix. At this stage, we only have a single feature in each row.
t = np.array([frac_day(_t) for _t in data['t']])[:, np.newaxis]
# Add t^2, t^3, ... to allow polynomial regression
xs = np.hstack([t, t**2, t**3, t**4, t**5, t**6, t**7, t**8])
return xs
t = np.array([frac_day(_t) for _t in data['t']])[:, np.newaxis]
xs = extract_features(data)
y = data['y'].values | jupyter_notebooks/traveltime_lineartime.ipynb | anjsimmo/simple-ml-pipeline | mit |
Model
Train model, plot regression curve. | %matplotlib inline
import matplotlib.pyplot as plt
from sklearn import linear_model
regr = linear_model.LinearRegression()
regr.fit(xs, y)
y_pred = regr.predict(xs)
plt.figure(figsize=(8,8))
plt.scatter(t, y, color='black', label='actual')
plt.plot(t, y_pred, color='blue', label='regression curve')
plt.title("Travel time vs time. Princes Highway. Outbound. Wed 19 Aug 2015")
plt.ylabel("Travel Time from site 2409 to site 2425 (seconds)")
plt.xlabel("Time (fraction of day)")
plt.legend(loc='lower right')
plt.xlim([0,1])
plt.ylim([0,None])
plt.show()
# http://scikit-learn.org/stable/modules/linear_model.html#ordinary-least-squares
print('Intercept: %.2f' % regr.intercept_)
print('Coefficients: %s' % regr.coef_)
print('R^2 score: %.2f' % regr.score(xs, y)) | jupyter_notebooks/traveltime_lineartime.ipynb | anjsimmo/simple-ml-pipeline | mit |
Evaluate | test = datatables.traveltime.read('data/traveltime.task.test') # Traffic on Wed 27 Aug 2015
test_xs = extract_features(test)
test['pred'] = regr.predict(test_xs)
test['error'] = test['y'] - test['pred']
# todo: ensure data is a real number (complex numbers could be used to cheat)
rms_error = math.sqrt(sum(test['error']**2) / len(test))  # mean squared error over the test set, then square root
test.head()
rms_error | jupyter_notebooks/traveltime_lineartime.ipynb | anjsimmo/simple-ml-pipeline | mit |
Getting started with The Joker
The Joker (pronounced Yo-kurr) is a highly specialized Monte Carlo (MC) sampler that is designed to generate converged posterior samplings for Keplerian orbital parameters, even when your data are sparse, non-uniform, or very noisy. This is not a general MC sampler, and this is not a Markov Chain MC sampler like emcee, or pymc3: This is fundamentally a rejection sampler with some tricks that help improve performance for the two-body problem.
The Joker shines over more conventional MCMC sampling methods when your radial velocity data is imprecise, non-uniform, sparse, or has a short baseline: In these cases, your likelihood function will have many, approximately equal-height modes that are often spaced widely, all properties that make conventional MCMC bork when applied to this problem. In this tutorial, we will not go through the math behind the sampler (most of that is covered in the original paper). However, some terminology is important to know for the tutorial below or for reading the documentation. Most relevant, the parameters in the two-body problem (Kepler orbital parameters) split into two sets: nonlinear and linear parameters. The nonlinear parameters are always the same in each run of The Joker: period $P$, eccentricity $e$, argument of pericenter $\omega$, and a phase $M_0$. The default linear parameters are the velocity semi-amplitude $K$, and a systemic velocity $v_0$. However, there are ways to add additional linear parameters into the model (as described in other tutorials).
For this tutorial, we will set up an inference problem that is common to binary star or exoplanet studies, show how to generate posterior orbit samples from the data, and then demonstrate how to visualize the samples. Other tutorials demonstrate more advanced or specialized functionality included in The Joker, like:
- fully customizing the parameter prior distributions,
- allowing for a long-term velocity trend in the data,
- continuing sampling with standard MCMC methods when The Joker returns one or few samples,
- simultaneously inferring constant offsets between data sources (i.e. when using data from multiple instruments that may have calibration offsets)
But let's start here with the most basic functionality!
First, imports we will need later: | import astropy.table as at
from astropy.time import Time
import astropy.units as u
from astropy.visualization.units import quantity_support
import matplotlib.pyplot as plt
import numpy as np
%matplotlib inline
import thejoker as tj
# set up a random generator to ensure reproducibility
rnd = np.random.default_rng(seed=42) | docs/examples/1-Getting-started.ipynb | adrn/thejoker | mit |
Loading radial velocity data
To start, we need some radial velocity data to play with. Our ultimate goal is to construct or read in a thejoker.RVData instance, which is the main data container object used in The Joker. For this tutorial, we will use a simulated RV curve that was generated using a separate script and saved to a CSV file, and we will create an RVData instance manually.
Because we previously saved this data as an Astropy ECSV file, the units are provided with the column data and read in automatically using the astropy.table read/write interface: | data_tbl = at.QTable.read('data.ecsv')
data_tbl[:2] | docs/examples/1-Getting-started.ipynb | adrn/thejoker | mit |
The full simulated data table has many rows (256), so let's randomly grab 4 rows to work with: | sub_tbl = data_tbl[rnd.choice(len(data_tbl), size=4, replace=False)]
sub_tbl | docs/examples/1-Getting-started.ipynb | adrn/thejoker | mit |
It looks like the time column is given in Barycentric Julian Date (BJD), so in order to create an RVData instance, we will need to create an astropy.time.Time object from this column: | t = Time(sub_tbl['bjd'], format='jd', scale='tcb')
data = tj.RVData(t=t, rv=sub_tbl['rv'], rv_err=sub_tbl['rv_err']) | docs/examples/1-Getting-started.ipynb | adrn/thejoker | mit |
We now have an RVData object, so we could continue on with the tutorial. But as a quick aside, there is an alternate, more automatic (automagical?) way to create an RVData instance from tabular data: RVData.guess_from_table. This classmethod attempts to guess the time format and radial velocity column names from the columns in the data table. It is very much an experimental feature, so if you think it can be improved, please open an issue in the GitHub repo for The Joker. In any case, here it successfully works: | data = tj.RVData.guess_from_table(sub_tbl) | docs/examples/1-Getting-started.ipynb | adrn/thejoker | mit |
One of the handy features of RVData is the .plot() method, which generates a quick view of the data: | _ = data.plot() | docs/examples/1-Getting-started.ipynb | adrn/thejoker | mit |
The data are clearly variable! But what orbits are consistent with these data? I suspect many, given how sparse they are! Now that we have the data in hand, we need to set up the sampler by specifying prior distributions over the parameters in The Joker.
Specifying the prior distributions for The Joker parameters
The prior pdf (probability distribution function) for The Joker is controlled and managed through the thejoker.JokerPrior class. The prior for The Joker is fairly customizable and the initializer for JokerPrior is therefore pretty flexible; usually too flexible for typical use cases. We will therefore start by using an alternate initializer defined on the class, JokerPrior.default(), that provides a simpler interface for creating a JokerPrior instance that uses the default prior distributions assumed in The Joker. In the default prior:
$$
\begin{align}
&p(P) \propto \frac{1}{P} \quad ; \quad P \in (P_{\rm min}, P_{\rm max})\\
&p(e) = B(a_e, b_e)\\
&p(\omega) = \mathcal{U}(0, 2\pi)\\
&p(M_0) = \mathcal{U}(0, 2\pi)\\
&p(K) = \mathcal{N}(K \,|\, \mu_K, \sigma_K)\\
&\sigma_K = \sigma_{K, 0} \, \left(\frac{P}{P_0}\right)^{-1/3} \, \left(1 - e^2\right)^{-1/2}\\
&p(v_0) = \mathcal{N}(v_0 \,|\, \mu_{v_0}, \sigma_{v_0})
\end{align}
$$
where $B(.)$ is the beta distribution, $\mathcal{U}$ is the uniform distribution, and $\mathcal{N}$ is the normal distribution.
Most parameters in the distributions above are set to reasonable values, but there are a few required parameters for the default case: the range of allowed period values (P_min and P_max), the scale of the K prior variance sigma_K0, and the standard deviation of the $v_0$ prior sigma_v. Let's set these to some arbitrary numbers. Here, I chose the value for sigma_K0 to be typical of a binary star system; if using The Joker for exoplanet science, you will want to adjust this correspondingly. | prior = tj.JokerPrior.default(
P_min=2*u.day, P_max=1e3*u.day,
sigma_K0=30*u.km/u.s,
sigma_v=100*u.km/u.s) | docs/examples/1-Getting-started.ipynb | adrn/thejoker | mit |
Once we have the prior instance, we need to generate some prior samples that The Joker will then rejection-sample down to a set of posterior samples. To generate prior samples, use the prior.sample() method. Here, we'll generate a large number of samples to use: | prior_samples = prior.sample(size=250_000,
random_state=rnd)
prior_samples | docs/examples/1-Getting-started.ipynb | adrn/thejoker | mit |
This object behaves like a Python dictionary in that the parameter values can be accessed via their key names: | prior_samples['P']
prior_samples['e'] | docs/examples/1-Getting-started.ipynb | adrn/thejoker | mit |
They can also be written to disk or re-loaded using this same class. For example, to save these prior samples to the current directory to the file "prior_samples.hdf5": | prior_samples.write("prior_samples.hdf5", overwrite=True) | docs/examples/1-Getting-started.ipynb | adrn/thejoker | mit |
We could then load the samples from this file using: | tj.JokerSamples.read("prior_samples.hdf5") | docs/examples/1-Getting-started.ipynb | adrn/thejoker | mit |
Running The Joker
Now that we have a set of prior samples, we can create an instance of The Joker and use the rejection sampler: | joker = tj.TheJoker(prior, random_state=rnd)
joker_samples = joker.rejection_sample(data, prior_samples,
max_posterior_samples=256) | docs/examples/1-Getting-started.ipynb | adrn/thejoker | mit |
This works by either passing in an instance of JokerSamples containing the prior samples, or by passing in a filename that contains JokerSamples written to disk. So, for example, this is equivalent: | joker_samples = joker.rejection_sample(data, "prior_samples.hdf5",
max_posterior_samples=256) | docs/examples/1-Getting-started.ipynb | adrn/thejoker | mit |
The max_posterior_samples argument above specifies the maximum number of posterior samples to return. It is often helpful to set a threshold here in cases when your data are very uninformative to avoid generating huge numbers of samples (which can slow down the sampler considerably).
In either case above, the joker_samples object returned from rejection_sample() is also an instance of the JokerSamples class, but now contains posterior samples for all nonlinear and linear parameters in the model: | joker_samples | docs/examples/1-Getting-started.ipynb | adrn/thejoker | mit |
Plotting The Joker orbit samples over the input data
With posterior samples in Keplerian orbital parameters in hand for our data set, we can now plot the posterior samples over the input data to get a sense for how constraining the data are. The Joker comes with a convenience plotting function, plot_rv_curves, for doing just this: | _ = tj.plot_rv_curves(joker_samples, data=data) | docs/examples/1-Getting-started.ipynb | adrn/thejoker | mit |
It has various options to allow customizing the style of the plot: | fig, ax = plt.subplots(1, 1, figsize=(8, 4))
_ = tj.plot_rv_curves(joker_samples, data=data,
plot_kwargs=dict(color='tab:blue'),
data_plot_kwargs=dict(color='tab:red'),
relative_to_t_ref=True, ax=ax)
ax.set_xlabel(f'BMJD$ - {data.t.tcb.mjd.min():.3f}$') | docs/examples/1-Getting-started.ipynb | adrn/thejoker | mit |
Another way to visualize the samples is to plot 2D projections of the sample values, for example, to plot period against eccentricity: | fig, ax = plt.subplots(1, 1, figsize=(8, 5))
with quantity_support():
ax.scatter(joker_samples['P'],
joker_samples['e'],
s=20, lw=0, alpha=0.5)
ax.set_xscale('log')
ax.set_xlim(prior.pars['P'].distribution.a,
prior.pars['P'].distribution.b)
ax.set_ylim(0, 1)
ax.set_xlabel('$P$ [day]')
ax.set_ylabel('$e$') | docs/examples/1-Getting-started.ipynb | adrn/thejoker | mit |
But is the true period value included in those distinct period modes returned by The Joker? When generating the simulated data, I also saved the true orbital parameters used to generate the data, so we can load and over-plot it: | import pickle
with open('true-orbit.pkl', 'rb') as f:
truth = pickle.load(f)
fig, ax = plt.subplots(1, 1, figsize=(8, 5))
with quantity_support():
ax.scatter(joker_samples['P'],
joker_samples['e'],
s=20, lw=0, alpha=0.5)
ax.axvline(truth['P'], zorder=-1, color='tab:green')
ax.axhline(truth['e'], zorder=-1, color='tab:green')
ax.text(truth['P'], 0.95, 'truth', fontsize=20,
va='top', ha='left', color='tab:green')
ax.set_xscale('log')
ax.set_xlim(prior.pars['P'].distribution.a,
prior.pars['P'].distribution.b)
ax.set_ylim(0, 1)
ax.set_xlabel('$P$ [day]')
ax.set_ylabel('$e$') | docs/examples/1-Getting-started.ipynb | adrn/thejoker | mit |
Review Inputs
In addition to a working ActivitySim model setup, estimation mode requires an ActivitySim format household travel survey. An ActivitySim format household travel survey is very similar to ActivitySim's simulation model tables:
households
persons
tours
joint_tour_participants
trips
Examples of the ActivitySim format household travel survey are included in the example_estimation data folders. The user is responsible for formatting their household travel survey into the appropriate format.
After creating an ActivitySim format household travel survey, the scripts/infer.py script is run to append additional calculated fields. An example of an additional calculated field is the household:joint_tour_frequency, which is calculated based on the tours and joint_tour_participants tables.
The input survey files are below.
Survey households | pd.read_csv("../data_sf/survey_data/override_households.csv") | activitysim/examples/example_estimation/notebooks/01_estimation_mode.ipynb | synthicity/activitysim | agpl-3.0 |
Survey persons | pd.read_csv("../data_sf/survey_data/override_persons.csv") | activitysim/examples/example_estimation/notebooks/01_estimation_mode.ipynb | synthicity/activitysim | agpl-3.0 |
Survey joint tour participants | pd.read_csv("../data_sf/survey_data/override_joint_tour_participants.csv") | activitysim/examples/example_estimation/notebooks/01_estimation_mode.ipynb | synthicity/activitysim | agpl-3.0 |
Survey tours | pd.read_csv("../data_sf/survey_data/override_tours.csv") | activitysim/examples/example_estimation/notebooks/01_estimation_mode.ipynb | synthicity/activitysim | agpl-3.0 |
Survey trips | pd.read_csv("../data_sf/survey_data/override_trips.csv") | activitysim/examples/example_estimation/notebooks/01_estimation_mode.ipynb | synthicity/activitysim | agpl-3.0 |
Example Setup if Needed
To avoid duplication of inputs, especially model settings and expressions, the example_estimation depends on the example. The following commands create an example setup for use. The location of these example setups (i.e. the folders) is important because the paths are referenced in this notebook. The commands below download the skims.omx for the SF county example from the activitysim resources repository. | !activitysim create -e example_estimation_sf -d test | activitysim/examples/example_estimation/notebooks/01_estimation_mode.ipynb | synthicity/activitysim | agpl-3.0 |
Run the Estimation Example
The next step is to run the model with an estimation.yaml settings file with the following settings in order to output the EDB for all submodels:
```
enable=True
bundles:
- school_location
- workplace_location
- auto_ownership
- free_parking
- cdap
- mandatory_tour_frequency
- mandatory_tour_scheduling
- joint_tour_frequency
- joint_tour_composition
- joint_tour_participation
- joint_tour_destination
- joint_tour_scheduling
- non_mandatory_tour_frequency
- non_mandatory_tour_destination
- non_mandatory_tour_scheduling
- tour_mode_choice
- atwork_subtour_frequency
- atwork_subtour_destination
- atwork_subtour_scheduling
- atwork_subtour_mode_choice
survey_tables:
households:
file_name: survey_data/override_households.csv
index_col: household_id
persons:
file_name: survey_data/override_persons.csv
index_col: person_id
tours:
file_name: survey_data/override_tours.csv
joint_tour_participants:
file_name: survey_data/override_joint_tour_participants.csv
```
This enables the estimation mode functionality, identifies which models to run and their output estimation data bundles (EDBs), and the input survey tables, which include the override settings for each model choice.
With this setup, the model will output an EDB with the following tables for each submodel (shown here for auto ownership):
- model settings - auto_ownership_model_settings.yaml
- coefficients - auto_ownership_coefficients.csv
- utilities specification - auto_ownership_SPEC.csv
- chooser and alternatives data - auto_ownership_values_combined.csv
The following code runs the software in estimation mode, inheriting the settings from the simulation setup and using the San Francisco county data setup. It produces the EDB for all submodels but runs all the model steps identified in the inherited settings file. | %cd test
!activitysim run -c configs_estimation/configs -c configs -o output -d data_sf | activitysim/examples/example_estimation/notebooks/01_estimation_mode.ipynb | synthicity/activitysim | agpl-3.0 |
What's going on in cells 11 & 14? | greet = functools.partial(hello_doctor)
greet("Dr.Susan Calvin")
welcome = functools.partial(hello_doctor)
welcome("Dr.Susan Calvin")
def numpower(base, exponent):
return base ** exponent
def square(base):
return numpower(base, 2)
def cube(base):
return numpower(base, 3)
print square(25)
print cube(15) | functools_partial_usage.ipynb | mramanathan/pydiary_notes | gpl-3.0 |
In this chapter, we compare our PQk-means to k-means in the faiss library. Faiss provides one of the most efficient implementations of nearest neighbor algorithms for both CPU(s) and GPU(s). It also provides an implementation of vanilla k-means, which we will compare to. The core part of faiss is implemented in C++, and a Python binding is available.
We compare PQk-means to both the CPU and the GPU version. Our configurations are:
- faiss-CPU: This was built with Intel MKL, which provides the fastest backend BLAS implementation. The algorithms in the library are automatically parallelized. All evaluations are conducted on a server with 3.6 GHz Intel Xeon CPU (6 cores, 12 threads)
- faiss-GPU: The library was built with CUDA 8.0. Two middle-level GPUs, NVIDIA GTX 1080s, are used for the evaluation. The algorithms can be run over multi GPUs.
For the comparison, we leverage the SIFT1M dataset. | Xt, X = pqkmeans.evaluation.get_sift1m_dataset() # Xt: the training data. X: the testing data to be clustered | tutorial/4_comparison_to_faiss.ipynb | DwangoMediaVillage/pqkmeans | mit |
First, you can download the data with a helper script. This will take several minutes and consume 168 MB of disk space. | Xt = Xt.astype(numpy.float32)
X = X.astype(numpy.float32)
D = X.shape[1]
print("Xt.shape:{}\nX.shape:{}".format(Xt.shape, X.shape)) | tutorial/4_comparison_to_faiss.ipynb | DwangoMediaVillage/pqkmeans | mit |
Because faiss takes 32-bit float vectors as inputs, the data is converted to float32.
2. Small-scale comparison: N=10^5, K=10^3 (k-means with faiss-CPU v.s. k-means with sklearn)
First, let us compare the k-means implementation of faiss and sklearn using 100K vectors from SIFT1M. Then we show that faiss is much faster than sklearn with almost the same error.
Note that it is hard to run k-means-sklearn with a large K because it is too slow (that is the reason for this small-scale experiment) | K_small = 1000
N_small = 100000
# Setup clustering instances. We stop each algorithm with 10 iterations
kmeans_faiss_cpu_small = faiss.Kmeans(d=D, k=K_small, niter=10)
kmeans_sklearn_small = KMeans(n_clusters=K_small, n_jobs=-1, max_iter=10) | tutorial/4_comparison_to_faiss.ipynb | DwangoMediaVillage/pqkmeans | mit |
Let's run each algorithm | %%time
print("faiss-cpu:")
kmeans_faiss_cpu_small.train(X[:N_small])
_, ids_faiss_cpu_small = kmeans_faiss_cpu_small.index.search(X[:N_small], 1)
%%time
print("sklearn:")
ids_sklearn_small = kmeans_sklearn_small.fit_predict(X[:N_small])
_, faiss_cpu_small_error, _ = pqkmeans.evaluation.calc_error(ids_faiss_cpu_small.reshape(-1), X[:N_small], K_small)
_, sklearn_small_error, _ = pqkmeans.evaluation.calc_error(ids_sklearn_small, X[:N_small], K_small)
print("k-means, faiss-cpu, error: {}".format(faiss_cpu_small_error))
print("k-means, sklearn, error: {}".format(sklearn_small_error)) | tutorial/4_comparison_to_faiss.ipynb | DwangoMediaVillage/pqkmeans | mit |
We observed that
- k-means with faiss-CPU (2 sec) is surprisingly faster than k-means with sklearn (3 min) with almost the same error. This speedup is likely due to the highly optimized implementation of the nearest neighbor search in faiss with Intel MKL BLAS. This suggests that faiss-CPU is a better option for exact k-means on a typical computer.
Because faiss-CPU is faster than sklearn, sklearn is not compared in the next section.
3. Large-scale comparison: N=10^6, K=10^4 (PQk-means, k-means with faiss-CPU, and k-means with faiss-GPU) | # Setup GPUs for faiss-gpu
# In my environment, the first GPU (id=0) is for rendering, and the second (id=1) and the third (id=2) GPUs are GPGPU (GTX1080).
# We activate only the second and the third GPU
import os
os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID" # make sure the order is identical to the result of nvidia-smi
os.environ["CUDA_VISIBLE_DEVICES"] = "1,2" # Please change here for your environment | tutorial/4_comparison_to_faiss.ipynb | DwangoMediaVillage/pqkmeans | mit |
Next, let us compare PQk-means with faiss-CPU and faiss-GPU using the whole dataset (N=10^6, K=10^4). Note that this is a 100x larger setting compared to Sec 2 (NK=10^8 vs NK=10^10).
First, as pre-processing for PQk-means, let's train a PQ encoder and encode all the data. This will take around 10 sec. | %%time
# Train the encoder
encoder = pqkmeans.encoder.PQEncoder(num_subdim=4, Ks=256)
encoder.fit(Xt)
# Encode the vectors to PQ-codes
X_code = encoder.transform(X) | tutorial/4_comparison_to_faiss.ipynb | DwangoMediaVillage/pqkmeans | mit |
Note that X_code is 128x more memory efficient than X: each original SIFT vector takes 128 float32 values (512 bytes), while each PQ-code takes just 4 uint8 codes (4 bytes). | print("X.shape: {}, X.dtype: {}, X.nbytes: {} MB".format(X.shape, X.dtype, X.nbytes / 10**6))
print("X_code.shape: {}, X_code.dtype: {}, X_code.nbytes: {} MB".format(X_code.shape, X_code.dtype, X_code.nbytes / 10**6)) | tutorial/4_comparison_to_faiss.ipynb | DwangoMediaVillage/pqkmeans | mit |
Then each algorithm is instantiated as follows: | K = 10000 # Set larger K
# Setup k-means instances. The number of iteration is set as 20 for all methods
# PQ-kmeans
kmeans_pqkmeans = pqkmeans.clustering.PQKMeans(encoder=encoder, k=K, iteration=20)
# Faiss-cpu
kmeans_faiss_cpu = faiss.Kmeans(d=D, k=K, niter=20)
kmeans_faiss_cpu.cp.max_points_per_centroid = 1000000 # otherwise the kmeans implementation sub-samples the training set | tutorial/4_comparison_to_faiss.ipynb | DwangoMediaVillage/pqkmeans | mit |
Because some configuration is required for the GPU, we wrap the GPU clustering up in one function: | def run_faiss_gpu(X, K, ngpu):
# This code is based on https://github.com/facebookresearch/faiss/blob/master/benchs/kmeans_mnist.py
D = X.shape[1]
clus = faiss.Clustering(D, K)
# otherwise the kmeans implementation sub-samples the training set
clus.max_points_per_centroid = 10000000
clus.niter = 20
res = [faiss.StandardGpuResources() for i in range(ngpu)]
flat_config = []
for i in range(ngpu):
cfg = faiss.GpuIndexFlatConfig()
cfg.useFloat16 = False
cfg.device = i
flat_config.append(cfg)
if ngpu == 1:
index = faiss.GpuIndexFlatL2(res[0], D, flat_config[0])
else:
indexes = [faiss.GpuIndexFlatL2(res[i], D, flat_config[i])
for i in range(ngpu)]
index = faiss.IndexProxy()
for sub_index in indexes:
index.addIndex(sub_index)
# Run clustering
clus.train(X, index)
# Return the assignment
_, ids = index.search(X, 1)
return ids | tutorial/4_comparison_to_faiss.ipynb | DwangoMediaVillage/pqkmeans | mit |
Run each method and see the computational cost. | %%time
print("PQk-means:")
ids_pqkmeans = kmeans_pqkmeans.fit_predict(X_code)
%%time
print("faiss-cpu:")
kmeans_faiss_cpu.train(X)
_, ids_faiss_cpu = kmeans_faiss_cpu.index.search(X, 1)
%%time
print("faiss with GPU:")
ids_faiss_gpu = run_faiss_gpu(X, K, ngpu=2) # Please adjust ngpu for your environment
_, pqkmeans_error, _ = pqkmeans.evaluation.calc_error(ids_pqkmeans, X, K)
_, faiss_cpu_error, _ = pqkmeans.evaluation.calc_error(ids_faiss_cpu.reshape(-1), X, K)
_, faiss_gpu_error, _ = pqkmeans.evaluation.calc_error(ids_faiss_gpu.reshape(-1), X, K)
print("PQk-means, error: {}".format(pqkmeans_error))
print("k-means, faiss-cpu, error: {}".format(faiss_cpu_error))
print("k-means, faiss-gpu, error: {}".format(faiss_gpu_error)) | tutorial/4_comparison_to_faiss.ipynb | DwangoMediaVillage/pqkmeans | mit |
Of course, we need a better way to figure out how well we’ve fit the data than staring at the graph.
A common measure is the coefficient of determination (or R-squared), which measures the fraction of the total variation in the dependent variable that is captured by the model.
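As a minimal sketch, R-squared can be computed directly from the observations and the model's predictions; the names y and y_pred here are placeholders, not variables defined earlier in this notebook:

```
import numpy as np

def r_squared(y, y_pred):
    """Coefficient of determination: 1 - SS_res / SS_tot."""
    y = np.asarray(y, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    ss_res = np.sum((y - y_pred) ** 2)       # variation left unexplained by the model
    ss_tot = np.sum((y - np.mean(y)) ** 2)   # total variation around the mean
    return 1.0 - ss_res / ss_tot
```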
Multiple Regression using Matrix Method
Machine Learning in Action
https://github.com/computational-class/machinelearninginaction/
$$ y_i = X_i^T w$$
The constant term can be represented by a column of 1s in X.
The squared error can be written as:
$$ \sum_{i = 1}^m (y_i - X_i^T w)^2 $$
We can also write this in matrix notation as $(y-Xw)^T(y-Xw)$.
If we take the derivative of this with respect to w, we get (up to a constant factor of $-2$) $X^T(y-Xw)$.
We can set this to zero and solve for w to get the following equation:
$$\hat w = (X^T X)^{-1}X^T y$$ | # https://github.com/computational-class/machinelearninginaction/blob/master/Ch08/regression.py
import pandas as pd
import random
dat = pd.read_csv('../data/ex0.txt', sep = '\t', names = ['x1', 'x2', 'y'])
dat['x3'] = [yi*.3 + .5*random.random() for yi in dat['y']]
dat.head()
from numpy import mat, linalg, corrcoef
def standRegres(xArr,yArr):
xMat = mat(xArr); yMat = mat(yArr).T
xTx = xMat.T*xMat
if linalg.det(xTx) == 0.0:
print("This matrix is singular, cannot do inverse")
return
ws = xTx.I * (xMat.T*yMat)
return ws
xs = [[dat.x1[i], dat.x2[i], dat.x3[i]] for i in dat.index]
y = dat.y
print(xs[:2])
ws = standRegres(xs, y)
print(ws)
xMat=mat(xs)
yMat=mat(y)
yHat = xMat*ws
xCopy=xMat.copy()
xCopy.sort(0)
yHat=xCopy*ws
fig = plt.figure()
ax = fig.add_subplot(111)
ax.scatter(xMat[:,1].flatten().A[0], yMat.T[:,0].flatten().A[0])
ax.plot(xCopy[:,1],yHat, 'r-')
plt.ylim(0, 5)
plt.show()
yHat = xMat*ws
corrcoef(yHat.T, yMat) | code/08.06-regression.ipynb | computational-class/cjc2016 | mit |
Doing Statistics with statsmodels
http://www.statsmodels.org/stable/index.html
statsmodels is a Python module that provides classes and functions for the estimation of many different statistical models, as well as for conducting statistical tests, and statistical data exploration. | import statsmodels.api as sm
import statsmodels.formula.api as smf
dat = pd.read_csv('ex0.txt', sep = '\t', names = ['x1', 'x2', 'y'])
dat['x3'] = [yi*.3 - .1*random.random() for yi in y]
dat.head()
results = smf.ols('y ~ x2 + x3', data=dat).fit()
results.summary()
fig = plt.figure(figsize=(12,8))
fig = sm.graphics.plot_partregress_grid(results, fig = fig)
plt.show()
import numpy as np
X = np.array(num_friends_good)
X = sm.add_constant(X, prepend=False)
mod = sm.OLS(daily_minutes_good, X)
res = mod.fit()
print(res.summary())
fig = plt.figure(figsize=(6,8))
fig = sm.graphics.plot_partregress_grid(res, fig = fig)
plt.show() | code/08.06-regression.ipynb | computational-class/cjc2016 | mit |
Resit Assignment part A
Deadline: Friday, November 13, 2020 before 17:00
Please name your files:
- ASSIGNMENT-RESIT-A.ipynb
- utils.py (from part B)
- raw_text_to_coll.py (from part B)
Please name your zip file as follows: RESIT-ASSIGNMENT.zip and upload it via Canvas (Resit Assignment).
- Please submit your assignment on Canvas: Resit Assignment
- If you have questions about this topic, please contact [email protected].
Questions and answers will be collected in this Q&A document,
so please check if your question has already been answered.
All of the covered chapters are important to this assignment. However, please pay special attention to:
- Chapter 10 - Dictionaries
- Chapter 11 - Functions and scope
- Chapter 14 - Reading and writing text files
- Chapter 15 - Off to analyzing text
- Chapter 17 - Data Formats II (JSON)
- Chapter 19 - More about Natural Language Processing Tools (spaCy)
In this assignment:
* we are going to process the texts in ../Data/Dreams/*txt
* for each file, we are going to determine:
* the number of characters
* the number of sentences
* the number of words
* the longest word
* the longest sentence
Note
This notebook should be placed in the same folder as the other Assignments!
Loading spaCy
Please make sure that spaCy is installed on your computer | import spacy | Assignments-colab/ASSIGNMENT_RESIT_A.ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
Please make sure you can load the English spaCy model: | nlp = spacy.load('en_core_web_sm') | Assignments-colab/ASSIGNMENT_RESIT_A.ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
Exercise 1: get paths
Define a function called get_paths that has the following parameter:
* input_folder: a string
The function:
* stores all paths to .txt files in the input_folder in a list
* returns a list of strings, i.e., each string is a file path | # your code here | Assignments-colab/ASSIGNMENT_RESIT_A.ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
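If you get stuck, one possible sketch (using glob) is shown below; try writing your own version first:

```
import glob
import os

def get_paths(input_folder):
    """Return a list of paths to all .txt files in input_folder."""
    return glob.glob(os.path.join(input_folder, '*.txt'))
```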
Please test your function using the following function call | paths = get_paths(input_folder='../Data/Dreams')
print(paths) | Assignments-colab/ASSIGNMENT_RESIT_A.ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
Exercise 2: load text
Define a function called load_text that has the following parameter:
* txt_path: a string
The function:
* opens the txt_path for reading and loads the contents of the file as a string
* returns a string, i.e., the content of the file | # your code here | Assignments-colab/ASSIGNMENT_RESIT_A.ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
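Again, one possible sketch (assuming the files are UTF-8 encoded); try your own version first:

```
def load_text(txt_path):
    """Open txt_path for reading and return its contents as a string."""
    with open(txt_path, encoding='utf-8') as infile:
        return infile.read()
```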
Exercise 3: return the longest
Define a function called return_the_longest that has the following parameter:
* list_of_strings: a list of strings
The function:
* returns the string with the highest number of characters. If multiple strings have the same length, return one of them. | def return_the_longest(list_of_strings):
"""
given a list of strings, return the longest string
if multiple strings have the same length, return one of them.
:param str list_of_strings: a list of strings
""" | Assignments-colab/ASSIGNMENT_RESIT_A.ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
Please test your function by running the following cell: | a_list_of_strings = ["this", "is", "a", "sentence"]
longest_string = return_the_longest(a_list_of_strings)
error_message = f'the longest string should be "sentence", you provided {longest_string}'
assert longest_string == 'sentence', error_message | Assignments-colab/ASSIGNMENT_RESIT_A.ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
Exercise 4: extract statistics
We are going to use spaCy to extract statistics from Vickie's dreams! Here are a few tips below about how to use spaCy:
tip 1: process text with spaCy | a_text = 'this is one sentence. this is another.'
doc = nlp(a_text) | Assignments-colab/ASSIGNMENT_RESIT_A.ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
tip 2: the number of characters is the length of the document | num_chars = len(doc.text)
print(num_chars) | Assignments-colab/ASSIGNMENT_RESIT_A.ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
tip 3: loop through the sentences of a document | for sent in doc.sents:
sent = sent.text
print(sent) | Assignments-colab/ASSIGNMENT_RESIT_A.ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
tip 4: loop through the words of a document | for token in doc:
word = token.text
print(word) | Assignments-colab/ASSIGNMENT_RESIT_A.ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
Define a function called extract_statistics that has the following parameters:
* nlp: the result of calling spacy.load('en_core_web_sm')
* txt_path: path to a txt file, e.g., '../Data/Dreams/vickie8.txt'
The function:
* loads the content of the file using the function load_text
* processes the content of the file using nlp(content) (see tip 1 of this exercise)
The function returns a dictionary with five keys:
* num_sents: the number of sentences in the document
* num_chars: the number of characters in the document
* num_tokens: the number of words in the document
* longest_sent: the longest sentence in the document
* Please make a list with all the sentences and call the function return_the_longest to retrieve the longest sentence
* longest_word: the longest word in the document
* Please make a list with all the words and call the function return_the_longest to retrieve the longest word
Test the function on one of the files from Vickie's dreams. | def extract_statistics(nlp, txt_path):
"""
given a txt_path
-use the load_text function to load the text
-process the text using spaCy
:param nlp: loaded spaCy model (result of calling spacy.load('en_core_web_sm'))
:param str txt_path: path to txt file
:rtype: dict
:return: a dictionary with the following keys:
-"num_sents" : the number of sentences
-"num_chars" : the number of characters
-"num_tokens" : the number of words
-"longest_sent" : the longest sentence
-"longest_word" : the longest word
"""
stats = extract_statistics(nlp, txt_path=paths[0])
stats | Assignments-colab/ASSIGNMENT_RESIT_A.ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
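One possible sketch, assuming the load_text and return_the_longest functions from the earlier exercises are already defined; it simply combines the four spaCy tips above:

```
def extract_statistics(nlp, txt_path):
    """Load a txt file, process it with spaCy, and return basic statistics."""
    content = load_text(txt_path)                    # from Exercise 2
    doc = nlp(content)
    sentences = [sent.text for sent in doc.sents]
    words = [token.text for token in doc]
    return {
        'num_sents': len(sentences),
        'num_chars': len(doc.text),
        'num_tokens': len(words),
        'longest_sent': return_the_longest(sentences),   # from Exercise 3
        'longest_word': return_the_longest(words),
    }
```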
Exercise 5: process all txt files
tip 1: how to obtain the basename of a file | import os
basename = os.path.basename('../Data/Dreams/vickie1.txt')[:-4]
print(basename) | Assignments-colab/ASSIGNMENT_RESIT_A.ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
Define a function called process_all_txt_files that has the following parameters:
* nlp: the result of calling spacy.load('en_core_web_sm')
* input_folder: a string (we will test it using '../Data/Dreams')
The function:
* obtains a list of txt paths using the function get_paths with input_folder as an argument
* loops through the txt paths one by one
* for each iteration, the extract_statistics function is called with txt_path as an argument
The function returns a dictionary:
* the keys are the basenames of the txt files (see tip 1 of this exercise)
* the values are the output of calling the function extract_statistics for a specific file
Test your function using '../Data/Dreams' as a value for the parameter input_folder. | def process_all_txt_files(nlp, input_folder):
"""
given a list of txt_paths
-process each with the extract_statistics function
:param nlp: loaded spaCy model (result of calling spacy.load('en_core_web_sm'))
:param list txt_paths: list of paths to txt files
:rtype: dict
:return: dictionary mapping:
-basename -> output of extract_statistics function
"""
basename_to_stats = process_all_txt_files(nlp, input_folder='../Data/Dreams')
basename_to_stats | Assignments-colab/ASSIGNMENT_RESIT_A.ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
Exercise 6: write to disk
In this exercise, you are going to write the results to disk.
Please loop through basename_to_stats and create one JSON file for each dream.
the path is f'{basename}.json', i.e., 'vickie1.json', 'vickie2.json', etc. (please write them to the same folder as this notebook)
the content of each JSON file is the corresponding value in basename_to_stats | import json
for basename, stats in basename_to_stats.items():
pass | Assignments-colab/ASSIGNMENT_RESIT_A.ipynb | evanmiltenburg/python-for-text-analysis | apache-2.0 |
The result is a NumPy Record Array where the fields of the array correspond to the properties of a VesselTubeSpatialObjectPoint. | print(type(tubes))
print(tubes.dtype) | examples/TubeNumPyArrayAndPropertyHistograms.ipynb | KitwareMedical/ITKTubeTK | apache-2.0 |
The length of the array corresponds to the number of points that make up the tubes. | print(len(tubes))
print(tubes.shape) | examples/TubeNumPyArrayAndPropertyHistograms.ipynb | KitwareMedical/ITKTubeTK | apache-2.0 |
Individual points can be sliced, or views can be created on individual fields. | print('Entire points 0, 2:')
print(tubes[:4:2])
print('\nPosition of points 0, 2')
print(tubes['PositionInWorldSpace'][:4:2]) | examples/TubeNumPyArrayAndPropertyHistograms.ipynb | KitwareMedical/ITKTubeTK | apache-2.0 |
We can easily create a histogram of the radii or visualize the point positions. | %pylab inline
from mpl_toolkits.mplot3d import Axes3D
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(16, 6))
ax = fig.add_subplot(1, 2, 1)
ax.hist(tubes['RadiusInWorldSpace'], bins=100)
ax.set_xlabel('Radius')
ax.set_ylabel('Count')
ax = fig.add_subplot(1, 2, 2, projection='3d')
subsample = 100
position = tubes['PositionInWorldSpace'][::subsample]
radius = tubes['RadiusInWorldSpace'][::subsample]
ax.scatter(position[:,0], position[:,1], position[:,2], s=(2*radius)**2)
ax.set_title('Point Positions')
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z'); | examples/TubeNumPyArrayAndPropertyHistograms.ipynb | KitwareMedical/ITKTubeTK | apache-2.0 |
Finally, let's code the models! The tf.keras API accepts a list of layers when building a model, so we can create a dictionary of layer lists for the different model types we want to use. The file below has three functions: get_layers, build_model, and train_and_evaluate. We will build the structure of our models in get_layers. Last but not least, we'll copy over the training code from the previous lab into train_and_evaluate.
TODO 1: Define the Keras layers for a DNN model
TODO 2: Define the Keras layers for a dropout model
TODO 3: Define the Keras layers for a CNN model
Hint: These models progressively build on each other. Look at the imported tensorflow.keras.layers classes and the default values for the variables defined in get_layers for guidance (one possible arrangement is sketched after the file below). | %%writefile mnist_models/trainer/model.py
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
from tensorflow.keras import Sequential
from tensorflow.keras.callbacks import TensorBoard
from tensorflow.keras.layers import (
Conv2D, Dense, Dropout, Flatten, MaxPooling2D, Softmax)
from . import util
# Image Variables
WIDTH = 28
HEIGHT = 28
def get_layers(
model_type,
nclasses=10,
hidden_layer_1_neurons=400,
hidden_layer_2_neurons=100,
dropout_rate=0.25,
num_filters_1=64,
kernel_size_1=3,
pooling_size_1=2,
num_filters_2=32,
kernel_size_2=3,
pooling_size_2=2):
"""Constructs layers for a keras model based on a dict of model types."""
model_layers = {
'linear': [
Flatten(),
Dense(nclasses),
Softmax()
],
'dnn': [
# TODO
],
'dnn_dropout': [
# TODO
],
'cnn': [
# TODO
]
}
return model_layers[model_type]
def build_model(layers, output_dir):
"""Compiles keras model for image classification."""
model = Sequential(layers)
model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
return model
def train_and_evaluate(model, num_epochs, steps_per_epoch, output_dir):
"""Compiles keras model and loads data into it for training."""
mnist = tf.keras.datasets.mnist.load_data()
train_data = util.load_dataset(mnist)
validation_data = util.load_dataset(mnist, training=False)
callbacks = []
if output_dir:
tensorboard_callback = TensorBoard(log_dir=output_dir)
callbacks = [tensorboard_callback]
history = model.fit(
train_data,
validation_data=validation_data,
epochs=num_epochs,
steps_per_epoch=steps_per_epoch,
verbose=2,
callbacks=callbacks)
if output_dir:
export_path = os.path.join(output_dir, 'keras_export')
model.save(export_path, save_format='tf')
return history
| courses/machine_learning/deepdive2/image_classification/labs/2_mnist_models.ipynb | turbomanage/training-data-analyst | apache-2.0 |
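For reference, one possible way to fill in the three TODO entries is sketched below. This is not necessarily the official lab solution; it simply reuses the default hyperparameters already defined in get_layers (400/100 hidden units, 0.25 dropout, 64/32 filters), so the names refer to the function arguments and module constants above:

```
# Sketch only -- these lists would replace the '# TODO' placeholders in model_layers.
'dnn': [
    Flatten(),
    Dense(hidden_layer_1_neurons, activation='relu'),
    Dense(hidden_layer_2_neurons, activation='relu'),
    Dense(nclasses),
    Softmax()
],
'dnn_dropout': [
    Flatten(),
    Dense(hidden_layer_1_neurons, activation='relu'),
    Dense(hidden_layer_2_neurons, activation='relu'),
    Dropout(dropout_rate),
    Dense(nclasses),
    Softmax()
],
'cnn': [
    Conv2D(num_filters_1, kernel_size=kernel_size_1, activation='relu',
           input_shape=(WIDTH, HEIGHT, 1)),
    MaxPooling2D(pooling_size_1),
    Conv2D(num_filters_2, kernel_size=kernel_size_2, activation='relu'),
    MaxPooling2D(pooling_size_2),
    Flatten(),
    Dense(hidden_layer_1_neurons, activation='relu'),
    Dense(hidden_layer_2_neurons, activation='relu'),
    Dropout(dropout_rate),
    Dense(nclasses),
    Softmax()
]
```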
Now that we know that our models are working as expected, let's run them on Google Cloud AI Platform. We can first run the code locally as a Python module from the command line.
The cell below transfers some of our variables to the command line and creates a job directory name that includes a timestamp. This is where our model and TensorBoard data will be stored. | current_time = datetime.now().strftime("%y%m%d_%H%M%S")
model_type = 'cnn'
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "mnist_models/models/{}_{}/".format(
model_type, current_time) | courses/machine_learning/deepdive2/image_classification/labs/2_mnist_models.ipynb | turbomanage/training-data-analyst | apache-2.0 |
Training on the cloud
Since we're using an unreleased version of TensorFlow on AI Platform, we can instead use a Deep Learning Container in order to take advantage of libraries and applications not normally packaged with AI Platform. Below is a simple Dockerfile which copies our code to be used in a TF2 environment. | %%writefile mnist_models/Dockerfile
FROM gcr.io/deeplearning-platform-release/tf2-cpu
COPY mnist_models/trainer /mnist_models/trainer
ENTRYPOINT ["python3", "-m", "mnist_models.trainer.task"] | courses/machine_learning/deepdive2/image_classification/labs/2_mnist_models.ipynb | turbomanage/training-data-analyst | apache-2.0 |
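Note that the job submission below references $IMAGE_URI, which is not defined in this section. A hedged sketch of how it might be set is shown here: the image name and the PROJECT variable are assumptions (PROJECT would normally come from the notebook's setup cells), and the image still has to be built and pushed before the job runs.

```python
# Hypothetical image URI for the container built from the Dockerfile above.
IMAGE_URI = "gcr.io/{}/mnist_models".format(PROJECT)  # PROJECT assumed to be set earlier
os.environ["IMAGE_URI"] = IMAGE_URI
# Build and push the image before submitting the job, e.g.:
#   docker build -f mnist_models/Dockerfile -t $IMAGE_URI . && docker push $IMAGE_URI
```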
Finally, we can kick off the AI Platform training job, passing in our Docker image with the --master-image-uri flag.
model_type = 'cnn'
os.environ["MODEL_TYPE"] = model_type
os.environ["JOB_DIR"] = "gs://{}/mnist_{}_{}/".format(
BUCKET, model_type, current_time)
os.environ["JOB_NAME"] = "mnist_{}_{}".format(
model_type, current_time)
%%bash
echo $JOB_DIR $REGION $JOB_NAME
gcloud ai-platform jobs submit training $JOB_NAME \
--staging-bucket=gs://$BUCKET \
--region=$REGION \
--master-image-uri=$IMAGE_URI \
--scale-tier=BASIC_GPU \
--job-dir=$JOB_DIR \
-- \
--model_type=$MODEL_TYPE | courses/machine_learning/deepdive2/image_classification/labs/2_mnist_models.ipynb | turbomanage/training-data-analyst | apache-2.0 |
Can't wait to see the results? Run the code below and copy the output into the Google Cloud Shell to follow along with TensorBoard. Look at the web preview on port 6006. | !echo "tensorboard --logdir $JOB_DIR" | courses/machine_learning/deepdive2/image_classification/labs/2_mnist_models.ipynb | turbomanage/training-data-analyst | apache-2.0 |
Deploying and predicting with the model
Once you have a model you're proud of, let's deploy it! All we need to do is give AI Platform the location of the model. The cell below uses the Keras export path of the previous job, but ${JOB_DIR}keras_export/ can always be changed to a different path.
Even though we're using a 1.14 runtime, it's compatible with TF2 exported models. Phew!
Uncomment the delete commands below if you are getting an "already exists" error and want to deploy a new model.
MODEL_NAME="mnist"
MODEL_VERSION=${MODEL_TYPE}
MODEL_LOCATION=${JOB_DIR}keras_export/
echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes"
#yes | gcloud ai-platform versions delete ${MODEL_VERSION} --model ${MODEL_NAME}
#yes | gcloud ai-platform models delete ${MODEL_NAME}
gcloud ai-platform models create ${MODEL_NAME} --regions $REGION
gcloud ai-platform versions create ${MODEL_VERSION} \
--model ${MODEL_NAME} \
--origin ${MODEL_LOCATION} \
--framework tensorflow \
--runtime-version=1.14 | courses/machine_learning/deepdive2/image_classification/labs/2_mnist_models.ipynb | turbomanage/training-data-analyst | apache-2.0 |
To predict with the model, let's take one of the example images.
TODO 4: Write a .json file with image data to send to an AI Platform deployed model | import json, codecs
import tensorflow as tf
import matplotlib.pyplot as plt
from mnist_models.trainer import util
HEIGHT = 28
WIDTH = 28
IMGNO = 12
mnist = tf.keras.datasets.mnist.load_data()
(x_train, y_train), (x_test, y_test) = mnist
test_image = x_test[IMGNO]
jsondata = test_image.reshape(HEIGHT, WIDTH, 1).tolist()
json.dump(jsondata, codecs.open("test.json", "w", encoding = "utf-8"))
plt.imshow(test_image.reshape(HEIGHT, WIDTH)); | courses/machine_learning/deepdive2/image_classification/labs/2_mnist_models.ipynb | turbomanage/training-data-analyst | apache-2.0 |
Finally, we can send it to the prediction service. The output is a probability for each digit class, with the highest value (close to 1) at the index of the digit the model predicts. Congrats! You've completed the lab!
gcloud ai-platform predict \
--model=mnist \
--version=${MODEL_TYPE} \
--json-instances=./test.json | courses/machine_learning/deepdive2/image_classification/labs/2_mnist_models.ipynb | turbomanage/training-data-analyst | apache-2.0 |
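If you would rather call the prediction service from Python than shell out to gcloud, a sketch along these lines should work. It assumes PROJECT is defined in the notebook's setup and that the google-api-python-client library is available; the exact shape of each prediction (a plain list versus a dict keyed by output tensor name) depends on the exported serving signature, so the snippet handles both.

```python
import numpy as np
from googleapiclient import discovery

service = discovery.build('ml', 'v1')
name = 'projects/{}/models/{}/versions/{}'.format(PROJECT, 'mnist', model_type)
response = service.projects().predict(
    name=name, body={'instances': [jsondata]}).execute()

pred = response['predictions'][0]
# The prediction may be a bare list of probabilities or a dict of named outputs.
probs = np.array(list(pred.values())[0]) if isinstance(pred, dict) else np.array(pred)
print('Predicted digit:', probs.argmax())
```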
Otto Product Classification Data
Training and test data is available here. Go ahead and download the data. Inspecting it, you will see that the provided csv files consist of an id column, 93 integer feature columns. train.csv has an additional column for labels, which test.csv is missing. The challenge is to accurately predict test labels. For the rest of this notebook, we will assume data is stored at data_path, which you should modify below as needed. | data_path = "./" # <-- Make sure to adapt this to where your csv files are. | examples/Spark_ML_Pipeline.ipynb | maxpumperla/elephas | mit |
Loading data is relatively simple, but we have to take care of a few things. First, while you can shuffle the rows of an RDD, it is generally not very efficient. But since the data in train.csv is sorted by category, we'll have to shuffle in order to make the model perform well. This is what the function shuffle_csv below is for. Next, we read in plain text in load_data_frame, split lines by comma and convert the features to a float vector type. Also, note that the last column in train.csv represents the category, which has a Class_ prefix.
Defining Data Frames
Spark has a few core data structures, among them the data frame, which is a distributed version of the named columnar data structure many will know from either R or Pandas. We need a so-called SQLContext and an optional column-name mapping to create a data frame from scratch.
from pyspark.ml.linalg import Vectors
import numpy as np
import random
sql_context = SQLContext(sc)
def shuffle_csv(csv_file):
lines = open(csv_file).readlines()
random.shuffle(lines)
open(csv_file, 'w').writelines(lines)
def load_data_frame(csv_file, shuffle=True, train=True):
if shuffle:
shuffle_csv(csv_file)
data = sc.textFile(data_path + csv_file) # This is an RDD, which will later be transformed to a data frame
data = data.filter(lambda x:x.split(',')[0] != 'id').map(lambda line: line.split(','))
if train:
data = data.map(
lambda line: (Vectors.dense(np.asarray(line[1:-1]).astype(np.float32)),
str(line[-1])) )
else:
# Test data gets dummy labels. We need the same structure as in Train data
data = data.map( lambda line: (Vectors.dense(np.asarray(line[1:]).astype(np.float32)),"Class_1") )
    return sql_context.createDataFrame(data, ['features', 'category'])
| examples/Spark_ML_Pipeline.ipynb | maxpumperla/elephas | mit |
Let's load both train and test data and print a few rows of data using the convenient show method. | train_df = load_data_frame("train.csv")
test_df = load_data_frame("test.csv", shuffle=False, train=False) # No need to shuffle test data
print("Train data frame:")
train_df.show(10)
print("Test data frame (note the dummy category):")
test_df.show(10) | examples/Spark_ML_Pipeline.ipynb | maxpumperla/elephas | mit |
Preprocessing: Defining Transformers
Up until now, we basically just read in raw data. Luckily, Spark ML has quite a few preprocessing features available, so the only thing we will ever have to do is define transformations of data frames.
To proceed, we will first transform category strings to double values. This is done by a so-called StringIndexer. Note that we carry out the actual transformation here already, but that is just for demonstration purposes. All we really need is to define string_indexer to put it into a pipeline later on.
string_indexer = StringIndexer(inputCol="category", outputCol="index_category")
fitted_indexer = string_indexer.fit(train_df)
indexed_df = fitted_indexer.transform(train_df) | examples/Spark_ML_Pipeline.ipynb | maxpumperla/elephas | mit |
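As a small optional check (added here for illustration), we can look at the distinct category-to-index pairs the fitted indexer produced:

```python
indexed_df.select('category', 'index_category').distinct().orderBy('index_category').show()
```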
Next, it's good practice to normalize the features, which is done with a StandardScaler. | from pyspark.ml.feature import StandardScaler
scaler = StandardScaler(inputCol="features", outputCol="scaled_features", withStd=True, withMean=True)
fitted_scaler = scaler.fit(indexed_df)
scaled_df = fitted_scaler.transform(indexed_df)
print("The result of indexing and scaling. Each transformation adds new columns to the data frame:")
scaled_df.show(10) | examples/Spark_ML_Pipeline.ipynb | maxpumperla/elephas | mit |
Keras Deep Learning model
Now that we have a data frame with processed features and labels, let's define a deep neural net that we can use to address the classification problem. Chances are you came here because you know a thing or two about deep learning. If so, the model below will look very straightforward to you. We build a Keras model by choosing three consecutive Dense layers with dropout and ReLU activations, followed by a softmax output layer. There are certainly much better architectures for the problem out there, but we really just want to demonstrate the general flow here.
from tensorflow.keras.layers import Dense, Dropout, Activation
from tensorflow.keras.utils import to_categorical
nb_classes = train_df.select("category").distinct().count()
input_dim = len(train_df.select("features").first()[0])
model = Sequential()
model.add(Dense(512, input_shape=(input_dim,)))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(512))
model.add(Activation('relu'))
model.add(Dropout(0.5))
model.add(Dense(nb_classes))
model.add(Activation('softmax'))
model.compile(loss='categorical_crossentropy', optimizer='adam') | examples/Spark_ML_Pipeline.ipynb | maxpumperla/elephas | mit |
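A quick way to check the architecture and parameter count before handing the model to Spark is to print its summary:

```python
model.summary()
```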
Distributed Elephas model
To lift the above Keras model to Spark, we define an Estimator on top of it. An Estimator is Spark's incarnation of a model that still has to be trained. It essentially comes with only a single (required) method, namely fit. Once we call fit on a data frame, we get back a Model, which is a trained model with a transform method to predict labels.
We do this by initializing an ElephasEstimator and setting a few properties. Since by now our input data frame will have many columns, we have to tell the model where to find features and labels by column name. Then we provide a serialized version of our Keras model. We cannot plug Keras models into the Estimator directly, as Spark will have to serialize them anyway for communication with workers, so it's better to provide the serialization ourselves. In fact, while pyspark knows how to serialize models, it is extremely inefficient and can break if models become too large. Spark ML is especially picky (and rightly so) about parameters and more or less prohibits you from providing non-atomic types and arrays of the latter. Most of the remaining parameters are optional and rather self-explanatory. Plus, many of them you will know if you have ever run a Keras model before. We just include them here to show the full set of training configuration options.
from tensorflow.keras import optimizers
adam = optimizers.Adam(lr=0.01)
opt_conf = optimizers.serialize(adam)
# Initialize SparkML Estimator and set all relevant properties
estimator = ElephasEstimator()
estimator.setFeaturesCol("scaled_features") # These two come directly from pyspark,
estimator.setLabelCol("index_category") # hence the camel case. Sorry :)
estimator.set_keras_model_config(model.to_yaml()) # Provide serialized Keras model
estimator.set_categorical_labels(True)
estimator.set_nb_classes(nb_classes)
estimator.set_num_workers(1) # We just use one worker here. Feel free to adapt it.
estimator.set_epochs(20)
estimator.set_batch_size(128)
estimator.set_verbosity(1)
estimator.set_validation_split(0.15)
estimator.set_optimizer_config(opt_conf)
estimator.set_mode("synchronous")
estimator.set_loss("categorical_crossentropy")
estimator.set_metrics(['acc']) | examples/Spark_ML_Pipeline.ipynb | maxpumperla/elephas | mit |
SparkML Pipelines
Now for the easy part: Defining pipelines is really as easy as listing pipeline stages. We can provide any configuration of Transformers and Estimators really, but here we simply take the three components defined earlier. Note that string_indexer and scaler are interchangeable, while estimator somewhat obviously has to come last in the pipeline.
pipeline = Pipeline(stages=[string_indexer, scaler, estimator]) | examples/Spark_ML_Pipeline.ipynb | maxpumperla/elephas | mit |
Fitting and evaluating the pipeline
The last step now is to fit the pipeline on training data and evaluate it. We evaluate, i.e. transform, on training data, since only in that case do we have labels to check the accuracy of the model. If you like, you could transform the test_df as well.
fitted_pipeline = pipeline.fit(train_df) # Fit model to data
prediction = fitted_pipeline.transform(train_df) # Evaluate on train data.
# prediction = fitted_pipeline.transform(test_df) # <-- The same code evaluates test data.
pnl = prediction.select("index_category", "prediction")
pnl.show(100)
prediction_and_label = pnl.rdd.map(lambda row: (row.index_category, row.prediction))
metrics = MulticlassMetrics(prediction_and_label)
print(metrics.precision()) | examples/Spark_ML_Pipeline.ipynb | maxpumperla/elephas | mit |
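For an actual submission on the test set you would also want the original Class_* labels rather than numeric indices. The sketch below assumes the first stage of the fitted pipeline is the fitted StringIndexerModel, as defined above:

```python
from pyspark.ml.feature import IndexToString

fitted_indexer_model = fitted_pipeline.stages[0]  # the fitted StringIndexer stage
label_converter = IndexToString(inputCol="prediction", outputCol="predicted_category",
                                labels=fitted_indexer_model.labels)
test_prediction = fitted_pipeline.transform(test_df)
label_converter.transform(test_prediction).select("predicted_category").show(10)
```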
Load an interaction history | history_path = os.path.join('data', 'assistments_2009_2010.pkl')
with open(history_path, 'rb') as f:
history = pickle.load(f)
df = history.data | nb/model_explorations.ipynb | rddy/lentil | apache-2.0 |
Train an embedding model on the interaction history and visualize the results | embedding_dimension = 2
model = models.EmbeddingModel(
history,
embedding_dimension,
using_prereqs=True,
using_lessons=True,
using_bias=True,
learning_update_variance_constant=0.5)
estimator = est.EmbeddingMAPEstimator(
regularization_constant=1e-3,
using_scipy=True,
verify_gradient=False,
debug_mode_on=True,
ftol=1e-3)
model.fit(estimator)
print "Training AUC = %f" % (evaluate.training_auc(
model, history, plot_roc_curve=True))
split_history = history.split_interactions_by_type()
timestep_of_last_interaction = split_history.timestep_of_last_interaction
NUM_STUDENTS_TO_SAMPLE = 10
for student_id in random.sample(df['student_id'].unique(), NUM_STUDENTS_TO_SAMPLE):
student_idx = history.idx_of_student_id(student_id)
timesteps = range(1, timestep_of_last_interaction[student_id]+1)
    for i in range(model.embedding_dimension):
plt.plot(timesteps, model.student_embeddings[student_idx, i, timesteps],
label='Skill %d' % (i+1))
norms = np.linalg.norm(model.student_embeddings[student_idx, :, timesteps], axis=1)
plt.plot(timesteps, norms, label='norm')
plt.title('student_id = %s' % student_id)
plt.xlabel('Timestep')
plt.ylabel('Skill')
plt.legend(loc='upper right')
plt.show()
assessment_norms = np.linalg.norm(model.assessment_embeddings, axis=1)
plt.xlabel('Assessment embedding norm')
plt.ylabel('Frequency (number of assessments)')
plt.hist(assessment_norms, bins=20)
plt.show()
def get_pass_rates(grouped):
"""
Get pass rate for each group
:param pd.GroupBy grouped: A grouped dataframe
:rtype: dict[str, float]
:return: A dictionary mapping group name to pass rate
"""
pass_rates = {}
for name, group in grouped:
vc = group['outcome'].value_counts()
if True not in vc:
pass_rates[name] = 0
else:
            pass_rates[name] = float(vc[True]) / len(group)
return pass_rates
grouped = df[df['module_type']==datatools.AssessmentInteraction.MODULETYPE].groupby('module_id')
pass_rates = get_pass_rates(grouped)
assessment_norms = [np.linalg.norm(model.assessment_embeddings[history.idx_of_assessment_id(assessment_id), :]) for assessment_id in pass_rates]
plt.xlabel('Assessment pass rate')
plt.ylabel('Assessment embedding norm')
plt.scatter(pass_rates.values(), assessment_norms)
plt.show()
grouped = df[df['module_type']==datatools.AssessmentInteraction.MODULETYPE].groupby('module_id')
pass_rates = get_pass_rates(grouped)
bias_minus_norm = [model.assessment_biases[history.idx_of_assessment_id(
assessment_id)] - np.linalg.norm(
model.assessment_embeddings[history.idx_of_assessment_id(
assessment_id), :]) for assessment_id in pass_rates]
plt.xlabel('Assessment pass rate')
plt.ylabel('Assessment bias - Assessment embedding norm')
plt.scatter(pass_rates.values(), bias_minus_norm)
plt.show()
grouped = df[df['module_type']==datatools.AssessmentInteraction.MODULETYPE].groupby('student_id')
pass_rates = get_pass_rates(grouped)
biases = [model.student_biases[history.idx_of_student_id(
student_id)] for student_id in pass_rates]
plt.xlabel('Student pass rate')
plt.ylabel('Student bias')
plt.scatter(pass_rates.values(), biases)
plt.show()
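# Added illustration (not in the original notebook): quantify the relationship shown
# in the scatter plot above with a Pearson correlation. Assumes scipy is available.
from scipy import stats
r, p = stats.pearsonr(list(pass_rates.values()), biases)
print("Pearson r between student pass rate and student bias: %.3f (p = %.3g)" % (r, p))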
lesson_norms = np.linalg.norm(model.lesson_embeddings, axis=1)
plt.xlabel('Lesson embedding norm')
plt.ylabel('Frequency (number of lessons)')
plt.hist(lesson_norms, bins=20)
plt.show()
prereq_norms = np.linalg.norm(model.prereq_embeddings, axis=1)
plt.xlabel('Prereq embedding norm')
plt.ylabel('Frequency (number of lessons)')
plt.hist(prereq_norms, bins=20)
plt.show()
plt.xlabel('Lesson embedding norm')
plt.ylabel('Prereq embedding norm')
plt.scatter(prereq_norms, lesson_norms)
plt.show()
timesteps = range(model.student_embeddings.shape[2])
avg_student_norms = np.array(np.linalg.norm(np.mean(model.student_embeddings, axis=0), axis=0))
plt.xlabel('Timestep')
plt.ylabel('Average student embedding norm')
plt.plot(timesteps, avg_student_norms)
plt.show() | nb/model_explorations.ipynb | rddy/lentil | apache-2.0 |