This displays the call that produced the object fit and a three-column matrix with columns Df (the number of nonzero coefficients), %dev (the percent deviance explained) and Lambda (the corresponding value of $\lambda$). (Note that the digits option can be used to specify significant digits in the printout.) Here the actual number of $\lambda$'s is less than the number specified in the call. The reason lies in the stopping criteria of the algorithm. According to the default internal settings, the computations stop if either the fractional change in deviance down the path is less than $10^{-5}$ or the fraction of explained deviance reaches $0.999$. From the last few lines, we see the fraction of deviance does not change much, so the computation ends when the stopping criteria are met. We can change such internal parameters. For details, see the Appendix section or type help(glmnet.control).

We can plot the fitted object as in the previous section. There are more options in the plot function. Users can decide what is on the X-axis. xvar allows three measures: "norm" for the $\ell_1$-norm of the coefficients (default), "lambda" for the log-lambda value and "dev" for %deviance explained. Users can also label the curves with variable sequence numbers simply by setting label = True. Let's plot "fit" against the log-lambda value, with each curve labeled.
glmnetPlot(fit, xvar = 'lambda', label = True);
Now when we plot against %deviance we get a very different picture. This is the percent deviance explained on the training data. What we see here is that toward the end of the path this value is not changing much, but the coefficients are "blowing up" a bit. This lets us focus attention on the parts of the fit that matter. This will especially be true for other models, such as logistic regression.
glmnetPlot(fit, xvar = 'dev', label = True);
We can extract the coefficients and make predictions at certain values of $\lambda$. Two commonly used options are:

* s specifies the value(s) of $\lambda$ at which extraction is made.
* exact indicates whether the exact values of coefficients are desired or not. That is, if exact = True, and predictions are to be made at values of s not included in the original fit, these values of s are merged with the lambda sequence of the fitted object, and the model is refit before predictions are made. If exact = False (the default), then the predict function uses linear interpolation to make predictions for values of s that do not coincide with the lambdas used in the fitting algorithm.

A simple example is:
any(fit['lambdau'] == 0.5)
glmnetCoef(fit, s = scipy.float64([0.5]), exact = False)
The output is for exact = False. (The exact = True option is not yet implemented.) Users can make predictions from the fitted object. In addition to the options in coef, the primary argument is newx, a matrix of new values for x. The type option allows users to choose the type of prediction:

* "link" gives the fitted values.
* "response" is the same as "link" for the "gaussian" family.
* "coefficients" computes the coefficients at values of s.
* "nonzero" returns a list of the indices of the nonzero coefficients for each value of s.

For example,
fc = glmnetPredict(fit, x[0:5,:], ptype = 'response',
                   s = scipy.float64([0.05]))
print(fc)
gives the fitted values for the first 5 observations at $\lambda = 0.05$. If multiple values of s are supplied, a matrix of predictions is produced. Users can customize K-fold cross-validation. In addition to all the glmnet parameters, cvglmnet has its own special parameters, including nfolds (the number of folds), foldid (user-supplied folds), and ptype (the loss used for cross-validation):

* "deviance" or "mse" uses squared loss.
* "mae" uses mean absolute error.

As an example,
warnings.filterwarnings('ignore')
cvfit = cvglmnet(x = x.copy(), y = y.copy(), ptype = 'mse', nfolds = 20)
warnings.filterwarnings('default')
does 20-fold cross-validation based on the mean squared error criterion (the default for the Gaussian family). Parallel computing is also supported by cvglmnet. Parallel processing is turned off by default; it can be turned on using parallel = True in the cvglmnet call (a sketch appears after the next cell). Parallel computing can significantly speed up the computation process, especially for large-scale problems, but for smaller problems it could result in a reduction in speed due to the additional overhead. User discretion is advised. The coef and predict functions on a cvglmnet object are similar to those for a glmnet object, except that two special strings are also supported by s (the values of $\lambda$ requested):

* "lambda_1se": the largest $\lambda$ at which the MSE is within one standard error of the minimal MSE.
* "lambda_min": the $\lambda$ at which the minimal MSE is achieved.
print(cvfit['lambda_min'])
print(cvglmnetCoef(cvfit, s = 'lambda_min'))
cvglmnetPredict(cvfit, newx = x[0:5,], s = 'lambda_min')
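As noted above, cross-validation can be parallelized. A minimal sketch (assuming the parallel = True switch described in the text; cvfitp is a name chosen here for illustration):

```python
# Hedged sketch: parallel = True is assumed to enable multi-core CV,
# as described in the text above.
warnings.filterwarnings('ignore')
cvfitp = cvglmnet(x = x.copy(), y = y.copy(), ptype = 'mse',
                  nfolds = 20, parallel = True)
warnings.filterwarnings('default')
```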
Users can control the folds used. Here we use the same folds so we can also select a value for $\alpha$.
foldid = scipy.random.choice(10, size = y.shape[0], replace = True)
cv1 = cvglmnet(x = x.copy(), y = y.copy(), foldid = foldid, alpha = 1)
cv0p5 = cvglmnet(x = x.copy(), y = y.copy(), foldid = foldid, alpha = 0.5)
cv0 = cvglmnet(x = x.copy(), y = y.copy(), foldid = foldid, alpha = 0)
There are no built-in plot functions to put them all on the same plot, so we are on our own here:
f = plt.figure()
f.add_subplot(2, 2, 1)
cvglmnetPlot(cv1)
f.add_subplot(2, 2, 2)
cvglmnetPlot(cv0p5)
f.add_subplot(2, 2, 3)
cvglmnetPlot(cv0)
f.add_subplot(2, 2, 4)
# successive plot calls draw on the same axes (plt.hold was removed
# from recent matplotlib versions and is not needed)
plt.plot(scipy.log(cv1['lambdau']), cv1['cvm'], 'r.')
plt.plot(scipy.log(cv0p5['lambdau']), cv0p5['cvm'], 'g.')
plt.plot(scipy.log(cv0['lambdau']), cv0['cvm'], 'b.')
plt.xlabel('log(Lambda)')
plt.ylabel(cv1['name'])
plt.xlim(-6, 4)
plt.ylim(0, 9)
plt.legend(('alpha = 1', 'alpha = 0.5', 'alpha = 0'),
           loc = 'upper left', prop = {'size': 6});
We see that lasso (alpha=1) does about the best here. We also see that the range of lambdas used differs with alpha.

Coefficient upper and lower bounds

These are recently added features that enhance the scope of the models. Suppose we want to fit our model but limit the coefficients to be bigger than -0.7 and less than 0.5. In this port the bounds are supplied through the cl argument (a 2 x 1 array of lower and upper limits, the analogue of R's lower.limits and upper.limits):
cl = scipy.array([[-0.7], [0.5]], dtype = scipy.float64)
tfit = glmnet(x = x.copy(), y = y.copy(), cl = cl)
glmnetPlot(tfit);
These are rather arbitrary limits; often we want the coefficients to be positive, so we can set only the lower limit to be 0. (Note, the lower limit must be no bigger than zero, and the upper limit no smaller than zero.) These bounds can be a vector, with different values for each coefficient. If given as a scalar, the same number gets recycled for all.

Penalty factors

This argument allows users to apply separate penalty factors to each coefficient. Its default is 1 for each parameter, but other values can be specified. In particular, any variable with penalty_factor equal to zero is not penalized at all! Let $v_j$ denote the penalty factor for the $j$th variable. The penalty term becomes
$$ \lambda \sum_{j=1}^p \boldsymbol{v_j} P_\alpha(\beta_j) = \lambda \sum_{j=1}^p \boldsymbol{v_j} \left[ (1-\alpha)\frac{1}{2} \beta_j^2 + \alpha |\beta_j| \right]. $$
Note the penalty factors are internally rescaled to sum to nvars. This is very useful when users have prior knowledge of, or a preference over, the variables. In many cases, some variables may be so important that one wants to keep them all the time, which can be achieved by setting the corresponding penalty factors to 0:
pfac = scipy.ones([1, 20])
pfac[0, 4] = 0; pfac[0, 9] = 0; pfac[0, 14] = 0
pfit = glmnet(x = x.copy(), y = y.copy(), penalty_factor = pfac)
glmnetPlot(pfit, label = True);
We see from the labels that the three variables with 0 penalty factors always stay in the model, while the others follow typical regularization paths and are shrunken to 0 eventually.

Some other useful arguments. exclude allows one to block certain variables from being in the model at all. Of course, one could simply subset these out of x, but sometimes exclude is more useful, since it returns a full vector of coefficients, just with the excluded ones set to zero. There is also an intercept argument which defaults to True; if False the intercept is forced to be zero.

Customizing plots

Sometimes, especially when the number of variables is small, we want to add variable labels to a plot. Since glmnet is intended primarily for wide data, this is not supported in glmnetPlot. However, it is easy to do, as the following little toy example shows. We first generate some data, with 10 variables, and for lack of imagination and ease we give them simple character names. We then fit a glmnet model, and make the standard plot.
scipy.random.seed(101)
x = scipy.random.rand(100, 10)
y = scipy.random.rand(100, 1)
fit = glmnet(x = x, y = y)
glmnetPlot(fit);
We wish to label the curves with the variable names. Here's a simple way to do this, using the matplotlib library in python (and a little research into how to customize it). We need to have the positions of the coefficients at the end of the path.
%%capture
# Output from this sample code has been suppressed due to (possible) Jupyter limitations
# The code works just fine from ipython (tested on spyder)
c = glmnetCoef(fit)
c = c[1:, -1]  # remove intercept and get the coefficients at the end of the path
h = glmnetPlot(fit)
ax1 = h['ax1']
xloc = plt.xlim()
xloc = xloc[1]
for i in range(len(c)):
    ax1.text(xloc, c[i], 'var' + str(i));
We have done nothing here to avoid overwriting of labels, in the event that they are close together. This would be a bit more work, but perhaps best left alone, anyway.

Linear Regression - Multiresponse Gaussian Family

The multiresponse Gaussian family is obtained using the family = "mgaussian" option in glmnet. It is very similar to the single-response case above. This is useful when there are a number of (correlated) responses - the so-called "multi-task learning" problem. Here the sharing involves which variables are selected, since when a variable is selected, a coefficient is fit for each response. Most of the options are the same, so we focus here on the differences with the single response model. Obviously, as the name suggests, $y$ is not a vector, but a matrix of quantitative responses in this section. The coefficients at each value of lambda are also a matrix as a result. Here we solve the following problem:
$$ \min_{(\beta_0, \beta) \in \mathbb{R}^{(p+1)\times K}}\frac{1}{2N} \sum_{i=1}^N ||y_i -\beta_0-\beta^T x_i||^2_F+\lambda \left[ (1-\alpha)||\beta||_F^2/2 + \alpha\sum_{j=1}^p||\beta_j||_2\right]. $$
Here, $\beta_j$ is the $j$th row of the $p\times K$ coefficient matrix $\beta$, and we replace the absolute penalty on each single coefficient by a group-lasso penalty on each coefficient $K$-vector $\beta_j$ for a single predictor $x_j$. We use a set of data generated beforehand for illustration.
# Import relevant modules and setup for calling glmnet
%reset -f
%matplotlib inline

import sys
sys.path.append('../test')
sys.path.append('../lib')
import scipy, importlib, pprint, matplotlib.pyplot as plt, warnings
from glmnet import glmnet; from glmnetPlot import glmnetPlot
from glmnetPrint import glmnetPrint; from glmnetCoef import glmnetCoef; from glmnetPredict import glmnetPredict
from cvglmnet import cvglmnet; from cvglmnetCoef import cvglmnetCoef
from cvglmnetPlot import cvglmnetPlot; from cvglmnetPredict import cvglmnetPredict

# parameters
baseDataDir = '../data/'

# load data
x = scipy.loadtxt(baseDataDir + 'MultiGaussianExampleX.dat', dtype = scipy.float64, delimiter = ',')
y = scipy.loadtxt(baseDataDir + 'MultiGaussianExampleY.dat', dtype = scipy.float64, delimiter = ',')
We fit the data, with an object "mfit" returned.
mfit = glmnet(x = x.copy(), y = y.copy(), family = 'mgaussian')
For the multiresponse Gaussian, the options in glmnet are almost the same as in the single-response case, such as alpha, weights, nlambda, and standardize. An exception to note is that standardize.response is only for the mgaussian family. The default value is False; if standardize.response = True, the response variables are standardized. To visualize the coefficients, we use the plot function.
glmnetPlot(mfit, xvar = 'lambda', label = True, ptype = '2norm');
Note that we set ptype = '2norm'. Under this setting, a single curve is plotted per variable, with value equal to the $\ell_2$ norm of that variable's coefficient vector. The default setting is ptype = 'coef', where a coefficient plot is created for each response (multiple figures). xvar and label are two other options besides the ordinary graphical parameters; they are the same as in the single-response case. We can extract the coefficients at requested values of $\lambda$ by using the function coef and make predictions by predict. The usage is similar and we only provide an example of predict here.
f = glmnetPredict(mfit, x[0:5,:], s = scipy.float64([0.1, 0.01]))
print(f[:,:,0], '\n')
print(f[:,:,1])
The prediction result is saved in a three-dimensional array, with the first two dimensions being the prediction matrix for each response variable and the third indexing the response variables. We can also do k-fold cross-validation. The options are almost the same as for the ordinary Gaussian family and we do not expand on them here.
warnings.filterwarnings('ignore')
cvmfit = cvglmnet(x = x.copy(), y = y.copy(), family = "mgaussian")
warnings.filterwarnings('default')
We plot the resulting cv.glmnet object "cvmfit".
cvglmnetPlot(cvmfit)
To show explicitly the selected optimal values of $\lambda$, type
print(cvmfit['lambda_min'])
print(cvmfit['lambda_1se'])
As before, the first one is the value at which the minimal mean squared error is achieved and the second is for the most regularized model whose mean squared error is within one standard error of the minimal. Prediction for a cvglmnet object works almost the same as for a glmnet object, so we omit the details here.

Logistic Regression

Logistic regression is another widely-used model when the response is categorical. If there are two possible outcomes, we use the binomial distribution, else we use the multinomial.

Logistic Regression: Binomial Models

For the binomial model, suppose the response variable takes value in $\mathcal{G}=\{1,2\}$. Denote $y_i = I(g_i=1)$. We model
$$ \mbox{Pr}(G=2|X=x)=\frac{e^{\beta_0+\beta^Tx}}{1+e^{\beta_0+\beta^Tx}}, $$
which can be written in the following form
$$ \log\frac{\mbox{Pr}(G=2|X=x)}{\mbox{Pr}(G=1|X=x)}=\beta_0+\beta^Tx, $$
the so-called "logistic" or log-odds transformation. The objective function for the penalized logistic regression uses the negative binomial log-likelihood, and is
$$ \min_{(\beta_0, \beta) \in \mathbb{R}^{p+1}} -\left[\frac{1}{N} \sum_{i=1}^N y_i \cdot (\beta_0 + x_i^T \beta) - \log (1+e^{(\beta_0+x_i^T \beta)})\right] + \lambda \big[ (1-\alpha)||\beta||_2^2/2 + \alpha||\beta||_1\big]. $$
Logistic regression is often plagued with degeneracies when $p > N$ and exhibits wild behavior even when $N$ is close to $p$; the elastic-net penalty alleviates these issues, and regularizes and selects variables as well. Our algorithm uses a quadratic approximation to the log-likelihood, and then coordinate descent on the resulting penalized weighted least-squares problem. These constitute an outer and inner loop. For illustration purposes, we load the pre-generated input matrix x and the response vector y from the data file.
# Import relevant modules and setup for calling glmnet
%reset -f
%matplotlib inline

import sys
sys.path.append('../test')
sys.path.append('../lib')
import scipy, importlib, pprint, matplotlib.pyplot as plt, warnings
from glmnet import glmnet; from glmnetPlot import glmnetPlot
from glmnetPrint import glmnetPrint; from glmnetCoef import glmnetCoef; from glmnetPredict import glmnetPredict
from cvglmnet import cvglmnet; from cvglmnetCoef import cvglmnetCoef
from cvglmnetPlot import cvglmnetPlot; from cvglmnetPredict import cvglmnetPredict

# parameters
baseDataDir = '../data/'

# load data
x = scipy.loadtxt(baseDataDir + 'BinomialExampleX.dat', dtype = scipy.float64, delimiter = ',')
y = scipy.loadtxt(baseDataDir + 'BinomialExampleY.dat', dtype = scipy.float64)
The input matrix $x$ is the same as for other families. For binomial logistic regression, the response variable $y$ should be either a factor with two levels, or a two-column matrix of counts or proportions. Other optional arguments of glmnet for binomial regression are almost the same as those for the Gaussian family. Don't forget to set the family option to "binomial".
fit = glmnet(x = x.copy(), y = y.copy(), family = 'binomial')
Like before, we can print and plot the fitted object, extract the coefficients at specific $\lambda$'s and also make predictions. For plotting, the optional arguments such as xvar and label are similar to the Gaussian. We plot against the deviance explained and show the labels.
glmnetPlot(fit, xvar = 'dev', label = True);
Prediction is a little different for logistic regression than for Gaussian, mainly in the option type. "link" and "response" are no longer equivalent, and "class" is only available for logistic regression. In summary,

* "link" gives the linear predictors.
* "response" gives the fitted probabilities.
* "class" produces the class label corresponding to the maximum probability.
* "coefficients" computes the coefficients at values of s.
* "nonzero" returns a list of the indices of the nonzero coefficients for each value of s.

For "binomial" models, results ("link", "response", "coefficients", "nonzero") are returned only for the class corresponding to the second level of the factor response. In the following example, we make predictions of the class labels at $\lambda = 0.05, 0.01$.
glmnetPredict(fit, newx = x[0:5,], ptype='class', s = scipy.array([0.05, 0.01]))
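For comparison, here is a hedged sketch of requesting the fitted probabilities rather than class labels for the same observations (same fit and values of s as above):

```python
# ptype = 'response' returns fitted probabilities for the binomial family
glmnetPredict(fit, newx = x[0:5,], ptype = 'response',
              s = scipy.array([0.05, 0.01]))
```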
For logistic regression, cvglmnet has similar arguments and usage as in the Gaussian case. nfolds, weights, lambda, and parallel are all available to users. There are some differences in ptype: "deviance" and "mse" are no longer equivalent, and "class" is enabled. Hence,

* "mse" uses squared loss.
* "deviance" uses actual deviance.
* "mae" uses mean absolute error.
* "class" gives misclassification error.
* "auc" (for two-class logistic regression ONLY) gives area under the ROC curve.

For example,
warnings.filterwarnings('ignore')
cvfit = cvglmnet(x = x.copy(), y = y.copy(), family = 'binomial', ptype = 'class')
warnings.filterwarnings('default')
It uses misclassification error as the criterion for 10-fold cross-validation. We plot the object and show the optimal values of $\lambda$.
cvglmnetPlot(cvfit)
print(cvfit['lambda_min'])
print(cvfit['lambda_1se'])
coef and predict are similar to the Gaussian case and we omit the details. We review with some examples.
cvglmnetCoef(cvfit, s = 'lambda_min')
As mentioned previously, the results returned here are only for the second level of the factor response.
cvglmnetPredict(cvfit, newx = x[0:10, ], s = 'lambda_min', ptype = 'class')
Like other GLMs, glmnet allows for an "offset". This is a fixed vector of N numbers that is added into the linear predictor. For example, you may have fitted some other logistic regression using other variables (and data), and now you want to see if the present variables can add anything. So you use the predicted logit from the other model as an offset in glmnet.

Logistic Regression - Multinomial Models

For the multinomial model, suppose the response variable has $K$ levels ${\cal G}=\{1,2,\ldots,K\}$. Here we model $$\mbox{Pr}(G=k|X=x)=\frac{e^{\beta_{0k}+\beta_k^Tx}}{\sum_{\ell=1}^Ke^{\beta_{0\ell}+\beta_\ell^Tx}}.$$ Let ${Y}$ be the $N \times K$ indicator response matrix, with elements $y_{ik} = I(g_i=k)$. Then the elastic-net penalized negative log-likelihood function becomes
$$ \ell(\{\beta_{0k},\beta_{k}\}_1^K) = -\left[\frac{1}{N} \sum_{i=1}^N \Big(\sum_{k=1}^K y_{ik} (\beta_{0k} + x_i^T \beta_k)- \log \big(\sum_{k=1}^K e^{\beta_{0k}+x_i^T \beta_k}\big)\Big)\right] +\lambda \left[ (1-\alpha)||\beta||_F^2/2 + \alpha\sum_{j=1}^p||\beta_j||_q\right]. $$
Here we really abuse notation! $\beta$ is a $p\times K$ matrix of coefficients. $\beta_k$ refers to the $k$th column (for outcome category $k$), and $\beta_j$ the $j$th row (the vector of $K$ coefficients for variable $j$). The last penalty term is $||\beta_j||_q$; we have two options for $q$: $q\in \{1,2\}$. When $q=1$, this is a lasso penalty on each of the parameters. When $q=2$, this is a grouped-lasso penalty on all the $K$ coefficients for a particular variable, which makes them all be zero or nonzero together. The standard Newton algorithm can be tedious here. Instead, we use a so-called partial Newton algorithm: we make a partial quadratic approximation to the log-likelihood, allowing only $(\beta_{0k}, \beta_k)$ to vary for a single class at a time. For each value of $\lambda$, we first cycle over all classes indexed by $k$, computing each time a partial quadratic approximation about the parameters of the current class. Then the inner procedure is almost the same as for the binomial case. This is the case for lasso ($q=1$). When $q=2$, we use a different approach, which we won't dwell on here.

For the multinomial case, the usage is similar to logistic regression, and we mainly illustrate by examples and address any differences. We load a set of generated data.
# Import relevant modules and setup for calling glmnet
%reset -f
%matplotlib inline

import sys
sys.path.append('../test')
sys.path.append('../lib')
import scipy, importlib, pprint, matplotlib.pyplot as plt, warnings
from glmnet import glmnet; from glmnetPlot import glmnetPlot
from glmnetPrint import glmnetPrint; from glmnetCoef import glmnetCoef; from glmnetPredict import glmnetPredict
from cvglmnet import cvglmnet; from cvglmnetCoef import cvglmnetCoef
from cvglmnetPlot import cvglmnetPlot; from cvglmnetPredict import cvglmnetPredict

# parameters
baseDataDir = '../data/'

# load data
x = scipy.loadtxt(baseDataDir + 'MultinomialExampleX.dat', dtype = scipy.float64, delimiter = ',')
y = scipy.loadtxt(baseDataDir + 'MultinomialExampleY.dat', dtype = scipy.float64)
The optional arguments in glmnet for multinomial logistic regression are mostly similar to binomial regression, except for a few cases. The response variable can be an nc >= 2 level factor, or an nc-column matrix of counts or proportions. Internally glmnet will make the rows of this matrix sum to 1, and absorb the total mass into the weight for that observation. offset should be an nobs x nc matrix if there is one. A special option for multinomial regression is mtype, which allows the usage of a grouped lasso penalty if mtype = 'grouped'. This will ensure that the multinomial coefficients for a variable are all in or out together, just like for the multi-response Gaussian.
fit = glmnet(x = x.copy(), y = y.copy(), family = 'multinomial', mtype = 'grouped')
We plot the resulting object "fit".
glmnetPlot(fit, xvar = 'lambda', label = True, ptype = '2norm');
The options are xvar, label and ptype, in addition to other ordinary graphical parameters. xvar and label are the same as for other families, while ptype is only for multinomial regression and the multiresponse Gaussian model. It produces a figure of coefficients for each response variable if ptype = "coef", or a single figure showing the $\ell_2$-norm of each variable's coefficients if ptype = "2norm". We can also do cross-validation and plot the returned object.
warnings.filterwarnings('ignore')
cvfit = cvglmnet(x = x.copy(), y = y.copy(), family = 'multinomial', mtype = 'grouped');
warnings.filterwarnings('default')
cvglmnetPlot(cvfit)
Note that although mtype is not a typical argument of cvglmnet, in fact any argument that can be passed to glmnet is valid in the argument list of cvglmnet. Parallel computing could also be used to accelerate the calculation. Users may wish to predict at the optimally selected $\lambda$:
cvglmnetPredict(cvfit, newx = x[0:10, :], s = 'lambda_min', ptype = 'class')
Poisson Models

Poisson regression is used to model count data under the assumption of Poisson error, or otherwise non-negative data where the mean and variance are proportional. Like the Gaussian and binomial models, the Poisson distribution is a member of the exponential family of distributions. We usually model its positive mean on the log scale: $\log \mu(x) = \beta_0+\beta' x$. The log-likelihood for observations $\{x_i,y_i\}_1^N$ is given by
$$ l(\beta|X, Y) = \sum_{i=1}^N \left(y_i (\beta_0+\beta' x_i) - e^{\beta_0+\beta^Tx_i}\right). $$
As before, we optimize the penalized log-likelihood:
$$ \min_{\beta_0,\beta} -\frac1N l(\beta|X, Y) + \lambda \left((1-\alpha) \sum_{j=1}^p \beta_j^2/2 +\alpha \sum_{j=1}^p |\beta_j|\right). $$
Glmnet uses an outer Newton loop and an inner weighted least-squares loop (as in logistic regression) to optimize this criterion. First, we load a pre-generated set of Poisson data.
# Import relevant modules and setup for calling glmnet
%reset -f
%matplotlib inline

import sys
sys.path.append('../test')
sys.path.append('../lib')
import scipy, importlib, pprint, matplotlib.pyplot as plt, warnings
from glmnet import glmnet; from glmnetPlot import glmnetPlot
from glmnetPrint import glmnetPrint; from glmnetCoef import glmnetCoef; from glmnetPredict import glmnetPredict
from cvglmnet import cvglmnet; from cvglmnetCoef import cvglmnetCoef
from cvglmnetPlot import cvglmnetPlot; from cvglmnetPredict import cvglmnetPredict

# parameters
baseDataDir = '../data/'

# load data
x = scipy.loadtxt(baseDataDir + 'PoissonExampleX.dat', dtype = scipy.float64, delimiter = ',')
y = scipy.loadtxt(baseDataDir + 'PoissonExampleY.dat', dtype = scipy.float64, delimiter = ',')
We apply the function glmnet with the "poisson" option.
fit = glmnet(x = x.copy(), y = y.copy(), family = 'poisson')
The optional input arguments of glmnet for the "poisson" family are similar to those for the other families. offset is a particularly useful argument in Poisson models. When dealing with rate data in Poisson models, the counts collected are often based on different exposures, such as length of time observed, area, or years. A Poisson rate $\mu(x)$ is relative to a unit exposure time, so if an observation $y_i$ was exposed for $E_i$ units of time, then the expected count would be $E_i\mu(x)$, and the log mean would be $\log(E_i)+\log(\mu(x))$. In a case like this, we would supply an offset $\log(E_i)$ for each observation. Hence offset is a vector of length nobs that is included in the linear predictor. Other families can also use offsets, typically for different reasons. (Warning: if offset is supplied in glmnet, offsets must also be supplied to predict to make reasonable predictions.) Again, we plot the coefficients to get a first sense of the result.
glmnetPlot(fit);
Like before, we can extract the coefficients and make predictions at certain $\lambda$'s by using coef and predict respectively. The optional input arguments are similar to those for other families. In the predict function, the option type, which is the type of prediction required, has its own specialties for the Poisson family. That is,

* "link" (default) gives the linear predictors, like for the other families.
* "response" gives the fitted mean.
* "coefficients" computes the coefficients at the requested values for s, which can also be obtained via the coef function.
* "nonzero" returns a list of the indices of the nonzero coefficients for each value of s.

For example, we can do as follows:
print(glmnetCoef(fit, s = scipy.float64([1.0])))
glmnetPredict(fit, x[0:5,:], ptype = 'response', s = scipy.float64([0.1, 0.01]))
We may also use cross-validation to find the optimal $\lambda$'s and thus make inferences.
warnings.filterwarnings('ignore')
cvfit = cvglmnet(x.copy(), y.copy(), family = 'poisson')
warnings.filterwarnings('default')
Options are almost the same as for the Gaussian family, except that for ptype,

* "deviance" (default) gives the deviance.
* "mse" stands for mean squared error.
* "mae" is for mean absolute error.

We can plot the cvglmnet object.
cvglmnetPlot(cvfit)
We can also show the optimal $\lambda$'s and the corresponding coefficients.
optlam = scipy.array([cvfit['lambda_min'], cvfit['lambda_1se']]).reshape([2,])
cvglmnetCoef(cvfit, s = optlam)
The predict method is similar and we do not repeat it here.

Cox Models

The Cox proportional hazards model is commonly used for the study of the relationship between predictor variables and survival time. In the usual survival analysis framework, we have data of the form $(y_1, x_1, \delta_1), \ldots, (y_n, x_n, \delta_n)$ where $y_i$, the observed time, is a time of failure if $\delta_i$ is 1 or a right-censoring time if $\delta_i$ is 0. We also let $t_1 < t_2 < \ldots < t_m$ be the increasing list of unique failure times, and $j(i)$ denote the index of the observation failing at time $t_i$. The Cox model assumes a semi-parametric form for the hazard $$ h_i(t) = h_0(t) e^{x_i^T \beta}, $$ where $h_i(t)$ is the hazard for patient $i$ at time $t$, $h_0(t)$ is a shared baseline hazard, and $\beta$ is a fixed, length-$p$ vector. In the classic setting $n \geq p$, inference is made via the partial likelihood $$ L(\beta) = \prod_{i=1}^m \frac{e^{x_{j(i)}^T \beta}}{\sum_{j \in R_i} e^{x_j^T \beta}}, $$ where $R_i$ is the set of indices $j$ with $y_j \geq t_i$ (those at risk at time $t_i$). Note there is no intercept in the Cox model (it is built into the baseline hazard, and, like it, would cancel in the partial likelihood). We penalize the negative log of the partial likelihood, just like the other models, with an elastic-net penalty. We use a pre-generated set of sample data and response. Users can load their own data and follow a similar procedure. In this case $x$ must be an $n\times p$ matrix of covariate values: each row corresponds to a patient and each column a covariate. $y$ is an $n \times 2$ matrix, with a column "time" of failure/censoring times, and "status" a 0/1 indicator, with 1 meaning the time is a failure time, and 0 a censoring time.
# Import relevant modules and setup for calling glmnet
%reset -f
%matplotlib inline

import sys
sys.path.append('../test')
sys.path.append('../lib')
import scipy, importlib, pprint, matplotlib.pyplot as plt, warnings
from glmnet import glmnet; from glmnetPlot import glmnetPlot
from glmnetPrint import glmnetPrint; from glmnetCoef import glmnetCoef; from glmnetPredict import glmnetPredict
from cvglmnet import cvglmnet; from cvglmnetCoef import cvglmnetCoef
from cvglmnetPlot import cvglmnetPlot; from cvglmnetPredict import cvglmnetPredict

# parameters
baseDataDir = '../data/'

# load data
x = scipy.loadtxt(baseDataDir + 'CoxExampleX.dat', dtype = scipy.float64, delimiter = ',')
y = scipy.loadtxt(baseDataDir + 'CoxExampleY.dat', dtype = scipy.float64, delimiter = ',')
The Surv function in the R package survival can create such a matrix. Note, however, that coxph and related linear models can handle interval and other forms of censoring, while glmnet can only handle right censoring in its present form. We apply the glmnet function to compute the solution path under default settings.
fit = glmnet(x = x.copy(), y = y.copy(), family = 'cox')
All the standard options are available, such as alpha, weights, nlambda and standardize. Their usage is similar to the Gaussian case and we omit the details here. Users can also refer to the help file help(glmnet). We can plot the coefficients.
glmnetPlot(fit);
As before, we can extract the coefficients at certain values of $\lambda$.
glmnetCoef(fit, s = scipy.float64([0.05]))
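Cross-validation also applies here. A hedged sketch, assuming cvglmnet accepts family = 'cox' just as it does the other families (with partial-likelihood deviance as the natural CV criterion):

```python
# Sketch: cross-validated Cox fit, assuming family = 'cox' is supported by cvglmnet
warnings.filterwarnings('ignore')
cvcoxfit = cvglmnet(x = x.copy(), y = y.copy(), family = 'cox')
warnings.filterwarnings('default')
cvglmnetPlot(cvcoxfit)
```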
Basic Data Types

Python has many data types, e.g.

* numeric: int, float, complex
* string
* boolean values, i.e. True and False
* sequences: list, tuple
* dict

Variables are declared via assignment:

```python
x = 5
```
# scratch area
Numeric Types

Python numeric types are similar to those in other languages such as C/C++.

```python
x = 5         # int
x = 10**100   # long (2.7) or int (3.5)
x = 3.141592  # float
x = 1.0j      # complex
```

Note: ordinary machine types can be accessed/manipulated through the ctypes module.
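For instance, a minimal sketch of wrapping values in C machine types via ctypes:

```python
import ctypes

a = ctypes.c_int(5)        # a C int
b = ctypes.c_double(3.14)  # a C double
print(a.value, b.value)    # read the underlying values back out
```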
# scratch area
Arithmetic Operations

```python
3 + 2   # addition
3 - 2   # subtraction
3 * 2   # multiplication
3 ** 2  # exponentiation
3 / 2   # division (warning: int (2.7) or float (3.5))
3 % 2   # modulus
```
# scratch area
Exercise

Use the Python interpreter to perform some basic arithmetic.

Strings

```python
x = "hello"  # string enclosed with double quotes
y = 'world'  # string enclosed with single quotes
x + ' ' + y  # string concatenation via +
"{} + {} = {}".format(5, 6, 5+6)  # string formatting
```
# scratch area
Lists

```python
x = [1, 2, 3]     # initialize list
x[1] = 0          # modify element
x.append(4)       # append to end
x.extend([5, 6])  # extend
x[3:5]            # slice
```
# scratch area
Tuples

Tuples are similar to lists, but are immutable:

```python
x = (1, 2, 3)  # initialize a tuple with ()
x[0] = 4       # will result in an error
```
# scratch area
List Comprehension

Comprehension provides a convenient way to create new lists:

```python
[ i for i in range(5) ]     # result: [0, 1, 2, 3, 4]
[ i**2 for i in range(5) ]  # result: [0, 1, 4, 9, 16]
the_list = [5, 2, 6, 1]
[ i**2 for i in the_list ]  # result: [25, 4, 36, 1]
```
# scratch area
Exercise

Create a list of floating point numbers and then create a second list which contains the squares of the entries of the first list.

Boolean Values and Comparisons

Boolean types take the values True or False. The result of a comparison operator is boolean.

```python
5 < 6   # evaluates to True
5 >= 6  # evaluates to False
5 == 6  # evaluates to False
```

Logical operations:

```python
True and False  # False
True or False   # True
not True        # False
True ^ False    # True (exclusive or)
```
# scratch area
Functions

Functions are defined with def:

```python
def hello():
    print('hello, world')
```

Note: Python uses indentation to denote blocks of code, rather than braces {} as in many other languages. It is common to use either 4 spaces or 2 spaces to indent. It doesn't matter, as long as you are consistent. Use the return keyword for a function which returns a value:

```python
def square(x):
    return x**2
```
# scratch area
Loops and Flow Control

For loop:
for i in range(10): print(i**2)
It is also possible to use for..in to iterate through elements of a list:
for i in ['hello', 'world']: print(i)
While loops have the form while condition:
i = 0
while i < 10:
    print(i**2)
    i = i + 1
The keywords break and continue can be used for flow control inside a loop:

* continue: skip to the next iteration of the loop
* break: jump out of the loop entirely
for i in range(10):
    if i == 3:
        continue
    if i == 7:
        break
    print(i)
Use the keywords if, elif, else for branching:

```python
if 5 > 6:
    # never reached
    pass
elif 1 > 2:
    # never reached
    pass
else:
    # reached
    pass
```
# scratch area
Exercise

Write a function fib(n) which returns the nth Fibonacci number. The Fibonacci numbers are defined by

* fib(0) = fib(1) = 1
* fib(n) = fib(n-1) + fib(n-2) for n >= 2.

Exercise

"Write a program that prints the numbers from 1 to 100. But for multiples of three print Fizz instead of the number and for the multiples of five print Buzz. For numbers which are multiples of both three and five print FizzBuzz." http://wiki.c2.com/?FizzBuzzTest
# scratch area
Modules

Load external modules (built-in or user-defined) via import:
import math
print(math.pi)
print(math.sin(math.pi/2.0))
Rename modules with as:
import math as m
print(m.pi)
Load specific functions or submodules:
from math import pi, sin
print(sin(pi/2.0))

# scratch area
User-defined Modules

Any code written in a separate file (with a .py extension) can be imported as a module. Suppose we have a script my_module.py which defines a function do_something(). Then we can call it as
import my_module
my_module.do_something()
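For concreteness, a sketch of what my_module.py might contain; only the name do_something comes from the text above, the body is hypothetical:

```python
# my_module.py -- hypothetical contents
def do_something():
    print("doing something")
```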
Exercise

Implement your FizzBuzz solution as a function called FizzBuzz() in a module called fizzbuzz. Check that it works by importing it and calling FizzBuzz() in a separate script.
# scratch area
numpy

numpy is a module used for numerical calculation. The main data type is numpy.array, which is a multidimensional array of numbers (integer, float, complex).
import numpy as np

x = np.array([1, 2, 3, 4])
print(x.sum())
print(x.mean())
The basic arithmetic operations work elementwise on numpy arrays:
x = np.array([1, 2, 3, 4])
y = np.array([5, 6, 7, 8])
print(x + y)
print(x * y)
print(x / y)
It is also possible to call functions on numpy arrays:
x = np.array([1, 2, 3, 4])
print(np.sin(x))
print(np.log(x))

# scratch area
Generating numpy Arrays

numpy arrays can be generated with zeros, ones, linspace, and rand:
print(np.zeros(4))
print(np.ones(3))
print(np.linspace(-1, 1, num=4))
print(np.random.rand(2))

# scratch area
Plotting with matplotlib

We use matplotlib.pyplot for plotting:
import numpy as np
from matplotlib import pyplot as plt

x = np.linspace(-3.14, 3.14, num=100)
y = np.sin(x)
plt.plot(x, y)
plt.xlabel('x values')
plt.ylabel('y')
plt.title('y=sin(x)')
plt.show()
Exercise

Create plots of the following functions:

* f(x) = log(x)
* f(x) = sqrt(x)
* f(x) = x**2
* f(x) = log(1 + x**2)
* anything else you might find interesting or challenging

Combining Plots

Several curves can be combined in a single plot by passing multiple x, y pairs to plot:
x = np.linspace(-10, 10, num=100)
y1 = np.sin(x)
y2 = np.cos(x)
y3 = np.arctan(x)
plt.plot(x, y1, x, y2, x, y3)
plt.show()
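A small follow-up sketch: the same three curves drawn with individual plot calls, using label arguments (chosen here for illustration) so that plt.legend() can identify them:

```python
x = np.linspace(-10, 10, num=100)
plt.plot(x, np.sin(x), label='sin')
plt.plot(x, np.cos(x), label='cos')
plt.plot(x, np.arctan(x), label='arctan')
plt.legend()  # builds the legend from the label arguments
plt.show()
```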
todo: array manipulation routines numpy.flipud, fliplr, transpose, rot90, flatten, ravel (a quick sketch of these follows the imshow example below)

Colormap Plots

Plot color maps with pcolormesh:
x = np.linspace(-1, 1, num=100)
y = np.linspace(-1, 1, num=100)
xx, yy = np.meshgrid(x, y)
z = np.sin(xx**2 + yy**2 + yy)
plt.pcolormesh(x, y, z, shading='gouraud')
plt.show()
Or with imshow:
plt.imshow(z, aspect='auto')
plt.show()
Note that the image is flipped because images start from top left and go to bottom right. We can fix this with flipud:
plt.imshow(np.flipud(z), aspect='auto')
plt.show()

# scratch area
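As promised after the todo note above, a quick sketch of the listed array manipulation routines on a small array:

```python
a = np.arange(6).reshape(2, 3)  # [[0 1 2], [3 4 5]]
print(np.flipud(a))   # reverse the rows (up/down)
print(np.fliplr(a))   # reverse the columns (left/right)
print(a.transpose())  # swap axes: shape becomes (3, 2)
print(np.rot90(a))    # rotate 90 degrees counter-clockwise
print(a.flatten())    # 1-D copy
print(a.ravel())      # 1-D view where possible
```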
3D Plots
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
%matplotlib inline

fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot_surface(xx, yy, z, rstride=5, cstride=5, cmap=cm.coolwarm,
                linewidth=1, antialiased=True)
plt.show()
3D Wireframe Plot
%matplotlib inline
fig = plt.figure()
ax = fig.gca(projection='3d')
ax.plot_wireframe(xx, yy, z, rstride=5, cstride=5, antialiased=True)
plt.show()
Gallery of matplotlib Plots

See http://matplotlib.org/gallery.html

Plotting Exercise

Consider the function f(x, y) = exp(x + 1.0j*y) for −4 ≤ x, y ≤ 4. Create colormap and 3d plots of the magnitude, real, and imaginary parts of f.
# scratch
Plotting Images
x = np.linspace(-2, 2, num=100)
y = np.linspace(-2, 2, num=100)
result = np.flipud(np.array([[u*v for u in x] for v in y]))
fig = plt.figure()
plt.imshow(result, extent=[x.min(), x.max(), y.min(), y.max()], aspect='auto')
plt.show()
Classes

Classes can be used to package data and methods together:
class SomeClass:
    def __init__(self, x):
        self.x = x

    def doSomething(self):
        print("my x value is {}".format(self.x))

obj = SomeClass(5)
obj.doSomething()

# scratch area
Inheritance

Classes can be derived from others:
class SomeOtherClass(SomeClass):
    def __init__(self, x, y):
        SomeClass.__init__(self, x)
        self.y = y

    def doSomethingElse(self):
        print("my y value is {}".format(self.y))

other_obj = SomeOtherClass(5, 6)
other_obj.doSomething()
other_obj.doSomethingElse()
Polymorphism

An instance of a derived class is automatically an instance of its base class:
print('The type of obj is {}'.format(type(obj)))
print('The type of other_obj is {}'.format(type(other_obj)))
print('obj is instance of SomeClass? {}'.format(isinstance(obj, SomeClass)))
print('obj is instance of SomeOtherClass? {}'.format(isinstance(obj, SomeOtherClass)))
# the last two checks should test other_obj, not obj
print('other_obj is instance of SomeClass? {}'.format(isinstance(other_obj, SomeClass)))
print('other_obj is instance of SomeOtherClass? {}'.format(
    isinstance(other_obj, SomeOtherClass)))

# scratch area
Exercise todo
# todo
Detectors

Note: LRISb has employed different detectors. We may need to make PYPIT backwards compatible.

FITS file
# (assuming the usual imports for this notebook)
from astropy.io import fits
import matplotlib.pyplot as plt

fil = '/Users/xavier/PYPIT/LRIS_blue/Raw/b150910_2033.fits.gz'
hdu = fits.open(fil)
hdu.info()
head0 = hdu[0].header  # grab the primary header before querying it
head0['OBSTYPE']
head0
#head0['DATE']
plt.clf()
plt.imshow(hdu[1].data)
plt.show()
Display Raw LRIS image in Ginga
### Need to port readmhdufits
head0
reload(pyp_ario)
img, head = pyp_ario.read_lris('/Users/xavier/PYPIT/LRIS_blue/Raw/b150910_2070.fits', TRIM=True)
xdb.ximshow(img)

import subprocess
subprocess.call(["touch", "dum.fil"])

b = 'as'
'{0:s}'.format(b)

range(1, 5)

tmp = np.ones((10, 20))
tmp[0:1, :].shape
Floating point numbers

Floating point numbers, or decimal numbers, are just that: any number with a decimal place in it, such as 4.566642 and -156.986714. Pandas stores these as a float64. They could also be stored in scientific notation like this: 4.509013e+14. This means "4.509013 times 10 raised to the +14". These are still floating point numbers and are treated like any other decimal number.
print("Float Values") print(sampledata['FloatCol'].values)
Before we move on, I'd like to take a quick look at the data graphically.
sampledata.plot(kind='scatter', x='IntCol',y='FloatCol')
Because this is "fake" data, I put in a functional dependence here. The float column looks like it is some function of the integer column. It is almost always a good idea to visualize your data early on to see what it looks like graphically!

Text

Pandas can store text in its columns. Because there are a number of different types of text objects, by default pandas will store text as an object, which just means it doesn't know which of the types it really is. Text can, in principle, be anything you want it to be, so it is both the most flexible and the most challenging data type.
print("Text Values") print(sampledata['TextCol'].values)
Categorical

A categorical data type is a finite set of different objects. These objects are represented internally as integers but may be displayed as text or other generic objects. To make things simple, we'll start with a categorical object that has three possible values: "yes", "no", and "maybe". Internally, pandas will represent these as the integers 0, 1, and 2. But it knows that this is a categorical data type, so it keeps track of the text value associated with each integer and displays that for the user.
print("Categorical Values") print(sampledata['CatCol'].values)
When we loaded the data, this column was actually loaded as an object, which means pandas doesn't know that it is supposed to be a categorical column. We will tell pandas to do that, using the astype() command, which changes the data type of a column. We check to make sure it worked, too. Note that the "CatCol2" column is now a 'category' type.

Data Processing Tip

A quick aside here: there are a couple of ways of doing this kind of transformation on the data. We'll see this a little later when we do more column-wise processing. We could either change the original column or we could create a new column. The second method doesn't overwrite the original data and will be what we typically do. That way, if something goes wrong or we want to change how we are processing the data, we still have the original data column to work with.
sampledata["CatCol2"] = sampledata["CatCol"].astype('category') sampledata.dtypes
We can now look at how the data are stored as categorical data. We can get the internal codes for each of the entries like this:
sampledata["CatCol2"].cat.codes
We can also get a list of the categories that pandas found when converting the column. These are in order: the first entry corresponds to 0, the second to 1, etc.
sampledata["CatCol2"].cat.categories
We may encounter situations where we want to plot the data and visualize each category as its own color. We saw how to do this back in Class01.
import seaborn as sns

sns.set_style('white')
sns.lmplot(x='IntCol', y='FloatCol', data=sampledata, hue='CatCol2', fit_reg=False)
Date/Times

We will frequently encounter date/time values in working with data. There are many different ways these values get stored, but mostly we'll find that they start as a text object. We need to know how they are stored (in what order the year-month-day-hour-minute-second values appear). There are utilities to convert any type of date/time string to a datetime object in pandas. We will start with the ISO 8601 datetime standard, since it is both the most logical and the easiest to work with. Dates are stored like this: 2017-01-23, where we use a four-digit year, then a two-digit month and a two-digit day, all separated by dashes. If we want to add a time, it is appended to the date like this: 2017-01-23T03:13:42. The "T" tells the computer that we've added a time. Then it is followed by a two-digit hour (using 00 as midnight and 23 as 11pm), a colon, a two-digit minute, a colon, and a two-digit second. There are other variations of this that can include a time zone, but we will leave those for later.
print("Date/Time Values") print(sampledata['DateCol'].values)
They are currently stored as objects, not as datetimes. We need to convert this column as well, but we'll use a special pandas function to do that. Take a quick look at the reference page for this function to see what else it can do. Note that the new column has type datetime64[ns]. That means that the date format is capable of counting nanoseconds. We won't use all of that capability, but pandas used that format because our dates are accurate to the second.
sampledata["DateCol2"] = pd.to_datetime(sampledata["DateCol"]) sampledata.dtypes #We print out the column to see what it looks like sampledata["DateCol2"]
Now that we have the datetime column, I'd like to plot the data as a function of date. This is often a useful thing to do with time series data. We'll need to import the matplotlib library and use a trick to format the data by date. Here's the code that makes it work.
import matplotlib.pyplot as plt
%matplotlib inline

# We plot the data values and set the linestyle to 'None', which suppresses the line.
# We also want to show the individual data points, so we set the marker.
plt.plot(sampledata['DateCol2'].values, sampledata['FloatCol'].values,
         linestyle='None', marker='o')

# autofmt_xdate() tells the computer to treat the x-values as dates and format them
# appropriately. This is a figure function, so we use gcf() to "get current figure".
plt.gcf().autofmt_xdate()
Geographical

Although this is not typically a single data type, you may encounter geographical data. These are typically in a Latitude-Longitude format where both Latitude and Longitude are floating point numbers like this: (32.1545, -138.5532). There are a number of tools we can use to work with and plot this type of data, so I wanted to cover it now. For now, we will treat these as separate entities and work with geographical data as we encounter it.
print("Latitude Values") print(sampledata['LatCol'].values) print("Longitude Values") print(sampledata['LonCol'].values)
It is also useful to plot the geographical data. There are python libraries that make this easy to do.
from mpl_toolkits.basemap import Basemap
import numpy as np

# Draw the base map of the world
m = Basemap(projection='robin', lon_0=0, resolution='c')
# Draw the continent coast lines
m.drawcoastlines()
# Color in the water and the land masses
m.fillcontinents(color='red', lake_color='aqua')
# Draw parallels and meridians
m.drawparallels(np.arange(-90., 120., 30.))
m.drawmeridians(np.arange(0., 360., 60.))
#m.drawmapboundary(fill_color='aqua')
# Prep the data for plotting on the map
x, y = m(sampledata['LonCol'].values, sampledata['LatCol'].values)
# Plot the data points on the map
m.plot(x, y, 'bo', markersize=10)
Column-wise processing

Now that we have data columns, we've already seen a couple of examples of column-wise processing. When we created the categorical column and the datetime column, we took the data from one column and operated on it all at the same time, creating the new columns with the different data types. There are other ways to manipulate the columns.

apply

The apply function takes each entry in a column and applies whatever function you want to the entry. For example, suppose we are interested in whether each entry is greater than 4. We will simplify the code by using what is called a lambda function. So, inside the apply() function we have: lambda x: x > 4. This is shorthand notation for the following: "Treat x as if it were each entry in the column. Apply whatever follows the colon (:) to each entry and create a new column based on the output". The use of x was arbitrary: we could choose any variable. For example, if we chose w, the code would read: lambda w: w > 4. This would do exactly the same thing.
sampledata['GTfour'] = sampledata['FloatCol'].apply(lambda x: x > 4.0)
print(sampledata[['FloatCol','GTfour']])
Common functions

There are a number of common functions that we could use inside the apply. For example, if we wanted to get the square root of each entry, this is what it would look like. We are using the function np.sqrt from the numpy library. We already imported this library, but if we didn't, we'd need to import numpy as np before running this function.
sampledata['FloatSQRT'] = sampledata['FloatCol'].apply(np.sqrt)
print(sampledata[['FloatCol','FloatSQRT']])
Another useful function is adding up columns. Note that we need to tell pandas to run through each row by adding the argument axis=1 to the apply function; otherwise it tries to add up each column. That might be something you want to do, too, though the easiest way to do it is to use the pandas sum function for the column.
sampledata['IntSUM'] = sampledata[['IntCol','FloatCol']].apply(np.sum, axis=1)
print(sampledata[['IntCol','FloatCol','IntSUM']])
sampledata['IntCol'].sum()
Custom functions

We will now create our first custom function and use it to process the data. We will make a short function that will look to see if a value in the TextCol feature matches an item on a list we create.
# We first tell the computer that we are writing a function by starting with "def".
# The next text is the name of the function. We name this one "isMammal", meaning
# it will tell us if an animal is in our list of mammals.
# The final text in the parentheses is an input to the function. This is another
# "dummy" variable - we could give it any name we want. In this case we call it
# "animal" to remind ourselves that we expect an animal type in text form.
def isMammal(animal):
    # Our "inclusive" list: if the item is on this list, the function returns
    # 'mammal'. Otherwise it returns 'notmammal'.
    mammallist = ['cat','dog','horse','cow','elephant','giraffe','wolf','prairie dog', 'whale', 'dolphin']
    # This "if" statement looks at the list "mammallist". If the text passed in as
    # "animal" matches any item in the list, we jump into the next block of code.
    # Otherwise we jump into the block following the "else" statement.
    if animal in mammallist:
        # "return" tells the computer we are done and sends back the value that follows.
        return 'mammal'
    else:
        return 'notmammal'

sampledata['IsMammal'] = sampledata['TextCol'].apply(isMammal)
print(sampledata[['TextCol', 'IsMammal']])

# We'll now operate on an entire row of data at once and do a more complicated
# operation: return True only for mammals where the 'FloatCol' is smaller than 2.
def isMammalFloat(row):
    mammallist = ['cat','dog','horse','cow','elephant','giraffe','wolf','prairie dog', 'whale', 'dolphin']
    # We need to identify the animal from the row - it can be addressed using the column name
    animal = row['TextCol']
    if animal in mammallist:
        # Return True if the float value is less than 2 and False otherwise.
        return row['FloatCol'] < 2
    else:
        # If it isn't a mammal, return False
        return False

# Note that we need to tell `apply` to send one row at a time by adding the `axis=1` argument
sampledata['IsSmallMammal'] = sampledata.apply(isMammalFloat, axis=1)
print(sampledata[['TextCol', 'FloatCol','IsSmallMammal']])

sampledata['TextCol'][ sampledata['FloatCol'] < 2 ]
Feature extraction

We can often pull additional features from what we currently have. This involves doing a column-wise processing step, but with the additional component of doing a transformation or extraction on the data. We'll look at a couple of techniques to do this.

Date/day/week features

We already saw how to take a text column that is a date and turn it into a datetime data type. The to_datetime() function has the capability of parsing many different string formats. I recommend looking at the documentation for the function to learn how to do parsing of more specific date time formats. Once we have a datetime data type, we can use other functions to get, for example, the day of the week or the week of the year for any given date. This may be useful for looking at weekly or yearly patterns. The full list of features we can easily extract is found in the documentation. We use the apply function with a simple in-line lambda function to get the date or time features. Another use for this might be to identify holidays: for example, Memorial Day is always on the same relative day of the year (the last Monday in May). We could use these functions to identify which days are national or bank holidays (a hedged sketch follows the next cell).
# Get the day of the week for each of the data features. We can get either a numerical value (0-6) or the names
sampledata['DayofWeek'] = sampledata['DateCol2'].apply(lambda x: x.weekday_name)
# Or the week number in the year
sampledata['WeekofYear'] = sampledata['DateCol2'].apply(lambda x: x.week)
print(sampledata[['DayofWeek', 'WeekofYear']])
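As a hedged sketch of the holiday idea: pandas ships a US federal holiday calendar, which we can use to flag dates. The date range below is an assumption; adjust it to cover your data.

```python
from pandas.tseries.holiday import USFederalHolidayCalendar

# Build the list of US federal holidays over an assumed date range,
# then flag rows whose (normalized) date falls on one of them.
cal = USFederalHolidayCalendar()
holidays = cal.holidays(start='2017-01-01', end='2017-12-31')  # assumed range
sampledata['IsHoliday'] = sampledata['DateCol2'].dt.normalize().isin(holidays)
print(sampledata[['DateCol2', 'IsHoliday']])
```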