Estimation Load data:
data = sm.datasets.stackloss.load() data.exog = sm.add_constant(data.exog)
examples/notebooks/robust_models_0.ipynb
yl565/statsmodels
bsd-3-clause
Huber's T norm with the (default) median absolute deviation scaling
huber_t = sm.RLM(data.endog, data.exog, M=sm.robust.norms.HuberT()) hub_results = huber_t.fit() print(hub_results.params) print(hub_results.bse) print(hub_results.summary(yname='y', xname=['var_%d' % i for i in range(len(hub_results.params))]))
examples/notebooks/robust_models_0.ipynb
yl565/statsmodels
bsd-3-clause
Huber's T norm with 'H2' covariance matrix
hub_results2 = huber_t.fit(cov="H2") print(hub_results2.params) print(hub_results2.bse)
examples/notebooks/robust_models_0.ipynb
yl565/statsmodels
bsd-3-clause
Andrew's Wave norm with Huber's Proposal 2 scaling and 'H3' covariance matrix
andrew_mod = sm.RLM(data.endog, data.exog, M=sm.robust.norms.AndrewWave()) andrew_results = andrew_mod.fit(scale_est=sm.robust.scale.HuberScale(), cov="H3") print('Parameters: ', andrew_results.params)
examples/notebooks/robust_models_0.ipynb
yl565/statsmodels
bsd-3-clause
See help(sm.RLM.fit) for more options and the module sm.robust.scale for scale options. Comparing OLS and RLM. Artificial data with outliers:
nsample = 50 x1 = np.linspace(0, 20, nsample) X = np.column_stack((x1, (x1-5)**2)) X = sm.add_constant(X) sig = 0.3 # smaller error variance makes OLS<->RLM contrast bigger beta = [5, 0.5, -0.0] y_true2 = np.dot(X, beta) y2 = y_true2 + sig*1. * np.random.normal(size=nsample) y2[[39,41,43,45,48]] -= 5 # add some outliers (10% of nsample)
examples/notebooks/robust_models_0.ipynb
yl565/statsmodels
bsd-3-clause
Example 1: quadratic function with linear truth. Note that the quadratic term in OLS regression will capture outlier effects.
res = sm.OLS(y2, X).fit() print(res.params) print(res.bse) print(res.predict())
examples/notebooks/robust_models_0.ipynb
yl565/statsmodels
bsd-3-clause
Estimate RLM:
resrlm = sm.RLM(y2, X).fit() print(resrlm.params) print(resrlm.bse)
examples/notebooks/robust_models_0.ipynb
yl565/statsmodels
bsd-3-clause
Draw a plot to compare OLS estimates to the robust estimates:
fig = plt.figure(figsize=(12,8)) ax = fig.add_subplot(111) ax.plot(x1, y2, 'o',label="data") ax.plot(x1, y_true2, 'b-', label="True") prstd, iv_l, iv_u = wls_prediction_std(res) ax.plot(x1, res.fittedvalues, 'r-', label="OLS") ax.plot(x1, iv_u, 'r--') ax.plot(x1, iv_l, 'r--') ax.plot(x1, resrlm.fittedvalues, 'g.-', label="RLM") ax.legend(loc="best")
examples/notebooks/robust_models_0.ipynb
yl565/statsmodels
bsd-3-clause
Example 2: linear function with linear truth. Fit a new OLS model using only the linear term and the constant:
X2 = X[:,[0,1]] res2 = sm.OLS(y2, X2).fit() print(res2.params) print(res2.bse)
examples/notebooks/robust_models_0.ipynb
yl565/statsmodels
bsd-3-clause
Estimate RLM:
resrlm2 = sm.RLM(y2, X2).fit() print(resrlm2.params) print(resrlm2.bse)
examples/notebooks/robust_models_0.ipynb
yl565/statsmodels
bsd-3-clause
Draw a plot to compare OLS estimates to the robust estimates:
prstd, iv_l, iv_u = wls_prediction_std(res2) fig, ax = plt.subplots(figsize=(8,6)) ax.plot(x1, y2, 'o', label="data") ax.plot(x1, y_true2, 'b-', label="True") ax.plot(x1, res2.fittedvalues, 'r-', label="OLS") ax.plot(x1, iv_u, 'r--') ax.plot(x1, iv_l, 'r--') ax.plot(x1, resrlm2.fittedvalues, 'g.-', label="RLM") legend = ax.legend(loc="best")
examples/notebooks/robust_models_0.ipynb
yl565/statsmodels
bsd-3-clause
The information about the class of each sample is stored in the target attribute of the dataset:
iris.target iris.target_names %matplotlib inline import matplotlib.pyplot as plt plt.style.use('fivethirtyeight') x_index = 3 y_index = 2 # this formatter will label the colorbar with the correct target names formatter = plt.FuncFormatter(lambda i, *args: iris.target_names[int(i)]) plt.scatter(iris.data[:, x_index], iris.data[:, y_index], c=iris.target) plt.colorbar(ticks=[0, 1, 2], format=formatter) plt.xlabel(iris.feature_names[x_index]) plt.ylabel(iris.feature_names[y_index]) from sklearn.decomposition import PCA pca = PCA(n_components=2, whiten=True).fit(iris.data) X_pca = pca.transform(iris.data) plt.scatter(X_pca[:, 0], X_pca[:, 1], c=iris.target) plt.colorbar(ticks=[0, 1, 2], format=formatter) var_explained = pca.explained_variance_ratio_ * 100 plt.xlabel('First Component: {0:.1f}%'.format(var_explained[0])) plt.ylabel('Second Component: {0:.1f}%'.format(var_explained[1]))
notebooks/Scikit Learn.ipynb
fonnesbeck/scientific-python-workshop
cc0-1.0
scikit-learn interface All objects within scikit-learn share a uniform common basic API consisting of three complementary interfaces: estimator interface for building and fitting models predictor interface for making predictions transformer interface for converting data. The estimator interface is at the core of the library. It defines instantiation mechanisms of objects and exposes a fit method for learning a model from training data. All supervised and unsupervised learning algorithms (e.g., for classification, regression or clustering) are offered as objects implementing this interface. Machine learning tasks like feature extraction, feature selection or dimensionality reduction are also provided as estimators. Scikit-learn strives to have a uniform interface across all methods. For example, a typical estimator follows this template:
class Estimator(object): def fit(self, X, y=None): """Fit model to data X (and y)""" self.some_attribute = self.some_fitting_method(X, y) return self def predict(self, X_test): """Make prediction based on passed features""" pred = self.make_prediction(X_test) return pred
notebooks/Scikit Learn.ipynb
fonnesbeck/scientific-python-workshop
cc0-1.0
For a given scikit-learn estimator object named model, several methods are available. Irrespective of the type of estimator, there will be a fit method: model.fit : fit training data. For supervised learning applications, this accepts two arguments: the data X and the labels y (e.g. model.fit(X, y)). For unsupervised learning applications, this accepts only a single argument, the data X (e.g. model.fit(X)). During the fitting process, the state of the estimator is stored in attributes of the estimator instance named with a trailing underscore character (_). For example, the sequence of regression trees (sklearn.tree.DecisionTreeRegressor) built by an ensemble such as GradientBoostingRegressor is stored in its estimators_ attribute. The predictor interface extends the notion of an estimator by adding a predict method that takes an array X_test and produces predictions based on the learned parameters of the estimator. In the case of supervised learning estimators, this method typically returns the predicted labels or values computed by the model. Some unsupervised learning estimators may also implement the predict interface, such as k-means, where the predicted values are the cluster labels. Supervised estimators are expected to have the following methods: model.predict : given a trained model, predict the label of a new set of data. This method accepts one argument, the new data X_new (e.g. model.predict(X_new)), and returns the learned label for each object in the array. model.predict_proba : for classification problems, some estimators also provide this method, which returns the probability that a new observation has each categorical label. In this case, the label with the highest probability is returned by model.predict(). model.score : for classification or regression problems, most (all?) estimators implement a score method. Scores are between 0 and 1, with a larger score indicating a better fit. Since it is common to modify or filter data before feeding it to a learning algorithm, some estimators in the library implement a transformer interface which defines a transform method. It takes as input some new data X_test and yields as output a transformed version. Preprocessing, feature selection, feature extraction and dimensionality reduction algorithms are all provided as transformers within the library. Unsupervised estimators will always have these methods: model.transform : given an unsupervised model, transform new data into the new basis. This also accepts one argument X_new, and returns the new representation of the data based on the unsupervised model. model.fit_transform : some estimators implement this method, which more efficiently performs a fit and a transform on the same input data. Regression Analysis To demonstrate how scikit-learn is used, let's conduct a logistic regression analysis on a dataset for very low birth weight (VLBW) infants. Data on 671 infants with very low (less than 1600 grams) birth weight from 1981-87 were collected at Duke University Medical Center by O'Shea et al. (1992). Of interest is the relationship between the outcome intra-ventricular hemorrhage and the predictors birth weight, gestational age, presence of pneumothorax, mode of delivery, single vs. multiple birth, and whether the birth occurred at Duke or at another hospital with later transfer to Duke. A secular trend in the outcome is also of interest. The metadata for this dataset can be found here.
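As a minimal, hedged sketch of these three interfaces before we turn to the VLBW data (the iris data, LogisticRegression, and StandardScaler here are illustrative choices, not part of the original analysis):

```python
# Minimal sketch of the estimator / predictor / transformer interfaces.
# Uses the bundled iris data purely for illustration; not part of the VLBW analysis.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)

# Estimator + predictor interface
clf = LogisticRegression(max_iter=1000)
clf.fit(X, y)                      # learned state lands in attributes ending in "_"
print(clf.coef_.shape)
print(clf.predict(X[:5]))          # predicted labels
print(clf.predict_proba(X[:5]))    # per-class probabilities
print(clf.score(X, y))             # mean accuracy, between 0 and 1

# Transformer interface
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)  # equivalent to scaler.fit(X).transform(X)
```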
import pandas as pd vlbw = pd.read_csv("../data/vlbw.csv", index_col=0) subset = vlbw[['ivh', 'gest', 'bwt', 'delivery', 'inout', 'pltct', 'lowph', 'pneumo', 'twn', 'apg1']].dropna() # Extract response variable y = subset.ivh.replace({'absent':0, 'possible':1, 'definite':1}) # Standardize some variables X = subset[['gest', 'bwt', 'pltct', 'lowph']] X0 = (X - X.mean(axis=0)) / X.std(axis=0) # Recode some variables X0['csection'] = subset.delivery.replace({'vaginal':0, 'abdominal':1}) X0['transported'] = subset.inout.replace({'born at Duke':0, 'transported':1}) X0[['pneumo', 'twn', 'apg1']] = subset[['pneumo', 'twn','apg1']] X0.head()
notebooks/Scikit Learn.ipynb
fonnesbeck/scientific-python-workshop
cc0-1.0
We split the data into a training set and a testing set. By default, 25% of the data is reserved for testing. This is the first of multiple ways that we will see to do this.
from sklearn.model_selection import train_test_split X_train, X_test, y_train, y_test = train_test_split(X0, y)
notebooks/Scikit Learn.ipynb
fonnesbeck/scientific-python-workshop
cc0-1.0
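If you want to make the default split explicit or reproducible, the same call accepts test_size and random_state arguments. A small sketch (the variable names here are hypothetical, chosen so as not to clobber the split above):

```python
from sklearn.model_selection import train_test_split

# Equivalent to the default behaviour, but with the holdout fraction spelled out
# and a fixed seed for reproducibility.
X_tr, X_te, y_tr, y_te = train_test_split(X0, y, test_size=0.25, random_state=0)
```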
The LogisticRegression model in scikit-learn employs a regularization coefficient C, which defaults to 1. The amount of regularization is lower with larger values of C: small values of C penalize the regression coefficients strongly, while larger values let the coefficients range widely. Scikit-learn includes two penalties: an l2 penalty, which penalizes the sum of the squares of the coefficients (the default), and an l1 penalty, which penalizes the sum of the absolute values. The reason for doing regularization is that it lets us include more covariates than our data might otherwise allow. We only have a few coefficients, so we will set C to a large value.
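To illustrate the effect of C described above, here is a small hedged sketch (the C values are arbitrary) showing that the coefficients shrink as C decreases:

```python
from sklearn.linear_model import LogisticRegression
import numpy as np

# Illustrative only: fit the same model with strong and weak regularization
# and compare the overall size of the coefficients.
# X_train / y_train come from the train_test_split above.
for C in (0.01, 1.0, 1000.0):
    m = LogisticRegression(C=C, penalty='l2', max_iter=1000).fit(X_train, y_train)
    print(C, np.abs(m.coef_).sum())   # smaller C -> stronger penalty -> smaller coefficients
```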
from sklearn.linear_model import LogisticRegression lrmod = LogisticRegression(C=1000) lrmod.fit(X_train, y_train) pred_train = lrmod.predict(X_train) pred_test = lrmod.predict(X_test) pd.crosstab(y_train, pred_train, rownames=["Actual"], colnames=["Predicted"]) pd.crosstab(y_test, pred_test, rownames=["Actual"], colnames=["Predicted"]) for name, value in zip(X0.columns, lrmod.coef_[0]): print('{0}:\t{1:.2f}'.format(name, value))
notebooks/Scikit Learn.ipynb
fonnesbeck/scientific-python-workshop
cc0-1.0
We can bootstrap some confidence intervals:
import numpy as np n = 1000 boot_samples = np.empty((n, len(lrmod.coef_[0]))) for i in np.arange(n): boot_ind = np.random.randint(0, len(X0), len(X0)) y_i, X_i = y.values[boot_ind], X0.values[boot_ind] lrmod_i = LogisticRegression(C=1000) lrmod_i.fit(X_i, y_i) boot_samples[i] = lrmod_i.coef_[0] boot_samples.sort(axis=0) boot_se = boot_samples[[25, 975], :].T coefs = lrmod.coef_[0] plt.plot(coefs, 'r.') for i in range(len(coefs)): plt.errorbar(x=[i,i], y=boot_se[i], color='red') plt.xlim(-0.5, 8.5) plt.xticks(range(len(coefs)), X0.columns.values, rotation=45) plt.axhline(0, color='k', linestyle='--')
notebooks/Scikit Learn.ipynb
fonnesbeck/scientific-python-workshop
cc0-1.0
Of course, that only predicts the value for a fraction of the data set. I don't think that I have made it entirely clear how to use cross-validation to get a prediction for the full training set, so let's do that now. We'll use Scikit-Learn's cross_val_predict.
from sklearn.model_selection import cross_val_predict

# clf is the estimator defined earlier in the notebook; X and y are the features and targets.
yCVpred = cross_val_predict(clf, X, y, cv=5)  # Complete

fig = plt.figure(figsize=(6, 6))
plt.scatter(y, yCVpred)
plt.xlabel("Actual Value [x$1000]")
plt.ylabel("Predicted Value [x$1000]")
plt.show()
NeuralNetworks.ipynb
gtrichards/PHYS_T480
mit
Let's try to use the multi-layer perceptron classifier on the digits data set. We will use a single hidden layer to keep the training time reasonable.
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from sklearn import datasets, neural_network, svm, metrics
from sklearn.neural_network import MLPClassifier

digits = datasets.load_digits()

images_and_labels = list(zip(digits.images, digits.target))
for index, (image, label) in enumerate(images_and_labels[:4]):
    plt.subplot(2, 4, index + 1)
    plt.axis('off')
    plt.imshow(image, cmap=plt.cm.gray_r, interpolation='nearest')
    plt.title('Training: %i' % label)
plt.show()

# To apply a classifier on this data, we need to flatten the image, to
# turn the data in a (samples, feature) matrix:
n_samples = len(digits.images)
data = digits.images.reshape((n_samples, -1))

# Create a classifier: a multi-layer perceptron
classifier = MLPClassifier(solver='lbfgs', alpha=1e-5, random_state=0,
                           hidden_layer_sizes=(15,))

# We learn the digits on the first half of the digits
# (integer division is needed so the slice indices are ints in Python 3)
classifier.fit(data[:n_samples // 2], digits.target[:n_samples // 2])
print("Held-out set score: %f" % classifier.score(data[n_samples // 2:], digits.target[n_samples // 2:]))

# Now predict the value of the digit on the second half:
expected = digits.target[n_samples // 2:]
predicted = classifier.predict(data[n_samples // 2:])

print("Classification report for classifier %s:\n%s\n"
      % (classifier, metrics.classification_report(expected, predicted)))
print("Confusion matrix:\n%s" % metrics.confusion_matrix(expected, predicted))
NeuralNetworks.ipynb
gtrichards/PHYS_T480
mit
This looks pretty good! In general increasing the size of the hidden layer will improve performance at the cost of longer training time. Now try training networks with a hidden layer size of 5 to 20. At what point does performance stop improving?
from sklearn.model_selection import cross_val_score hidden_size = np.arange(5,20) scores = np.array([]) for sz in hidden_size: classifier = MLPClassifier(solver='lbfgs', alpha=1e-5, random_state=0, hidden_layer_sizes=(sz,) ) #classifier.fit(data[:n_samples / 2], digits.target[:n_samples / 2]) scores = np.append(scores, np.mean(cross_val_score(classifier, data, digits.target, cv=5))) #plt.plot(hidden_size,scores) fig = plt.figure() ax = plt.gca() ax.plot(hidden_size,scores,'x-') plt.show()
NeuralNetworks.ipynb
gtrichards/PHYS_T480
mit
Our basic perceptron can do a pretty good job recognizing handwritten digits, assuming the digits are all centered in an 8x8 image. What happens if we embed the digit images at random locations within a 32x32 image? Try increasing the size of the hidden layer and see if we can improve the performance.
%matplotlib inline
import numpy as np
from matplotlib import pyplot as plt
from sklearn import datasets, neural_network, svm, metrics
from sklearn.neural_network import MLPClassifier

digits = datasets.load_digits()

resize = 32  # Size of larger image to embed the digits
images_ex = np.zeros((digits.target.size, resize, resize))
for index, image in enumerate(digits.images):
    # random offsets for the 8x8 digit inside the 32x32 frame
    offrow = np.random.randint(low=0, high=resize - 8)
    offcol = np.random.randint(low=0, high=resize - 8)
    images_ex[index, offrow:offrow + 8, offcol:offcol + 8] = digits.images[index, :, :]

for jj in range(1, 4):
    fig = plt.figure()
    ax1 = fig.add_subplot(1, 2, 2)
    ax1.imshow(images_ex[jj, :, :], aspect='auto', origin='lower',
               cmap=plt.cm.gray_r, interpolation='nearest')
    ax2 = fig.add_subplot(1, 2, 1)
    ax2.imshow(digits.images[jj, :, :], aspect='auto', origin='lower',
               cmap=plt.cm.gray_r, interpolation='nearest')
    plt.title(digits.target[jj])
    plt.show()

# To apply a classifier on this data, we need to flatten the image, to
# turn the data in a (samples, feature) matrix:
n_samples = len(digits.images)
data_ex = images_ex.reshape((n_samples, -1))

# Create a classifier: multi-layer perceptron
classifier = MLPClassifier(solver='lbfgs', alpha=1e-5, random_state=0,
                           hidden_layer_sizes=(64,))
classifier.fit(data_ex[:n_samples // 2], digits.target[:n_samples // 2])

# Now predict the value of the digit on the second half:
expected = digits.target[n_samples // 2:]
predicted = classifier.predict(data_ex[n_samples // 2:])

print("Classification report for classifier %s:\n%s\n"
      % (classifier, metrics.classification_report(expected, predicted)))
print("Confusion matrix:\n%s" % metrics.confusion_matrix(expected, predicted))
NeuralNetworks.ipynb
gtrichards/PHYS_T480
mit
Well that fell apart quickly! We're at roughly the point where neural networks faded from popularity in the 90s. Perceptrons generated intense interest because they were biologically inspired and could be applied generically to any supervised learning problem. However they weren't extensible to more realistic problems, and for supervised learning there were techniques such as support vector machines that provided better performance and avoided the explosion in training time seen for large perceptrons. Recent interest in neural networks surged in 2012 when a team using a deep convolutional neural network achieved record results classifying objects in the ImageNet data set. Some examples of the types of classification performed on the dataset are shown below. This is clearly much more sophisticated than our basic perceptron. "Deep" networks consist of tens of layers with thousands of neurons. These large networks have become usable thanks to two breakthroughs: the use of sparse layers and the power of graphics processing units (GPUs). Many image processing tasks involve convolving an image with a 2-dimensional kernel as shown below. The sparse layers or convolutional layers in a deep network contain a large number of hidden nodes but very few synapses. The sparseness arises from the relatively small size of a typical convolution kernel (15x15 is a large kernel), so a hidden node representing one output of the convolution is connected to only a few input nodes. Compare this to our previous perceptron, in which every hidden node was connected to every input node. Even though the total number of connections is greatly reduced in the sparse layers, the total number of nodes and connections in a modern deep network is still enormous. Luckily, training these networks turns out to be a great task for GPU acceleration! Serious work using neural networks is almost always done using specialized GPU-accelerated platforms. The Keras framework provides a Python environment for CNN development. Keras uses the TensorFlow module for backend processing. Installing Keras is simple with pip: pip install tensorflow pip install keras
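To make the convolution idea concrete, here is a small hedged sketch (not part of the original notebook) that applies a 2-dimensional kernel to one of the embedded digit images with scipy; the Sobel-like kernel is an arbitrary choice:

```python
from scipy.signal import convolve2d
import numpy as np

# Illustrative only: convolve one 32x32 embedded digit with a small edge-detecting kernel.
# A convolutional layer learns many such kernels; each output pixel depends on only a few inputs.
kernel = np.array([[-1, 0, 1],
                   [-2, 0, 2],
                   [-1, 0, 1]])              # Sobel-like 3x3 kernel
filtered = convolve2d(images_ex[0], kernel, mode='same')
print(images_ex[0].shape, filtered.shape)    # (32, 32) -> (32, 32)
```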
from keras.models import Sequential from keras.layers import Dense, Activation, Dropout, Flatten from keras.layers import Convolution2D, MaxPooling2D from keras.utils import np_utils #Create a model model = Sequential() #Use two sparse layers to learn useful, translation-invariant features model.add(Convolution2D(32,7,7,border_mode='valid',input_shape=(32,32,1))) model.add(Activation('relu')) model.add(Convolution2D(32,5,5)) model.add(Activation('relu')) model.add(MaxPooling2D(pool_size=(2,2))) model.add(Dropout(0.25)) model.add(Flatten()) #Add dense layers to do the actual classification model.add(Dense(128)) model.add(Activation('relu')) model.add(Dropout(0.5)) model.add(Dense(10)) model.add(Activation('softmax')) model.compile(loss='categorical_crossentropy', optimizer='rmsprop',metrics=['accuracy']) model.summary() #Keras has some particular requirements for data formats... dataX = images_ex.reshape(images_ex.shape[0],images_ex.shape[1],images_ex.shape[2],1) dataY = np_utils.to_categorical(digits.target) #Train the model. We get a summary of performance after each training epoch model.fit(dataX, dataY, validation_split=0.1, batch_size=128, nb_epoch=10)
NeuralNetworks.ipynb
gtrichards/PHYS_T480
mit
2 - Dataset You will use the same "Cat vs non-Cat" dataset as in "Logistic Regression as a Neural Network" (Assignment 2). The model you had built had 70% test accuracy on classifying cat vs non-cat images. Hopefully, your new model will perform better! Problem Statement: You are given a dataset ("data.h5") containing: - a training set of m_train images labelled as cat (1) or non-cat (0) - a test set of m_test images labelled as cat and non-cat - each image is of shape (num_px, num_px, 3) where 3 is for the 3 channels (RGB). Let's get more familiar with the dataset. Load the data by running the cell below.
train_x_orig, train_y, test_x_orig, test_y, classes = load_data()
deep-learnining-specialization/1. neural nets and deep learning/resources/Deep Neural Network - Application v3.ipynb
diegocavalca/Studies
cc0-1.0
The following code will show you an image in the dataset. Feel free to change the index and re-run the cell multiple times to see other images.
# Example of a picture index = 73 plt.imshow(train_x_orig[index]) print ("y = " + str(train_y[0,index]) + ". It's a " + classes[train_y[0,index]].decode("utf-8") + " picture.") # Explore your dataset m_train = train_x_orig.shape[0] num_px = train_x_orig.shape[1] m_test = test_x_orig.shape[0] print ("Number of training examples: " + str(m_train)) print ("Number of testing examples: " + str(m_test)) print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)") print ("train_x_orig shape: " + str(train_x_orig.shape)) print ("train_y shape: " + str(train_y.shape)) print ("test_x_orig shape: " + str(test_x_orig.shape)) print ("test_y shape: " + str(test_y.shape))
deep-learnining-specialization/1. neural nets and deep learning/resources/Deep Neural Network - Application v3.ipynb
diegocavalca/Studies
cc0-1.0
As usual, you reshape and standardize the images before feeding them to the network. The code is given in the cell below. <img src="images/imvectorkiank.png" style="width:450px;height:300px;"> <caption><center> <u>Figure 1</u>: Image to vector conversion. <br> </center></caption>
# Reshape the training and test examples train_x_flatten = train_x_orig.reshape(train_x_orig.shape[0], -1).T # The "-1" makes reshape flatten the remaining dimensions test_x_flatten = test_x_orig.reshape(test_x_orig.shape[0], -1).T # Standardize data to have feature values between 0 and 1. train_x = train_x_flatten/255. test_x = test_x_flatten/255. print ("train_x's shape: " + str(train_x.shape)) print ("test_x's shape: " + str(test_x.shape))
deep-learnining-specialization/1. neural nets and deep learning/resources/Deep Neural Network - Application v3.ipynb
diegocavalca/Studies
cc0-1.0
$12,288$ equals $64 \times 64 \times 3$ which is the size of one reshaped image vector. 3 - Architecture of your model Now that you are familiar with the dataset, it is time to build a deep neural network to distinguish cat images from non-cat images. You will build two different models: - A 2-layer neural network - An L-layer deep neural network You will then compare the performance of these models, and also try out different values for $L$. Let's look at the two architectures. 3.1 - 2-layer neural network <img src="images/2layerNN_kiank.png" style="width:650px;height:400px;"> <caption><center> <u>Figure 2</u>: 2-layer neural network. <br> The model can be summarized as: INPUT -> LINEAR -> RELU -> LINEAR -> SIGMOID -> OUTPUT. </center></caption> <u>Detailed Architecture of figure 2</u>: - The input is a (64,64,3) image which is flattened to a vector of size $(12288,1)$. - The corresponding vector: $[x_0,x_1,...,x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ of size $(n^{[1]}, 12288)$. - You then add a bias term and take its relu to get the following vector: $[a_0^{[1]}, a_1^{[1]},..., a_{n^{[1]}-1}^{[1]}]^T$. - You then repeat the same process. - You multiply the resulting vector by $W^{[2]}$ and add your intercept (bias). - Finally, you take the sigmoid of the result. If it is greater than 0.5, you classify it to be a cat. 3.2 - L-layer deep neural network It is hard to represent an L-layer deep neural network with the above representation. However, here is a simplified network representation: <img src="images/LlayerNN_kiank.png" style="width:650px;height:400px;"> <caption><center> <u>Figure 3</u>: L-layer neural network. <br> The model can be summarized as: [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID</center></caption> <u>Detailed Architecture of figure 3</u>: - The input is a (64,64,3) image which is flattened to a vector of size (12288,1). - The corresponding vector: $[x_0,x_1,...,x_{12287}]^T$ is then multiplied by the weight matrix $W^{[1]}$ and then you add the intercept $b^{[1]}$. The result is called the linear unit. - Next, you take the relu of the linear unit. This process could be repeated several times for each $(W^{[l]}, b^{[l]})$ depending on the model architecture. - Finally, you take the sigmoid of the final linear unit. If it is greater than 0.5, you classify it to be a cat. 3.3 - General methodology As usual you will follow the Deep Learning methodology to build the model: 1. Initialize parameters / Define hyperparameters 2. Loop for num_iterations: a. Forward propagation b. Compute cost function c. Backward propagation d. Update parameters (using parameters, and grads from backprop) 4. Use trained parameters to predict labels Let's now implement those two models! 4 - Two-layer neural network Question: Use the helper functions you have implemented in the previous assignment to build a 2-layer neural network with the following structure: LINEAR -> RELU -> LINEAR -> SIGMOID. The functions you may need and their inputs are: python def initialize_parameters(n_x, n_h, n_y): ... return parameters def linear_activation_forward(A_prev, W, b, activation): ... return A, cache def compute_cost(AL, Y): ... return cost def linear_activation_backward(dA, cache, activation): ... return dA_prev, dW, db def update_parameters(parameters, grads, learning_rate): ... return parameters
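Before reaching for the helper functions, here is a hedged NumPy-only sketch of the 2-layer forward pass described above, using random weights and a tiny batch purely to show the shapes (it is not the graded implementation):

```python
import numpy as np

# Illustrative only: shapes of the 2-layer forward pass LINEAR -> RELU -> LINEAR -> SIGMOID.
n_x, n_h, m = 12288, 7, 5            # input size, hidden units, number of examples
X_demo = np.random.rand(n_x, m)      # stand-in for a batch of flattened images
W1, b1 = np.random.randn(n_h, n_x) * 0.01, np.zeros((n_h, 1))
W2, b2 = np.random.randn(1, n_h) * 0.01, np.zeros((1, 1))

A1 = np.maximum(0, W1 @ X_demo + b1)       # (n_h, m) after ReLU
A2 = 1 / (1 + np.exp(-(W2 @ A1 + b2)))     # (1, m) after sigmoid
predictions = (A2 > 0.5).astype(int)        # classify as cat if probability > 0.5
print(A1.shape, A2.shape)
```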
### CONSTANTS DEFINING THE MODEL #### n_x = 12288 # num_px * num_px * 3 n_h = 7 n_y = 1 layers_dims = (n_x, n_h, n_y) # GRADED FUNCTION: two_layer_model def two_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False): """ Implements a two-layer neural network: LINEAR->RELU->LINEAR->SIGMOID. Arguments: X -- input data, of shape (n_x, number of examples) Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples) layers_dims -- dimensions of the layers (n_x, n_h, n_y) num_iterations -- number of iterations of the optimization loop learning_rate -- learning rate of the gradient descent update rule print_cost -- If set to True, this will print the cost every 100 iterations Returns: parameters -- a dictionary containing W1, W2, b1, and b2 """ np.random.seed(1) grads = {} costs = [] # to keep track of the cost m = X.shape[1] # number of examples (n_x, n_h, n_y) = layers_dims # Initialize parameters dictionary, by calling one of the functions you'd previously implemented ### START CODE HERE ### (≈ 1 line of code) parameters = initialize_parameters(n_x, n_h, n_y) ### END CODE HERE ### # Get W1, b1, W2 and b2 from the dictionary parameters. W1 = parameters["W1"] b1 = parameters["b1"] W2 = parameters["W2"] b2 = parameters["b2"] # Loop (gradient descent) for i in range(0, num_iterations): # Forward propagation: LINEAR -> RELU -> LINEAR -> SIGMOID. Inputs: "X, W1, b1". Output: "A1, cache1, A2, cache2". ### START CODE HERE ### (≈ 2 lines of code) A1, cache1 = linear_activation_forward(X, W1, b1, 'relu') A2, cache2 = linear_activation_forward(A1, W2, b2, 'sigmoid') ### END CODE HERE ### # Compute cost ### START CODE HERE ### (≈ 1 line of code) cost = compute_cost(A2, Y) ### END CODE HERE ### # Initializing backward propagation dA2 = - (np.divide(Y, A2) - np.divide(1 - Y, 1 - A2)) # Backward propagation. Inputs: "dA2, cache2, cache1". Outputs: "dA1, dW2, db2; also dA0 (not used), dW1, db1". ### START CODE HERE ### (≈ 2 lines of code) dA1, dW2, db2 = linear_activation_backward(dA2, cache2, 'sigmoid') dA0, dW1, db1 = linear_activation_backward(dA1, cache1, 'relu') ### END CODE HERE ### # Set grads['dWl'] to dW1, grads['db1'] to db1, grads['dW2'] to dW2, grads['db2'] to db2 grads['dW1'] = dW1 grads['db1'] = db1 grads['dW2'] = dW2 grads['db2'] = db2 # Update parameters. ### START CODE HERE ### (approx. 1 line of code) parameters = update_parameters(parameters, grads, learning_rate) ### END CODE HERE ### # Retrieve W1, b1, W2, b2 from parameters W1 = parameters["W1"] b1 = parameters["b1"] W2 = parameters["W2"] b2 = parameters["b2"] # Print the cost every 100 training example if print_cost and i % 100 == 0: print("Cost after iteration {}: {}".format(i, np.squeeze(cost))) if print_cost and i % 100 == 0: costs.append(cost) # plot the cost plt.plot(np.squeeze(costs)) plt.ylabel('cost') plt.xlabel('iterations (per tens)') plt.title("Learning rate =" + str(learning_rate)) plt.show() return parameters
deep-learnining-specialization/1. neural nets and deep learning/resources/Deep Neural Network - Application v3.ipynb
diegocavalca/Studies
cc0-1.0
Run the cell below to train your parameters. See if your model runs. The cost should be decreasing. It may take up to 5 minutes to run 2500 iterations. Check if the "Cost after iteration 0" matches the expected output below, if not click on the square (⬛) on the upper bar of the notebook to stop the cell and try to find your error.
parameters = two_layer_model(train_x, train_y, layers_dims = (n_x, n_h, n_y), num_iterations = 2500, print_cost=True)
deep-learnining-specialization/1. neural nets and deep learning/resources/Deep Neural Network - Application v3.ipynb
diegocavalca/Studies
cc0-1.0
Expected Output: <table> <tr> <td> **Cost after iteration 0**</td> <td> 0.6930497356599888 </td> </tr> <tr> <td> **Cost after iteration 100**</td> <td> 0.6464320953428849 </td> </tr> <tr> <td> **...**</td> <td> ... </td> </tr> <tr> <td> **Cost after iteration 2400**</td> <td> 0.048554785628770206 </td> </tr> </table> Good thing you built a vectorized implementation! Otherwise it might have taken 10 times longer to train this. Now, you can use the trained parameters to classify images from the dataset. To see your predictions on the training and test sets, run the cell below.
predictions_train = predict(train_x, train_y, parameters)
deep-learnining-specialization/1. neural nets and deep learning/resources/Deep Neural Network - Application v3.ipynb
diegocavalca/Studies
cc0-1.0
Expected Output: <table> <tr> <td> **Accuracy**</td> <td> 1.0 </td> </tr> </table>
predictions_test = predict(test_x, test_y, parameters)
deep-learnining-specialization/1. neural nets and deep learning/resources/Deep Neural Network - Application v3.ipynb
diegocavalca/Studies
cc0-1.0
Expected Output: <table> <tr> <td> **Accuracy**</td> <td> 0.72 </td> </tr> </table> Note: You may notice that running the model on fewer iterations (say 1500) gives better accuracy on the test set. This is called "early stopping" and we will talk about it in the next course. Early stopping is a way to prevent overfitting. Congratulations! It seems that your 2-layer neural network has better performance (72%) than the logistic regression implementation (70%, assignment week 2). Let's see if you can do even better with an $L$-layer model. 5 - L-layer Neural Network Question: Use the helper functions you have implemented previously to build an $L$-layer neural network with the following structure: [LINEAR -> RELU]$\times$(L-1) -> LINEAR -> SIGMOID. The functions you may need and their inputs are: python def initialize_parameters_deep(layer_dims): ... return parameters def L_model_forward(X, parameters): ... return AL, caches def compute_cost(AL, Y): ... return cost def L_model_backward(AL, Y, caches): ... return grads def update_parameters(parameters, grads, learning_rate): ... return parameters
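As a hedged sketch of the [LINEAR -> RELU] $\times$ (L-1) -> LINEAR -> SIGMOID structure with random parameters (this is not the graded L_model_forward, just an illustration of the loop):

```python
import numpy as np

# Illustrative only: a generic forward pass over an arbitrary list of layer sizes.
def demo_forward(X, layer_sizes, seed=1):
    rng = np.random.RandomState(seed)
    A = X
    L = len(layer_sizes) - 1
    for l in range(1, L + 1):
        W = rng.randn(layer_sizes[l], layer_sizes[l - 1]) * 0.01
        b = np.zeros((layer_sizes[l], 1))
        Z = W @ A + b
        A = np.maximum(0, Z) if l < L else 1 / (1 + np.exp(-Z))  # ReLU for hidden layers, sigmoid last
    return A

AL = demo_forward(np.random.rand(12288, 3), [12288, 20, 7, 5, 1])
print(AL.shape)  # (1, 3): one probability per example
```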
### CONSTANTS ### layers_dims = [12288, 20, 7, 5, 1] # 5-layer model # GRADED FUNCTION: L_layer_model def L_layer_model(X, Y, layers_dims, learning_rate = 0.0075, num_iterations = 3000, print_cost=False):#lr was 0.009 """ Implements a L-layer neural network: [LINEAR->RELU]*(L-1)->LINEAR->SIGMOID. Arguments: X -- data, numpy array of shape (number of examples, num_px * num_px * 3) Y -- true "label" vector (containing 0 if cat, 1 if non-cat), of shape (1, number of examples) layers_dims -- list containing the input size and each layer size, of length (number of layers + 1). learning_rate -- learning rate of the gradient descent update rule num_iterations -- number of iterations of the optimization loop print_cost -- if True, it prints the cost every 100 steps Returns: parameters -- parameters learnt by the model. They can then be used to predict. """ np.random.seed(1) costs = [] # keep track of cost # Parameters initialization. ### START CODE HERE ### parameters = initialize_parameters_deep(layers_dims) ### END CODE HERE ### # Loop (gradient descent) for i in range(0, num_iterations): # Forward propagation: [LINEAR -> RELU]*(L-1) -> LINEAR -> SIGMOID. ### START CODE HERE ### (≈ 1 line of code) AL, caches = L_model_forward(X, parameters) ### END CODE HERE ### # Compute cost. ### START CODE HERE ### (≈ 1 line of code) cost = compute_cost(AL, Y) ### END CODE HERE ### # Backward propagation. ### START CODE HERE ### (≈ 1 line of code) grads = L_model_backward(AL, Y, caches) ### END CODE HERE ### # Update parameters. ### START CODE HERE ### (≈ 1 line of code) parameters = update_parameters(parameters, grads, learning_rate) ### END CODE HERE ### # Print the cost every 100 training example if print_cost and i % 100 == 0: print ("Cost after iteration %i: %f" %(i, cost)) if print_cost and i % 100 == 0: costs.append(cost) # plot the cost plt.plot(np.squeeze(costs)) plt.ylabel('cost') plt.xlabel('iterations (per tens)') plt.title("Learning rate =" + str(learning_rate)) plt.show() return parameters
deep-learnining-specialization/1. neural nets and deep learning/resources/Deep Neural Network - Application v3.ipynb
diegocavalca/Studies
cc0-1.0
You will now train the model as a 5-layer neural network. Run the cell below to train your model. The cost should decrease on every iteration. It may take up to 5 minutes to run 2500 iterations. Check if the "Cost after iteration 0" matches the expected output below, if not click on the square (⬛) on the upper bar of the notebook to stop the cell and try to find your error.
parameters = L_layer_model(train_x, train_y, layers_dims, num_iterations = 2500, print_cost = True)
deep-learnining-specialization/1. neural nets and deep learning/resources/Deep Neural Network - Application v3.ipynb
diegocavalca/Studies
cc0-1.0
Expected Output: <table> <tr> <td> **Cost after iteration 0**</td> <td> 0.771749 </td> </tr> <tr> <td> **Cost after iteration 100**</td> <td> 0.672053 </td> </tr> <tr> <td> **...**</td> <td> ... </td> </tr> <tr> <td> **Cost after iteration 2400**</td> <td> 0.092878 </td> </tr> </table>
pred_train = predict(train_x, train_y, parameters)
deep-learnining-specialization/1. neural nets and deep learning/resources/Deep Neural Network - Application v3.ipynb
diegocavalca/Studies
cc0-1.0
<table> <tr> <td> **Train Accuracy** </td> <td> 0.985645933014 </td> </tr> </table>
pred_test = predict(test_x, test_y, parameters)
deep-learnining-specialization/1. neural nets and deep learning/resources/Deep Neural Network - Application v3.ipynb
diegocavalca/Studies
cc0-1.0
Expected Output: <table> <tr> <td> **Test Accuracy**</td> <td> 0.8 </td> </tr> </table> Congrats! It seems that your 5-layer neural network has better performance (80%) than your 2-layer neural network (72%) on the same test set. This is good performance for this task. Nice job! Though in the next course on "Improving deep neural networks" you will learn how to obtain even higher accuracy by systematically searching for better hyperparameters (learning_rate, layers_dims, num_iterations, and others you'll also learn in the next course). 6) Results Analysis First, let's take a look at some images the L-layer model labeled incorrectly. This will show a few mislabeled images.
print_mislabeled_images(classes, test_x, test_y, pred_test)
deep-learnining-specialization/1. neural nets and deep learning/resources/Deep Neural Network - Application v3.ipynb
diegocavalca/Studies
cc0-1.0
A few types of images the model tends to do poorly on include: - Cat body in an unusual position - Cat appears against a background of a similar color - Unusual cat color and species - Camera angle - Brightness of the picture - Scale variation (cat is very large or small in image) 7) Test with your own image (optional/ungraded exercise) Congratulations on finishing this assignment. You can use your own image and see the output of your model. To do that: 1. Click on "File" in the upper bar of this notebook, then click "Open" to go on your Coursera Hub. 2. Add your image to this Jupyter Notebook's directory, in the "images" folder 3. Change your image's name in the following code 4. Run the code and check if the algorithm is right (1 = cat, 0 = non-cat)!
## START CODE HERE ## my_image = "my_image.jpg" # change this to the name of your image file my_label_y = [1] # the true class of your image (1 -> cat, 0 -> non-cat) ## END CODE HERE ## fname = "images/" + my_image image = np.array(ndimage.imread(fname, flatten=False)) my_image = scipy.misc.imresize(image, size=(num_px,num_px)).reshape((num_px*num_px*3,1)) my_predicted_image = predict(my_image, my_label_y, parameters) plt.imshow(image) print ("y = " + str(np.squeeze(my_predicted_image)) + ", your L-layer model predicts a \"" + classes[int(np.squeeze(my_predicted_image)),].decode("utf-8") + "\" picture.")
deep-learnining-specialization/1. neural nets and deep learning/resources/Deep Neural Network - Application v3.ipynb
diegocavalca/Studies
cc0-1.0
Create fake "observations" For the purposes of this tutorial, we'll use a simplified version of the fake "observations" used in the Fitting 2 Paper Examples. We'll only use a Johnson:V light curve here and leave the PHOEBE parameters at their true values. We'll also use the spherical distortion method to speed up computations. For a full analysis of the system, see the stack of examples accompanying the paper.
b = phoebe.default_binary() b.set_value_all('distortion_method', 'sphere') b.add_dataset('lc', passband='Johnson:V', dataset='mylcV') b['sma@binary'] = 9.435 b['requiv@primary'] = 1.473 b['requiv@secondary'] = 0.937 b['incl@binary'] = 87.35 b['period@binary'] = 2.345678901 b['q@binary'] = 0.888 b['teff@primary'] = 6342. b['teff@secondary'] = 5684. b['t0@system'] = 1.23456789 b['ecc@orbit'] = 0.148 b['per0@orbit'] = 65.5 b['vgamma@system'] = 185.5 t = np.arange(1., 10.35, 29.44/1440) b['times@mylcV@dataset'] = t b.run_compute()
development/tutorials/gaussian_processes.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Let's now add some correlated noise to the data, composed of Gaussian, exponential, and quadratic terms. This trend mimics instrumental noise and does not bear any astrophysical significance. It is still important that we account for it, as it can affect the values of some astrophysical parameters we do care about (in this case, most prominently the parameters most closely related to the depths of the eclipses: the ratios of temperatures and radii).
np.random.seed(1) noiseV = 0.006 * np.exp( 0.4 + (t-t[0])/(t[-1]-t[0]) ) + np.random.normal(0.0, 0.003, len(t)) - 0.002*(t-t[0])**2/(t[-1]-t[0])**2 + 0.001*(t-t[0])/(t[-1]-t[0]) + 0.0002 noiseV -= np.mean(noiseV)
development/tutorials/gaussian_processes.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Let's also generate a noise model that resembles an astrophysical signal: for example, stellar pulsations, which we will represent with a sum of sine functions:
freqs = [1.97, 1.72, 2.98] # oscillation frequencies amps = [0.034, 0.019, 0.019] # amplitudes deltas = [0.16, 0.34, 0.86] # phase shifts terms = [amps[i]*np.sin(2*np.pi*(freqs[i]*t)+deltas[i]) for i in [0,1,2]] noiseV_puls = np.sum(terms, axis=0) + np.random.normal(0.0, 0.003, len(t)) fluxes = b.get_value('fluxes', context='model') plt.plot(t, fluxes+noiseV, label='instrumental') plt.plot(t, fluxes+noiseV_puls, label='astrophysical') plt.legend()
development/tutorials/gaussian_processes.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Now we have some fake data with two types of noise suitable for modeling with GPs. PHOEBE supports two different GP models: 'sklearn', which uses the GP implementation in scikit-learn, and 'celerite2'. We have found that sklearn works better for instrumental noise, while celerite2 is designed with astrophysical noise in mind. Let's demonstrate how these two work in our case: Instrumental noise: the sklearn GPs backend Let's now add the fluxes with instrumental noise to our bundle and plot the residuals between the true model and the fake observations with added noise:
b['fluxes@mylcV@dataset'] = fluxes + noiseV b['sigmas@mylcV@dataset'] = 0.003*np.ones_like(fluxes) _ = b.plot(y='residuals', show=True)
development/tutorials/gaussian_processes.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
We can add a Gaussian process kernel by providing only the GP backend we want to use, in this case 'sklearn':
b.add_gaussian_process('sklearn') print(b['gp_sklearn01'])
development/tutorials/gaussian_processes.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
The default sklearn kernel is 'white', which models, as the name suggests, white noise with a single 'noise_level' parameter. Let's see what the other options are:
b['kernel@gp_sklearn01'].choices
development/tutorials/gaussian_processes.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
To see in more detail how each of these works, refer to https://scikit-learn.org/stable/modules/gaussian_process.html#gp-kernels. For this trend, we have found that a sum of a DotProduct and an RBF kernel works best (for how we arrived at this, see the Fitting 2 Paper Automated GP Selection Example). Let's switch the kernel type of the current feature to DotProduct and add a new RBF kernel. These will then be summed when running compute.
b['kernel@gp_sklearn01'] = 'dot_product' b.add_gaussian_process('sklearn', kernel='rbf') # set the parameters of the kernels to ones that model the noise trend closely b.set_value('sigma_0', feature='gp_sklearn01', value=0.0198) b.set_value('length_scale', feature='gp_sklearn02', value=71.0)
development/tutorials/gaussian_processes.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Sometimes, there may be some residuals in the eclipses due to the PHOEBE model not fitting the data well. GPs are very sensitive to these residuals and, more often than not, will begin "stealing" signal from the PHOEBE model, rendering it useless. We can prevent this by masking out the points in the eclipse when running GPs on the residuals, with the parameter 'gp_exclude_phases':
b['gp_exclude_phases@mylcV'] = [[-0.04,0.04], [-0.52,0.40]]
development/tutorials/gaussian_processes.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Let's finally compute the model with GPs and plot the result:
b.run_compute(model='model_gps') b.plot(s={'dataset': 0.005, 'model': 0.02}, ls={'model': '-'}, marker={'dataset': '.'}, legend=True) b.plot(s={'dataset': 0.005, 'model': 0.02}, ls={'model': '-'}, marker={'dataset': '.'}, y='residuals', legend=True, show=True)
development/tutorials/gaussian_processes.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
We can see that the GP model accounted for the correlated noise in our data, which in turn allows the PHOEBE model to fit the astrophysical signal more accurately. Astrophysical noise: the celerite2 GPs backend Let's now replace the observed fluxes with those with astrophysical noise and plot the residuals:
b['fluxes@mylcV@dataset'] = fluxes + noiseV_puls b['sigmas@mylcV@dataset'] = 0.003*np.ones_like(fluxes) _ = b.plot(model='latest', y='residuals', show=True)
development/tutorials/gaussian_processes.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
For this type of noise, a Gaussian process kernel with the 'celerite2' backend on average works better than the 'sklearn' ExpSineSquared periodic kernel. We'll add three SHO kernels corresponding to the three frequencies used to generate the noise and approximate the other parameters. For a full description of each kernel and its parameters see the celerite2 documentation.
b.add_gaussian_process('celerite2', kernel='sho', sigma=0.2, rho=1/1.97, tau=3) b.add_gaussian_process('celerite2', kernel='sho', sigma=0.2, rho=1/1.72, tau=3) b.add_gaussian_process('celerite2', kernel='sho', sigma=0.2, rho=1/2.98, tau=3) b.run_compute()
development/tutorials/gaussian_processes.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
Uh-oh, we get an error! The issue is that we already have a sklearn kernel attached to our data and, as of yet, PHOEBE only supports one GP "backend" at a time, either sklearn or celerite2. The two can't be mixed; however, you can mix different kernels from the same module. So, let's disable the sklearn GP features before moving on to compute the celerite2 GPs.
b.disable_feature('gp_sklearn01') b.disable_feature('gp_sklearn02') b.run_compute(model='model_gps', overwrite=True) b.plot(s={'dataset': 0.005, 'model': 0.02}, ls={'model': '-'}, marker={'dataset': '.'}, legend=True, show=True)
development/tutorials/gaussian_processes.ipynb
phoebe-project/phoebe2-docs
gpl-3.0
You can inspect the content of the state using the following command.
def print_truncated_random_state(): """To avoid spamming the outputs, print only part of the state.""" full_random_state = np.random.get_state() print(str(full_random_state)[:460], '...') print_truncated_random_state()
docs/jax-101/05-random-numbers.ipynb
google/jax
apache-2.0
The state is updated by each call to a random function:
np.random.seed(0) print_truncated_random_state() _ = np.random.uniform() print_truncated_random_state()
docs/jax-101/05-random-numbers.ipynb
google/jax
apache-2.0
NumPy allows you to sample either individual numbers or entire vectors of numbers in a single function call. For instance, you may sample a vector of 3 scalars from a uniform distribution by doing:
np.random.seed(0) print(np.random.uniform(size=3))
docs/jax-101/05-random-numbers.ipynb
google/jax
apache-2.0
NumPy provides a sequential equivalence guarantee, meaning that sampling N numbers in a row individually or sampling a vector of N numbers results in the same pseudo-random sequence:
np.random.seed(0) print("individually:", np.stack([np.random.uniform() for _ in range(3)])) np.random.seed(0) print("all at once: ", np.random.uniform(size=3))
docs/jax-101/05-random-numbers.ipynb
google/jax
apache-2.0
Random numbers in JAX JAX's random number generation differs from NumPy's in important ways. The reason is that NumPy's PRNG design makes it hard to simultaneously guarantee a number of desirable properties for JAX, specifically that code must be: reproducible, parallelizable, vectorisable. We will discuss why in the following. First, we will focus on the implications of a PRNG design based on a global state. Consider the code:
import numpy as np np.random.seed(0) def bar(): return np.random.uniform() def baz(): return np.random.uniform() def foo(): return bar() + 2 * baz() print(foo())
docs/jax-101/05-random-numbers.ipynb
google/jax
apache-2.0
The function foo sums two scalars sampled from a uniform distribution. The output of this code can only satisfy requirement #1 if we assume a specific order of execution for bar() and baz(), as native Python does. This doesn't seem to be a major issue in NumPy, as that order is already enforced by Python, but it becomes an issue in JAX. Making this code reproducible in JAX would require enforcing this specific order of execution. This would violate requirement #2, as JAX should be able to parallelize bar and baz when jitting, since these functions don't actually depend on each other. To avoid this issue, JAX does not use a global state. Instead, random functions explicitly consume the state, which is referred to as a key.
from jax import random key = random.PRNGKey(42) print(key)
docs/jax-101/05-random-numbers.ipynb
google/jax
apache-2.0
A key is just an array of shape (2,). 'Random key' is essentially just another word for 'random seed'. However, instead of setting it once as in NumPy, any call of a random function in JAX requires a key to be specified. Random functions consume the key, but do not modify it. Feeding the same key to a random function will always result in the same sample being generated:
print(random.normal(key)) print(random.normal(key))
docs/jax-101/05-random-numbers.ipynb
google/jax
apache-2.0
Note: Feeding the same key to different random functions can result in correlated outputs, which is generally undesirable. The rule of thumb is: never reuse keys (unless you want identical outputs). In order to generate different and independent samples, you must split() the key yourself whenever you want to call a random function:
print("old key", key) new_key, subkey = random.split(key) del key # The old key is discarded -- we must never use it again. normal_sample = random.normal(subkey) print(r" \---SPLIT --> new key ", new_key) print(r" \--> new subkey", subkey, "--> normal", normal_sample) del subkey # The subkey is also discarded after use. # Note: you don't actually need to `del` keys -- that's just for emphasis. # Not reusing the same values is enough. key = new_key # If we wanted to do this again, we would use new_key as the key.
docs/jax-101/05-random-numbers.ipynb
google/jax
apache-2.0
split() is a deterministic function that converts one key into several independent (in the pseudorandomness sense) keys. We keep one of the outputs as the new_key, and can safely use the unique extra key (called subkey) as input into a random function, and then discard it forever. If you wanted to get another sample from the normal distribution, you would split key again, and so on. The crucial point is that you never use the same PRNGKey twice. Since split() takes a key as its argument, we must throw away that old key when we split it. It doesn't matter which part of the output of split(key) we call key, and which we call subkey. They are all pseudorandom numbers with equal status. The reason we use the key/subkey convention is to keep track of how they're consumed down the road. Subkeys are destined for immediate consumption by random functions, while the key is retained to generate more randomness later. Usually, the above example would be written concisely as
key, subkey = random.split(key)
docs/jax-101/05-random-numbers.ipynb
google/jax
apache-2.0
which discards the old key automatically. It's worth noting that split() can create as many keys as you need, not just 2:
key, *forty_two_subkeys = random.split(key, num=43)
docs/jax-101/05-random-numbers.ipynb
google/jax
apache-2.0
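As a hedged sketch of the split-and-consume pattern described above, drawing several independent samples in a loop:

```python
from jax import random

# Illustrative only: keep splitting the key, consume each subkey exactly once.
key = random.PRNGKey(0)
samples = []
for _ in range(3):
    key, subkey = random.split(key)        # new key retained for later, subkey for immediate use
    samples.append(random.normal(subkey))  # each subkey is used once and then discarded
print(samples)
```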
Another difference between NumPy's and JAX's random modules relates to the sequential equivalence guarantee mentioned above. As in NumPy, JAX's random module also allows sampling of vectors of numbers. However, JAX does not provide a sequential equivalence guarantee, because doing so would interfere with the vectorization on SIMD hardware (requirement #3 above). In the example below, sampling 3 values out of a normal distribution individually using three subkeys gives a different result than giving a single key and specifying shape=(3,):
key = random.PRNGKey(42) subkeys = random.split(key, 3) sequence = np.stack([random.normal(subkey) for subkey in subkeys]) print("individually:", sequence) key = random.PRNGKey(42) print("all at once: ", random.normal(key, shape=(3,)))
docs/jax-101/05-random-numbers.ipynb
google/jax
apache-2.0
Array manipulation tests
unimatr = numpy.ones((10,10)) #unimatr duimatr = unimatr*2 #duimatr uniarray = numpy.ones((10,1)) #uniarray triarray = uniarray*3 scalarray = numpy.arange(10) scalarray = scalarray.reshape(10,1) #NB fare il reshape da orizzontale a verticale è come se aggiungesse #una dimensione all'array facendolo diventare un ndarray #(prima era un array semplice, poi diventa un array (x,1), quindi puoi fare trasposto) #NB NUMPY NON FA TRASPOSTO DI ARRAY SEMPLICE! #scalarray scalarray.T ramatricia = numpy.random.randint(2, size=36).reshape((6,6)) ramatricia2 = numpy.random.randint(2, size=36).reshape((6,6)) #WARNING questa operazione moltiplica elemento per elemento #se l'oggetto è di dimensione inferiore moltiplica ogni riga/colonna # o matrice verticale/orizzontale a seconda della forma dell'oggetto duimatr*scalarray #duimatr*scalarray.T #duimatr*duimatr ramatricia*ramatricia2 #numpy dot invece fa prodotto matriciale righe per colonne numpy.dot(duimatr,scalarray) #numpy.dot(duimatr,duimatr) numpy.dot(ramatricia2,ramatricia) duimatr + scalarray
codici/.ipynb_checkpoints/Prove numpy-checkpoint.ipynb
iurilarosa/thesis
gpl-3.0
Building a 3D matrix with outer products
scalarray = numpy.arange(10)
uniarray = numpy.ones(10)
matricia = numpy.outer(scalarray, uniarray)
matricia
tensorio = numpy.outer(matricia, scalarray).reshape(10, 10, 10)
tensorio  # one way of creating an nd array (numpy.ndarray)
codici/.ipynb_checkpoints/Prove numpy-checkpoint.ipynb
iurilarosa/thesis
gpl-3.0
Manipulating 3D NumPy matrices
tensorio = numpy.ones(1000).reshape(10,10,10) tensorio # metodo di creazione array nd (numpy.ndarray) #altro metodo è con comando diretto #tensorio = numpy.ndarray((3,3,3), dtype = int, buffer=numpy.arange(30)) #potrebbe essere utile con la matrice sparsa della peakmap, anche se difficilmente è maneggiabile come matrice densa #oppure # HO FINALMENTE SCOPERTO COME SI METTE IL DTYPE COME SI DEVE!! con "numpy.float32"! #tensorio = numpy.zeros((3,3,3), dtype = numpy.float32) #tensorio.dtype #tensorio scalarray = numpy.arange(10) uniarray = numpy.ones(10) scalamatricia = numpy.outer(scalarray,scalarray) #scalamatricia tensorio * 2 tensorio + 2 tensorio + scalamatricia %time tensorio + scalarray %time tensorio.__add__(scalarray) #danno stesso risultato con tempi paragonabili
codici/.ipynb_checkpoints/Prove numpy-checkpoint.ipynb
iurilarosa/thesis
gpl-3.0
Sparse matrix tests
from scipy import sparse

ramatricia = numpy.random.randint(2, size=25).reshape((5, 5))
ramatricia

# column-efficient format
#sparsamatricia = sparse.csc_matrix(ramatricia)
#print(sparsamatricia)

# row-efficient format
sparsamatricia = sparse.csr_matrix(ramatricia)
print(sparsamatricia)
sparsamatricia.toarray()

righe = numpy.array([0, 0, 0, 1, 2, 3, 3, 4])
colonne = numpy.array([0, 0, 4, 2, 1, 4, 3, 0])
valori = numpy.ones(righe.size)
sparsamatricia = sparse.coo_matrix((valori, (righe, colonne)))
print(sparsamatricia)
sparsamatricia.toarray()
codici/.ipynb_checkpoints/Prove numpy-checkpoint.ipynb
iurilarosa/thesis
gpl-3.0
Matrix products. Inner products. Suppose you have two matrices, a and b, as NumPy arrays: a*b performs the element-wise product (only if a and b have the same shape), while numpy.dot(a,b) performs the matrix (row-by-column) product. Now suppose a and b are scipy.sparse matrices: a*b performs the matrix (row-by-column) product, numpy.dot(a,b) does not work at all, and a.dot(b) performs the matrix (row-by-column) product.
#vari modi per fare prodotti di matrici (con somma con operatore + è lo stesso) densamatricia = sparsamatricia.toarray() #densa-densa prodottoPerElementiDD = densamatricia*densamatricia prodottoMatricialeDD = numpy.dot(densamatricia, densamatricia) #sparsa-densa prodottoMatricialeSD = sparsamatricia*densamatricia prodottoMatricialeSD2 = sparsamatricia.dot(densamatricia) #sparsa-sparsa prodottoMatricialeSS = sparsamatricia*sparsamatricia prodottoMatricialeSS2 = sparsamatricia.dot(sparsamatricia) # "SPARSA".dot("SPARSA O DENSA") FA PRODOTTO MATRICIALE # "SPARSA * SPARSA" FA PRODOTTO MATRICIALE prodottoMatricialeDD - prodottoMatricialeSS #nb somme e sottrazioni tra matrici sparse e dense sono ok # prodotto matriciale tra densa e sparsa funziona come sparsa e sparsa
codici/.ipynb_checkpoints/Prove numpy-checkpoint.ipynb
iurilarosa/thesis
gpl-3.0
Outer products
densarray = numpy.array(["a","b"],dtype = object) densarray2 = numpy.array(["c","d"],dtype = object) numpy.outer(densarray,[1,2]) densamatricia = numpy.array([[1,2],[3,4]]) densamatricia2 = numpy.array([["a","b"],["c","d"]], dtype = object) numpy.outer(densamatricia2,densamatricia).reshape(4,2,2) densarray1 = numpy.array([0,2]) densarray2 = numpy.array([5,0]) densamatricia = numpy.array([[1,2],[3,4]]) densamatricia2 = numpy.array([[0,2],[5,0]]) nrighe = 2 ncolonne = 2 npiani = 4 prodottoEstDD = numpy.outer(densamatricia,densamatricia2).reshape(npiani,ncolonne,nrighe) #prodottoEstDD #prodottoEstDD = numpy.dstack((prodottoEstDD[0,:],prodottoEstDD[1,:])) prodottoEstDD sparsarray1 = sparse.csr_matrix(densarray1) sparsarray2 = sparse.csr_matrix(densarray2) sparsamatricia = sparse.csr_matrix(densamatricia) sparsamatricia2 = sparse.csr_matrix(densamatricia2) prodottoEstSS = sparse.kron(sparsamatricia,sparsamatricia2).toarray() prodottoEstSD = sparse.kron(sparsamatricia,densamatricia2).toarray() prodottoEstSD #prove prodotti esterni # numpy.outer # scipy.sparse.kron #densa-densa prodottoEsternoDD = numpy.outer(densamatricia,densamatricia) #sparsa-densa prodottoEsternoSD = sparse.kron(sparsamatricia,densamatricia) #sparsa-sparsa prodottoEsternoSS = sparse.kron(sparsamatricia,sparsamatricia) prodottoEsternoDD-prodottoEsternoSS # altre prove di prodotti esterni rarray1 = numpy.random.randint(2, size=4) rarray2 = numpy.random.randint(2, size=4) print(rarray1,rarray2) ramatricia = numpy.outer(rarray1,rarray2) unimatricia = numpy.ones((4,4)).astype(int) #ramatricia2 = rarray1 * rarray2.T print(ramatricia,unimatricia) #print(ramatricia) #print("eppoi") #print(ramatricia2) #sparsarray = sparse.csr_matrix(rarray1) #print(sparsarray) #ramatricia2 = #il mio caso problematico è che ho una matrice di cui so tutti gli elementi non zero, #so quante righe ho (i tempi), ma non so quante colonne di freq ho randomcolonne = numpy.random.randint(10)+1 ramatricia = numpy.random.randint(2, size=10*randomcolonne).reshape((10,randomcolonne)) print(ramatricia.shape) #ramatricia nonzeri = numpy.nonzero(ramatricia) ndati = len(nonzeri[0]) ndati ramatricia #ora cerco di fare la matrice sparsa print(ndati) dati = numpy.ones(2*ndati).reshape(ndati,2) dati coordinateRighe = nonzeri[0] coordinateColonne = nonzeri[1] sparsamatricia = sparse.coo_matrix((dati,(coordinateRighe,coordinateColonne))) densamatricia = sparsamatricia.toarray() densamatricia
codici/.ipynb_checkpoints/Prove numpy-checkpoint.ipynb
iurilarosa/thesis
gpl-3.0
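A small hedged check of the equivalence explored above (values made up): for 1-D inputs, numpy.outer(u, v) and scipy.sparse.kron of the corresponding sparse vectors agree once the first vector is transposed into a column, which gives kron the same (len(u), len(v)) layout as outer.

import numpy as np
from scipy import sparse

u = np.array([0, 2, 3])
v = np.array([5, 0, 1])

dense_outer = np.outer(u, v)   # shape (3, 3), entry (i, j) = u[i] * v[j]
sparse_outer = sparse.kron(sparse.csr_matrix(u).T,
                           sparse.csr_matrix(v)).toarray()

assert np.array_equal(dense_outer, sparse_outer)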
Here I try to apply operations to an array using arrays of coordinates
matrice = numpy.arange(30).reshape(10,3) matrice righe = numpy.array([1,0,1,1]) colonne = numpy.array([2,0,2,2]) pesi = numpy.array([100,200,300,10]) print(righe,colonne) matrice[righe,colonne] matrice[righe,colonne] = (matrice[righe,colonne] + numpy.array([100,200,300,10])) matrice %matplotlib inline a = pyplot.imshow(matrice) numpy.add.at(matrice, [righe,colonne],pesi) matrice %matplotlib inline a = pyplot.imshow(matrice) matrice
codici/.ipynb_checkpoints/Prove numpy-checkpoint.ipynb
iurilarosa/thesis
gpl-3.0
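The key point the cell above is probing: with repeated coordinates, fancy-indexed assignment like matrice[righe, colonne] += pesi applies each duplicate index only once, while numpy.add.at accumulates every occurrence. A minimal hedged illustration with made-up values:

import numpy as np

m1 = np.zeros((3, 3))
m2 = np.zeros((3, 3))
rows = np.array([1, 1, 1])
cols = np.array([2, 2, 2])
weights = np.array([10.0, 20.0, 30.0])

m1[rows, cols] += weights              # buffered: only the last duplicate write survives
np.add.at(m2, (rows, cols), weights)   # unbuffered: all three contributions are summed

print(m1[1, 2])  # 30.0 (last write wins)
print(m2[1, 2])  # 60.0 (10 + 20 + 30)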
Plot tests
from matplotlib import pyplot %matplotlib inline ##CURRENTLY UNUSED, DO NOT RUN x = numpy.random.randint(10,size = 10) y = numpy.random.randint(10,size = 10) pyplot.scatter(x,y, s = 5) #note: imshow only works with a 2d array #visualizing a matrix; apparently only dense matrices work a = pyplot.imshow(densamatricia) #a = pyplot.imshow(sparsamatricia) #c = pyplot.matshow(densamatricia) #spy, instead, also works for sparse matrices! pyplot.spy(sparsamatricia,precision=0.01, marker = ".", markersize=10) #alternatively, a scatterplot of the coordinates from the dataframe b = pyplot.scatter(coordinateColonne,coordinateRighe, s = 2) import seaborn %matplotlib inline sbRegplot = seaborn.regplot(x=coordinateRighe, y=coordinateColonne, color="g", fit_reg=False) import pandas coordinateRighe = coordinateRighe.reshape(len(coordinateRighe),1) coordinateColonne = coordinateColonne.reshape(len(coordinateColonne),1) #print([coordinateRighe,coordinateColonne]) coordinate = numpy.concatenate((coordinateRighe,coordinateColonne),axis = 1) coordinate tabella = pandas.DataFrame(coordinate) tabella.columns = ["righe", "colonne"] sbPlmplot = seaborn.lmplot(x = "righe", y = "colonne", data = tabella, fit_reg=False)
codici/.ipynb_checkpoints/Prove numpy-checkpoint.ipynb
iurilarosa/thesis
gpl-3.0
A simple example of my problem
import numpy from scipy import sparse import multiprocessing from matplotlib import pyplot #first i build a matrix of some x positions vs time datas in a sparse format matrix = numpy.random.randint(2, size = 100).astype(float).reshape(10,10) x = numpy.nonzero(matrix)[0] times = numpy.nonzero(matrix)[1] weights = numpy.random.rand(x.size) import scipy.io mint = numpy.amin(times) maxt = numpy.amax(times) scipy.io.savemat('debugExamples/numpy.mat',{ 'matrix':matrix, 'x':x, 'times':times, 'weights':weights, 'mint':mint, 'maxt':maxt, }) times #then i define an array of y positions nStepsY = 5 y = numpy.arange(1,nStepsY+1) # provo a iterare # VERSIONE CON HACK CON SPARSE verificato viene uguale a tutti gli altri metodi più semplici che ho provato # ma ha problemi con parallelizzazione nRows = nStepsY nColumns = 80 y = numpy.arange(1,nStepsY+1) image = numpy.zeros((nRows, nColumns)) def itermatrix(ithStep): yTimed = y[ithStep]*times positions = (numpy.round(x-yTimed)+50).astype(int) fakeRow = numpy.zeros(positions.size) matrix = sparse.coo_matrix((weights, (fakeRow, positions))).todense() matrix = numpy.ravel(matrix) missColumns = (nColumns-matrix.size) zeros = numpy.zeros(missColumns) matrix = numpy.concatenate((matrix, zeros)) return matrix #for i in numpy.arange(nStepsY): # image[i] = itermatrix(i) #or imageSparsed = list(map(itermatrix, range(nStepsY))) imageSparsed = numpy.array(imageSparsed) scipy.io.savemat('debugExamples/numpyResult.mat', {'imageSparsed':imageSparsed}) a = pyplot.imshow(imageSparsed, aspect = 10) pyplot.show() import numpy from scipy import sparse import multiprocessing from matplotlib import pyplot #first i build a matrix of some x positions vs time datas in a sparse format matrix = numpy.random.randint(2, size = 100).astype(float).reshape(10,10) times = numpy.nonzero(matrix)[0] freqs = numpy.nonzero(matrix)[1] weights = numpy.random.rand(times.size) #then i define an array of y positions nStepsSpindowns = 5 spindowns = numpy.arange(1,nStepsSpindowns+1) #PROVA CON BINCOUNT def mapIt(ithStep): ncolumns = 80 image = numpy.zeros(ncolumns) sdTimed = spindowns[ithStep]*times positions = (numpy.round(freqs-sdTimed)+50).astype(int) values = numpy.bincount(positions,weights) values = values[numpy.nonzero(values)] positions = numpy.unique(positions) image[positions] = values return image %time imageMapped = list(map(mapIt, range(nStepsSpindowns))) imageMapped = numpy.array(imageMapped) %matplotlib inline a = pyplot.imshow(imageMapped, aspect = 10) # qui provo fully vectorial def fullmatrix(nRows, nColumns): spindowns = numpy.arange(1,nStepsSpindowns+1) image = numpy.zeros((nRows, nColumns)) sdTimed = numpy.outer(spindowns,times) freqs3d = numpy.outer(numpy.ones(nStepsSpindowns),freqs) weights3d = numpy.outer(numpy.ones(nStepsSpindowns),weights) spindowns3d = numpy.outer(spindowns,numpy.ones(times.size)) positions = (numpy.round(freqs3d-sdTimed)+50).astype(int) matrix = sparse.coo_matrix((numpy.ravel(weights3d), (numpy.ravel(spindowns3d), numpy.ravel(positions)))).todense() return matrix %time image = fullmatrix(nStepsSpindowns, 80) a = pyplot.imshow(image, aspect = 10) pyplot.show()
codici/.ipynb_checkpoints/Prove numpy-checkpoint.ipynb
iurilarosa/thesis
gpl-3.0
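One way to avoid the zero-padding step used in itermatrix above is numpy.bincount's minlength argument, which fixes the output length directly and sums duplicate positions automatically. A hedged sketch with made-up positions and weights:

import numpy as np

nColumns = 80
positions = np.array([3, 3, 10, 41])          # illustrative column indices
weights = np.array([0.5, 1.5, 2.0, 0.25])     # illustrative weights

# One histogram row of fixed width; repeated positions are accumulated.
row = np.bincount(positions, weights=weights, minlength=nColumns)

print(row.shape)        # (80,)
print(row[3], row[41])  # 2.0 0.25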
Comparisons and debugging!
#confronto con codice ORIGINALE in matlab immagineOrig = scipy.io.loadmat('debugExamples/dbOrigResult.mat')['binh_df0'] a = pyplot.imshow(immagineOrig[:,0:80], aspect = 10) pyplot.show() #PROVA CON BINCOUNT def mapIt(ithStep): ncolumns = 80 image = numpy.zeros(ncolumns) yTimed = y[ithStep]*times positions = (numpy.round(x-yTimed)+50).astype(int) values = numpy.bincount(positions,weights) values = values[numpy.nonzero(values)] positions = numpy.unique(positions) image[positions] = values return image %time imageMapped = list(map(mapIt, range(nStepsY))) imageMapped = numpy.array(imageMapped) %matplotlib inline a = pyplot.imshow(imageMapped, aspect = 10) # qui provo con vettorializzazione di numpy (apply along axis) nrows = nStepsY ncolumns = 80 matrix = numpy.zeros(nrows*ncolumns).reshape(nrows,ncolumns) def applyIt(image): ithStep = 1 image = numpy.zeros(ncolumns) yTimed = y[ithStep]*times positions = (numpy.round(x-yTimed)+50).astype(int) #print(positions) values = numpy.bincount(positions,weights) values = values[numpy.nonzero(values)] positions = numpy.unique(positions) image[positions] = values return image imageApplied = numpy.apply_along_axis(applyIt,1,matrix) a = pyplot.imshow(imageApplied, aspect = 10) # qui provo fully vectorial def fullmatrix(nRows, nColumns): y = numpy.arange(1,nStepsY+1) image = numpy.zeros((nRows, nColumns)) yTimed = numpy.outer(y,times) x3d = numpy.outer(numpy.ones(nStepsY),x) weights3d = numpy.outer(numpy.ones(nStepsY),weights) y3d = numpy.outer(y,numpy.ones(x.size)) positions = (numpy.round(x3d-yTimed)+50).astype(int) matrix = sparse.coo_matrix((numpy.ravel(weights3d), (numpy.ravel(y3d), numpy.ravel(positions)))).todense() return matrix %time image = fullmatrix(nStepsY, 80) a = pyplot.imshow(image, aspect = 10) pyplot.show() imageMapped = list(map(itermatrix, range(nStepsY))) imageMapped = numpy.array(imageMapped) a = pyplot.imshow(imageMapped, aspect = 10) pyplot.show() # prova con numpy.put nStepsY = 5 def mapIt(ithStep): ncolumns = 80 image = numpy.zeros(ncolumns) yTimed = y[ithStep]*times positions = (numpy.round(x-yTimed)+50).astype(int) values = numpy.bincount(positions,weights) values = values[numpy.nonzero(values)] positions = numpy.unique(positions) image[positions] = values return image %time imagePutted = list(map(mapIt, range(nStepsY))) imagePutted = numpy.array(imagePutted) %matplotlib inline a = pyplot.imshow(image, aspect = 10) pyplot.show()
codici/.ipynb_checkpoints/Prove numpy-checkpoint.ipynb
iurilarosa/thesis
gpl-3.0
Next we will build a set of x values from zero to 4&pi; in increments of 0.1 radians to use in our plot. The x-values are stored in a numpy array. Numpy's arange() function has three arguments: start, stop, step. We start at zero, stop at 4&pi; and step by 0.1 radians. Then we define a variable y as the sine of x using numpy's sin() function.
x = np.arange(0,4*np.pi,0.1) # start,stop,step y = np.sin(x)
content/code/matplotlib_plots/plotting_trig_functions.ipynb
ProfessorKazarinoff/staticsite
gpl-3.0
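A quick hedged check of the arange call (not part of the original lesson): the stop value 4&pi; is excluded, so the last sample falls just below 4&pi;. numpy.linspace is an alternative when the endpoint should be included.

import numpy as np

x = np.arange(0, 4*np.pi, 0.1)
print(x[0], x[-1], len(x))      # 0.0, about 12.5 (just under 4*pi ~ 12.566), 126 samples

# Alternative: fix the number of points and include the endpoint explicitly.
x_lin = np.linspace(0, 4*np.pi, 126)
print(x_lin[-1])                # exactly 4*pi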
To create the plot, we use matplotlib's plt.plot() function. The two arguments are our numpy arrays x and y. The line plt.show() will show the finished plot.
plt.plot(x,y) plt.show()
content/code/matplotlib_plots/plotting_trig_functions.ipynb
ProfessorKazarinoff/staticsite
gpl-3.0
Next let's build a plot which shows two trig functions, sine and cosine. We will create the same two numpy arrays x and y as before, and add a third numpy array z which is the cosine of x.
x = np.arange(0,4*np.pi,0.1) # start,stop,step y = np.sin(x) z = np.cos(x)
content/code/matplotlib_plots/plotting_trig_functions.ipynb
ProfessorKazarinoff/staticsite
gpl-3.0
To plot both sine and cosine on the same set of axes, we need to include two pairs of x,y values in our plt.plot() arguments. The first pair is x,y, which corresponds to the sine function. The second pair is x,z, which corresponds to the cosine function. If you try to pass only three arguments, as in plt.plot(x,y,z), your plot will not show sine and cosine on the same set of axes.
plt.plot(x,y,x,z) plt.show()
content/code/matplotlib_plots/plotting_trig_functions.ipynb
ProfessorKazarinoff/staticsite
gpl-3.0
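If the paired-argument form feels opaque, an equivalent approach (a sketch, not from the original tutorial) is to call plt.plot() once per curve; matplotlib keeps drawing on the same axes until plt.show() is called.

import numpy as np
import matplotlib.pyplot as plt

x = np.arange(0, 4*np.pi, 0.1)
plt.plot(x, np.sin(x))   # first curve
plt.plot(x, np.cos(x))   # second curve, same axes
plt.show()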
Let's build one more plot, a plot which shows the sine and cosine of x and also includes axis labels, a title and a legend. We build the numpy arrays using the trig functions as before:
x = np.arange(0,4*np.pi,0.1) # start,stop,step y = np.sin(x) z = np.cos(x)
content/code/matplotlib_plots/plotting_trig_functions.ipynb
ProfessorKazarinoff/staticsite
gpl-3.0
The plt.plot() call is the same as before, using two pairs of x and y values. To add axis labels we will use the following methods: | matplotlib method | description | example | | ----------------- | ----------- | ------- | | plt.xlabel() | x-axis label | plt.xlabel('x values from 0 to 4pi') | | plt.ylabel() | y-axis label | plt.ylabel('sin(x) and cos(x)') | | plt.title() | plot title | plt.title('Plot of sin and cos from 0 to 4pi') | | plt.legend([ ]) | legend | plt.legend(['sin(x)', 'cos(x)']) | Note that the plt.legend() method requires a list of strings (['string1', 'string2']), where the individual strings are enclosed in quotes, then separated by commas and finally enclosed in brackets to make a list. The first string in the list corresponds to the first x,y pair passed to plt.plot(); the second string in the list corresponds to the second x,y pair in the plt.plot() line.
plt.plot(x,y,x,z) plt.xlabel('x values from 0 to 4pi') # string must be enclosed with quotes ' ' plt.ylabel('sin(x) and cos(x)') plt.title('Plot of sin and cos from 0 to 4pi') plt.legend(['sin(x)', 'cos(x)']) # legend entries as separate strings in a list plt.show()
content/code/matplotlib_plots/plotting_trig_functions.ipynb
ProfessorKazarinoff/staticsite
gpl-3.0
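An alternative worth knowing (assumed standard matplotlib behaviour, not part of the original lesson): pass label= to each plot call and then call plt.legend() with no arguments, which avoids having to keep the legend list in the same order as the plotted pairs.

import numpy as np
import matplotlib.pyplot as plt

x = np.arange(0, 4*np.pi, 0.1)
plt.plot(x, np.sin(x), label='sin(x)')
plt.plot(x, np.cos(x), label='cos(x)')
plt.xlabel('x values from 0 to 4pi')
plt.ylabel('sin(x) and cos(x)')
plt.title('Plot of sin and cos from 0 to 4pi')
plt.legend()   # labels are taken from the label= arguments
plt.show()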
code: stock code; name: company name; industry: industry; area: region; pe: price-to-earnings ratio; outstanding: tradable shares (hundred million); totals: total shares (hundred million); totalAssets: total assets (ten thousand); liquidAssets: current assets; fixedAssets: fixed assets; reserved: capital reserve; reservedPerShare: capital reserve per share; esp: earnings per share; bvps: book value per share; pb: price-to-book ratio; timeToMarket: listing date; undp: undistributed profit; perundp: undistributed profit per share; rev: revenue YoY (%); profit: profit YoY (%); gpr: gross profit margin (%); npr: net profit margin (%); holders: number of shareholders ['name', 'pe', 'outstanding', 'totals', 'totalAssets', 'liquidAssets', 'fixedAssets', 'esp', 'bvps', 'pb', 'perundp', 'rev', 'profit', 'gpr', 'npr', 'holders']
col_show = ['name', 'open', 'pre_close', 'price', 'high', 'low', 'volume', 'amount', 'time', 'code'] initial_letter = ['HTGD','OFKJ','CDKJ','ZJXC','GXKJ','FHTX','DZJG'] code =[] for letter in initial_letter: code.append(df[df['UP']==letter].code[0]) #print(code) if code != '': #not empty != '' df_price = ts.get_realtime_quotes(code) #print(df_price) #df_price.columns.values.tolist() df_price[col_show]
sample_code/date_utils.ipynb
yunfeiz/py_learnt
apache-2.0
TO-DO: Add the map from initials to codes. Build up a dataframe with fundamentals and indicators. For Leadings, we need to cache more data from before the beginning date.
from matplotlib.mlab import csv2rec df=ts.get_k_data("002456",start='2018-01-05',end='2018-01-09') df.to_csv("temp.csv") r=csv2rec("temp.csv") #r.date import time, datetime #str = df[df.code == '600487'][clommun_show].name.values #print(str) today=datetime.date.today() yesterday = today - datetime.timedelta(1) #print(today, yesterday) i = datetime.datetime.now() print ("The current date and time is %s" % i) print ("The ISO-format date and time is %s" % i.isoformat() ) print ("The current year is %s" %i.year) print ("The current month is %s" %i.month) print ("The current day is %s" %i.day) print ("The dd/mm/yyyy format is %s/%s/%s" % (i.day, i.month, i.year) ) print ("The current hour is %s" %i.hour) print ("The current minute is %s" %i.minute) print ("The current second is %s" %i.second) import time localtime = time.localtime(time.time()) print("Local time is:", localtime) # format as 2016-03-20 11:45:39 print(time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())) # format as Sat Mar 28 22:24:24 2016 print(time.strftime("%a %b %d %H:%M:%S %Y", time.localtime())) #!/usr/bin/python # -*- coding: UTF-8 -*- import calendar cal = calendar.month(2019, 3) #print (cal)
sample_code/date_utils.ipynb
yunfeiz/py_learnt
apache-2.0
(1024, 3) The numbers of rows and columns both match what we scraped, so this check passes. Word segmentation. Next we need to do an important piece of work: word segmentation. We first import the jieba segmentation package. This time we are not processing a single text but more than 1,000 text records, so we need to apply the work across the whole dataframe. That means first writing a function that segments a single text. With that function in place, we can call it repeatedly to process all of the text (body) fields in the dataframe. You could of course write a loop yourself, but here we use the more efficient apply function. (If you are interested in this function, you can watch the tutorial video linked there for a detailed introduction.) The following block of code may take a little while to run. Please be patient.
import jieba def chinese_word_cut(mytext): return " ".join(jieba.cut(mytext)) df["content_cutted"] = df.content.apply(chinese_word_cut) #once this has finished, we need to check whether the text has been segmented correctly. df.content_cutted.head() #text vectorization from sklearn.feature_extraction.text import TfidfVectorizer, CountVectorizer n_features = 1000 tf_vectorizer = CountVectorizer(strip_accents = 'unicode', max_features=n_features, stop_words='english', max_df = 0.5, min_df = 10) tf = tf_vectorizer.fit_transform(df.content_cutted)
jupyter_notebook/datascience.ipynb
xiaoxiaoyao/MyApp
unlicense
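A tiny hedged illustration of what chinese_word_cut does to a single string before it is applied to the whole column (the sample sentence is made up):

import jieba

sample = "我今天学习自然语言处理"
print(" ".join(jieba.cut(sample)))
# e.g. "我 今天 学习 自然语言 处理" (the exact segmentation depends on the jieba dictionary)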
We need to set the number of topics manually. This requirement surprises many people: how am I supposed to know how many topics this pile of articles contains?! Don't worry. When applying LDA, specifying (or, if you prefer, guessing) the number of topics is mandatory. If you only need to split the articles into a few coarse categories, you can set the number low; conversely, if you want to identify very fine-grained topics, increase it. If you are not satisfied with the resulting split, you can keep iterating and tune the topic count. Here we start by trying 5 topics.
#apply the LDA method from sklearn.decomposition import LatentDirichletAllocation n_topics = 5 lda = LatentDirichletAllocation(n_topics=n_topics, max_iter=50, learning_method='online', learning_offset=50., random_state=0) #this step is computationally heavy and will run for a while; the Jupyter Notebook may look unresponsive while it executes. Just wait a little, there is no need to worry. lda.fit(tf) #a topic does not have a definite name; it is characterized by a list of keywords. We define the following function to display the top keywords of each topic: def print_top_words(model, feature_names, n_top_words): for topic_idx, topic in enumerate(model.components_): print("Topic #%d:" % topic_idx) print(" ".join([feature_names[i] for i in topic.argsort()[:-n_top_words - 1:-1]])) print() #with the function defined, we tentatively output the top 20 keywords for each topic. n_top_words = 20 #the following command prints the keyword list of each topic in turn: tf_feature_names = tf_vectorizer.get_feature_names() print_top_words(lda, tf_feature_names, n_top_words)
jupyter_notebook/datascience.ipynb
xiaoxiaoyao/MyApp
unlicense
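Beyond the per-topic keyword lists, it can help to look at the document-topic distribution; lda.transform returns one row per document with its topic weights. A hedged sketch that assumes the fitted lda model and the tf matrix from the cell above:

import numpy as np

doc_topic = lda.transform(tf)          # shape: (n_documents, n_topics)
print(doc_topic.shape)

# Dominant topic of the first few documents (illustrative inspection only).
print(np.argmax(doc_topic[:5], axis=1))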
At this point LDA has successfully extracted the topics for us. But I know you are not entirely satisfied, because the result is not very intuitive. So let's make it more intuitive. Run the following command and something interesting will happen.
import pyLDAvis import pyLDAvis.sklearn pyLDAvis.enable_notebook() pyLDAvis.sklearn.prepare(lda, tf, tf_vectorizer)
jupyter_notebook/datascience.ipynb
xiaoxiaoyao/MyApp
unlicense
Load data
lastnode = 5000 datafile = open('/var/datasets/wdc/small-pld-arc') G = nx.DiGraph() for line in datafile: ijstr = line.split('\t') i=int(ijstr[0]) j=int(ijstr[1]) if i>lastnode: break if j>lastnode: continue G.add_edge(i,j) datafile.close() Gorig = G.copy() indexfile = open('/var/datasets/wdc/small-pld-index') index = {} for line in indexfile: namei = line.split('\t') name=namei[0] i=int(namei[1]) if i>lastnode: break index[i]=name indexfile.close() def cleanupgraph(G): comp = nx.weakly_connected_components(G.copy()) for c in comp: if len(c)<4: G.remove_nodes_from(c) def graphcleanup(G): for (node, deg) in G.degree_iter(): if deg==0: G.remove_node(node) elif deg==1: if G.degree((G.predecessors(node) + G.successors(node))[0]) == 1: G.remove_node(node) elif deg==2 and G.in_degree(node)==1: if (G.predecessors(node) == G.successors(node)) and G.degree((G.predecessors(node) + G.successors(node))[0]) == 2: G.remove_node(node) cleanupgraph(G) G.size() Gorig.number_of_nodes() Gorig.size()
randomwalks/WDC Random Walk.ipynb
mitliagkas/graphs
mit
Convert to Javascript for interactivity Adapted from: http://nbviewer.ipython.org/github/ipython-books/cookbook-code/blob/master/notebooks/chapter06_viz/04_d3.ipynb From: http://networkx.github.io/documentation/latest/examples/javascript/force.html
#from IPython.core.display import display_javascript import json from networkx.readwrite import json_graph d = json_graph.node_link_data(G) for node in d['nodes']: node['name']=node['id'] node['value']=G.degree(node['id']) if True: node['group'] = node['id'] % 4 else: if node['id']<10: node['group']=0#node['id'] % 4 else: node['group']=1#node['id'] % 4 d['adjacency'] = json_graph.adjacency_data(G)['adjacency'] json.dump(d, open('rwgraph.json','w')) %%html <div id="d3-example"></div> <style> .node {stroke: #fff; stroke-width: 1.5px;} .link {stroke: #999; stroke-opacity: .3;} </style> <script src="randomwalk.js"></script>
randomwalks/WDC Random Walk.ipynb
mitliagkas/graphs
mit
Uses: https://github.com/mbostock/d3/wiki/Force-Layout http://bl.ocks.org/mbostock/4062045
Javascript(filename='force.js') L = nx.linalg.laplacianmatrix.directed_laplacian_matrix(G) Linv = np.linalg.inv(L) L.shape n = L.shape[0] Reff = np.zeros((n,n)) Gsparse = G.copy() graphcleanup(Gsparse) nodelookup={Gsparse.nodes()[idx]:idx for idx in range(len(Gsparse.nodes()))} edge = np.zeros((n,1)) for (i,j) in Gsparse.edges_iter(): edge[nodelookup[i]] = 1 edge[nodelookup[j]] = -1 Reff[nodelookup[i],nodelookup[j]] = edge.T.dot(Linv.dot(edge)) edge[[nodelookup[i]]] = 0 edge[[nodelookup[j]]] = 0 ReffAbs=np.abs(Reff)+np.abs(Reff.T)
randomwalks/WDC Random Walk.ipynb
mitliagkas/graphs
mit
If you call arr.argsort()[:3], it will give you the indices of the 3 smallest elements, for example array([0, 2, 1], dtype=int64). So, for the n smallest elements, you should call arr.argsort()[:n]
res = ReffAbs.reshape(n**2) argp = np.argpartition(res,n**2-n) mask = (ReffAbs < res[argp[-int(0.5*Gsparse.number_of_nodes())]]) & (ReffAbs >0) for (i,j) in Gsparse.edges(): if mask[nodelookup[i],nodelookup[j]]: Gsparse.remove_edge(i,j) cleanupgraph(Gsparse) d = json_graph.node_link_data(Gsparse) for node in d['nodes']: node['name']=index[node['id']] node['value']=Gsparse.degree(node['id']) node['group']=index[node['id']][-3:] json.dump(d, open('graph.json','w')) Gorig.number_of_edges() Gsparse.number_of_edges() Gsparse.number_of_nodes() GsparseAdj = nx.linalg.adjacency_matrix(Gorig).toarray() GsparseAdj = nx.to_numpy_matrix(Gorig) GsparseAdj[ReffAbs < res[argp[-300]]] = 0 Gsparse = nx.from_numpy_matrix? Gsparse = nx.from_numpy_matrix Gsparse = nx.from_numpy_matrix(GsparseAdj, create_using=nx.DiGraph()) Gsparse = nx.from_numpy_matrix edge = np.zeros((n,1)) for i in range(n): if i % int(math.ceil((float(10)/100)*n)) == 0: print int(math.floor(100*float(i)/n)), '%' edge[i] = 1 for j in range(i+1, n): edge[j] = -1 Reff[i,j] = edge.T.dot(Linv.dot(edge)) edge[j] = 0 edge[i] = 0
randomwalks/WDC Random Walk.ipynb
mitliagkas/graphs
mit
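A hedged comparison of the two ways to grab the n smallest entries discussed above: argsort()[:n] fully sorts, while argpartition(arr, n)[:n] only partitions (the n indices come back unordered), which is what the sparsification cell relies on for speed. Values are made up.

import numpy as np

arr = np.array([7.0, 1.0, 5.0, 3.0, 9.0, 2.0])
n = 3

smallest_sorted = arr.argsort()[:n]             # indices [1, 5, 3] -> values 1, 2, 3 in order
smallest_part = np.argpartition(arr, n)[:n]     # same indices, arbitrary order

print(arr[smallest_sorted])
print(np.sort(arr[smallest_part]))              # sorted for comparison: [1. 2. 3.]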
Check if there are missing values.
# for col in varsom_df.columns.values: # print(f'{col}: {varsom_df[col].unique()} \n') # Find the amount of NaN values in each column print(varsom_df.isnull().sum().sort_values(ascending=False))
aps/notebooks/ml_varsom/preprocessing.ipynb
kmunve/APS
mit
Fill missing values where necessary.
varsom_df['mountain_weather_wind_speed'] = varsom_df['mountain_weather_wind_speed'].fillna('None') varsom_df['mountain_weather_wind_direction'] = varsom_df['mountain_weather_wind_direction'].fillna('None') print(varsom_df.isnull().sum().sort_values(ascending=False))
aps/notebooks/ml_varsom/preprocessing.ipynb
kmunve/APS
mit
Feature engineering. Re-label and re-classify variables where necessary. Add an avalanche problem severity index, based on its attributes size, distribution and sensitivity. When using shift or filling values using the mean or similar, make sure to first sort the individual regions and seasons by date.
varsom_df['date'] = pd.to_datetime(varsom_df['date_valid'], infer_datetime_format=True) def add_prevday_features(df): ### danger level df['danger_level_prev1day'] = df['danger_level'].shift(1) df['danger_level_name_prev1day'] = df['danger_level_name'].shift(1) df['danger_level_prev2day'] = df['danger_level'].shift(2) df['danger_level_name_prev2day'] = df['danger_level_name'].shift(2) df['danger_level_prev3day'] = df['danger_level'].shift(3) df['danger_level_name_prev3day'] = df['danger_level_name'].shift(3) ### avalanche problem df['avalanche_problem_1_cause_id_prev1day'] = df['avalanche_problem_1_cause_id'].shift(1) df['avalanche_problem_1_problem_type_id_prev1day'] = df['avalanche_problem_1_problem_type_id'].shift(1) df['avalanche_problem_1_cause_id_prev2day'] = df['avalanche_problem_1_cause_id'].shift(2) df['avalanche_problem_1_problem_type_id_prev2day'] = df['avalanche_problem_1_problem_type_id'].shift(2) df['avalanche_problem_1_cause_id_prev3day'] = df['avalanche_problem_1_cause_id'].shift(3) df['avalanche_problem_1_problem_type_id_prev3day'] = df['avalanche_problem_1_problem_type_id'].shift(3) df['avalanche_problem_2_cause_id_prev1day'] = df['avalanche_problem_2_cause_id'].shift(1) df['avalanche_problem_2_problem_type_id_prev1day'] = df['avalanche_problem_2_problem_type_id'].shift(1) df['avalanche_problem_2_cause_id_prev2day'] = df['avalanche_problem_2_cause_id'].shift(2) df['avalanche_problem_2_problem_type_id_prev2day'] = df['avalanche_problem_2_problem_type_id'].shift(2) df['avalanche_problem_2_cause_id_prev3day'] = df['avalanche_problem_2_cause_id'].shift(3) df['avalanche_problem_2_problem_type_id_prev3day'] = df['avalanche_problem_2_problem_type_id'].shift(3) ### weather df['mountain_weather_temperature_max_prev1day'] = df['mountain_weather_temperature_max'].shift(1) df['mountain_weather_temperature_max_prev2day'] = df['mountain_weather_temperature_max'].shift(2) df['mountain_weather_temperature_max_prev3day'] = df['mountain_weather_temperature_max'].shift(3) df['mountain_weather_temperature_min_prev1day'] = df['mountain_weather_temperature_min'].shift(1) df['mountain_weather_temperature_min_prev2day'] = df['mountain_weather_temperature_min'].shift(2) df['mountain_weather_temperature_min_prev3day'] = df['mountain_weather_temperature_min'].shift(3) df['mountain_weather_precip_region_prev1day'] = df['mountain_weather_precip_region'].shift(1) df['mountain_weather_precip_most_exposed_prev1day'] = df['mountain_weather_precip_most_exposed'].shift(1) df['mountain_weather_precip_region_prev3daysum'] = df['mountain_weather_precip_region'].shift(1) + df['mountain_weather_precip_region'].shift(2) + df['mountain_weather_precip_region'].shift(3) return df varsom_df[(varsom_df['date']>=datetime.date(year=2016, month=12, day=1)) & (varsom_df['date']<datetime.date(year=2017, month=6, day=1))] # grouping by region and season grouped_df = pd.DataFrame() for id in varsom_df['region_id'].unique(): #for id in [3003, 3011, 3014, 3028]: _tmp_df = varsom_df[varsom_df['region_id']==id].copy() _tmp_df.sort_values(by='valid_from') start, stop = int(_tmp_df['date_valid'].min()[:4]), int(_tmp_df['date_valid'].max()[:4]) for yr in range(start, stop-1): _tmp_df[(_tmp_df['date']>=datetime.date(year=yr, month=12, day=1)) & (_tmp_df['date']<datetime.date(year=yr+1, month=6, day=1))] _tmp_df = add_prevday_features(_tmp_df) #print(len(_tmp_df), _tmp_df['region_id'].unique()) if grouped_df.empty: print('empty') grouped_df = _tmp_df.copy() else: grouped_df = pd.concat([grouped_df, _tmp_df], ignore_index=True).copy() 
#print('g', len(grouped_df), grouped_df['region_id'].unique()) grouped_df.filter(['valid_from', 'region_name', 'region_id', 'avalanche_problem_1_problem_type_id', 'avalanche_problem_1_problem_type_id_prev2day']) varsom_df = grouped_df.copy() #from aps.notebooks.ml_varsom.regroup_forecast import regroup from regroup_forecast import regroup varsom_df = regroup(varsom_df)
aps/notebooks/ml_varsom/preprocessing.ipynb
kmunve/APS
mit
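The per-region/per-season loop above can usually be expressed more compactly with pandas groupby, which keeps the shift from leaking across regions. A hedged sketch on a made-up frame (the column names mirror the forecast data, the values do not come from it):

import pandas as pd

df = pd.DataFrame({
    'region_id': [3003, 3003, 3003, 3011, 3011],
    'date': pd.to_datetime(['2017-01-01', '2017-01-02', '2017-01-03',
                            '2017-01-01', '2017-01-02']),
    'danger_level': [2, 3, 3, 1, 2],
})

df = df.sort_values(['region_id', 'date'])
# shift(1) within each region: the first day of every region gets NaN instead of
# inheriting the previous region's last value.
df['danger_level_prev1day'] = df.groupby('region_id')['danger_level'].shift(1)
print(df)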
Add historical values, e.g. yesterday's precipitation. Add a tag to the feature name to indicate whether it is categorical (c) or numerical (n). Add a target tag (t). Add a modelled (m) or observed (o) tag. _prev1day _prev3day n_f_Next24HourChangeInTempFromPrev3DayMax - change of temperature over a certain period. n_r_Prev7dayMinTemp2InPast - ??? n_r_SNOWDAS_SnowpackAveTemp_k2InPast - modelled average temperature from the SNOWDAS model (? https://nsidc.org/data/g02158)
# Check if sensitivity transformation worked... print(varsom_df['avalanche_problem_1_sensitivity_id_class'].value_counts()) varsom_df.filter(['mountain_weather_precip_region', 'mountain_weather_precip_region_prev3daysum']).head(12) varsom_df[varsom_df['region_id']==3012].filter(['region_id', 'danger_level', 'danger_level_prev1day']).head(40)
aps/notebooks/ml_varsom/preprocessing.ipynb
kmunve/APS
mit
Combine avalanche problem attributes into single parameter
def get_aval_problem_combined(type_, dist_, sens_, size_): return int("{0}{1}{2}{3}".format(type_, dist_, sens_, size_)) def print_aval_problem_combined(aval_combined_int): aval_combined_str = str(aval_combined_int) #with open(aps_pth / r'aps/config/snoskred_keys.json') as jdata: with open(r'D:\Dev\APS\aps\config\snoskred_keys.json') as jdata: snoskred_keys = json.load(jdata) type_ = snoskred_keys["Class_AvalancheProblemTypeName"][aval_combined_str[0]] dist_ = snoskred_keys["Class_AvalDistributionName"][aval_combined_str[1]] sens_ = snoskred_keys["Class_AvalSensitivityId"][aval_combined_str[2]] size_ = snoskred_keys["DestructiveSizeId"][aval_combined_str[3]] return f"{type_}:{dist_}:{sens_}:{size_}" print(print_aval_problem_combined(6221)) varsom_df['aval_problem_1_combined'] = varsom_df.apply(lambda row: get_aval_problem_combined(row['avalanche_problem_1_problem_type_id_class'], row['avalanche_problem_1_distribution_id'], row['avalanche_problem_1_sensitivity_id_class'], #avalanche_problem_1_trigger_simple_id_class / avalanche_problem_1_sensitivity_id_class row['avalanche_problem_1_destructive_size_ext_id']), axis=1) aval_uni = varsom_df['aval_problem_1_combined'].unique() print(aval_uni, len(aval_uni)) print(varsom_df['aval_problem_1_combined'].value_counts()) print(varsom_df['avalanche_problem_1_problem_type_id_class'].value_counts())
aps/notebooks/ml_varsom/preprocessing.ipynb
kmunve/APS
mit
One-hot encode categorical variables where necessary.
# hot encode hot_encode_ = ['emergency_warning', 'author', 'mountain_weather_wind_direction'] varsom_df = pd.get_dummies(varsom_df, columns=hot_encode_)
aps/notebooks/ml_varsom/preprocessing.ipynb
kmunve/APS
mit
Check that there are no weird or missing values.
# Check that there are no weird or missing values. for col in varsom_df.columns.values: print(f'{col}: {varsom_df[col].unique()} \n')
aps/notebooks/ml_varsom/preprocessing.ipynb
kmunve/APS
mit
Remove variables we know we do not need, mainly because they are redundant, like avalanche_problem_1_ext_name and avalanche_problem_1_ext_id; in this case we only keep the numeric id variable.
del_list = [ 'utm_zone', 'utm_east', 'utm_north', 'danger_level_name', 'avalanche_problem_1_exposed_height_fill', 'avalanche_problem_2_exposed_height_fill', 'avalanche_problem_3_exposed_height_fill', 'avalanche_problem_1_valid_expositions', 'avalanche_problem_2_valid_expositions', 'avalanche_problem_3_valid_expositions', 'avalanche_problem_1_cause_name', 'avalanche_problem_1_problem_type_name', 'avalanche_problem_1_destructive_size_ext_name', 'avalanche_problem_1_distribution_name', 'avalanche_problem_1_ext_name', 'avalanche_problem_1_probability_name', 'avalanche_problem_1_trigger_simple_name', 'avalanche_problem_1_type_name', 'avalanche_problem_2_cause_name', 'avalanche_problem_2_problem_type_name', 'avalanche_problem_2_destructive_size_ext_name', 'avalanche_problem_2_distribution_name', 'avalanche_problem_2_ext_name', 'avalanche_problem_2_probability_name', 'avalanche_problem_2_trigger_simple_name', 'avalanche_problem_2_type_name', 'avalanche_problem_3_cause_name', 'avalanche_problem_3_problem_type_name', 'avalanche_problem_3_destructive_size_ext_name', 'avalanche_problem_3_distribution_name', 'avalanche_problem_3_ext_name', 'avalanche_problem_3_probability_name', 'avalanche_problem_3_trigger_simple_name', 'avalanche_problem_3_type_name', 'latest_avalanche_activity', 'main_text', 'snow_surface', 'current_weak_layers', 'avalanche_danger', 'avalanche_problem_1_advice', 'avalanche_problem_2_advice', 'avalanche_problem_3_advice', 'mountain_weather_wind_speed', 'region_type_name', 'region_name', 'reg_id', 'valid_from', 'valid_to' ] removed_ = [varsom_df.pop(v) for v in del_list] removed_
aps/notebooks/ml_varsom/preprocessing.ipynb
kmunve/APS
mit
Fill missing values where necessary
fill_list = [ 'mountain_weather_freezing_level', 'mountain_weather_precip_region', 'mountain_weather_precip_region_prev1day', 'mountain_weather_precip_region_prev3daysum', 'mountain_weather_precip_most_exposed', 'mountain_weather_precip_most_exposed_prev1day', 'mountain_weather_temperature_min', 'mountain_weather_temperature_max', 'mountain_weather_temperature_elevation', 'danger_level_prev3day', 'avalanche_problem_1_problem_type_id_prev3day', 'avalanche_problem_2_problem_type_id_prev3day', 'avalanche_problem_2_cause_id_prev3day', 'avalanche_problem_1_cause_id_prev3day', 'danger_level_prev2day', 'avalanche_problem_1_cause_id_prev2day', 'avalanche_problem_1_problem_type_id_prev2day', 'avalanche_problem_2_cause_id_prev2day', 'avalanche_problem_2_problem_type_id_prev2day', 'avalanche_problem_2_cause_id_prev1day', 'avalanche_problem_2_problem_type_id_prev1day', 'avalanche_problem_1_problem_type_id_prev1day', 'avalanche_problem_1_cause_id_prev1day', 'danger_level_prev1day' ] filled_ = [varsom_df[v].fillna(0., inplace=True) for v in fill_list] filled_
aps/notebooks/ml_varsom/preprocessing.ipynb
kmunve/APS
mit
Eventually remove variables with many missing values.
del_list = [ 'danger_level_name_prev1day', 'danger_level_name_prev2day', 'danger_level_name_prev3day', 'mountain_weather_change_wind_direction', 'mountain_weather_change_hour_of_day_start', 'mountain_weather_change_hour_of_day_stop', 'mountain_weather_change_wind_speed', 'mountain_weather_fl_hour_of_day_stop', 'mountain_weather_fl_hour_of_day_start', 'latest_observations', 'publish_time', 'date_valid', 'mountain_weather_temperature_max_prev3day', 'mountain_weather_temperature_min_prev3day', 'mountain_weather_temperature_max_prev2day', 'mountain_weather_temperature_min_prev2day', 'mountain_weather_temperature_max_prev1day', 'mountain_weather_temperature_min_prev1day' ] removed_ = [varsom_df.pop(v) for v in del_list]
aps/notebooks/ml_varsom/preprocessing.ipynb
kmunve/APS
mit
Check again whether there are still missing values... we need to replace these NaNs with meaningful values or remove the feature.
# Find the amount of NaN values in each column print(varsom_df.isnull().sum().sort_values(ascending=False)) # Compute the correlation matrix - works only on numerical variables. corr = varsom_df.corr() # Generate a mask for the upper triangle mask = np.zeros_like(corr, dtype=bool) mask[np.triu_indices_from(mask)] = True # Set up the matplotlib figure f, ax = plt.subplots(figsize=(11, 11)) # Generate a custom diverging colormap cmap = sns.diverging_palette(1000, 15, as_cmap=True) # Draw the heatmap with the mask and correct aspect ratio sns.heatmap(corr, mask=mask, cmap=cmap, vmax=.8, center=0, square=True, linewidths=.5, cbar_kws={"shrink": .5})
aps/notebooks/ml_varsom/preprocessing.ipynb
kmunve/APS
mit
We can see that some parameters are highly correlated. These are mainly the parameters belonging to the same avalanche problem. Depending on the ML algorithm we use we have to remove some of them.
#corr['avalanche_problem_1_cause_id'].sort_values(ascending=False) #corr #sns.pairplot(varsom_df.drop(['date_valid'], axis=1)) # Get all numerical features num_feat = varsom_df._get_numeric_data().columns num_feat # let's see the details about remaining variables varsom_df.describe()
aps/notebooks/ml_varsom/preprocessing.ipynb
kmunve/APS
mit
Save data for further analysis
varsom_df.to_csv('varsom_ml_preproc_3y.csv', index_label='index')
aps/notebooks/ml_varsom/preprocessing.ipynb
kmunve/APS
mit
<table align="left"> <td> <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/sdk/SDK_Custom_Container_Prediction.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> </table> Overview This tutorial walks through building a custom container to serve a scikit-learn model on Vertex Predictions. You will use the FastAPI Python web server framework to create a prediction and health endpoint. You will also cover incorporating a pre-processor from training into your online serving. Dataset This tutorial uses R.A. Fisher's Iris dataset, a small dataset that is popular for trying out machine learning techniques. Each instance has four numerical features, which are different measurements of a flower, and a target label that marks it as one of three types of iris: Iris setosa, Iris versicolour, or Iris virginica. This tutorial uses the copy of the Iris dataset included in the scikit-learn library. Objective The goal is to: - Train a model that uses a flower's measurements as input to predict what type of iris it is. - Save the model and its serialized pre-processor - Build a FastAPI server to handle predictions and health checks - Build a custom container with model artifacts - Upload and deploy custom container to Vertex Prediction This tutorial focuses more on deploying this model with Vertex AI than on the design of the model itself. Costs This tutorial uses billable components of Google Cloud: Vertex AI Learn about Vertex AI pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage. Set up your local development environment If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step. Otherwise, make sure your environment meets this notebook's requirements. You need the following: Docker Git Google Cloud SDK (gcloud) Python 3 virtualenv Jupyter notebook running in a virtual environment with Python 3 The Google Cloud guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions: Install and initialize the Cloud SDK. Install Python 3. Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment. To install Jupyter, run pip install jupyter on the command-line in a terminal shell. To launch Jupyter, run jupyter notebook on the command-line in a terminal shell. Open this notebook in the Jupyter Notebook Dashboard. Install additional packages Install additional package dependencies not installed in your notebook environment, such as NumPy, Scikit-learn, FastAPI, Uvicorn, and joblib. Use the latest major GA version of each package.
%%writefile requirements.txt joblib~=1.0 numpy~=1.20 scikit-learn~=0.24 google-cloud-storage>=1.26.0,<2.0.0dev # Required in Docker serving container %pip install -U --user -r requirements.txt # For local FastAPI development and running %pip install -U --user "uvicorn[standard]>=0.12.0,<0.14.0" fastapi~=0.63 # Vertex SDK for Python %pip install -U --user google-cloud-aiplatform
notebooks/community/sdk/SDK_Custom_Container_Prediction.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Before you begin Set up your Google Cloud project The following steps are required, regardless of your notebook environment. Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs. Make sure that billing is enabled for your project. Enable the Vertex AI API and Compute Engine API. If you are running this notebook locally, you will need to install the Cloud SDK. Enter your project ID in the cell below. Then run the cell to make sure the Cloud SDK uses the right project for all the commands in this notebook. Note: Jupyter runs lines prefixed with ! or % as shell commands, and it interpolates Python variables with $ or {} into these commands. Set your project ID If you don't know your project ID, you may be able to get your project ID using gcloud.
# Get your Google Cloud project ID from gcloud shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null try: PROJECT_ID = shell_output[0] except IndexError: PROJECT_ID = None print("Project ID:", PROJECT_ID)
notebooks/community/sdk/SDK_Custom_Container_Prediction.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Authenticate your Google Cloud account If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step. If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. Otherwise, follow these steps: In the Cloud Console, go to the Create service account key page. Click Create service account. In the Service account name field, enter a name, and click Create. In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex AI" into the filter box, and select Vertex AI Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin. Click Create. A JSON file that contains your key downloads to your local environment. Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
import os import sys # If you are running this notebook in Colab, run this cell and follow the # instructions to authenticate your GCP account. This provides access to your # Cloud Storage bucket and lets you submit training jobs and prediction # requests. # If on Google Cloud Notebooks, then don't execute this code if not os.path.exists("/opt/deeplearning/metadata/env_version"): if "google.colab" in sys.modules: from google.colab import auth as google_auth google_auth.authenticate_user() # If you are running this notebook locally, replace the string below with the # path to your service account key and run this cell to authenticate your GCP # account. elif not os.getenv("IS_TESTING") and not os.getenv( "GOOGLE_APPLICATION_CREDENTIALS" ): %env GOOGLE_APPLICATION_CREDENTIALS ''
notebooks/community/sdk/SDK_Custom_Container_Prediction.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Configure project and resource names
REGION = "us-central1" # @param {type:"string"} MODEL_ARTIFACT_DIR = "custom-container-prediction-model" # @param {type:"string"} REPOSITORY = "custom-container-prediction" # @param {type:"string"} IMAGE = "sklearn-fastapi-server" # @param {type:"string"} MODEL_DISPLAY_NAME = "sklearn-custom-container" # @param {type:"string"}
notebooks/community/sdk/SDK_Custom_Container_Prediction.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0