Dataset columns: markdown, code, path, repo_name, license.
You can use sympy to make $\LaTeX$ equations for you!
import sympy as sp

z = sp.symbols('z')
a = 1 / ((z + 2) * (z + 1))
print(sp.latex(a))
08_Python_LaTeX.ipynb
UWashington-Astro300/Astro300-A16
mit
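If you want the generated string rendered as typeset math inside the notebook rather than just printed, IPython's display machinery can do it. This is a minimal sketch that reuses the expression a from the cell above and assumes you are running inside Jupyter/IPython.
import sympy as sp
from IPython.display import Math, display

z = sp.symbols('z')
a = 1 / ((z + 2) * (z + 1))
display(Math(sp.latex(a)))  # renders the fraction as typeset math in the notebook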
$$ \int z^{2}\, dz $$ Astropy can output $\LaTeX$ tables
from astropy.io import ascii
from astropy.table import QTable

T = QTable.read('Zodiac.csv', format='ascii.csv')
T[0:3]
ascii.write(T, format='latex')
08_Python_LaTeX.ipynb
UWashington-Astro300/Astro300-A16
mit
Some websites to open up for class: Special Relativity, ShareLatex, ShareLatex Docs and Help, Latex Symbols, Latex draw symbols, The SAO/NASA Astrophysics Data System, Latex wikibook, Assignment for Week 8.
import numpy as np
import matplotlib.pyplot as plt

t = np.linspace(0, 12*np.pi, 2000)

fig, ax = plt.subplots(1, 1)      # One window
fig.set_size_inches(11, 8.5)      # (width, height) - letter paper landscape
fig.tight_layout()                # Make better use of space on plot
08_Python_LaTeX.ipynb
UWashington-Astro300/Astro300-A16
mit
Import pandas and read the csv file: 2014_World_Power_Consumption
import pandas as pd

df = pd.read_csv('./2014_World_Power_Consumption')
df.head()
udemy_ml_bootcamp/Python-for-Data-Visualization/Geographical Plotting/Choropleth Maps Exercise .ipynb
AtmaMani/pyChakras
mit
Referencing the lecture notes, create a Choropleth Plot of the Power Consumption for Countries using the data and layout dictionary.
import plotly.graph_objs as go
from plotly.offline import init_notebook_mode, iplot  # plotly offline mode (set up earlier in the original notebook)
init_notebook_mode(connected=True)

data = {'type': 'choropleth',
        'locations': df['Country'],
        'locationmode': 'country names',
        'z': df['Power Consumption KWH'],
        'text': df['Text']}

layout = {'title': 'World power consumption',
          'geo': {'showframe': True,
                  'projection': {'type': 'Mercator'}}}

choromap = go.Figure(data=[data], layout=layout)
iplot(choromap, validate=False)
udemy_ml_bootcamp/Python-for-Data-Visualization/Geographical Plotting/Choropleth Maps Exercise .ipynb
AtmaMani/pyChakras
mit
USA Choropleth Import the 2012_Election_Data csv file using pandas.
df2 = pd.read_csv('./2012_Election_Data')
udemy_ml_bootcamp/Python-for-Data-Visualization/Geographical Plotting/Choropleth Maps Exercise .ipynb
AtmaMani/pyChakras
mit
Now create a plot that displays the Voting-Age Population (VAP) per state. If you later want to play around with other columns, make sure you consider their data type. VAP has already been transformed to a float for you.
data = {'type': 'choropleth',
        'locations': df2['State Abv'],
        'locationmode': 'USA-states',
        # plot VAP per the exercise text; column name assumed from the dataset
        'z': df2['Voting-Age Population (VAP)'],
        'text': df2['% Non-citizen']}

layout = {'geo': {'scope': 'usa'}}

choromap = go.Figure(data=[data], layout=layout)
iplot(choromap, validate=False)
udemy_ml_bootcamp/Python-for-Data-Visualization/Geographical Plotting/Choropleth Maps Exercise .ipynb
AtmaMani/pyChakras
mit
We'll take a look at our data:
df.head()
Classification/Logistic Regression for Banknote Authentication.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Apparently, the first 5 instances of our datasets are all fake (Class is 0).
print("No of Fake bank notes = " + str(len(df[df['Class'] == 0]))) print("No of Authentic bank notes = " + str(len(df[df['Class'] == 1])))
Classification/Logistic Regression for Banknote Authentication.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
This shows we have 762 total instances of Fake banknotes and 610 total instances of Authentic banknotes in our dataset.
import numpy as np

features = list(df.columns[:-1])
print("Our features :")
print(features)

X = df[features]
y = df['Class']
print('Class labels:', np.unique(y))
Classification/Logistic Regression for Banknote Authentication.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
To evaluate how well a trained model performs on unseen data, we will further split the dataset into separate training and test datasets. Splitting data into 70% training and 30% test data:
# sklearn.cross_validation was removed in newer scikit-learn; use model_selection instead
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
Classification/Logistic Regression for Banknote Authentication.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Many machine learning and optimization algorithms also require feature scaling for optimal performance. Here, we will standardize the features using the StandardScaler class from scikit-learn's preprocessing module: Standardizing the features:
from sklearn.preprocessing import StandardScaler

sc = StandardScaler()
sc.fit(X_train)
X_train_std = sc.transform(X_train)
X_test_std = sc.transform(X_test)
Classification/Logistic Regression for Banknote Authentication.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
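The standardization StandardScaler performs is simply z = (x - μ) / σ per feature. As a quick sanity check, the same result can be reproduced by hand with NumPy; this is a minimal sketch, assuming X_train and X_train_std from the cells above.
import numpy as np

# StandardScaler (with default settings) uses the population standard deviation (ddof=0)
mu = X_train.values.mean(axis=0)
sigma = X_train.values.std(axis=0)
X_train_manual = (X_train.values - mu) / sigma

# should be numerically identical to sc.transform(X_train)
print(np.allclose(X_train_manual, X_train_std))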
Using the preceding code, we loaded the StandardScaler class from the preprocessing module and initialized a new StandardScaler object that we assigned to the variable sc. Using the fit method, StandardScaler estimated the parameters μ (sample mean) and σ (standard deviation) for each feature dimension from the training data. By calling the transform method, we then standardized the training data using those estimated parameters μ and σ. Note that we used the same scaling parameters to standardize the test set so that the values in the training and test datasets are comparable to each other. Logistic regression: Logistic regression is a classification model that is very easy to implement but performs very well on linearly separable classes. It is one of the most widely used algorithms for classification in industry. The logistic regression model is a linear model for binary classification that can be extended to multiclass classification via the OvR (one-vs-rest) technique. The sigmoid function used in logistic regression:
import matplotlib.pyplot as plt
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.arange(-7, 7, 0.1)
phi_z = sigmoid(z)

plt.plot(z, phi_z)
plt.axvline(0.0, color='k')
plt.ylim(-0.1, 1.1)
plt.xlabel('z')
plt.ylabel('$\phi (z)$')
# y axis ticks and gridline
plt.yticks([0.0, 0.5, 1.0])
ax = plt.gca()
ax.yaxis.grid(True)
plt.tight_layout()
# plt.savefig('./figures/sigmoid.png', dpi=300)
plt.show()
Classification/Logistic Regression for Banknote Authentication.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Learning the weights of the logistic cost function
def cost_1(z):
    return - np.log(sigmoid(z))

def cost_0(z):
    return - np.log(1 - sigmoid(z))

z = np.arange(-10, 10, 0.1)
phi_z = sigmoid(z)

c1 = [cost_1(x) for x in z]
plt.plot(phi_z, c1, label='J(w) if y=1')

c0 = [cost_0(x) for x in z]
plt.plot(phi_z, c0, linestyle='--', label='J(w) if y=0')

plt.ylim(0.0, 5.1)
plt.xlim([0, 1])
plt.xlabel('$\phi$(z)')
plt.ylabel('J(w)')
plt.legend(loc='best')
plt.tight_layout()
# plt.savefig('./figures/log_cost.png', dpi=300)
plt.show()
Classification/Logistic Regression for Banknote Authentication.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Training a logistic regression model with scikit-learn Scikit-learn implements a highly optimized version of logistic regression that also supports multiclass settings off-the-shelf. We will therefore skip our own implementation and use the sklearn.linear_model.LogisticRegression class, together with the familiar fit method, to train the model on the standardized banknote training dataset:
from sklearn.linear_model import LogisticRegression

lr = LogisticRegression(C=1000.0, random_state=0)
lr.fit(X_train_std, y_train)
y_test.shape
Classification/Logistic Regression for Banknote Authentication.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Having trained a model in scikit-learn, we can make predictions via the predict method:
y_pred = lr.predict(X_test_std)
print('Misclassified samples: %d' % (y_test != y_pred).sum())
Classification/Logistic Regression for Banknote Authentication.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
On executing the preceding code, we see that the logistic regression model misclassifies 5 out of the 412 test note samples. Thus, the misclassification error on the test dataset is 0.012 or 1.2 percent (5/412 = 0.012). Measuring our classifier using binary classification performance metrics A variety of metrics exist to evaluate the performance of binary classifiers against trusted labels. The most common metrics are accuracy, precision, recall, F1 measure, and ROC AUC score. All of these measures depend on the concepts of true positives, true negatives, false positives, and false negatives. Positive and negative refer to the classes. True and false denote whether the predicted class is the same as the true class. For our banknote classifier, a true positive prediction is when the classifier correctly predicts that a note is authentic. A true negative prediction is when the classifier correctly predicts that a note is fake. A prediction that a fake note is authentic is a false positive prediction, and a prediction that an authentic note is fake is a false negative prediction. Confusion Matrix A confusion matrix, or contingency table, can be used to visualize true and false positives and negatives. The rows of the matrix are the true classes of the instances, and the columns are the predicted classes of the instances:
from sklearn.metrics import confusion_matrix
import matplotlib.pyplot as plt
%matplotlib inline

# store the result in a separate variable so we don't shadow the imported function
cm = confusion_matrix(y_test, y_pred)
print(cm)

plt.matshow(cm)
plt.title('Confusion matrix')
plt.colorbar()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
Classification/Logistic Regression for Banknote Authentication.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
The confusion matrix indicates that there were 227 true negative predictions, 180 true positive predictions, 0 false negative predictions, and 5 false positive predictions. Scikit-learn also implements a large variety of different performance metrics that are available via the metrics module. For example, we can calculate the classification accuracy of the model on the test set as follows:
from sklearn.metrics import accuracy_score

print('Accuracy: %.2f' % accuracy_score(y_test, y_pred))
Classification/Logistic Regression for Banknote Authentication.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
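Accuracy and the misclassification error mentioned above are complementary (error = 1 - accuracy). As a quick illustration, the error rate can be computed directly from the prediction mask used earlier; this is just a sketch reusing y_test and y_pred from the cells above.
# error rate = misclassified samples / total test samples; accuracy = 1 - error rate
n_errors = (y_test != y_pred).sum()
error_rate = n_errors / y_test.shape[0]
print('Error rate: %.3f' % error_rate)        # about 0.012 for 5 misclassified out of 412
print('Accuracy:   %.3f' % (1 - error_rate))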
Here, y_test are the true class labels and y_pred are the class labels that we predicted previously. Furthermore, we can predict the class-membership probability of the samples via the predict_proba method. For example, we can predict the probabilities of the first banknote sample:
# predict_proba expects a 2-D array, so keep the sample as a single-row slice
lr.predict_proba(X_test_std[0:1, :])
Classification/Logistic Regression for Banknote Authentication.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
The preceding array tells us that the model predicts a chance of 99.96 percent that the sample is an authentic banknote (y = 1), and a 0.003 percent chance that the sample is a fake note (y = 0). While accuracy measures the overall correctness of the classifier, it does not distinguish between false positive errors and false negative errors. Some applications may be more sensitive to false negatives than false positives, or vice versa. Furthermore, accuracy is not an informative metric if the proportions of the classes are skewed in the population. For example, a classifier that predicts whether or not credit card transactions are fraudulent may be more sensitive to false negatives than to false positives. A classifier that always predicts that transactions are legitimate could have a high accuracy score, but would not be useful. For these reasons, classifiers are often evaluated using two additional measures called precision and recall. Precision and Recall: Precision is the fraction of positive predictions that are correct. For instance, in our banknote authentication classifier, precision is the fraction of notes classified as authentic that are actually authentic. Precision is given by the following ratio: P = TP / (TP + FP) Sometimes called sensitivity in medical domains, recall is the fraction of the truly positive instances that the classifier recognizes. A recall score of one indicates that the classifier did not make any false negative predictions. For our banknote authentication classifier, recall is the fraction of authentic notes that were correctly classified as authentic. Recall is calculated with the following ratio: R = TP / (TP + FN) Individually, precision and recall are seldom informative; they are both incomplete views of a classifier's performance. Both precision and recall can fail to distinguish classifiers that perform well from certain types of classifiers that perform poorly. A trivial classifier could easily achieve a perfect recall score by predicting positive for every instance. For example, assume that a test set contains ten positive examples and ten negative examples. A classifier that predicts positive for every example will achieve a recall of one, as follows: R = 10 / (10 + 0) = 1 A classifier that predicts negative for every example, or that makes only false negative and true negative predictions, will achieve a recall score of zero. Similarly, a classifier that predicts that only a single instance is positive and happens to be correct will achieve perfect precision. Scikit-learn provides a function to calculate the precision and recall for a classifier from a set of predictions and the corresponding set of trusted labels. Calculating our banknote authentication classifier's precision and recall:
# sklearn.cross_validation was removed in newer scikit-learn; use model_selection instead
from sklearn.model_selection import cross_val_score

precisions = cross_val_score(lr, X_train_std, y_train, cv=5, scoring='precision')
print('Precision', np.mean(precisions), precisions)

recalls = cross_val_score(lr, X_train_std, y_train, cv=5, scoring='recall')
print('Recalls', np.mean(recalls), recalls)
Classification/Logistic Regression for Banknote Authentication.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
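The same quantities can also be computed on the held-out test predictions, either straight from the formulas above or with scikit-learn's helpers. A minimal sketch, reusing y_test and y_pred from earlier (it assumes the positive class is the authentic note, label 1):
import numpy as np
from sklearn.metrics import precision_score, recall_score

print('Precision (test):', precision_score(y_test, y_pred))
print('Recall (test):   ', recall_score(y_test, y_pred))

# Equivalent computation from the definitions P = TP/(TP+FP), R = TP/(TP+FN)
y_true = np.asarray(y_test)
tp = ((y_pred == 1) & (y_true == 1)).sum()
fp = ((y_pred == 1) & (y_true == 0)).sum()
fn = ((y_pred == 0) & (y_true == 1)).sum()
print('Precision (manual):', tp / (tp + fp))
print('Recall (manual):   ', tp / (tp + fn))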
Our classifier's precision is 0.988; almost all of the notes that it predicted as authentic were actually authentic. Its recall is also high, indicating that it correctly classified approximately 98 percent of the authentic notes as authentic. Calculating the F1 measure The F1 measure is the harmonic mean of the precision and recall scores. Also called the f-measure or the f-score, the F1 score is calculated using the following formula: F1 = 2PR / (P + R) The F1 measure penalizes classifiers with imbalanced precision and recall scores, like the trivial classifier that always predicts the positive class. A model with perfect precision and recall scores will achieve an F1 score of one. A model with a perfect precision score and a recall score of zero will achieve an F1 score of zero. As for precision and recall, scikit-learn provides a function to calculate the F1 score for a set of predictions. Let's compute our classifier's F1 score.
f1s = cross_val_score(lr, X_train_std, y_train, cv=5, scoring='f1')
print('F1', np.mean(f1s), f1s)
Classification/Logistic Regression for Banknote Authentication.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
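Since F1 = 2PR / (P + R), the cross-validated score above can be sanity-checked against the harmonic mean of the precision and recall obtained earlier, or computed on the test predictions with f1_score. A small sketch, reusing precisions, recalls, y_test and y_pred from the previous cells:
import numpy as np
from sklearn.metrics import f1_score

# Harmonic mean of the mean cross-validated precision and recall
p = np.mean(precisions)
r = np.mean(recalls)
print('F1 from P and R: %.3f' % (2 * p * r / (p + r)))

# F1 on the held-out test predictions
print('F1 on test set:  %.3f' % f1_score(y_test, y_pred))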
The arithmetic mean of our classifier's precision and recall scores is 0.98. As the difference between the classifier's precision and recall is small, the F1 measure's penalty is small. Models are sometimes evaluated using the F0.5 and F2 scores, which favor precision over recall and recall over precision, respectively. ROC AUC A Receiver Operating Characteristic, or ROC curve, visualizes a classifier's performance. Unlike accuracy, the ROC curve is insensitive to data sets with unbalanced class proportions; unlike precision and recall, the ROC curve illustrates the classifier's performance for all values of the discrimination threshold. ROC curves plot the classifier's recall against its fall-out. Fall-out, or the false positive rate, is the number of false positives divided by the total number of negatives. It is calculated using the following formula: F = FP / (TN + FP)
from sklearn.metrics import roc_auc_score, roc_curve, auc

roc_auc_score(y_test, lr.predict(X_test_std))
Classification/Logistic Regression for Banknote Authentication.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Plotting the ROC curve for our banknote authentication classifier:
import matplotlib.pyplot as plt
%matplotlib inline

y_pred = lr.predict_proba(X_test_std)
false_positive_rate, recall, thresholds = roc_curve(y_test, y_pred[:, 1])
roc_auc = auc(false_positive_rate, recall)

plt.title('Receiver Operating Characteristic')
plt.plot(false_positive_rate, recall, 'b', label='AUC = %0.2f' % roc_auc)
plt.legend(loc='lower right')
plt.plot([0, 1], [0, 1], 'r--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.0])
plt.ylabel('Recall')
plt.xlabel('Fall-out')
plt.show()
Classification/Logistic Regression for Banknote Authentication.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
From the ROC AUC plot, it is apparent that our classifier outperforms random guessing and does a very good job in classifying; almost all of the plot area lies under its curve. Finding the most important features with forests of trees This example shows the use of forests of trees to evaluate the importance of features on our banknote classification task. The red bars are the feature importances of the forest, along with their inter-tree variability.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.ensemble import ExtraTreesClassifier

# Build a forest and compute the feature importances
forest = ExtraTreesClassifier(n_estimators=250, random_state=0)
forest.fit(X, y)

importances = forest.feature_importances_
std = np.std([tree.feature_importances_ for tree in forest.estimators_], axis=0)
indices = np.argsort(importances)[::-1]

# Print the feature ranking
print("Feature ranking:")
for f in range(X.shape[1]):
    print("%d. feature %d - %s (%f) " % (f + 1, indices[f], features[indices[f]], importances[indices[f]]))

# Plot the feature importances of the forest
plt.figure(num=None, figsize=(10, 6), dpi=80, facecolor='w', edgecolor='k')
plt.title("Feature importances")
plt.bar(range(X.shape[1]), importances[indices], color="r", yerr=std[indices], align="center")
plt.xticks(range(X.shape[1]), indices)
plt.xlim([-1, X.shape[1]])
plt.show()
Classification/Logistic Regression for Banknote Authentication.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
We'll cover the details of the code later. For now, it is evident that the most important features helping us classify correctly are Variance and Skewness. We'll use these two features to plot our graph. Plotting our model decision regions Finally, we can plot the decision regions of our newly trained logistic regression model and visualize how well it separates the different samples.
X_train, X_test, y_train, y_test = train_test_split(
    X[['Variance', 'Skewness']], y, test_size=0.3, random_state=0)

sc = StandardScaler()
sc.fit(X_train)
X_train_std = sc.transform(X_train)
X_test_std = sc.transform(X_test)

from matplotlib.colors import ListedColormap
import matplotlib.pyplot as plt
import warnings


def versiontuple(v):
    return tuple(map(int, (v.split("."))))


def plot_decision_regions(X, y, classifier, test_idx=None, resolution=0.02):

    # setup marker generator and color map
    markers = ('s', 'x', 'o', '^', 'v')
    colors = ('red', 'blue', 'lightgreen', 'gray', 'cyan')
    cmap = ListedColormap(colors[:len(np.unique(y))])

    # plot the decision surface
    x1_min, x1_max = X[:, 0].min() - 1, X[:, 0].max() + 1
    x2_min, x2_max = X[:, 1].min() - 1, X[:, 1].max() + 1
    xx1, xx2 = np.meshgrid(np.arange(x1_min, x1_max, resolution),
                           np.arange(x2_min, x2_max, resolution))
    Z = classifier.predict(np.array([xx1.ravel(), xx2.ravel()]).T)
    Z = Z.reshape(xx1.shape)
    plt.contourf(xx1, xx2, Z, alpha=0.4, cmap=cmap)
    plt.xlim(xx1.min(), xx1.max())
    plt.ylim(xx2.min(), xx2.max())

    for idx, cl in enumerate(np.unique(y)):
        plt.scatter(x=X[y == cl, 0], y=X[y == cl, 1],
                    alpha=0.8, c=cmap(idx),
                    marker=markers[idx], label=cl)

    # highlight test samples
    if test_idx:
        # plot all samples
        if not versiontuple(np.__version__) >= versiontuple('1.9.0'):
            X_test, y_test = X[list(test_idx), :], y[list(test_idx)]
            warnings.warn('Please update to NumPy 1.9.0 or newer')
        else:
            X_test, y_test = X[test_idx, :], y[test_idx]
        plt.scatter(X_test[:, 0], X_test[:, 1], c='',
                    alpha=1.0, linewidths=1, marker='o',
                    s=55, label='test set')


from sklearn.linear_model import LogisticRegression

lr = LogisticRegression(C=1000.0, random_state=0)
lr.fit(X_train_std, y_train)

# Stack the standardized training and test data so the plot shows both;
# the test samples occupy the indices after the training samples.
X_combined_std = np.vstack((X_train_std, X_test_std))
y_combined = np.hstack((y_train, y_test))

plot_decision_regions(X_combined_std, y_combined, classifier=lr,
                      test_idx=range(len(y_train), len(y_combined)))
plt.xlabel('Variance')
plt.ylabel('Skewness')
plt.legend(loc='upper left')
plt.tight_layout()
plt.show()
Classification/Logistic Regression for Banknote Authentication.ipynb
Aniruddha-Tapas/Applied-Machine-Learning
mit
Implement Preprocessing Functions The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below: - Lookup Table - Tokenize Punctuation Lookup Table To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries: - Dictionary to go from the words to an id, we'll call vocab_to_int - Dictionary to go from the id to word, we'll call int_to_vocab Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
import numpy as np
import problem_unittests as tests
from collections import Counter


def create_lookup_tables(text):
    """
    Create lookup tables for vocabulary
    :param text: The text of tv scripts split into words
    :return: A tuple of dicts (vocab_to_int, int_to_vocab)
    """
    vocab = set(text)
    vocab_to_int = {word: index for index, word in enumerate(vocab)}
    int_to_vocab = dict(enumerate(vocab))
    return vocab_to_int, int_to_vocab


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
tv-script-generation/dlnd_tv_script_generation.ipynb
ClementPhil/deep-learning
mit
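To make the behaviour concrete, here is a tiny hypothetical usage example (the word list below is made up, not from the TV-script data): each unique word gets an id, and the two dictionaries invert each other.
sample_text = ['homer', 'simpson', 'says', 'doh', 'says', 'homer']
vocab_to_int, int_to_vocab = create_lookup_tables(sample_text)

print(vocab_to_int)                       # e.g. {'doh': 0, 'says': 1, ...} (ids depend on set ordering)
print(int_to_vocab)
print(int_to_vocab[vocab_to_int['doh']])  # round-trips back to 'doh'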
Tokenize Punctuation We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!". Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token: - Period ( . ) - Comma ( , ) - Quotation Mark ( " ) - Semicolon ( ; ) - Exclamation mark ( ! ) - Question mark ( ? ) - Left Parentheses ( ( ) - Right Parentheses ( ) ) - Dash ( -- ) - Return ( \n ) This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
def token_lookup():
    """
    Generate a dict to turn punctuation into a token.
    :return: Tokenize dictionary where the key is the punctuation and the value is the token
    """
    token_dict = {'.': '||Period||',
                  ',': '||Comma||',
                  '"': '||Quotation_Mark||',
                  ';': '||Semicolon||',
                  '!': '||Exclamation_mark||',
                  '?': '||Question_mark||',
                  '(': '||Left_Parentheses||',
                  ')': '||Right_Parentheses||',
                  '--': '||Dash||',
                  '\n': '||Return||'}
    return token_dict


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
tv-script-generation/dlnd_tv_script_generation.ipynb
ClementPhil/deep-learning
mit
Input Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders: - Input text placeholder named "input" using the TF Placeholder name parameter. - Targets placeholder - Learning Rate placeholder Return the placeholders in the following tuple (Input, Targets, LearningRate)
import tensorflow as tf


def get_inputs():
    """
    Create TF Placeholders for input, targets, and learning rate.
    :return: Tuple (input, targets, learning rate)
    """
    Input = tf.placeholder(tf.int32, [None, None], name='input')
    Targets = tf.placeholder(tf.int32, [None, None], name='target')
    LearningRate = tf.placeholder(tf.float32)
    return Input, Targets, LearningRate


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
tv-script-generation/dlnd_tv_script_generation.ipynb
ClementPhil/deep-learning
mit
Build RNN Cell and Initialize Stack one or more BasicLSTMCells in a MultiRNNCell. - The RNN size should be set using rnn_size - Initialize the cell state using the MultiRNNCell's zero_state() function - Apply the name "initial_state" to the initial state using tf.identity() Return the cell and initial state in the following tuple (Cell, InitialState)
def get_init_cell(batch_size, rnn_size):
    """
    Create an RNN Cell and initialize it.
    :param batch_size: Size of batches
    :param rnn_size: Size of RNNs
    :return: Tuple (cell, initialize state)
    """
    lstm = tf.contrib.rnn.BasicLSTMCell(rnn_size)
    # Note: this dropout wrapper is created but not used in the stacked cell below
    drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=0.75)

    Cell = tf.contrib.rnn.MultiRNNCell([lstm] * 2)
    initial_state = Cell.zero_state(batch_size, tf.float32)
    InitialState = tf.identity(initial_state, name='initial_state')
    return Cell, InitialState


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
tv-script-generation/dlnd_tv_script_generation.ipynb
ClementPhil/deep-learning
mit
Build RNN You created an RNN Cell in the get_init_cell() function. Time to use the cell to create an RNN. - Build the RNN using tf.nn.dynamic_rnn() - Apply the name "final_state" to the final state using tf.identity() Return the outputs and final state in the following tuple (Outputs, FinalState)
def build_rnn(cell, inputs):
    """
    Create a RNN using a RNN Cell
    :param cell: RNN Cell
    :param inputs: Input text data
    :return: Tuple (Outputs, Final State)
    """
    outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
    final = tf.identity(final_state, name="final_state")
    return outputs, final


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
tv-script-generation/dlnd_tv_script_generation.ipynb
ClementPhil/deep-learning
mit
Build the Neural Network Apply the functions you implemented above to: - Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function. - Build RNN using cell and your build_rnn(cell, inputs) function. - Apply a fully connected layer with a linear activation and vocab_size as the number of outputs. Return the logits and final state in the following tuple (Logits, FinalState)
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
    """
    Build part of the neural network
    :param cell: RNN cell
    :param rnn_size: Size of rnns
    :param input_data: Input data
    :param vocab_size: Vocabulary size
    :param embed_dim: Number of embedding dimensions
    :return: Tuple (Logits, FinalState)
    """
    # Pass embed_dim (not rnn_size) so the embedding size matches the function signature
    embed = get_embed(input_data, vocab_size, embed_dim)
    outputs, final = build_rnn(cell, embed)
    logits = tf.contrib.layers.fully_connected(outputs, vocab_size,
                                               activation_fn=None,
                                               weights_initializer=tf.truncated_normal_initializer(stddev=0.1),
                                               biases_initializer=tf.zeros_initializer())
    return logits, final


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
tv-script-generation/dlnd_tv_script_generation.ipynb
ClementPhil/deep-learning
mit
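The get_embed(input_data, vocab_size, embed_dim) function referenced above is not shown in this excerpt. A minimal sketch of what it typically looks like in a TF 1.x project of this kind (an assumption, not the author's exact code):
def get_embed(input_data, vocab_size, embed_dim):
    """
    Create an embedding lookup for input_data.
    :param input_data: TF placeholder of word ids
    :param vocab_size: Number of words in the vocabulary
    :param embed_dim: Number of embedding dimensions
    :return: Embedded input
    """
    # Trainable embedding matrix: one embed_dim-sized row per vocabulary word
    embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
    return tf.nn.embedding_lookup(embedding, input_data)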
Batches Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements: - The first element is a single batch of input with the shape [batch size, sequence length] - The second element is a single batch of targets with the shape [batch size, sequence length] If you can't fill the last batch with enough data, drop the last batch. For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following: ``` [ # First Batch [ # Batch of Input [[ 1 2], [ 7 8], [13 14]] # Batch of targets [[ 2 3], [ 8 9], [14 15]] ] # Second Batch [ # Batch of Input [[ 3 4], [ 9 10], [15 16]] # Batch of targets [[ 4 5], [10 11], [16 17]] ] # Third Batch [ # Batch of Input [[ 5 6], [11 12], [17 18]] # Batch of targets [[ 6 7], [12 13], [18 1]] ] ] ``` Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
def get_batches(int_text, batch_size, seq_length):
    """
    Return batches of input and target
    :param int_text: Text with the words replaced by their ids
    :param batch_size: The size of batch
    :param seq_length: The length of sequence
    :return: Batches as a Numpy array
    """
    # (Author's note: adapted with help from the forums / utils.py after struggling for days)
    n_batches = int(len(int_text) / (batch_size * seq_length))

    # Drop the last few characters to make only full batches
    xdata = np.array(int_text[: n_batches * batch_size * seq_length])
    ydata = np.array(int_text[1: n_batches * batch_size * seq_length + 1])

    x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, 1)
    y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, 1)

    return np.array(list(zip(x_batches, y_batches)))


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
tv-script-generation/dlnd_tv_script_generation.ipynb
ClementPhil/deep-learning
mit
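As a quick check against the example given above, calling the function on the 20-integer list with batch_size=3 and seq_length=2 should produce an array of shape (3, 2, 3, 2); a small sketch:
batches = get_batches(list(range(1, 21)), 3, 2)

print(batches.shape)   # (3, 2, 3, 2): (number of batches, input/target, batch size, sequence length)
print(batches[0][0])   # first batch of inputs:  [[ 1  2] [ 7  8] [13 14]]
print(batches[0][1])   # first batch of targets: [[ 2  3] [ 8  9] [14 15]]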
Neural Network Training Hyperparameters Tune the following parameters: Set num_epochs to the number of epochs. Set batch_size to the batch size. Set rnn_size to the size of the RNNs. Set embed_dim to the size of the embedding. Set seq_length to the length of sequence. Set learning_rate to the learning rate. Set show_every_n_batches to the number of batches the neural network should print progress.
# Number of Epochs
num_epochs = 200
# Batch Size
batch_size = 256
# RNN Size
rnn_size = 128
# Embedding Dimension Size (a tunable hyperparameter; 300 chosen here)
embed_dim = 300
# Sequence Length
seq_length = 7
# Learning Rate
learning_rate = 0.005
# Show stats for every n number of batches
show_every_n_batches = 20

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
tv-script-generation/dlnd_tv_script_generation.ipynb
ClementPhil/deep-learning
mit
Implement Generate Functions Get Tensors Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names: - "input:0" - "initial_state:0" - "final_state:0" - "probs:0" Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
def get_tensors(loaded_graph):
    """
    Get input, initial state, final state, and probabilities tensor from <loaded_graph>
    :param loaded_graph: TensorFlow graph loaded from file
    :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
    """
    inputTensor = loaded_graph.get_tensor_by_name("input:0")
    initialStateTensor = loaded_graph.get_tensor_by_name("initial_state:0")
    finalStateTensor = loaded_graph.get_tensor_by_name("final_state:0")
    probsTensor = loaded_graph.get_tensor_by_name("probs:0")
    return inputTensor, initialStateTensor, finalStateTensor, probsTensor


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
tv-script-generation/dlnd_tv_script_generation.ipynb
ClementPhil/deep-learning
mit
Choose Word Implement the pick_word() function to select the next word using probabilities.
def pick_word(probabilities, int_to_vocab):
    """
    Pick the next word in the generated text
    :param probabilities: Probabilities of the next word
    :param int_to_vocab: Dictionary of word ids as the keys and words as the values
    :return: String of the predicted word
    """
    return np.random.choice(list(int_to_vocab.values()), p=probabilities)


"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
tv-script-generation/dlnd_tv_script_generation.ipynb
ClementPhil/deep-learning
mit
Soft drinks
%matplotlib inline
import matplotlib.pyplot as plt

plt.scatter(sugarwo, softdrinkswo)
plt.show()
legacy-examples/Overdetermined system resolution - sugar in soft drinks and spirits.ipynb
BONSAMURAIS/bonsai
bsd-3-clause
Identify and remove outliers: Canada (The sugar consumption is quite high compared to the amount of soft drinks produced in Canada)
import numpy as np
from sklearn import linear_model

softdrinks = np.array([1.75E9, 9.44E9, 6.94E8, 6.30E9, 4.09E8, 5.51E9, 2.96E8, 3.87E9, 4.25E9, 5.84E8, 6.46E9, 5.13E8])
sugar = np.array([2.62E7, 1.46E8, 1.02E7, 1.03E8, 6.82E6, 9.00E7, 8.78E6, 7.22E7, 5.90E7, 7.54E6, 8.03E7, 6.18E6])

# Create linear regression object
regr = linear_model.LinearRegression()

# Train the model using the sets
regr.fit(sugar[:, np.newaxis], softdrinks)

# The coefficients
print('Coefficients:\n', regr.coef_)
legacy-examples/Overdetermined system resolution - sugar in soft drinks and spirits.ipynb
BONSAMURAIS/bonsai
bsd-3-clause
We know that part of the sugar goes into other drinks. The 64 kg sugar per L soft drink coefficient is too high, meaning that not all the sugar goes into soft drinks. The sugar rate per L beverage is supposed to be around 0.0028-0.042 kg /L beverage.
# The mean square error
print("Residual sum of squares: %.2f"
      % np.mean((regr.predict(sugar.reshape(-1, 1)) - softdrinks.reshape(-1, 1)) ** 2))

# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % regr.score(sugar.reshape(-1, 1), softdrinks.reshape(-1, 1)))
legacy-examples/Overdetermined system resolution - sugar in soft drinks and spirits.ipynb
BONSAMURAIS/bonsai
bsd-3-clause
The variance score obtained for the correlation between soft drinks and sugar is very close to 1, which supports a linear relationship between sugar consumption and soft-drink production.
%matplotlib inline
plt.scatter(sugar, softdrinks, color='black')
plt.plot(sugar.reshape(-1, 1), regr.predict(sugar.reshape(-1, 1)), color='blue', linewidth=2)
plt.xticks(())
plt.yticks(())
plt.show()
legacy-examples/Overdetermined system resolution - sugar in soft drinks and spirits.ipynb
BONSAMURAIS/bonsai
bsd-3-clause
Distilled beverages
%matplotlib inline
plt.scatter(sugarwo, spiritswo)
legacy-examples/Overdetermined system resolution - sugar in soft drinks and spirits.ipynb
BONSAMURAIS/bonsai
bsd-3-clause
Identify and remove outlier: United States (the volume of spirits produced is quite high and the sugar consumption quite low)
spirits = np.array([1.82E7, 1.11E8, 6.08E6, 1.74E8, 2.21E7, 1.91E8, 2.55E7, 1.61E8, 1.26E8, 3.45E7, 7.47E8, 1.25E6])
sugar = np.array([2.62E7, 1.46E8, 1.02E7, 1.03E8, 6.82E6, 9.00E7, 8.78E6, 7.22E7, 5.90E7, 7.54E6, 8.03E7, 6.18E6])

%matplotlib inline

# Create linear regression object
regr = linear_model.LinearRegression()

# Train the model using the sets
regr.fit(sugar[:, np.newaxis], spirits)

# The coefficients
print('Coefficients:\n', regr.coef_)

# The mean square error
print("Residual sum of squares: %.2f"
      % np.mean((regr.predict(sugar.reshape(-1, 1)) - spirits.reshape(-1, 1)) ** 2))

# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % regr.score(sugar.reshape(-1, 1), spirits.reshape(-1, 1)))

plt.scatter(sugar, spirits, color='black')
plt.plot(sugar.reshape(-1, 1), regr.predict(sugar.reshape(-1, 1)), color='blue', linewidth=2)
plt.xticks(())
plt.yticks(())
plt.show()
legacy-examples/Overdetermined system resolution - sugar in soft drinks and spirits.ipynb
BONSAMURAIS/bonsai
bsd-3-clause
Cider
%matplotlib inline
plt.scatter(sugarwo, ciderwo)
legacy-examples/Overdetermined system resolution - sugar in soft drinks and spirits.ipynb
BONSAMURAIS/bonsai
bsd-3-clause
Identify and remove outliers: United Kingdom (World's largest consumer of cider: http://greatist.com/health/beer-or-cider-healthier)
sugar = np.array([2.62E7, 1.46E8, 1.02E7, 1.03E8, 6.82E6, 9.00E7, 8.78E6, 7.22E7, 5.90E7, 7.54E6, 6.18E6])
cider = np.array([8.09E6, 8.55E7, 4.49E6, 6.24E7, 8.21E7, 1.18E8, 6.84E7, 0, 1.57E8, 1.26E7, 1.02E7])

%matplotlib inline

# Create linear regression object
regr = linear_model.LinearRegression()

# Train the model using the sets
regr.fit(sugar[:, np.newaxis], cider)

# The coefficients
print('Coefficients:\n', regr.coef_)

# The mean square error
print("Residual sum of squares: %.2f"
      % np.mean((regr.predict(sugar.reshape(-1, 1)) - cider.reshape(-1, 1)) ** 2))

# Explained variance score: 1 is perfect prediction
print('Variance score: %.2f' % regr.score(sugar.reshape(-1, 1), cider.reshape(-1, 1)))

plt.scatter(sugar, cider, color='black')
plt.plot(sugar.reshape(-1, 1), regr.predict(sugar.reshape(-1, 1)), color='blue', linewidth=2)
plt.xticks(())
plt.yticks(())
plt.show()
legacy-examples/Overdetermined system resolution - sugar in soft drinks and spirits.ipynb
BONSAMURAIS/bonsai
bsd-3-clause
2. Resolution with least-square method Calculation of a pseudo-solution of the system using the Moore-Penrose pseudo-inverse. https://en.wikipedia.org/wiki/Linear_least_squares_(mathematics)
from numpy import matrix
from scipy.linalg import pinv

# Matrix of the overdetermined system
volumes = matrix([[1.82E7, 1.75E9], [1.11E8, 9.44E9], [6.08E6, 6.94E8], [1.74E8, 6.30E9],
                  [2.21E7, 4.09E8], [1.91E8, 5.51E9], [2.55E7, 2.96E8], [1.61E8, 3.87E9],
                  [1.26E8, 4.25E9], [3.45E7, 5.84E8], [7.47E8, 6.46E9], [7.06E8, 4.96E10],
                  [1.25E6, 5.13E8]])
print(volumes)

# The second member
sugar = matrix([[2.62E7], [1.46E8], [1.02E7], [1.03E8], [6.82E6], [9.00E7], [8.78E6],
                [7.22E7], [5.90E7], [7.54E6], [8.03E7], [5.49E8], [6.18E6]])
print(sugar)

# Calculation of Moore-Penrose inverse
PIA = pinv(volumes)
print(PIA)

# Application to second member for pseudo solution
print(PIA * sugar)
legacy-examples/Overdetermined system resolution - sugar in soft drinks and spirits.ipynb
BONSAMURAIS/bonsai
bsd-3-clause
The solutions we obtain with the least-squares method are: - sugar rate in distilled beverages: 0.028 kg/L - sugar rate in soft drinks: 0.011 kg/L The sugar rate is higher for distilled beverages? (Is this because we removed Canada's soft-drink production, which contains a lot of sugar?) Conclusion: the least-squares method gives equal weight to the linear relationship between sugar intake and distilled beverages or soft drinks. Resolution with the QR decomposition vs the least-squares method In linear algebra, a QR decomposition (also called a QR factorization) of a matrix is a decomposition of a matrix A into a product A = QR of an orthogonal matrix Q and an upper triangular matrix R. QR decomposition is often used to solve the linear least squares problem. The method of least squares is a standard approach in regression analysis to the approximate solution of overdetermined systems, i.e., sets of equations in which there are more equations than unknowns. "Least squares" means that the overall solution minimizes the sum of the squares of the errors made in the results of every single equation. Different outliers were removed from the equations.
# When not removing any outliers from the equation
from numpy import *

# generating the overdetermined system
A = matrix([[1.82E7, 1.75E9], [1.11E8, 9.44E9], [6.08E6, 6.94E8], [1.74E8, 6.30E9],
            [2.21E7, 4.09E8], [1.91E8, 5.51E9], [2.55E7, 2.96E8], [1.61E8, 3.87E9],
            [1.26E8, 4.25E9], [3.45E7, 5.84E8], [7.47E8, 6.46E9], [7.06E8, 4.96E10],
            [2.30E8, 3.53E9], [1.25E6, 5.13E8]])
b = matrix([[2.62E7], [1.46E8], [1.02E7], [1.03E8], [6.82E6], [9.00E7], [8.78E6],
            [7.22E7], [5.90E7], [7.54E6], [8.03E7], [5.49E8], [3.40E10], [6.18E6]])

x_lstsq = linalg.lstsq(A, b)[0]  # computing the numpy solution

Q, R = linalg.qr(A)              # qr decomposition of A
Qb = dot(Q.T, b)                 # computing Q^T*b (project b onto the range of A)
x_qr = linalg.solve(R, Qb)       # solving R*x = Q^T*b

# comparing the solutions
print('qr solution')
print(x_qr)
print('lstsq solution')
print(x_lstsq)

# When removing only Canada from the equation
# generating the overdetermined system
A = matrix([[1.82E7, 1.75E9], [1.11E8, 9.44E9], [6.08E6, 6.94E8], [1.74E8, 6.30E9],
            [2.21E7, 4.09E8], [1.91E8, 5.51E9], [2.55E7, 2.96E8], [1.61E8, 3.87E9],
            [1.26E8, 4.25E9], [3.45E7, 5.84E8], [7.47E8, 6.46E9], [7.06E8, 4.96E10],
            [1.25E6, 5.13E8]])
b = matrix([[2.62E7], [1.46E8], [1.02E7], [1.03E8], [6.82E6], [9.00E7], [8.78E6],
            [7.22E7], [5.90E7], [7.54E6], [8.03E7], [5.49E8], [6.18E6]])

x_lstsq = linalg.lstsq(A, b)[0]  # computing the numpy solution

Q, R = linalg.qr(A)              # qr decomposition of A
Qb = dot(Q.T, b)                 # computing Q^T*b (project b onto the range of A)
x_qr = linalg.solve(R, Qb)       # solving R*x = Q^T*b

# comparing the solutions
print('qr solution')
print(x_qr)
print('lstsq solution')
print(x_lstsq)

# When removing only the United States from the equation
# generating the overdetermined system
A = matrix([[1.82E7, 1.75E9], [1.11E8, 9.44E9], [6.08E6, 6.94E8], [1.74E8, 6.30E9],
            [2.21E7, 4.09E8], [1.91E8, 5.51E9], [2.55E7, 2.96E8], [1.61E8, 3.87E9],
            [1.26E8, 4.25E9], [3.45E7, 5.84E8], [7.47E8, 6.46E9], [2.30E8, 3.53E9],
            [1.25E6, 5.13E8]])
b = matrix([[2.62E7], [1.46E8], [1.02E7], [1.03E8], [6.82E6], [9.00E7], [8.78E6],
            [7.22E7], [5.90E7], [7.54E6], [8.03E7], [3.40E8], [6.18E6]])

x_lstsq = linalg.lstsq(A, b)[0]  # computing the numpy solution

Q, R = linalg.qr(A)              # qr decomposition of A
Qb = dot(Q.T, b)                 # computing Q^T*b (project b onto the range of A)
x_qr = linalg.solve(R, Qb)       # solving R*x = Q^T*b

# comparing the solutions
print('qr solution')
print(x_qr)
print('lstsq solution')
print(x_lstsq)

# When removing the United States and Canada from the equation
# generating the overdetermined system
A = matrix([[1.82E7, 1.75E9], [1.11E8, 9.44E9], [6.08E6, 6.94E8], [1.74E8, 6.30E9],
            [2.21E7, 4.09E8], [1.91E8, 5.51E9], [2.55E7, 2.96E8], [1.61E8, 3.87E9],
            [1.26E8, 4.25E9], [3.45E7, 5.84E8], [7.47E8, 6.46E9], [1.25E6, 5.13E8]])
b = matrix([[2.62E7], [1.46E8], [1.02E7], [1.03E8], [6.82E6], [9.00E7], [8.78E6],
            [7.22E7], [5.90E7], [7.54E6], [8.03E7], [6.18E6]])

x_lstsq = linalg.lstsq(A, b)[0]  # computing the numpy solution

Q, R = linalg.qr(A)              # qr decomposition of A
Qb = dot(Q.T, b)                 # computing Q^T*b (project b onto the range of A)
x_qr = linalg.solve(R, Qb)       # solving R*x = Q^T*b

# comparing the solutions
print('qr solution')
print(x_qr)
print('lstsq solution')
print(x_lstsq)
legacy-examples/Overdetermined system resolution - sugar in soft drinks and spirits.ipynb
BONSAMURAIS/bonsai
bsd-3-clause
We can see that depending on the outliers we decide to remove from the over-determined system, the solutions obtained are extremely different. 3. Substantial uncertainties in the independent variable: fitting errors-in-variables models The most important application is in data fitting. The best fit in the least-squares sense minimizes the sum of squared residuals, a residual being the difference between an observed value and the fitted value provided by a model. When the problem has substantial uncertainties in the independent variable (the x variable), then simple regression and least squares methods have problems; in such cases, the methodology required for fitting errors-in-variables models may be considered instead of that for least squares. In statistics, errors-in-variables models or measurement error models are regression models that account for measurement errors in the independent variables. In contrast, standard regression models assume that those regressors have been measured exactly, or observed without error; as such, those models account only for errors in the dependent variables, or responses. In the case when some regressors have been measured with errors, estimation based on the standard assumption leads to inconsistent estimates, meaning that the parameter estimates do not tend to the true values even in very large samples. For simple linear regression the effect is an underestimate of the coefficient, known as the attenuation bias. Consider a simple linear regression model of the form $$ y_t = \alpha + \beta x_t^{*} + \varepsilon_t, \quad t = 1, \ldots, T, $$ where $x_t^{*}$ denotes the true but unobserved regressor. Instead we observe this value with an error: $$ x_t = x_t^{*} + \eta_t, $$ where the measurement error $\eta_t$ is assumed to be independent of the true value $x_t^{*}$. If the $y_t$'s are simply regressed on the $x_t$'s (see simple linear regression), then the estimator for the slope coefficient is $$ \hat{\beta} = \frac{\tfrac{1}{T}\sum_{t=1}^{T}(x_t - \bar{x})(y_t - \bar{y})}{\tfrac{1}{T}\sum_{t=1}^{T}(x_t - \bar{x})^{2}}, $$ which converges as the sample size $T$ increases without bound: $$ \hat{\beta} \xrightarrow{p} \frac{\operatorname{Cov}[x_t, y_t]}{\operatorname{Var}[x_t]} = \frac{\beta\,\sigma_{x^{*}}^{2}}{\sigma_{x^{*}}^{2} + \sigma_{\eta}^{2}} = \frac{\beta}{1 + \sigma_{\eta}^{2}/\sigma_{x^{*}}^{2}}. $$
Variances are non-negative, so that in the limit the estimate is smaller in magnitude than the true value of $\beta$, an effect which statisticians call attenuation or regression dilution. Thus the 'naïve' least squares estimator is inconsistent in this setting. However, the estimator is a consistent estimator of the parameter required for a best linear predictor of $y$ given $x$: in some applications this may be what is required, rather than an estimate of the 'true' regression coefficient, although that would assume that the variance of the errors in observing $x^{*}$ remains fixed. This follows directly from the result quoted immediately above, and the fact that the regression coefficient relating the $y_t$'s to the actually observed $x_t$'s, in a simple linear regression, is given by $$ \beta_x = \frac{\operatorname{Cov}[x_t, y_t]}{\operatorname{Var}[x_t]}. $$ It is this coefficient, rather than $\beta$, that would be required for constructing a predictor of $y$ based on an observed $x$ which is subject to noise. It can be argued that almost all existing data sets contain errors of different nature and magnitude, so that attenuation bias is extremely frequent (although in multivariate regression the direction of bias is ambiguous). Jerry Hausman sees this as an iron law of econometrics: "The magnitude of the estimate is usually smaller than expected."
import numpy
from warnings import warn
from scipy.odr import __odrpack
legacy-examples/Overdetermined system resolution - sugar in soft drinks and spirits.ipynb
BONSAMURAIS/bonsai
bsd-3-clause
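The scipy.odr import above points at the intended tool for an errors-in-variables fit: orthogonal distance regression, which accounts for uncertainty in both x and y. The original fit is not shown in this excerpt, so the following is only a sketch of how such a fit could look, assuming the sugar and softdrinks arrays from earlier and made-up uncertainty estimates.
import numpy as np
from scipy import odr

def linear(beta, x):
    # Straight-line model: y = beta[0] + beta[1] * x
    return beta[0] + beta[1] * x

linear_model = odr.Model(linear)

# RealData lets us attach (hypothetical) standard deviations to both variables
data = odr.RealData(sugar, softdrinks,
                    sx=0.05 * sugar,        # assumed 5% uncertainty on sugar consumption
                    sy=0.05 * softdrinks)   # assumed 5% uncertainty on soft-drink production

fit = odr.ODR(data, linear_model, beta0=[0.0, 60.0]).run()
print('intercept, slope:', fit.beta)
print('parameter std dev:', fit.sd_beta)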
Each row represents the price for a given day: the highest (High), the lowest (Low), the opening price (Open - start of the day) and the closing price (Close - end of the day). A volatile move on a given day is then visible in the chart at first glance as distinctly large candles (see chart 1). To mark them automatically in my analysis software (Python - pandas), I define volatile candles with a rule such as: the size of the price change must be larger than in the 4 previous candles. To determine this, I need to compute the size of the distance $Close-Open$ for each candle. Pandas makes this job very easy:
spy_data['C-O'] = spy_data['Close'] - spy_data['Open'] spy_data.tail()
Analyzes/Volatile movements/01 Volatile movements in python and pandas 1.ipynb
vanheck/blog-notes
mit
Now I know the exact price change for each day. To compare magnitudes without caring whether the price fell or rose on a given day, I apply the absolute value.
spy_data['Abs(C-O)'] = spy_data['C-O'].abs() spy_data.tail()
Analyzes/Volatile movements/01 Volatile movements in python and pandas 1.ipynb
vanheck/blog-notes
mit
Identifying a volatile bar I identify volatile candles using the rolling functionality and the apply function. Rolling splits a pandas DataFrame into smaller "windows", which are passed one by one to the apply function as a parameter. This means that in the following code, the is_bigger function is evaluated for every row of the data stored in spy_data. The rows parameter successively receives a slice of the data containing 4 rows (the currently computed row + the 3 previous rows). The result of is_bigger is a value indicating whether the currently computed row is more volatile than the previous ones in the window.
def is_bigger(rows):
    # rows[-1] - the last value is greater than the maximum of the previous ones
    result = rows[-1] > rows[:-1].max()
    return result

spy_data['VolBar'] = spy_data['Abs(C-O)'].rolling(4).apply(is_bigger, raw=True)
spy_data.tail(10)
Analyzes/Volatile movements/01 Volatile movements in python and pandas 1.ipynb
vanheck/blog-notes
mit
To see which candles are more volatile than the previous ones, I use a simple selection where the VolBar column == 1.
spy_data[spy_data['VolBar'] == 1].tail()
Analyzes/Volatile movements/01 Volatile movements in python and pandas 1.ipynb
vanheck/blog-notes
mit
Built-in RNN layers: a simple example There are three built-in RNN layers in Keras: keras.layers.SimpleRNN, a fully-connected RNN where the output from previous timestep is to be fed to next timestep. keras.layers.GRU, first proposed in Cho et al., 2014. keras.layers.LSTM, first proposed in Hochreiter & Schmidhuber, 1997. In early 2015, Keras had the first reusable open-source Python implementations of LSTM and GRU. Here is a simple example of a Sequential model that processes sequences of integers, embeds each integer into a 64-dimensional vector, then processes the sequence of vectors using a LSTM layer.
from tensorflow import keras
from tensorflow.keras import layers

model = keras.Sequential()

# Add an Embedding layer expecting input vocab of size 1000, and
# output embedding dimension of size 64.
model.add(layers.Embedding(input_dim=1000, output_dim=64))

# Add a LSTM layer with 128 internal units.
model.add(layers.LSTM(128))

# Add a Dense layer with 10 units.
model.add(layers.Dense(10))

model.summary()
courses/machine_learning/deepdive2/text_classification/solutions/rnn.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
RNN layers and RNN cells In addition to the built-in RNN layers, the RNN API also provides cell-level APIs. Unlike RNN layers, which processes whole batches of input sequences, the RNN cell only processes a single timestep. The cell is the inside of the for loop of a RNN layer. Wrapping a cell inside a keras.layers.RNN layer gives you a layer capable of processing batches of sequences, e.g. RNN(LSTMCell(10)). Mathematically, RNN(LSTMCell(10)) produces the same result as LSTM(10). In fact, the implementation of this layer in TF v1.x was just creating the corresponding RNN cell and wrapping it in a RNN layer. However using the built-in GRU and LSTM layers enable the use of CuDNN and you may see better performance. There are three built-in RNN cells, each of them corresponding to the matching RNN layer. keras.layers.SimpleRNNCell corresponds to the SimpleRNN layer. keras.layers.GRUCell corresponds to the GRU layer. keras.layers.LSTMCell corresponds to the LSTM layer. The cell abstraction, together with the generic keras.layers.RNN class, make it very easy to implement custom RNN architectures for your research. Cross-batch statefulness When processing very long sequences (possibly infinite), you may want to use the pattern of cross-batch statefulness. Normally, the internal state of a RNN layer is reset every time it sees a new batch (i.e. every sample seen by the layer is assumed to be independent of the past). The layer will only maintain a state while processing a given sample. If you have very long sequences though, it is useful to break them into shorter sequences, and to feed these shorter sequences sequentially into a RNN layer without resetting the layer's state. That way, the layer can retain information about the entirety of the sequence, even though it's only seeing one sub-sequence at a time. You can do this by setting stateful=True in the constructor. If you have a sequence s = [t0, t1, ... t1546, t1547], you would split it into e.g. s1 = [t0, t1, ... t100] s2 = [t101, ... t201] ... s16 = [t1501, ... t1547] Then you would process it via: python lstm_layer = layers.LSTM(64, stateful=True) for s in sub_sequences: output = lstm_layer(s) When you want to clear the state, you can use layer.reset_states(). Note: In this setup, sample i in a given batch is assumed to be the continuation of sample i in the previous batch. This means that all batches should contain the same number of samples (batch size). E.g. if a batch contains [sequence_A_from_t0_to_t100, sequence_B_from_t0_to_t100], the next batch should contain [sequence_A_from_t101_to_t200, sequence_B_from_t101_to_t200]. Here is a complete example:
paragraph1 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph2 = np.random.random((20, 10, 50)).astype(np.float32)
paragraph3 = np.random.random((20, 10, 50)).astype(np.float32)

lstm_layer = layers.LSTM(64, stateful=True)
output = lstm_layer(paragraph1)
output = lstm_layer(paragraph2)
output = lstm_layer(paragraph3)

# reset_states() will reset the cached state to the original initial_state.
# If no initial_state was provided, zero-states will be used by default.
lstm_layer.reset_states()
courses/machine_learning/deepdive2/text_classification/solutions/rnn.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Bidirectional RNNs For sequences other than time series (e.g. text), it is often the case that a RNN model can perform better if it not only processes sequence from start to end, but also backwards. For example, to predict the next word in a sentence, it is often useful to have the context around the word, not only just the words that come before it. Keras provides an easy API for you to build such bidirectional RNNs: the keras.layers.Bidirectional wrapper.
model = keras.Sequential()

# Add Bidirectional layers
model.add(
    layers.Bidirectional(layers.LSTM(64, return_sequences=True), input_shape=(5, 10))
)
model.add(layers.Bidirectional(layers.LSTM(32)))
model.add(layers.Dense(10))

model.summary()
courses/machine_learning/deepdive2/text_classification/solutions/rnn.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Let's create a model instance and train it. We choose sparse_categorical_crossentropy as the loss function for the model. The output of the model has shape [batch_size, 10]. The target for the model is an integer vector, each of the integers in the range of 0 to 9.
model = build_model(allow_cudnn_kernel=True)

# Compile the model
model.compile(
    loss=keras.losses.SparseCategoricalCrossentropy(from_logits=True),
    optimizer="sgd",
    metrics=["accuracy"],
)

model.fit(
    x_train, y_train, validation_data=(x_test, y_test), batch_size=batch_size, epochs=1
)
courses/machine_learning/deepdive2/text_classification/solutions/rnn.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
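The build_model helper called above is defined earlier in the original notebook and is not shown in this excerpt. A sketch of what it typically looks like in this exercise (an assumption based on the surrounding text; units, input_dim and output_size are placeholder values):
# Hypothetical values; the original notebook defines these earlier
input_dim = 28
units = 64
output_size = 10  # labels are from 0 to 9

def build_model(allow_cudnn_kernel=True):
    if allow_cudnn_kernel:
        # The built-in LSTM layer can use the fast CuDNN kernel when run on a GPU
        lstm_layer = keras.layers.LSTM(units, input_shape=(None, input_dim))
    else:
        # Wrapping an LSTMCell in an RNN layer is mathematically equivalent but won't use CuDNN
        lstm_layer = keras.layers.RNN(
            keras.layers.LSTMCell(units), input_shape=(None, input_dim)
        )
    model = keras.models.Sequential(
        [
            lstm_layer,
            keras.layers.BatchNormalization(),
            keras.layers.Dense(output_size),
        ]
    )
    return model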
3: Reading in files Instructions Run the read() method on the File object f to return the string representation of crime_rates.csv. Assign the resulting string to the variable data.
f = open("crime_rates.csv", "r")
data = f.read()
print(data)
python_introduction/beginner/files and loops.ipynb
datascience-practice/data-quest
mit
4: Splitting Instructions Split the string object data on the new-line character "\n" and store the result in a variable named rows. Then use the print() function to display the first 5 elements in rows. Answer
# We can split a string into a list.
sample = "john,plastic,joe"
split_list = sample.split(",")
print(split_list)

# Here's another example.
string_two = "How much wood\ncan a woodchuck chuck\nif a woodchuck\ncan chuck wood?"
split_string_two = string_two.split('\n')
print(split_string_two)

# Code from previous cells
f = open('crime_rates.csv', 'r')
data = f.read()
rows = data.split('\n')
print(rows[0:5])
python_introduction/beginner/files and loops.ipynb
datascience-practice/data-quest
mit
5: Loops Instructions ... Answer 6: Practice, loops Instructions The variable ten_rows contains the first 10 elements in rows. Write a for loop that iterates over each element in ten_rows and uses the print() function to display each element. Answer
ten_rows = rows[0:10]
for row in ten_rows:
    print(row)
python_introduction/beginner/files and loops.ipynb
datascience-practice/data-quest
mit
7: List of lists Instructions For now, explore and run the code we dissected in this step in the code cell below Answer
three_rows = ["Albuquerque,749", "Anaheim,371", "Anchorage,828"] final_list = [] for row in three_rows: split_list = row.split(',') final_list.append(split_list) print(final_list) for elem in final_list: print(elem) print(final_list[0]) print(final_list[1]) print(final_list[2])
python_introduction/beginner/files and loops.ipynb
datascience-practice/data-quest
mit
8: Practice, splitting elements in a list Let's now convert the full dataset, rows, into a list of lists using the same logic from the step before. Instructions Write a for loop that splits each element in rows on the comma delimiter and appends the resulting list to a new list named final_data. Then, display the first 5 elements in final_data using list slicing and the print() function. Answer
f = open('crime_rates.csv', 'r') data = f.read() rows = data.split('\n') final_data = [row.split(",") for row in rows] print(final_data[0:5])
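The list comprehension above is a compact answer; for readers following the instructions literally, an equivalent explicit for loop (same result, just spelled out) is:
# Explicit loop version of the comprehension above.
final_data = []
for row in rows:
    # Split each line on the comma and keep the resulting list.
    final_data.append(row.split(","))
print(final_data[0:5])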
python_introduction/beginner/files and loops.ipynb
datascience-practice/data-quest
mit
9: Accessing elements in a list of lists, the manual way Instructions five_elements contains the first 5 elements from final_data. Create a list of strings named cities_list that contains the city names from each list in five_elements. Answer
five_elements = final_data[:5] print(five_elements) cities_list = [city for city,_ in five_elements]
python_introduction/beginner/files and loops.ipynb
datascience-practice/data-quest
mit
10: Looping through a list of lists Instructions Create a list of strings named cities_list that contains just the city names from final_data. Recall that the city name is located at index 0 for each list in final_data. Answer
crime_rates = []
for row in five_elements:
    # row is a list like [city, crime_rate], not a string.
    crime_rate = row[1]
    # crime_rate is the crime rate as a string, e.g. '749' (not the city name).
    crime_rates.append(crime_rate)
cities_list = [row[0] for row in final_data]
python_introduction/beginner/files and loops.ipynb
datascience-practice/data-quest
mit
11: Practice Instructions Create a list of integers named int_crime_rates that contains just the crime rates as integers from the list rows. First create an empty list and assign it to int_crime_rates. Then, write a for loop that iterates over rows that executes the following: uses the split() method to convert each string in rows into a list on the comma delimiter converts the value at index 1 from that list to an integer using the int() function then uses the append() method to add each integer to int_crime_rates
f = open('crime_rates.csv', 'r')
data = f.read()
rows = data.split('\n')
print(rows[0:5])

int_crime_rates = []
for row in rows:
    parts = row.split(",")
    if len(parts) < 2:
        # Skip blank or malformed lines (e.g. a trailing empty line).
        continue
    int_crime_rates.append(int(parts[1]))
print(int_crime_rates)
python_introduction/beginner/files and loops.ipynb
datascience-practice/data-quest
mit
Download permit data from the city of Chicago We save the output to data/permits.csv
class DownloadData(luigi.ExternalTask): """ Downloads permit data from city of Chicago """ def run(self): url = 'https://data.cityofchicago.org/api/views/ydr8-5enu/rows.csv?accessType=DOWNLOAD' response = requests.get(url) with self.output().open('w') as out_file: out_file.write(response.text) def output(self): return luigi.LocalTarget("data/permits.csv")
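As a usage sketch (my own addition, not part of the original pipeline cells), this task can also be run on its own from inside the notebook with luigi's in-process scheduler; note that it actually downloads the file:
# Illustrative sketch: run only the download task with the local scheduler.
luigi.build([DownloadData()], local_scheduler=True)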
chicago/chicago_permits.ipynb
hunterowens/data-pipelines
mit
Clean the data The ESTIMATED_COST column contains "$"-prefixed strings that would otherwise end up as Inf/NaN values, so we must clean the strings before converting them to floats. Save the cleaned data to data/permits_clean.csv
def to_float(s): default = np.nan try: r = float(s.replace('$', '')) except: return default return r def to_int(s): default = None if s == '': return default return int(s) def to_date(s): default = '01/01/1900' if s == '': s = default return datetime.datetime.strptime(s, "%m/%d/%Y") # *** Additional available headers at the end *** converter = {'ID': to_int, 'PERMIT#': str, 'PERMIT_TYPE': str, 'ISSUE_DATE': to_date, 'ESTIMATED_COST': to_float, 'AMOUNT_WAIVED': to_float, 'AMOUNT_PAID': to_float, 'TOTAL_FEE': to_float, 'STREET_NUMBER': to_int, 'STREET DIRECTION': str, 'STREET_NAME': str, 'SUFFIX': str, 'WORK_DESCRIPTION': str, 'LATITUDE': to_float, 'LONGITUDE': to_float, 'LOCATION': str, } class cleanCSV(luigi.Task): """This is our cleaning step""" def requires(self): return DownloadData() def run(self): df = pd.read_csv(self.input().open('r'), usecols=converter.keys(), converters=converter, skipinitialspace=True) df.to_csv(self.output().fn) def output(self): return luigi.LocalTarget("data/permits_clean.csv")
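A few quick checks of the converter helpers (my own addition), showing how dollar-prefixed, empty, and missing values are handled:
# Illustrative checks of the converters defined above.
print(to_float('$2500.00'))   # -> 2500.0
print(to_float(''))           # -> nan (cannot be parsed)
print(to_int(''))             # -> None
print(to_date(''))            # -> 1900-01-01 00:00:00 (the default date)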
chicago/chicago_permits.ipynb
hunterowens/data-pipelines
mit
Download the ward shapefiles The response is in ZIP format, so we need to extract and return the *.shp file as the output
import shutil class DownloadWards(luigi.ExternalTask): """ Downloads ward shapefiles from city of Chicago """ def run(self): url = "https://data.cityofchicago.org/api/geospatial/sp34-6z76?method=export&format=Shapefile" response = requests.get(url) z = zipfile.ZipFile(StringIO.StringIO(response.content)) files = z.namelist() z.extractall('data/') for fname in files: shutil.move('data/' + fname, 'data/geo_export' + fname[-4:]) def output(self): return luigi.LocalTarget("data/geo_export.shp")
chicago/chicago_permits.ipynb
hunterowens/data-pipelines
mit
Convenience functions
def plot(m, ldn_points, df_map, bds, sizes, title, label, output): plt.clf() fig = plt.figure() ax = fig.add_subplot(111, axisbg='w', frame_on=False) # we don't need to pass points to m() because we calculated using map_points and shapefile polygons dev = m.scatter( [geom.x for geom in ldn_points], [geom.y for geom in ldn_points], s=sizes, marker='.', lw=.25, facecolor='#33ccff', edgecolor='none', alpha=0.9, antialiased=True, label=label, zorder=3) # plot boroughs by adding the PatchCollection to the axes instance ax.add_collection(PatchCollection(df_map['patches'].values, match_original=True)) # copyright and source data info smallprint = ax.text( 1.03, 0, 'Total points: %s' % len(ldn_points), ha='right', va='bottom', size=4, color='#555555', transform=ax.transAxes) # Draw a map scale m.drawmapscale( bds[0] + 0.08, bds[1] + 0.015, bds[0], bds[1], 10., barstyle='fancy', labelstyle='simple', fillcolor1='w', fillcolor2='#555555', fontcolor='#555555', zorder=5) plt.title(title) plt.tight_layout() # this will set the image width to 722px at 100dpi fig.set_size_inches(7.22, 5.25) plt.savefig(output, dpi=500, alpha=True) # plt.show() def make_basemap(infile): with fiona.open(infile) as shp: bds = shp.bounds extra = 0.05 ll = (bds[0], bds[1]) ur = (bds[2], bds[3]) w, h = bds[2] - bds[0], bds[3] - bds[1] # Check w & h calculations assert bds[0] + w == bds[2] and bds[1] + h == bds[3], "Width or height of image not correct!" center = (bds[0] + (w / 2.0), bds[1] + (h / 2.0)) m = Basemap(projection='tmerc', lon_0=center[0], lat_0=center[1], ellps = 'WGS84', width=w * 100000 + 10000, height=h * 100000 + 10000, lat_ts=0, resolution='i', suppress_ticks=True ) m.readshapefile(infile[:-4], 'chicago', color='blue', zorder=3) # m.fillcontinents() return m, bds def data_map(m): df_map = pd.DataFrame({'poly': [Polygon(xy) for xy in m.chicago], 'ward_name': [ward['ward'] for ward in m.chicago_info]}) df_map['area_m'] = df_map['poly'].map(lambda x: x.area) df_map['area_km'] = df_map['area_m'] / 100000 # draw ward patches from polygons df_map['patches'] = df_map['poly'].map(lambda x: PolygonPatch(x, fc='#555555', ec='#787878', lw=.25, alpha=.9, zorder=4)) return df_map def point_objs(m, df, df_map): # Create Point objects in map coordinates from dataframe lon and lat values map_points = pd.Series( [Point(m(mapped_x, mapped_y)) for mapped_x, mapped_y in zip(df['LONGITUDE'], df['LATITUDE'])]) permit_points = MultiPoint(list(map_points.values)) wards_polygon = prep(MultiPolygon(list(df_map['poly'].values))) return filter(wards_polygon.contains, permit_points)
chicago/chicago_permits.ipynb
hunterowens/data-pipelines
mit
Map Permit Distribution
class MakePermitMap(luigi.Task): def requires(self): return dict(wards=DownloadWards(), data=cleanCSV()) def run(self): m, bds = make_basemap(self.input()['wards'].fn) df = pd.read_csv(self.input()['data'].open('r')) df_map = data_map(m) ldn_points = point_objs(m, df, df_map) plot(m, ldn_points, df_map, bds, sizes=5, title="Permit Locations, Chicago", label="Permit Locations", output='data/chicago_permits.png') def output(self): return luigi.LocalTarget('data/chicago_permits.png')
chicago/chicago_permits.ipynb
hunterowens/data-pipelines
mit
Map Estimated Costs
class MakeEstimatedCostMap(luigi.Task): """ Plot the permits and scale the size by the estimated cost (relative to range)""" def requires(self): return dict(wards=DownloadWards(), data=cleanCSV()) def run(self): m, bds = make_basemap(self.input()['wards'].fn) df = pd.read_csv(self.input()['data'].open('r')) # Get the estimated costs, normalize, and scale by 5 <-- optional costs = df['ESTIMATED_COST'] costs.fillna(costs.min() * 2, inplace=True) assert not np.any([cost is np.inf for cost in costs]), "Inf in column!" # plt.hist(costs, 3000, log=True); sizes = ((costs - costs.min()) / (costs.max() - costs.min())) * 100 #scale factor df_map = data_map(m) ldn_points = point_objs(m, df, df_map) plot(m, ldn_points, df_map, bds, sizes=sizes, title="Relative Estimated Permit Cost, Chicago", label="Relative Estimated Permit Cost", output='data/chicago_rel_est_cost.png') def output(self): return luigi.LocalTarget('data/chicago_est_cost.png')
chicago/chicago_permits.ipynb
hunterowens/data-pipelines
mit
Run All Tasks
class MakeMaps(luigi.WrapperTask): """ RUN ALL THE PLOTS!!! """ def requires(self): yield MakePermitMap() yield MakeEstimatedCostMap() def run(self): pass
chicago/chicago_permits.ipynb
hunterowens/data-pipelines
mit
Note: to run from the command line: export the notebook to a '.py' file, then run: python -m luigi chicago_permits MakeMaps --local-scheduler
# if __name__ == '__main__': luigi.run(['MakeMaps', '--local-scheduler'])
chicago/chicago_permits.ipynb
hunterowens/data-pipelines
mit
Miscellaneous notes The estimated cost spread is predictably non-linear, so a further direction could be to filter out the "$0" entries as unestimated (which they likely are); a minimal sketch of that filtering is given after the next code cell. Suggested work Map estimated costs overlaid with actual cost Map permits by # of contractors involved and cost Plot estimated cost accuracy based on contractor count Map permits by contractors Choropleth maps by permit count, cost, etc. Include population density (Census data) or distance from major routes Etc...
# For reference, cost spread is exponential # plt.hist(costs, 3000, log=True); # Additional headers available... """ # PIN1, # PIN2, # PIN3, # PIN4, # PIN5, # PIN6, # PIN7, # PIN8, # PIN9, # PIN10, 'CONTRACTOR_1_TYPE, 'CONTRACTOR_1_NAME, 'CONTRACTOR_1_ADDRESS, 'CONTRACTOR_1_CITY, 'CONTRACTOR_1_STATE, 'CONTRACTOR_1_ZIPCODE, 'CONTRACTOR_1_PHONE, 'CONTRACTOR_2_TYPE, 'CONTRACTOR_2_NAME, 'CONTRACTOR_2_ADDRESS, 'CONTRACTOR_2_CITY, 'CONTRACTOR_2_STATE, 'CONTRACTOR_2_ZIPCODE, 'CONTRACTOR_2_PHONE, 'CONTRACTOR_3_TYPE, 'CONTRACTOR_3_NAME, 'CONTRACTOR_3_ADDRESS, 'CONTRACTOR_3_CITY, 'CONTRACTOR_3_STATE, 'CONTRACTOR_3_ZIPCODE, 'CONTRACTOR_3_PHONE, 'CONTRACTOR_4_TYPE, 'CONTRACTOR_4_NAME, 'CONTRACTOR_4_ADDRESS, 'CONTRACTOR_4_CITY, 'CONTRACTOR_4_STATE, 'CONTRACTOR_4_ZIPCODE, 'CONTRACTOR_4_PHONE, 'CONTRACTOR_5_TYPE, 'CONTRACTOR_5_NAME, 'CONTRACTOR_5_ADDRESS, 'CONTRACTOR_5_CITY, 'CONTRACTOR_5_STATE, 'CONTRACTOR_5_ZIPCODE, 'CONTRACTOR_5_PHONE, 'CONTRACTOR_6_TYPE, 'CONTRACTOR_6_NAME, 'CONTRACTOR_6_ADDRESS, 'CONTRACTOR_6_CITY, 'CONTRACTOR_6_STATE, 'CONTRACTOR_6_ZIPCODE, 'CONTRACTOR_6_PHONE, 'CONTRACTOR_7_TYPE, 'CONTRACTOR_7_NAME, 'CONTRACTOR_7_ADDRESS, 'CONTRACTOR_7_CITY, 'CONTRACTOR_7_STATE, 'CONTRACTOR_7_ZIPCODE, 'CONTRACTOR_7_PHONE, 'CONTRACTOR_8_TYPE, 'CONTRACTOR_8_NAME, 'CONTRACTOR_8_ADDRESS, 'CONTRACTOR_8_CITY, 'CONTRACTOR_8_STATE, 'CONTRACTOR_8_ZIPCODE, 'CONTRACTOR_8_PHONE, 'CONTRACTOR_9_TYPE, 'CONTRACTOR_9_NAME, 'CONTRACTOR_9_ADDRESS, 'CONTRACTOR_9_CITY, 'CONTRACTOR_9_STATE, 'CONTRACTOR_9_ZIPCODE, 'CONTRACTOR_9_PHONE, 'CONTRACTOR_10_TYPE, 'CONTRACTOR_10_NAME, 'CONTRACTOR_10_ADDRESS, 'CONTRACTOR_10_CITY, 'CONTRACTOR_10_STATE, 'CONTRACTOR_10_ZIPCODE, 'CONTRACTOR_10_PHONE, 'CONTRACTOR_11_TYPE, 'CONTRACTOR_11_NAME, 'CONTRACTOR_11_ADDRESS, 'CONTRACTOR_11_CITY, 'CONTRACTOR_11_STATE, 'CONTRACTOR_11_ZIPCODE, 'CONTRACTOR_11_PHONE, 'CONTRACTOR_12_TYPE, 'CONTRACTOR_12_NAME, 'CONTRACTOR_12_ADDRESS, 'CONTRACTOR_12_CITY, 'CONTRACTOR_12_STATE, 'CONTRACTOR_12_ZIPCODE, 'CONTRACTOR_12_PHONE, 'CONTRACTOR_13_TYPE, 'CONTRACTOR_13_NAME, 'CONTRACTOR_13_ADDRESS, 'CONTRACTOR_13_CITY, 'CONTRACTOR_13_STATE, 'CONTRACTOR_13_ZIPCODE, 'CONTRACTOR_13_PHONE, 'CONTRACTOR_14_TYPE, 'CONTRACTOR_14_NAME, 'CONTRACTOR_14_ADDRESS, 'CONTRACTOR_14_CITY, 'CONTRACTOR_14_STATE, 'CONTRACTOR_14_ZIPCODE, 'CONTRACTOR_14_PHONE, 'CONTRACTOR_15_TYPE, 'CONTRACTOR_15_NAME, 'CONTRACTOR_15_ADDRESS, 'CONTRACTOR_15_CITY, 'CONTRACTOR_15_STATE, 'CONTRACTOR_15_ZIPCODE, 'CONTRACTOR_15_PHONE, """
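A minimal sketch of the suggested filtering (my own addition; it assumes the cleaned file produced by cleanCSV above and pandas imported as pd): treat $0 estimates as unestimated and drop them before any cost-based analysis.
# Illustrative sketch: drop permits whose estimated cost is zero or missing.
df = pd.read_csv('data/permits_clean.csv')
estimated = df[df['ESTIMATED_COST'] > 0]
print(len(df), 'permits in total,', len(estimated), 'with a non-zero estimated cost')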
chicago/chicago_permits.ipynb
hunterowens/data-pipelines
mit
First function: to roll three independent six-sided dice.
def tirage(nb=1): """ Renvoie un numpy array de taille (3,) si nb == 1, sinon (nb, 3).""" if nb == 1: return rn.randint(1, 7, 3) else: return rn.randint(1, 7, (nb, 3))
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
Let's test it:
tirage() tirage(10)
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
1.2. Points of a roll The game of 151 assigns the following points, all multiples of 50, to a roll of 3 dice: 200 for three 2s, 300 for three 3s, ..., 600 for three 6s, 700 for three 1s, 100 for each 1 if it is not part of a three-of-a-kind, 50 for each 5 if it is not part of a three-of-a-kind (for example, [1, 5, 5] is worth 100 + 50 + 50 = 200 points).
COMPTE_SUITE = False # Savoir si on implémente aussi la variante avec les suites def _points(valeurs, compte_suite=COMPTE_SUITE): if valeurs[0] == valeurs[1] == valeurs[2]: # Un brelan ! if valeurs[0] == 1: return 700 else: return 100 * valeurs[0] else: # Pas de brelan # Code pour compter les suites : bonus_suite = compte_suite and set(np.diff(np.sort(valeurs))) == {1} return 100 * (np.sum(valeurs == 1) + bonus_suite) + 50 * np.sum(valeurs == 5) def points(valeurs, compte_suite=COMPTE_SUITE): """ Calcule les points du tirage correspondant à valeurs. - si valeurs est de taille (3,), renvoie un seul entier, - si valeurs est de taille (nb, 3), renvoie un tableau de points. """ if len(np.shape(valeurs)) > 1: return np.array([_points(valeurs[i,:], compte_suite) for i in range(np.shape(valeurs)[0])]) else: return _points(valeurs, compte_suite)
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
1.2.1. A single roll Let's test these functions:
valeurs = tirage() print("La valeur {} donne {:>5} points.".format(valeurs, points(valeurs))) for _ in range(20): valeurs = tirage() print("- La valeur {} donne {:>5} points.".format(valeurs, points(valeurs)))
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
Let's test a few special values: The three-of-a-kinds:
for valeur in range(1, 7): valeurs = valeur * np.ones(3, dtype=int) print("- La valeur {} donne {:>5} points.".format(valeurs, points(valeurs)))
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
The 1s:
for valeurs in [np.array([2, 3, 6]), np.array([1, 3, 6]), np.array([1, 1, 6])]: print("- La valeur {} donne {:>5} points.".format(valeurs, points(valeurs)))
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
The 5s:
for valeurs in [np.array([2, 3, 6]), np.array([5, 3, 6]), np.array([5, 5, 6])]: print("- La valeur {} donne {:>5} points.".format(valeurs, points(valeurs)))
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
→ Great, everything works! Note: some variants of 151 give extra value to (unordered) straights: [1, 2, 3] is worth 200, [2, 3, 4] is worth 100, and [3, 4, 5] and [4, 5, 6] are worth 150. This is not hard to integrate in our points function. Let's test the straights anyway:
for valeurs in [np.array([1, 2, 3]), np.array([2, 3, 4]), np.array([3, 4, 5]), np.array([4, 5, 6])]: print("- La valeur {} donne {:>5} points.".format(valeurs, points(valeurs)))
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
1.2.2. Several rolls Let's test these functions:
valeurs = tirage(10) print(valeurs) print(points(valeurs))
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
1.2.3. Average value of a roll, and a few figures We can already run a few statistical tests: Average points of a roll:
def moyenneTirage(nb=1000): return np.mean(points(tirage(nb), False)) def moyenneTirage_avecSuite(nb=1000): return np.mean(points(tirage(nb), True)) for p in range(2, 7): nb = 10 ** p print("- Pour {:>7} tirages, les tirages valent en moyenne {:>4} points.".format(nb, moyenneTirage(nb))) print("- Pour {:>7} tirages, les tirages valent en moyenne {:>4} points si on compte aussi les suites.".format(nb, moyenneTirage_avecSuite(nb)))
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
It seems to converge towards 85: on average, a roll is worth between 50 and 100 points, closer to 100. And if we also count straights, the average value of a roll is closer to 96 points (it increases as expected, but only slightly). Mean and standard deviation:
def moyenneStdTirage(nb=1000): pts = points(tirage(nb)) return np.mean(pts), np.std(pts) for p in range(2, 7): nb = 10 ** p m, s = moyenneStdTirage(nb) print("- Pour {:>7} tirages, les tirages valent en moyenne {:6.2f} +- {:>6.2f} points.".format(nb, m, s))
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
A few plots:
def plotPoints(nb=2000): pts = np.sort(points(tirage(nb))) m = np.mean(pts) plt.figure() plt.plot(pts, 'ro') plt.title("Valeurs de {} tirages. Moyenne = {:.2f}".format(nb, m)) plt.show() plotPoints() plotPoints(10**5) plotPoints(10**6)
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
We can compute the probability of getting a roll worth 0 points:
def probaPoints(nb=1000, pt=0, compte_suite=COMPTE_SUITE): pts = points(tirage(nb), compte_suite) return np.sum(pts == pt) / float(nb) for p in range(2, 7): nb = 10 ** p prob = probaPoints(nb, compte_suite=False) print("- Pour {:>7} tirages, il y a une probabilité {:7.2%} d'avoir 0 point.".format(nb, prob)) prob = probaPoints(nb, compte_suite=True) print("- Pour {:>7} tirages, il y a une probabilité {:7.2%} d'avoir 0 point si on compte les suites.".format(nb, prob))
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
So a roll brings 85 points on average, but there is roughly a 28% chance that a roll scores nothing. If we count straights, a roll brings 97 points on average, but there is roughly a 25% chance that a roll scores nothing. We can do the same kind of computation for the different possible point values:
# valeursPossibles = list(set(points(tirage(10000)))) valeursPossibles = [0, 50, 100, 150, 200, 250, 300, 400, 500, 600, 700] for p in range(4, 7): nb = 10 ** p tirages = tirage(nb) pts = points(tirages, False) pts_s = points(tirages, True) print("\n- Pour {:>7} tirages :".format(nb)) for pt in valeursPossibles: prob = np.sum(pts == pt) / float(nb) print(" - Il y a une probabilité {:7.2%} d'avoir {:3} point{}.".format(prob, pt, 's' if pt > 0 else '')) prob = np.sum(pts_s == pt) / float(nb) print(" - Il y a une probabilité {:7.2%} d'avoir {:3} point{} si on compte les suites.".format(prob, pt, 's' if pt > 0 else ''))
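The per-value frequencies above are easier to read as a histogram; here is a minimal sketch (my own addition, not in the original notebook), using the tirage and points helpers defined above and matplotlib as imported earlier:
# Illustrative histogram of the points of a single roll, over 100000 simulated rolls.
pts = points(tirage(10**5))
plt.figure()
plt.hist(pts, bins=np.arange(-25, 775, 50), log=True)
plt.xlabel("Points of one roll")
plt.ylabel("Frequency (log scale)")
plt.title("Distribution of the points of a roll (100000 simulations)")
plt.show()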
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
We should really plot some histograms, but I am feeling lazy... These few experiments show that we have: - roughly a 2.5% chance of getting more than 300 points (from a three-of-a-kind), - roughly a 9% chance of getting between 200 and 300 points, - roughly an 11% chance of getting 150 points, - roughly a 27% chance of getting 100 points, - roughly a 22% chance of getting 50 points, - roughly a 28% chance of getting 0 points. Just as likely to get 100 points as 0? Yes indeed! The variant that counts straights increases the chance of getting 200 points (from 7.5% to 10%) and 150 points (from 11% to 16%), and decreases the chance of getting 0 points, but does not really change the rest of the game. 1.3. Simulating games 1.3.1. Simulating one game We will first write a function that takes two players and a total, simulates the game, and returns the index (0 or 1) of the winning player.
DEBUG = False # Par défaut, on n'affiche rien def unJeu(joueur, compte, total, debug=DEBUG): accu = 0 if debug: print(" - Le joueur {.__name__} commence à jouer, son compte est {} et le total est {} ...".format(joueur, compte, total)) t = tirage() nbLance = 1 if points(t) == 0: if debug: print(" - Hoho, ce tirage {} vallait 0 points, le joueur doit arrêter.".format(t)) return 0, nbLance if debug: print(" - Le joueur a obtenu {} ...".format(t)) while compte + accu <= total and joueur(compte, accu, t, total): accu += points(t) t = tirage() nbLance += 1 if debug: print(" - Le joueur a décidé de rejouer, accumulant {} points, et a ré-obtenu {} ...".format(accu, t)) if points(t) == 0: if debug: print(" - Hoho, ce tirage {} vallait 0 points, le joueur doit arrêter.".format(t)) break accu += points(t) if compte + accu > total: if debug: print(" - Le joueur a dépassé le total : impossible de marquer ! compte = {} + accu = {} > total = {} !".format(compte, accu, total)) return 0, nbLance else: if accu > 0: if debug: print(" - Le joueur peut marquer les {} points accumulés en {} lancés !".format(accu, nbLance)) return accu, nbLance def unePartie(joueurs, total=1000, debug=DEBUG, i0=0): assert len(joueurs) == 2, "Erreur, seulement 2 joueurs sont acceptés !" comptes = [0, 0] nbCoups = [0, 0] nbLances = [0, 0] scores = [[0], [0]] if debug: print("- Le joueur #{} va commencer ...".format(i0)) i = i0 while max(comptes) != total: # Tant qu'aucun joueur n'a gagné nbCoups[i] += 1 if debug: print("- C'est au joueur #{} ({.__name__}) de jouer, son compte est {} et le total est {} ...".format(i, joueurs[i], comptes[i], total)) accu, nbLance = unJeu(joueurs[i], comptes[i], total, debug) nbLances[i] += nbLance if accu > 0: comptes[i] += accu scores[i].append(comptes[i]) # Historique if comptes[i] == total: if debug: print("- Le joueur #{} ({.__name__}) a gagné en {} coups et {} lancés de dés !".format(i, joueurs[i], nbCoups[i], nbLances[i])) if debug: print("- Le joueur #{} ({.__name__}) a perdu, avec un score de {}, après {} coups et {} lancés de dés !".format(i^1, joueurs[i^1], comptes[i^1], nbCoups[i^1], nbLances[i^1])) return i, scores i ^= 1 # 0 → 1, 1 → 0 (ou exclusif) # Note : on pourrait implémenter une partie à plus de 2 joueurs
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
1.3.2. Strategies We must define strategies, as functions joueur(compte, accu, t, total) that return True if the player should keep rolling, or False if it should bank its points. First, two rather silly strategies:
def unCoup(compte, accu, t, total): """ Stratégie qui marque toujours au premier coup, peu importe le 1er tirage obtenu.""" return False # Marque toujours ! def jusquauBout(compte, accu, t, total): """ Stratégie qui ne marque que si elle peut gagner exactement .""" if compte + accu + points(t) >= total: return False # Marque si elle peut gagner else: return True # Continue à jouer
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
Another strategy, which only banks once it has accumulated at least X points (100, 150, etc.). It is a greedier version of unCoup, which banks as soon as it has at least 50 points.
def auMoinsX(X): def joueur(compte, accu, t, total): """ Stratégie qui marque si elle a eu plus de {} points.""".format(X) if accu + points(t) >= X: return False # Marque si elle a obtenu plus de X points elif compte + accu + points(t) == total: return False # Marque si elle peut gagner elif total - compte < X: # S'il reste peu de points, marque toujours # (sinon la stratégie d'accumuler plus de X points ne marche plus) return False else: return True # Continue de jouer, essaie d'obtenir X points joueur.__name__ = "auMoins{}".format(X) # Triche sur le nom return joueur auMoins50 = auMoinsX(50) # == unCoup, en fait auMoins100 = auMoinsX(100) auMoins150 = auMoinsX(150) auMoins200 = auMoinsX(200) # Commence à devenir très audacieux auMoins250 = auMoinsX(250) auMoins300 = auMoinsX(300) # Compètement fou, très peu de chance de marquer ça ou plus! auMoins350 = auMoinsX(350) auMoins400 = auMoinsX(400) auMoins450 = auMoinsX(450) auMoins500 = auMoinsX(500) auMoins550 = auMoinsX(550) auMoins600 = auMoinsX(600) auMoins650 = auMoinsX(650) auMoins700 = auMoinsX(700) # On pourrait continuer ... auMoins800 = auMoinsX(800) auMoins850 = auMoinsX(850) auMoins900 = auMoinsX(900) auMoins950 = auMoinsX(950) auMoins1000 = auMoinsX(1000)
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
Another "silly" strategy: decide at random, following a Bernoulli law, whether to keep playing or to stop.
def bernoulli(p=0.5): def joueur(compte, accu, t, total): """ Marque les points accumulés avec probabilité p = {} (Bernoulli).""".format(p) return rn.random() > p joueur.__name__ = "bernoulli_{:.3g}".format(p) return joueur
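For illustration (this strategy is my own addition, not from the original notebook), here is one more strategy that fits the same joueur(compte, accu, t, total) signature: keep rolling until the points accumulated during the current turn cover at least half of what is still missing to reach the total.
def moitieRestante(compte, accu, t, total):
    """ Illustrative strategy (my addition): bank as soon as the points accumulated
    this turn reach half of the remaining distance to the total."""
    restant = total - compte
    # Return True to keep rolling, False to bank the accumulated points.
    return (accu + points(t)) < restant / 2.0

# Quick check against the timid strategy, using unePartie defined above.
print(unePartie([unCoup, moitieRestante], total=500))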
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
1.3.3. A few examples Let's try pitting two strategies against each other.
joueurs = [unCoup, unCoup] total = 200 unePartie(joueurs, total, True) unePartie(joueurs, total) joueurs = [unCoup, jusquauBout] total = 200 unePartie(joueurs, total) joueurs = [unCoup, auMoins100] total = 500 unePartie(joueurs, total) joueurs = [unCoup, auMoins200] total = 1000 unePartie(joueurs, total)
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
1.3.4. Generating several games We can now run several hundred simulated games, without printing the unfolding of each one. The function unePartie returns a tuple, (i, comptes), where: - i is the index (0 or 1) of the player who won the game, - and comptes is a list containing the two players' point histories. For example, for a total = 500, the output (1, [[0, 100, 150, 250, 450], [0, 50, 450, 500]]) means: - player 1 won, after scoring 50 points, then 400, and finally 50, - player 0 lost, after scoring 100 points, then 50, then 100, then 200, finishing with 450 points.
def desParties(nb, joueurs, total=1000, i0=0): indices, historiques = [], [] for _ in range(nb): i, h = unePartie(joueurs, total=total, i0=i0, debug=False) indices.append(i) historiques.append(h) return indices, historiques
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
For example, we can pit the timid player (unCoup) against the very greedy player (jusquauBout), here over 10000 simulated games for several different totals:
def freqGain(indiceMoyen, i): # (1^i) + ((-1)**(i==0)) * indiceMoyen if i == 0: return 1 - indiceMoyen else: return indiceMoyen def afficheResultatsDesParties(nb, joueurs, total, indices, historiques): indiceMoyen = np.mean(indices) pointsFinaux = [np.mean(list(historiques[k][i][-1] for k in range(nb))) for i in [0, 1]] print("Dans {} parties simulées, contre le total {} :".format(nb, total)) for i in [0, 1]: print(" - le joueur {} ({.__name__:<11}) a gagné {:>5.2%} du temps, et a eu un score final moyen de {:>5g} points ...".format(i, joueurs[i], freqGain(indiceMoyen, i), pointsFinaux[i])) nb = 10000 joueurs = [unCoup, jusquauBout] total = 1000 indices, historiques = desParties(nb, joueurs, total) afficheResultatsDesParties(nb, joueurs, total, indices, historiques) nb = 10000 joueurs = [unCoup, jusquauBout] total = 500 indices, historiques = desParties(nb, joueurs, total) afficheResultatsDesParties(nb, joueurs, total, indices, historiques) nb = 10000 joueurs = [unCoup, jusquauBout] total = 5000 indices, historiques = desParties(nb, joueurs, total) afficheResultatsDesParties(nb, joueurs, total, indices, historiques)
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
Let's plot a first curve, showing how much one strategy dominates the most timid one, as a function of the total.
def plotResultatsDesParties(nb, joueurs, totaux): N = len(totaux) indicesMoyens = [] for total in totaux: indices, _ = desParties(nb, joueurs, total) indicesMoyens.append(np.mean(indices)) plt.figure() plt.plot(totaux, indicesMoyens, 'ro') plt.xlabel("Objectif (points totaux à atteindre)") plt.ylabel("Taux de victoire de 1 face à 0") plt.title("Taux de victoire du joueur 1 ({.__name__}) face au joueur 0 ({.__name__}),\n pour {} parties simulees pour chaque total.".format(joueurs[1], joueurs[0], nb)) plt.show() nb = 1000 joueurs = [unCoup, jusquauBout] totaux = [50, 100, 150, 200, 250, 300, 350, 400, 450, 500] plotResultatsDesParties(nb, joueurs, totaux) nb = 1000 joueurs = [unCoup, jusquauBout] totalMax = 2000 totaux = list(range(50, totalMax + 50, 50)) plotResultatsDesParties(nb, joueurs, totaux)
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
Some more comparisons, between greedy strategies.
nb = 5000 joueurs = [auMoins100, auMoins200] totalMax = 1000 totaux = list(range(50, totalMax + 50, 50)) plotResultatsDesParties(nb, joueurs, totaux) nb = 1000 joueurs = [auMoins100, jusquauBout] totalMax = 2000 totaux = list(range(50, totalMax + 50, 100)) plotResultatsDesParties(nb, joueurs, totaux) nb = 1000 totalMax = 2000 totaux = list(range(50, totalMax + 50, 50)) joueurs = [unCoup, bernoulli(0.5)] plotResultatsDesParties(nb, joueurs, totaux) joueurs = [unCoup, bernoulli(0.1)] plotResultatsDesParties(nb, joueurs, totaux) joueurs = [unCoup, bernoulli(0.25)] plotResultatsDesParties(nb, joueurs, totaux) joueurs = [unCoup, bernoulli(0.75)] plotResultatsDesParties(nb, joueurs, totaux) joueurs = [unCoup, bernoulli(0.9)] plotResultatsDesParties(nb, joueurs, totaux)
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
Self-play evaluation Rather than having one strategy play against another and using the win rate as a performance measure (which is what I did above), we can look for another measure of performance. We can let a strategy play on its own, and instead measure the number of turns it needs to win.
def unePartieSeul(joueur, total=1000, debug=DEBUG): compte = 0 nbCoups = 0 nbLances = 0 score = [0] if debug: print("Simulation pour le joueur ({.__name__}), le total à atteindre est {} :".format(joueur, total)) while compte < total: # Tant que joueur n'a pas gagné nbCoups += 1 if debug: print(" - Coup #{}, son compte est {} / {} ...".format(nbCoups, compte, total)) accu, nbLance = unJeu(joueur, compte, total, debug) nbLances += nbLance if accu > 0: compte += accu score.append(compte) # Historique if compte == total: if debug: print("- Le joueur ({.__name__}) a gagné en {} coups et {} lancés de dés !".format(joueur, nbCoups, nbLances)) return score
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
Let's test this with the naive strategy unCoup:
h = unePartieSeul(unCoup, 1000) print("Partie gagnée en {} coups par le joueur ({.__name__}), avec le score {} ...".format(len(h), unCoup, h))
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
As before, we can generate several simulations of the same task, and thereby obtain a list of game histories.
def desPartiesSeul(nb, joueur, total=1000, debug=False): historique = [] for _ in range(nb): h = unePartieSeul(joueur, total=total, debug=debug) historique.append(h) return historique desPartiesSeul(4, unCoup)
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
What interests us here is only the number of turns a given strategy needs before it wins:
[len(l)-1 for l in desPartiesSeul(4, unCoup)]
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
With nicer output and a computation of the average number of turns:
def afficheResultatsDesPartiesSeul(nb, joueur, total, historique): nbCoupMoyens = np.mean([len(h) - 1 for h in historique]) print("Dans {} parties simulées, contre le total {}, le joueur ({.__name__}) a gagné en moyenne en {} coups ...".format(nb, total, joueur, nbCoupMoyens)) historique = desPartiesSeul(100, unCoup, 1000) afficheResultatsDesPartiesSeul(100, unCoup, 1000, historique)
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit
As before, we can draw a plot showing how this average number of turns evolves, say over $1000$ simulated games, as a function of the total to reach. The resulting curve should be increasing, but it is hard to predict much more about its behaviour.
def plotResultatsDesPartiesSeul(nb, joueur, totaux): N = len(totaux) nbCoupMoyens = [] for total in totaux: historique = desPartiesSeul(nb, joueur, total) nbCoupMoyens.append(np.mean([len(h) - 1 for h in historique])) plt.figure() plt.plot(totaux, nbCoupMoyens, 'ro') plt.xlabel("Objectif (points totaux à atteindre)") plt.ylabel("Nombre moyen de coups joués avant de gagner") plt.title("Nombre moyen de coups requis par {.__name__}\n pour {} parties simulées pour chaque total.".format(joueur, nb)) plt.show()
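The plotting helper above is defined but never called in this excerpt; a minimal usage sketch (my own addition, it only uses functions defined earlier and may take a little while to run):
nb = 1000
totaux = list(range(50, 1050, 50))
plotResultatsDesPartiesSeul(nb, unCoup, totaux)
plotResultatsDesPartiesSeul(nb, jusquauBout, totaux)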
simus/Simulations_du_jeu_de_151.ipynb
Naereen/notebooks
mit