Use the QuantileDiscretizer model to split our continuous variable into 5 buckets (see the numBuckets parameter).
discretizer = ft.QuantileDiscretizer( numBuckets=5, inputCol='continuous_var', outputCol='discretized')
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
Let's see what we got.
data_discretized = discretizer.fit(data).transform(data) data_discretized \ .groupby('discretized')\ .mean('continuous_var')\ .sort('discretized')\ .collect()
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
Standardizing continuous variables Create a vector representation of our continuous variable (as it is only a single float)
vectorizer = ft.VectorAssembler( inputCols=['continuous_var'], outputCol= 'continuous_vec')
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
Build a normalizer and a pipeline.
normalizer = ft.StandardScaler( inputCol=vectorizer.getOutputCol(), outputCol='normalized', withMean=True, withStd=True ) pipeline = Pipeline(stages=[vectorizer, normalizer]) data_standardized = pipeline.fit(data).transform(data)
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
Classification We will now use the RandomForestClassifier to model the chances of survival for an infant. First, we need to cast the label feature to DoubleType.
import pyspark.sql.functions as func births = births.withColumn( 'INFANT_ALIVE_AT_REPORT', func.col('INFANT_ALIVE_AT_REPORT').cast(typ.DoubleType()) ) births_train, births_test = births \ .randomSplit([0.7, 0.3], seed=666)
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
We are ready to build our model.
classifier = cl.RandomForestClassifier( numTrees=5, maxDepth=5, labelCol='INFANT_ALIVE_AT_REPORT') pipeline = Pipeline( stages=[ encoder, featuresCreator, classifier]) model = pipeline.fit(births_train) test = model.transform(births_test)
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
Let's now see how the RandomForestClassifier model performs compared to the LogisticRegression.
evaluator = ev.BinaryClassificationEvaluator( labelCol='INFANT_ALIVE_AT_REPORT') print(evaluator.evaluate(test, {evaluator.metricName: "areaUnderROC"})) print(evaluator.evaluate(test, {evaluator.metricName: "areaUnderPR"}))
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
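For the comparison to be concrete, a LogisticRegression pipeline can be scored with the same evaluator. The snippet below is only a sketch: it assumes the encoder and featuresCreator stages, the births_train/births_test split, and the evaluator from the surrounding cells; the maxIter and regParam values are illustrative.
# Sketch only: `encoder`, `featuresCreator`, `births_train`, `births_test`
# and `evaluator` are assumed to exist from the earlier cells.
logistic = cl.LogisticRegression(
    maxIter=10,
    regParam=0.01,
    labelCol='INFANT_ALIVE_AT_REPORT')
lr_pipeline = Pipeline(stages=[encoder, featuresCreator, logistic])
lr_test = lr_pipeline.fit(births_train).transform(births_test)
# Same metrics as for the random forest above.
print(evaluator.evaluate(lr_test, {evaluator.metricName: 'areaUnderROC'}))
print(evaluator.evaluate(lr_test, {evaluator.metricName: 'areaUnderPR'}))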
Let's test how well a single tree would do, then.
classifier = cl.DecisionTreeClassifier( maxDepth=5, labelCol='INFANT_ALIVE_AT_REPORT') pipeline = Pipeline(stages=[ encoder, featuresCreator, classifier] ) model = pipeline.fit(births_train) test = model.transform(births_test) evaluator = ev.BinaryClassificationEvaluator( labelCol='INFANT_ALIVE_AT_REPORT') print(evaluator.evaluate(test, {evaluator.metricName: "areaUnderROC"})) print(evaluator.evaluate(test, {evaluator.metricName: "areaUnderPR"}))
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
Clustering In this example we will use the k-means model to find similarities in the births data.
import pyspark.ml.clustering as clus kmeans = clus.KMeans(k = 5, featuresCol='features') pipeline = Pipeline(stages=[ encoder, featuresCreator, kmeans] ) model = pipeline.fit(births_train)
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
Having estimated the model, let's see if we can find some differences between clusters.
test = model.transform(births_test) test \ .groupBy('prediction') \ .agg({ '*': 'count', 'MOTHER_HEIGHT_IN': 'avg' }).collect()
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
In the field of NLP, problems such as topic extraction rely on clustering to detect documents with similar topics. First, let's create our dataset.
text_data = spark.createDataFrame([ ['''To make a computer do anything, you have to write a computer program. To write a computer program, you have to tell the computer, step by step, exactly what you want it to do. The computer then "executes" the program, following each step mechanically, to accomplish the end goal. When you are telling the computer what to do, you also get to choose how it's going to do it. That's where computer algorithms come in. The algorithm is the basic technique used to get the job done. Let's follow an example to help get an understanding of the algorithm concept.'''], ['''Laptop computers use batteries to run while not connected to mains. When we overcharge or overheat lithium ion batteries, the materials inside start to break down and produce bubbles of oxygen, carbon dioxide, and other gases. Pressure builds up, and the hot battery swells from a rectangle into a pillow shape. Sometimes the phone involved will operate afterwards. Other times it will die. And occasionally—kapow! To see what's happening inside the battery when it swells, the CLS team used an x-ray technology called computed tomography.'''], ['''This technology describes a technique where touch sensors can be placed around any side of a device allowing for new input sources. The patent also notes that physical buttons (such as the volume controls) could be replaced by these embedded touch sensors. In essence Apple could drop the current buttons and move towards touch-enabled areas on the device for the existing UI. It could also open up areas for new UI paradigms, such as using the back of the smartphone for quick scrolling or page turning.'''], ['''The National Park Service is a proud protector of America's lands. Preserving our land not only safeguards the natural environment, but it also protects the stories, cultures, and histories of our ancestors. As we face the increasingly dire consequences of climate change, it is imperative that we continue to expand America's protected lands under the oversight of the National Park Service. Doing so combats climate change and allows all American's to visit, explore, and learn from these treasured places for generations to come. It is critical that President Obama acts swiftly to preserve land that is at risk of external threats before the end of his term as it has become blatantly clear that the next administration will not hold the same value for our environment over the next four years.'''], ['''The National Park Foundation, the official charitable partner of the National Park Service, enriches America's national parks and programs through the support of private citizens, park lovers, stewards of nature, history enthusiasts, and wilderness adventurers. Chartered by Congress in 1967, the Foundation grew out of a legacy of park protection that began over a century ago, when ordinary citizens took action to establish and protect our national parks. Today, the National Park Foundation carries on the tradition of early park advocates, big thinkers, doers and dreamers—from John Muir and Ansel Adams to President Theodore Roosevelt.'''], ['''Australia has over 500 national parks. Over 28 million hectares of land is designated as national parkland, accounting for almost four per cent of Australia's land areas. 
In addition, a further six per cent of Australia is protected and includes state forests, nature parks and conservation reserves.National parks are usually large areas of land that are protected because they have unspoilt landscapes and a diverse number of native plants and animals. This means that commercial activities such as farming are prohibited and human activity is strictly monitored.'''] ], ['documents'])
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
First, we will once again use the RegexTokenizer and the StopWordsRemover models.
tokenizer = ft.RegexTokenizer( inputCol='documents', outputCol='input_arr', pattern='\s+|[,.\"]') stopwords = ft.StopWordsRemover( inputCol=tokenizer.getOutputCol(), outputCol='input_stop')
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
Next in our pipeline is the CountVectorizer.
stringIndexer = ft.CountVectorizer( inputCol=stopwords.getOutputCol(), outputCol="input_indexed") tokenized = stopwords \ .transform( tokenizer\ .transform(text_data) ) stringIndexer \ .fit(tokenized)\ .transform(tokenized)\ .select('input_indexed')\ .take(2)
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
We will use the LDA model - the Latent Dirichlet Allocation model - to extract the topics.
clustering = clus.LDA(k=2, optimizer='online', featuresCol=stringIndexer.getOutputCol())
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
Let's now put these puzzle pieces together.
pipeline = Pipeline(stages=[ tokenizer, stopwords, stringIndexer, clustering] )
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
Let's see if we have properly uncovered the topics.
topics = pipeline \ .fit(text_data) \ .transform(text_data) topics.select('topicDistribution').collect()
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
Regression In this section we will try to predict the MOTHER_WEIGHT_GAIN.
features = ['MOTHER_AGE_YEARS','MOTHER_HEIGHT_IN', 'MOTHER_PRE_WEIGHT','DIABETES_PRE', 'DIABETES_GEST','HYP_TENS_PRE', 'HYP_TENS_GEST', 'PREV_BIRTH_PRETERM', 'CIG_BEFORE','CIG_1_TRI', 'CIG_2_TRI', 'CIG_3_TRI' ]
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
First, we will collate all the features together and use the ChiSqSelector to select only the top 6 most important features.
featuresCreator = ft.VectorAssembler( inputCols=[col for col in features[1:]], outputCol='features' ) selector = ft.ChiSqSelector( numTopFeatures=6, outputCol="selectedFeatures", labelCol='MOTHER_WEIGHT_GAIN' )
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
In order to predict the weight gain we will use the gradient boosted trees regressor.
import pyspark.ml.regression as reg regressor = reg.GBTRegressor( maxIter=15, maxDepth=3, labelCol='MOTHER_WEIGHT_GAIN')
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
Finally, again, we put it all together into a Pipeline.
pipeline = Pipeline(stages=[ featuresCreator, selector, regressor]) weightGain = pipeline.fit(births_train)
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
Having created the weightGain model, let's see if it performs well on our testing data.
evaluator = ev.RegressionEvaluator( predictionCol="prediction", labelCol='MOTHER_WEIGHT_GAIN') print(evaluator.evaluate( weightGain.transform(births_test), {evaluator.metricName: 'r2'}))
Chapter06/LearningPySpark_Chapter06.ipynb
drabastomek/learningPySpark
gpl-3.0
First we need to download the Caltech256 dataset.
DATASET_URL = r"http://homes.esat.kuleuven.be/~tuytelaa/"\ "unsup/unsup_caltech256_dense_sift_1000_bow.tar.gz" DATASET_DIR = "../../../projects/weiyen/data" filename = os.path.split(DATASET_URL)[1] dest_path = os.path.join(DATASET_DIR, filename) if os.path.exists(dest_path): print("{} exists. Skipping download...".format(dest_path)) else: with urllib.request.urlopen(DATASET_URL) as response, open(dest_path, 'wb') as out_file: shutil.copyfileobj(response, out_file) print("Dataset downloaded. Extracting files...") tar = tarfile.open(dest_path) tar.extractall(path=DATASET_DIR) print("Files extracted.") tar.close() path = os.path.join(DATASET_DIR, "bow_1000_dense/")
mclearn/knfst/python/test.ipynb
chengsoonong/mclass-sky
bsd-3-clause
Calculate the multi-class KNFST model for multi-class novelty detection.
INPUT:
- K: N x N kernel matrix containing similarities of the N training samples
- labels: N x 1 column vector containing the multi-class labels of the N training samples

OUTPUT:
- proj: projection of KNFST
- target_points: the projections of the training data into the null space

Load the dataset into memory
ds = datasets.load_files(path) ds.data = np.vstack([np.fromstring(txt, sep='\t') for txt in ds.data]) data = ds.data target = ds.target
mclearn/knfst/python/test.ipynb
chengsoonong/mclass-sky
bsd-3-clause
Select a few "known" classes
classes = np.unique(target) num_class = len(classes) num_known = 5 known = np.random.choice(classes, num_known) mask = np.array([y in known for y in target]) X_train = data[mask] y_train = target[mask] idx = y_train.argsort() X_train = X_train[idx] y_train = y_train[idx] print(X_train.shape) print(y_train.shape) def _hik(x, y): ''' Implements the histogram intersection kernel. ''' return np.minimum(x, y).sum() from scipy.linalg import svd def nullspace(A, eps=1e-12): u, s, vh = svd(A) null_mask = (s <= eps) null_space = sp.compress(null_mask, vh, axis=0) return sp.transpose(null_space) A = np.array([[2,3,5],[-4,2,3],[0,0,0]]) np.array([-4,2,3]).dot(nullspace(A))
mclearn/knfst/python/test.ipynb
chengsoonong/mclass-sky
bsd-3-clause
Train the model, and obtain the projection and class target points.
def learn(K, labels): classes = np.unique(labels) if len(classes) < 2: raise Exception("KNFST requires 2 or more classes") n, m = K.shape if n != m: raise Exception("Kernel matrix must be quadratic") centered_k = KernelCenterer().fit_transform(K) basis_values, basis_vecs = np.linalg.eigh(centered_k) basis_vecs = basis_vecs[:,basis_values > 1e-12] basis_values = basis_values[basis_values > 1e-12] basis_values = np.diag(1.0/np.sqrt(basis_values)) basis_vecs = basis_vecs.dot(basis_values) L = np.zeros([n,n]) for cl in classes: for idx1, x in enumerate(labels == cl): for idx2, y in enumerate(labels == cl): if x and y: L[idx1, idx2] = 1.0/np.sum(labels==cl) M = np.ones([m,m])/m H = (((np.eye(m,m)-M).dot(basis_vecs)).T).dot(K).dot(np.eye(n,m)-L) t_sw = H.dot(H.T) eigenvecs = nullspace(t_sw) if eigenvecs.shape[1] < 1: eigenvals, eigenvecs = np.linalg.eigh(t_sw) eigenvals = np.diag(eigenvals) min_idx = eigenvals.argsort()[0] eigenvecs = eigenvecs[:, min_idx] proj = ((np.eye(m,m)-M).dot(basis_vecs)).dot(eigenvecs) target_points = [] for cl in classes: k_cl = K[labels==cl, :] pt = np.mean(k_cl.dot(proj), axis=0) target_points.append(pt) return proj, np.array(target_points) kernel_mat = metrics.pairwise_kernels(X_train, metric=_hik) proj, target_points = learn(kernel_mat, y_train) def squared_euclidean_distances(x, y): n = np.shape(x)[0] m = np.shape(y)[0] distmat = np.zeros((n,m)) for i in range(n): for j in range(m): buff = x[i,:] - y[j,:] distmat[i,j] = buff.dot(buff.T) return distmat def assign_score(proj, target_points, ks): projection_vectors = ks.T.dot(proj) sq_dist = squared_euclidean_distances(projection_vectors, target_points) scores = np.sqrt(np.amin(sq_dist, 1)) return scores auc_scores = [] classes = np.unique(target) num_known = 5 for n in range(20): num_class = len(classes) known = np.random.choice(classes, num_known) mask = np.array([y in known for y in target]) X_train = data[mask] y_train = target[mask] idx = y_train.argsort() X_train = X_train[idx] y_train = y_train[idx] sample_idx = np.random.randint(0, len(data), size=1000) X_test = data[sample_idx,:] y_labels = target[sample_idx] # Test labels are 1 if novel, otherwise 0. y_test = np.array([1 if cl not in known else 0 for cl in y_labels]) # Train model kernel_mat = metrics.pairwise_kernels(X_train, metric=_hik) proj, target_points = learn(kernel_mat, y_train) # Test ks = metrics.pairwise_kernels(X_train, X_test, metric=_hik) scores = assign_score(proj, target_points, ks) auc = metrics.roc_auc_score(y_test, scores) print("AUC:", auc) auc_scores.append(auc) fpr, tpr, thresholds = metrics.roc_curve(y_test, scores) plt.figure() plt.plot(fpr, tpr, label='ROC curve') plt.plot([0, 1], [0, 1], 'k--') plt.xlim([0.0, 1.0]) plt.ylim([0.0, 1.05]) plt.xlabel('False Positive Rate') plt.ylabel('True Positive Rate') plt.title('ROC Curve of the KNFST Novelty Classifier') plt.legend(loc="lower right") plt.show()
mclearn/knfst/python/test.ipynb
chengsoonong/mclass-sky
bsd-3-clause
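In other words, the novelty score that assign_score computes for a test sample is its distance, in the learned null space, to the nearest class target point:

$$s(x_*) \;=\; \min_{c}\;\bigl\lVert\, k(X_{\text{train}}, x_*)^{\top} P \;-\; t_c \,\bigr\rVert_2$$

where $P$ is proj, $t_c$ is the target point of class $c$, and $k(X_{\text{train}}, x_*)$ is the column of kernel values between the training samples and $x_*$ (ks in the code). Larger scores mean the sample lies further from every known class, i.e. it is more likely to be novel.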
Case where the X values are slightly changed (a case where smoothing must be used)
X1 = np.array([[1,0,0],[1,0,1], [0,1,1],[0,1,0],[0,0,1],[1,1,1]]) y01 = np.zeros(2) y11 = np.ones(4) y1 = np.hstack([y01, y11]) clf_bern1 = BernoulliNB().fit(X1, y1) fc1 = clf_bern1.feature_count_ fc1 np.repeat(clf_bern1.class_count_[:, np.newaxis], 3, axis=1) fc1 / np.repeat(clf_bern1.class_count_[:, np.newaxis], 3, axis=1) clf_bern1.predict_proba([x_new]) np.exp(clf_bern1.feature_log_prob_) theta = np.exp(clf_bern1.feature_log_prob_) theta p = ((theta**x_new)*(1-theta)**(1-x_new)).prod(axis=1)*np.exp(clf_bern1.class_log_prior_) p / p.sum()
통계, 머신러닝 복습/160620월_17일차_나이브 베이즈 Naive Bayes/2.실전 예제.ipynb
kimkipyo/dss_git_kkp
mit
๋‹คํ•ญ์˜ ๊ฒฝ์šฐ ์‹ค์Šต ์˜ˆ์ œ
X = np.array([[4,4,2],[4,3,3], [6,3,1],[4,6,0],[0,4,1],[1,3,1],[1,1,3],[0,3,2]]) y0 = np.zeros(4) y1 = np.ones(4) y = np.hstack([y0, y1]) print(X) print(y) from sklearn.naive_bayes import MultinomialNB clf_mult = MultinomialNB().fit(X, y) clf_mult.classes_ clf_mult.class_count_ fc = clf_mult.feature_count_ fc np.repeat(fc.sum(axis=1)[:, np.newaxis], 3, axis=1) fc / np.repeat(fc.sum(axis=1)[:, np.newaxis], 3, axis=1) clf_mult.alpha (fc + clf_mult.alpha) / (np.repeat(fc.sum(axis=1)[:, np.newaxis], 3, axis=1) + clf_mult.alpha * X.shape[1]) np.repeat(fc.sum(axis=1)[:, np.newaxis], 3, axis=1) + clf_mult.alpha * X.shape[1] x_new1 = np.array([1,1,1]) clf_mult.predict_proba([x_new1]) x_new2 = np.array([2,2,2]) clf_mult.predict_proba([x_new2]) x_new3 = np.array([3,3,3]) clf_mult.predict_proba([x_new3])
통계, 머신러닝 복습/160620월_17일차_나이브 베이즈 Naive Bayes/2.실전 예제.ipynb
kimkipyo/dss_git_kkp
mit
Problem 1. Given the following features and target, solve the following problems using the Bernoulli naive Bayes method.
X = np.array([ [1, 0, 0], [1, 0, 1], [0, 0, 1], [0, 0, 0], [1, 1, 1], [0, 1, 1], [0, 0, 1], [0, 1, 0], ]) y = np.array([0,0,0,0,1,1,1,1])
통계, 머신러닝 복습/160620월_17일차_나이브 베이즈 Naive Bayes/2.실전 예제.ipynb
kimkipyo/dss_git_kkp
mit
(1) Find the prior distribution p(y). p(y=0) = 0.5, p(y=1) = 0.5
py0, py1 = (y==0).sum()/len(y), (y==1).sum()/len(y) py0, py1
통계, 머신러닝 복습/160620월_17일차_나이브 베이즈 Naive Bayes/2.실전 예제.ipynb
kimkipyo/dss_git_kkp
mit
(2) With smoothing factor alpha = 0, compute the likelihood p(x|y) for the x_new below and then the conditional probability distribution p(y|x) (note that the likelihood is not a normalized value!) * x_new = [1 1 0] <img src="1.png.jpg" style="width:70%; margin: 0 auto 0 auto;">
x_new = np.array([1, 1, 0]) theta0 = X[y==0, :].sum(axis=0)/len(X[y==0, :]) theta0 theta1 = X[y==1, :].sum(axis=0)/len(X[y==1, :]) theta1 likelihood0 = (theta0**x_new).prod()*((1-theta0)**(1-x_new)).prod() likelihood0 likelihood1 = (theta1**x_new).prod()*((1-theta1)**(1-x_new)).prod() likelihood1 px = likelihood0 * py0 + likelihood1 * py1 px likelihood0 * py0 / px, likelihood1 * py1 / px from sklearn.naive_bayes import BernoulliNB model = BernoulliNB(alpha=0).fit(X, y) model.predict_proba([x_new])
통계, 머신러닝 복습/160620월_17일차_나이브 베이즈 Naive Bayes/2.실전 예제.ipynb
kimkipyo/dss_git_kkp
mit
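For reference, the quantities computed in the cell above are:

$$\hat\theta_{c,d} = \frac{\sum_{i:\,y_i=c} x_{i,d}}{N_c}, \qquad p(x_{\text{new}} \mid y=c) = \prod_{d} \hat\theta_{c,d}^{\,x_{\text{new},d}}\,(1-\hat\theta_{c,d})^{1-x_{\text{new},d}}, \qquad p(y=c \mid x_{\text{new}}) = \frac{p(x_{\text{new}} \mid y=c)\,p(y=c)}{\sum_{c'} p(x_{\text{new}} \mid y=c')\,p(y=c')}$$

where $N_c$ is the number of training rows with label $c$.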
(3) Solve problem (2) again with smoothing factor alpha = 0.5. <img src="22.png.jpg" style="width:70%; margin: 0 auto 0 auto;">
theta0 = (X[y==0, :].sum(axis=0) + 0.5*np.ones(3))/(len(X[y==0,:])+1) theta0 theta1 = (X[y==1, :].sum(axis=0) + 0.5*np.ones(3))/(len(X[y==1,:])+1) theta1 x_new = np.array([1, 1, 0]) likelihood0 = (theta0**x_new).prod()*((1-theta0)**(1-x_new)).prod() likelihood0 likelihood1 = (theta1**x_new).prod()*((1-theta1)**(1-x_new)).prod() likelihood1 px = likelihood0 * py0 + likelihood1 * py1 px likelihood0 * py0 / px, likelihood1 * py1 / px from sklearn.naive_bayes import BernoulliNB model = BernoulliNB(alpha=0.5).fit(X, y) model.predict_proba([x_new])
통계, 머신러닝 복습/160620월_17일차_나이브 베이즈 Naive Bayes/2.실전 예제.ipynb
kimkipyo/dss_git_kkp
mit
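With smoothing factor $\alpha = 0.5$, the per-feature estimates used in the cell above become

$$\hat\theta_{c,d} = \frac{\sum_{i:\,y_i=c} x_{i,d} + \alpha}{N_c + 2\alpha},$$

with $2\alpha$ in the denominator because each Bernoulli feature has two possible values; the likelihood and posterior are then computed exactly as in (2).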
Problem 2. Solve (1), (2), and (3) of Problem 1 again, this time using the multinomial naive Bayes (Multinomial Naive Bayesian) method. (1) Find the prior distribution p(y). p(y = 0) = 0.5, p(y = 1) = 0.5 (2) With smoothing factor alpha = 0, compute the likelihood p(x|y) for the x_new below and then the conditional probability distribution p(y|x) (not a normalized value!) * x_new = [2 3 1] <img src="3.png.jpg" style="width:70%; margin: 0 auto 0 auto;">
x_new = np.array([2, 3, 1]) theta0 = X[y==0, :].sum(axis=0)/X[y==0, :].sum() theta0 theta1 = X[y==1, :].sum(axis=0)/X[y==1, :].sum() theta1 likelihood0 = (theta0**x_new).prod() likelihood0 likelihood1 = (theta1**x_new).prod() likelihood1 px = likelihood0 * py0 + likelihood1 * py1 px likelihood0 * py0 / px, likelihood1 * py1 / px from sklearn.naive_bayes import MultinomialNB model = MultinomialNB(alpha=0).fit(X, y) model.predict_proba([x_new])
통계, 머신러닝 복습/160620월_17일차_나이브 베이즈 Naive Bayes/2.실전 예제.ipynb
kimkipyo/dss_git_kkp
mit
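The multinomial estimates and (unnormalized) likelihood computed in the cell above are

$$\hat\theta_{c,d} = \frac{\sum_{i:\,y_i=c} x_{i,d}}{\sum_{d'}\sum_{i:\,y_i=c} x_{i,d'}}, \qquad p(x_{\text{new}} \mid y=c) \propto \prod_{d} \hat\theta_{c,d}^{\,x_{\text{new},d}},$$

where the multinomial coefficient is dropped since it is the same for both classes and cancels in the posterior.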
(3) Solve problem (2) again with smoothing factor alpha = 0.5. <img src="4.png.jpg" style="width:70%; margin: 0 auto 0 auto;">
theta0 = (X[y==0, :].sum(axis=0) + 0.5*np.ones(3))/ (X[y==0, :].sum() + 1.5) theta0 theta1 = (X[y==1, :].sum(axis=0) + 0.5*np.ones(3))/ (X[y==1, :].sum() + 1.5) theta1 likelihood0 = (theta0**x_new).prod() likelihood0 likelihood1 = (theta1**x_new).prod() likelihood1 px = likelihood0 * py0 + likelihood1 * py1 px likelihood0 * py0 / px, likelihood1 * py1 / px from sklearn.naive_bayes import MultinomialNB model = MultinomialNB(alpha=0.5).fit(X, y) model.predict_proba([x_new])
통계, 머신러닝 복습/160620월_17일차_나이브 베이즈 Naive Bayes/2.실전 예제.ipynb
kimkipyo/dss_git_kkp
mit
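With smoothing factor $\alpha = 0.5$ and $D = 3$ features, the smoothed multinomial estimates used above are

$$\hat\theta_{c,d} = \frac{\sum_{i:\,y_i=c} x_{i,d} + \alpha}{\sum_{d'}\sum_{i:\,y_i=c} x_{i,d'} + D\alpha},$$

which is why the denominator in the code adds $1.5$.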
import a LiDAR swath
swath = np.genfromtxt('../../PhD/python-phd/swaths/is6_f11_pass1_aa_nr2_522816_523019_c.xyz') import pandas as pd columns = ['time', 'X', 'Y', 'Z', 'I','A', 'x_u', 'y_u', 'z_u', '3D_u'] swath = pd.DataFrame(swath, columns=columns) swath[1:5]
.ipynb_checkpoints/Point cloud to HDF-checkpoint.ipynb
adamsteer/nci-notebooks
apache-2.0
Now load up the aircraft trajectory
air_traj = np.genfromtxt('../../PhD/is6_f11/trajectory/is6_f11_pass1_local_ice_rot.3dp') columns = ['time', 'X', 'Y', 'Z', 'R', 'P', 'H', 'x_u', 'y_u', 'z_u', 'r_u', 'p_u', 'h_u'] air_traj = pd.DataFrame(air_traj, columns=columns) air_traj[1:5]
.ipynb_checkpoints/Point cloud to HDF-checkpoint.ipynb
adamsteer/nci-notebooks
apache-2.0
take a quick look at the data
fig = plt.figure(figsize = ([30/2.54, 6/2.54])) ax0 = fig.add_subplot(111) a0 = ax0.scatter(swath['Y'], swath['X'], c=swath['Z'] - np.min(swath['Z']), cmap = 'gist_earth', vmin=0, vmax=10, edgecolors=None,lw=0, s=0.6) a1 = ax0.scatter(air_traj['Y'], air_traj['X'], c=air_traj['Z'], cmap = 'Reds', lw=0, s=1) plt.tight_layout()
.ipynb_checkpoints/Point cloud to HDF-checkpoint.ipynb
adamsteer/nci-notebooks
apache-2.0
Making an HDF file out of those points
import h5py #create a file instance, with the intention to write it out lidar_test = h5py.File('lidar_test.hdf5', 'w') swath_data = lidar_test.create_group('swath_data') swath_data.create_dataset('GPS_SOW', data=swath['time']) #some data swath_data.create_dataset('UTM_X', data=swath['X']) swath_data.create_dataset('UTM_Y', data=swath['Y']) swath_data.create_dataset('Z', data=swath['Z']) swath_data.create_dataset('INTENS', data=swath['I']) swath_data.create_dataset('ANGLE', data=swath['A']) swath_data.create_dataset('X_UNCERT', data=swath['x_u']) swath_data.create_dataset('Y_UNCERT', data=swath['y_u']) swath_data.create_dataset('Z_UNCERT', data=swath['z_u']) swath_data.create_dataset('3D_UNCERT', data=swath['3D_u']) #some attributes lidar_test.attrs['file_name'] = 'lidar_test.hdf5' lidar_test.attrs['codebase'] = 'https://github.com/adamsteer/matlab_LIDAR'
.ipynb_checkpoints/Point cloud to HDF-checkpoint.ipynb
adamsteer/nci-notebooks
apache-2.0
That's some swath data, now some trajectory data at a different sampling rate
traj_data = lidar_test.create_group('traj_data') #some attributes traj_data.attrs['flight'] = 11 traj_data.attrs['pass'] = 1 traj_data.attrs['source'] = 'RAPPLS flight 11, SIPEX-II 2012' #some data traj_data.create_dataset('pos_x', data = air_traj['X']) traj_data.create_dataset('pos_y', data = air_traj['Y']) traj_data.create_dataset('pos_z', data = air_traj['Z'])
.ipynb_checkpoints/Point cloud to HDF-checkpoint.ipynb
adamsteer/nci-notebooks
apache-2.0
close and write the file out
lidar_test.close()
.ipynb_checkpoints/Point cloud to HDF-checkpoint.ipynb
adamsteer/nci-notebooks
apache-2.0
OK, that's an arbitrary HDF file built. The generated file is substantially smaller than the combined sources - 158 MB from 193 MB, with no attention paid to optimisation. The .LAZ version of the input text file here is 66 MB. More compact, but we can't query it directly - and we have to fake fields! Everything in the swath dataset can be stored, but we need to pretend uncertainties are RGB, so if person X comes along and doesn't read the metadata well, they get crazy colours, call us up and complain. Or we need to use .LAZ extra bits, and deal with awkward ways of describing things. It's also probably a terrible HDF, with no respect to CF compliance at all. That's to come :) And now we add some 3D photogrammetry at about 80 points/m^2:
photo = np.genfromtxt('/Users/adam/Documents/PhD/is6_f11/photoscan/is6_f11_photoscan_Cloud.txt',skip_header=1) columns = ['X', 'Y', 'Z', 'R', 'G', 'B'] photo = pd.DataFrame(photo[:,0:6], columns=columns) #create a file instance, with the intention to write it out lidar_test = h5py.File('lidar_test.hdf5', 'r+') photo_data = lidar_test.create_group('3d_photo') photo_data.create_dataset('UTM_X', data=photo['X']) photo_data.create_dataset('UTM_Y', data=photo['Y']) photo_data.create_dataset('Z', data=photo['Z']) photo_data.create_dataset('R', data=photo['R']) photo_data.create_dataset('G', data=photo['G']) photo_data.create_dataset('B', data=photo['B']) #del lidar_test['3d_photo'] lidar_test.close()
.ipynb_checkpoints/Point cloud to HDF-checkpoint.ipynb
adamsteer/nci-notebooks
apache-2.0
Storage is a bit less efficient here: the ASCII cloud is 2.1 GB, the .LAZ format with the same data is 215 MB, and the HDF file containing the LiDAR, trajectory and 3D photo cloud is 1.33 GB. So there's probably a case for keeping super-dense clouds in different files (along with all their ancillary data). Note that .LAZ is able to store all the data used for the super-dense cloud here. But - how do we query it efficiently? Also, this is just a demonstration, so we push on! Now, let's look at the HDF file... and get stuff
from netCDF4 import Dataset thedata = Dataset('lidar_test.hdf5', 'r') thedata
.ipynb_checkpoints/Point cloud to HDF-checkpoint.ipynb
adamsteer/nci-notebooks
apache-2.0
There are the two groups - swath_data and traj_data
swath = thedata['swath_data'] swath utm_xy = np.column_stack((swath['UTM_X'],swath['UTM_Y'])) idx = np.where((utm_xy[:,0] > -100) & (utm_xy[:,0] < 200) & (utm_xy[:,1] > -100) & (utm_xy[:,1] < 200) ) chunk_z = swath['Z'][idx] chunk_z.size max(chunk_z) chunk_x = swath['UTM_X'][idx] chunk_x.size chunk_y = swath['UTM_Y'][idx] chunk_y.size chunk_uncert = swath['Z_UNCERT'][idx] chunk_uncert.size plt.scatter(chunk_x, chunk_y, c=chunk_z, lw=0, s=2)
.ipynb_checkpoints/Point cloud to HDF-checkpoint.ipynb
adamsteer/nci-notebooks
apache-2.0
That gave us a small chunk of LIDAR points, without loading the whole point dataset. Neat! ...but being continually dissatisfied, we want more! Let's get just the corresponding trajectory:
traj = thedata['traj_data'] traj
.ipynb_checkpoints/Point cloud to HDF-checkpoint.ipynb
adamsteer/nci-notebooks
apache-2.0
Because there's essentially no X extent for the flight data, only the Y coordinate of the flight data is needed...
pos_y = traj['pos_y'] idx = np.where((pos_y[:] > -100.) & (pos_y[:] < 200.)) cpos_x = traj['pos_x'][idx] cpos_y = traj['pos_y'][idx] cpos_z = traj['pos_z'][idx]
.ipynb_checkpoints/Point cloud to HDF-checkpoint.ipynb
adamsteer/nci-notebooks
apache-2.0
Now plot the flight line and LiDAR together
plt.scatter(chunk_x, chunk_y, c=chunk_z, lw=0, s=3, cmap='gist_earth') plt.scatter(cpos_x, cpos_y, c=cpos_z, lw=0, s=5, cmap='Oranges')
.ipynb_checkpoints/Point cloud to HDF-checkpoint.ipynb
adamsteer/nci-notebooks
apache-2.0
...and prove that we are looking at a trajectory and some LiDAR
from mpl_toolkits.mplot3d import Axes3D #set up a plot plt_az=310 plt_elev = 40. plt_s = 3 cb_fmt = '%.1f' cmap1 = plt.get_cmap('gist_earth', 10) #make a plot fig = plt.figure() fig.set_size_inches(35/2.51, 20/2.51) ax0 = fig.add_subplot(111, projection='3d') a0 = ax0.scatter(chunk_x, chunk_y, (chunk_z-min(chunk_z))*2, c=np.ndarray.tolist((chunk_z-min(chunk_z))*2),\ cmap=cmap1,lw=0, vmin = -0.5, vmax = 5, s=plt_s) ax0.scatter(cpos_x, cpos_y, cpos_z, c=np.ndarray.tolist(cpos_z),\ cmap='hot', lw=0, vmin = 250, vmax = 265, s=10) ax0.view_init(elev=plt_elev, azim=plt_az) plt.tight_layout()
.ipynb_checkpoints/Point cloud to HDF-checkpoint.ipynb
adamsteer/nci-notebooks
apache-2.0
plot coloured by point uncertainty
#set up a plot plt_az=310 plt_elev = 40. plt_s = 3 cb_fmt = '%.1f' cmap1 = plt.get_cmap('gist_earth', 30) #make a plot fig = plt.figure() fig.set_size_inches(35/2.51, 20/2.51) ax0 = fig.add_subplot(111, projection='3d') a0 = ax0.scatter(chunk_x, chunk_y, (chunk_z-min(chunk_z))*2, c=np.ndarray.tolist(chunk_uncert),\ cmap=cmap1, lw=0, vmin = 0, vmax = 0.2, s=plt_s) ax0.scatter(cpos_x, cpos_y, cpos_z, c=np.ndarray.tolist(cpos_z),\ cmap='hot', lw=0, vmin = 250, vmax = 265, s=10) ax0.view_init(elev=plt_elev, azim=plt_az) plt.tight_layout() plt.savefig('thefig.png')
.ipynb_checkpoints/Point cloud to HDF-checkpoint.ipynb
adamsteer/nci-notebooks
apache-2.0
now pull in the photogrammetry cloud. This gets a little messy, since it appears we still need to grab the X and Y dimensions - so still 20 x 10^6 x 2 values. Better than 20 x 10^6 x 6, but I wonder if I'm missing something about indexing.
photo = thedata['3d_photo'] photo photo_xy = np.column_stack((photo['UTM_X'],photo['UTM_Y'])) idx_p = np.where((photo_xy[:,0] > 0) & (photo_xy[:,0] < 100) & (photo_xy[:,1] > 0) & (photo_xy[:,1] < 100) ) plt.scatter(photo['UTM_X'][idx_p], photo['UTM_Y'][idx_p], c = photo['Z'][idx_p],\ cmap='hot',vmin=-1, vmax=1, lw=0, s=plt_s) p_x = photo['UTM_X'][idx_p] p_y = photo['UTM_Y'][idx_p] p_z = photo['Z'][idx_p] plt_az=310 plt_elev = 70. plt_s = 2 #make a plot fig = plt.figure() fig.set_size_inches(25/2.51, 10/2.51) ax0 = fig.add_subplot(111, projection='3d') #LiDAR points ax0.scatter(chunk_x, chunk_y, chunk_z-50, \ c=np.ndarray.tolist(chunk_z),\ cmap=cmap1, vmin=-30, vmax=2, lw=0, s=plt_s) #3D photogrammetry pointd ax0.scatter(p_x, p_y, p_z, c=np.ndarray.tolist(p_z),\ cmap='hot', vmin=-1, vmax=1, lw=0, s=5) #aicraft trajectory ax0.scatter(cpos_x, cpos_y, cpos_z, c=np.ndarray.tolist(cpos_z),\ cmap='hot', lw=0, vmin = 250, vmax = 265, s=10) ax0.view_init(elev=plt_elev, azim=plt_az) plt.tight_layout() plt.savefig('with_photo.png')
.ipynb_checkpoints/Point cloud to HDF-checkpoint.ipynb
adamsteer/nci-notebooks
apache-2.0
This is kind of a clunky plot - but you get the idea (I hope). LiDAR is in blues, and the 100 x 100 photogrammetry patch and the trajectory are both in oranges. Different data sources, different resolutions, extracted using pretty much the same set of queries.
print('LiDAR points: {0}\nphotogrammetry points: {1}\ntrajectory points: {2}'. format(len(chunk_x), len(p_x), len(cpos_x) ))
.ipynb_checkpoints/Point cloud to HDF-checkpoint.ipynb
adamsteer/nci-notebooks
apache-2.0
Lists always have their order preserved in Python, so you can guarantee that shopping_list[0] will have the value "bread". Tuples A tuple is another of the standard Python data structures. They behave in a similar way to lists but have one key difference: they are immutable. Let's look at what this means. A more detailed intro to Tuples can be found here
# A tuple is declared with the curved brackets () instead of the [] for a list my_tuple = (1,2,'cat','dog') # But since a tuple is immutable the next line will not run my_tuple[0] = 4
lesson4.ipynb
trsherborne/learn-python
mit
So what can we learn from this? Once you declare a tuple, the object cannot be changed. For this reason, tuples have more optimised methods behind the scenes, so they can be more efficient and faster in your code. A closer look at using Tuples
# A tuple might be immutable but can contain mutable objects my_list_tuple = ([1,2,3],[4,5,6]) # This won't work # my_list_tuple[0] = [3,2,1] # But this will! my_list_tuple[0][0:3] = [3,2,1] print(my_list_tuple) # You can add tuples together t1 = (1,2,3) t1 += (4,5,6) print(t1) t2 = (10,20,30) t3 = (40,50,60) print(t2+t3) # Use index() and count() to look at a tuple t1 = (1,2,3,1,1,2) print(t1.index(2)) # Returns the first index of 2 print(t1.count(1)) # Returns how many 1's are in the tuple # You can use tuples for multiple assignments and for multiple return from functions (x,y,z) = (1,2,3) print(x) # This is a basic function doing multiple return in Python def norm_and_square(a): return a,a**2 (a,b) = norm_and_square(4) print(a) print(b) # Swap items using tuples x = 10 y = 20 print('x is {} and y is {}'.format(x,y)) (x,y) = (y,x) print('x is {} and y is {}'.format(x,y))
lesson4.ipynb
trsherborne/learn-python
mit
Question - Write a function which swaps two elements using tuples
# TO DO def my_swap_function(a,b): # write here! return b,a # END TO DO a = 1 b = 2 x = my_swap_function(a,b) print(x)
lesson4.ipynb
trsherborne/learn-python
mit
Dictionaries Dictionaries are perhaps the most useful and hardest to grasp data structure from the basic set in Python. Dictionaries are not iterable in the same sense as lists and tuples, and using them requires a different approach. Dictionaries are sometimes called hash maps, hash tables or maps in other programming languages. You can think of a dictionary as the same as a physical dictionary: it is a collection of key (the word) and value (the definition) pairs. Each key is unique and has an associated value; the key functions as the index for the value, but it can be any immutable (hashable) object. In contrast to alphabetical dictionaries, a Python dictionary does not keep its keys sorted (and in Python versions before 3.7 the insertion order was not guaranteed either).
# Declare a dictionary using the {} brackets or the dict() method my_dict = {} # Add new items to the dictionary by stating the key as the index and the value my_dict['bananas'] = 'this is a fruit and a berry' my_dict['apples'] = 'this is a fruit' my_dict['avocados'] = 'this is a berry' print(my_dict) # So now we can use the key to get a value in the dictionary print(my_dict['bananas']) # But this won't work if we haven't added an item to the dict #print(my_dict['cherries']) # We can fix this line using the get(key,def) method. This is safer as you wont get KeyError! print(my_dict.get('cherries','Not found :(')) # If you are given a dictionary data file you know nothing about you can inspect it like so # Get all the keys of a dictionary print(my_dict.keys()) # Get all the values from a dictionary print(my_dict.values()) # Of course you could print the whole dictionary, but it might be huge! These methods break # the dict down, but the downside is that you can't match up the keys and values! # Test for membership in the keys using the in operator if 'avocados' in my_dict: print(my_dict['avocados']) # Dictionary values can also be lists or other data structures my_lists = {} my_lists['shopping list'] = shopping_list my_lists['holidays'] = ['Munich','Naples','New York','Tokyo','San Francisco','Los Angeles'] # Now my I store a dictionary with each list named with keys and the lists as values print(my_lists)
lesson4.ipynb
trsherborne/learn-python
mit
Wrapping everything up, we can create a list of dictionaries with multiple fields and iterate over a dictionary
# Declare a list europe = [] # Create dicts and add to lists germany = {"name": "Germany", "population": 81000000,"speak_german":True} europe.append(germany) luxembourg = {"name": "Luxembourg", "population": 512000,"speak_german":True} europe.append(luxembourg) uk = {"name":"United Kingdom","population":64100000,"speak_german":False} europe.append(uk) print(europe) print() for country in europe: for key, value in country.items(): print('{}\t{}'.format(key,value)) print()
lesson4.ipynb
trsherborne/learn-python
mit
Question - Add at least 3 more countries to the europe list and use a for loop to get a new list of every country which speaks German
# TO DO - You might need more than just a for loop! # END TO DO
lesson4.ipynb
trsherborne/learn-python
mit
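One possible solution is sketched below; the extra country entries and their populations are only illustrative, and a list comprehension would work just as well as the explicit loop.
# Possible solution sketch - the added countries and population figures are illustrative.
europe.append({"name": "Austria", "population": 8700000, "speak_german": True})
europe.append({"name": "Switzerland", "population": 8400000, "speak_german": True})
europe.append({"name": "France", "population": 66900000, "speak_german": False})

german_speaking = []
for country in europe:
    if country["speak_german"]:
        german_speaking.append(country["name"])

print(german_speaking)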
A peek at Pandas We've seen some of the standard library of data structures in Python. We will briefly look at Pandas now, a powerful data manipulation library which is a sensible next step for organising your data when you need something more complex than the standard Python data structures. The core of Pandas is the DataFrame, which will look familiar if you have worked with R before. This organises data in a table format and gives you spreadsheet-like handling of your information. Using Pandas can make your job of handling data easier, and many libraries for plotting data (such as Seaborn) can handle a Pandas DataFrame much more easily than a list as input. Note: Pandas uses NumPy under the hood, another package for simplifying numerical operations and working with arrays. We will look at NumPy and Pandas together in two lessons' time.
# We import the Pandas packages using the import statement we've seen before import pandas as pd # To create a Pandas DataFrame from a simpler data structure we use the following routine europe_df = pd.DataFrame.from_dict(europe) print(type(europe_df)) # Running this cell as is provides the fancy formatting of Pandas which can prove useful. europe_df
lesson4.ipynb
trsherborne/learn-python
mit
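As a small taste of that spreadsheet-like handling (a sketch using the europe_df built above), columns can be selected and rows filtered directly:
# Select a single column (returns a pandas Series)
print(europe_df['population'])
# Filter rows with a boolean condition: German-speaking countries only
print(europe_df[europe_df['speak_german']])
# Simple aggregation over a column
print(europe_df['population'].sum())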
With that out of the way, let's load the MNIST data set and scale the images to a range between 0 and 1. If you haven't already downloaded the data set, the Keras load_data function will download the data directly from S3 on AWS.
# Loads the training and test data sets (ignoring class labels) (x_train, _), (x_test, _) = mnist.load_data() # Scales the training and test data to range between 0 and 1. max_value = float(x_train.max()) x_train = x_train.astype('float32') / max_value x_test = x_test.astype('float32') / max_value
notebooks/06_autoencoder.ipynb
ramhiser/Keras-Tutorials
mit
The data set consists of 3D arrays with 60K training and 10K test images. The images have a resolution of 28 x 28 (pixels).
x_train.shape, x_test.shape
notebooks/06_autoencoder.ipynb
ramhiser/Keras-Tutorials
mit
To work with the images as vectors, let's reshape the 3D arrays as matrices. In doing so, we'll reshape the 28 x 28 images into vectors of length 784.
x_train = x_train.reshape((len(x_train), np.prod(x_train.shape[1:]))) x_test = x_test.reshape((len(x_test), np.prod(x_test.shape[1:]))) (x_train.shape, x_test.shape)
notebooks/06_autoencoder.ipynb
ramhiser/Keras-Tutorials
mit
Simple Autoencoder Let's start with a simple autoencoder for illustration. The encoder and decoder functions are each fully-connected neural layers. The encoder function uses a ReLU activation function, while the decoder function uses a sigmoid activation function. So what are the encoder and the decoder layers doing? The encoder layer "encodes" the input image as a compressed representation in a reduced dimension. The compressed image typically looks garbled, nothing like the original image. The decoder layer "decodes" the encoded image back to the original dimension. The decoded image is a lossy reconstruction of the original image. In our example, the compressed image has a dimension of 32. The encoder model reduces the dimension from the original 784-dimensional vector to the encoded 32-dimensional vector. The decoder model restores the dimension from the encoded 32-dimensional representation back to the original 784-dimensional vector. The compression factor is the ratio of the input dimension to the encoded dimension. In our case, the factor is 24.5 = 784 / 32. The autoencoder model maps an input image to its reconstructed image.
# input dimension = 784 input_dim = x_train.shape[1] encoding_dim = 32 compression_factor = float(input_dim) / encoding_dim print("Compression factor: %s" % compression_factor) autoencoder = Sequential() autoencoder.add( Dense(encoding_dim, input_shape=(input_dim,), activation='relu') ) autoencoder.add( Dense(input_dim, activation='sigmoid') ) autoencoder.summary()
notebooks/06_autoencoder.ipynb
ramhiser/Keras-Tutorials
mit
Encoder Model We can extract the encoder model from the first layer of the autoencoder model. The reason we want to extract the encoder model is to examine what an encoded image looks like.
input_img = Input(shape=(input_dim,)) encoder_layer = autoencoder.layers[0] encoder = Model(input_img, encoder_layer(input_img)) encoder.summary()
notebooks/06_autoencoder.ipynb
ramhiser/Keras-Tutorials
mit
Okay, now we're ready to train our first autoencoder. We'll iterate on the training data in batches of 256 in 50 epochs. Let's also use the Adam optimizer and per-pixel binary crossentropy loss. The purpose of the loss function is to reconstruct an image similar to the input image. I want to call out something that may look like a typo or may not be obvious at first glance. Notice the repeat of x_train in autoencoder.fit(x_train, x_train, ...). This implies that x_train is both the input and output, which is exactly what we want for image reconstruction. I'm running this code on a laptop, so you'll notice the training times are a bit slow (no GPU).
autoencoder.compile(optimizer='adam', loss='binary_crossentropy') autoencoder.fit(x_train, x_train, epochs=50, batch_size=256, shuffle=True, validation_data=(x_test, x_test))
notebooks/06_autoencoder.ipynb
ramhiser/Keras-Tutorials
mit
We've successfully trained our first autoencoder. With a mere 50,992 parameters, our autoencoder model can compress an MNIST digit down to 32 floating-point digits. Not that impressive, but it works. To check out the encoded images and the reconstructed image quality, we randomly sample 10 test images. I really like how the encoded images look. Do they make sense? No. Are they eye candy though? Most definitely. However, the reconstructed images are quite lossy. You can see the digits clearly, but notice the loss in image quality.
num_images = 10 np.random.seed(42) random_test_images = np.random.randint(x_test.shape[0], size=num_images) encoded_imgs = encoder.predict(x_test) decoded_imgs = autoencoder.predict(x_test) plt.figure(figsize=(18, 4)) for i, image_idx in enumerate(random_test_images): # plot original image ax = plt.subplot(3, num_images, i + 1) plt.imshow(x_test[image_idx].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) # plot encoded image ax = plt.subplot(3, num_images, num_images + i + 1) plt.imshow(encoded_imgs[image_idx].reshape(8, 4)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) # plot reconstructed image ax = plt.subplot(3, num_images, 2*num_images + i + 1) plt.imshow(decoded_imgs[image_idx].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) plt.show()
notebooks/06_autoencoder.ipynb
ramhiser/Keras-Tutorials
mit
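As a quick check, the 50,992 parameter count is just the weights and biases of the two Dense layers:

$$\underbrace{784 \times 32 + 32}_{\text{encoder}} \;+\; \underbrace{32 \times 784 + 784}_{\text{decoder}} \;=\; 25{,}120 + 25{,}872 \;=\; 50{,}992.$$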
Deep Autoencoder Above, we used single fully-connected layers for both the encoding and decoding models. Instead, we can stack multiple fully-connected layers to make each of the encoder and decoder functions deep. You know, because deep learning. In this next model, we'll use 3 fully-connected layers for the encoding model with decreasing dimensions from 128 to 64 to 32 again. Likewise, we'll add 3 fully-connected decoder layers that reconstruct the image back to 784 dimensions. Except for the last layer, we'll use ReLU activation functions again. In Keras, this model is painfully simple to do, so let's get started. We'll use the same training configuration: Adam + 50 epochs + batch size of 256.
autoencoder = Sequential() # Encoder Layers autoencoder.add(Dense(4 * encoding_dim, input_shape=(input_dim,), activation='relu')) autoencoder.add(Dense(2 * encoding_dim, activation='relu')) autoencoder.add(Dense(encoding_dim, activation='relu')) # Decoder Layers autoencoder.add(Dense(2 * encoding_dim, activation='relu')) autoencoder.add(Dense(4 * encoding_dim, activation='relu')) autoencoder.add(Dense(input_dim, activation='sigmoid')) autoencoder.summary()
notebooks/06_autoencoder.ipynb
ramhiser/Keras-Tutorials
mit
Encoder Model Like we did above, we can extract the encoder model from the autoencoder. The encoder model consists of the first 3 layers in the autoencoder, so let's extract them to visualize the encoded images.
input_img = Input(shape=(input_dim,)) encoder_layer1 = autoencoder.layers[0] encoder_layer2 = autoencoder.layers[1] encoder_layer3 = autoencoder.layers[2] encoder = Model(input_img, encoder_layer3(encoder_layer2(encoder_layer1(input_img)))) encoder.summary() autoencoder.compile(optimizer='adam', loss='binary_crossentropy') autoencoder.fit(x_train, x_train, epochs=50, batch_size=256, validation_data=(x_test, x_test))
notebooks/06_autoencoder.ipynb
ramhiser/Keras-Tutorials
mit
As with the simple autoencoder, we randomly sample 10 test images (the same ones as before). The reconstructed digits look much better than those from the single-layer autoencoder. This observation aligns with the reduction in validation loss after adding multiple layers to the autoencoder.
num_images = 10 np.random.seed(42) random_test_images = np.random.randint(x_test.shape[0], size=num_images) encoded_imgs = encoder.predict(x_test) decoded_imgs = autoencoder.predict(x_test) plt.figure(figsize=(18, 4)) for i, image_idx in enumerate(random_test_images): # plot original image ax = plt.subplot(3, num_images, i + 1) plt.imshow(x_test[image_idx].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) # plot encoded image ax = plt.subplot(3, num_images, num_images + i + 1) plt.imshow(encoded_imgs[image_idx].reshape(8, 4)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) # plot reconstructed image ax = plt.subplot(3, num_images, 2*num_images + i + 1) plt.imshow(decoded_imgs[image_idx].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) plt.show()
notebooks/06_autoencoder.ipynb
ramhiser/Keras-Tutorials
mit
Convolutional Autoencoder Now that we've explored deep autoencoders, let's use a convolutional autoencoder instead, given that the input objects are images. What this means is our encoding and decoding models will be convolutional neural networks instead of fully-connected networks. Again, Keras makes this very easy for us. Before we get started though, we need to reshape the images back to 28 x 28 x 1 for the convnets. The 1 is for a single channel, because the images are black and white. If we had RGB color, there would be 3 channels.
x_train = x_train.reshape((len(x_train), 28, 28, 1)) x_test = x_test.reshape((len(x_test), 28, 28, 1))
notebooks/06_autoencoder.ipynb
ramhiser/Keras-Tutorials
mit
To build the convolutional autoencoder, we'll make use of Conv2D and MaxPooling2D layers for the encoder and Conv2D and UpSampling2D layers for the decoder. The encoded images are transformed to a 3D array of dimensions 4 x 4 x 8, but to visualize the encoding, we'll flatten it to a vector of length 128. I tried to use an encoding dimension of 32 like above, but I kept getting subpar results. After the flattening layer, we reshape the image back to a 4 x 4 x 8 array before upsampling back to a 28 x 28 x 1 image.
autoencoder = Sequential() # Encoder Layers autoencoder.add(Conv2D(16, (3, 3), activation='relu', padding='same', input_shape=x_train.shape[1:])) autoencoder.add(MaxPooling2D((2, 2), padding='same')) autoencoder.add(Conv2D(8, (3, 3), activation='relu', padding='same')) autoencoder.add(MaxPooling2D((2, 2), padding='same')) autoencoder.add(Conv2D(8, (3, 3), strides=(2,2), activation='relu', padding='same')) # Flatten encoding for visualization autoencoder.add(Flatten()) autoencoder.add(Reshape((4, 4, 8))) # Decoder Layers autoencoder.add(Conv2D(8, (3, 3), activation='relu', padding='same')) autoencoder.add(UpSampling2D((2, 2))) autoencoder.add(Conv2D(8, (3, 3), activation='relu', padding='same')) autoencoder.add(UpSampling2D((2, 2))) autoencoder.add(Conv2D(16, (3, 3), activation='relu')) autoencoder.add(UpSampling2D((2, 2))) autoencoder.add(Conv2D(1, (3, 3), activation='sigmoid', padding='same')) autoencoder.summary()
notebooks/06_autoencoder.ipynb
ramhiser/Keras-Tutorials
mit
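As a quick sanity check on that 4 x 4 x 8 shape: the three downsampling steps (two MaxPooling2D layers and the strided Conv2D, all with padding='same', so odd sizes round up) halve the spatial dimensions as

$$28 \rightarrow 14 \rightarrow 7 \rightarrow 4, \qquad 4 \times 4 \times 8 = 128,$$

which is why the flattened encoding has length 128.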
Encoder Model To extract the encoder model for the autoencoder, we're going to use a slightly different approach than before. Rather than extracting the first 6 layers, we're going to create a new Model with the same input as the autoencoder, but the output will be that of the flattening layer. As a side note, this is a very useful technique for grabbing submodels for things like transfer learning. As I mentioned before, the encoded image is a vector of length 128.
encoder = Model(inputs=autoencoder.input, outputs=autoencoder.get_layer('flatten_1').output) encoder.summary() autoencoder.compile(optimizer='adam', loss='binary_crossentropy') autoencoder.fit(x_train, x_train, epochs=100, batch_size=128, validation_data=(x_test, x_test))
notebooks/06_autoencoder.ipynb
ramhiser/Keras-Tutorials
mit
The reconstructed digits look even better than before. This is no surprise given an even lower validation loss. Other than the slightly improved reconstruction, check out how the encoded image has changed. What's even cooler is that the encoded images of the 9's look similar, as do those of the 8's. This similarity was far less pronounced for the simple and deep autoencoders.
num_images = 10 np.random.seed(42) random_test_images = np.random.randint(x_test.shape[0], size=num_images) encoded_imgs = encoder.predict(x_test) decoded_imgs = autoencoder.predict(x_test) plt.figure(figsize=(18, 4)) for i, image_idx in enumerate(random_test_images): # plot original image ax = plt.subplot(3, num_images, i + 1) plt.imshow(x_test[image_idx].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) # plot encoded image ax = plt.subplot(3, num_images, num_images + i + 1) plt.imshow(encoded_imgs[image_idx].reshape(16, 8)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) # plot reconstructed image ax = plt.subplot(3, num_images, 2*num_images + i + 1) plt.imshow(decoded_imgs[image_idx].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) plt.show()
notebooks/06_autoencoder.ipynb
ramhiser/Keras-Tutorials
mit
Denoising Images with the Convolutional Autoencoder Earlier, I mentioned that autoencoders are useful for denoising data, including images. When I learned about this concept in grad school, my mind was blown. This simple task helped me realize data can be manipulated in very useful ways and that the dirty data we often inherit can be cleansed using more advanced techniques. With that in mind, let's add a bit of noise to the test images and see how good the convolutional autoencoder is at removing the noise.
x_train_noisy = x_train + np.random.normal(loc=0.0, scale=0.5, size=x_train.shape) x_train_noisy = np.clip(x_train_noisy, 0., 1.) x_test_noisy = x_test + np.random.normal(loc=0.0, scale=0.5, size=x_test.shape) x_test_noisy = np.clip(x_test_noisy, 0., 1.) num_images = 10 np.random.seed(42) random_test_images = np.random.randint(x_test.shape[0], size=num_images) # Denoise test images x_test_denoised = autoencoder.predict(x_test_noisy) plt.figure(figsize=(18, 4)) for i, image_idx in enumerate(random_test_images): # plot original image ax = plt.subplot(2, num_images, i + 1) plt.imshow(x_test_noisy[image_idx].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) # plot reconstructed image ax = plt.subplot(2, num_images, num_images + i + 1) plt.imshow(x_test_denoised[image_idx].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) plt.show()
notebooks/06_autoencoder.ipynb
ramhiser/Keras-Tutorials
mit
Convolutional Autoencoder - Take 2 Well, those images are terrible. They remind me of the mask from the movie Scream. Okay, so let's try that again. This time we're going to build a ConvNet with a lot more parameters and forego visualizing the encoding layer. The network will be a bit larger and slower to train, but the results are definitely worth the effort. One more thing: this time, let's use (x_train_noisy, x_train) as training data and (x_test_noisy, x_test) as validation data.
autoencoder = Sequential() # Encoder Layers autoencoder.add(Conv2D(32, (3, 3), activation='relu', padding='same', input_shape=x_train.shape[1:])) autoencoder.add(MaxPooling2D((2, 2), padding='same')) autoencoder.add(Conv2D(32, (3, 3), activation='relu', padding='same')) autoencoder.add(MaxPooling2D((2, 2), padding='same')) # Decoder Layers autoencoder.add(Conv2D(32, (3, 3), activation='relu', padding='same')) autoencoder.add(UpSampling2D((2, 2))) autoencoder.add(Conv2D(32, (3, 3), activation='relu', padding='same')) autoencoder.add(UpSampling2D((2, 2))) autoencoder.add(Conv2D(1, (3, 3), activation='sigmoid', padding='same')) autoencoder.summary() autoencoder.compile(optimizer='adam', loss='binary_crossentropy') autoencoder.fit(x_train_noisy, x_train, epochs=100, batch_size=128, validation_data=(x_test_noisy, x_test)) # Denoise test images x_test_denoised = autoencoder.predict(x_test_noisy) plt.figure(figsize=(18, 4)) for i, image_idx in enumerate(random_test_images): # plot original image ax = plt.subplot(2, num_images, i + 1) plt.imshow(x_test_noisy[image_idx].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) # plot reconstructed image ax = plt.subplot(2, num_images, num_images + i + 1) plt.imshow(x_test_denoised[image_idx].reshape(28, 28)) plt.gray() ax.get_xaxis().set_visible(False) ax.get_yaxis().set_visible(False) plt.show()
notebooks/06_autoencoder.ipynb
ramhiser/Keras-Tutorials
mit
Set up Network
import network # 784 (28 x 28 pixel images) input neurons; 30 hidden neurons; 10 output neurons net = network.Network([784, 30, 10])
neural-networks-and-deep-learning/src/run_network.ipynb
the-deep-learners/study-group
mit
Train Network
# Use stochastic gradient descent over 30 epochs, with mini-batch size of 10, learning rate of 3.0 net.SGD(training_data, 30, 10, 3.0, test_data=test_data)
neural-networks-and-deep-learning/src/run_network.ipynb
the-deep-learners/study-group
mit
Exercise: Create network with just two layers
two_layer_net = network.Network([784, 10]) two_layer_net.SGD(training_data, 10, 10, 1.0, test_data=test_data) two_layer_net.SGD(training_data, 10, 10, 2.0, test_data=test_data) two_layer_net.SGD(training_data, 10, 10, 3.0, test_data=test_data) two_layer_net.SGD(training_data, 10, 10, 4.0, test_data=test_data) two_layer_net.SGD(training_data, 20, 10, 3.0, test_data=test_data)
neural-networks-and-deep-learning/src/run_network.ipynb
the-deep-learners/study-group
mit
The data can be obtained from the World Bank web site, but here we work with a slightly cleaned-up version of the data:
data = sm.datasets.fertility.load_pandas().data data.head()
v0.13.1/examples/notebooks/generated/pca_fertility_factors.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
Here we construct a DataFrame that contains only the numerical fertility rate data and set the index to the country names. We also drop all the countries with any missing data.
columns = list(map(str, range(1960, 2012))) data.set_index("Country Name", inplace=True) dta = data[columns] dta = dta.dropna() dta.head()
v0.13.1/examples/notebooks/generated/pca_fertility_factors.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
There are two ways to use PCA to analyze a rectangular matrix: we can treat the rows as the "objects" and the columns as the "variables", or vice-versa. Here we will treat the fertility measures as "variables" used to measure the countries as "objects". Thus the goal will be to reduce the yearly fertility rate values to a small number of fertility rate "profiles" or "basis functions" that capture most of the variation over time in the different countries. The mean trend is removed in PCA, but it's worthwhile taking a look at it. It shows that fertility has dropped steadily over the time period covered in this dataset. Note that the mean is calculated using a country as the unit of analysis, ignoring population size. This is also true for the PC analysis conducted below. A more sophisticated analysis might weight the countries, say by population in 1980.
ax = dta.mean().plot(grid=False)
ax.set_xlabel("Year", size=17)
ax.set_ylabel("Fertility rate", size=17)
ax.set_xlim(0, 51)
v0.13.1/examples/notebooks/generated/pca_fertility_factors.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
Next we perform the PCA:
pca_model = PCA(dta.T, standardize=False, demean=True)
v0.13.1/examples/notebooks/generated/pca_fertility_factors.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
Based on the eigenvalues, we see that the first PC dominates, with perhaps a small amount of meaningful variation captured in the second and third PC's.
fig = pca_model.plot_scree(log_scale=False)
v0.13.1/examples/notebooks/generated/pca_fertility_factors.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
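The same impression can be checked numerically. This is a small sketch, assuming the fitted object exposes the `eigenvals` attribute (as current statsmodels versions do):

import numpy as np

# Share of total variance captured by each principal component.
explained = pca_model.eigenvals / pca_model.eigenvals.sum()
print(np.round(np.asarray(explained)[:5], 3))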
Next we will plot the PC factors. The dominant factor is monotonically increasing. Countries with a positive score on the first factor will increase faster (or decrease slower) compared to the mean shown above. Countries with a negative score on the first factor will decrease faster than the mean. The second factor is U-shaped with a positive peak at around 1985. Countries with a large positive score on the second factor will have lower than average fertilities at the beginning and end of the data range, but higher than average fertility in the middle of the range.
fig, ax = plt.subplots(figsize=(8, 4))
lines = ax.plot(pca_model.factors.iloc[:, :3], lw=4, alpha=0.6)
ax.set_xticklabels(dta.columns.values[::10])
ax.set_xlim(0, 51)
ax.set_xlabel("Year", size=17)
fig.subplots_adjust(0.1, 0.1, 0.85, 0.9)
legend = fig.legend(lines, ["PC 1", "PC 2", "PC 3"], loc="center right")
legend.draw_frame(False)
v0.13.1/examples/notebooks/generated/pca_fertility_factors.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
To better understand what is going on, we will plot the fertility trajectories for sets of countries with similar PC scores. The following convenience function produces such a plot.
idx = pca_model.loadings.iloc[:, 0].argsort()
v0.13.1/examples/notebooks/generated/pca_fertility_factors.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
First we plot the five countries with the greatest scores on PC 1. These countries have a higher rate of fertility increase than the global mean (which is decreasing).
def make_plot(labels):
    fig, ax = plt.subplots(figsize=(9, 5))
    ax = dta.loc[labels].T.plot(legend=False, grid=False, ax=ax)
    dta.mean().plot(ax=ax, grid=False, label="Mean")
    ax.set_xlim(0, 51)
    fig.subplots_adjust(0.1, 0.1, 0.75, 0.9)
    ax.set_xlabel("Year", size=17)
    ax.set_ylabel("Fertility", size=17)
    legend = ax.legend(
        *ax.get_legend_handles_labels(), loc="center left", bbox_to_anchor=(1, 0.5)
    )
    legend.draw_frame(False)


labels = dta.index[idx[-5:]]
make_plot(labels)
v0.13.1/examples/notebooks/generated/pca_fertility_factors.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
Here are the five countries with the greatest scores on factor 2. These are countries that reached peak fertility around 1980, later than much of the rest of the world, followed by a rapid decrease in fertility.
idx = pca_model.loadings.iloc[:, 1].argsort()
make_plot(dta.index[idx[-5:]])
v0.13.1/examples/notebooks/generated/pca_fertility_factors.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
Finally we have the countries with the most negative scores on PC 2. These are the countries where the fertility rate declined much faster than the global mean during the 1960's and 1970's, then flattened out.
make_plot(dta.index[idx[:5]])
v0.13.1/examples/notebooks/generated/pca_fertility_factors.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
We can also look at a scatterplot of the first two principal component scores. We see that the variation among countries is fairly continuous, except perhaps that the two countries with highest scores for PC 2 are somewhat separated from the other points. These countries, Oman and Yemen, are unique in having a sharp spike in fertility around 1980. No other country has such a spike. In contrast, the countries with high scores on PC 1 (that have continuously increasing fertility), are part of a continuum of variation.
fig, ax = plt.subplots()
pca_model.loadings.plot.scatter(x="comp_00", y="comp_01", ax=ax)
ax.set_xlabel("PC 1", size=17)
ax.set_ylabel("PC 2", size=17)

dta.index[pca_model.loadings.iloc[:, 1] > 0.2].values
v0.13.1/examples/notebooks/generated/pca_fertility_factors.ipynb
statsmodels/statsmodels.github.io
bsd-3-clause
Dataset We are using the CelebA dataset, a large-scale face attributes dataset with more than 200K celebrity images, each with 40 attribute annotations. The images in this dataset cover large pose variations and background clutter. We randomly downsample it by a factor of 30 for computational reasons.
#N=int(len(imgfiles)/30)
N=len(imgfiles)
print("Number of images = {}".format(N))
test = imgfiles[0:N]
test[1]
projects/reports/face_manifold/NTDS_Project.ipynb
mdeff/ntds_2017
mit
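The downsampling by a factor of 30 mentioned above is commented out in the cell; a minimal sketch of that step (assuming `imgfiles` is the full list of image paths, with the name `imgfiles_small` introduced here only for illustration) could look like this:

import random

# Keep roughly 1/30th of the images for computational reasons.
random.seed(0)
imgfiles_small = random.sample(imgfiles, int(len(imgfiles) / 30))
print("Number of images after downsampling = {}".format(len(imgfiles_small)))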
Loading the data
sample_path = imgfiles[0]
sample_im = load_image(sample_path)
sample_im = np.array(sample_im)
img_shape = (sample_im.shape[0], sample_im.shape[1])
ims = np.zeros((N, sample_im.shape[1]*sample_im.shape[0]))

for i, filepath in enumerate(test):
    im = load_image(filepath)
    im = np.array(im)
    im = im.mean(axis=2)
    im = np.asarray(im).ravel().astype(float)
    ims[i] = im
projects/reports/face_manifold/NTDS_Project.ipynb
mdeff/ntds_2017
mit
Learning the Manifold We use Isomap for dimensionality reduction, as we believe that the face image data lies on a structured manifold in a higher dimension and is therefore embeddable in a much lower dimension without much loss of information. Further, Isomap is a graph-based technique, which aligns with our scope.
#iso = manifold.Isomap(n_neighbors=2, n_components=3, max_iter=500, n_jobs=-1)
#Z = iso.fit_transform(ims)  #don't run, can load from pickle as in below cells

#saving the learnt embedding
#with open('var6753_n2_d3.pkl', 'wb') as f:  #model learnt with n_neighbors=2 and n_components=3
#    pickle.dump(Z,f)
#with open('var6753_n2_d2.pkl', 'wb') as f:  #model learnt with n_neighbors=2 and n_components=2
#    pickle.dump(Z,f)
#with open('var6753_n4_d3.pkl', 'wb') as f:  #model learnt with n_neighbors=4 and n_components=3
#    pickle.dump(Z,f)

with open('var6753_n2_d2.pkl', 'rb') as f:
    Z = pickle.load(f)

#Visualizing the learnt 3D-manifold in two dimensions
source = ColumnDataSource(
    data=dict(
        x=Z[:, 0],
        y=Z[:, 1],
        desc=list(range(Z.shape[0])),
    )
)

hover = HoverTool(
    tooltips=[
        ("index", "$index"),
        ("(x,y)", "($x, $y)"),
        ("desc", "@desc"),
    ]
)

p = figure(plot_width=700, plot_height=700, tools=[hover], title="Mouse over the dots")
p.circle('x', 'y', size=10, source=source)
show(p)
projects/reports/face_manifold/NTDS_Project.ipynb
mdeff/ntds_2017
mit
Regeneration from Lower Dimensional Space While traversing the chosen path, we also sub-sample in the lower-dimensional space in order to create smooth transitions in the video. We naturally expect smoothness, as points that are close in the lower-dimensional space should correspond to similar images. Since we do not have an exact representation for these sub-sampled points in the original image space, we need a method to map them back to the higher dimension. We will be using Extremely Randomized Trees for this regression. As an alternative, we will also test a convex combination approach to generate representations for the sub-sampled points. Path Selection Heuristic Method 1 We generate a k-nearest-neighbour graph using a Gaussian kernel, compute all-pairs shortest paths on this graph, and randomly choose one path from that list for visualization. For regeneration of the sub-sampled points, we use Extremely Randomized Trees as mentioned above.
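Concretely, between two consecutive path nodes with low-dimensional coordinates $z_i$ and $z_{i+1}$, the sub-sampled points are the convex combinations
$$z(\lambda) = (1 - \lambda)\, z_i + \lambda\, z_{i+1}, \qquad \lambda \in [0, 1],$$
which is exactly what the `np.linspace(0, 1, N)` loop in the cell below evaluates before mapping each $z(\lambda)$ back to image space with the regressor.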
#Mapping the regressor from low dimension space to high dimension space
lin = ExtraTreeRegressor(max_depth=19)
lin.fit(Z, ims)
lin.score(Z, ims)

pred = lin.predict(Z[502].reshape(1, -1));

fig_new, [ax1, ax2] = plt.subplots(1, 2)
ax1.imshow(ims[502].reshape(*img_shape), cmap='gray')
ax1.set_title('Original')
ax2.imshow(pred.reshape(*img_shape), cmap='gray')
ax2.set_title('Reconstructed')

person1 = 34
person2 = 35
test = ((Z[person1] + Z[person2]) / 2)  #+ 0.5*np.random.randn(*Z[person1].shape)
pred = lin.predict(test.reshape(1, -1))

fig_newer, [ax1, ax2, ax3] = plt.subplots(1, 3)
ax1.imshow(ims[person1].reshape(*img_shape), cmap='gray')
ax1.set_title('Face 1')
ax2.imshow(ims[person2].reshape(*img_shape), cmap='gray')
ax2.set_title('Face 2')
ax3.imshow(pred.reshape(*img_shape), cmap='gray')
ax3.set_title('Face between lying on manifold');

distances = spatial.distance.squareform(spatial.distance.pdist(Z, 'braycurtis'))
kernel_width = distances.mean()
weights = np.exp(-np.square(distances) / (kernel_width ** 0.1))
for i in range(weights.shape[0]):
    weights[i][i] = 0

NEIGHBORS = 2
#NEIGHBORS = 100

# Your code here.
#Find sorted indices of weights for each row
indices = np.argsort(weights, axis=1)
#Create a zero matrix which would later be filled with sparse weights
n_weights = np.zeros((weights.shape[0], weights.shape[1]))
#Loop that iterates over the 'K' strongest weights in each row, and assigns them to sparse matrix, leaving others zero
for i in range(indices.shape[0]):
    for j in range(indices.shape[1] - NEIGHBORS, indices.shape[1]):
        col = indices[i][j]
        n_weights[i][col] = weights[i][col]
#Imposing symmetricity
big = n_weights.T > n_weights
n_weights_s = n_weights - n_weights * big + n_weights.T * big

G = nx.from_numpy_matrix(n_weights_s)
pos = {}
for i in range(Z.shape[0]):
    pos[i] = Z[i, 0:2]
fig2, ax2 = plt.subplots()
nx.draw(G, pos, ax=ax2, node_size=10)

imlist = nx.all_pairs_dijkstra_path(G)[0][102]  #choosing the path starting at node 0 and ending at node 102
imlist

N = 25  #number of sub-samples between each consecutive pair in the path
lbd = np.linspace(0, 1, N)
counter = 0
for count, i in enumerate(imlist):
    if count != len(imlist) - 1:
        person1 = i
        person2 = imlist[count + 1]
        for j in range(N):
            test = (lbd[j] * Z[person2]) + ((1 - lbd[j]) * Z[person1])
            pred = lin.predict(test.reshape(1, -1))
            im = Image.fromarray(pred.reshape(*img_shape))
            im = im.convert('RGB')
            im.save('{}.png'.format(counter))
            counter += 1

os.system("ffmpeg -f image2 -r 10 -i ./%d.png -vcodec mpeg4 -y ./method1.mp4")
projects/reports/face_manifold/NTDS_Project.ipynb
mdeff/ntds_2017
mit
Please check the generated video in the same enclosing folder. Observing the output of the tree regressor, we notice sudden jumps in the reconstructed video. We suspect that these discontinuities are either an artefact of the Isomap embedding in a much lower dimension or a consequence of the reconstruction method. To investigate further, we plot the Frobenius norm of the sampled point in the Isomap domain and that of the reconstructed image in the original domain. Since we are sampling along a straight line between two images, the norm of the sampled point is expected to increase or decrease roughly linearly. This indeed turns out to be the case for the sampled points in the Isomap domain. However, as we suspected, after reconstruction we observe sudden jumps in the plot. Clearly, this points to the tree regressor overfitting the data, which produces the sudden jumps.
norm_vary = list()
norm_im = list()
lbd = np.linspace(0, 1, 101)
person1 = 12
person2 = 14
for i in range(101):
    test = (lbd[i] * Z[person2]) + ((1-lbd[i]) * Z[person1])
    norm_vary.append(norm(test))
    pred = lin.predict(test.reshape(1, -1))
    im = Image.fromarray(pred.reshape(*img_shape))
    norm_im.append(norm(im))

f, ax = plt.subplots(1, 1)
ax.plot(norm_vary)
ax.set_title('Norm for the mean image in projected space')

norm_vary = list()
norm_im = list()
lbd = np.linspace(0, 1, 101)
for i in range(101):
    test = (lbd[i] * Z[person1]) + ((1-lbd[i]) * Z[person2])
    norm_vary.append(norm(test))
    pred = lin.predict(test.reshape(1, -1))
    im = Image.fromarray(pred.reshape(*img_shape))
    norm_im.append(norm(im))

f, ax = plt.subplots(1, 1)
ax.plot(norm_im)
ax.set_title('Norm for mean image in original space')
projects/reports/face_manifold/NTDS_Project.ipynb
mdeff/ntds_2017
mit
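One possible mitigation, not attempted in the original analysis and included here only as a sketch, is to replace the single `ExtraTreeRegressor` with the averaged ensemble variant from scikit-learn, which is usually less prone to the piecewise-constant jumps of a single tree (the hyperparameter values below are assumptions):

from sklearn.ensemble import ExtraTreesRegressor

# Hypothetical alternative regressor from low-dimensional coordinates back to images;
# averaging many randomized trees typically gives a smoother mapping than one tree.
ens = ExtraTreesRegressor(n_estimators=50, max_depth=12, n_jobs=-1, random_state=0)
ens.fit(Z, ims)
print(ens.score(Z, ims))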
Even after extensive hyperparameter tuning, we are unable to learn a reasonable regressor, hence we use the convex combination approach in the high-dimensional space. Method 2 Instead of choosing a path from the graph, we manually choose a set of points which visibly lie on a 2D manifold. For regeneration of the sub-sampled points, we use convex combinations of consecutive pairs in the high-dimensional image space itself.
#Interesting paths with N4D3 model
#imlist = [1912,3961,2861,4870,146,6648]
#imlist = [3182,5012,5084,1113,2333,1375]
#imlist = [5105,5874,4255,2069,1178]
#imlist = [3583,2134,1034, 3917,3704, 5920,6493]
#imlist = [1678,6535,6699,344,6677,5115,6433]

#Interesting paths with N2D3 model
imlist = [1959,3432,6709,4103, 4850,6231,4418,4324]
#imlist = [369,2749,1542,366,1436,2836]

#Interesting paths with N2D2 model
#imlist = [2617,4574,4939,5682,1917,3599,6324,1927]

N = 25
lbd = np.linspace(0, 1, N)
counter = 0
for count, i in enumerate(imlist):
    if count != len(imlist) - 1:
        person1 = i
        person2 = imlist[count + 1]
        for j in range(N):
            im = (lbd[j] * ims[person2]) + ((1 - lbd[j]) * ims[person1])
            im = Image.fromarray(im.reshape((218, 178)))
            im = im.convert('RGB')
            im.save('{}.png'.format(counter))
            counter += 1

os.system("ffmpeg -f image2 -r 10 -i ./%d.png -vcodec mpeg4 -y ./method2.mp4")
projects/reports/face_manifold/NTDS_Project.ipynb
mdeff/ntds_2017
mit
Description A synchronous machine has a synchronous reactance of $1.0\,\Omega$ per phase and an armature resistance of $0.1\,\Omega$ per phase. If $\vec{E}_A = 460\,V\angle-10°$ and $\vec{V}_\phi = 480\,V\angle 0°$, is this machine a motor or a generator? How much power P is this machine consuming from or supplying to the electrical system? How much reactive power Q is this machine consuming from or supplying to the electrical system?
Ea = 460               # [V]
EA_angle = -10/180*pi  # [rad]
EA = Ea * (cos(EA_angle) + 1j*sin(EA_angle))

Vphi = 480             # [V]
VPhi_angle = 0/180*pi  # [rad]
VPhi = Vphi*exp(1j*VPhi_angle)

Ra = 0.1   # [Ohm]
Xs = 1.0   # [Ohm]
Chapman/Ch5-Problem_5-10.ipynb
dietmarw/EK5312_ElectricalMachines
unlicense
SOLUTION This machine is a motor, consuming power from the power system, because $\vec{E}_A$ is lagging $\vec{V}_\phi$. It is also consuming reactive power, because $E_A \cos{\delta} < V_\phi$. The current flowing in this machine is: $$\vec{I}_A = \frac{\vec{V}_\phi - \vec{E}_A}{R_A + jX_s}$$
IA = (VPhi - EA) / (Ra + Xs*1j)
IA_angle = arctan(IA.imag/IA.real)
print('IA = {:.1f} A ∠ {:.2f}°'.format(abs(IA), IA_angle/pi*180))
Chapman/Ch5-Problem_5-10.ipynb
dietmarw/EK5312_ElectricalMachines
unlicense
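A quick numeric check of the two conditions cited above (here $\delta$ is the angle of $\vec{E}_A$):

from numpy import cos

# E_A lags V_phi (delta < 0) and E_A*cos(delta) < V_phi, consistent with a motor
# that also absorbs reactive power.
delta = EA_angle
print('E_A cos(delta) = {:.1f} V < V_phi = {:.1f} V'.format(abs(EA) * cos(delta), abs(VPhi)))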
Therefore the real power consumed by this motor is: $$P =3V_\phi I_A \cos{\theta}$$
theta = abs(IA_angle)
P = 3 * abs(VPhi) * abs(IA) * cos(theta)
print('''
P = {:.1f} kW
============'''.format(P/1e3))
Chapman/Ch5-Problem_5-10.ipynb
dietmarw/EK5312_ElectricalMachines
unlicense
and the reactive power consumed by this motor is: $$Q = 3V_\phi I_A \sin{\theta}$$
Q = 3 * abs(VPhi) * abs(IA) * sin(theta)
print('''
Q = {:.1f} kvar
============='''.format(Q/1e3))
Chapman/Ch5-Problem_5-10.ipynb
dietmarw/EK5312_ElectricalMachines
unlicense
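As a cross-check (not part of the original solution), both results can also be read off the complex power $S = 3\vec{V}_\phi \vec{I}_A^* = P + jQ$:

import numpy as np

# Complex power absorbed by the machine; VPhi and IA are the phasors computed above.
S = 3 * VPhi * np.conj(IA)
print('P = {:.1f} kW, Q = {:.1f} kvar'.format(S.real / 1e3, S.imag / 1e3))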
Define categorical data types
s = ["Product_Info_1, Product_Info_2, Product_Info_3, Product_Info_5, Product_Info_6, Product_Info_7, Employment_Info_2, Employment_Info_3, Employment_Info_5, InsuredInfo_1, InsuredInfo_2, InsuredInfo_3, InsuredInfo_4, InsuredInfo_5, InsuredInfo_6, InsuredInfo_7, Insurance_History_1, Insurance_History_2, Insurance_History_3, Insurance_History_4, Insurance_History_7, Insurance_History_8, Insurance_History_9, Family_Hist_1, Medical_History_2, Medical_History_3, Medical_History_4, Medical_History_5, Medical_History_6, Medical_History_7, Medical_History_8, Medical_History_9, Medical_History_11, Medical_History_12, Medical_History_13, Medical_History_14, Medical_History_16, Medical_History_17, Medical_History_18, Medical_History_19, Medical_History_20, Medical_History_21, Medical_History_22, Medical_History_23, Medical_History_25, Medical_History_26, Medical_History_27, Medical_History_28, Medical_History_29, Medical_History_30, Medical_History_31, Medical_History_33, Medical_History_34, Medical_History_35, Medical_History_36, Medical_History_37, Medical_History_38, Medical_History_39, Medical_History_40, Medical_History_41", "Product_Info_4, Ins_Age, Ht, Wt, BMI, Employment_Info_1, Employment_Info_4, Employment_Info_6, Insurance_History_5, Family_Hist_2, Family_Hist_3, Family_Hist_4, Family_Hist_5", "Medical_History_1, Medical_History_10, Medical_History_15, Medical_History_24, Medical_History_32"] varTypes = dict() #Very hacky way of inserting and appending ID and Response columns to the required dataframes #Make this better varTypes['categorical'] = s[0].split(', ') #varTypes['categorical'].insert(0, 'Id') #varTypes['categorical'].append('Response') varTypes['continuous'] = s[1].split(', ') #varTypes['continuous'].insert(0, 'Id') #varTypes['continuous'].append('Response') varTypes['discrete'] = s[2].split(', ') #varTypes['discrete'].insert(0, 'Id') #varTypes['discrete'].append('Response') varTypes['dummy'] = ["Medical_Keyword_"+str(i) for i in range(1,49)] varTypes['dummy'].insert(0, 'Id') varTypes['dummy'].append('Response') #Prints out each of the the variable types as a check #for i in iter(varTypes['dummy']): #print i
.ipynb_checkpoints/data-exploration-life-insurance-checkpoint.ipynb
ramabrahma/data-sci-int-capstone
gpl-3.0
Importing life insurance data set The following variables are all categorical (nominal): Product_Info_1, Product_Info_2, Product_Info_3, Product_Info_5, Product_Info_6, Product_Info_7, Employment_Info_2, Employment_Info_3, Employment_Info_5, InsuredInfo_1, InsuredInfo_2, InsuredInfo_3, InsuredInfo_4, InsuredInfo_5, InsuredInfo_6, InsuredInfo_7, Insurance_History_1, Insurance_History_2, Insurance_History_3, Insurance_History_4, Insurance_History_7, Insurance_History_8, Insurance_History_9, Family_Hist_1, Medical_History_2, Medical_History_3, Medical_History_4, Medical_History_5, Medical_History_6, Medical_History_7, Medical_History_8, Medical_History_9, Medical_History_11, Medical_History_12, Medical_History_13, Medical_History_14, Medical_History_16, Medical_History_17, Medical_History_18, Medical_History_19, Medical_History_20, Medical_History_21, Medical_History_22, Medical_History_23, Medical_History_25, Medical_History_26, Medical_History_27, Medical_History_28, Medical_History_29, Medical_History_30, Medical_History_31, Medical_History_33, Medical_History_34, Medical_History_35, Medical_History_36, Medical_History_37, Medical_History_38, Medical_History_39, Medical_History_40, Medical_History_41 The following variables are continuous: Product_Info_4, Ins_Age, Ht, Wt, BMI, Employment_Info_1, Employment_Info_4, Employment_Info_6, Insurance_History_5, Family_Hist_2, Family_Hist_3, Family_Hist_4, Family_Hist_5 The following variables are discrete: Medical_History_1, Medical_History_10, Medical_History_15, Medical_History_24, Medical_History_32 Medical_Keyword_1-48 are dummy variables.
#Import training data
d = pd.read_csv('prud_files/train.csv')

def normalize_df(d):
    min_max_scaler = preprocessing.MinMaxScaler()
    x = d.values.astype(np.float)
    return pd.DataFrame(min_max_scaler.fit_transform(x))

# Import training data
d = pd.read_csv('prud_files/train.csv')

#Separation into groups
df_cat = pd.DataFrame(d, columns=["Id","Response"]+varTypes["categorical"])
df_disc = pd.DataFrame(d, columns=["Id","Response"]+varTypes["categorical"])
df_cont = pd.DataFrame(d, columns=["Id","Response"]+varTypes["categorical"])

d_cat = df_cat.copy()

#normalizes the columns for binary classification
norm_product_info_2 = [pd.get_dummies(d_cat["Product_Info_2"])]

a = pd.DataFrame(normalize_df(d_cat["Response"]))
a.columns = ["nResponse"]
d_cat = pd.concat([d_cat, a], axis=1, join='outer')

for x in varTypes["categorical"]:
    try:
        a = pd.DataFrame(normalize_df(d_cat[x]))
        a.columns = [str("n"+x)]
        d_cat = pd.concat([d_cat, a], axis=1, join='outer')
    except Exception as e:
        print e.args
        print "Error on "+str(x)+" w error: "+str(e)

d_cat.iloc[:,62:66].head(5)

# Normalization of columns
# Create a minimum and maximum processor object

# Define various group by data streams
df = d

gb_PI2 = df.groupby('Product_Info_1')
gb_PI2 = df.groupby('Product_Info_2')
gb_Ins_Age = df.groupby('Ins_Age')
gb_Ht = df.groupby('Ht')
gb_Wt = df.groupby('Wt')
gb_response = df.groupby('Response')

#Outputs rows of the different categorical groups
for c in df.columns:
    if (c in varTypes['categorical']):
        if(c != 'Id'):
            a = [ str(x)+", " for x in df.groupby(c).groups ]
            print c + " : " + str(a)

df_prod_info = pd.DataFrame(d, columns=(["Response"]+ [ "Product_Info_"+str(x) for x in range(1,8)]))
df_emp_info = pd.DataFrame(d, columns=(["Response"]+ [ "Employment_Info_"+str(x) for x in range(1,6)]))
df_bio = pd.DataFrame(d, columns=["Response", "Ins_Age", "Ht", "Wt","BMI"])
df_med_kw = pd.DataFrame(d, columns=(["Response"]+ [ "Medical_Keyword_"+str(x) for x in range(1,48)])).add(axis=[ "Medical_Keyword_"+str(x) for x in range(1,48)])
df_med_kw.describe()

df.head(5)

df.describe()
.ipynb_checkpoints/data-exploration-life-insurance-checkpoint.ipynb
ramabrahma/data-sci-int-capstone
gpl-3.0
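An alternative, more compact way to register the column groups declared earlier is to tag the nominal columns with pandas' `category` dtype; this is a sketch using the `varTypes` dictionary and the DataFrame `d` loaded above:

# Mark the nominal columns as pandas categoricals so later group-bys and dummies are explicit.
for col in varTypes['categorical']:
    d[col] = d[col].astype('category')
d.dtypes.head(10)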
Grouping of various categorical data sets Histograms and descriptive statistics for Risk Response, Ins_Age, BMI, Wt
plt.figure(0)
plt.title("Categorical - Histogram for Risk Response")
plt.xlabel("Risk Response (1-7)")
plt.ylabel("Frequency")
plt.hist(df.Response)
plt.savefig('images/hist_Response.png')
print df.Response.describe()
print ""

plt.figure(1)
plt.title("Continuous - Histogram for Ins_Age")
plt.xlabel("Normalized Ins_Age [0,1]")
plt.ylabel("Frequency")
plt.hist(df.Ins_Age)
plt.savefig('images/hist_Ins_Age.png')
print df.Ins_Age.describe()
print ""

plt.figure(2)
plt.title("Continuous - Histogram for BMI")
plt.xlabel("Normalized BMI [0,1]")
plt.ylabel("Frequency")
plt.hist(df.BMI)
plt.savefig('images/hist_BMI.png')
print df.BMI.describe()
print ""

plt.figure(3)
plt.title("Continuous - Histogram for Wt")
plt.xlabel("Normalized Wt [0,1]")
plt.ylabel("Frequency")
plt.hist(df.Wt)
plt.savefig('images/hist_Wt.png')
print df.Wt.describe()
print ""

plt.show()
.ipynb_checkpoints/data-exploration-life-insurance-checkpoint.ipynb
ramabrahma/data-sci-int-capstone
gpl-3.0
Histograms and descriptive statistics for Product_Info_1-7
for i in range(1,8):
    print "The iteration is: "+str(i)
    print df['Product_Info_'+str(i)].describe()
    print ""

    plt.figure(i)

    if(i == 4):
        plt.title("Continuous - Histogram for Product_Info_"+str(i))
        plt.xlabel("Normalized value: [0,1]")
        plt.ylabel("Frequency")
    else:
        plt.title("Categorical - Histogram of Product_Info_"+str(i))
        plt.xlabel("Categories")
        plt.ylabel("Frequency")

    if(i == 2):
        df.Product_Info_2.value_counts().plot(kind='bar')
    else:
        plt.hist(df['Product_Info_'+str(i)])

    plt.savefig('images/hist_Product_Info_'+str(i)+'.png')

plt.show()
.ipynb_checkpoints/data-exploration-life-insurance-checkpoint.ipynb
ramabrahma/data-sci-int-capstone
gpl-3.0