markdown: string (0 to 1.02M chars)
code: string (0 to 832k chars)
output: string (0 to 1.02M chars)
license: string (3 to 36 chars)
path: string (6 to 265 chars)
repo_name: string (6 to 127 chars)
Now this is much closer to the performance of the `RandomForestRegressor` (but not quite there yet). Let's check the best hyperparameters found:
rnd_search.best_params_
_____no_output_____
Apache-2.0
02_end_to_end_machine_learning_project.ipynb
Ruqyai/handson-ml2
This time the search found a good set of hyperparameters for the RBF kernel. Randomized search tends to find better hyperparameters than grid search in the same amount of time. Let's look at the exponential distribution we used, with `scale=1.0`. Note that some samples are much larger or smaller than 1.0, but when you look at the log of the distribution, you can see that most values are actually concentrated roughly in the range of exp(-2) to exp(+2), which is about 0.1 to 7.4.
expon_distrib = expon(scale=1.)
samples = expon_distrib.rvs(10000, random_state=42)
plt.figure(figsize=(10, 4))
plt.subplot(121)
plt.title("Exponential distribution (scale=1.0)")
plt.hist(samples, bins=50)
plt.subplot(122)
plt.title("Log of this distribution")
plt.hist(np.log(samples), bins=50)
plt.show()
_____no_output_____
Apache-2.0
02_end_to_end_machine_learning_project.ipynb
Ruqyai/handson-ml2
The distribution we used for `C` looks quite different: the scale of the samples is picked from a uniform distribution within a given range, which is why the right graph, which represents the log of the samples, looks roughly constant. This distribution is useful when you don't have a clue of what the target scale is:
reciprocal_distrib = reciprocal(20, 200000)
samples = reciprocal_distrib.rvs(10000, random_state=42)
plt.figure(figsize=(10, 4))
plt.subplot(121)
plt.title("Reciprocal distribution (20, 200000)")
plt.hist(samples, bins=50)
plt.subplot(122)
plt.title("Log of this distribution")
plt.hist(np.log(samples), bins=50)
plt.show()
_____no_output_____
Apache-2.0
02_end_to_end_machine_learning_project.ipynb
Ruqyai/handson-ml2
The reciprocal distribution is useful when you have no idea what the scale of the hyperparameter should be (indeed, as you can see on the figure on the right, all scales are equally likely, within the given range), whereas the exponential distribution is best when you know (more or less) what the scale of the hyperparameter should be. 3. Question: Try adding a transformer in the preparation pipeline to select only the most important attributes.
from sklearn.base import BaseEstimator, TransformerMixin

def indices_of_top_k(arr, k):
    return np.sort(np.argpartition(np.array(arr), -k)[-k:])

class TopFeatureSelector(BaseEstimator, TransformerMixin):
    def __init__(self, feature_importances, k):
        self.feature_importances = feature_importances
        self.k = k
    def fit(self, X, y=None):
        self.feature_indices_ = indices_of_top_k(self.feature_importances, self.k)
        return self
    def transform(self, X):
        return X[:, self.feature_indices_]
_____no_output_____
Apache-2.0
02_end_to_end_machine_learning_project.ipynb
Ruqyai/handson-ml2
Note: this feature selector assumes that you have already computed the feature importances somehow (for example using a `RandomForestRegressor`). You may be tempted to compute them directly in the `TopFeatureSelector`'s `fit()` method, however this would likely slow down grid/randomized search since the feature importances would have to be computed for every hyperparameter combination (unless you implement some sort of cache). Let's define the number of top features we want to keep:
k = 5
_____no_output_____
Apache-2.0
02_end_to_end_machine_learning_project.ipynb
Ruqyai/handson-ml2
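As a side note, here is a minimal sketch of where `feature_importances` could come from (an assumption, since that step is not shown in this extract): a `RandomForestRegressor` fitted on the prepared training data earlier in the notebook exposes them directly.
```python
# Hedged sketch: housing_prepared / housing_labels are assumed to be the
# prepared training set and labels defined earlier in the notebook.
from sklearn.ensemble import RandomForestRegressor

forest_reg = RandomForestRegressor(n_estimators=30, random_state=42)
forest_reg.fit(housing_prepared, housing_labels)
feature_importances = forest_reg.feature_importances_
```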
Now let's look for the indices of the top k features:
top_k_feature_indices = indices_of_top_k(feature_importances, k)
top_k_feature_indices
np.array(attributes)[top_k_feature_indices]
_____no_output_____
Apache-2.0
02_end_to_end_machine_learning_project.ipynb
Ruqyai/handson-ml2
Let's double check that these are indeed the top k features:
sorted(zip(feature_importances, attributes), reverse=True)[:k]
_____no_output_____
Apache-2.0
02_end_to_end_machine_learning_project.ipynb
Ruqyai/handson-ml2
Looking good... Now let's create a new pipeline that runs the previously defined preparation pipeline, and adds top k feature selection:
preparation_and_feature_selection_pipeline = Pipeline([
    ('preparation', full_pipeline),
    ('feature_selection', TopFeatureSelector(feature_importances, k))
])
housing_prepared_top_k_features = preparation_and_feature_selection_pipeline.fit_transform(housing)
_____no_output_____
Apache-2.0
02_end_to_end_machine_learning_project.ipynb
Ruqyai/handson-ml2
Let's look at the features of the first 3 instances:
housing_prepared_top_k_features[0:3]
_____no_output_____
Apache-2.0
02_end_to_end_machine_learning_project.ipynb
Ruqyai/handson-ml2
Now let's double check that these are indeed the top k features:
housing_prepared[0:3, top_k_feature_indices]
_____no_output_____
Apache-2.0
02_end_to_end_machine_learning_project.ipynb
Ruqyai/handson-ml2
Works great! :) 4. Question: Try creating a single pipeline that does the full data preparation plus the final prediction.
prepare_select_and_predict_pipeline = Pipeline([
    ('preparation', full_pipeline),
    ('feature_selection', TopFeatureSelector(feature_importances, k)),
    ('svm_reg', SVR(**rnd_search.best_params_))
])
prepare_select_and_predict_pipeline.fit(housing, housing_labels)
_____no_output_____
Apache-2.0
02_end_to_end_machine_learning_project.ipynb
Ruqyai/handson-ml2
Let's try the full pipeline on a few instances:
some_data = housing.iloc[:4]
some_labels = housing_labels.iloc[:4]

print("Predictions:\t", prepare_select_and_predict_pipeline.predict(some_data))
print("Labels:\t\t", list(some_labels))
_____no_output_____
Apache-2.0
02_end_to_end_machine_learning_project.ipynb
Ruqyai/handson-ml2
Well, the full pipeline seems to work fine. Of course, the predictions are not fantastic: they would be better if we used the best `RandomForestRegressor` that we found earlier, rather than the best `SVR`. 5. Question: Automatically explore some preparation options using `GridSearchCV`.
param_grid = [{
    'preparation__num__imputer__strategy': ['mean', 'median', 'most_frequent'],
    'feature_selection__k': list(range(1, len(feature_importances) + 1))
}]

grid_search_prep = GridSearchCV(prepare_select_and_predict_pipeline, param_grid, cv=5,
                                scoring='neg_mean_squared_error', verbose=2)
grid_search_prep.fit(housing, housing_labels)
grid_search_prep.best_params_
_____no_output_____
Apache-2.0
02_end_to_end_machine_learning_project.ipynb
Ruqyai/handson-ml2
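As a follow-up to the remark above that the best `RandomForestRegressor` found earlier would predict better than the best `SVR`: swapping it in would only require changing the last step of the pipeline. A hedged sketch (not run here; `grid_search.best_params_` is assumed to hold the forest's hyperparameters from the earlier search, which is not shown in this extract):
```python
# Sketch only: replace the SVR with the previously found forest.
prepare_select_and_predict_forest = Pipeline([
    ('preparation', full_pipeline),
    ('feature_selection', TopFeatureSelector(feature_importances, k)),
    ('forest_reg', RandomForestRegressor(**grid_search.best_params_, random_state=42)),
])
prepare_select_and_predict_forest.fit(housing, housing_labels)
```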
Extracting embeddings with ALBERT
With Hugging Face transformers, we can use the ALBERT model just like we used BERT. Let's explore this with a small example. Suppose we need to get the contextual word embedding of every word in the sentence "Paris is a beautiful city". Let's see how to do that with ALBERT. Import the necessary modules:
!pip install transformers==3.5.1
from transformers import AlbertTokenizer, AlbertModel
_____no_output_____
MIT
Chapter04/.ipynb_checkpoints/4.03. Extracting embeddings with ALBERT-checkpoint.ipynb
shizukanaskytree/Getting-Started-with-Google-BERT
Download and load the pre-trained Albert model and tokenizer. In this tutorial, we use the ALBERT-base model:
model = AlbertModel.from_pretrained('albert-base-v2') tokenizer = AlbertTokenizer.from_pretrained('albert-base-v2')
_____no_output_____
MIT
Chapter04/.ipynb_checkpoints/4.03. Extracting embeddings with ALBERT-checkpoint.ipynb
shizukanaskytree/Getting-Started-with-Google-BERT
Now, feed the sentence to the tokenizer and get the preprocessed input:
sentence = "Paris is a beautiful city" inputs = tokenizer(sentence, return_tensors="pt")
_____no_output_____
MIT
Chapter04/.ipynb_checkpoints/4.03. Extracting embeddings with ALBERT-checkpoint.ipynb
shizukanaskytree/Getting-Started-with-Google-BERT
Let's print the inputs:
print(inputs)
{'input_ids': tensor([[ 2, 1162, 25, 21, 1632, 136, 3]]), 'token_type_ids': tensor([[0, 0, 0, 0, 0, 0, 0]]), 'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1]])}
MIT
Chapter04/.ipynb_checkpoints/4.03. Extracting embeddings with ALBERT-checkpoint.ipynb
shizukanaskytree/Getting-Started-with-Google-BERT
Now we just feed the inputs to the model and get the result. The model returns the hidden_rep which contains the hidden state representation of all the tokens from the final encoder layer and cls_head which contains the hidden state representation of the [CLS] token from the final encoder layer:
hidden_rep, cls_head = model(**inputs)
_____no_output_____
MIT
Chapter04/.ipynb_checkpoints/4.03. Extracting embeddings with ALBERT-checkpoint.ipynb
shizukanaskytree/Getting-Started-with-Google-BERT
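A small follow-up sketch, assuming the tuple output of `transformers==3.5.1` used above (newer versions return a dict unless `return_dict=False`): `hidden_rep` has shape `[1, sequence_length, hidden_size]`, so the contextual embedding of a single token can be pulled out by indexing.
```python
print(hidden_rep.shape)  # expected: torch.Size([1, 7, 768]) -> [CLS] + 5 tokens + [SEP]
print(cls_head.shape)    # expected: torch.Size([1, 768])

# The embedding of the first real token ("paris") sits right after [CLS]:
paris_embedding = hidden_rep[0][1]
print(paris_embedding.shape)  # torch.Size([768])
```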
module name here> API details.
#hide
from nbdev.showdoc import *

#export
import numpy

class Matrix():
    """
    Class generates a zero matrix
    """
    def __init__(self, n_matrix:int, m_matrix:int):
        self.n = n_matrix
        self.m = m_matrix

    def make_matrix(self):
        return numpy.zeros(self.n * self.m).reshape(self.n, self.m)
_____no_output_____
Apache-2.0
00_core.ipynb
VladislavYak/test_repo
Matrix
a = Matrix(2, 5) a.make_matrix()
_____no_output_____
Apache-2.0
00_core.ipynb
VladislavYak/test_repo
1. Hashing task! * We've put all passwords from passwords1.txt and passwords2.txt in two lists: listPasswords1 and listPasswords2
listPasswords1 = []
listPasswords2 = []

filePassword1 = open("passwords1.txt", 'r')
lines_p1 = filePassword1.readlines()
for line in lines_p1:
    listPasswords1.append(line.strip())

filePassword2 = open("passwords2.txt", 'r')
lines_p2 = filePassword2.readlines()
for line in lines_p2:
    listPasswords2.append(line.strip())
_____no_output_____
MIT
main.ipynb
AlessandroTaglieri/ADM-HW4
We have set all the variables that we need to build our Bloom Filter:
* n = Number of items in the filter
* p = Probability of false positives, a fraction between 0 and 1
* m = Number of bits in the filter (size of the Bloom Filter bit-array)
* k = Number of hash functions

We made the following considerations. n is the number of passwords in passwords1.txt (i.e. in listPasswords1). We need to calculate p, m and k from the following formulas:
\begin{align*}k = \frac{m}{n}\ln(2)\end{align*}
\begin{align*}m = -\frac{n\ln(p)}{(\ln 2)^{2}}\end{align*}
Since we don't know the value of any of these variables, following https://hackernoon.com/probabilistic-data-structures-bloom-filter-5374112a7832 (the link suggested in the track) we initially set p = 0.01. p must be a value between 0 and 1: the smaller p is, the lower the probability of a false positive when searching the Bloom filter (a Bloom filter can have false positives but NOT false negatives). We chose this value (0.01) after several tests; we think it gives an optimal trade-off and reasonable values for k and m. Finally, we calculated k and m with the previous formulas.
import math

n = len(listPasswords1)
p = 0.01
m = math.ceil((n * math.log(p)) / math.log(1 / pow(2, math.log(2))))
k = round((m / n) * math.log(2))
_____no_output_____
MIT
main.ipynb
AlessandroTaglieri/ADM-HW4
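As a quick sanity check on these formulas: for p = 0.01 the number of hash functions does not depend on n, because k = (m/n)·ln(2) and m/n = -ln(p)/(ln 2)². The small check below reproduces the k = 7 reported in the output further down.
```python
import math

p = 0.01
bits_per_item = -math.log(p) / (math.log(2) ** 2)  # m / n, roughly 9.59 bits per password
k_check = round(bits_per_item * math.log(2))       # roughly 6.64, rounded to 7
print(bits_per_item, k_check)
```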
The following function is our hash function. We wrote our fnv function from scratch, based on the FNV hash function. FNV hashes are designed to be fast while maintaining a low collision rate, which allows one to quickly hash lots of data. To build a Bloom filter, we need several hash functions that are independent, uniformly distributed and fast. From the previous formulas, we calculated the number of different hash functions that we should use. We set a variable called 'seed', an integer from 0 to k (the number of hash functions); in this way we get k different hash functions that transform an input string into k different numbers. These k numbers are the indices of the bit-array entries that represent the input password. The initial values of 'FNV_prime' and 'offset_basis' are taken from the 'FNV hash parameters' section of this Wikipedia page: https://en.wikipedia.org/wiki/Fowler–Noll–Vo_hash_function.
def fnv1_64(password, seed=0):
    """
    Returns: The FNV-1 hash of a given string.
    """
    # Constants
    FNV_prime = 1099511628211
    offset_basis = 14695981039346656037

    # FNV-1 hash function (multiply, then XOR), shifted by the seed
    hash = offset_basis + seed
    for char in password:
        hash = hash * FNV_prime
        hash = hash ^ ord(char)
    return hash
_____no_output_____
MIT
main.ipynb
AlessandroTaglieri/ADM-HW4
The following class 'BloomFilter' represents our Bloom Filter. It has three attributes:
* sizeArray = dimension of the bit-array
* number_HashFucntion = number of hash functions
* array_BloomFilter = bit-array of the Bloom filter

It also has two methods:
* init: it takes, as parameters, our Bloom filter, k (the number of hash functions calculated with the previous formula) and m (the size of the bit-array calculated above). It initializes the bit-array of the Bloom filter (size = m) with all '0', its size with m and its number of hash functions with k.
* add: this method adds the elements of an input list to our Bloom filter. It takes this list and our Bloom filter as parameters and sets the corresponding '1' and '0' entries of the bit-array.
class BloomFilter:
    sizeArray = 0
    number_HashFucntion = 0
    array_BloomFilter = []

    @property
    def size(self):
        return self.sizeArray

    @property
    def numHash(self):
        return self.number_HashFucntion

    @property
    def arrayBloom(self):
        return self.array_BloomFilter

    def init(self, k, m):
        self.sizeArray = m
        self.number_HashFucntion = k
        for i in range(m):
            self.array_BloomFilter.append(0)

    def add(self, strings):
        #print(self.number_HashFucntion)
        #print(self.sizeArray)
        h = 0
        for psw in strings:
            for seed in range(self.number_HashFucntion):
                index = fnv1_64(psw, seed) % self.sizeArray
                self.array_BloomFilter[index] = 1
_____no_output_____
MIT
main.ipynb
AlessandroTaglieri/ADM-HW4
Then we made a function 'checkPassw' that checks how many passwords in listPasswords2 (i.e. in passwords2.txt) are in our Bloom Filter. It takes our Bloom filter and the list of passwords from passwords2.txt and returns how many of them are in the Bloom filter. A password is in our Bloom filter if and only if its conversion with the hash functions corresponds to all '1's in the bit-array; if there is even one '0', the current password is not in the Bloom filter. 'countCheck' is the number of passwords from passwords2.txt found in the Bloom filter, i.e. the passwords that are 'probably' also in passwords1.txt.
def checkPassw(BloomFilter, listPasswords2):
    countCheck = 0
    for psw in listPasswords2:
        count = 0
        for seed in range(BloomFilter.number_HashFucntion):
            index = fnv1_64(psw, seed) % BloomFilter.sizeArray
            if BloomFilter.array_BloomFilter[index] == 1:
                count += 1
        if count == BloomFilter.number_HashFucntion:
            countCheck += 1
    return countCheck
_____no_output_____
MIT
main.ipynb
AlessandroTaglieri/ADM-HW4
Bonus section: we calculate the number of false positives. The following function 'falsePositives' computes the exact number of false positives. It takes the following parameters:
* BloomFilter = our Bloom filter
* listPasswords1 = passwords from passwords1.txt
* listPasswords2 = passwords from passwords2.txt

For every password in listPasswords2, we check whether it is in the Bloom filter. If it is, it is 'probably' in listPasswords1 (if even one of its hashed positions is '0', it is certainly not in the filter). To check whether it is a false positive (i.e. it is in the Bloom filter but not in listPasswords1), we then verify whether this password actually occurs in listPasswords1: if it does not, it is a false positive and we increment the counter 'countFalsePositives'; if it does, it really is in listPasswords1 and we continue with the next password in listPasswords2. We used a SET structure to verify membership in listPasswords1, because finding an element in a large collection is much faster with a set than with a list, so we build a set 's' from listPasswords1 (see https://stackoverflow.com/questions/7571635/fastest-way-to-check-if-a-value-exists-in-a-list).
def falsePositives(BloomFilter, listPasswords1, listPasswords2):
    s = set(listPasswords1)
    countFalsePositives = 0
    for psw in listPasswords2:
        count = 0
        for seed in range(BloomFilter.number_HashFucntion):
            index = fnv1_64(psw, seed) % BloomFilter.sizeArray
            if (BloomFilter.array_BloomFilter[index] == 1):
                count += 1
            else:
                break
        if count == BloomFilter.number_HashFucntion:
            if not(psw in s):
                countFalsePositives += 1
                #print(psw)
    return countFalsePositives
_____no_output_____
MIT
main.ipynb
AlessandroTaglieri/ADM-HW4
Then we wrote the main function, as described in the homework track. This function performs the following steps:
* init the bit-array, its size and the number of hash functions of our Bloom filter, with 'BloomFilter.init(BloomFilter, k, m)'
* add the passwords from listPasswords1 to our Bloom filter, with 'BloomFilter.add(BloomFilter, listPasswords1)'
* calculate how many passwords (from passwords2.txt) are present in our Bloom filter, with 'checkPassw(BloomFilter, listPasswords2)'

Finally we print the following:
* Number of hash functions used
* Number of duplicates detected
* Probability of false positives
* Execution time
import time

def BloomFilterFunc(listPasswords1, listPasswords2):
    start = time.time()
    # init our bloom filter
    BloomFilter.init(BloomFilter, k, m)
    # add all passwords from listPasswords1 to our bloom filter
    BloomFilter.add(BloomFilter, listPasswords1)
    # check and save into 'countPassw' the number of occurrences of passwords (from passwords2) in the bloom filter
    countPassw = checkPassw(BloomFilter, listPasswords2)
    end = time.time()
    # print output data
    print('Number of hash function used: ', k)
    print('Number of duplicates detected: ', countPassw)
    print('Probability of false positives: ', p)
    print('Execution time: ', end - start)
_____no_output_____
MIT
main.ipynb
AlessandroTaglieri/ADM-HW4
* Execute main function
BloomFilterFunc(listPasswords1, listPasswords2)
Number of hash function used:  7
Number of duplicates detected:  14251447
Probability of false positives:  0.01
Execution time:  4041.6586632728577
MIT
main.ipynb
AlessandroTaglieri/ADM-HW4
* Execute bonus section
falsPositive=falsePositives(BloomFilter,listPasswords1,listPasswords2) print('Number of false positive: ', falsPositive)
Number of false positive: 251447
MIT
main.ipynb
AlessandroTaglieri/ADM-HW4
2. Alphabetical Sort. Given a set of words, a common natural task is to sort them in alphabetical order. It is something you have surely already done in your life, using your own algorithm without maybe knowing it. In order for everyone to be on the same page, we will refer to the rules defined here; for multi-word strings, let's stick with the first policy proposed there. What you might not know is that we can relate this task to a simple algorithm that runs in linear time: Counting Sort. Counting Sort is based on a simple assumption: you know the range of the possible values of the instances you have to sort. In this exercise you are asked to perform Alphabetical Sort exploiting the Counting Sort algorithm.
import string
import numpy as np

lower = list(string.ascii_lowercase)
upper = string.ascii_uppercase
lower
_____no_output_____
MIT
main.ipynb
AlessandroTaglieri/ADM-HW4
Build your own implementation of Counting Sort... Here is the counting sort algorithm, with a bit of innovation and fewer assignments than the original version explained on the website attached to the homework.
def s_counting(A):
    m = max(A)
    sorted_A = []
    temp = 0
    d = [0] * (m + 1)
    for i in range(len(A)):
        d[A[i]] += 1
    for x, y in enumerate(d):
        sorted_A += [x] * y
    return sorted_A

A = [0, 3, 2, 3, 3, 0, 5, 2, 3]
s_counting(A)
_____no_output_____
MIT
main.ipynb
AlessandroTaglieri/ADM-HW4
Build an algorithm, based on your implementation of Counting Sort, that receives in input a list with all the letters of the alphabet (not in alphabetical order), and returns the list ordered alphabetically. We continue with the same approach as in the first part here.
def sort_letters(B):
    B = list(''.join(B).lower())
    m = len(B)
    d = []
    sorted_letters = []
    for i in B:
        d.append(lower.index(i))
    c = s_counting(d)
    for j in c:
        sorted_letters += lower[j]
    return sorted_letters

B = ['p', 'w', 'x', 'k', 'p', 'a', 'c', 'a', 'b', 'a', 'c']
sort_letters(B)
_____no_output_____
MIT
main.ipynb
AlessandroTaglieri/ADM-HW4
Build an algorithm, based on your implementation of Counting Sort, that receives in input a list of length m containing words with maximum length equal to n, and returns the list ordered alphabetically. Here is the first algorithm: it tries to follow the order of the counting sort algorithm and is our first approach to the task.
Lower = lower.copy()
Lower.insert(0, '')

C = ['words', 'amount', 'efficiently', 'thumb', 'rule', 'solvable', 'Another', 'open', 'problem', 'is', 'whether']
C = [a.lower() for a in C]
m = max([len(e) for e in C])

d = [[] for _ in C]
for i in range(len(C)):
    w2l = list(C[i])
    l2num = [Lower.index(a) for a in w2l]
    d[i] = l2num + [0] * (m - len(l2num))

sorted_d = [[] for _ in d]
for x in reversed(range(m)):
    count = [0] * len(Lower)
    cum_count = [0] * len(Lower)
    for e in range(len(d)):
        count[d[e][x]] += 1
    for r in range(len(count)):
        if count[r] > 0:
            for g in range(len(d)):
                if r == d[g][x]:
                    sorted_d.append(d[g])

final_sort = []
for h in sorted_d[-len(d):]:
    final_sort.append(''.join([Lower[a] for a in h]))
print(final_sort)
['amount', 'another', 'efficiently', 'is', 'open', 'problem', 'rule', 'solvable', 'thumb', 'words', 'whether']
MIT
main.ipynb
AlessandroTaglieri/ADM-HW4
In this algorithm we have some limitations due to the big numbers involved. We tried to fix the problem by normalizing the numbers between 0 and 1000; however, this can only distinguish words that differ within roughly the first 3 to 5 letters, and beyond that it becomes really time-consuming to compute and normally ends up with errors.
x = ['words', 'amount', 'are', 'efficiently', 'thumb', 'rule', 'solvable', 'Another', 'open', 'problem', 'is', 'whether']
#x = ['asd', 'bedf','mog','zor','bze']
#x = input('Enter the words, seperated by comma(,)').split(',')
x_2 = x.copy()

# Finding the longest word
max_length = 0
for i in x:
    if max_length < len(list(i)):
        max_length = len(list(i))

# Turning words to numbers
for i in range(len(x)):
    x[i] = convert_w_2_n(x[i], max_length)

# Normalize between 0 to 1000
mx = max(x)
mn = min(x)
for i, j in enumerate(x):
    x[i] = int((10**(3)) * (j - mn) / (mx - mn))

max_int = max(x)
aux_array = [0] * (max_int + 1)
aux_array_2 = aux_array

# Step 1 : counting
for i in x:
    aux_array[i] = aux_array[i] + 1

# Step 2 : cumulating
aux_array_2[0] = aux_array[0]
for i in range(1, max_int + 1):
    aux_array_2[i] = aux_array_2[i - 1] + aux_array[i]

# Sorting
final_list = [''] * len(x)
for i in reversed(range(len(x))):  # e.g: i = 2, 1, 0
    final_list[aux_array_2[x[i]] - 1] = x_2[i]
    aux_array_2[x[i]] = aux_array_2[x[i]] - 1
print(final_list)
['Another', 'amount', 'are', 'efficiently', 'is', 'open', 'problem', 'rule', 'solvable', 'thumb', 'whether', 'words']
MIT
main.ipynb
AlessandroTaglieri/ADM-HW4
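The cell above relies on a helper `convert_w_2_n` that is not shown in this extract. A plausible sketch of such a helper (an assumption, not the authors' code) would encode a word as a base-27 integer, padding to `max_length` so that shorter words compare first, using the `Lower` list defined earlier:
```python
# Hypothetical helper, only for illustration.
def convert_w_2_n(word, max_length):
    digits = [Lower.index(ch) for ch in word.lower()]  # 1..26 per letter
    digits += [0] * (max_length - len(digits))         # pad with 0 (the '' sentinel)
    number = 0
    for d in digits:
        number = number * 27 + d
    return number
```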
Final version, with the best performance and without any limitation. Here we rewrite the counting sort algorithm again and we keep the order. 'order' is the list of original indices: if we use the order list as an index we can sort the input, and it shows how the elements move from where to where to obtain the final array.
# sorting
def sort_counting(A):
    m = max(A)
    sorted_A = []
    temp = 0
    d = [0] * (m + 1)
    for i in range(len(A)):
        d[A[i]] += 1
    cum_d = [0] * (m + 1)
    for i in d:
        cum_d.append(i)
    order = list()
    for x, y in enumerate(d):
        sorted_A += [x] * y
        for i in range(y):
            order.append(A.index(x))
            A[order[-1]] = -1
    return sorted_A, order
_____no_output_____
MIT
main.ipynb
AlessandroTaglieri/ADM-HW4
Now we use the counting sort algorithm and its order to make this work. We used a pure counting sort algorithm and keep the big-O time in $O(n)$. The total time it uses is $T(n) = kn + c$, which has exactly the same big O and the same principle as counting sort.
Lower = lower.copy()
Lower.insert(0, '')
import numpy as np

C = ['Alessio', 'Alessandro', 'Angela', 'Alessand', 'Anita', 'Anna', 'Alessandrx', 'Arianna', 'Alessandra']
C = [a.lower() for a in C]
m = max([len(e) for e in C])

d = [[] for _ in C]
final_sort = C.copy()
for i in range(len(C)):
    w2l = list(C[i])
    l2num = [Lower.index(a) for a in w2l]
    d[i] = l2num + [0] * (m - len(l2num))

dd = np.array(d)
for x in reversed(range(m)):
    count, order = sort_counting(list(dd[:, x]))
    temp = [[] for _ in d]
    temp_sort = final_sort.copy()
    for i, j in enumerate(order):
        temp[i] += list(dd[j])
        final_sort[i] = temp_sort[j]
    dd = np.array(temp)

final_sort
_____no_output_____
MIT
main.ipynb
AlessandroTaglieri/ADM-HW4
3. Find similar wines! Imports
import random
import pandas as pd
import numpy as np
from collections import defaultdict
import matplotlib
import matplotlib.pyplot as plt
_____no_output_____
MIT
main.ipynb
AlessandroTaglieri/ADM-HW4
Functions created for implementing the kMeans Create the Euclidean distance
def distance_2(vec1, vec2):
    if len(vec1) == len(vec2):
        if len(vec1) > 1:
            add = 0
            for i in range(len(vec1)):
                add = add + (vec1[i] - vec2[i])**2
            return add**(1/2)
        else:
            return abs(vec1 - vec2)
    else:
        return "Wrong Input"
_____no_output_____
MIT
main.ipynb
AlessandroTaglieri/ADM-HW4
We will now check the dissimilarity of our clusters. To do that we need to define the variability of every cluster. Meaning the sum of the distances of every element in the cluster from the mean(centroid).
def dissimilarity(cluster):
    def kmeansreduce(centroid, dictionary):
        a = dictionary[centroid]
        if len(a) > 0:
            vector = a[0]
            for i in range(1, len(a)):
                vector = np.add(vector, a[i])
            return vector
        else:
            pass

    var = []
    add = 0
    for i in range(len(cluster.keys())):
        if len(cluster[i]) > 0:
            m = kmeansreduce(i, cluster) / len(cluster[i])
            for j in range(len(cluster[i])):
                add = add + distance_2(m, cluster[i][j])
    return(add)
_____no_output_____
MIT
main.ipynb
AlessandroTaglieri/ADM-HW4
Compute the sum of the squared distance between data points and all centroids (distance_2). Assign each data point to the closest cluster (clusters dictionary). Compute the centroids for the clusters by taking the average of the all data points that belong to each cluster.(initial centroids) We define also two functions to show that this algorithm can be done by MapReduce method
def kmeans(data, k):
    def kmeansmap(information, num_centroids, centroids):
        clusters = defaultdict(list)
        for i in range(num_centroids):
            clusters[i] = []
        classes = defaultdict(list)
        for i in range(information.shape[0]):
            d = []
            for j in range(num_centroids):
                d.append(distance_2(information[i,], centroids[j]))
            clusters[np.argmin(d, axis=0)].append(information[i,])
            classes[i].append(np.argmin(d, axis=0))
        return [clusters, classes]

    def kmeansreduce(centroid, dictionary):
        a = dictionary[centroid]
        if len(a) > 0:
            vector = a[0]
            for i in range(1, len(a)):
                vector = np.add(vector, a[i])
            return vector
        else:
            pass

    # ===================================================================================
    initial_centroids = random.sample(list(data), k)
    while True:
        dict1 = kmeansmap(data, k, initial_centroids)[0]
        dict2 = kmeansmap(data, k, initial_centroids)[1]
        dict3 = defaultdict(list)
        for i in range(k):
            dict3[i] = []
        old_clusters = initial_centroids
        for i in range(k):
            dict3[i] = kmeansreduce(i, dict1)
            if len(dict3[i]) > 0:
                initial_centroids[i] = dict3[i] / len(dict3[i])
        if old_clusters == initial_centroids:
            break
    return [dict1, dict2]
_____no_output_____
MIT
main.ipynb
AlessandroTaglieri/ADM-HW4
Now we will apply the algorithms and functions to the data. To implement the algorithms, we will first clean the data a bit.
url = r"C:\Users\HP\Documents\ADM\HW 4\wine.data"
header = ["Class", "Alcohol", "Malic acid", "Ash", "Alcalinity of ash", "Magnesium",
          "Total phenols", "Flavanoids", "Nonflavanoid phenols", "Proanthocyanins",
          "Color intensity", "Hue", "OD280/OD315 of diluted wines", "Proline"]
data = pd.read_table(url, delimiter=",", names=header)
data.head(3)
_____no_output_____
MIT
main.ipynb
AlessandroTaglieri/ADM-HW4
We normalize the values of the DataFrame so we can measure the distances. Some columns are not stored as floats, so normalizing them directly would raise an error; we therefore convert them to floats and then normalize them.
for col in data.columns[1:]:
    if data[col].dtype == 'int64':
        data[col] = data[col].astype("float64")

for col in data.columns[1:]:
    r = (max(data[col]) - min(data[col]))
    minimum = min(data[col])
    for i in range(len(data[col])):
        data[col][i] = (float(data[col][i]) - minimum) / r

data.head(3)
C:\Users\HP\Anaconda3\lib\site-packages\ipykernel_launcher.py:15: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy from ipykernel import kernelapp as app
MIT
main.ipynb
AlessandroTaglieri/ADM-HW4
We will not test the variable class, since this is the classification we target. So we are going to save it in a file called target and work with the other variables.
target = data["Class"] data = data.drop(columns = ["Class"]) data = data.to_numpy() data
_____no_output_____
MIT
main.ipynb
AlessandroTaglieri/ADM-HW4
This way the elements of each row are to be taken as a vector
data[1,]
_____no_output_____
MIT
main.ipynb
AlessandroTaglieri/ADM-HW4
Now we will implement the kmeans algorithm, with an unknown number of clusters. We will use the elbow method to figure whats the best number of clusters for our data. We will run the method for up to k = 10 clusters
elbow = {}
for k in range(1, 11):
    best = kmeans(data, k)
    for t in range(100):
        C = kmeans(data, k)
        if dissimilarity(C[0]) < dissimilarity(best[0]):
            best = C
    elbow[k] = dissimilarity(best[0])

plt.plot(list(elbow.keys()), list(elbow.values()))
_____no_output_____
MIT
main.ipynb
AlessandroTaglieri/ADM-HW4
From the previous plot we can figure out what's the best k for me... We will implement the kmeans algorithm for the specific k
best = kmeans(data, 3)
for t in range(100):
    C = kmeans(data, 3)
    if dissimilarity(C[0]) < dissimilarity(best[0]):
        best = C

outcome = []
for i in range(data.shape[0]):
    outcome.append(best[1][i][0] + 1)
_____no_output_____
MIT
main.ipynb
AlessandroTaglieri/ADM-HW4
We did the following commands for all the columns, and we observed that two columns/features have a big effect on the clustering of the other features. Here we will show the distribution of the features, when plotted with Magnesium and Total Phenols
cmap = matplotlib.colors.ListedColormap(["purple", "blue", "red"])

# One figure per reference feature (columns 5 and 6), scattering every other
# feature against it; this is the same set of panels as before, written as a loop.
for x_col in (5, 6):
    f, axes = plt.subplots(4, 3, figsize=(20, 20))
    y_cols = [c for c in range(1, 13) if c != x_col]
    for ax, y_col in zip(axes.flat, y_cols):
        ax.scatter(data[:, x_col], data[:, y_col], c=outcome, cmap=cmap)
        ax.set_xlabel(header[y_col])
    plt.suptitle(header[x_col])
    plt.show()
_____no_output_____
MIT
main.ipynb
AlessandroTaglieri/ADM-HW4
4. K-means can go wrong! The clustering problem is non-linear over $S \in (R^{d})^{K}$, where $d$ is the number of features in the dataset and $K$ is the number of clusters. The problem can be turned into a linear form with binary variables when $\hat{S} \subset S$ consists only of the points in the dataset. It can then be formulated as the K-median problem, which belongs to the family of MILPs (Mixed Integer Linear Programming). Since K-median is a MILP, algorithms like Branch & Bound can guarantee the global optimum over $\hat{S}$. In K-median, each centroid is one of the points in the dataset and the objective minimizes the sum of the distances to the corresponding centroids. So if K-median finds a solution $\hat{X}$ with $Cost(\hat{X})$ less than $Cost(X)$, where $X$ is the solution obtained from K-means, we can say that K-means failed to find the global optimum ($Global\: optimum \leq Cost(\hat{X}) < Cost(X)$) [Link](https://stats.stackexchange.com/questions/48757/why-doesnt-k-means-give-the-global-minimum).
K-median formulation:
$x_{i,j} = 1 \:\: \text{if point } i \text{ is in cluster } c_j \text{ whose centroid is } p_j, \text{ otherwise } 0$
$z_{j} = 1 \:\: \text{if } p_j \text{ is a centroid, otherwise } 0$
$d_{i,j} = \text{squared distance of points } i \text{ and } j$
$Cost = \sum_{i}\sum_{j}{x_{i,j}d_{i,j}}$
subject to:
$x_{i,j} \leq z_{j} \:\:\forall i,j$
$\sum_{j}{x_{i,j}} = 1 \:\:\forall i$
$\sum_{j}{z_{j}} = K$
The above example shows the true clusters and a possible solution we can get from K-means with a different seed value. K-median yields centroids with a slightly higher cost, but it finds the true clusters while K-means fails (a code sketch of this formulation is given at the end of this section). Example - Exercise 4
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
from sklearn.datasets import load_wine
import json
from sklearn.cluster import KMeans
from pandas.io.json import json_normalize

wine = load_wine()
wine.target[[10, 80, 140]]
list(wine.target_names)
wine_df = pd.DataFrame(wine.data, columns=wine.feature_names)
wine_df['target'] = wine.target
_____no_output_____
MIT
main.ipynb
AlessandroTaglieri/ADM-HW4
The real classes of the wines
plt.scatter(wine_df['alcohol'], wine_df['od280/od315_of_diluted_wines'], c=wine_df['target'])
plt.show()

kmean = KMeans(n_clusters=3, init='random').fit(wine_df[['alcohol', 'od280/od315_of_diluted_wines']])
plt.scatter(wine_df['alcohol'], wine_df['od280/od315_of_diluted_wines'], c=kmean.labels_)
plt.scatter(kmean.cluster_centers_[:, 0], kmean.cluster_centers_[:, 1], marker='X', c='r')
plt.title(label='# of iterations: {}'.format(kmean.n_iter_), loc='center')
plt.show()
_____no_output_____
MIT
main.ipynb
AlessandroTaglieri/ADM-HW4
We can see that, after choosing approximate initialization points, the clustering converges faster.
kmean = KMeans(n_clusters=3, init=np.array([[12, 3], [13, 3], [13, 1.7]])).fit(wine_df[['alcohol', 'od280/od315_of_diluted_wines']])
plt.scatter(wine_df['alcohol'], wine_df['od280/od315_of_diluted_wines'], c=kmean.labels_)
plt.scatter(kmean.cluster_centers_[:, 0], kmean.cluster_centers_[:, 1], marker='X', c='r')
plt.title(label='# of iterations: {}'.format(kmean.n_iter_), loc='center')
plt.show()
C:\ProgramData\Anaconda3\lib\site-packages\sklearn\cluster\k_means_.py:971: RuntimeWarning: Explicit initial center position passed: performing only one init in k-means instead of n_init=10 return_n_iter=True)
MIT
main.ipynb
AlessandroTaglieri/ADM-HW4
We can easily show how random initialization affects the number of iterations while the cost stays the same.
for k in range(12):
    kmean = KMeans(n_clusters=3, init='random').fit(wine_df[['alcohol', 'od280/od315_of_diluted_wines']])
    print('# of iterations: {}'.format(kmean.n_iter_), 'And inertia: {}'.format(kmean.inertia_))
# of iterations: 7 And inertia: 58.32594553894382
# of iterations: 10 And inertia: 58.32594553894382
# of iterations: 9 And inertia: 58.32594553894382
# of iterations: 10 And inertia: 58.32594553894382
# of iterations: 8 And inertia: 58.32594553894382
# of iterations: 8 And inertia: 58.32594553894382
# of iterations: 8 And inertia: 58.32594553894382
# of iterations: 6 And inertia: 58.32594553894382
# of iterations: 12 And inertia: 58.32594553894382
# of iterations: 8 And inertia: 58.32594553894382
# of iterations: 6 And inertia: 58.32594553894382
# of iterations: 4 And inertia: 58.32594553894382
MIT
main.ipynb
AlessandroTaglieri/ADM-HW4
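To tie this back to the K-median formulation at the start of this section, here is a minimal sketch of that MILP using PuLP (an assumed dependency, not used in the original notebook; `points` and `K` are placeholder inputs, and with n² binary variables this is only practical for small point sets):
```python
import numpy as np
import pulp

def k_median(points, K):
    n = len(points)
    # Squared pairwise distances d_ij
    d = {(i, j): float(np.sum((points[i] - points[j]) ** 2))
         for i in range(n) for j in range(n)}

    prob = pulp.LpProblem("k_median", pulp.LpMinimize)
    x = pulp.LpVariable.dicts("x", [(i, j) for i in range(n) for j in range(n)], cat="Binary")
    z = pulp.LpVariable.dicts("z", range(n), cat="Binary")

    # Cost = sum_i sum_j x_ij * d_ij
    prob += pulp.lpSum(x[i, j] * d[i, j] for i in range(n) for j in range(n))
    # x_ij <= z_j : points can only be assigned to chosen centroids
    for i in range(n):
        for j in range(n):
            prob += x[i, j] <= z[j]
    # each point belongs to exactly one cluster
    for i in range(n):
        prob += pulp.lpSum(x[i, j] for j in range(n)) == 1
    # exactly K centroids
    prob += pulp.lpSum(z[j] for j in range(n)) == K

    prob.solve()
    centroids = [j for j in range(n) if pulp.value(z[j]) > 0.5]
    labels = [next(j for j in range(n) if pulp.value(x[i, j]) > 0.5) for i in range(n)]
    return centroids, labels, pulp.value(prob.objective)
```
Comparing `pulp.value(prob.objective)` with the K-means inertia on the same points is exactly the check described above: if the MILP cost is strictly lower, K-means missed the global optimum.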
1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
!pip install git+https://github.com/google/starthinker
_____no_output_____
Apache-2.0
colabs/smartsheet_report_to_bigquery.ipynb
quan/starthinker
2. Get Cloud Project ID
To run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
CLOUD_PROJECT = 'PASTE PROJECT ID HERE' print("Cloud Project Set To: %s" % CLOUD_PROJECT)
_____no_output_____
Apache-2.0
colabs/smartsheet_report_to_bigquery.ipynb
quan/starthinker
3. Get Client Credentials
To read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE' print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
_____no_output_____
Apache-2.0
colabs/smartsheet_report_to_bigquery.ipynb
quan/starthinker
4. Enter SmartSheet Report To BigQuery Parameters
Move report data into a BigQuery table.
1. Specify SmartSheet Report token.
1. Locate the ID of a report by viewing its properties.
1. Provide a BigQuery dataset ( must exist ) and table to write the data into.
1. StarThinker will automatically map the correct schema.
Modify the values below for your use case, can be done multiple times, then click play.
FIELDS = {
  'auth_read': 'user',  # Credentials used for reading data.
  'auth_write': 'service',  # Credentials used for writing data.
  'token': '',  # Retrieve from SmartSheet account settings.
  'report': '',  # Retrieve from report properties.
  'dataset': '',  # Existing BigQuery dataset.
  'table': '',  # Table to create from this report.
  'schema': '',  # Schema provided in JSON list format or leave empty to auto detect.
}

print("Parameters Set To: %s" % FIELDS)
_____no_output_____
Apache-2.0
colabs/smartsheet_report_to_bigquery.ipynb
quan/starthinker
5. Execute SmartSheet Report To BigQuery
This does NOT need to be modified unless you are changing the recipe, click play.
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields

USER_CREDENTIALS = '/content/user.json'

TASKS = [
  {
    'smartsheet': {
      'auth': 'user',
      'report': {'field': {'kind': 'string','name': 'report','order': 3,'description': 'Retrieve from report properties.'}},
      'token': {'field': {'description': 'Retrieve from SmartSheet account settings.','name': 'token','order': 2,'default': '','kind': 'string'}},
      'out': {
        'bigquery': {
          'auth': 'user',
          'dataset': {'field': {'description': 'Existing BigQuery dataset.','name': 'dataset','order': 4,'default': '','kind': 'string'}},
          'table': {'field': {'description': 'Table to create from this report.','name': 'table','order': 5,'default': '','kind': 'string'}},
          'schema': {'field': {'kind': 'json','name': 'schema','order': 6,'description': 'Schema provided in JSON list format or leave empty to auto detect.'}}
        }
      }
    }
  }
]

json_set_fields(TASKS, FIELDS)

project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
_____no_output_____
Apache-2.0
colabs/smartsheet_report_to_bigquery.ipynb
quan/starthinker
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import plotly.graph_objs as go
import os
import warnings

plt.style.use('ggplot')

# Mount Google Drive
from google.colab import drive
drive.mount('/content/drive')

weather_station_location = pd.read_csv("./drive/MyDrive/study/weather_station_locations.csv")
weather = pd.read_csv("./drive/MyDrive/study/summary_of_weather.csv")

weather_station_location = weather_station_location.loc[:, ["WBAN", "NAME", "STATE/COUNTRY ID", "Latitude", "Longitude"]]
weather = weather.loc[:, ["STA", "Date", "MeanTemp"]]

weather_station_id = weather_station_location[weather_station_location.NAME == "BINDUKURI"].WBAN
weather_bin = weather[weather.STA == int(weather_station_id)]
weather_bin["Date"] = pd.to_datetime(weather_bin["Date"])

plt.figure(figsize=(22, 8))
plt.plot(weather_bin.Date, weather_bin.MeanTemp)
plt.title("Mean Temperature of Bindukuri Area")
plt.xlabel("Date")
plt.ylabel("Mean Temperature")
plt.show()

# lets create time series from weather
timeSeries = weather_bin.loc[:, ["Date", "MeanTemp"]]
timeSeries.index = timeSeries.Date
ts = timeSeries.drop("Date", axis=1)
ts

from statsmodels.tsa.seasonal import seasonal_decompose
result = seasonal_decompose(ts['MeanTemp'], model='additive', freq=7)
fig = plt.figure()
fig = result.plot()
fig.set_size_inches(20, 15)

import statsmodels.api as sm
fig = plt.figure(figsize=(20, 8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(ts, lags=20, ax=ax1)

ts_diff = ts - ts.shift()
plt.figure(figsize=(22, 8))
plt.plot(ts_diff)
plt.title("Differencing method")
plt.xlabel("Date")
plt.ylabel("Differencing Mean Temperature")
plt.show()

import statsmodels.api as sm
fig = plt.figure(figsize=(20, 8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(ts_diff[1:], lags=20, ax=ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(ts_diff[1:], lags=20, ax=ax2)  # , lags=40

# fit model
from statsmodels.tsa.arima_model import ARIMA
from pandas import datetime
model = ARIMA(ts, order=(2, 1, 2))
model_fit = model.fit(disp=0)

# predict
start_index = datetime(1944, 6, 25)
end_index = datetime(1945, 5, 31)
forecast = model_fit.predict(start=start_index, end=end_index, typ='levels')

# visualization
plt.figure(figsize=(22, 8))
plt.plot(weather_bin.Date, weather_bin.MeanTemp, label="original")
plt.plot(forecast, label="predicted")
plt.title("Time Series Forecast")
plt.xlabel("Date")
plt.ylabel("Mean Temperature")
plt.legend()
plt.show()

resi = np.array(weather_bin[weather_bin.Date >= start_index].MeanTemp) - np.array(forecast)
plt.figure(figsize=(22, 8))
plt.plot(weather_bin.Date[weather_bin.Date >= start_index], resi)
plt.xlabel("Date")
plt.ylabel("Residual")
plt.legend()
plt.show()

from sklearn import metrics

def scoring(y_true, y_pred):
    r2 = round(metrics.r2_score(y_true, y_pred) * 100, 3)
    # mae = round(metrics.mean_absolute_error(y_true, y_pred), 3)
    corr = round(np.corrcoef(y_true, y_pred)[0, 1], 3)
    mape = round(metrics.mean_absolute_percentage_error(y_true, y_pred) * 100, 3)
    rmse = round(metrics.mean_squared_error(y_true, y_pred, squared=False), 3)
    df = pd.DataFrame({'R2': r2, "Corr": corr, "RMSE": rmse, "MAPE": mape}, index=[0])
    return df

scoring(np.array(weather_bin[weather_bin.Date >= start_index].MeanTemp), np.array(forecast))
_____no_output_____
MIT
Day1_practice2.ipynb
andreYoo/Time-series-analysis-anomaly-detection
Note: This notebook assumes that you are familiar with NumPy & Pandas. No worries if you are not! Like music & MRI? You can learn NumPy and SciPy as you are making music using MRI sounds: https://www.loom.com/share/4b08c4df903c40b397e87b2ec9de572d (GitHub repo: https://github.com/agahkarakuzu/sunrise). If you are using Plotly for the first time, lucky you! > `plotly.express` is to `plotly` what `seaborn` is to `matplotlib`. If you know what `seaborn` and `matplotlib` are, you won't need further explanation to understand what `plotly.express` has to offer. If you are not familiar with any of these, forget what I just said and focus on the examples. See how you can create superb interactive figures with a single line of code! I assume that you are familiar with the [tidy Pandas data frame](https://www.jeannicholashould.com/tidy-data-in-python.html). If you've never heard of such a thing before, give it a quick read before proceeding, because this is the data format accepted by `plotly.express`. > Plotly Express supports a wide variety of charts, including otherwise verbose-to-create animations, facetted plots and multidimensional plots like Scatterplot Matrices (SPLOMs), Parallel Coordinates and Parallel Categories plots.
import plotly.express as px
_____no_output_____
MIT
Plotly_Express.ipynb
agahkarakuzu/datavis_edu
In the older version of Plotly, at this stage, you had to `init_notebook_mode()` and tell plotly that you will be using it offline. Good news: * Now plotly can automatically detect which renderer to use! * Plus, you don't have to write extra code to tell Plotly you will be working offline. Plotly figures now have the ability to display themselves in the following contexts: * JupyterLab & classic Jupyter notebook * Other notebooks like Colab, nteract, Azure & Kaggle * IDEs and CLIs like VSCode, PyCharm, QtConsole & Spyder * Other contexts such as sphinx-gallery * Dash apps (with dash_core_components.Graph()) * Static raster and vector files (with fig.write_image()) * Standalone interactive HTML files (with fig.write_html()) * Embedded into any website (with fig.to_json() and Plotly.js) Now lets import the famous `iris` dataset, which comes with `plotly.express` and display it. ![](https://miro.medium.com/max/3500/1*f6KbPXwksAliMIsibFyGJw.png)Hint: Plotly 4.0 supports tab completion as well! Type `px.` then hit the tab from your keyboard. Available methods and attributes will appear in a dropdown list.
# Read iris data into the variable named iris
iris = px.data.iris()

# Display the last 5 rows of the dataframe
iris.tail()
_____no_output_____
MIT
Plotly_Express.ipynb
agahkarakuzu/datavis_edu
Create scatter plots
As you see, the `iris` dataset has 6 columns, each having its own label. Now let's take a look at how `sepal_width` is correlated with `sepal_length`.
fig = px.scatter(iris, x="sepal_width", y="sepal_length") fig.show()
_____no_output_____
MIT
Plotly_Express.ipynb
agahkarakuzu/datavis_edu
Yes, that easy! 🎉 You can change the column indexes to observe other correlations such as `petal_length` and `petal_width`. What if you were also able to color markers with respect to the `species` category? Well, all it takes is to pass another argument :)
fig = px.scatter(iris, x="sepal_width", y="sepal_length",color='species') fig.show()
_____no_output_____
MIT
Plotly_Express.ipynb
agahkarakuzu/datavis_edu
💬**Scatter plots are not enough! I want my histograms displayed on their respective axes.** 👏Plotly express got you covered.
fig = px.scatter(iris, x="sepal_width", y="sepal_length", color="species", marginal_y="rug", marginal_x="histogram") fig.show()
_____no_output_____
MIT
Plotly_Express.ipynb
agahkarakuzu/datavis_edu
🙄 Of course scatter plots need their best fit line. And why not show `box` or `violin` plots instead of histograms and rug lines? 🚀
fig = px.scatter(iris, x="sepal_width", y="sepal_length", color="species", marginal_y="violin", marginal_x="box", trendline="ols") fig.show()
_____no_output_____
MIT
Plotly_Express.ipynb
agahkarakuzu/datavis_edu
- What is better than a scatter plot? > A scatter plot matrix! 🤯 You can explore the cross-filtering ability of SPLOM charts in plotly. Hover your cursor over a point cloud in one of the panels, and select a portion of it by left click + dragging. The selected data points will be highlighted in the remaining sub-panels! Double click to reset.
fig = px.scatter_matrix(iris, dimensions=["sepal_width", "sepal_length", "petal_width", "petal_length"], color="species") fig.show()
_____no_output_____
MIT
Plotly_Express.ipynb
agahkarakuzu/datavis_edu
Remember parallel sets? Let's create one.
In [the presentation](https://zenodo.org/record/3841775.XsqgFJ5Kg1I), we saw that parallel sets can be useful for visualizing proportions when more than two grouping variables are present. In this example, we will be working with the `tips` dataset, which has five grouping conditions: `sex`, `smoker`, `day`, `time`, `size`. Each of these will represent a column, and each column will be split into as many pieces as there are unique entries in the corresponding category. Each row represents a restaurant bill.
tips = px.data.tips()
tips.tail()

# Hint: You can change colorscale. Type px.colors.sequential. then hit tab :)
fig = px.parallel_categories(tips, color="total_bill",
                             dimensions=['sex', 'smoker', 'day', 'time', 'size'],
                             color_continuous_scale='viridis', template='plotly_dark')
fig.show()
_____no_output_____
MIT
Plotly_Express.ipynb
agahkarakuzu/datavis_edu
Sunburst chart & Treemap
**Data:** A `pandas.DataFrame` with 1704 rows and the following columns: `['country', 'continent', 'year', 'lifeExp', 'pop', 'gdpPercap', 'iso_alpha', 'iso_num']`.
df = px.data.gapminder().query("year == 2007") fig = px.sunburst(df, path=['continent', 'country'], values='pop', color='lifeExp', hover_data=['iso_alpha'],color_continuous_scale='viridis',template='plotly_white') fig.show()
_____no_output_____
MIT
Plotly_Express.ipynb
agahkarakuzu/datavis_edu
Polar coordinates
**Data**: Level of wind intensity in a cardinal direction, and its frequency.
- Scatter polar
- Line polar
- Bar polar
df = px.data.wind() fig = px.scatter_polar(df, r="frequency", theta="direction", color="strength", symbol="strength", color_discrete_sequence=px.colors.sequential.Plasma_r, template='plotly_dark') fig.show()
_____no_output_____
MIT
Plotly_Express.ipynb
agahkarakuzu/datavis_edu
Ternary plot
**Data:** Results for an electoral district in the 2013 Montreal mayoral election.
df = px.data.election() fig = px.scatter_ternary(df, a="Joly", b="Coderre", c="Bergeron", color="winner", size="total", hover_name="district", size_max=15, color_discrete_map = {"Joly": "blue", "Bergeron": "green", "Coderre":"red"}, template="plotly_dark" ) fig.show()
_____no_output_____
MIT
Plotly_Express.ipynb
agahkarakuzu/datavis_edu
See all available `px` charts, attributes and more
Plotly express gives you the liberty to change the visual attributes of the plots as you like! There are many other charts available out of the box, all of which can be plotted with a single line of code. Here is the [complete reference documentation](https://www.plotly.express/plotly_express/) for `plotly.express`.
Saving the best for last
Remember I said
> including otherwise verbose-to-create animations
at the beginning of this notebook? Show time! Let's load the `gapminder` dataset and observe the relationship between life expectancy and GDP per capita from 1952 to 2007 for five continents.
gapminder = px.data.gapminder() gapminder.tail() fig = px.scatter(gapminder, x="gdpPercap", y="lifeExp", animation_frame="year", animation_group="country", size="pop", color="continent", hover_name="country", facet_col="continent", log_x=True, size_max=45, range_x=[100,100000], range_y=[25,90]) fig.show()
_____no_output_____
MIT
Plotly_Express.ipynb
agahkarakuzu/datavis_edu
👽I know you like dark themes.
# See the last argument (template) I passed to the function. To see other alternatives
# visit https://plot.ly/python/templates/
fig = px.scatter(gapminder, x="gdpPercap", y="lifeExp", animation_frame="year",
                 animation_group="country", size="pop", color="continent",
                 hover_name="country", facet_col="continent",
                 log_x=True, size_max=45, range_x=[100, 100000], range_y=[25, 90],
                 template="plotly_dark")
fig.show()
_____no_output_____
MIT
Plotly_Express.ipynb
agahkarakuzu/datavis_edu
Let's work with our own data
We will load raw MRI data (k-space), which is saved in the `ISMRM-RD` format.
from ismrmrd import Dataset as read_ismrmrd
from ismrmrd.xsd import CreateFromDocument as parse_ismrmd_header
import numpy as np

# Here, we are just loading a 3D data into a numpy matrix, so that we can use plotly with it!
dset = read_ismrmrd('Kspace/sub-ismrm_ses-sunrise_acq-chord1.h5', 'dataset')
header = parse_ismrmd_header(dset.read_xml_header())

nX = header.encoding[0].encodedSpace.matrixSize.x
nY = header.encoding[0].encodedSpace.matrixSize.y
nZ = header.encoding[0].encodedSpace.matrixSize.z
nCoils = header.acquisitionSystemInformation.receiverChannels

raw = np.zeros((nCoils, nX, nY), dtype=np.complex64)
for tr in range(nY):
    raw[:,:,tr] = dset.read_acquisition(tr).data
_____no_output_____
MIT
Plotly_Express.ipynb
agahkarakuzu/datavis_edu
100X100 matrix, 16 receive channels
raw.shape fig = px.imshow(raw.real,color_continuous_scale='viridis',facet_col=0,facet_col_wrap=4,template='plotly_dark') fig.update_layout(title='Channel Raw')
_____no_output_____
MIT
Plotly_Express.ipynb
agahkarakuzu/datavis_edu
Simple image reconstruction
from scipy.fft import fft2, fftshift
from scipy import ndimage

im = np.zeros(raw.shape)

# Let's apply some ellipsoid filter.
raw = ndimage.fourier_ellipsoid(fftshift(raw), size=2)
#raw = ndimage.fourier_ellipsoid(raw,size=2)  # Comment in and see what it gives

for ch in range(nCoils):
    im[ch,:,:] = abs(fftshift(fft2(raw[ch,:,:])))
    # Normalize
    im[ch,:,:] /= im[ch,:,:].max()

fig = px.imshow(im, color_continuous_scale='viridis', animation_frame=0, template='plotly_dark')
fig.update_layout(title='Channel Recon').show()
_____no_output_____
MIT
Plotly_Express.ipynb
agahkarakuzu/datavis_edu
SAVE HTML OUTPUT
* This is the file under the `.docs` directory, from which a `GitHub page` is served:
![](gh_pages.png)
fig.write_html('multichannel.html')
_____no_output_____
MIT
Plotly_Express.ipynb
agahkarakuzu/datavis_edu
Binary Tree Level Order Traversal (easy) Given a binary tree, populate an array to represent its level-by-level traversal. You should populate the values of all nodes of each level from left to right in separate sub-arrays.
def get_depth(root):
    def helper(root, i):
        if not root.left and not root.right:
            return i
        r, l = 0, 0
        if root.left:
            l = helper(root.left, i + 1)
        if root.right:
            r = helper(root.right, i + 1)
        return max(l, r)
    return helper(root, 0)

def traverse(root):
    res = [[] for _ in range(get_depth(root) + 1)]
    def helper(root, i):
        res[i].append(root.val)
        if root.left:
            helper(root.left, i + 1)
        if root.right:
            helper(root.right, i + 1)
    helper(root, 0)
    return res

get_depth(root)
traverse(root)

from collections import deque

def traverse(root):
    res = []
    q = deque([root])
    while q:
        num_vals = len(q)
        level_res = []
        for _ in range(num_vals):
            r = q.popleft()
            level_res.append(r.val)
            if r.left:
                q.append(r.left)
            if r.right:
                q.append(r.right)
        res.append(level_res)
    return res

traverse(root)

def get_depth(root):
    q = deque([root])
    depth = 0
    while q:
        num_items = len(q)
        for _ in range(num_items):
            r = q.popleft()
            if r.right:
                q.append(r.right)
            if r.left:
                q.append(r.left)
        depth += 1
    return depth

get_depth(root)
_____no_output_____
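The cell above calls `get_depth(root)` and `traverse(root)` on a `root` that is not defined in this excerpt, so it was presumably built in an earlier cell. A minimal, assumed node class and sample tree for trying the functions out could look like this:
class TreeNode:
    def __init__(self, val, left=None, right=None):
        self.val = val
        self.left = left
        self.right = right

# a small sample tree:
#         1
#        / \
#       2   3
#      / \   \
#     4   5   6
root = TreeNode(1,
                TreeNode(2, TreeNode(4), TreeNode(5)),
                TreeNode(3, right=TreeNode(6)))

traverse(root)   # expected: [[1], [2, 3], [4, 5, 6]]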
MIT
ed_breadth_first_search.ipynb
devkosal/code_challenges
Reverse level order traversal
def reverse_traverse(root):
    q = deque([root])
    res = deque()
    while q:
        num_items = len(q)
        level_res = deque()
        for _ in range(num_items):
            r = q.popleft()
            level_res.append(r.val)
            if r.left:
                q.append(r.left)
            if r.right:
                q.append(r.right)
        res.appendleft(level_res)
    return res

reverse_traverse(root)
_____no_output_____
MIT
ed_breadth_first_search.ipynb
devkosal/code_challenges
zigzag order traversal
def zig_traverse(root):
    q = deque([root])
    res = []
    left_to_right = True   # flip the direction at every level
    while q:
        num_items = len(q)
        level_res = deque()
        for _ in range(num_items):
            r = q.popleft()
            # append to the back or the front depending on the current direction
            if left_to_right:
                level_res.append(r.val)
            else:
                level_res.appendleft(r.val)
            if r.left:
                q.append(r.left)
            if r.right:
                q.append(r.right)
        res.append(level_res)
        left_to_right = not left_to_right
    return res
_____no_output_____
MIT
ed_breadth_first_search.ipynb
devkosal/code_challenges
Connect All Level Order Siblings (medium) -- Problem 1
def connect_level_order(tree):
    pass
_____no_output_____
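The function above is left as a stub. Below is a minimal sketch of one possible BFS solution, not the notebook author's implementation: it reuses `deque` from the earlier cells and assumes each node carries a `next` attribute (not part of the tree nodes used so far), chaining every node to its level-order successor.
def connect_level_order(tree):
    # Walk the tree level by level and link each visited node to the
    # node visited immediately after it; the last node points to None.
    q = deque([tree])
    prev = None
    while q:
        node = q.popleft()
        if prev:
            prev.next = node   # assumed `next` attribute on the node class
        prev = node
        if node.left:
            q.append(node.left)
        if node.right:
            q.append(node.right)
    if prev:
        prev.next = None
    return tree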
MIT
ed_breadth_first_search.ipynb
devkosal/code_challenges
Plotting final spectra with MC error-bar (without MCMC error-bar)
from pathlib import Path

import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline

import plotly.offline as py
import plotly.express as px

%load_ext autoreload
%autoreload 2

%time from tail.analysis.container import Spectra, FSky, NullSpectra

basedir = Path('/scratch/largepatch_new')
path = basedir / 'high_1_0_2.hdf5'
path_cache = basedir / 'high_1_0_2_spectra_final.hdf5'
path_mask = basedir / 'masks.hdf5'
save = False

# select spectra
bin_width = 100
# narrow l-range for science
l_min = 600
l_max = 3000
ev_est = 'signal'
ddof = None

f_sky = FSky.load(path_mask)

%time spectra = Spectra.load(path_cache, bin_width, full=True, path_theory_spectra=path)

spectra.ev_est, spectra.ddof

spectra.ev_est = ev_est
spectra.ddof = ddof

%time spectra = spectra.slicing_l(l_min, l_max, ascontiguousarray=True)

spectra.pte_to_frame
_____no_output_____
BSD-3-Clause
examples/plot_spectra.ipynb
ickc/TAIL
Plotting code
fig = spectra.plot_spectra(subtract_leakage=True)
if save:
    Path('media').mkdir(exist_ok=True)
    py.plot(fig, filename='media/spectra.html', include_plotlyjs='cdn', include_mathjax='cdn')
fig

fig.update_layout(xaxis_type="log", yaxis_type="log")
if save:
    py.plot(fig, filename='media/spectra-log.html', include_plotlyjs='cdn', include_mathjax='cdn')
_____no_output_____
BSD-3-Clause
examples/plot_spectra.ipynb
ickc/TAIL
Analytical Error-bar
err_analytic = spectra.err_analytic(f_sky)
df_err = spectra.to_frame_4d(err_analytic)
df_err.T
_____no_output_____
BSD-3-Clause
examples/plot_spectra.ipynb
ickc/TAIL
TF-IDF

Drawbacks of Bag of Words
# All the words are given the same importance
# No semantic information is preserved
# The TF-IDF model is the solution for the above two problems
_____no_output_____
Unlicense
vk_NLP - TF IDF.ipynb
vitthalkcontact/NLP
Steps in TF-IDF
# 1. Lower case the corpus or paragraph.
# 2. Tokenization.
# 3. TF: Term Frequency, IDF: Inverse Document Frequency, TF-IDF = TF*IDF (the log is inside IDF).
# 4. TF = No. of occurrences of a word in a document / No. of words in that document.
# 5. IDF = log(No. of documents / No. of documents containing the word)
# 6. TFIDF(word) = TF(Document, word) * IDF(word)

import nltk
nltk.download()

paragraph = '''In a country like India with a galloping population, unfortunately nobody is paying attention to the issue of population. Political parties are feeling shy, politicians are feeling shy, Parliament also does not adequately discuss about the issue,” said Naidu while addressing the 58th convocation of Indian Agricultural Research Institute (IARI). He said, “You know how population is growing, creating problems. See the problems in Delhi, traffic, more human beings, more vehicles, more tension, less attention. If you have tension you cannot pay attention.” Emphasising on the need to increase food production to meet demand of growing population, Naidu said, “In future if population increases like this, and you are not able to adequately match it with increase in production, there will be problem'''

# Cleaning the Text
import re
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
from nltk.stem import WordNetLemmatizer

ps = PorterStemmer()
wordnet = WordNetLemmatizer()
sentences = nltk.sent_tokenize(paragraph)
# sentences
corpus = []
for i in range(len(sentences)):
    review = re.sub("[^a-zA-Z]", ' ', sentences[i])
    review = review.lower()
    review = review.split()
    review = [wordnet.lemmatize(word) for word in review if word not in set(stopwords.words('english'))]
    review = ' '.join(review)
    corpus.append(review)

# Creating the TF-IDF model
# from sklearn.feature_extraction.text import TfidfVectorizer
# cv = TfidfVectorizer()
# X = cv.fit_transform(corpus).toarray()
from sklearn.feature_extraction.text import TfidfVectorizer
tfidf = TfidfVectorizer()
X = tfidf.fit_transform(corpus)

X.toarray()
type(X)
X.shape
print(X[:,0])
print(X[:,:])
(0, 26) 0.2611488808945384 (0, 5) 0.21677716168619507 (0, 38) 0.3236873066380182 (0, 34) 0.3236873066380182 (0, 51) 0.3236873066380182 (0, 41) 0.43355432337239014 (0, 18) 0.3236873066380182 (0, 23) 0.3236873066380182 (0, 29) 0.2611488808945384 (0, 9) 0.3236873066380182 (1, 21) 0.20243884765910772 (1, 25) 0.20243884765910772 (1, 44) 0.20243884765910772 (1, 3) 0.20243884765910772 (1, 24) 0.20243884765910772 (1, 8) 0.20243884765910772 (1, 49) 0.20243884765910772 (1, 1) 0.20243884765910772 (1, 32) 0.16332638763273197 (1, 45) 0.13557565561148596 (1, 13) 0.20243884765910772 (1, 2) 0.16332638763273197 (1, 4) 0.20243884765910772 (1, 35) 0.20243884765910772 (1, 40) 0.20243884765910772 : : (3, 11) 0.3420339209721722 (3, 46) 0.3420339209721722 (3, 42) 0.2290641031273583 (3, 5) 0.2290641031273583 (4, 30) 0.18444604729119288 (4, 0) 0.18444604729119288 (4, 17) 0.18444604729119288 (4, 12) 0.18444604729119288 (4, 31) 0.18444604729119288 (4, 43) 0.36889209458238575 (4, 16) 0.18444604729119288 (4, 22) 0.5533381418735785 (4, 33) 0.18444604729119288 (4, 14) 0.18444604729119288 (4, 37) 0.18444604729119288 (4, 7) 0.18444604729119288 (4, 48) 0.14880990958778192 (4, 42) 0.12352566750705657 (4, 19) 0.14880990958778192 (4, 32) 0.14880990958778192 (4, 45) 0.12352566750705657 (4, 2) 0.14880990958778192 (4, 5) 0.12352566750705657 (4, 41) 0.24705133501411314 (4, 29) 0.14880990958778192
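To make steps 4–6 above concrete, here is a tiny hand-rolled sketch on a made-up two-document corpus (not the paragraph used above); the helper names are just for illustration:
from math import log

docs = [['population', 'is', 'growing'],
        ['population', 'needs', 'attention']]

def tf(word, doc):
    # term frequency: occurrences of the word / number of words in the document
    return doc.count(word) / len(doc)

def idf(word, docs):
    # inverse document frequency: log(no. of documents / no. of documents containing the word)
    return log(len(docs) / sum(1 for d in docs if word in d))

def tfidf(word, doc, docs):
    return tf(word, doc) * idf(word, docs)

print(tfidf('population', docs[0], docs))  # 0.0   -> the word appears in every document
print(tfidf('growing', docs[0], docs))     # ~0.23 -> (1/3) * log(2)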
Unlicense
vk_NLP - TF IDF.ipynb
vitthalkcontact/NLP
OpenStreetMap Data Case Study

Problems Encountered in the Map
Discuss the main problems with the data in the following order:
- Over-abbreviated street names ("S Tryon St Ste 105")
- Second-level "k" tags with the value "type" (which overwrites the element's previously processed node["type"] field)
- Street names in second-level "k" tags pulled from Tiger GPS data and divided into segments, in the following format:
- Unstructured unique IDs (1, 42653, 2321, 5030230)

Map Area - Dataset
In this project, I chose San Jose, a large city surrounded by rolling hills in Silicon Valley, a major technology hub in California's Bay Area. I want to learn more about the place to see what database querying reveals. This location is one of my dream work areas, as it is surrounded by world-class tech corporations.

San Jose, United States (OSM XML: 364.6 MB)
- https://mapzen.com/data/metro-extracts/metro/san-jose_california/
# -*- coding: utf-8 -*-
import pprint
import xml.etree.ElementTree as ET
from collections import defaultdict
import re
import os

DATASET = "san-jose_california.osm"  # osm filename
PATH = "./"                          # directory containing the osm file
OSMFILE = PATH + DATASET
print('Dataset folder:', OSMFILE)
Dataset folder: ./san-jose_california.osm
MIT
.ipynb_checkpoints/Data_Analyst_ND_Project3-checkpoint.ipynb
jorcus/DAND-Wrangle-OpenStreetMap-Data
Iteratively parsing the OSM file.
# mapparser.py
# iterative parsing
from mapparser import count_tags, count_tags_total

tags = count_tags(OSMFILE)
print('Numbers of tag: ', len(tags))
print('Numbers of tag elements: ', count_tags_total(tags))
pprint.pprint(tags)
Numbers of tag:  8
Numbers of tag elements:  4599618
{'bounds': 1,
 'member': 18333,
 'nd': 1965111,
 'node': 1679378,
 'osm': 1,
 'relation': 1759,
 'tag': 705634,
 'way': 229401}
MIT
.ipynb_checkpoints/Data_Analyst_ND_Project3-checkpoint.ipynb
jorcus/DAND-Wrangle-OpenStreetMap-Data
Categorize the tag keys
Categorize the tag keys into the following groups:
- "lower", for tags that contain only lowercase letters and are valid,
- "lower_colon", for otherwise valid tags with a colon in their names,
- "problemchars", for tags with problematic characters, and
- "other", for other tags that do not fall into the other three categories.
# tags.py
from tags import key_type

def process_map_tags(filename):
    keys = {"lower": 0, "lower_colon": 0, "problemchars": 0, "other": 0}
    for _, element in ET.iterparse(filename):
        keys = key_type(element, keys)
    return keys

keys = process_map_tags(OSMFILE)
pprint.pprint(keys)
{'lower': 459030, 'lower_colon': 224633, 'other': 21969, 'problemchars': 2}
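`key_type` is imported from `tags.py`, which is not shown in this excerpt. A sketch of how such a classifier is commonly written for this exercise is below; the actual regular expressions in `tags.py` may differ:
import re

lower = re.compile(r'^([a-z]|_)*$')
lower_colon = re.compile(r'^([a-z]|_)*:([a-z]|_)*$')
problemchars = re.compile(r'[=\+/&<>;\'"\?%#$@\,\. \t\r\n]')

def key_type(element, keys):
    # classify the "k" attribute of each <tag> element into one of four buckets
    if element.tag == "tag":
        k = element.attrib['k']
        if lower.search(k):
            keys['lower'] += 1
        elif lower_colon.search(k):
            keys['lower_colon'] += 1
        elif problemchars.search(k):
            keys['problemchars'] += 1
        else:
            keys['other'] += 1
    return keys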
MIT
.ipynb_checkpoints/Data_Analyst_ND_Project3-checkpoint.ipynb
jorcus/DAND-Wrangle-OpenStreetMap-Data
Number of Unique Users
As you can see, each user has their own unique ID. However, the IDs are unstructured, e.g. 1, 1005885, 1030, 100744. I structured all the unique user IDs as follows:
- 25663 => 0025663
- 951370 => 0951370
# users.py
from users import unique_user_id, max_length_user_id, structure_user_id

def test():
    users = unique_user_id(OSMFILE)
    # structured = structure_user_id(users)
    # pprint.pprint(structured)
    max_length = max_length_user_id(users)
    print('Number of users: ', len(users))
    print('User ID maximum length', max_length)

    print_limit = 10
    for user_id in users:
        if len(user_id) < max_length:
            structured_id = user_id
            while len(structured_id) < max_length:
                structured_id = str('0' + structured_id)
            if print_limit > 0:
                print_limit -= 1
                print(user_id, "=>", structured_id)
            else:
                break

if __name__ == '__main__':
    test()
Number of users:  1359
User ID maximum length 7
25663 => 0025663
951370 => 0951370
199089 => 0199089
637707 => 0637707
28145 => 0028145
941449 => 0941449
281267 => 0281267
41907 => 0041907
166129 => 0166129
173623 => 0173623
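As a side note on the padding loop above: Python's built-in `str.zfill` does the same left-padding with zeros in a single call, so the `while` loop could be replaced with something like:
# equivalent to the while-loop padding above
structured_id = '25663'.zfill(7)   # '0025663'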
MIT
.ipynb_checkpoints/Data_Analyst_ND_Project3-checkpoint.ipynb
jorcus/DAND-Wrangle-OpenStreetMap-Data
Over-abbreviated Street Names
Some street names returned by a basic query are over-abbreviated. I updated all the problematic address strings as follows:
- Seaboard Ave => Seaboard Avenue
- Cherry Ave => Cherry Avenue
# audit.py
from audit import audit, update_name, street_type_re, mapping

def test():
    st_types = audit(OSMFILE)
    # pprint.pprint(dict(st_types))  # print out dictionary of potentially incorrect street types

    print_limit = 10
    for st_type, ways in st_types.items():  # .iteritems() for python2
        for name in ways:
            if street_type_re.search(name).group() in mapping:
                better_name = update_name(name, mapping)
                if print_limit > 0:
                    print_limit -= 1
                    print (name, "=>", better_name)
                else:
                    break

if __name__ == '__main__':
    test()
Hillsdale Ave => Hillsdale Avenue
Meridian Ave => Meridian Avenue
Walsh Ave => Walsh Avenue
Seaboard Ave => Seaboard Avenue
N Blaney Ave => N Blaney Avenue
Saratoga Ave => Saratoga Avenue
1425 E Dunne Ave => 1425 E Dunne Avenue
Blake Ave => Blake Avenue
The Alameda Ave => The Alameda Avenue
Hollenbeck Ave => Hollenbeck Avenue
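`audit.py` itself is not included in this excerpt. The kind of `street_type_re`, `mapping`, and `update_name` it typically contains is sketched below; the real module may use a longer mapping:
import re

street_type_re = re.compile(r'\b\S+\.?$', re.IGNORECASE)

mapping = {"St": "Street",
           "St.": "Street",
           "Ave": "Avenue",
           "Rd.": "Road",
           "Blvd": "Boulevard"}

def update_name(name, mapping):
    # replace an abbreviated street type at the end of the name with its full form
    m = street_type_re.search(name)
    if m and m.group() in mapping:
        name = name[:m.start()] + mapping[m.group()]
    return name

update_name("Cherry Ave", mapping)   # 'Cherry Avenue'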
MIT
.ipynb_checkpoints/Data_Analyst_ND_Project3-checkpoint.ipynb
jorcus/DAND-Wrangle-OpenStreetMap-Data
Insert data into Mongodb
# data.py
from data import process_map

data = process_map(OSMFILE, True)
data[0]
_____no_output_____
MIT
.ipynb_checkpoints/Data_Analyst_ND_Project3-checkpoint.ipynb
jorcus/DAND-Wrangle-OpenStreetMap-Data
Data Overview
from pymongo import MongoClient

client = MongoClient('localhost:27017')
db = client.SanJose
collection = db.SanJoseMAP
#collection.insert(data)
collection

print('Size of the original xml file: ', os.path.getsize(OSMFILE)/(1024*1024.0), 'MB')
print('Size of the processed json file: ', os.path.getsize(os.path.join(PATH, "san-jose_california.osm.json"))/(1024*1024.0), 'MB')
print('Number of documents: ' + str(collection.find().count()))
print('Number of nodes: ' + str(collection.find({"type":"node"}).count()))
print('Number of ways: ' + str(collection.find({"type":"way"}).count()))
print('Number of relations: ' + str(collection.find({"type":"relation"}).count()))
print('Number of unique users: ' + str(len(collection.distinct("created.user"))))
print('Number of pizza places: ' + str(collection.find({"cuisine":"pizza"}).count()))
Size of the original xml file:  348.08773612976074 MB
Size of the processed json file:  512.8097190856934 MB
Number of documents: 19761132
Number of nodes: 17470242
Number of ways: 2290674
Number of relations: 0
Number of unique users: 1356
Number of pizza places: 636
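A note on the commented-out `collection.insert(data)` call above: `insert()` was deprecated and later removed from PyMongo in favor of `insert_many()`, so on a recent PyMongo the one-time bulk load would look roughly like this (running it more than once would duplicate the documents):
# Bulk-load the processed JSON documents into the collection (run once).
result = collection.insert_many(data)
print('Inserted documents:', len(result.inserted_ids))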
MIT
.ipynb_checkpoints/Data_Analyst_ND_Project3-checkpoint.ipynb
jorcus/DAND-Wrangle-OpenStreetMap-Data
Contributor statistics and gamification suggestion
The contributions of users seem incredibly skewed, possibly due to automated versus manual map editing (the word "bot" appears in some usernames). Here are some user percentage statistics:
- Top user contribution percentage ("nmixter") - 15.08%
- Combined top 2 users' contribution ("nmixter" and "andygol") - 30.07%
- Combined top 10 users' contribution - 64.12%

Thinking about these user percentages, I'm reminded of "gamification" as a motivating force for contribution. In the context of OpenStreetMap, if user data were more prominently displayed, perhaps others would take the initiative to submit more edits to the map. It is surprising that the top 10 users alone contributed more than 50% of this dataset. That might spur the creation of more efficient bots, especially if certain gamification elements were present, such as rewards, badges, or a leaderboard.

Top 10 users with most contributions
# Top 10 users with most contributions
pipeline = [{"$group":{"_id": "$created.user", "count": {"$sum": 1}}},
            {"$sort": {"count": -1}},
            {"$limit": 10}]
result = collection.aggregate(pipeline)
for r in range(10):
    print (result.next())
{'_id': 'nmixter', 'count': 2980568}
{'_id': 'andygol', 'count': 2961664}
{'_id': 'mk408', 'count': 1615791}
{'_id': 'Bike Mapper', 'count': 969105}
{'_id': 'samely', 'count': 813227}
{'_id': 'RichRico', 'count': 768741}
{'_id': 'dannykath', 'count': 752101}
{'_id': 'MustangBuyer', 'count': 646129}
{'_id': 'karitotp', 'count': 645535}
{'_id': 'Minh Nguyen', 'count': 517383}
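The contribution percentages quoted earlier can be reproduced from the same aggregation; a small sketch (re-running the pipeline, since the cursor above is already exhausted) is:
total = collection.find().count()
top10 = list(collection.aggregate(pipeline))

top1_pct = 100.0 * top10[0]['count'] / total
top2_pct = 100.0 * sum(u['count'] for u in top10[:2]) / total
top10_pct = 100.0 * sum(u['count'] for u in top10) / total
print('Top 1: %.2f%%, Top 2: %.2f%%, Top 10: %.2f%%' % (top1_pct, top2_pct, top10_pct))
# roughly 15.08%, 30.07% and 64.12% for this extract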
MIT
.ipynb_checkpoints/Data_Analyst_ND_Project3-checkpoint.ipynb
jorcus/DAND-Wrangle-OpenStreetMap-Data
Number of users appearing only once (having 1 post)
There is only one user who appears just once, which means almost all users contributed more than once.
# Number of users appearing only once (having 1 post)
pipeline = [{"$group":{"_id":"$created.user", "count":{"$sum":1}}},
            {"$group":{"_id":"$count", "num_users":{"$sum":1}}},
            {"$sort":{"_id":1}},
            {"$limit":1}]
result = collection.aggregate(pipeline)
for r in range(1):
    print (result.next())
{'_id': 1, 'num_users': 1}
MIT
.ipynb_checkpoints/Data_Analyst_ND_Project3-checkpoint.ipynb
jorcus/DAND-Wrangle-OpenStreetMap-Data
Top 10 biggest religions
The results show that in the San Jose area, Christianity is the biggest religion. The second largest group is unknown (None), which suggests the religion field is missing from those records. The third largest religion is Jewish.
# Top 10 Biggest religion
pipeline = [{"$match":{"amenity":{"$exists":1}, "amenity":"place_of_worship"}},
            {"$group":{"_id":"$religion", "count":{"$sum":1}}},
            {"$sort":{"count":-1}},
            {"$limit":10}]
result = collection.aggregate(pipeline)
for r in range(10):
    print (result.next())
{'_id': 'christian', 'count': 1996}
{'_id': None, 'count': 139}
{'_id': 'jewish', 'count': 33}
{'_id': 'buddhist', 'count': 26}
{'_id': 'muslim', 'count': 18}
{'_id': 'hindu', 'count': 14}
{'_id': 'unitarian_universalist', 'count': 13}
{'_id': 'sikh', 'count': 7}
{'_id': 'caodaism', 'count': 7}
{'_id': 'zoroastrian', 'count': 7}
MIT
.ipynb_checkpoints/Data_Analyst_ND_Project3-checkpoint.ipynb
jorcus/DAND-Wrangle-OpenStreetMap-Data
Top 10 appearing amenities
# Top 10 appearing amenities
pipeline = [{"$match":{"amenity":{"$exists":1}}},
            {"$group":{"_id":"$amenity","count":{"$sum":1}}},
            {"$sort":{"count":-1}},
            {"$limit":10}]
result = collection.aggregate(pipeline)
for r in range(10):
    print (result.next())
{'_id': '', 'count': 3798126}
{'_id': 'parking', 'count': 12302}
{'_id': 'restaurant', 'count': 6573}
{'_id': 'fast_food', 'count': 3406}
{'_id': 'school', 'count': 3321}
{'_id': 'place_of_worship', 'count': 2298}
{'_id': 'bench', 'count': 1807}
{'_id': 'cafe', 'count': 1753}
{'_id': 'fuel', 'count': 1580}
{'_id': 'bicycle_parking', 'count': 1347}
MIT
.ipynb_checkpoints/Data_Analyst_ND_Project3-checkpoint.ipynb
jorcus/DAND-Wrangle-OpenStreetMap-Data
Top 10 popular cuisines
# Top 10 popular cuisines
pipeline = [{"$match":{"amenity":{"$exists":1}, "amenity":"restaurant"}},
            {"$group":{"_id":"$cuisine", "count":{"$sum":1}}},
            {"$sort":{"count":-1}},
            {"$limit":10}]
result = collection.aggregate(pipeline)
for r in range(10):
    print (result.next())
{'_id': None, 'count': 1296}
{'_id': '', 'count': 572}
{'_id': 'mexican', 'count': 570}
{'_id': 'chinese', 'count': 504}
{'_id': 'vietnamese', 'count': 459}
{'_id': 'pizza', 'count': 390}
{'_id': 'japanese', 'count': 293}
{'_id': 'american', 'count': 283}
{'_id': 'italian', 'count': 222}
{'_id': 'indian', 'count': 214}
MIT
.ipynb_checkpoints/Data_Analyst_ND_Project3-checkpoint.ipynb
jorcus/DAND-Wrangle-OpenStreetMap-Data
Sort postcodes by count, descending
# Sort postcodes by count, descending
pipeline = [{"$match":{"address.postcode":{"$exists":1}}},
            {"$group":{"_id":"$address.postcode", "count":{"$sum":1}}},
            {"$sort":{"count":-1}}]
result = collection.aggregate(pipeline)
for r in range(10):
    print (result.next())
{'_id': '', 'count': 1893917}
{'_id': '95014', 'count': 3503}
{'_id': '95070', 'count': 2438}
{'_id': '94087', 'count': 2205}
{'_id': '94086', 'count': 2052}
{'_id': '95051', 'count': 1772}
{'_id': '95129', 'count': 1397}
{'_id': '95127', 'count': 1130}
{'_id': '95054', 'count': 1023}
{'_id': '95035', 'count': 1018}
MIT
.ipynb_checkpoints/Data_Analyst_ND_Project3-checkpoint.ipynb
jorcus/DAND-Wrangle-OpenStreetMap-Data
Sort street by count, descending
# Sort streets by count, descending
pipeline = [{"$match":{"address.street":{"$exists":1}}},
            {"$group":{"_id":"$address.street", "count":{"$sum":1}}},
            {"$sort":{"count":-1}}]
result = collection.aggregate(pipeline)
for r in range(10):
    print (result.next())
{'_id': '', 'count': 1885122}
{'_id': 'Stevens Creek Boulevard', 'count': 2898}
{'_id': 'Hollenbeck Avenue', 'count': 1745}
{'_id': 'South Stelling Road', 'count': 1300}
{'_id': 'East Estates Drive', 'count': 1230}
{'_id': 'Johnson Avenue', 'count': 1200}
{'_id': 'Miller Avenue', 'count': 1170}
{'_id': 'Bollinger Road', 'count': 1170}
{'_id': 'North Santa Cruz Avenue', 'count': 1160}
{'_id': 'South De Anza Boulevard', 'count': 1127}
MIT
.ipynb_checkpoints/Data_Analyst_ND_Project3-checkpoint.ipynb
jorcus/DAND-Wrangle-OpenStreetMap-Data
Building a Variational Autoencoder in MXNet

Xiaoyu Lu, July 5th, 2017

This tutorial guides you through the process of building a variational autoencoder in MXNet. In this notebook we'll focus on an example using the MNIST handwritten digit recognition dataset. Refer to [Auto-Encoding Variational Bayes](https://arxiv.org/abs/1312.6114/) for more details on the model description.

Prerequisites
To complete this tutorial, we need the following Python packages:
- numpy, matplotlib

1. Loading the Data
We first load the MNIST dataset, which contains 60000 training and 10000 test examples. The following code imports the required modules and loads the data. The images are stored in a 4-D matrix with shape (`batch_size, num_channels, width, height`). For the MNIST dataset, there is only one color channel, and both width and height are 28, so we reshape each image as a 28x28 array. See below for a visualization:
# imports needed by this cell
import mxnet as mx
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import cm

mnist = mx.test_utils.get_mnist()
image = np.reshape(mnist['train_data'], (60000, 28*28))
label = image
image_test = np.reshape(mnist['test_data'], (10000, 28*28))
label_test = image_test
[N, features] = np.shape(image)  # number of examples and features

f, (ax1, ax2, ax3, ax4) = plt.subplots(1, 4, sharex='col', sharey='row', figsize=(12,3))
ax1.imshow(np.reshape(image[0,:], (28,28)), interpolation='nearest', cmap=cm.Greys)
ax2.imshow(np.reshape(image[1,:], (28,28)), interpolation='nearest', cmap=cm.Greys)
ax3.imshow(np.reshape(image[2,:], (28,28)), interpolation='nearest', cmap=cm.Greys)
ax4.imshow(np.reshape(image[3,:], (28,28)), interpolation='nearest', cmap=cm.Greys)
plt.show()
_____no_output_____
Apache-2.0
example/vae/VAE_example.ipynb
dkuspawono/incubator-mxnet
We can optionally save the parameters in the directory variable 'model_prefix'. We first create data iterators for MXNet, with each batch of data containing 100 images.
model_prefix = None

batch_size = 100
nd_iter = mx.io.NDArrayIter(data={'data': image}, label={'loss_label': label}, batch_size=batch_size)
nd_iter_test = mx.io.NDArrayIter(data={'data': image_test}, label={'loss_label': label_test}, batch_size=batch_size)
_____no_output_____
Apache-2.0
example/vae/VAE_example.ipynb
dkuspawono/incubator-mxnet
2. Building the Network Architecture

2.1 Gaussian MLP as encoder
Next we construct the neural network. As in the [paper](https://arxiv.org/abs/1312.6114/), we use a *Multilayer Perceptron (MLP)* for both the encoder and the decoder. For the encoder, a Gaussian MLP is used as follows:

\begin{align}
\log q_{\phi}(z|x) &= \log \mathcal{N}(z;\mu,\sigma^2 I) \\
\textit{where } \mu &= W_2 h + b_2, \quad \log \sigma^2 = W_3 h + b_3 \\
h &= \tanh(W_1 x + b_1)
\end{align}

where $\{W_1,W_2,W_3,b_1,b_2,b_3\}$ are the weights and biases of the MLP.

Note below that `encoder_mu` (`mu`) and `encoder_logvar` (`logvar`) are symbols, so we can use `get_internals()` to get their values, after which we can sample the latent variable $z$.
## define data and loss labels as symbols
data = mx.sym.var('data')
loss_label = mx.sym.var('loss_label')

## define fully connected and activation layers for the encoder, where we used tanh activation function.
encoder_h = mx.sym.FullyConnected(data=data, name="encoder_h", num_hidden=400)
act_h = mx.sym.Activation(data=encoder_h, act_type="tanh", name="activation_h")

## define mu and log variance which are the fully connected layers of the previous activation layer
mu = mx.sym.FullyConnected(data=act_h, name="mu", num_hidden=5)
logvar = mx.sym.FullyConnected(data=act_h, name="logvar", num_hidden=5)

## sample the latent variables z according to Normal(mu,var)
z = mu + np.multiply(mx.symbol.exp(0.5 * logvar),
                     mx.symbol.random_normal(loc=0, scale=1, shape=np.shape(logvar.get_internals()["logvar_output"])))
_____no_output_____
Apache-2.0
example/vae/VAE_example.ipynb
dkuspawono/incubator-mxnet