Dataset columns: markdown (string, lengths 0-37k), code (string, lengths 1-33.3k), path (string, lengths 8-215), repo_name (string, lengths 6-77), license (string, 15 classes)
What is 'relationship to company'? And what are the most common relationships?
recent['relationshiptocompany']
recent['relationshiptocompany'].describe() # the most common relationship to company is founder
.ipynb_checkpoints/homework7_billionaire_shengyingzhao-checkpoint.ipynb
sz2472/foundations-homework
mit
Most common source of wealth? Male vs. female?
recent['sourceofwealth'].describe() # the most common source of wealth is real estate
recent.groupby('gender')['sourceofwealth'].describe() # describe the content of a given column
# the most common source of wealth for males is real estate, while for females it is diversified
.ipynb_checkpoints/homework7_billionaire_shengyingzhao-checkpoint.ipynb
sz2472/foundations-homework
mit
Given the richest person in a country, what % of the GDP is their wealth?
recent.sort_values(by='networthusbillion', ascending=False).head(10)['gdpcurrentus']
# From the website, I learned that the GDP of the USA in 2014 was $17348 billion
# From the previous dataframe, I learned that the richest USA billionaire had a net worth of $76 billion
richest = 76
usa_gdp = 17348
percent = round(richest / usa_gdp * 100, 2)
print(percent, "% of the US GDP is his wealth.")
.ipynb_checkpoints/homework7_billionaire_shengyingzhao-checkpoint.ipynb
sz2472/foundations-homework
mit
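A hedged sketch (not part of the original homework) that computes the same percentage directly from the dataframe instead of hardcoding the two numbers. It assumes the `recent` dataframe used above, a `countrycode` value of 'USA', and that the `gdpcurrentus` column is expressed in billions of US dollars; adjust the units if it holds raw dollar amounts.
```
# Assumes 'gdpcurrentus' is in billions of US dollars (divide by 1e9 first if it is in raw dollars).
richest_usa = (recent[recent['countrycode'] == 'USA']
               .sort_values(by='networthusbillion', ascending=False)
               .iloc[0])
percent = round(richest_usa['networthusbillion'] / richest_usa['gdpcurrentus'] * 100, 2)
print(percent, "% of the US GDP is their wealth.")
```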
Add up the wealth of all of the billionaires in a given country (or a few countries) and then compare it to the GDP of the country, or other billionaires, so like pit the US vs India
recent.groupby('countrycode')['networthusbillion'].sum().sort_values(ascending=False)
# The USA total is $2322 billion, compared to Russia's $422 billion
.ipynb_checkpoints/homework7_billionaire_shengyingzhao-checkpoint.ipynb
sz2472/foundations-homework
mit
What are the most common industries for billionaires to come from? What's the total amount of billionaire money from each industry?
recent['sourceofwealth'].describe()
recent.groupby('sourceofwealth')['networthusbillion'].sum().sort_values(ascending=False)
How old are billionaires? How old are billionaires self made vs. non self made? Or different industries? Who are the youngest billionaires? The oldest? Age distribution - maybe make a graph about it? Maybe just make a graph about how wealthy they are in general? Maybe plot their net worth vs age (scatterplot)? Make a bar graph of the top 10 or 20 richest (see the plotting sketch below).
.ipynb_checkpoints/homework7_billionaire_shengyingzhao-checkpoint.ipynb
sz2472/foundations-homework
mit
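A hedged sketch (not in the original notebook) for the plotting ideas listed above, namely a net worth vs. age scatterplot and a bar chart of the top 10 richest. It assumes the `recent` dataframe and its `age`, `networthusbillion`, and `name` columns.
```
import matplotlib.pyplot as plt

# Scatterplot of net worth vs. age (rows with missing ages are dropped).
ages = recent.dropna(subset=['age'])
plt.scatter(ages['age'], ages['networthusbillion'], alpha=0.5)
plt.xlabel('Age')
plt.ylabel('Net worth (US$ billion)')
plt.show()

# Horizontal bar chart of the 10 richest billionaires.
top10 = recent.sort_values(by='networthusbillion', ascending=False).head(10)
top10.plot(kind='barh', x='name', y='networthusbillion', legend=False)
plt.xlabel('Net worth (US$ billion)')
plt.show()
```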
How many self made billionaires vs. others?
recent['selfmade'].value_counts()
.ipynb_checkpoints/homework7_billionaire_shengyingzhao-checkpoint.ipynb
sz2472/foundations-homework
mit
How old are billionaires? How old are billionaires self made vs. non self made? or different industries?
recent.sort_values(by='age', ascending=False).head()
columns_want = recent[['name', 'age', 'selfmade', 'industry']] # [[ ]] selects a DataFrame
columns_want.head()
.ipynb_checkpoints/homework7_billionaire_shengyingzhao-checkpoint.ipynb
sz2472/foundations-homework
mit
The type of the variable is a tuple.
type(tuple1)
coursera/python_for_data_science/2.1_Tuples.ipynb
mohanprasath/Course-Work
gpl-3.0
Each element of a tuple can be accessed via an index. The following table represents the relationship between the index and the items in the tuple. Each element can be obtained by the name of the tuple followed by a square bracket with the index number: <img src = "https://ibm.box.com/shared/static/83kpang0opwen5e5gbwck6ktqw7btwoe.gif" width = 750, align = "center"></a> We can print out each value in the tuple:
print(tuple1[0])
print(tuple1[1])
print(tuple1[2])
coursera/python_for_data_science/2.1_Tuples.ipynb
mohanprasath/Course-Work
gpl-3.0
We can print out the type of each value in the tuple:
print(type(tuple1[0]))
print(type(tuple1[1]))
print(type(tuple1[2]))
coursera/python_for_data_science/2.1_Tuples.ipynb
mohanprasath/Course-Work
gpl-3.0
We can also use negative indexing. We use the same table above with corresponding negative values: <img src = "https://ibm.box.com/shared/static/uwlfzo367bekwg0p5s5odxlz7vhpojyj.png" width = 750, align = "center"></a> We can obtain the last element as follows (this time we will not use the print statement to display the values):
tuple1[-1]
coursera/python_for_data_science/2.1_Tuples.ipynb
mohanprasath/Course-Work
gpl-3.0
We can display the next two elements as follows:
tuple1[-2]
tuple1[-3]
coursera/python_for_data_science/2.1_Tuples.ipynb
mohanprasath/Course-Work
gpl-3.0
We can concatenate or combine tuples by using the + sign:
tuple2 = tuple1 + ("hard rock", 10)
tuple2
coursera/python_for_data_science/2.1_Tuples.ipynb
mohanprasath/Course-Work
gpl-3.0
We can slice tuples obtaining multiple values as demonstrated by the figure below: <img src = "https://ibm.box.com/shared/static/s9nofy728bcnsgnx3vh159bu16w7frnc.gif" width = 750, align = "center"></a> We can slice tuples, obtaining new tuples with the corresponding elements:
tuple2[0:3]
coursera/python_for_data_science/2.1_Tuples.ipynb
mohanprasath/Course-Work
gpl-3.0
We can obtain the last two elements of the tuple:
tuple2[3:5]
coursera/python_for_data_science/2.1_Tuples.ipynb
mohanprasath/Course-Work
gpl-3.0
We can obtain the length of a tuple using the len function:
len(tuple2)
coursera/python_for_data_science/2.1_Tuples.ipynb
mohanprasath/Course-Work
gpl-3.0
This figure shows the number of elements: <img src = "https://ibm.box.com/shared/static/apxe8l3w42f597yjhizg305merlm4ijf.png" width = 750, align = "center"></a> Consider the following tuple:
Ratings =(0,9,6,5,10,8,9,6,2)
coursera/python_for_data_science/2.1_Tuples.ipynb
mohanprasath/Course-Work
gpl-3.0
We can assign the tuple to a 2nd variable:
Ratings1 = Ratings
Ratings
coursera/python_for_data_science/2.1_Tuples.ipynb
mohanprasath/Course-Work
gpl-3.0
We can sort the values in a tuple and save the result to a new variable (note that sorted returns a list):
RatingsSorted = sorted(Ratings)
RatingsSorted
coursera/python_for_data_science/2.1_Tuples.ipynb
mohanprasath/Course-Work
gpl-3.0
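A small follow-up example (not in the original notebook) to make explicit that sorted() returns a list, which can be converted back to a tuple if a tuple is needed:
```
RatingsSorted = sorted(Ratings)
print(type(RatingsSorted))        # <class 'list'>
RatingsSortedTuple = tuple(RatingsSorted)
print(type(RatingsSortedTuple))   # <class 'tuple'>
```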
A tuple can contain another tuple as well as other more complex data types. This process is called 'nesting'. Consider the following tuple with several elements:
NestedT =(1, 2, ("pop", "rock") ,(3,4),("disco",(1,2)))
coursera/python_for_data_science/2.1_Tuples.ipynb
mohanprasath/Course-Work
gpl-3.0
Each element in the tuple including other tuples can be obtained via an index as shown in the figure: <img src = "https://ibm.box.com/shared/static/estqe2bczv5weocc4ag4mx9dtqy952fp.png" width = 750, align = "center"></a>
print("Element 0 of Tuple: ", NestedT[0])
print("Element 1 of Tuple: ", NestedT[1])
print("Element 2 of Tuple: ", NestedT[2])
print("Element 3 of Tuple: ", NestedT[3])
print("Element 4 of Tuple: ", NestedT[4])
coursera/python_for_data_science/2.1_Tuples.ipynb
mohanprasath/Course-Work
gpl-3.0
We can use the second index to access other tuples as demonstrated in the figure: <img src = "https://ibm.box.com/shared/static/j1orgjuasaaj3d0feymedrnoqv8trqyo.png" width = 750, align = "center"></a> We can access the nested tuples :
print("Element 2,0 of Tuple: ", NestedT[2][0])
print("Element 2,1 of Tuple: ", NestedT[2][1])
print("Element 3,0 of Tuple: ", NestedT[3][0])
print("Element 3,1 of Tuple: ", NestedT[3][1])
print("Element 4,0 of Tuple: ", NestedT[4][0])
print("Element 4,1 of Tuple: ", NestedT[4][1])
coursera/python_for_data_science/2.1_Tuples.ipynb
mohanprasath/Course-Work
gpl-3.0
We can access strings in the second nested tuples using a third index:
NestedT[2][1][0]
NestedT[2][1][1]
coursera/python_for_data_science/2.1_Tuples.ipynb
mohanprasath/Course-Work
gpl-3.0
We can use a tree to visualise the process. Each new index corresponds to a deeper level in the tree: <img src ='https://ibm.box.com/shared/static/vjvsygpzpwcr6czsucgno1wukyhk5vxq.gif' width = 750, align = "center"></a> Similarly, we can access elements nested deeper in the tree with a fourth index:
NestedT[4][1][0]
NestedT[4][1][1]
coursera/python_for_data_science/2.1_Tuples.ipynb
mohanprasath/Course-Work
gpl-3.0
The following figure shows the relationship of the tree and the element NestedT[4][1][1]: <img src ='https://ibm.box.com/shared/static/9y5s7515zwzc9v6i4f67yj3np2fv9evs.gif'width = 750, align = "center"></a> <a id="ref2"></a> <h2 align=center> Quiz on Tuples </h2> Consider the following tuple:
genres_tuple = ("pop", "rock", "soul", "hard rock", "soft rock",
                "R&B", "progressive rock", "disco")
genres_tuple
coursera/python_for_data_science/2.1_Tuples.ipynb
mohanprasath/Course-Work
gpl-3.0
Find the length of the tuple, "genres_tuple":
len(genres_tuple)
coursera/python_for_data_science/2.1_Tuples.ipynb
mohanprasath/Course-Work
gpl-3.0
Solution: ``` len(genres_tuple) ``` Access the element, with respect to index 3:
genres_tuple[3]
coursera/python_for_data_science/2.1_Tuples.ipynb
mohanprasath/Course-Work
gpl-3.0
Use slicing to obtain indexes 3, 4 and 5:
genres_tuple[3:6]
coursera/python_for_data_science/2.1_Tuples.ipynb
mohanprasath/Course-Work
gpl-3.0
Find the first two elements of the tuple "genres_tuple":
genres_tuple[:2]
coursera/python_for_data_science/2.1_Tuples.ipynb
mohanprasath/Course-Work
gpl-3.0
Solution: ``` genres_tuple[0:2] ``` #### Find the first index of 'disco':
genres_tuple.index("disco")
coursera/python_for_data_science/2.1_Tuples.ipynb
mohanprasath/Course-Work
gpl-3.0
Solution: ``` genres_tuple.index("disco") ``` #### Generate a sorted list from the tuple C_tuple=(-5,1,-3):
C_tuple = sorted((-5, 1, -3))
C_tuple
coursera/python_for_data_science/2.1_Tuples.ipynb
mohanprasath/Course-Work
gpl-3.0
Another rule is that positional arguments take precedence:
def func(a, b=1):
    pass

func(20, a="G")  # TypeError: got multiple values for argument 'a'
Tips/2016-03-11-Arguments-and-Unpacking.ipynb
rainyear/pytips
mit
The safest approach is to use keyword arguments throughout. Arbitrary arguments A function can accept an arbitrary number of arguments: the *a form stands for any number of positional arguments, and **d stands for any number of keyword arguments:
def concat(*lst, sep="/"):
    return sep.join((str(i) for i in lst))

print(concat("G", 20, "@", "Hz", sep=""))
Tips/2016-03-11-Arguments-and-Unpacking.ipynb
rainyear/pytips
mit
The def concat(*lst, sep="/") syntax above was proposed in PEP 3102 and implemented in Python 3.0. Keyword arguments declared after *lst must be passed explicitly by name; they cannot be inferred from their position:
print(concat("G", 20, "-")) # Not G-20
Tips/2016-03-11-Arguments-and-Unpacking.ipynb
rainyear/pytips
mit
**d, in turn, stands for any number of keyword arguments:
def dconcat(sep=":", **dic):
    for k in dic.keys():
        print("{}{}{}".format(k, sep, dic[k]))

dconcat(hello="world", python="rocks", sep="~")
Tips/2016-03-11-Arguments-and-Unpacking.ipynb
rainyear/pytips
mit
Unpacking A new feature added in Python 3.5 (PEP 448) allows *a and **d to be used outside of function parameters:
print(*range(5))

lst = [0, 1, 2, 3]
print(*lst)

a = *range(3),  # the trailing comma must not be omitted here
print(a)

d = {"hello": "world", "python": "rocks"}
print({**d}["python"])
Tips/2016-03-11-Arguments-and-Unpacking.ipynb
rainyear/pytips
mit
So-called unpacking can be thought of as a tuple with its parentheses removed, or a dictionary with its braces removed. This syntax also provides a more Pythonic way to merge dictionaries:
user = {'name': "Trey", 'website': "http://treyhunner.com"} defaults = {'name': "Anonymous User", 'page_name': "Profile Page"} print({**defaults, **user})
Tips/2016-03-11-Arguments-and-Unpacking.ipynb
rainyear/pytips
mit
Using unpacking in a function call, as below, also works in Python 2.7:
print(concat(*"ILovePython"))
Tips/2016-03-11-Arguments-and-Unpacking.ipynb
rainyear/pytips
mit
Parameters The next cell initializes the parameters that are used throughout the code. They are: N: the original sequence length, which is also the length of the sequences that will be generated by the PFSAs produced by DCGraM; drange: the range of values of D for which D-Markov and DCGraM machines will be generated; a: the value up to which the autocorrelation is computed.
N = 10000000
drange = range(4, 11)
a = 20
dcgram.ipynb
franchenstein/dcgram
gpl-3.0
Original Sequence Analysis Make sure that the original sequence of length N is stored in the correct directory and run the cell to load it into X. After this, run the cells corresponding to the computation of the subsequence probabilities and the conditional probabilities for the value d_max, which is the last value in drange. Additional results can also be computed in the respective cells (autocorrelation and conditional entropy).
#Open original sequence from yaml file with open(name + '/sequences/original_len_' + str(N) + '_' + tag + '.yaml', 'r') as f: X = yaml.load(f) #Value up to which results are computed d_max = drange[-1] #Initialization of variables: p = None p_cond = None #Compute subsequence probabilities of occurrence up to length d_max p, alphabet = sa.calc_probs(X, d_max) with open(name + '/results/probabilities/original_' + tag + '.yaml', 'w') as f: yaml.dump(p, f) with open(name + '/alphabet.yaml', 'w') as f: yaml.dump(alphabet, f) #If p has been previously computed, use this cell to load the values if not p: with open(name + '/results/probabilities/original_' + tag + '.yaml', 'r') as f: p = yaml.load(f) with open(name + '/alphabet.yaml', 'r') as f: alphabet = yaml.load(f) #Compute conditional probabilities of subsequences occurring after given each symbol of the alphabet #One of the two previous cells needs to be executed first. if p: p_cond = sa.calc_cond_probs(p, alphabet, d_max - 1) with open(name + '/results/probabilities/conditional/original_' + tag + '.yaml', 'w') as f: yaml.dump(p_cond, f) else: print("Run a cell that either computes or opens the probabilities.") #If p_cond has been previously computed, use this cell to load the values if not p_cond: with open(name + '/results/probabilities/conditional/original_' + tag + '.yaml', 'r') as f: p_cond = yaml.load(f) #Compute conditional entropy if p and p_cond: h = sa.calc_cond_entropy(p, p_cond, d_max) h.to_csv(name + '/results/cond_entropies/original_' + tag + '.csv') else: print("Run the conditional probabilities cell first.") #If p_cond has been previously computed, use this cell to load the values if not h: h = pd.read_csv(name + '/results/cond_entropies/original_' + tag + '.csv') #Compute autocorrelation aut = sa.calc_autocorr(X, a) aut.to_csv(name + '/results/autocorrelations/original_' + tag + '.csv') #If aut has been previously computed, use this cell to load the values if not aut: aut = pd.read_csv(name + '/results/autocorrelations/original_' + tag + '.csv')
dcgram.ipynb
franchenstein/dcgram
gpl-3.0
D-Markov Machines The next step of DCGraM consists of generating D-Markov machines for each value of D in the drange defined above. The values of p_cond for each of these D are needed, so they must be computed above. A D-Markov machine is a PFSA with $|\Sigma|^D$ states, each one labeled with one of the subsequences of length $D$. Given a state $\omega = \sigma_1\sigma_2\ldots\sigma_D$, for each $\sigma \in \Sigma$, it transitions to the state $\sigma_2\sigma_3\ldots\sigma_D\sigma$ with probability $\Pr(\sigma|\omega)$. This is done for all states of the D-Markov machine.
dmark_machines = []

# If the D-Markov machines have not been previously created, generate them with this cell
for D in list(map(str, drange)):
    dmark_machines.append(dmarkov.create(p_cond, D))
    dmark_machines[-1].to_csv(name + '/pfsa/dmarkov_D' + D + '_' + tag + '.csv')

# On the other hand, if there already are D-Markov machines, load them with this cell
if not dmark_machines:
    for D in drange:
        dmark_machines.append(pd.read_csv(name + '/pfsa/dmarkov_D' + D + '_' + tag + '.csv'))
dcgram.ipynb
franchenstein/dcgram
gpl-3.0
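A plain-Python illustration of the D-Markov construction described above; this is only a sketch of the definition, not the repository's dmarkov.create implementation. It assumes the conditional probabilities are available through a callable cond_prob(word, symbol) returning Pr(symbol | word).
```
from itertools import product

def build_dmarkov(alphabet, D, cond_prob):
    """Illustrative D-Markov construction (not the repo's dmarkov.create).

    Returns the machine as a dict: state -> {symbol: (next_state, probability)}.
    """
    machine = {}
    for state in product(alphabet, repeat=D):
        word = ''.join(state)
        transitions = {}
        for symbol in alphabet:
            next_state = word[1:] + symbol  # sigma_2 ... sigma_D sigma
            transitions[symbol] = (next_state, cond_prob(word, symbol))
        machine[word] = transitions
    return machine

# Toy usage with a binary alphabet and a fake conditional probability.
toy = build_dmarkov('01', 2, lambda w, s: 0.5)
print(toy['01'])   # {'0': ('10', 0.5), '1': ('11', 0.5)}
```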
D-Markov Machine Analysis First of all, sequences should be generated from the D-Markov machines. The same parameters computed in the analysis of the original sequence should be computed for the D-Markov machines' sequences. Besides those parameters, the Kullback-Leibler divergence and the distribution distance between these sequences and the original sequence are also computed.
dmark_seqs = [] #Generate sequences: count = 0 for machine in dmark_machines: seq = machine.generate_sequence(N) with open(name + '/sequences/dmarkov_D' + str(drange[count]) + '_' + tag + '.yaml', 'w') as f: yaml.dump(seq, f) dmark_seqs.append(seq) count += 1 #If the sequences have been previously generated, load them here: if not dmark_seqs: for D in list(map(str,drange)): with open(name + '/sequences/dmarkov_D' + D + '_' + tag + '.yaml', 'w') as f: dmark_seqs.append(yaml.load(f)) #Compute subsequence probabilities of occurrence of the D-Markov sequences count = 0 p_dmark = [] for seq in dmark_seqs: p_dm, alphabet = sa.calc_probs(seq, d_max) p_dm.to_csv(name + '/results/probabilities/dmarkov_D'+ str(drange[count]) + '_' + tag + '.csv') p_dmark.append(p_dm) count += 1 #If p_dmark has been previously computed, use this cell to load the values if not p_dmark: for D in list(map(str,drange)): p_dm = pd.read_csv(name + '/results/probabilities/dmarkov_D' + D + '_' + tag + '.csv') p_dmark.append(p_dm) with open(name + '/alphabet.yaml', 'r') as f: alphabet = yaml.load(f) #Compute conditional probabilities of subsequences occurring after given each symbol of the alphabet #One of the two previous cells needs to be executed first. p_cond_dmark = [] count = 0 if p_dmark: for p_dm in p_dmark: p_cond_dm = sa.calc_cond_probs(p_dm, alphabet, d_max) p_cond_dm.to_csv(name + '/results/probabilities/conditional/dmarkov_D' + str(drange[count]) + '_' + tag + '.csv') p_cond_dmark.append(p_cond_dm) count += 1 else: print("Run a cell that either computes or opens the probabilities.") #If p_cond has been previously computed, use this cell to load the values if not p_cond_dmark: for D in list(map(str,drange)): p_cond_dmark.append(pd.read_csv(name + '/results/probabilities/conditional/dmarkov_D' + D + '_' + tag + '.csv')) #Compute conditional entropy count = 0 h_dmark = [] if p_dmark and p_cond_dmark: for p_dm in p_dmark: h_dm = sa.calc_cond_entropy(p_dm, p_cond_dmark[count], d_max) h_dm.to_csv(name + '/results/cond_entropies/dmarkov_D' + str(drange[count]) + '_' + tag + '.csv') h_dmark.append(h_dm) count += 1 else: print("Run the conditional probabilities cell first.") #If h_dmark has been previously computed, use this cell to load the values if not h_dmark: for D in list(map(str,drange)): h_dmark.append(pd.read_csv(name + '/results/cond_entropies/dmarkov_D' + D + '_' + tag + '.csv')) #Compute autocorrelation aut_dmark = [] count = 0 for dseq in dmark_seqs: aut_dm = sa.calc_autocorr(dseq, a) aut_dm.to_csv(name + '/results/autocorrelations/dmarkov_D' + str(drange[count]) + '_' + tag + '.csv') aut_dmark.append(aut_dm) count += 1 #If aut has been previously computed, use this cell to load the values if not aut_dmark: for D in list(map(str,drange)): aut_dmark.append(pd.read_csv(name + '/results/autocorrelations/dmarkov_D' + D + '_' + tag + '.csv')) #Compute the Kullback-Leibler Divergence between the sequences generated by the D-Markov Machines and the original #sequence. kld_dmark = [] for dseq in dmark_seqs: kld_dm = sa.calc_kld(dseq, X, d_max) kld_dmark.append(kld_dm) kld_dmark.to_csv(name + '/results/kldivergences/dmarkov_' + tag + '.csv') #If the D-Markov Kullback-Leibler divergence has been previously computed, use this cell to load the values if not kld_dmark: kld_dmark = pd.read_csv(name + '/results/kldivergences/dmarkov_' + tag + '.csv') #Compute the Probability Distances between the sequences generated by the D-Markov Machines and the original #sequence. 
pdist_dmark = [] for p_dm in p_dmark: pdist_dm = sa.calc_pdist(p_dm, p, d_max) pdist_dmark.append(pdist_dm) pdist_dmark.to_csv(name + '/results/prob_distances/dmarkov_' + tag + '.csv') #If the Probability Distances of the D-Markov Machines have been previously computed, load them with this cell. if not pdist_dmark: pdist_dmark = pd.read_csv(name + '/results/prob_distances/dmarkov_' + tag + '.csv')
dcgram.ipynb
franchenstein/dcgram
gpl-3.0
Clustering Now that we have obtained the D-Markov machines, the next step of DCGraM is to cluster their states. For a given D-Markov machine G$_D$, its states $q$ are considered points in a $|\Sigma|$-dimensional space, in which each dimension is labeled with a symbol $\sigma$ from the alphabet and the position of the state $q$ in this dimension is its probability of transitioning with this symbol. These point-states are then clustered into $K$ clusters using a variation of the K-means clustering algorithm that, instead of using the Euclidean distance between points, uses the Kullback-Leibler divergence between the point-state and the cluster centroids.
clustered = []
K = 4
for machine in dmark_machines:
    clustered.append(clustering.kmeans_kld(machine, K))
dcgram.ipynb
franchenstein/dcgram
gpl-3.0
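A compact sketch of the clustering idea described above, under the stated assumptions (states represented as rows of transition probabilities, KL divergence used in the assignment step). This is an illustration only, not the repository's clustering.kmeans_kld.
```
import numpy as np

def kld(p, q, eps=1e-12):
    """Kullback-Leibler divergence D(p || q) between two discrete distributions."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    return float(np.sum(p * np.log(p / q)))

def kmeans_kld_sketch(points, K, n_iter=50, seed=0):
    """K-means-style clustering of probability vectors using KL divergence.

    `points` is an (n_states, |alphabet|) numpy array whose rows are the
    transition-probability vectors of the D-Markov states.  Returns the
    cluster label of each state.
    """
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), K, replace=False)]
    for _ in range(n_iter):
        labels = np.array([np.argmin([kld(p, c) for c in centroids]) for p in points])
        for k in range(K):
            members = points[labels == k]
            if len(members):
                centroids[k] = members.mean(axis=0)  # centroid = mean distribution
    return labels
```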
Graph Minimization Once the states of the D-Markov machines are clustered, these clusterings are used as initial partitions of the D-Markov machines' states. To these machines and initial partitions, a graph minimization algorithm (in the current version, only Moore) is applied in order to obtain a final reduced PFSA, the DCGraM PFSA.
dcgram_machines = []
for ini_part in clustered:
    # pass each initial partition (the original code passed the whole `clustered` list)
    dcgram_machines.append(graphmin.moore(ini_part))
dcgram.ipynb
franchenstein/dcgram
gpl-3.0
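For illustration, a generic Moore-style partition refinement on a deterministic transition structure, starting from the clustering as the initial partition. This is a sketch of the general algorithm, not the repository's graphmin.moore, and it assumes transitions are given as a dict state -> {symbol: next_state}.
```
def moore_refine(transitions, initial_partition):
    """Moore-style partition refinement (illustration only).

    Blocks are split until all states in a block agree, for every symbol,
    on the block of their successor.
    """
    partition = [set(block) for block in initial_partition]
    changed = True
    while changed:
        changed = False
        block_of = {s: i for i, block in enumerate(partition) for s in block}
        new_partition = []
        for block in partition:
            # Group states by the signature of their successor blocks.
            groups = {}
            for s in block:
                signature = tuple(sorted((sym, block_of[t])
                                         for sym, t in transitions[s].items()))
                groups.setdefault(signature, set()).add(s)
            new_partition.extend(groups.values())
            if len(groups) > 1:
                changed = True
        partition = new_partition
    return partition
```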
DCGraM Analysis Now that the DCGraM machines have been generated, the same analysis done for the D-Markov Machines is used for them. Sequences are generated for each of the DCGraM machines and afterwards all of the analysis is applied to them so the comparison can be made between regular D-Markov and DCGraM.
dcgram_seqs = [] #Generate sequences: count = 0 for machine in dcgram_machines: seq = machine.generate_sequence(N) with open(name + '/sequences/dcgram_D' + str(drange[count]) + '_' + tag + '.yaml', 'w') as f: yaml.dump(seq, f) dcgram_seqs.append(seq) count += 1 #If the sequences have been previously generated, load them here: if not dcgram_seqs: for D in list(map(str,drange)): with open(name + '/sequences/dcgram_D' + D + '_' + tag + '.yaml', 'w') as f: dcgram_seqs.append(yaml.load(f)) #Compute subsequence probabilities of occurrence of the DCGraM sequences count = 0 p_dcgram = [] for seq in dcgram_seqs: p_dc, alphabet = sa.calc_probs(seq, d_max) p_dc.to_csv(name + '/results/probabilities/dcgram_D'+ str(drange[count]) + '_' + tag + '.csv') p_dcgram.append(p_dc) count += 1 #If p_dcgram has been previously computed, use this cell to load the values if not p_dcgram: for D in list(map(str,drange)): p_dc = pd.read_csv(name + '/results/probabilities/dcgram_D' + D + '_' + tag + '.csv') p_dcgram.append(p_dm) with open(name + '/alphabet.yaml', 'r') as f: alphabet = yaml.load(f) #Compute conditional probabilities of subsequences occurring after given each symbol of the alphabet #One of the two previous cells needs to be executed first. p_cond_dcgram = [] count = 0 if p_dcgram: for p_dc in p_dcgram: p_cond_dc = sa.calc_cond_probs(p_dc, alphabet, d_max) p_cond_dc.to_csv(name + '/results/probabilities/conditional/dcgram_D' + str(drange[count]) + '_' + tag + '.csv') p_cond_dcgram.append(p_cond_dc) count += 1 else: print("Run a cell that either computes or opens the probabilities.") #If p_cond_dcgram has been previously computed, use this cell to load the values if not p_cond_dcgram: for D in list(map(str,drange)): p_cond_dcgram.append(pd.read_csv(name + '/results/probabilities/conditional/dcgram_D' + D + '_' + tag + '.csv')) #Compute conditional entropy count = 0 h_dcgram = [] if p_dcgram and p_cond_dcgram: for p_dc in p_dcgram: h_dc = sa.calc_cond_entropy(p_dc, p_cond_dcgram[count], d_max) h_dc.to_csv(name + '/results/cond_entropies/dcgram_D' + str(drange[count]) + '_' + tag + '.csv') h_dcgram.append(h_dc) count += 1 else: print("Run the conditional probabilities cell first.") #If h_dcgram has been previously computed, use this cell to load the values if not h_dcgram: for D in list(map(str,drange)): h_dcgram.append(pd.read_csv(name + '/results/cond_entropies/dcgram_D' + D + '_' + tag + '.csv')) #Compute autocorrelation aut_dcgram = [] count = 0 for dcseq in dcgram_seqs: aut_dc = sa.calc_autocorr(dcseq, a) aut_dc.to_csv(name + '/results/autocorrelations/dcgram_D' + str(drange[count]) + '_' + tag + '.csv') aut_dcgram.append(aut_dc) count += 1 #If aut has been previously computed, use this cell to load the values if not aut_dcgram: for D in list(map(str,drange)): aut_dmark.append(pd.read_csv(name + '/results/autocorrelations/dcgram_D' + D + '_' + tag + '.csv')) #Compute the Kullback-Leibler Divergence between the sequences generated by the DCGraM Machines and the original #sequence. kld_dcgram = [] for dcseq in dcgram_seqs: kld_dc = sa.calc_kld(dcseq, X, d_max) kld_dcgram.append(kld_dc) kld_dcgram.to_csv(name + '/results/kldivergences/dcgram_' + tag + '.csv') #If the DCGraM Kullback-Leibler divergence has been previously computed, use this cell to load the values if not kld_dcgram: kld_dcgram = pd.read_csv(name + '/results/kldivergences/dcgram_' + tag + '.csv') #Compute the Probability Distances between the sequences generated by the DCGraM Machines and the original #sequence. 
pdist_dcgram = [] for p_dc in p_dcgram: pdist_dc = sa.calc_pdist(p_dc, p, d_max) pdist_dcgram.append(pdist_dc) pdist_dcgram.to_csv(name + '/results/prob_distances/dcgram_' + tag + '.csv') #If the Probability Distances of the DCGraM Machines have been previously computed, load them with this cell. if not pdist_dcgram: pdist_dcgram = pd.read_csv(name + '/results/prob_distances/dcgram_' + tag + '.csv')
dcgram.ipynb
franchenstein/dcgram
gpl-3.0
Plots Once all the analyses have been made, a plot of each of those parameters is created to visualize performance. In each plot the x-axis represents the number of states of each PFSA and the y-axis represents the parameter being observed. There are always two curves: one for the DCGraM machines and one for the D-Markov machines. Each point on these curves represents a machine of that type for a certain value of $D$; the further right a point is on the curve, the higher its $D$ value. The conditional entropy plot also has a black line representing the original sequence's conditional entropy for the $L$ being used, as a baseline.
# initialization
import matplotlib.pyplot as plt

# Labels to be used in the plots' legends
labels = ['D-Markov Machines, D from ' + str(drange[0]) + ' to ' + str(d_max),
          'DCGraM Machines, D from ' + str(drange[0]) + ' to ' + str(d_max),
          'Original Sequence Baseline']

# Obtaining the number of states of the machines, to be used in the x-axis:
states_dmarkov = []
for dm in dmark_machines:
    states_dmarkov.append(dm.shape[0])
states_dcgram = []
for dc in dcgram_machines:
    states_dcgram.append(dc.shape[0])
states = [states_dmarkov, states_dcgram]

# Conditional Entropy plots
H = 10
h_dmark_curve = []
for h_dm in h_dmark:  # fixed: the list built earlier is h_dmark, not h_dmarkov
    h_dmark_curve.append(h_dm[H])
plt.semilogx(states[0], h_dmark_curve, marker='o', label=labels[0])
h_dcgram_curve = []
for h_dc in h_dcgram:
    h_dcgram_curve.append(h_dc[H])
plt.semilogx(states[1], h_dcgram_curve, marker='x', label=labels[1])
# Original sequence baseline:
h_base = h[H]
plt.axhline(y=h_base, color='k', linewidth=3, label=labels[2])
plt.xlabel('Number of States', fontsize=16)
plt.ylabel('$h_' + str(H) + '$', fontsize=16)  # fixed typo: yalbel -> ylabel
plt.legend(loc='upper right', shadow=False, fontsize='large')
plt.title('Conditional Entropy', fontsize=18, weight='bold')
plt.savefig(name + '/plots/conditional_entropy_' + tag + '.eps', bbox_inches='tight', format='eps', dpi=1000)
plt.show()

# Kullback-Leibler plots
plt.semilogx(states[0], kld_dmark, marker='o', label=labels[0])
plt.semilogx(states[1], kld_dcgram, marker='x', label=labels[1])
plt.xlabel('Number of States', fontsize=16)
plt.ylabel('$k_' + str(H) + '$', fontsize=16)
plt.legend(loc='upper right', shadow=False, fontsize='large')
plt.title('Kullback-Leibler Divergence', fontsize=18, weight='bold')
plt.savefig(name + '/plots/kldivergence_' + tag + '.eps', bbox_inches='tight', format='eps', dpi=1000)
plt.show()

# Probability Distance plots
plt.semilogx(states[0], pdist_dmark, marker='o', label=labels[0])
plt.semilogx(states[1], pdist_dcgram, marker='x', label=labels[1])
plt.xlabel('Number of States', fontsize=16)
plt.ylabel('$P_' + str(H) + '$', fontsize=16)
plt.legend(loc='upper right', shadow=False, fontsize='large')
plt.title('Probability Distance', fontsize=18, weight='bold')
plt.savefig(name + '/plots/prob_distance_' + tag + '.eps', bbox_inches='tight', format='eps', dpi=1000)
plt.show()

# TODO: Think how to have good plots for autocorrelation
dcgram.ipynb
franchenstein/dcgram
gpl-3.0
Finetuning and Training
%cd $DATA_HOME_DIR

# Set path to sample/ path if desired
path = DATA_HOME_DIR + '/'  # '/sample/'
test_path = DATA_HOME_DIR + '/test/'  # We use all the test data
results_path = DATA_HOME_DIR + '/results/'
train_path = path + '/train/'
valid_path = path + '/valid/'

# import Vgg16 helper class
vgg = Vgg16()

# Set constants. You can experiment with no_of_epochs to improve the model
batch_size = 64
no_of_epochs = 3

# Finetune the model
batches = vgg.get_batches(train_path, batch_size=batch_size)
val_batches = vgg.get_batches(valid_path, batch_size=batch_size*2)
vgg.finetune(batches)

# Not sure if we set this for all fits
vgg.model.optimizer.lr = 0.01

# Notice we are passing in the validation dataset to the fit() method
# For each epoch we test our model against the validation set
latest_weights_filename = None
for epoch in range(no_of_epochs):
    print("Running epoch: %d" % epoch)
    vgg.fit(batches, val_batches, nb_epoch=1)
    # latest_weights_filename = 'ft%d.h5' % epoch
    # vgg.model.save_weights(results_path+latest_weights_filename)
print("Completed %s fit operations" % no_of_epochs)
FAI_old/lesson1/dogs_cats_redux.ipynb
WNoxchi/Kaukasos
mit
In the next piece of code we will cycle through our directory again: first assigning readable names to our files and storing them as a list in the variable filenames; then we will normalize the case, remove the punctuation from the text, split the words into a list of tokens, and assign the words in each file to a list in the variable corpus.
filenames = []
for files in list_textfiles('../Counting Word Frequencies/data'):
    files = get_filename(files)
    filenames.append(files)

corpus = []
for filename in list_textfiles('../Counting Word Frequencies/data'):
    text = read_file(filename)
    words = text.split()
    clean = [w.lower() for w in words if w.isalpha()]
    corpus.append(clean)
Adding Context to Word Frequency Counts.ipynb
mediagestalt/Adding-Context
mit
Here we recreate our list from the last exercise, counting the instances of the word privacy in each file.
for words, names in zip(corpus, filenames):
    print("Instances of the word \'privacy\' in", names, ":", count_in_list("privacy", words))
Adding Context to Word Frequency Counts.ipynb
mediagestalt/Adding-Context
mit
Next we use the len function to count the total number of words in each file.
for files, names in zip(corpus, filenames):
    print("There are", len(files), "words in", names)
Adding Context to Word Frequency Counts.ipynb
mediagestalt/Adding-Context
mit
Now we can calculate the ratio of the word privacy to the total number of words in the file. To accomplish this we simply divide the two numbers.
print("Ratio of instances of privacy to total number of words in the corpus:")
for words, names in zip(corpus, filenames):
    print('{:.6f}'.format(float(count_in_list("privacy", words)) / (float(len(words)))), ":", names)
Adding Context to Word Frequency Counts.ipynb
mediagestalt/Adding-Context
mit
Now our descriptive statistics concerning word frequencies have added value. We can see that there has indeed been a steady increase in the frequency of the use of the word privacy in our corpus. When we investigate the yearly usage, we can see that the frequency almost doubled between 2008 and 2009, as well as a dramatic increase between 2012 and 2014. This is also apparent in the difference between the 39th and the 40th sittings of Parliament. Let's package all of the data together so it can be displayed as a table or exported to a CSV file. First we will write our values to a list: raw contains the raw frequencies, and ratio contains the ratios. Then we will create a <span style="cursor:help;" title="a type of list where the values are permanent"><b>tuple</b></span> that contains the filename variable and includes the corresponding raw and ratio variables. Here we'll generate the ratio as a percentage.
raw = []
for i in range(len(corpus)):
    raw.append(count_in_list("privacy", corpus[i]))

ratio = []
for i in range(len(corpus)):
    ratio.append('{:.3f}'.format((float(count_in_list("privacy", corpus[i])) / (float(len(corpus[i])))) * 100))

table = zip(filenames, raw, ratio)
Adding Context to Word Frequency Counts.ipynb
mediagestalt/Adding-Context
mit
Using the tabulate module, we will display our tuple as a table.
print(tabulate(table, headers = ["Filename", "Raw", "Ratio %"], floatfmt=".3f", numalign="left"))
Adding Context to Word Frequency Counts.ipynb
mediagestalt/Adding-Context
mit
And finally, we will write the values to a CSV file called privacyFreqTable.
import csv

# Note: 'wb' is the Python 2 idiom; on Python 3 open with 'w' and newline='' instead.
# Also, if `table` is a zip object (Python 3), it will already have been exhausted by
# tabulate above, so convert it to a list first if you need to write it out as well.
with open('privacyFreqTable.csv', 'wb') as f:
    w = csv.writer(f)
    w.writerows(table)
Adding Context to Word Frequency Counts.ipynb
mediagestalt/Adding-Context
mit
Part 2: Counting the number of transcripts Another way we can provide context is to process the corpus in a different way. Instead of splitting the data by word, we will split it in larger chunks pertaining to each individual transcript. Each transcript corresponds to a unique debate but starts with exactly the same formatting, making the files easy to split. The text below shows the beginning of a transcript. The first words are OFFICIAL REPORT (HANSARD). <img src="hansardText.png"> Here we will pass the files to another variable, called corpus_1. Instead of removing capitalization and punctuation, all we will do is split the files at every occurence of OFFICIAL REPORT (HANSARD).
corpus_1 = []
for filename in list_textfiles('../Counting Word Frequencies/data'):
    text = read_file(filename)
    words = text.split(" OFFICIAL REPORT (HANSARD)")
    corpus_1.append(words)
Adding Context to Word Frequency Counts.ipynb
mediagestalt/Adding-Context
mit
Now, we can count the number of files in each dataset. This is also an important activity for error-checking. While it is easy to trust the numerical output of the code when it runs successfully, we must always be sure to check that the code is actually performing in exactly the way we want it to. In this case, these numbers can be cross-referenced with the original XML data, where each transcript exists as its own file. A quick check of the directory shows that the numbers are correct.
for files, names in zip(corpus_1, filenames):
    print("There are", len(files), "files in", names)
Adding Context to Word Frequency Counts.ipynb
mediagestalt/Adding-Context
mit
Here is a screenshot of some of the raw data. We can see that there are <u>97</u> files in 2006, <u>117</u> in 2007 and <u>93</u> in 2008. The rest of the data is also correct. <img src="filecount.png"> Now we can compare the number of occurrences of privacy with the number of debates occurring in each dataset.
for names, files, words in zip(filenames, corpus_1, corpus):
    print("In", names, "there were", len(files), "debates. The word privacy was said",
          count_in_list('privacy', words), "times.")
Adding Context to Word Frequency Counts.ipynb
mediagestalt/Adding-Context
mit
These numbers confirm our earlier results. There is a clear indication that the usage of the term privacy is increasing, with major changes occurring between the years 2008 and 2009, as well as between 2012 and 2014. This trend is also clearly observable between the 39th and 40th sittings of Parliament. Part 3: Looking at the corpus as a whole While chunking the corpus into pieces can help us understand the distribution or dispersion of words throughout the corpus, it's valuable to look at the corpus as a whole. Here we will create a third corpus variable corpus_3 that only contains the files named 39, 40, and 41. Note the new directory named data2. We only need these files; if we used all of the files we would literally duplicate the results.
corpus_3 = []
for filename in list_textfiles('../Counting Word Frequencies/data2'):
    text = read_file(filename)
    words = text.split()
    clean = [w.lower() for w in words if w.isalpha()]
    corpus_3.append(clean)
Adding Context to Word Frequency Counts.ipynb
mediagestalt/Adding-Context
mit
Now we will combine the three lists into one large list and assign it to the variable large.
large = list(sum(corpus_3, []))
Adding Context to Word Frequency Counts.ipynb
mediagestalt/Adding-Context
mit
We can use the same calculations to determine the total number of occurrences of privacy, as well as the total number of words in the corpus. We can also calculate the overall ratio of privacy to the total number of words.
print("There are", count_in_list('privacy', large), "occurrences of the word 'privacy' and a total of",
      len(large), "words.")
print("The ratio of instances of privacy to total number of words in the corpus is:",
      '{:.6f}'.format(float(count_in_list("privacy", large)) / (float(len(large)))), "or",
      '{:.3f}'.format((float(count_in_list("privacy", large)) / (float(len(large)))) * 100), "%")
Adding Context to Word Frequency Counts.ipynb
mediagestalt/Adding-Context
mit
Another type of word frequency statistic we can generate is a type/token ratio. The types are the total number of unique words in the corpus, while the tokens are the total number of words. The type/token ratio is used to determine the variability of the language used in the text. The higher the ratio, the more complex the text will be. First we'll determine the total number of types, using <i>Python's</i> set function.
print("There are", (len(set(large))), "unique words in the Hansard corpus.")
Adding Context to Word Frequency Counts.ipynb
mediagestalt/Adding-Context
mit
Now we can divide the types by the tokens to determine the ratio.
print("The type/token ratio is:", ('{:.6f}'.format(len(set(large)) / (float(len(large))))), "or",
      '{:.3f}'.format(len(set(large)) / (float(len(large))) * 100), "%")
Adding Context to Word Frequency Counts.ipynb
mediagestalt/Adding-Context
mit
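As a small extension (not in the original notebook), the same ratio can be computed separately for each file in `corpus` to see whether lexical variety changes over time; this reuses the corpus and filenames lists built earlier.
```
for words, names in zip(corpus, filenames):
    ttr = len(set(words)) / float(len(words))
    print("Type/token ratio for", names, ":", '{:.3f}'.format(ttr * 100), "%")
```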
Finally, we will use the NLTK module to create a graph that shows the top 50 most frequent words in the Hansard corpus. Although privacy will not appear in the graph, it's always interesting to see what types of words are most common, and what their distribution is. NLTK will be introduced with more detail in the next section featuring concordance outputs, but here all we need to know is that we assign our variable large to the NLTK function Text in order to work with the corpus data. From there we can determine the frequency distribution for the whole text.
text = nltk.Text(large)
fd = nltk.FreqDist(text)
Adding Context to Word Frequency Counts.ipynb
mediagestalt/Adding-Context
mit
Here we will pass the frequency distribution to the plot function to produce a graph. While it's a little hard to read, the most commonly used word in the Hansard corpus is the, with a frequency of just over 400,000 occurrences. The next most frequent word is to, which has a frequency of only about 225,000 occurrences, almost half that of the most common word. The first 10 most frequent words appear with a much greater frequency than any of the other words in the corpus.
%matplotlib inline
fd.plot(50, cumulative=False)
Adding Context to Word Frequency Counts.ipynb
mediagestalt/Adding-Context
mit
Another feature of the NLTK frequency distribution function is the generation of a list of hapaxes. These are words that appear only once in the entire corpus. While not meaningful for this study, it's an interesting way to explore the data.
fd.hapaxes()
Adding Context to Word Frequency Counts.ipynb
mediagestalt/Adding-Context
mit
Rating-specialized model Depending on the weights we assign, the model will encode a different balance of the tasks. Let's start with a model that only considers ratings.
# Here, configuring the model with losses and metrics.
# TODO 1: Your code goes here.

cached_train = train.shuffle(100_000).batch(8192).cache()
cached_test = test.batch(4096).cache()

# Training the ratings model.
model.fit(cached_train, epochs=3)
metrics = model.evaluate(cached_test, return_dict=True)

print(f"Retrieval top-100 accuracy: {metrics['factorized_top_k/top_100_categorical_accuracy']:.3f}.")
print(f"Ranking RMSE: {metrics['root_mean_squared_error']:.3f}.")
courses/machine_learning/deepdive2/recommendation_systems/labs/multitask.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
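A hedged sketch of what the TODO might look like, following the pattern of the standard TensorFlow Recommenders multitask tutorial. It assumes a MovielensModel class defined earlier in this lab whose constructor takes rating_weight and retrieval_weight arguments; for the rating-specialized model the retrieval weight is zero, the retrieval-specialized model flips the two weights, and the joint model sets both to 1.
```
import tensorflow as tf

# Assumed to be defined earlier in the lab notebook.
# Rating-only: rating_weight=1.0, retrieval_weight=0.0
# Retrieval-only: rating_weight=0.0, retrieval_weight=1.0
# Joint: rating_weight=1.0, retrieval_weight=1.0
model = MovielensModel(rating_weight=1.0, retrieval_weight=0.0)
model.compile(optimizer=tf.keras.optimizers.Adagrad(0.1))
```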
The model does OK on predicting ratings (with an RMSE of around 1.11), but performs poorly at predicting which movies will be watched or not: its top-100 accuracy is almost 4 times worse than that of a model trained solely to predict watches. Retrieval-specialized model Let's now try a model that focuses on retrieval only.
# Here, configuring the model with losses and metrics.
# TODO 2: Your code goes here.

# Training the retrieval model.
model.fit(cached_train, epochs=3)
metrics = model.evaluate(cached_test, return_dict=True)

print(f"Retrieval top-100 accuracy: {metrics['factorized_top_k/top_100_categorical_accuracy']:.3f}.")
print(f"Ranking RMSE: {metrics['root_mean_squared_error']:.3f}.")
courses/machine_learning/deepdive2/recommendation_systems/labs/multitask.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
We get the opposite result: a model that does well on retrieval, but poorly on predicting ratings. Joint model Let's now train a model that assigns positive weights to both tasks.
# Here, configuring the model with losses and metrics.
# TODO 3: Your code goes here.

# Training the joint model.
model.fit(cached_train, epochs=3)
metrics = model.evaluate(cached_test, return_dict=True)

print(f"Retrieval top-100 accuracy: {metrics['factorized_top_k/top_100_categorical_accuracy']:.3f}.")
print(f"Ranking RMSE: {metrics['root_mean_squared_error']:.3f}.")
courses/machine_learning/deepdive2/recommendation_systems/labs/multitask.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Proper use of Matplotlib We will use interactive plots inline in the notebook. This feature is enabled through:
%matplotlib
import matplotlib.pyplot as plt
import numpy as np

# define a figure, which can contain several plots; resolution and so on can be set here
fig2 = plt.figure()
# add one axis; axes are the actual plots where you put data; add_subplot takes (nx, ny, index)
ax = fig2.add_subplot(1, 1, 1)
notebooks/03-Plotting.ipynb
aboucaud/python-euclid2016
bsd-3-clause
Add a curve with a title to the plot
x = np.linspace(0, 2*np.pi)
ax.plot(x, np.sin(x), '+')
ax.set_title('this title')
plt.show()

# plt.subplots() is a simpler syntax to add one axis into the figure (we will stick to this)
fig, ax = plt.subplots()
ax.plot(x, np.sin(x), '+')
ax.set_title('simple subplot')
notebooks/03-Plotting.ipynb
aboucaud/python-euclid2016
bsd-3-clause
A long list of markers can be found at http://matplotlib.org/api/markers_api.html and, as for the colors, there is a nice discussion at http://stackoverflow.com/questions/22408237/named-colors-in-matplotlib
All the components of a figure can be accessed through the 'Figure' object
print(type(fig))
print(dir(fig))
print(fig.axes)
print('This is the x-axis object', fig.axes[0].xaxis)
print('And this is the y-axis object', fig.axes[0].yaxis)

# arrow pointing to the origin of the axes
ax_arrow = ax.annotate('ax = fig.axes[0]',
                       xy=(0, -1),        # tip of the arrow
                       xytext=(1, -0.5),  # location of the text
                       arrowprops={'facecolor': 'red', 'shrink': 0.05})

# arrow pointing to the x axis
x_ax_arrow = ax.annotate('ax.xaxis',
                         xy=(3, -1),
                         xytext=(3, -0.5),
                         arrowprops={'facecolor': 'red', 'shrink': 0.05})
xax = ax.xaxis

# arrow pointing to the y axis
y_ax_arrow = ax.annotate('ax.yaxis',
                         xy=(0, 0),
                         xytext=(1, 0.5),
                         arrowprops={'facecolor': 'red', 'shrink': 0.05})
notebooks/03-Plotting.ipynb
aboucaud/python-euclid2016
bsd-3-clause
Add labels to the x and y axes
# add an ascii text label
# this is equivalent to:
# ax.set_xlabel('x')
xax.set_label_text('x')

# add latex-rendered text to the y axis
ax.set_ylabel('$sin(x)$', size=20, color='g', rotation=0)
notebooks/03-Plotting.ipynb
aboucaud/python-euclid2016
bsd-3-clause
Finally dump the figure to a png file
fig.savefig('myplot.png')
!ls
!eog myplot.png
notebooks/03-Plotting.ipynb
aboucaud/python-euclid2016
bsd-3-clause
Let's define a function that creates an empty base plot to which we will add stuff for each demonstration. The function returns the figure and the axes object.
from matplotlib import pyplot as plt
import numpy as np

def create_base_plot():
    fig, ax = plt.subplots()
    ax.set_title('sample figure')
    return fig, ax

def plot_something():
    fig, ax = create_base_plot()
    x = np.linspace(0, 2*np.pi)
    ax.semilogx(x, np.cos(x)*np.cos(x/2), 'r--.')
    plt.show()
notebooks/03-Plotting.ipynb
aboucaud/python-euclid2016
bsd-3-clause
Log plots
fig, ax = create_base_plot()

# normal-xlog plots
ax.semilogx(x, np.cos(x)*np.cos(x/2), 'r--.')

# clear the plot and plot a function using the y axis in log scale
ax.clear()
ax.semilogy(x, np.exp(x))

# you can (un)set it whenever you want
#ax.set_yscale('linear')  # change the y axis to linear scale
#ax.set_yscale('log')     # change the y axis to log scale

# you can also make loglog plots
#ax.clear()
#ax.loglog(x, np.exp(x)*np.sin(x))

plt.setp(ax, **dict(yscale='log', xscale='log'))
notebooks/03-Plotting.ipynb
aboucaud/python-euclid2016
bsd-3-clause
This is equivalent to: ax.plot(x, np.exp(x)*np.sin(x)) followed by plt.setp(ax, 'yscale', 'log', 'xscale', 'log'). Here we have introduced a new method of setting property values via pyplot.setp. setp takes a matplotlib object as its first argument. Each pair of positional arguments after that is treated as a key/value pair: the set method name and its value. For example: ax.set_yscale('linear') becomes plt.setp(ax, 'yscale', 'linear'). This is useful if you need to set lots of properties, such as:
plt.setp(ax,
         'xscale', 'linear',
         'xlim', [1, 5],
         'ylim', [0.1, 10],
         'xlabel', 'x',
         'ylabel', 'y',
         'title', 'foo',
         'xticks', [1, 2, 3, 4, 5],
         'yticks', [0.1, 1, 10],
         'yticklabels', ['low', 'medium', 'high'])
notebooks/03-Plotting.ipynb
aboucaud/python-euclid2016
bsd-3-clause
Histograms
fig1, ax = create_base_plot()
n, bins, patches = ax.hist(np.random.normal(0, 0.1, 10000), bins=50)
notebooks/03-Plotting.ipynb
aboucaud/python-euclid2016
bsd-3-clause
Subplots Making subplots is relatively easy. Just pass the shape of the grid of plots to plt.subplots() that was used in the above examples.
# Create one figure with two plots/axes, with their x-axis shared
fig, (ax1, ax2) = plt.subplots(2, sharex=True)
ax1.plot(x, np.sin(x), '-.', color='r', label='first line')
other = ax2.plot(x, np.cos(x)*np.cos(x/2), 'o-', linewidth=3, label='other')
ax1.legend()
ax2.legend()

# adjust the spacing between the axes
fig.subplots_adjust(hspace=0.0)

# add a scatter plot to the first axis
ax1.scatter(x, np.sin(x) + np.random.normal(0, 0.1, np.size(x)))
notebooks/03-Plotting.ipynb
aboucaud/python-euclid2016
bsd-3-clause
create a 3x3 grid of plots
fig, axs = plt.subplots(3, 3)
print(axs.shape)

# add an index to all the subplots
for ax_index, ax in enumerate(axs.flatten()):
    ax.set_title(ax_index)

# remove all ticks
for ax in axs.flatten():
    plt.setp(ax, 'xticks', [], 'yticks', [])

fig.subplots_adjust(hspace=0, wspace=0)

# plot a curve in the diagonal subplots
for ax, func in zip(axs.diagonal(), [np.sin, np.cos, np.exp]):
    ax.plot(x, func(x))
notebooks/03-Plotting.ipynb
aboucaud/python-euclid2016
bsd-3-clause
Images and contours
xx, yy = np.mgrid[-2:2:100j, -2:2:100j]
img = np.sin(xx) + np.cos(yy)

fig, ax = create_base_plot()
# to have 0,0 in the lower left corner and no interpolation
img_plot = ax.imshow(img, origin='lower', interpolation='None')
# to add a grid to any axis
ax.grid()

img_plot.set_cmap('hot')       # changing the colormap
img_plot.set_cmap('spectral')  # changing the colormap again ('nipy_spectral' in newer matplotlib)
colorb = fig.colorbar(img_plot)  # adding a color bar
img_plot.set_clim(-0.5, 0.5)     # changing the dynamical range

# add contour levels
img_contours = ax.contour(img, [-1, -0.5, 0.0, 0.5])
plt.clabel(img_contours, inline=True, fontsize=20)
notebooks/03-Plotting.ipynb
aboucaud/python-euclid2016
bsd-3-clause
Animation
from IPython.display import HTML
import matplotlib.animation as animation

def f(x, y):
    return np.sin(x) + np.cos(y)

fig, ax = create_base_plot()
im = ax.imshow(f(xx, yy), cmap=plt.get_cmap('viridis'))

def updatefig(*args):
    global xx, yy
    xx += np.pi / 15.
    yy += np.pi / 20.
    im.set_array(f(xx, yy))
    return im,

ani = animation.FuncAnimation(fig, updatefig, interval=50, blit=True)
_ = ani.to_html5_video()

# change title during animation!!
ax.set_title('runtime title')
notebooks/03-Plotting.ipynb
aboucaud/python-euclid2016
bsd-3-clause
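The cell above computes ani.to_html5_video() but discards the result; a small follow-up (assuming ffmpeg is available for the HTML5 encoding) that actually displays the animation inline in the notebook:
```
from IPython.display import HTML
HTML(ani.to_html5_video())
```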
Styles Configuring matplotlib Most of the matplotlib code that gets written is about styling rather than actual plotting. One feature that can be of great help in this case is the matplotlib.style module. In this notebook, we will go through the available matplotlib styles and their corresponding configuration files. Then we will explain the two ways of using the styles and finally show you how to write a personalized style. Pre-configured style files The plt.style.available variable returns a list of the names of the pre-configured matplotlib style files.
print('\n'.join(plt.style.available))

x = np.arange(0, 10, 0.01)

def f(x, t):
    return np.sin(x) * np.exp(1 - x / 10 + t / 2)

def simple_plot(style):
    plt.figure()
    with plt.style.context(style, after_reset=True):
        for t in range(5):
            plt.plot(x, f(x, t))
        plt.title('Simple plot')

simple_plot('ggplot')
simple_plot('dark_background')
simple_plot('grayscale')
simple_plot('fivethirtyeight')
simple_plot('bmh')
notebooks/03-Plotting.ipynb
aboucaud/python-euclid2016
bsd-3-clause
Content of the style files A matplotlib style file is a simple text file containing the desired matplotlib rcParam configuration, with the .mplstyle extension. Let's display the content of the 'ggplot' style.
import os
ggplotfile = os.path.join(plt.style.core.BASE_LIBRARY_PATH, 'ggplot.mplstyle')
!cat $ggplotfile
notebooks/03-Plotting.ipynb
aboucaud/python-euclid2016
bsd-3-clause
Maybe the most interesting feature of this style file is the redefinition of the color cycle using hexadecimal notation. This allows the user to define is own color palette for its multi-line plots. use versus context There are two ways of using the matplotlib styles. plt.style.use(style) plt.style.context(style): The use method applied at the beginning of a script will be the default choice in most cases when the style is to be set for the entire script. The only issue is that it sets the matplotlib style for the given Python session, meaning that a second call to use with a different style will only apply new style parameters and not reset the first style. That is if the axes.grid is set to True by the first style and there is nothing concerning the grid in the second style config, the grid will remain set to True which is not matplotlib default. On the contrary, the context method will be useful when only one or two figures are to be set to a given style. It shall be used with the with statement to create a context manager in which the plot will be made. Let's illustrate this.
plt.style.use('ggplot')
plt.figure()
plt.plot(x, f(x, 0))
notebooks/03-Plotting.ipynb
aboucaud/python-euclid2016
bsd-3-clause
The 'ggplot' style has been applied to the current session. One of its features that differs from the standard matplotlib configuration is to draw the ticks outside the main figure (xtick.direction: out).
with plt.style.context('dark_background'):
    plt.figure()
    plt.plot(x, f(x, 1))
notebooks/03-Plotting.ipynb
aboucaud/python-euclid2016
bsd-3-clause
Now using the 'dark_background' style as a context, we can spot the main changes (background, line color, axis color) and we can also see the outside ticks, although they are not part of this particular style. This is the 'ggplot' tick-direction setting that has not been overwritten by the new style. Once the with block has ended, the style goes back to its previous status, that is the 'ggplot' style.
plt.figure()
plt.plot(x, f(x, 2))
notebooks/03-Plotting.ipynb
aboucaud/python-euclid2016
bsd-3-clause
Custom style file Starting from these configured files, it is now easy to create our own styles for textbook figures and talk figures and switch from one to another with a single code line, plt.style.use('mystyle'), at the beginning of the plotting script. Where to create it? matplotlib will look for user style files at the following path:
print(plt.style.core.USER_LIBRARY_PATHS)
notebooks/03-Plotting.ipynb
aboucaud/python-euclid2016
bsd-3-clause
Note: The directory corresponding to this path will most probably not exist so one will need to create it.
styledir = plt.style.core.USER_LIBRARY_PATHS[0]
!mkdir -p $styledir
notebooks/03-Plotting.ipynb
aboucaud/python-euclid2016
bsd-3-clause
One can now copy an existing style file to serve as a boilerplate.
mystylefile = os.path.join(styledir, 'mystyle.mplstyle')
!cp $ggplotfile $mystylefile
!cd $styledir

%%file mystyle.mplstyle
font.size: 16.0          # large font
axes.linewidth: 2
axes.grid: True
axes.titlesize: x-large
axes.labelsize: x-large
axes.labelcolor: 555555
axes.axisbelow: True
xtick.color: 555555
xtick.direction: out
ytick.color: 555555
ytick.direction: out
grid.color: white
grid.linestyle: :        # dotted line
notebooks/03-Plotting.ipynb
aboucaud/python-euclid2016
bsd-3-clause
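A minimal usage sketch, assuming mystyle.mplstyle ended up in the user style directory shown above (note that !cd runs in a subshell and does not change the notebook's working directory, so the %%file cell may need %cd or a full path). Once the file is in place, reloading the style library should make 'mystyle' available like any built-in style:
```
plt.style.reload_library()       # pick up the newly written mystyle.mplstyle
print('mystyle' in plt.style.available)

with plt.style.context('mystyle'):
    plt.figure()
    plt.plot(x, f(x, 3))
```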
D3
%matplotlib inline import matplotlib.pyplot as plt import numpy as np import mpld3 mpld3.enable_notebook() # Scatter points fig, ax = plt.subplots(subplot_kw=dict(axisbg='#EEEEEE')) ax.grid(color='white', linestyle='solid') N = 50 scatter = ax.scatter(np.random.normal(size=N), np.random.normal(size=N), c=np.random.random(size=N), s = 1000 * np.random.random(size=N), alpha=0.3, cmap=plt.cm.jet) ax.set_title("D3 Scatter Plot", size=18); import mpld3 mpld3.display(fig) from mpld3 import plugins fig, ax = plt.subplots(subplot_kw=dict(axisbg='#EEEEEE')) ax.grid(color='white', linestyle='solid') N = 50 scatter = ax.scatter(np.random.normal(size=N), np.random.normal(size=N), c=np.random.random(size=N), s = 1000 * np.random.random(size=N), alpha=0.3, cmap=plt.cm.jet) ax.set_title("D3 Scatter Plot (with tooltips!)", size=20) labels = ['point {0}'.format(i + 1) for i in range(N)] fig.plugins = [plugins.PointLabelTooltip(scatter, labels)]
notebooks/03-Plotting.ipynb
aboucaud/python-euclid2016
bsd-3-clause
Seaborn
%matplotlib
plot_something()

import seaborn
plot_something()
notebooks/03-Plotting.ipynb
aboucaud/python-euclid2016
bsd-3-clause
Packet Forwarding This category of questions allows you to query how different types of traffic are forwarded by the network and whether endpoints are able to communicate. You can analyze these aspects in a few different ways. Traceroute Bi-directional Traceroute Reachability Bi-directional Reachability Loop detection Multipath Consistency for host-subnets Multipath Consistency for router loopbacks
bf.set_network('generate_questions')
bf.set_snapshot('generate_questions')
docs/source/notebooks/forwarding.ipynb
batfish/pybatfish
apache-2.0
Traceroute Traces the path(s) for the specified flow. Performs a virtual traceroute in the network from a starting node. A destination IP and ingress (source) node must be specified. Other IP headers are given default values if unspecified. Unlike a real traceroute, this traceroute is directional. That is, for it to succeed, the reverse connectivity is not needed. This feature can help debug connectivity issues by decoupling the two directions. Inputs

Name | Description | Type | Optional | Default Value
--- | --- | --- | --- | ---
startLocation | Location (node and interface combination) to start tracing from. | LocationSpec | False |
headers | Packet header constraints. | HeaderConstraints | False |
maxTraces | Limit the number of traces returned. | int | True |
ignoreFilters | If set, filters/ACLs encountered along the path are ignored. | bool | True |

Invocation
result = bf.q.traceroute(startLocation='@enter(as2border1[GigabitEthernet2/0])', headers=HeaderConstraints(dstIps='2.34.201.10', srcIps='8.8.8.8')).answer().frame()
docs/source/notebooks/forwarding.ipynb
batfish/pybatfish
apache-2.0
Return Value

Name | Description | Type
--- | --- | ---
Flow | The flow | Flow
Traces | The traces for this flow | Set of Trace
TraceCount | The total number of traces for this flow | int

Retrieving the flow definition
result.Flow
docs/source/notebooks/forwarding.ipynb
batfish/pybatfish
apache-2.0
Retrieving the detailed Trace information
len(result.Traces)
result.Traces[0]
docs/source/notebooks/forwarding.ipynb
batfish/pybatfish
apache-2.0
Evaluating the first Trace
result.Traces[0][0]
docs/source/notebooks/forwarding.ipynb
batfish/pybatfish
apache-2.0
Retrieving the disposition of the first Trace
result.Traces[0][0].disposition
docs/source/notebooks/forwarding.ipynb
batfish/pybatfish
apache-2.0
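Since the answer frame is a pandas DataFrame whose Traces column holds lists of Trace objects (each exposing the disposition attribute used above), one can, for example, tally the dispositions of all traces for the first flow; a small hedged sketch reusing the `result` frame from above:
```
from collections import Counter

# Count trace dispositions for the first flow in the answer frame.
print(Counter(trace.disposition for trace in result.Traces[0]))
```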
Retrieving the first hop of the first Trace
result.Traces[0][0][0]
docs/source/notebooks/forwarding.ipynb
batfish/pybatfish
apache-2.0
Retrieving the last hop of the first Trace
result.Traces[0][0][-1]

bf.set_network('generate_questions')
bf.set_snapshot('generate_questions')
docs/source/notebooks/forwarding.ipynb
batfish/pybatfish
apache-2.0
Bi-directional Traceroute Traces the path(s) for the specified flow, along with path(s) for reverse flows. This question performs a virtual traceroute in the network from a starting node. A destination IP and ingress (source) node must be specified. Other IP headers are given default values if unspecified. If the trace succeeds, a traceroute is performed in the reverse direction. Inputs

Name | Description | Type | Optional | Default Value
--- | --- | --- | --- | ---
startLocation | Location (node and interface combination) to start tracing from. | LocationSpec | False |
headers | Packet header constraints. | HeaderConstraints | False |
maxTraces | Limit the number of traces returned. | int | True |
ignoreFilters | If set, filters/ACLs encountered along the path are ignored. | bool | True |

Invocation
result = bf.q.bidirectionalTraceroute(startLocation='@enter(as2border1[GigabitEthernet2/0])', headers=HeaderConstraints(dstIps='2.34.201.10', srcIps='8.8.8.8')).answer().frame()
docs/source/notebooks/forwarding.ipynb
batfish/pybatfish
apache-2.0