The ranking_ attribute shows the ranking of the input features, while the estimator_ attribute gives access to the trained classifier. Notably, when a linear kernel is used for classification, the coef_ attribute of estimator_ is the matrix of classification weights for the features. The size of this weight matrix depends on n_features_to_select and on the number of classes in the data: for example, this example classifies ten digits and selects a single feature for the final fit, so we obtain a 45×1 coefficient matrix, where 45 is the number of pairwise decision functions needed by the one-vs-one multi-class scheme, i.e. the binomial coefficient C(10, 2) = 45. (3) Plot the weight ranking of each pixel. After obtaining the ranking of each pixel position for digit classification, we draw the rankings as colors at the corresponding positions; larger values mean the pixel is a less important feature. The result shows that the least important features mostly lie on the outer border of the image. In the training images the outer pixels are mostly blank, which is why they matter little.
# Plot pixel ranking
plt.matshow(ranking, cmap=plt.cm.Blues)
plt.colorbar()
plt.title("Ranking of pixels with RFE")
plt.show()
Feature_Selection/ipython_notebook/ex2_Recursive_feature_elimination.ipynb
dryadb11781/machine-learning-python
bsd-3-clause
(4) Source code Python source code: plot_rfe_digits.py
print(__doc__)

from sklearn.svm import SVC
from sklearn.datasets import load_digits
from sklearn.feature_selection import RFE
import matplotlib.pyplot as plt

# Load the digits dataset
digits = load_digits()
X = digits.images.reshape((len(digits.images), -1))
y = digits.target

# Create the RFE object and rank each pixel
svc = SVC(kernel="linear", C=1)
rfe = RFE(estimator=svc, n_features_to_select=1, step=1)
rfe.fit(X, y)
ranking = rfe.ranking_.reshape(digits.images[0].shape)

# Plot pixel ranking
plt.matshow(ranking, cmap=plt.cm.Blues)
plt.colorbar()
plt.title("Ranking of pixels with RFE")
plt.show()
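As a quick follow-up to the coefficient-matrix discussion above, here is a minimal sketch (it assumes the rfe object fitted by the source code above is still in scope) confirming where the 45×1 shape comes from:

```python
# Inspect the fitted RFE object (assumes `rfe` from the cell above)
print(rfe.ranking_.shape)          # (64,): one rank per pixel of the 8x8 digit images
print(rfe.estimator_.coef_.shape)  # expected (45, 1): C(10, 2) = 45 one-vs-one classifiers x 1 selected feature
```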
Feature_Selection/ipython_notebook/ex2_Recursive_feature_elimination.ipynb
dryadb11781/machine-learning-python
bsd-3-clause
Learn about the Model Input <br> HydroTrend will now be activated in PyMT. You can find information on the model, its developer, the papers that describe the model in more detail, and so on. Importantly, if you scroll down a bit to the Parameters list, it shows which parameters the model uses to control the simulations. The list is alphabetical and uses precisely specified 'Standard Names'. Note that every parameter has a 'default' value, so any parameter you do not list in the configure command will be run with its default.
# Get basic information about the HydroTrend model
help(hydrotrend)
notebooks/hydrotrend.ipynb
csdms/pymt
mit
Exercise 1: Explore the Hydrotrend base-case river simulation For this case study, we first create a subdirectory in which the basecase (BC) simulation will be implemented. Then we specify how long to run the simulation: 100 years at a daily time-step, i.e. 36,500 days in total. This is also the line of code where you would add other input parameters with their values.
# Set up the Hydrotrend model by indicating the number of years to run
config_file, config_folder = hydrotrend.setup("_hydrotrendBC", run_duration=100)
notebooks/hydrotrend.ipynb
csdms/pymt
mit
With the cat command you can print the contents of the two input files that HydroTrend uses. HYDRO0.HYPS: this first file specifies the river basin hypsometry - the surface area per elevation zone. The hypsometry captures the geometric characteristics of the river basin: how high the relief is, how much upland there is versus lowland, where the snowfall elevation line would be, and so on. <br> HYDRO.IN: this other file specifies the basin and climate input data.
cat _hydrotrendBC/HYDRO0.HYPS
cat _hydrotrendBC/HYDRO.IN

# In pymt one can always find out what output a model generates by using the .output_var_names method.
hydrotrend.output_var_names

# Now we initialize the model with the configure file and in the configure folder
hydrotrend.initialize(config_file, config_folder)

# This line of code lists the time parameters: when, how long, and at what timestep will the model simulation work?
hydrotrend.start_time, hydrotrend.time, hydrotrend.end_time, hydrotrend.time_step, hydrotrend.time_units

# This code declares numpy arrays for several important parameters we want to save.
n_days = int(hydrotrend.end_time)
q = np.empty(n_days)   # river discharge at the outlet
qs = np.empty(n_days)  # sediment load at the outlet
cs = np.empty(n_days)  # suspended sediment concentration for different grainsize classes at the outlet
qb = np.empty(n_days)  # bedload at the outlet

# Here we have coded up the time loop using i as the index.
# We update the model one timestep at a time, until we reach the end time.
# For each time step we also get the values of the output parameters we wish to save.
for i in range(n_days):
    hydrotrend.update()
    q[i] = hydrotrend.get_value("channel_exit_water__volume_flow_rate")
    qs[i] = hydrotrend.get_value("channel_exit_water_sediment~suspended__mass_flow_rate")
    cs[i] = hydrotrend.get_value("channel_exit_water_sediment~suspended__mass_concentration")
    qb[i] = hydrotrend.get_value("channel_exit_water_sediment~bedload__mass_flow_rate")

# We can plot the simulated output timeseries of Hydrotrend, for example the river discharge
plt.plot(q)
plt.title('HydroTrend simulation of 100 year river discharge, Waiapaoa River')
plt.ylabel('river discharge in m3/sec')
plt.show()

# Or you can plot a subset of the simulated daily timeseries using the index,
# for example the first year
plt.plot(q[0:365], 'black')
# compare with the last year
plt.plot(q[-366:-1], 'grey')
plt.title('HydroTrend simulation of first and last year discharge, Waiapaoa River')
plt.show()

# Of course, it is important to calculate statistical properties of the simulated parameters
print(q.mean())
hydrotrend.get_var_units("channel_exit_water__volume_flow_rate")
notebooks/hydrotrend.ipynb
csdms/pymt
mit
## <font color = green> Assignment 1 </font> Calculate mean water discharge Q, mean suspended load Qs, mean sediment concentration Cs, and mean bedload Qb for this 100 year simulation of the river dynamics of the Waiapaoa River. Note all values are reported as daily averages. What are the units?
# your code goes here
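One possible sketch for this assignment (assuming the arrays q, qs, cs, qb and the hydrotrend instance from the time loop above are still in scope):

```python
# Mean daily-averaged values over the 100-year basecase run
for name, series in [("Q", q), ("Qs", qs), ("Cs", cs), ("Qb", qb)]:
    print(name, series.mean())

# Units can be queried from the model itself
print(hydrotrend.get_var_units("channel_exit_water__volume_flow_rate"))
print(hydrotrend.get_var_units("channel_exit_water_sediment~suspended__mass_flow_rate"))
print(hydrotrend.get_var_units("channel_exit_water_sediment~bedload__mass_flow_rate"))
```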
notebooks/hydrotrend.ipynb
csdms/pymt
mit
<font color = green> Assignment 2 </font> Identify the highest flood event for this simulation. Is this the 100-year flood? Please list a definition of a 100 year flood, and discuss whether the modeled extreme event fits this definition. Plot the year of Q-data which includes the flood.
# Here you can calculate the maximum river discharge.
# Your code to determine which day and which year encompass the maximum discharge goes here.
# Hint: you will want to determine the index of this day first; look into numpy.argmax and numpy.argmin.
# As a sanity check you can see whether the plot y-axis seems to go up to the maximum you calculated in the previous step.
# As a sanity check you can look in the plot of all the years to see whether the timing your code predicts is correct.

# Type your explanation about the 100-year flood here.
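A sketch following the hints above (it assumes the daily discharge array q from Exercise 1 and 365-day model years):

```python
import numpy as np
import matplotlib.pyplot as plt

peak_day = np.argmax(q)       # index of the day with the highest discharge
peak_year = peak_day // 365   # simulation year containing that day
print(peak_day, peak_year, q[peak_day])

# Plot the year of Q-data that includes the flood
plt.plot(q[peak_year * 365:(peak_year + 1) * 365])
plt.title("Simulation year {} containing the maximum discharge".format(peak_year))
plt.show()
```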
notebooks/hydrotrend.ipynb
csdms/pymt
mit
<font color = green> Assignment 3 </font> Calculate the mean annual sediment load for this river system. Then compare the annual load of the Waiapaoa River to the Mississippi River. <br> To compare the mean annual load to other river systems you will need to calculate its sediment yield. Sediment yield is defined as the sediment load normalized by the river drainage area, so it can be reported in T/km2/yr.
# Your code goes here.
# You will have to sum all days of the individual years to get the annual loads, then calculate the mean over the 100 years.
# One possible trick is to use the .reshape() method.
# Plot a graph of the 100-year timeseries of the total annual loads.
# Take the mean over the 100 years.

# Your evaluation of the sediment load of the Waiapaoa River and its comparison to the Mississippi River goes here.
# Hint: use the following paper to read about the Mississippi sediment load
# (Blum, M., Roberts, H., 2009. Drowning of the Mississippi Delta due to insufficient sediment supply
# and global sea-level rise, Nature Geoscience).
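A sketch of the reshape trick (assumptions: qs holds daily-averaged suspended load in kg/s for 100 years of 365 days, and the drainage-area value below is a placeholder, not the real Waiapaoa basin area):

```python
import numpy as np

seconds_per_day = 60 * 60 * 24
annual_load_kg = (qs * seconds_per_day).reshape(100, 365).sum(axis=1)  # kg per year, one value per year
mean_annual_load_t = annual_load_kg.mean() / 1000.0                    # tonnes per year

drainage_area_km2 = 2000.0  # hypothetical placeholder -- substitute the actual basin area
sediment_yield = mean_annual_load_t / drainage_area_km2                # T/km2/yr
print(mean_annual_load_t, sediment_yield)
```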
notebooks/hydrotrend.ipynb
csdms/pymt
mit
HydroTrend Exercise 2: How does a river system respond to climate change? Two simple scenarios for the coming century. Now we will look at changing climatic conditions in a small river basin. We'll change temperature and precipitation regimes and compare discharge and sediment load characteristics to the original basecase. We will also look at the potential implications of changes in the peak events. You can modify the mean annual temperature T and the mean annual precipitation P. You can specify trends over time by modifying the parameter 'change in mean annual temperature' or 'change in mean annual precipitation'. HydroTrend runs at a daily timestep, and thus can deal with seasonal variations in temperature and precipitation for a basin. The model ingests monthly mean input values for these two climate parameters and their monthly standard deviations; ideally, these values would be derived from analysis of a long-term record of daily climate data. You can adapt seasonal trends by using the monthly values. <font color = green> Assignment 4 </font> What happens to river discharge, suspended load and bedload if the mean annual temperature in this specific river basin increases by 4 °C over the next 50 years? In this assignment we set up a new simulation for a warming climate.
# Set up a new run of the Hydrotrend model.
# Create a new config file in a different folder for input and output files, indicating the number of years to run,
# and specify the change in mean annual temperature parameter.
hydrotrendHT = pymt.models.Hydrotrend()
config_file, config_folder = hydrotrendHT.setup("_hydrotrendhighT", run_duration=50,
                                                change_in_mean_annual_temperature=0.08)

# Initialize the new simulation
hydrotrendHT.initialize(config_file, config_folder)

# The code for the time loop goes here.
# We use the abbreviation HT for the 'High Temperature' scenario.
n_days = int(hydrotrendHT.end_time)
q_HT = np.empty(n_days)   # river discharge at the outlet
qs_HT = np.empty(n_days)  # sediment load at the outlet
cs_HT = np.empty(n_days)  # suspended sediment concentration for different grainsize classes at the outlet
qb_HT = np.empty(n_days)  # bedload at the outlet

for i in range(n_days):
    hydrotrendHT.update()
    q_HT[i] = hydrotrendHT.get_value("channel_exit_water__volume_flow_rate")
    qs_HT[i] = hydrotrendHT.get_value("channel_exit_water_sediment~suspended__mass_flow_rate")
    cs_HT[i] = hydrotrendHT.get_value("channel_exit_water_sediment~suspended__mass_concentration")
    qb_HT[i] = hydrotrendHT.get_value("channel_exit_water_sediment~bedload__mass_flow_rate")

# Your code that prints out the mean river discharge, the mean sediment load and the mean bedload goes here.
# Print out these same parameters for the basecase for comparison.
notebooks/hydrotrend.ipynb
csdms/pymt
mit
<font color = green> Assignment 5 </font> So what is the effect of a warming basin temperature? How much increase or decrease of river discharge do you see after 50 years? <br> How is the mean suspended load affected? <br> How does the mean bedload change? <br> What happens to the peak event; look at the maximum sediment load event of the last 5 years of the simulation?
# type your answers here
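A possible starting point for the peak-event part of this question (a sketch; it assumes the basecase array qs from Exercise 1 and the warming-scenario array qs_HT from the cell above, both stored as daily values):

```python
print("basecase max Qs, last 5 years:      ", qs[-5 * 365:].max())
print("high-T scenario max Qs, last 5 years:", qs_HT[-5 * 365:].max())
```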
notebooks/hydrotrend.ipynb
csdms/pymt
mit
<font color = green> Assignment 6 </font> What happens to river discharge, suspended load and bedload if the mean annual precipitation would increase by 50% in this specific river basin over the next 50 years? Create a new simulation folder, High Precipitation, HP, and set up a run with a trend in future precipitation.
# Set up a new run of the Hydrotrend model.
# Create a new config file indicating the number of years to run, and specify the change in mean annual precipitation parameter.

# Initialize the new simulation

# Your code for the time loop goes here.

# Your code that prints out the mean river discharge, the mean sediment load and the mean bedload goes here.
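A sketch of how such a run could be set up. The keyword change_in_mean_annual_precipitation is assumed by analogy with the temperature parameter used above, and the rate value is a placeholder; check help(hydrotrend) for the exact parameter name and units:

```python
hydrotrendHP = pymt.models.Hydrotrend()
config_file, config_folder = hydrotrendHP.setup(
    "_hydrotrendhighP",
    run_duration=50,
    change_in_mean_annual_precipitation=0.01,  # assumed keyword and rate; tune to reach +50% over 50 years
)
hydrotrendHP.initialize(config_file, config_folder)

n_days = int(hydrotrendHP.end_time)
q_HP = np.empty(n_days)   # river discharge at the outlet
qs_HP = np.empty(n_days)  # suspended sediment load at the outlet
qb_HP = np.empty(n_days)  # bedload at the outlet
for i in range(n_days):
    hydrotrendHP.update()
    q_HP[i] = hydrotrendHP.get_value("channel_exit_water__volume_flow_rate")
    qs_HP[i] = hydrotrendHP.get_value("channel_exit_water_sediment~suspended__mass_flow_rate")
    qb_HP[i] = hydrotrendHP.get_value("channel_exit_water_sediment~bedload__mass_flow_rate")

print(q_HP.mean(), qs_HP.mean(), qb_HP.mean())
```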
notebooks/hydrotrend.ipynb
csdms/pymt
mit
<font color = green> Assignment 7 </font> In addition, climate model predictions indicate that perhaps precipitation intensity and variability could increase. How would you possibly model this? Discuss how you would modify your input settings for precipitation.
#type your answer here
notebooks/hydrotrend.ipynb
csdms/pymt
mit
Exercise 3: How do humans affect river sediment loads? Here we will look at the effect of humans in a river basin. Humans can accelerate erosion processes, or reduce the sediment loads traveling through a river system. Both concepts can be simulated; first, run 3 simulations, systematically increasing the anthropogenic factor (the valid range is 0.5-8.0). <font color = green> Assignment 8 </font> Describe in your own words the meaning of the human-induced erosion factor (Eh). This factor is parametrized as the "Anthropogenic" factor in HydroTrend. Read more about this in: Syvitski & Milliman, 2007. Geology, Geography, and Humans Battle for Dominance over the Delivery of Fluvial Sediment to the Coastal Ocean. 115, p. 1–19.
# Your explanation goes here. Can you list two reasons why this factor would be unsuitable or would fall short?
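For the three simulations mentioned at the start of Exercise 3, a loop like the following could work. This is only a sketch: the keyword anthropogenic is an assumption, so check help(hydrotrend) or the Parameters list for the exact HydroTrend parameter name:

```python
for eh in [0.5, 2.0, 8.0]:
    model = pymt.models.Hydrotrend()
    cfg_file, cfg_dir = model.setup("_hydrotrend_Eh_{}".format(eh),
                                    run_duration=50,
                                    anthropogenic=eh)  # assumed keyword for the human-induced erosion factor
    model.initialize(cfg_file, cfg_dir)

    n_days = int(model.end_time)
    qs_eh = np.empty(n_days)
    for i in range(n_days):
        model.update()
        qs_eh[i] = model.get_value("channel_exit_water_sediment~suspended__mass_flow_rate")
    print("Eh =", eh, "mean Qs =", qs_eh.mean())
```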
notebooks/hydrotrend.ipynb
csdms/pymt
mit
<font color = green> Bonus Assignment 9 </font> Model a scenario of a drinking water supply reservoir to be planned in the coastal area of the basin. The reservoir would have 800 km² of contributing drainage area and be 3 km long, 200 m wide, and 100 m deep. Set up a simulation with these parameters.
# Set up a new 50-year run of the Hydrotrend model.
# Create a new directory, and a config file indicating the number of years to run, and specify the different reservoir parameters.

# Initialize the new simulation

# Your code for the time loop and update loop goes here.

# Plot a bar graph comparing Q mean, Qs mean, Q max, Qs max, Qb mean and Qb max for the basecase run and the reservoir run.

# Describe how such a reservoir affects the water and sediment load at the coast (i.e. downstream of the reservoir).
notebooks/hydrotrend.ipynb
csdms/pymt
mit
<font color = green> Bonus Assignment 10 </font> Set up a simulation for a different river basin. This means you would need to change the HYDRO0.HYPS file and change some climatic parameters. There are several hypsometric files packaged with HydroTrend; you can use one of those, but you are welcome to do something different!
# Write a short motivation and description of your scenario.

# Make a 2-panel plot using the subplot functionality of matplotlib.
# One panel would show the hypsometry of the Waiapaoa and the other panel the hypsometry of your selected river basin.

# Set up a new 50-year run of the Hydrotrend model.
# Create a new directory for this different basin.

# Initialize the new simulation

# Your code for the time loop and update loop goes here.

# Plot a line graph comparing Q mean and Qs mean for the basecase run and the new river basin run.
notebooks/hydrotrend.ipynb
csdms/pymt
mit
4. Calculate the coefficient of correlation (r) and generate the scatter plot. Does there seem to be a correlation worthy of investigation?
df.plot(kind='scatter', x='Exposure', y='Mortality')
r = df.corr()['Exposure']['Mortality']
r
class6/donow/Lee_Dongjin_6_Donow.ipynb
ledeprogram/algorithms
gpl-3.0
Yes, there seems to be a correlation worthy of investigation. 5. Create a linear regression model based on the available data to predict the mortality rate given a level of exposure
lm = smf.ols(formula="Mortality~Exposure", data=df).fit()
intercept, slope = lm.params
lm.params
class6/donow/Lee_Dongjin_6_Donow.ipynb
ledeprogram/algorithms
gpl-3.0
6. Plot the linear regression line on the scatter plot of values. Calculate the r^2 (coefficient of determination)
# Method 01 (what we've learned in class)
df.plot(kind='scatter', x='Exposure', y='Mortality')
plt.plot(df["Exposure"], slope * df["Exposure"] + intercept, "-", color="red")

# Method 02 (another version) - so much harder than what we have learned
def plot_correlation(ds, x, y, ylim=(100, 240)):
    plt.xlim(0, 14)
    plt.ylim(ylim[0], ylim[1])
    plt.scatter(ds[x], ds[y], alpha=0.6, s=50)
    for abc, row in ds.iterrows():
        plt.text(row[x], row[y], abc)
    plt.xlabel(x)
    plt.ylabel(y)
    # Correlation trend line
    trend_variable = np.poly1d(np.polyfit(ds[x], ds[y], 1))
    trendx = np.linspace(0, 14, 4)
    plt.plot(trendx, trend_variable(trendx), color='r')
    r = sp.stats.pearsonr(ds[x], ds[y])
    plt.text(trendx[3], trend_variable(trendx[3]), 'r={:.3f}'.format(r[0]), color='r')
    plt.tight_layout()

plot_correlation(df, 'Exposure', 'Mortality')

r_squared = r ** 2
r_squared
class6/donow/Lee_Dongjin_6_Donow.ipynb
ledeprogram/algorithms
gpl-3.0
7. Predict the mortality rate (Cancer per 100,000 man years) given an index of exposure = 10
def predicting_mortality_rate(exposure):
    return intercept + float(exposure) * slope

predicting_mortality_rate(10)
class6/donow/Lee_Dongjin_6_Donow.ipynb
ledeprogram/algorithms
gpl-3.0
Custom experimental setup with item sampling The EigenRec paper follows a specific experimental setup, mainly based on the settings proposed in another of my favorite papers, Performance of recommender algorithms on top-n recommendation tasks, which is devoted to the PureSVD model itself. For evaluation purposes, the authors sample 1.4% of all available ratings and additionally shrink the resulting sample by leaving 5-star ratings only. Quote from the paper (Section 4.2.1): <div class="alert alert-block alert-info">"...we form a probeset $\mathcal{P}$ by randomly sampling 1.4% of the ratings of the dataset, and we use each item $v_j$, rated with 5-star by user $u_i$ in $\mathcal{P}$ to create the test set $\mathcal{T}$..."</div> This setup can be easily implemented in Polara with the help of the test_ratio and holdout_size parameters of the RecommenderData instance. It requires a two-step preparation procedure. The first step is to sample data without filtering top-rated items. The following configuration does the trick:
data_model.test_ratio = 0          # do not split dataset into folds, use entire dataset for sampling
data_model.holdout_size = 0.014    # sample this fraction of ratings from data
data_model.random_holdout = True   # sample ratings randomly (not just 5-star)
data_model.warm_start = False      # allow test users to be part of the training (excluding holdout items)
data_model.prepare()               # perform sampling
examples/Reproducing_EIGENREC_results.ipynb
Evfro/polara
mit
Mind the test_ratio parameter setting. Together with the test_fold parameter, it controls which fraction of the dataset to sample from; 0 means the whole dataset and turns off the data-splitting mechanism used by Polara for cross-validation. The value of test_fold has no effect in that case. Also note that by default Polara performs some additional manipulations with the data, like cleaning and reindexing, to transform it into a uniform internal representation for further use. Key actions and their results are reported in the output text, which can be turned off by setting data_model.verbose = False. Here's how to see the final result of sampling:
data_model.test.holdout.head()
examples/Reproducing_EIGENREC_results.ipynb
Evfro/polara
mit
The second step is to leave only items with rating 5, as was done in the original paper. The easiest way in our case would be to simply run:

```python
data_model.test.holdout.query('rating==5', inplace=True)
```

However, in general, you shouldn't manually change the data after it has been processed by Polara, as it may break some internal logic. A more appropriate and safer way to achieve the same is to use the set_test_data method, specifically designed to cover custom configurations:
data_model.set_test_data(
    holdout=data_model.test.holdout.query('rating==5'),  # select only 5-star ratings
    warm_start=data_model.warm_start,
    reindex=False,             # avoid reindexing users and items a second time
    ensure_consistency=False   # do not try to filter out unseen entities (already excluded);
                               # leaving it as True wouldn't change the result but would lead to extra checks
)
examples/Reproducing_EIGENREC_results.ipynb
Evfro/polara
mit
Note that we reuse the previously sampled holdout dataset (the $\mathcal{P}$ dataset in the authors' notation), which is already reindexed by Polara's built-in data pre-processing procedure. In order not to lose the index mapping between the internal and external representation of movies and users (stored in the data_model.index attribute), it's very important to set the reindex argument of the set_test_data method to False. Now the data_model.test.holdout dataframe stores the final result, namely the $\mathcal{T}$ dataset:
data_model.test.holdout.head()
examples/Reproducing_EIGENREC_results.ipynb
Evfro/polara
mit
Scaled SVD-based model In the simplest case of the EigenRec model, when only the scaling factor is changed, we can go with a very straightforward approach. Instead of computing similarity matrices and solving an eigendecomposition problem, it is sufficient to apply standard SVD to a scaled rating matrix $\tilde R$: $$ \tilde R = R \, S^{d-1} \approx U\Sigma V^T, $$ where $R$ is an $M \times N$ rating matrix, $S = \text{diag}\{\|r_1\|_2, \dots, \|r_N\|_2\}$ is the diagonal matrix of column norms (so the scaling applied to $R$ is $S^{d-1}$, controlled by the scaling parameter $d$), and $r_i$ denotes the $i$-th column of $R$. Note that due to the orthogonality of columns in the SVD factors, the approximation of $\tilde R$ can be written in an equivalent and more convenient form $\tilde R V V^T$, which can be used to generate recommendations. Scaling input data In order to calculate the scaled version of the PureSVD approach we can reuse the SVDModel class implemented in Polara. One way to do that is to redefine the build method in an SVDModel subclass. A simpler solution, however, is to directly modify the output of the get_training_matrix method, which is generally available for all models in Polara and is used internally in SVDModel in particular. This method returns the rating matrix in a sparse format, which is then fed into scipy's truncated SVD implementation within the build method (you can run the SVDModel.build?? command with a double question mark to see it). Assuming we already have the sparse rating matrix, the following function will help to scale it:
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import norm as spnorm

def sparse_normalize(matrix, scaling, axis):
    '''Function to scale either rows or columns of the sparse rating matrix'''
    if scaling == 1:  # no scaling (standard SVD case)
        return matrix

    norm = spnorm(matrix, axis=axis, ord=2)  # compute Euclidean norm of rows or columns
    scaling_matrix = diags(np.power(norm, scaling-1, where=norm != 0))

    if axis == 0:  # scale columns
        return matrix.dot(scaling_matrix)
    if axis == 1:  # scale rows
        return scaling_matrix.dot(matrix)
examples/Reproducing_EIGENREC_results.ipynb
Evfro/polara
mit
Sampling random items for evaluation Somewhat more involved modifications are required to generate model predictions, as the evaluation is based on an additional sampling of items not previously seen by the test users. Quote from the paper (Section 4.2.1): <div class="alert alert-block alert-info">"For each item in $\mathcal{T}$, we randomly select another 1000 unrated items of the same user..."</div> This means that we need to generate prediction scores for 1000 randomly selected unseen items in addition to every item from the holdout. Moreover, every set of 1001 items is treated independently of the user it belongs to. Normally, Polara performs evaluation on a per-user basis; however, in this case the logic is different and we have to take care of users with multiple items in the holdout. From the line below it can be clearly seen that some test users can have up to 8 items:
data_model.test.holdout.userid.value_counts().max()
examples/Reproducing_EIGENREC_results.ipynb
Evfro/polara
mit
In order to "flatten" the holdout dataset and to independently generate prediction scores for every holdout item (and 1000 additionally sampled items), we will customize the get_recommendations method of the SVDModel class. Below is a support function that helps to achieve the necessary result. It iterates over all holdout items, randomly samples a predefined number of previously unrated items, and generates prediction scores for them:
def sample_scores_flat(useridx, itemidx, seen_data, all_items, user_factors, item_factors,
                       sample_size=1000, random_state=None):
    '''Function to randomly sample unrated items and generate prediction scores for them.'''
    scores = []
    for user, items in itemidx.groupby(useridx):  # iterate over every test user and get all user items
        seen_items = seen_data[1][seen_data[0] == user].tolist()  # previously rated items of the user
        seen_items.extend(items.tolist())  # take holdout items into account as well
        item_pool = all_items[~all_items.isin(seen_items)]  # exclude seen items from all available items
        for item in items:
            sampled_items = item_pool.sample(n=sample_size, random_state=random_state)
            scores.append(item_factors[sampled_items.values, :].dot(user_factors[user, :]))
    return scores
examples/Reproducing_EIGENREC_results.ipynb
Evfro/polara
mit
<div class="alert alert-block alert-warning">Prediction scores are generated similarly to the standard *PureSVD* model by an orthogonal projection of a vector $r$ of user ratings onto the latent feature space, defined by the formula $VV^Tr$. Note that unlike the model computation phase, no scaling is used in the prediction.</div> The code above complies with this definition by expecting user_factors to be the product $V^Tr$ for a set of test users and item_factors to be $V$ itself. Below you can find a full implementation of our new model. Defining the model
import numpy as np
import pandas as pd
from polara import SVDModel

class ScaledSVD(SVDModel):
    '''Class that adds scaling functionality to the PureSVD model'''
    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self.col_scaling = 1     # scaling parameter d, initially corresponds to PureSVD
        self.n_rnd_items = 1000  # number of randomly sampled items
        self.seed = 0            # to control randomization
        self.method = 'ScaledSVD'

    def get_training_matrix(self, *args, **kwargs):
        svd_matrix = super().get_training_matrix(*args, **kwargs)  # get sparse rating matrix
        return sparse_normalize(svd_matrix, self.col_scaling, 0)

    def get_recommendations(self):
        holdout = self.data.test.holdout
        itemid = self.data.fields.itemid  # "movieid" in the case of the Movielens dataset
        userid = self.data.fields.userid  # "userid" in the case of the Movielens dataset

        itemidx = holdout[itemid]  # holdout items of the test users
        useridx = pd.factorize(holdout[userid])[0]  # have to "rebase" user index;
        # necessary for indexing rows of the matrix with test user ratings

        # prediction scores for holdout items
        test_matrix, seen_data = self.get_test_matrix()
        item_factors = self.factors[itemid]            # right singular vectors, matrix V
        user_factors = test_matrix.dot(item_factors)   # similarly to PCA
        holdout_scores = (
            user_factors[useridx, :] * item_factors[itemidx.values, :]
        ).sum(axis=1).squeeze()

        # scores for randomly sampled unseen items
        all_items = self.data.index.itemid.new  # all unique (reindexed) items
        rs = np.random.RandomState(self.seed)   # fixing random state to control random output
        sampled_scores = sample_scores_flat(
            useridx, itemidx, seen_data, all_items,
            user_factors, item_factors,
            self.n_rnd_items, random_state=rs
        )

        # combine all scores and rank selected items
        scores = np.concatenate(  # stack into array with 1001 columns
            (holdout_scores[:, None], sampled_scores), axis=1
        )
        rankings = np.apply_along_axis(np.argsort, 1, -scores)
        return rankings
examples/Reproducing_EIGENREC_results.ipynb
Evfro/polara
mit
The model is ready and can be used in a standard way:
svd = ScaledSVD(data_model)  # create model
svd.rank = 50
svd.col_scaling = 0.5
svd.build()                  # fit model
examples/Reproducing_EIGENREC_results.ipynb
Evfro/polara
mit
Now that we have our model computed, it's time to evaluate it. However, we cannot use the built-in evaluation routine. Normally, the number of test users is equal to the number of rows in the recommendations array, and that's the logic Polara relies on. In our case the number of test users is lower than the number of rows in the recommendations array; the number of rows actually corresponds to the total number of ratings in the holdout:
# if you run this cell for the first time, you'll notice a short delay before the print output
# due to the calculation of recommendations
print('# of test users:', data_model.test.holdout.userid.nunique())
print('# of rows and columns in recommendations array:', svd.recommendations.shape)
print('# of ratings in the holdout:', data_model.test.holdout.shape[0])
examples/Reproducing_EIGENREC_results.ipynb
Evfro/polara
mit
We will fix this inconsistency in the next section. It is worth noting here that Polara implements a unified system of callbacks, which resets the svd.recommendations property whenever either the data_model or the model itself is changed in a way that affects the model's output (try, for example, calling svd.recommendations, then set the rank of the model to some higher value and call svd.recommendations again). This mechanism helps to ensure a predictable and consistent state and to prevent accidental reuse of cached results during experiments. It can also be extended with user-defined triggers, which is probably a topic for another tutorial. Model evaluation Simple approach When you try to evaluate your model, it calls the model.recommendations property, which is automatically filled with the result of the get_recommendations method. The simplest way to evaluate the result in accordance with the new structure of the recommendations array is to define a small function as shown below:
def evaluate_mrr(model):
    '''Function to calculate MRR score.'''
    is_holdout = model.recommendations == 0  # holdout items are always in the first column before sorting
    pos = np.where(is_holdout)[1] + 1.0      # position of holdout items (indexing starts from 0, so adding 1)
    mrr = np.reciprocal(pos).mean()          # mean reciprocal rank
    return mrr
examples/Reproducing_EIGENREC_results.ipynb
Evfro/polara
mit
Finally, to compute the MRR score, as it is done in the original paper, simply run:
evaluate_mrr(svd)
examples/Reproducing_EIGENREC_results.ipynb
Evfro/polara
mit
More functional approach While the previously described approach is fully working and easy, in some cases you may want to use the built-in model.evaluate method, as it provides additional functionality. It is also useful to see how Polara can be customized to serve specific needs. The key ingredient here is the control of the type of entities that are recommended. By default, Polara expects items to be recommended to users and looks for the corresponding fields in the test data. These fields are defined via the data_model.fields.userid and data_model.fields.itemid attributes respectively. The default behavior, however, can be redefined at the model level by setting the model._prediction_key (users by default) and model._prediction_target (items by default) attributes to custom values. This scheme, for example, can be utilized in cold start experiments, where the task is to find users potentially interested in a "cold" item instead of recommending items to users (see polara.recommender.coldstart for implementation details). The following lines show how to change the default settings for our needs:
svd._prediction_key = 'xuser'
svd._prediction_target = 'xitem'
examples/Reproducing_EIGENREC_results.ipynb
Evfro/polara
mit
Now, we need to specify the corresponding fields in the holdout data. Recall that our goal is to treat every item in the holdout independently of the user or, in other words, to assign every item to a unique "virtual" user ('xuser'). Furthermore, by construction, prediction scores for holdout items are located in the first column of the recommendations array. This means that every holdout item ('xitem') should have index 0. Here's the necessary modification:
data_model.test.holdout['xuser'] = np.arange(data_model.test.holdout.shape[0])  # number of rated items defines the range
data_model.test.holdout['xitem'] = 0
examples/Reproducing_EIGENREC_results.ipynb
Evfro/polara
mit
Let's check that the result is the same (up to a small rounding error due to different calculation schemes):
svd.evaluate('ranking', simple_rates=True) # `simple_rates` is used to enforce calculation of MRR
examples/Reproducing_EIGENREC_results.ipynb
Evfro/polara
mit
<div class="alert alert-block alert-warning">If you do the math you'll see that the whole experiment took under 100 lines of code to program, and most of it was pretty standard (i.e., declaring variables and methods).</div> Fewer lines of code typically means fewer risks of bugs or inconsistencies. By following a certain protocol, Polara provides a high-level interface that abstracts many technical aspects, allowing you to focus on the most important parts of research. Reproducing the results The next task is to repeat experiments from the EigenRec paper, where the authors compute <div class="alert alert-block alert-info">"...MRR scores as a function of the parameter $d$ for every case, using the number of latent factors that produces the best possible performance for each matrix."</div> Grid search The beauty of SVD-based models is that it is much easier to perform a grid search for optimal values of hyper-parameters. Once you have computed a model for a certain set of hyper-parameters with some rank value $k$, you can quickly find all other models of rank $k' < k$ without recomputing the SVD. <div class="alert alert-block alert-info">Going from larger values of rank to smaller ones is performed by a simple truncation of the latent factor matrix.</div> This not only allows experiments to run faster, but also simplifies the code for them. Moreover, SVDModel already has the necessary rank-check procedures, which avoid rebuilding the model when the user sets a smaller value of rank. No special actions are required here. Below is the code that implements the grid search experiment, taking that feature into account (note that on moderate hardware the code will run for approximately half an hour):
try:
    from ipypb import track
except ImportError:
    from tqdm import tqdm_notebook as track

%matplotlib inline

svd_mrr_flat = {}  # will store results here
svd.verbose = False

max_rank = 150
scaling_params = np.arange(-20, 21, 2) / 10  # values of d from -2 to 2 with step 0.2
svd_ranks = range(10, max_rank+1, 10)        # ranks from 10 to max_rank with step 10

for scaling in track(scaling_params):
    svd.col_scaling = scaling
    svd.rank = max_rank
    svd.build()
    for rank in list(reversed(svd_ranks)):  # iterating over rank values in descending order
        svd.rank = rank  # allows truncating factor matrices without recomputing SVD
        svd_mrr_flat[(scaling, rank)] = svd.evaluate('ranking', simple_rates=True).mrr
examples/Reproducing_EIGENREC_results.ipynb
Evfro/polara
mit
Results Now we have the results of the grid search stored in the svd_mrr_flat dictionary. There's one catch that wasn't clear for me at first: <div class="alert alert-block alert-warning">in order to show the effect of parameter $d$ the authors have fixed the value of rank corresponding to the best result achieved with EigenRec.</div> This means that the curve on Figure 1 in the original paper is obtained with a fixed value of rank, corresponding to the optimal point at the top of the curve, and all other points are obtained by only changing the scaling factor. Here's one way to draw it:
result_flat = pd.Series(svd_mrr_flat)
best_d, best_rank = result_flat.idxmax()
best_d, best_rank

result_flat.xs(best_rank, axis=0, level=1).plot(label='fixed rank', legend=True, title='MRR',
                                                figsize=(4.3, 2), ylim=(0, None), xlim=(-2, 2), grid=True);
examples/Reproducing_EIGENREC_results.ipynb
Evfro/polara
mit
Comparing this picture to the bottom-left graph of Figure 1 in the original paper leads to a satisfactory conclusion: the curves on the graphs are very close. Of course, there are slight differences; however, there are many factors that may affect them, like data sampling and the randomization of unrated items. It would be a good idea to repeat the experiment with different seed values and draw a confidence region around the curve. Still, there are no dramatic differences in the general behavior of the curves, which is a very nice result that didn't take too much effort. Here are some top-score configurations from the experiment:
result_flat.sort_values(ascending=False).head()
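As a hedged sketch of the confidence-region suggestion above (it assumes the fitted svd model plus the best_d and best_rank values from the previous cells; only the 1000 randomly sampled unseen items change between seeds):

```python
# Re-run the best configuration with several random seeds; calling get_recommendations()
# directly avoids relying on the cached `recommendations` property.
svd.col_scaling, svd.rank = best_d, best_rank
mrr_by_seed = []
for seed in range(5):
    svd.seed = seed
    svd.build()
    recs = svd.get_recommendations()
    pos = np.where(recs == 0)[1] + 1.0  # holdout item sits in the first column before sorting
    mrr_by_seed.append(np.reciprocal(pos).mean())
print(np.mean(mrr_by_seed), np.std(mrr_by_seed))
```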
examples/Reproducing_EIGENREC_results.ipynb
Evfro/polara
mit
A bit of exploration The difference between the best result achieved with the EigenRec approach and the standard PureSVD result (that corresponds to the point with scaling parameter equal to 1) is quite large. However, such a comparison is a bit unfair as the restriction on having a fixed value of rank is artificial. We can draw another curve that corresponds to optimal values of both scaling parameter and rank of the decomposition:
result_flat.groupby(level=0).max().plot(label='optimal rank', legend=True)
result_flat.xs(best_rank, axis=0, level=1).plot(label='fixed rank', legend=True, title='MRR',
                                                figsize=(4.3, 2), ylim=(0, None), xlim=(-2, 2), grid=True);
examples/Reproducing_EIGENREC_results.ipynb
Evfro/polara
mit
Now the difference is less pronounced. Still, the EigenRec approach performs better. Moreover, the difference varies significantly from dataset to dataset, and in some cases it can be much more noticeable. Another degree of freedom here, which may increase the top score, is the maximum value of rank used in the grid search. We have manually set it to 150. Let's look at which values of rank were used at each point of the curve:
ax = result_flat.groupby(level=0).idxmax().str[1].plot(label='optimal rank value', ls=":",
                                                       legend=True, secondary_y=True, c='g')
result_flat.groupby(level=0).max().plot(label='optimal rank experiment', legend=True)
result_flat.xs(best_rank, axis=0, level=1).plot(label='fixed rank experiment', legend=True, title='MRR',
                                                figsize=(4.3, 2), ylim=(0, None), xlim=(-2, 2), grid=True);
examples/Reproducing_EIGENREC_results.ipynb
Evfro/polara
mit
Data preparation We will use some of Ben's Sjorgrens data for this. We will generate a random sample of 1 million reads from the full data set. Prepare the data with Snakemake:

```bash
snakemake -s aligners.snakefile
```

It appears that kallisto needs at least 51 bases of the reference to successfully align most of the reads. Must be some kind of off-by-one issue with the data structures. Load alignments
names = ['QNAME', 'FLAG', 'RNAME', 'POS', 'MAPQ', 'CIGAR', 'RNEXT', 'PNEXT', 'TLEN', 'SEQ', 'QUAL']
bowtie_alns = pd.read_csv('alns/bowtie-51mer.aln', sep='\t', header=None, usecols=list(range(11)), names=names)
bowtie2_alns = pd.read_csv('alns/bowtie2-51mer.aln', sep='\t', header=None, usecols=list(range(11)), names=names)
kallisto_alns = pd.read_csv('alns/kallisto-51mer.sam', sep='\t', header=None, usecols=list(range(11)), names=names, comment='@')

(bowtie_alns.RNAME != '*').sum() / len(bowtie_alns)
(bowtie2_alns.RNAME != '*').sum() / len(bowtie2_alns)
(kallisto_alns.RNAME != '*').sum() / len(kallisto_alns)
notebooks/aligners/Aligners.ipynb
laserson/phip-stat
apache-2.0
Bowtie2 vs kallisto
bt2_k_joined = pd.merge(bowtie2_alns, kallisto_alns, how='inner', on='QNAME', suffixes=['_bt2', '_k'])
notebooks/aligners/Aligners.ipynb
laserson/phip-stat
apache-2.0
How many reads do bowtie2 and kallisto agree on?
(bt2_k_joined.RNAME_bt2 == bt2_k_joined.RNAME_k).sum()
notebooks/aligners/Aligners.ipynb
laserson/phip-stat
apache-2.0
For the minority of reads they disagree on, what do they look like?
bt2_k_joined[bt2_k_joined.RNAME_bt2 != bt2_k_joined.RNAME_k].RNAME_k
notebooks/aligners/Aligners.ipynb
laserson/phip-stat
apache-2.0
Mostly lower sensitivity of kallisto due to indels in the read. Specifically, out of
(bt2_k_joined.RNAME_bt2 != bt2_k_joined.RNAME_k).sum()
notebooks/aligners/Aligners.ipynb
laserson/phip-stat
apache-2.0
discordant reads, the number where kallisto failed to map is
(bt2_k_joined[bt2_k_joined.RNAME_bt2 != bt2_k_joined.RNAME_k].RNAME_k == '*').sum()
notebooks/aligners/Aligners.ipynb
laserson/phip-stat
apache-2.0
or as a fraction
(bt2_k_joined[bt2_k_joined.RNAME_bt2 != bt2_k_joined.RNAME_k].RNAME_k == '*').sum() / (bt2_k_joined.RNAME_bt2 != bt2_k_joined.RNAME_k).sum()
notebooks/aligners/Aligners.ipynb
laserson/phip-stat
apache-2.0
Are there any cases where bowtie2 fails to align?
(bt2_k_joined[bt2_k_joined.RNAME_bt2 != bt2_k_joined.RNAME_k].RNAME_bt2 == '*').sum()
notebooks/aligners/Aligners.ipynb
laserson/phip-stat
apache-2.0
Which means there are no cases where bowtie2 and kallisto align to different peptides.
((bt2_k_joined.RNAME_bt2 != bt2_k_joined.RNAME_k) & (bt2_k_joined.RNAME_bt2 != '*') & (bt2_k_joined.RNAME_k != '*')).sum()
notebooks/aligners/Aligners.ipynb
laserson/phip-stat
apache-2.0
What do examples look like of kallisto aligning and bowtie2 not?
bt2_k_joined[(bt2_k_joined.RNAME_bt2 != bt2_k_joined.RNAME_k) & (bt2_k_joined.RNAME_bt2 == '*')]
notebooks/aligners/Aligners.ipynb
laserson/phip-stat
apache-2.0
Looks like there is a perfect match to a prefix and the latter part of the read doesn't match ``` read AAATCCACCATTGTGAAGCAGATGAAGATCATTCATGGTTACTCAGAGCA ref AAATCCACCATTGTGAAGCAGATGAAGATCATTCATAAAAATGGTTACTCA read GGTCCTCACGCCGCCCGCGTTCGCGGGTTGGCATTACAATCCGCTTTCCA ref GGTCCTCACGCCGCCCGCGTTCGCGGGTTGGCATTCCTCCCACACCAGACT ``` Bowtie vs kallisto
bt_k_joined = pd.merge(bowtie_alns, kallisto_alns, how='inner', on='QNAME', suffixes=['_bt', '_k'])
notebooks/aligners/Aligners.ipynb
laserson/phip-stat
apache-2.0
How many reads do bowtie and kallisto agree on?
(bt_k_joined.RNAME_bt == bt_k_joined.RNAME_k).sum()
notebooks/aligners/Aligners.ipynb
laserson/phip-stat
apache-2.0
For the minority of reads they disagree on, what do they look like?
bt_k_joined[bt_k_joined.RNAME_bt != bt_k_joined.RNAME_k][['RNAME_bt', 'RNAME_k']]
notebooks/aligners/Aligners.ipynb
laserson/phip-stat
apache-2.0
Looks like many disagreements, but probably still few disagreements on a positive mapping.
(bt_k_joined.RNAME_bt != bt_k_joined.RNAME_k).sum()
notebooks/aligners/Aligners.ipynb
laserson/phip-stat
apache-2.0
discordant reads, the number where kallisto failed to map is
(bt_k_joined[bt_k_joined.RNAME_bt != bt_k_joined.RNAME_k].RNAME_k == '*').sum()
notebooks/aligners/Aligners.ipynb
laserson/phip-stat
apache-2.0
and the number where bowtie failed is
(bt_k_joined[bt_k_joined.RNAME_bt != bt_k_joined.RNAME_k].RNAME_bt == '*').sum()
notebooks/aligners/Aligners.ipynb
laserson/phip-stat
apache-2.0
which means there are no disagreements on mapping. kallisto appears to have somewhat higher sensitivity. Quantitation
bowtie_counts = pd.read_csv('counts/bowtie-51mer.tsv', sep='\t', header=0, names=['id', 'input', 'output'])
bowtie2_counts = pd.read_csv('counts/bowtie2-51mer.tsv', sep='\t', header=0, names=['id', 'input', 'output'])
kallisto_counts = pd.read_csv('counts/kallisto-51mer.tsv', sep='\t', header=0)

fig, ax = plt.subplots()
_ = ax.hist(bowtie_counts.output, bins=100, log=True)
_ = ax.set(title='bowtie')

fig, ax = plt.subplots()
_ = ax.hist(bowtie2_counts.output, bins=100, log=True)
_ = ax.set(title='bowtie2')

fig, ax = plt.subplots()
_ = ax.hist(kallisto_counts.est_counts, bins=100, log=True)
_ = ax.set(title='kallisto')

bt2_k_counts = pd.merge(bowtie2_counts, kallisto_counts, how='inner', left_on='id', right_on='target_id')

fig, ax = plt.subplots()
ax.scatter(bt2_k_counts.output, bt2_k_counts.est_counts)

sp.stats.pearsonr(bt2_k_counts.output, bt2_k_counts.est_counts)
sp.stats.spearmanr(bt2_k_counts.output, bt2_k_counts.est_counts)
notebooks/aligners/Aligners.ipynb
laserson/phip-stat
apache-2.0
Check significance In an e-mail received 19/07/2016 at 17:20, Don pointed out a couple of TOC plots on my trends map where he was surprised that the estimated trend was deemed insignificant: Little Woodford (site code X15:1C1-093) and Partridge (station code X15:ME-9999). Checking this will provide a useful test of my trend analysis code. To make the test as independent as possible, I've started off by extracting TOC data for these two sites using the manual interface for RESA2. This method of accessing the database is completely separate from that used by my trends code, so it'll be interesting to see whether I get the same results!
# Read RESA2 export, calculate annual medians and plot

# Input file
in_xlsx = (r'C:\Data\James_Work\Staff\Heleen_d_W\ICP_Waters\TOC_Trends_Analysis_2015'
           r'\Data\TOC_Little_Woodford_Partridge.xlsx')
df = pd.read_excel(in_xlsx, sheetname='DATA')

# Pivot
df = df.pivot(index='Date', columns='Station name', values='TOC')
df.reset_index(inplace=True)

# Calculate year
df['year'] = df['Date'].apply(lambda x: x.year)

# Take median in each year
grpd = df.groupby(['year',])
df = grpd.aggregate('median')

# Plot
df.plot(figsize=(12, 8))
plt.show()

# Print summary stats
df.describe()
check_trend_signif.ipynb
JamesSample/icpw
mit
These plots and summary statistics are identical to the ones given on my web map (with the exception that, for plotting, the web map linearly interpolates over data gaps, so the break in the line for Little Woodford is not presented). This is a good start. The next step is to estimate the Theil-Sen slope. It would also be useful to plot the 95% confidence interval around the line, as this should make it easier to see whether a trend ought to be identified as significant or not. However, a little surprisingly, it seems there is no standard way of estimating confidence intervals for Theil-Sen regressions. This is because the Theil-Sen method is strictly a way of estimating the slope of the regression line, but not the intercept (see e.g. here). A number of intercept estimators have been proposed previously (e.g. here). For the median regression, which is what I've plotted on my web map, SciPy uses the Conover Estimator to calculate the intercept $$\beta_{median} = y_{median} - M_{median} * x_{median}$$ where $\beta$ is the intercept and $M$ is the slope calculated using the Theil-Sen method. Although I can't find many references for constructing confidence intervals for this type of regression, presumably I can just generalise the above formula to estimate slopes and intercepts for any percentile, $p$ $$\beta_{p} = y_{p} - M_{p} * x_{p}$$ It's worth a try, anyway.
# Theil-Sen regression

# Set up plots
fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(16, 8))

# Loop over sites
for idx, site in enumerate(['LITTLE - WOODFORD', 'PARTRIDGE POND']):
    # Get data
    df2 = df[site].reset_index()

    # Drop NaNs
    df2.dropna(how='any', inplace=True)

    # Get quantiles
    qdf = df2.quantile([0.025, 0.975])
    y_2_5 = qdf.ix[0.025, site]
    x_2_5 = qdf.ix[0.025, 'year']
    y_97_5 = qdf.ix[0.975, site]
    x_97_5 = qdf.ix[0.975, 'year']

    # Theil-Sen regression
    slp_50, icpt_50, slp_lb, slp_ub = theilslopes(df2[site].values, df2['year'].values, 0.95)

    # Calculate CI for intercepts
    icpt_lb = y_2_5 - (slp_lb * x_2_5)
    icpt_ub = y_97_5 - (slp_ub * x_97_5)

    # Plot
    # Data
    axes[idx].plot(df2['year'], df2[site], 'bo-', label='Data')

    # Lower and upper CIs
    axes[idx].plot(df2['year'], slp_lb * df2['year'] + icpt_lb, 'r-', label='')
    axes[idx].plot(df2['year'], slp_ub * df2['year'] + icpt_ub, 'r-', label='95% CI on trend')
    axes[idx].fill_between(df2['year'],
                           slp_lb * df2['year'] + icpt_lb,
                           slp_ub * df2['year'] + icpt_ub,
                           facecolor='red', alpha=0.1)

    # Median
    axes[idx].plot(df2['year'], slp_50 * df2['year'] + icpt_50, 'k-', label='Median trend')

    axes[idx].legend(loc='best', fontsize=16)
    axes[idx].set_title(site, fontsize=20)

plt.tight_layout()
plt.show()
check_trend_signif.ipynb
JamesSample/icpw
mit
Next, a function that extracts the net rating for a team.
def team_net_ratings(the_team_name):
    """
    team name is one of ["76ers", "Bucks", "Bulls", "Cavaliers", "Celtics", "Clippers",
    "Grizzlies", "Hawks", "Heat", "Hornets", "Jazz", "Kings", "Knicks", "Lakers", "Magic",
    "Mavericks", "Nets", "Nuggets", "Pacers", "Pelicans", "Pistons", "Raptors", "Rockets",
    "Spurs", "Suns", "Thunder", "Timberwolves", "Trail Blazers", "Warriors", "Wizards"]
    """
    return [
        float(net_rating)
        for game_id, game_date, team_id, team_name, net_rating in nba_games
        if team_name == the_team_name]
basic-stats/nba-games-net-rating-boxplots/NbaTeamGameNetRatingsPlots.ipynb
krosaen/ml-study
mit
The Pistons With this, we can make a histogram of the net ratings for a team. I like the Pistons, so let's check them out:
import matplotlib.mlab as mlab

plt.hist(team_net_ratings('Pistons'), bins=15)
plt.title("Pistons Net Rating for 2015/2016 Games")
plt.xlabel("Net Rating")
plt.ylabel("Num Games")
plt.show()
basic-stats/nba-games-net-rating-boxplots/NbaTeamGameNetRatingsPlots.ipynb
krosaen/ml-study
mit
As I've experienced as a fan this year, we are a bit bi-modal: sometimes playing great, even beating some of the league's best teams, other times getting blown out (that -40 net rating was most recently against The Wizards). Best and worst teams Now let's compare this to the best and worst teams in the league
plt.hist(team_net_ratings("Warriors"), bins=15, color='b', label='Warriors')
plt.hist(team_net_ratings("76ers"), bins=15, color='r', alpha=0.5, label='76ers')
plt.title("Net Rating for 2015/2016 Games")
plt.xlabel("Net Rating")
plt.ylabel("Num Games")
plt.legend()
plt.show()
basic-stats/nba-games-net-rating-boxplots/NbaTeamGameNetRatingsPlots.ipynb
krosaen/ml-study
mit
Yep, the Warriors usually win, and the 76ers usually lose. Still, it's striking to see how many games the Warriors win by a safe margin. Box Plots Box plots are a nice way to visually compare multiple teams' distributions, giving a quick snapshot of the median, range and interquartile range. Let's compare the top 9 seeds in the Eastern Conference (I'm hoping the Pistons fight their way into the 8th seed; they are currently 9th).
def box_plot_teams(team_names):
    reversed_names = list(reversed(team_names))
    data = [team_net_ratings(team_name) for team_name in reversed_names]
    plt.figure()
    plt.xlabel('Game Net Ratings')
    plt.boxplot(data, labels=reversed_names, vert=False)
    plt.show()

box_plot_teams(['Cavaliers', 'Raptors', 'Celtics', 'Heat', 'Hawks',
                'Hornets', 'Pacers', 'Bulls', 'Pistons'])
basic-stats/nba-games-net-rating-boxplots/NbaTeamGameNetRatingsPlots.ipynb
krosaen/ml-study
mit
We can see that The Pistons have the largest spread, but have a median slightly better than The Bulls (they are in fact neck and neck) with potentially more upside. The 3rd-quartile net rating of close to 10 is what makes us Pistons fans feel like we could have a shot against most teams in The Eastern Conference. Another note: the standard boxplot draws the dashed line out to 1.5 times the IQR; beyond that, data points are considered outliers and plotted individually. The Pistons do not have any outliers by this standard, so on a given night we can get blown out or win big and it shouldn't really surprise us :) Finally, let's look at the mean and standard deviations of each.
def mean_std(team_name):
    nrs = team_net_ratings(team_name)
    return (team_name, np.mean(nrs), np.std(nrs))

[mean_std(team_name) for team_name in ['Cavaliers', 'Raptors', 'Celtics', 'Heat', 'Hawks',
                                       'Hornets', 'Pacers', 'Bulls', 'Pistons']]
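Circling back to the outlier remark above, here is a quick numeric check of the 1.5 × IQR rule (a sketch that reuses the team_net_ratings helper defined earlier):

```python
import numpy as np

pistons = np.array(team_net_ratings('Pistons'))
q1, q3 = np.percentile(pistons, [25, 75])
iqr = q3 - q1
lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
outliers = pistons[(pistons < lower) | (pistons > upper)]
print(lower, upper, outliers)  # an empty array here means no outliers by the boxplot standard
```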
basic-stats/nba-games-net-rating-boxplots/NbaTeamGameNetRatingsPlots.ipynb
krosaen/ml-study
mit
Let's have a look at the dataset we just created using our trusty friend, Matplotlib:
import matplotlib.pyplot as plt
plt.style.use('ggplot')
%matplotlib inline
notebooks/07.01-Implementing-Our-First-Bayesian-Classifier.ipynb
mbeyeler/opencv-machine-learning
mit
I'm sure this is getting easier every time. We use scatter to create a scatter plot of all $x$ values (X[:, 0]) and $y$ values (X[:, 1]), which will result in the following output:
plt.figure(figsize=(10, 6))
plt.scatter(X[:, 0], X[:, 1], c=y, s=50);
notebooks/07.01-Implementing-Our-First-Bayesian-Classifier.ipynb
mbeyeler/opencv-machine-learning
mit
In agreement with our specifications, we see two different point clusters. They hardly overlap, so it should be relatively easy to classify them. What do you think—could a linear classifier do the job? Yes, it could. Recall that a linear classifier would try to draw a straight line through the figure, trying to put all blue dots on one side and all red dots on the other. A diagonal line going from the top-left corner to the bottom-right corner could clearly do the job. So we would expect the classification task to be relatively easy, even for a naive Bayes classifier. But first, don't forget to split the dataset into training and test sets! Here, I reserve 10% of the data points for testing:
import numpy as np
from sklearn import model_selection as ms

X_train, X_test, y_train, y_test = ms.train_test_split(
    X.astype(np.float32), y, test_size=0.1
)
notebooks/07.01-Implementing-Our-First-Bayesian-Classifier.ipynb
mbeyeler/opencv-machine-learning
mit
Classifying the data with a normal Bayes classifier We will then use the same procedure as in earlier chapters to train a normal Bayes classifier. Wait, why not a naive Bayes classifier? Well, it turns out OpenCV doesn't really provide a true naive Bayes classifier... Instead, it comes with a Bayesian classifier that doesn't necessarily expect features to be independent, but rather expects the data to be clustered into Gaussian blobs. This is exactly the kind of dataset we created earlier! We can create a new classifier using the following function:
import cv2
model_norm = cv2.ml.NormalBayesClassifier_create()
notebooks/07.01-Implementing-Our-First-Bayesian-Classifier.ipynb
mbeyeler/opencv-machine-learning
mit
Then, training is done via the train method:
model_norm.train(X_train, cv2.ml.ROW_SAMPLE, y_train)
notebooks/07.01-Implementing-Our-First-Bayesian-Classifier.ipynb
mbeyeler/opencv-machine-learning
mit
Once the classifier has been trained successfully, it will return True. We go through the motions of predicting and scoring the classifier, just like we have done a million times before:
_, y_pred = model_norm.predict(X_test)

from sklearn import metrics
metrics.accuracy_score(y_test, y_pred)
notebooks/07.01-Implementing-Our-First-Bayesian-Classifier.ipynb
mbeyeler/opencv-machine-learning
mit
Even better—we can reuse the plotting function from the last chapter to inspect the decision boundary! If you recall, the idea was to create a mesh grid that would encompass all data points and then classify every point on the grid. The mesh grid is created via the NumPy function of the same name:
def plot_decision_boundary(model, X_test, y_test):
    # create a mesh to plot in
    h = 0.02  # step size in mesh
    x_min, x_max = X_test[:, 0].min() - 1, X_test[:, 0].max() + 1
    y_min, y_max = X_test[:, 1].min() - 1, X_test[:, 1].max() + 1
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                         np.arange(y_min, y_max, h))

    X_hypo = np.column_stack((xx.ravel().astype(np.float32),
                              yy.ravel().astype(np.float32)))
    ret = model.predict(X_hypo)
    if isinstance(ret, tuple):
        zz = ret[1]
    else:
        zz = ret
    zz = zz.reshape(xx.shape)

    plt.contourf(xx, yy, zz, cmap=plt.cm.coolwarm, alpha=0.8)
    plt.scatter(X_test[:, 0], X_test[:, 1], c=y_test, s=200)

plt.figure(figsize=(10, 6))
plot_decision_boundary(model_norm, X, y)
notebooks/07.01-Implementing-Our-First-Bayesian-Classifier.ipynb
mbeyeler/opencv-machine-learning
mit
So far, so good. The interesting part is that a Bayesian classifier also returns the probability with which each data point has been classified:
ret, y_pred, y_proba = model_norm.predictProb(X_test)
notebooks/07.01-Implementing-Our-First-Bayesian-Classifier.ipynb
mbeyeler/opencv-machine-learning
mit
The function returns a Boolean flag (True for success, False for failure), the predicted target labels (y_pred), and the conditional probabilities (y_proba). Here, y_proba is an $N \times 2$ matrix that indicates, for every one of the $N$ data points, the probability with which it was classified as either class 0 or class 1:
y_proba.round(2)
notebooks/07.01-Implementing-Our-First-Bayesian-Classifier.ipynb
mbeyeler/opencv-machine-learning
mit
Classifying the data with a naive Bayes classifier We can compare the result to a true naïve Bayes classifier by asking scikit-learn for help:
from sklearn import naive_bayes
model_naive = naive_bayes.GaussianNB()
notebooks/07.01-Implementing-Our-First-Bayesian-Classifier.ipynb
mbeyeler/opencv-machine-learning
mit
As usual, training the classifier is done via the fit method:
model_naive.fit(X_train, y_train)
notebooks/07.01-Implementing-Our-First-Bayesian-Classifier.ipynb
mbeyeler/opencv-machine-learning
mit
Scoring the classifier is built in:
model_naive.score(X_test, y_test)
notebooks/07.01-Implementing-Our-First-Bayesian-Classifier.ipynb
mbeyeler/opencv-machine-learning
mit
Again a perfect score! However, in contrast to OpenCV, this classifier's predict_proba method returns true probability values, because all values are between 0 and 1, and because all rows add up to 1:
yprob = model_naive.predict_proba(X_test)
yprob.round(2)
notebooks/07.01-Implementing-Our-First-Bayesian-Classifier.ipynb
mbeyeler/opencv-machine-learning
mit
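A quick way to confirm that the rows of yprob really do sum to 1 is to check directly (a small sanity check, not part of the original notebook):
import numpy as np

# every row of yprob should sum to 1 (up to floating-point error)
np.allclose(yprob.sum(axis=1), 1.0)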
You might have noticed something else: This classifier has absolutely no doubt about the target label of each and every data point. It's all or nothing. The decision boundary returned by the naive Bayes classifier looks slightly different, but can be considered practically identical to the one from the normal Bayes classifier for the purpose of this exercise:
plt.figure(figsize=(10, 6))
plot_decision_boundary(model_naive, X, y)
notebooks/07.01-Implementing-Our-First-Bayesian-Classifier.ipynb
mbeyeler/opencv-machine-learning
mit
Visualizing conditional probabilities Similarly, we can also visualize probabilities. For this, we slightly modify the plot function from the previous example. We start out by creating a mesh grid between (x_min, x_max) and (y_min, y_max):
def plot_proba(model, X_test, y_test):
    # create a mesh to plot in
    h = 0.02  # step size in mesh
    x_min, x_max = X_test[:, 0].min() - 1, X_test[:, 0].max() + 1
    y_min, y_max = X_test[:, 1].min() - 1, X_test[:, 1].max() + 1
    xx, yy = np.meshgrid(np.arange(x_min, x_max, h),
                         np.arange(y_min, y_max, h))

    X_hypo = np.column_stack((xx.ravel().astype(np.float32),
                              yy.ravel().astype(np.float32)))
    if hasattr(model, 'predictProb'):
        _, _, y_proba = model.predictProb(X_hypo)
    else:
        y_proba = model.predict_proba(X_hypo)

    zz = y_proba[:, 1] - y_proba[:, 0]
    zz = zz.reshape(xx.shape)

    plt.contourf(xx, yy, zz, cmap=plt.cm.coolwarm, alpha=0.8)
    plt.scatter(X_test[:, 0], X_test[:, 1], c=y_test, s=200)

plt.figure(figsize=(10, 6))
plot_proba(model_naive, X, y)
notebooks/07.01-Implementing-Our-First-Bayesian-Classifier.ipynb
mbeyeler/opencv-machine-learning
mit
To get started, make sure all of the following import statements work without error. You should get a message telling you there are 59 layers in the network and 7548 channels.
from __future__ import print_function
from io import BytesIO
import math, time, copy, json, os
import glob
from os import listdir
from os.path import isfile, join
from random import random
from enum import Enum
from functools import partial
import PIL.Image
from IPython.display import clear_output, Image, display, HTML
import numpy as np
import scipy.misc
import tensorflow as tf

# import everything from lapnorm.py
from lapnorm import *
examples/dreaming/neural-synth.ipynb
ml4a/ml4a-guides
gpl-2.0
Let's inspect the network now. The following will give us the names of all the layers in the network, as well as the number of channels each contains. We can use this as a lookup table when selecting channels.
for l, layer in enumerate(layers):
    layer = layer.split("/")[1]
    num_channels = T(layer).shape[3]
    print(layer, num_channels)
examples/dreaming/neural-synth.ipynb
ml4a/ml4a-guides
gpl-2.0
The basic idea is to take any image as input, then iteratively optimize its pixels so as to maximally activate a particular channel (feature extractor) in a trained convolutional network. We reproduce tensorflow's recipe here to read the code in detail. In render_naive, we take img0 as input, then for iter_n steps, we calculate the gradient of the pixels with respect to our optimization objective, or in other words, the diff for all of the pixels we must add in order to make the image activate the objective. The objective we pass is a channel in one of the layers of the network, or an entire layer. Declare the function below.
def render_naive(t_obj, img0, iter_n=20, step=1.0):
    t_score = tf.reduce_mean(t_obj)  # defining the optimization objective
    t_grad = tf.gradients(t_score, t_input)[0]  # behold the power of automatic differentiation!
    img = img0.copy()
    for i in range(iter_n):
        g, score = sess.run([t_grad, t_score], {t_input: img})
        # normalizing the gradient, so the same step size should work
        g /= g.std() + 1e-8  # for different layers and networks
        img += g * step
    return img
examples/dreaming/neural-synth.ipynb
ml4a/ml4a-guides
gpl-2.0
Now let's try running it. First, we initialize a 200x200 block of colored noise. We then select the layer mixed4d_3x3_bottleneck_pre_relu and channel 140 in that layer as the objective, and run it through render_naive for 40 iterations. You can try to optimize at different layers or different channels to get a feel for how it looks.
img0 = np.random.uniform(size=(200, 200, 3)) + 100.0

layer = 'mixed4d_3x3_bottleneck_pre_relu'
channel = 140

img1 = render_naive(T(layer)[:,:,:,channel], img0, 40, 1.0)
display_image(img1)
examples/dreaming/neural-synth.ipynb
ml4a/ml4a-guides
gpl-2.0
The above isn't so interesting yet. One improvement is to use repeated upsampling to effectively detect features at multiple scales (what we call "octaves") of the image. What we do is we start with a smaller image and calculate the gradients for that, going through the procedure like before. Then we upsample it by a particular ratio and calculate the gradients and modify the pixels of the result. We do this several times. You can see that render_multiscale is similar to render_naive except now the addition of the outer "octave" loop which repeatedly upsamples the image using the resize function.
def render_multiscale(t_obj, img0, iter_n=10, step=1.0, octave_n=3, octave_scale=1.4):
    t_score = tf.reduce_mean(t_obj)  # defining the optimization objective
    t_grad = tf.gradients(t_score, t_input)[0]  # behold the power of automatic differentiation!
    img = img0.copy()
    for octave in range(octave_n):
        if octave > 0:
            hw = np.float32(img.shape[:2]) * octave_scale
            img = resize(img, np.int32(hw))
        for i in range(iter_n):
            g = calc_grad_tiled(img, t_grad)
            # normalizing the gradient, so the same step size should work
            g /= g.std() + 1e-8  # for different layers and networks
            img += g * step
        print("octave %d/%d" % (octave+1, octave_n))
    clear_output()
    return img
examples/dreaming/neural-synth.ipynb
ml4a/ml4a-guides
gpl-2.0
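The resize helper used inside the octave loop comes from lapnorm.py and isn't defined in this notebook. Its job is simply to resample the float image to a new height and width; a rough NumPy/SciPy stand-in (an assumption about its behaviour, not the actual implementation) could look like this:
import numpy as np
import scipy.ndimage

def resize_sketch(img, hw):
    # bilinear-style resampling of an (h, w, 3) float image to the target (height, width)
    zoom = (float(hw[0]) / img.shape[0], float(hw[1]) / img.shape[1], 1.0)
    return scipy.ndimage.zoom(img, zoom, order=1)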
Let's try this on noise first. Note the new variables octave_n and octave_scale which control the parameters of the scaling. Because the gradient is computed over overlapping subrectangles of the image (a trick borrowed from TensorFlow's deepdream example), we don't have to worry about running out of memory. However, making the overall size large will mean the process takes longer to complete.
h, w = 200, 200
octave_n = 3
octave_scale = 1.4
iter_n = 50

img0 = np.random.uniform(size=(h, w, 3)) + 100.0

layer = 'mixed4c_5x5_bottleneck_pre_relu'
channel = 20

img1 = render_multiscale(T(layer)[:,:,:,channel], img0, iter_n, 1.0, octave_n, octave_scale)
display_image(img1)
examples/dreaming/neural-synth.ipynb
ml4a/ml4a-guides
gpl-2.0
Now load a real image and use that as the starting point. We'll use the kitty image in the assets folder. Here is the original. <img src="../assets/kitty.jpg" alt="kitty" style="width: 280px;"/>
h, w = 320, 480
octave_n = 3
octave_scale = 1.4
iter_n = 60

img0 = load_image('../assets/kitty.jpg', h, w)

layer = 'mixed4d_5x5_bottleneck_pre_relu'
channel = 21

img1 = render_multiscale(T(layer)[:,:,:,channel], img0, iter_n, 1.0, octave_n, octave_scale)
display_image(img1)
examples/dreaming/neural-synth.ipynb
ml4a/ml4a-guides
gpl-2.0
Now we introduce Laplacian normalization. The problem is that although we are finding features at multiple scales, it seems to have a lot of unnatural high-frequency noise. We apply a Laplacian pyramid decomposition to the image as a regularization technique and calculate the pixel gradient at each scale, as before.
def render_lapnorm(t_obj, img0, iter_n=10, step=1.0, oct_n=3, oct_s=1.4, lap_n=4):
    t_score = tf.reduce_mean(t_obj)  # defining the optimization objective
    t_grad = tf.gradients(t_score, t_input)[0]  # behold the power of automatic differentiation!
    # build the laplacian normalization graph
    lap_norm_func = tffunc(np.float32)(partial(lap_normalize, scale_n=lap_n))

    img = img0.copy()
    for octave in range(oct_n):
        if octave > 0:
            hw = np.float32(img.shape[:2]) * oct_s
            img = resize(img, np.int32(hw))
        for i in range(iter_n):
            g = calc_grad_tiled(img, t_grad)
            g = lap_norm_func(g)
            img += g * step
            print('.', end='')
        print("octave %d/%d" % (octave+1, oct_n))
    clear_output()
    return img
examples/dreaming/neural-synth.ipynb
ml4a/ml4a-guides
gpl-2.0
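render_lapnorm relies on lap_normalize, which is imported from lapnorm.py and not shown in this notebook. Conceptually, it splits the gradient into frequency bands, normalizes each band to unit standard deviation, and recombines them, so that no single frequency (especially the high ones) dominates the update. The following is a rough NumPy sketch of that idea; the actual lapnorm.py implementation builds a proper Laplacian pyramid in TensorFlow and may differ in detail.
import numpy as np
import scipy.ndimage

def lap_normalize_sketch(grad, scale_n=4, sigma=1.0, eps=1e-8):
    # split the gradient into scale_n high-frequency bands plus one low-frequency residual
    bands = []
    lo = grad
    for _ in range(scale_n):
        blurred = scipy.ndimage.gaussian_filter(lo, sigma=(sigma, sigma, 0))
        bands.append(lo - blurred)  # high-frequency detail at this scale
        lo = blurred
    bands.append(lo)                # remaining low-frequency component

    # normalize every band to unit standard deviation, then sum them back together
    out = np.zeros_like(grad)
    for band in bands:
        out += band / (band.std() + eps)
    return out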
With Laplacian normalization and multiple octaves, we have the core technique finished and are level with the Tensorflow example. Try running the example below and modifying some of the numbers to see how it affects the result. Remember you can use the layer lookup table at the top of this notebook to recall the different layers that are available to you. Note the differences between early (low-level) layers and later (high-level) layers.
h, w = 300, 400
octave_n = 3
octave_scale = 1.4
iter_n = 20

img0 = np.random.uniform(size=(h, w, 3)) + 100.0

layer = 'mixed5b_pool_reduce_pre_relu'
channel = 99

img1 = render_lapnorm(T(layer)[:,:,:,channel], img0, iter_n, 1.0, octave_n, octave_scale)
display_image(img1)
examples/dreaming/neural-synth.ipynb
ml4a/ml4a-guides
gpl-2.0
Now we are going to modify the render_lapnorm function in three ways.

1) Instead of passing just a single channel or layer to be optimized (the objective, t_obj), we can pass several in an array, letting us optimize several channels simultaneously (it must be an array even if it contains just one element).

2) We now also pass in mask, a numpy array of dimensions (h, w, n), where h and w are the height and width of the source image img0 and n is the number of objectives in t_obj. The mask acts as a gate or multiplier on the gradient of each objective: mask[:,:,0] gets multiplied by the gradient of the first objective, mask[:,:,1] by the second, and so on. Each entry should be a float between 0 and 1 (0 to kill the gradient, 1 to let all of it pass). Another way to think of mask is as a per-pixel step size for each objective.

3) Internally, we use a convenience function get_mask_sizes which figures out the size of the image and mask at every octave, so we don't have to worry about calculating this ourselves and can just pass in an img and mask of the same size.
def lapnorm_multi(t_obj, img0, mask, iter_n=10, step=1.0, oct_n=3, oct_s=1.4, lap_n=4, clear=True):
    mask_sizes = get_mask_sizes(mask.shape[0:2], oct_n, oct_s)
    img0 = resize(img0, np.int32(mask_sizes[0]))

    t_score = [tf.reduce_mean(t) for t in t_obj]  # defining the optimization objective
    t_grad = [tf.gradients(t, t_input)[0] for t in t_score]  # behold the power of automatic differentiation!
    # build the laplacian normalization graph
    lap_norm_func = tffunc(np.float32)(partial(lap_normalize, scale_n=lap_n))

    img = img0.copy()
    for octave in range(oct_n):
        if octave > 0:
            hw = mask_sizes[octave]  # np.float32(img.shape[:2])*oct_s
            img = resize(img, np.int32(hw))
        oct_mask = resize(mask, np.int32(mask_sizes[octave]))
        for i in range(iter_n):
            g_tiled = [lap_norm_func(calc_grad_tiled(img, t)) for t in t_grad]
            for g, gt in enumerate(g_tiled):
                img += gt * step * oct_mask[:,:,g].reshape((oct_mask.shape[0], oct_mask.shape[1], 1))
            print('.', end='')
        print("octave %d/%d" % (octave+1, oct_n))
    if clear:
        clear_output()
    return img
examples/dreaming/neural-synth.ipynb
ml4a/ml4a-guides
gpl-2.0
Try first on noise, as before. This time, we pass in three objectives from different layers and create a mask that splits the image into three horizontal bands: the top band only lets in the first objective, the middle band the second, and the bottom band the third.
h, w = 300, 400
octave_n = 3
octave_scale = 1.4
iter_n = 15

img0 = np.random.uniform(size=(h, w, 3)) + 100.0

objectives = [T('mixed3a_3x3_pre_relu')[:,:,:,79],
              T('mixed5a_1x1_pre_relu')[:,:,:,200],
              T('mixed4b_5x5_bottleneck_pre_relu')[:,:,:,22]]

# mask
mask = np.zeros((h, w, 3))
mask[0:100,:,0] = 1.0
mask[100:200,:,1] = 1.0
mask[200:,:,2] = 1.0

img1 = lapnorm_multi(objectives, img0, mask, iter_n, 1.0, octave_n, octave_scale)
display_image(img1)
examples/dreaming/neural-synth.ipynb
ml4a/ml4a-guides
gpl-2.0
Now the same thing, but we optimize over the kitty instead and pick new channels.
h, w = 400, 400
octave_n = 3
octave_scale = 1.4
iter_n = 30

img0 = load_image('../assets/kitty.jpg', h, w)

objectives = [T('mixed4d_3x3_bottleneck_pre_relu')[:,:,:,99],
              T('mixed5a_5x5_bottleneck_pre_relu')[:,:,:,40]]

# mask
mask = np.zeros((h, w, 2))
mask[:,:200,0] = 1.0
mask[:,200:,1] = 1.0

img1 = lapnorm_multi(objectives, img0, mask, iter_n, 1.0, octave_n, octave_scale)
display_image(img1)
examples/dreaming/neural-synth.ipynb
ml4a/ml4a-guides
gpl-2.0
Let's make a more complicated mask. Here we use numpy's linspace function to linearly interpolate the mask between 0 and 1, going from left to right, in the first channel's mask, and the opposite for the second channel. Thus on the far left of the image, we let in only the second channel, on the far right only the first channel, and in the middle exactly 50% of each. We'll make a long one to show the smooth transition. We'll also visualize both masks right afterwards.
h, w = 256, 1024
img0 = np.random.uniform(size=(h, w, 3)) + 100.0

octave_n = 3
octave_scale = 1.4

objectives = [T('mixed4c_3x3_pre_relu')[:,:,:,50],
              T('mixed4d_5x5_bottleneck_pre_relu')[:,:,:,29]]

mask = np.zeros((h, w, 2))
mask[:,:,0] = np.linspace(0,1,w)
mask[:,:,1] = np.linspace(1,0,w)

img1 = lapnorm_multi(objectives, img0, mask, iter_n=40, step=1.0, oct_n=3, oct_s=1.4, lap_n=4)

print("image")
display_image(img1)
print("masks")
display_image(255*mask[:,:,0])
display_image(255*mask[:,:,1])
examples/dreaming/neural-synth.ipynb
ml4a/ml4a-guides
gpl-2.0
One can think up many clever ways to make masks. Maybe they are arranged as overlapping concentric circles, or along diagonal lines, or even using Perlin noise to get smooth organic-looking variation. Here is one example making a circular mask.
h, w = 500, 500
cy, cx = 0.5, 0.5

# circle masks
pts = np.array([[[i/(h-1.0),j/(w-1.0)] for j in range(w)] for i in range(h)])
ctr = np.array([[[cy, cx] for j in range(w)] for i in range(h)])
pts -= ctr
dist = (pts[:,:,0]**2 + pts[:,:,1]**2)**0.5
dist = dist / np.max(dist)

mask = np.ones((h, w, 2))
mask[:, :, 0] = dist
mask[:, :, 1] = 1.0-dist

img0 = np.random.uniform(size=(h, w, 3)) + 100.0

octave_n = 3
octave_scale = 1.4

objectives = [T('mixed3b_5x5_bottleneck_pre_relu')[:,:,:,9],
              T('mixed4d_5x5_bottleneck_pre_relu')[:,:,:,17]]

img1 = lapnorm_multi(objectives, img0, mask, iter_n=20, step=1.0, oct_n=3, oct_s=1.4, lap_n=4)
display_image(img1)
examples/dreaming/neural-synth.ipynb
ml4a/ml4a-guides
gpl-2.0
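For instance, a mask that blends two objectives along the image diagonal can be built with a couple of lines of numpy. This is a hedged sketch in the same spirit as the circular mask above, reusing the two objectives from the earlier linspace example; it can be passed to lapnorm_multi exactly as before:
h, w = 400, 400

# blend weight increases from the top-left corner to the bottom-right corner
xx, yy = np.meshgrid(np.linspace(0, 1, w), np.linspace(0, 1, h))
diag = (xx + yy) / 2.0

mask = np.zeros((h, w, 2))
mask[:, :, 0] = diag
mask[:, :, 1] = 1.0 - diag

img0 = np.random.uniform(size=(h, w, 3)) + 100.0
objectives = [T('mixed4c_3x3_pre_relu')[:,:,:,50],
              T('mixed4d_5x5_bottleneck_pre_relu')[:,:,:,29]]

img1 = lapnorm_multi(objectives, img0, mask, iter_n=20, step=1.0, oct_n=3, oct_s=1.4, lap_n=4)
display_image(img1)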
Now we show how to use an existing image as a set of masks, using k-means clustering to segment it into several sections which become masks.
import sklearn.cluster

k = 3
h, w = 320, 480
img0 = load_image('../assets/kitty.jpg', h, w)

imgp = np.array(list(img0)).reshape((h*w, 3))
clusters, assign, _ = sklearn.cluster.k_means(imgp, k)
assign = assign.reshape((h, w))

mask = np.zeros((h, w, k))
for i in range(k):
    mask[:,:,i] = np.multiply(np.ones((h, w)), (assign==i))

for i in range(k):
    display_image(mask[:,:,i]*255.)

img0 = np.random.uniform(size=(h, w, 3)) + 100.0

octave_n = 3
octave_scale = 1.4

objectives = [T('mixed4b_3x3_bottleneck_pre_relu')[:,:,:,111],
              T('mixed5b_pool_reduce_pre_relu')[:,:,:,12],
              T('mixed4b_5x5_bottleneck_pre_relu')[:,:,:,11]]

img1 = lapnorm_multi(objectives, img0, mask, iter_n=20, step=1.0, oct_n=3, oct_s=1.4, lap_n=4)
display_image(img1)
examples/dreaming/neural-synth.ipynb
ml4a/ml4a-guides
gpl-2.0
Now, we move on to generating video. The most straightforward way to do this is using feedback; generate one image in the conventional way, and then use it as the input to the next generation, rather than starting with noise again. By itself, this would simply repeat or intensify the features found in the first image, but we can get interesting results by perturbing the input to the second generation slightly before passing it in. For example, we can crop it slightly to remove the outer rim, then resize it to the original size and run it through again. If we do this repeatedly, we will get what looks like a constant zooming-in motion. The next block of code demonstrates this. We'll make a small square with a single feature, then crop the outer rim by around 5% before making the next one. We'll repeat this 20 times and look at the resulting frames. For simplicity, we'll just set the mask to 1 everywhere. Note, we've also set the clear variable in lapnorm_multi to false so we can see all the images in sequence.
h, w = 200, 200

# start with random noise
img = np.random.uniform(size=(h, w, 3)) + 100.0

octave_n = 3
octave_scale = 1.4
objectives = [T('mixed4d_5x5_bottleneck_pre_relu')[:,:,:,11]]
mask = np.ones((h, w, 1))

# repeat the generation loop 20 times. notice the feedback -- we make img and then use it as the initial input
for f in range(20):
    img = lapnorm_multi(objectives, img, mask, iter_n=20, step=1.0, oct_n=3, oct_s=1.4, lap_n=4, clear=False)
    display_image(img)  # let's see it
    scipy.misc.imsave('frame%05d.png'%f, img)  # save the frame to disk (the frames can later be compiled into a video, e.g. with ffmpeg)
    img = resize(img[10:-10,10:-10,:], (h, w))  # before looping back, crop the border by 10 pixels, resize, repeat
examples/dreaming/neural-synth.ipynb
ml4a/ml4a-guides
gpl-2.0
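Once the frames are saved to disk they can be stitched into a video. Here is one hedged way to do it from Python by shelling out to ffmpeg (this assumes ffmpeg is installed and on the PATH, and that the frames were written as frame00000.png, frame00001.png, ... as above):
import subprocess

# compile the saved frames into an mp4 at 30 frames per second
subprocess.call([
    'ffmpeg', '-y',
    '-framerate', '30',
    '-i', 'frame%05d.png',
    '-c:v', 'libx264',
    '-pix_fmt', 'yuv420p',
    'zoom.mp4'
])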
Patches The fill attribute of the Lines mark allows us to fill a path in different ways, while the fill_colors attribute lets us control the color of the fill
sc_x = LinearScale()
sc_y = LinearScale()

patch = Lines(x=[[0, 2, 1.2, np.nan, np.nan, np.nan, np.nan],
                 [0.5, 2.5, 1.7, np.nan, np.nan, np.nan, np.nan],
                 [4, 5, 6, 6, 5, 4, 3]],
              y=[[0, 0, 1, np.nan, np.nan, np.nan, np.nan],
                 [0.5, 0.5, -0.5, np.nan, np.nan, np.nan, np.nan],
                 [1, 1.1, 1.2, 2.3, 2.2, 2.7, 1.0]],
              fill_colors=['orange', 'blue', 'red'],
              fill='inside',
              stroke_width=10,
              close_path=True,
              scales={'x': sc_x, 'y': sc_y},
              display_legend=True)

Figure(marks=[patch], animation_duration=1000)

patch.opacities = [0.1, 0.2]

patch.x = [[2, 3, 3.2, np.nan, np.nan, np.nan, np.nan],
           [0.5, 2.5, 1.7, np.nan, np.nan, np.nan, np.nan],
           [4, 5, 6, 6, 5, 4, 3]]

patch.close_path = False
examples/Marks/Object Model/Lines.ipynb
SylvainCorlay/bqplot
apache-2.0
Observation - from the report contents page, I can navigate via the Back button to https://publications.parliament.uk/pa/cm201516/cmselect/cmwomeq/584/58401.htm but then it's not clear where I am at all? It would probably make sense to be able to get back to the inquiry page for the inquiry that resulted in the report.
import pandas as pd
notebooks/Committee Reports.ipynb
psychemedia/parlihacks
mit
Report Contents Page Link Scraper
import requests
import requests_cache
requests_cache.install_cache('parli_comm_cache')

from bs4 import BeautifulSoup

#https://www.dataquest.io/blog/web-scraping-tutorial-python/
page = requests.get(url)
soup = BeautifulSoup(page.content, 'html.parser')

#What does a ToC item look like?
soup.select('p[class*="ToC"]')[5].find('a')

url_written=None
url_witnesses=None

for p in soup.select('p[class*="ToC"]'):
    #witnesses
    if 'Witnesses' in p.find('a'):
        url_witnesses=p.find('a')['href']
    #written evidence
    if 'Published written evidence' in p.find('a'):
        url_written=p.find('a')['href']

url_written, url_witnesses

#https://stackoverflow.com/a/34661518/454773
pages=[]
for EachPart in soup.select('p[class*="ToC"]'):
    href=EachPart.find('a')['href']
    #Fudge to collect URLs of pages associated with report content
    if '#_' in href:
        pages.append(EachPart.find('a')['href'].split('#')[0])
pages=list(set(pages))
pages

#We need to get the relative path for the page...
import os.path
stub=os.path.split(url)
stub

#Grab all the pages in the report
for p in pages:
    r=requests.get('{}/{}'.format(stub[0],p))
notebooks/Committee Reports.ipynb
psychemedia/parlihacks
mit
Report - Page Scraper For each HTML Page in the report, extract references to oral evidence session questions and written evidence.
pagesoup=BeautifulSoup(r.content, 'html.parser')
print(str(pagesoup.select('div[id="shellcontent"]')[0])[:2000])

import re

def evidenceRef(pagesoup):
    qs=[]
    ws=[]
    #Grab list of questions
    for p in pagesoup.select('div[class="_idFootnote"]'):
        #Find oral question numbers
        q=re.search(r'^.*\s+(Q[0-9]*)\s*$', p.find('p').text)
        if q:
            qs.append(q.group(1))
        #Find links to written evidence
        links=p.find('p').findAll('a')
        if len(links)>1:
            if links[1]['href'].startswith('http://data.parliament.uk/WrittenEvidence/CommitteeEvidence.svc/EvidenceDocument/'):
                ws.append(links[1].text.strip('()'))
    return qs, ws

evidenceRef(pagesoup)

qs=[]
ws=[]
for p in pages:
    r=requests.get('{}/{}'.format(stub[0],p))
    pagesoup=BeautifulSoup(r.content, 'html.parser')
    pagesoup.select('div[id="shellcontent"]')[0]
    qstmp,wstmp= evidenceRef(pagesoup)
    qs += qstmp
    ws += wstmp

pd.DataFrame(qs)[0].value_counts().head()
pd.DataFrame(ws)[0].value_counts().head()
notebooks/Committee Reports.ipynb
psychemedia/parlihacks
mit
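To make the regular expression in evidenceRef a bit more concrete, here's a standalone example on a made-up footnote string (purely illustrative; the real footnotes are scraped from the report pages):
import re

# a made-up footnote that ends with an oral evidence question number
sample = 'Professor Jill Rubery, oral evidence taken on 15 December 2015, Q12'
m = re.search(r'^.*\s+(Q[0-9]*)\s*$', sample)
if m:
    print(m.group(1))  # Q12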
Report - Oral Session Page Scraper Is this reliably cribbed by link text Witnesses?
#url='https://publications.parliament.uk/pa/cm201516/cmselect/cmwomeq/584/58414.htm'
if url_witnesses is not None:
    r=requests.get('{}/{}'.format(stub[0],url_witnesses))
    pagesoup=BeautifulSoup(r.content, 'html.parser')
    l1=[t.text.split('\t')[0] for t in pagesoup.select('h2[class="WitnessHeading"]')]
    l2=pagesoup.select('table')
    pd.DataFrame({'a':l1,'b':l2})

#Just as easy to do this by hand
items=[]
items.append(['Tuesday 15 December 2015','Chris Giles', 'Economics Editor', 'The Financial Times','Q1', 'Q35'])
items.append(['Tuesday 15 December 2015','Dr Alison Parken', 'Women Adding Value to the Economy (WAVE)', 'Cardiff University','Q1', 'Q35'])
items.append(['Tuesday 15 December 2015','Professor Jill Rubery','', 'Manchester University','Q1', 'Q35'])
items.append(['Tuesday 15 December 2015','Sheila Wild', 'Founder', 'Equal Pay Portal','Q1', 'Q35'])
items.append(['Tuesday 15 December 2015','Professor the Baroness Wolf of Dulwich', "King's College", 'London','Q1', 'Q35'])
items.append(['Tuesday 15 December 2015','Neil Carberry', 'Director for Employment and Skills', 'CBI','Q36','Q58'])
items.append(['Tuesday 15 December 2015','Ann Francke', 'Chief Executive', 'Chartered Management Institute','Q36','Q58'])
items.append(['Tuesday 15 December 2015','Monika Queisser',' Senior Counsellor and Head of Social Policy', 'Organisation for Economic Cooperation and Development','Q36','Q58'])
items.append(['Tuesday 12 January 2016','Amanda Brown', 'Assistant General Secretary', 'NUT','Q59','Q99'])
items.append(['Tuesday 12 January 2016','Dr Sally Davies', 'President', "Medical Women's Federation",'Q59','Q99'])
items.append(['Tuesday 12 January 2016','Amanda Fone','Chief Executive Officer', 'F1 Recruitment and Search','Q59','Q99'])
items.append(['Tuesday 12 January 2016','Audrey Williams', 'Employment Lawyer and Partner',' Fox Williams','Q59','Q99'])
items.append(['Tuesday 12 January 2016','Anna Ritchie Allan', 'Project Manager', 'Close the Gap','Q100','Q136'])
items.append(['Tuesday 12 January 2016','Christopher Brooks', 'Policy Adviser', 'Age UK','Q100','Q136'])
items.append(['Tuesday 12 January 2016','Scarlet Harris', 'Head of Gender Equality', 'TUC','Q100','Q136'])
items.append(['Tuesday 12 January 2016','Mr Robert Stephenson-Padron', 'Managing Director', 'Penrose Care','Q100','Q136'])
items.append(['Tuesday 19 January 2016','Sarah Jackson', 'Chief Executive', 'Working Families','Q137','Q164'])
items.append(['Tuesday 19 January 2016','Adrienne Burgess', 'Joint Chief Executive and Head of Research', 'Fatherhood Institute','Q137','Q164'])
items.append(['Tuesday 19 January 2016','Maggie Stilwell', 'Partner', 'Ernst & Young LLP','Q137','Q164'])
items.append(['Tuesday 26 January 2016','Michael Newman', 'Vice-Chair', 'Discrimination Law Association','Q165','Q191'])
items.append(['Tuesday 26 January 2016','Duncan Brown', '','Institute for Employment Studies','Q165','Q191'])
items.append(['Tuesday 26 January 2016','Tim Thomas', 'Head of Employment and Skills', "EEF, the manufacturers' association",'Q165','Q191'])
items.append(['Tuesday 26 January 2016','Helen Fairfoul', 'Chief Executive', 'Universities and Colleges Employers Association','Q192','Q223'])
items.append(['Tuesday 26 January 2016','Emma Stewart', 'Joint Chief Executive Officer', 'Timewise Foundation','Q192','Q223'])
items.append(['Tuesday 26 January 2016','Claire Turner','', 'Joseph Rowntree Foundation','Q192','Q223'])
items.append(['Wednesday 10 February 2016','Rt Hon Nicky Morgan MP', 'Secretary of State for Education and Minister for Women and Equalities','Department for Education','Q224','Q296'])
items.append(['Wednesday 10 February 2016','Nick Boles MP', 'Minister for Skills', 'Department for Business, Innovation and Skills','Q224','Q296'])

df=pd.DataFrame(items,columns=['Date','Name','Role','Org','Qmin','Qmax'])

#Cleaning check
df['Org']=df['Org'].str.strip()

df['n_qmin']=df['Qmin'].str.strip('Q').astype(int)
df['n_qmax']=df['Qmax'].str.strip('Q').astype(int)
df['session']=df['Qmin']+'-'+df['n_qmax'].astype(str)

df.head()
notebooks/Committee Reports.ipynb
psychemedia/parlihacks
mit
Report - Written Evidence Scraper Is this reliably cribbed by link text Published written evidence?
#url='https://publications.parliament.uk/pa/cm201516/cmselect/cmwomeq/584/58415.htm'
all_written=[]
if url_written is not None:
    r=requests.get('{}/{}'.format(stub[0],url_written))
    pagesoup=BeautifulSoup(r.content, 'html.parser')
    for p in pagesoup.select('p[class="EvidenceList1"]'):
        #print(p)
        #Get rid of span tags
        for match in p.findAll('span[class="EvidenceList1Span"]'):
            match.extract()
        all_written.append((p.contents[1].strip('()').strip(), p.find('a')['href'], p.find('a').text))

written_df=pd.DataFrame(all_written)
written_df.columns=['Org','URL','RefNumber']
written_df.head()

def getSession(q):
    return df[(df['n_qmin']<=q) & (df['n_qmax']>=q)].iloc[0]['session']

getSession(33)

#Report on sessions that included a question by count
df_qs=pd.DataFrame(qs, columns=['qn'])
df_qs['session']=df_qs['qn'].apply(lambda x: getSession(int(x.strip('Q'))) )
s_qs_cnt=df_qs['session'].value_counts()
s_qs_cnt

pd.concat([s_qs_cnt,df.groupby('session')['Org'].apply(lambda x: '; '.join(list(x)))], axis=1).sort_values('session',ascending=False)

#Written evidence
df_ws=pd.DataFrame(ws,columns=['RefNumber'])
df_ws=df_ws.merge(written_df, on='RefNumber')
df_ws['Org'].value_counts().head()

#Organisations that gave written and witness evidence
set(df_ws['Org']).intersection(set(df['Org']))

#Note there are more matches that are hidden by dirty data
#- e.g. NUT and National Union of Teachers are presumably the same
#- e.g. F1 Recruitment and Search and F1 Recruitment Ltd are presumably the same
notebooks/Committee Reports.ipynb
psychemedia/parlihacks
mit
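One rough way to surface the matches hidden by dirty data is fuzzy matching on the organisation names. A hedged sketch using the standard library's difflib (the cutoff is arbitrary and would need tuning; this is not part of the original analysis):
import difflib

# for each organisation that gave written evidence, look for a near-match among the witnesses
witness_orgs = list(set(df['Org']))
for org in set(df_ws['Org']):
    close = difflib.get_close_matches(org, witness_orgs, n=1, cutoff=0.6)
    if close and close[0] != org:
        print('{} ~ {}'.format(org, close[0]))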