Now let's add some random noise to create a noisy dataset, and re-plot it:
np.random.seed(42)
noisy = np.random.normal(digits.data, 4)
plot_digits(noisy)
present/mcc2/PythonDataScienceHandbook/05.09-Principal-Component-Analysis.ipynb
csaladenes/csaladenes.github.io
mit
It's clear by eye that the images are noisy, and contain spurious pixels. Let's train a PCA on the noisy data, requesting that the projection preserve 50% of the variance:
pca = PCA(0.50).fit(noisy)
pca.n_components_
present/mcc2/PythonDataScienceHandbook/05.09-Principal-Component-Analysis.ipynb
csaladenes/csaladenes.github.io
mit
Here 50% of the variance amounts to 12 principal components. Now we compute these components, and then use the inverse of the transform to reconstruct the filtered digits:
components = pca.transform(noisy)
filtered = pca.inverse_transform(components)
plot_digits(filtered)
present/mcc2/PythonDataScienceHandbook/05.09-Principal-Component-Analysis.ipynb
csaladenes/csaladenes.github.io
mit
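As an aside (a hedged sketch, not part of the original notebook), the kind of low-dimensional representation computed here can be fed straight into a classifier, an idea the next paragraph develops. Everything below is self-contained and purely illustrative:

```
# Hypothetical sketch: chain a PCA projection into a classifier so the model
# works in the reduced, noise-filtered space rather than on raw pixels.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

digits = load_digits()
Xtrain, Xtest, ytrain, ytest = train_test_split(digits.data, digits.target,
                                                random_state=0)

# Keep enough components to preserve ~90% of the variance, then classify.
model = make_pipeline(PCA(n_components=0.90, svd_solver='full'), SVC())
model.fit(Xtrain, ytrain)
print("test accuracy:", model.score(Xtest, ytest))
```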
This signal preserving/noise filtering property makes PCA a very useful feature selection routine—for example, rather than training a classifier on very high-dimensional data, you might instead train the classifier on the lower-dimensional representation, which will automatically serve to filter out random noise in the inputs.

Example: Eigenfaces

Earlier we explored an example of using a PCA projection as a feature selector for facial recognition with a support vector machine (see In-Depth: Support Vector Machines). Here we will take a look back and explore a bit more of what went into that. Recall that we were using the Labeled Faces in the Wild dataset made available through Scikit-Learn:
from sklearn.datasets import fetch_lfw_people
faces = fetch_lfw_people(min_faces_per_person=60)
print(faces.target_names)
print(faces.images.shape)
present/mcc2/PythonDataScienceHandbook/05.09-Principal-Component-Analysis.ipynb
csaladenes/csaladenes.github.io
mit
Let's take a look at the principal axes that span this dataset. Because this is a large dataset, we will use RandomizedPCA—it contains a randomized method to approximate the first $N$ principal components much more quickly than the standard PCA estimator, and thus is very useful for high-dimensional data (here, a dimensionality of nearly 3,000). We will take a look at the first 150 components:
# RandomizedPCA has been removed from recent scikit-learn releases,
# so we alias the standard PCA estimator instead.
# from sklearn.decomposition import RandomizedPCA
from sklearn.decomposition import PCA as RandomizedPCA
pca = RandomizedPCA(150)
pca.fit(faces.data)
present/mcc2/PythonDataScienceHandbook/05.09-Principal-Component-Analysis.ipynb
csaladenes/csaladenes.github.io
mit
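Note that in current scikit-learn releases the randomized algorithm is requested through PCA's svd_solver argument rather than a separate class; a hedged equivalent of the cell above:

```
from sklearn.decomposition import PCA

# Equivalent of RandomizedPCA(150) on recent scikit-learn versions;
# random_state is fixed here only for reproducibility.
pca = PCA(n_components=150, svd_solver='randomized', random_state=42)
pca.fit(faces.data)
```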
In this case, it can be interesting to visualize the images associated with the first several principal components (these components are technically known as "eigenvectors," so these types of images are often called "eigenfaces"). As you can see in this figure, they are as creepy as they sound:
fig, axes = plt.subplots(3, 8, figsize=(9, 4),
                         subplot_kw={'xticks':[], 'yticks':[]},
                         gridspec_kw=dict(hspace=0.1, wspace=0.1))
for i, ax in enumerate(axes.flat):
    ax.imshow(pca.components_[i].reshape(62, 47), cmap='bone')
present/mcc2/PythonDataScienceHandbook/05.09-Principal-Component-Analysis.ipynb
csaladenes/csaladenes.github.io
mit
The results are very interesting, and give us insight into how the images vary: for example, the first few eigenfaces (from the top left) seem to be associated with the angle of lighting on the face, and later principal vectors seem to be picking out certain features, such as eyes, noses, and lips. Let's take a look at the cumulative variance of these components to see how much of the data information the projection is preserving:
plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlabel('number of components')
plt.ylabel('cumulative explained variance');
present/mcc2/PythonDataScienceHandbook/05.09-Principal-Component-Analysis.ipynb
csaladenes/csaladenes.github.io
mit
We see that these 150 components account for just over 90% of the variance. That would lead us to believe that using these 150 components, we would recover most of the essential characteristics of the data. To make this more concrete, we can compare the input images with the images reconstructed from these 150 components:
# Compute the components and projected faces
pca = RandomizedPCA(150).fit(faces.data)
components = pca.transform(faces.data)
projected = pca.inverse_transform(components)

# Plot the results
fig, ax = plt.subplots(2, 10, figsize=(10, 2.5),
                       subplot_kw={'xticks':[], 'yticks':[]},
                       gridspec_kw=dict(hspace=0.1, wspace=0.1))
for i in range(10):
    ax[0, i].imshow(faces.data[i].reshape(62, 47), cmap='binary_r')
    ax[1, i].imshow(projected[i].reshape(62, 47), cmap='binary_r')

ax[0, 0].set_ylabel('full-dim\ninput')
ax[1, 0].set_ylabel('150-dim\nreconstruction');
present/mcc2/PythonDataScienceHandbook/05.09-Principal-Component-Analysis.ipynb
csaladenes/csaladenes.github.io
mit
You will use this helper function to write lists containing the article ids, categories, and authors for each article in our database to a local file.
def write_list_to_disk(my_list, filename):
    with open(filename, 'w') as f:
        for item in my_list:
            line = "%s\n" % item
            f.write(line)
courses/machine_learning/deepdive2/recommendation_systems/solutions/content_based_preproc.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
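A quick, purely illustrative check of the helper (the list and filename below are made up, not from the notebook):

```
write_list_to_disk(["news", "lifestyle", "stars & kultur"], "sample_items.txt")

with open("sample_items.txt") as f:
    print(f.read())   # prints one item per line
```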
Pull data from BigQuery

The cell below creates a local text file containing all the article ids (i.e. 'content ids') in the dataset. Have a look at the original dataset in BigQuery. Then read through the query below and make sure you understand what it is doing.
sql=""" #standardSQL SELECT (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) AS content_id FROM `cloud-training-demos.GA360_test.ga_sessions_sample`, UNNEST(hits) AS hits WHERE # only include hits on pages hits.type = "PAGE" AND (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) IS NOT NULL GROUP BY content_id """ content_ids_list = bigquery.Client().query(sql).to_dataframe()['content_id'].tolist() write_list_to_disk(content_ids_list, "content_ids.txt") print("Some sample content IDs {}".format(content_ids_list[:3])) print("The total number of articles is {}".format(len(content_ids_list)))
courses/machine_learning/deepdive2/recommendation_systems/solutions/content_based_preproc.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
There should be 15,634 articles in the database. Next, you'll create local files containing the list of article categories and the list of article authors. Note the change in the index when pulling the article category or author information. Also, you are using the first author of the article to create your author list. Refer back to the original dataset and use the hits.customDimensions.index field to verify the correct index.
sql=""" #standardSQL SELECT (SELECT MAX(IF(index=7, value, NULL)) FROM UNNEST(hits.customDimensions)) AS category FROM `cloud-training-demos.GA360_test.ga_sessions_sample`, UNNEST(hits) AS hits WHERE # only include hits on pages hits.type = "PAGE" AND (SELECT MAX(IF(index=7, value, NULL)) FROM UNNEST(hits.customDimensions)) IS NOT NULL GROUP BY category """ categories_list = bigquery.Client().query(sql).to_dataframe()['category'].tolist() write_list_to_disk(categories_list, "categories.txt") print(categories_list)
courses/machine_learning/deepdive2/recommendation_systems/solutions/content_based_preproc.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
The categories are 'News', 'Stars & Kultur', and 'Lifestyle'. When creating the author list, you'll only use the first author information for each article.
sql=""" #standardSQL SELECT REGEXP_EXTRACT((SELECT MAX(IF(index=2, value, NULL)) FROM UNNEST(hits.customDimensions)), r"^[^,]+") AS first_author FROM `cloud-training-demos.GA360_test.ga_sessions_sample`, UNNEST(hits) AS hits WHERE # only include hits on pages hits.type = "PAGE" AND (SELECT MAX(IF(index=2, value, NULL)) FROM UNNEST(hits.customDimensions)) IS NOT NULL GROUP BY first_author """ authors_list = bigquery.Client().query(sql).to_dataframe()['first_author'].tolist() write_list_to_disk(authors_list, "authors.txt") print("Some sample authors {}".format(authors_list[:10])) print("The total number of authors is {}".format(len(authors_list)))
courses/machine_learning/deepdive2/recommendation_systems/solutions/content_based_preproc.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
There should be 385 authors in the database.

Create train and test sets

In this section, you will create the train/test split of our data for training our model. You use the concatenated values for visitor id and content id to create a farm fingerprint, taking approximately 90% of the data for the training set and 10% for the test set.
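Before running the full queries below, it may help to see the splitting idea in isolation. Here is a hedged Python sketch of the same deterministic, hash-based bucketing (MD5 stands in for BigQuery's FARM_FINGERPRINT, and the id strings are made up; only the idea is the same):

```
import hashlib

def split_bucket(visitor_id, content_id, buckets=10):
    """Deterministically map a (visitor, content) pair to a bucket 0..buckets-1."""
    key = (visitor_id + content_id).encode("utf-8")
    return int(hashlib.md5(key).hexdigest(), 16) % buckets

# Buckets 0-8 (~90% of pairs) go to training, bucket 9 (~10%) to test,
# and the assignment never changes between runs.
print(split_bucket("1000196974485173657", "299918278"))
```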
sql=""" WITH site_history as ( SELECT fullVisitorId as visitor_id, (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) AS content_id, (SELECT MAX(IF(index=7, value, NULL)) FROM UNNEST(hits.customDimensions)) AS category, (SELECT MAX(IF(index=6, value, NULL)) FROM UNNEST(hits.customDimensions)) AS title, (SELECT MAX(IF(index=2, value, NULL)) FROM UNNEST(hits.customDimensions)) AS author_list, SPLIT(RPAD((SELECT MAX(IF(index=4, value, NULL)) FROM UNNEST(hits.customDimensions)), 7), '.') as year_month_array, LEAD(hits.customDimensions, 1) OVER (PARTITION BY fullVisitorId ORDER BY hits.time ASC) as nextCustomDimensions FROM `cloud-training-demos.GA360_test.ga_sessions_sample`, UNNEST(hits) AS hits WHERE # only include hits on pages hits.type = "PAGE" AND fullVisitorId IS NOT NULL AND hits.time != 0 AND hits.time IS NOT NULL AND (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) IS NOT NULL ) SELECT visitor_id, content_id, category, REGEXP_REPLACE(title, r",", "") as title, REGEXP_EXTRACT(author_list, r"^[^,]+") as author, DATE_DIFF(DATE(CAST(year_month_array[OFFSET(0)] AS INT64), CAST(year_month_array[OFFSET(1)] AS INT64), 1), DATE(1970,1,1), MONTH) as months_since_epoch, (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(nextCustomDimensions)) as next_content_id FROM site_history WHERE (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(nextCustomDimensions)) IS NOT NULL AND ABS(MOD(FARM_FINGERPRINT(CONCAT(visitor_id, content_id)), 10)) < 9 """ training_set_df = bigquery.Client().query(sql).to_dataframe() training_set_df.to_csv('training_set.csv', header=False, index=False, encoding='utf-8') training_set_df.head() sql=""" WITH site_history as ( SELECT fullVisitorId as visitor_id, (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) AS content_id, (SELECT MAX(IF(index=7, value, NULL)) FROM UNNEST(hits.customDimensions)) AS category, (SELECT MAX(IF(index=6, value, NULL)) FROM UNNEST(hits.customDimensions)) AS title, (SELECT MAX(IF(index=2, value, NULL)) FROM UNNEST(hits.customDimensions)) AS author_list, SPLIT(RPAD((SELECT MAX(IF(index=4, value, NULL)) FROM UNNEST(hits.customDimensions)), 7), '.') as year_month_array, LEAD(hits.customDimensions, 1) OVER (PARTITION BY fullVisitorId ORDER BY hits.time ASC) as nextCustomDimensions FROM `cloud-training-demos.GA360_test.ga_sessions_sample`, UNNEST(hits) AS hits WHERE # only include hits on pages hits.type = "PAGE" AND fullVisitorId IS NOT NULL AND hits.time != 0 AND hits.time IS NOT NULL AND (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(hits.customDimensions)) IS NOT NULL ) SELECT visitor_id, content_id, category, REGEXP_REPLACE(title, r",", "") as title, REGEXP_EXTRACT(author_list, r"^[^,]+") as author, DATE_DIFF(DATE(CAST(year_month_array[OFFSET(0)] AS INT64), CAST(year_month_array[OFFSET(1)] AS INT64), 1), DATE(1970,1,1), MONTH) as months_since_epoch, (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(nextCustomDimensions)) as next_content_id FROM site_history WHERE (SELECT MAX(IF(index=10, value, NULL)) FROM UNNEST(nextCustomDimensions)) IS NOT NULL AND ABS(MOD(FARM_FINGERPRINT(CONCAT(visitor_id, content_id)), 10)) >= 9 """ test_set_df = bigquery.Client().query(sql).to_dataframe() test_set_df.to_csv('test_set.csv', header=False, index=False, encoding='utf-8') test_set_df.head()
courses/machine_learning/deepdive2/recommendation_systems/solutions/content_based_preproc.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Let's have a look at the two csv files you just created, containing the training and test sets. You'll also do a line count of both files to confirm that you have achieved an approximate 90/10 train/test split. In the next notebook, Content Based Filtering, you will build a model to recommend an article given information about the current article being read, such as the category, title, author, and publish date.
%%bash
wc -l *_set.csv
head *_set.csv
courses/machine_learning/deepdive2/recommendation_systems/solutions/content_based_preproc.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
An OrderedDict is very useful when you want to build a mapping that you will later serialize or encode into another format. For example, if you want to precisely control the order of the fields when encoding to JSON, you can first build the data with an OrderedDict:
import json

# d is the OrderedDict built earlier in this recipe
json.dumps(d)
01 data structures and algorithms/01.07 keep dict in order.ipynb
wuafeing/Python3-Tutorial
gpl-3.0
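For context, a minimal self-contained version of the recipe (the dictionary d in the cell above is assumed to have been built in an earlier cell; the keys here are illustrative):

```
import json
from collections import OrderedDict

d = OrderedDict()
d['foo'] = 1
d['bar'] = 2
d['spam'] = 3

# Keys are serialized in insertion order.
print(json.dumps(d))   # {"foo": 1, "bar": 2, "spam": 3}
```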
Data preprocessing
# Define sliding window def window_time_series(series, n, step=1): # print "in window_time_series",series if step < 1.0: step = max(int(step * n), 1) return [series[i:i + n] for i in range(0, len(series) - n + 1, step)] # PAA function def paa(series, now, opw): if now == None: now = len(series) / opw if opw == None: opw = ceil(len(series) / now) return [sum(series[i * opw: (i + 1) * opw]) / float(opw) for i in range(now)] def standardize(serie): dev = np.sqrt(np.var(serie)) mean = np.mean(serie) return [(each - mean) / dev for each in serie] # Rescale data into [0,1] def rescale(serie): maxval = max(serie) minval = min(serie) gap = float(maxval - minval) return [(each - minval) / gap for each in serie] # Rescale data into [-1,1] def rescaleminus(serie): maxval = max(serie) minval = min(serie) gap = float(maxval - minval) return [(each - minval) / gap * 2 - 1 for each in serie] # Generate quantile bins def QMeq(series, Q): q = pd.qcut(list(set(series)), Q) dic = dict(zip(set(series), q.labels)) MSM = np.zeros([Q, Q]) label = [] for each in series: label.append(dic[each]) for i in range(0, len(label) - 1): MSM[label[i]][label[i + 1]] += 1 for i in xrange(Q): if sum(MSM[i][:]) == 0: continue MSM[i][:] = MSM[i][:] / sum(MSM[i][:]) return np.array(MSM), label, q.levels # Generate quantile bins when equal values exist in the array (slower than QMeq) def QVeq(series, Q): q = pd.qcut(list(set(series)), Q) dic = dict(zip(set(series), q.labels)) qv = np.zeros([1, Q]) label = [] for each in series: label.append(dic[each]) for i in range(0, len(label)): qv[0][label[i]] += 1.0 return np.array(qv[0][:] / sum(qv[0][:])), label # Generate Markov Matrix given a spesicif number of quantile bins def paaMarkovMatrix(paalist, level): paaindex = [] for each in paalist: for k in range(len(level)): lower = float(level[k][1:-1].split(',')[0]) upper = float(level[k][1:-1].split(',')[-1]) if each >= lower and each <= upper: paaindex.append(k) return paaindex # Generate Image (.png) files of generated images def gengramImgs(image, paaimages, label, name, path): import operator index = zip(range(len(label)), label) index.sort(key=operator.itemgetter(1)) count = 0 for p, q in index: count += 1 #print 'generate fig of pdfs:', p plt.ioff(); fig = plt.figure(); fig.set_size_inches((1,1)) ax = plt.Axes(fig, [0., 0., 1., 1.]) ax.set_axis_off() fig.add_axes(ax) plt.imshow(paaimages[p], aspect='equal'); plt.savefig(path+"/fig-"+name+".png") plt.close(fig) if count > 30: break # Generate pdf files of trainsisted array in porlar coordinates def genpolarpdfs(raw, label, name): import matplotlib.backends.backend_pdf as bpdf import operator index = zip(range(len(label)), label) index.sort(key=operator.itemgetter(1)) with bpdf.PdfPages(name) as pdf: for p, q in index: #print 'generate fig of pdfs:', p plt.ioff(); r = np.array(range(1, length + 1)); r = r / 100.0; theta = np.arccos(np.array(rescaleminus(standardize(raw[p][1:])))) * 2; fig = plt.figure(); plt.suptitle(datafile + '_' + str(label[p])); ax = plt.subplot(111, polar=True); ax.plot(theta, r, color='r', linewidth=3); pdf.savefig(fig) plt.close(fig) pdf.close # return the max value instead of mean value in PAAs def maxsample(mat, s): retval = [] x, y, z = mat.shape l = np.int(np.floor(y / float(s))) for each in mat: block = [] for i in range(s): block.append([np.max(each[i * l:(i + 1) * l, j * l:(j + 1) * l]) for j in xrange(s)]) retval.append(np.asarray(block)) return np.asarray(retval) # Pickle the data and save in the pkl file def pickledata(mat, label, train, name): 
#print '..pickling data:', name traintp = (mat[:train], label[:train]) testtp = (mat[train:], label[train:]) f = file('fridge/' + name + '.pkl', 'wb') pickletp = [traintp, testtp] cPickle.dump(pickletp, f, protocol=cPickle.HIGHEST_PROTOCOL) def pickle3data(mat, label, train, name): #print '..pickling data:', name traintp = (mat[:train], label[:train]) validtp = (mat[:train], label[:train]) testtp = (mat[train:], label[train:]) f = file(name + '.pkl', 'wb') pickletp = [traintp, validtp, testtp] cPickle.dump(pickletp, f, protocol=cPickle.HIGHEST_PROTOCOL)
phd-thesis/Benchmarking 2 - Identificação de Cargas através de Representação Visual de Séries Temporais-Copy1.ipynb
diegocavalca/Studies
cc0-1.0
General parameters of the data used in modeling (training and test)
#################################
###Define the parameters here####
#################################
datafiles = ['dish washer1-1']  # Data file name (TODO: change here)
trains = [250]         # Number of training instances (because we assume training and test data are mixed in one file)
size = [32]            # PAA size
GAF_type = 'GADF'      # GAF type: GASF, GADF
save_PAA = True        # Save the GAF with or without dimension reduction by PAA: True, False
rescale_type = 'Zero'  # Rescale the data into [0,1] or [-1,1]: Zero, Minusone

# The directory will be created if it does not already exist. Here the images will be stored.
directory = os.path.join(BENCHMARKING_RESOURCES_PATH, 'GeneratedImages')
if not os.path.exists(directory):
    os.makedirs(directory)
phd-thesis/Benchmarking 2 - Identificação de Cargas através de Representação Visual de Séries Temporais-Copy1.ipynb
diegocavalca/Studies
cc0-1.0
Generating data

To keep the benchmarks comparable, the time-series data from benchmark 1 will be used for the feature extraction process (the serie2image conversion of benchmark 2).

Feature extraction
def serie2image(serie, GAF_type = 'GADF', scaling = False, s = 32): """ Customized function to perform Series to Image conversion. Args: serie : original input data (time-serie chunk of appliance/main data - REDD - benchmarking 1) GAF_type : GADF / GASF (Benchmarking 2 process) s : Size of output paaimage originated from serie [ INFO: PAA = (32, 32) / noPAA = (50, 50) ] """ image = None paaimage = None patchimage = None matmatrix = None fullmatrix = None std_data = serie if scaling: std_data = rescale(std_data) paalistcos = paa(std_data, s, None) # paalistcos = rescale(paa(each[1:],s,None)) # paalistcos = rescaleminus(paa(each[1:],s,None)) ################raw################### datacos = np.array(std_data) #print(datacos) datasin = np.sqrt(1 - np.array(std_data) ** 2) #print(datasin) paalistcos = np.array(paalistcos) paalistsin = np.sqrt(1 - paalistcos ** 2) datacos = np.matrix(datacos) datasin = np.matrix(datasin) paalistcos = np.matrix(paalistcos) paalistsin = np.matrix(paalistsin) if GAF_type == 'GASF': paamatrix = paalistcos.T * paalistcos - paalistsin.T * paalistsin matrix = np.array(datacos.T * datacos - datasin.T * datasin) elif GAF_type == 'GADF': paamatrix = paalistsin.T * paalistcos - paalistcos.T * paalistsin matrix = np.array(datasin.T * datacos - datacos.T * datasin) else: sys.exit('Unknown GAF type!') #label = np.asarray(label) image = matrix paaimage = np.array(paamatrix) matmatrix = np.asarray(matmatrix) fullmatrix = np.asarray(fullmatrix) # # maximage = maxsample(image, s) # maxmatrix = np.asarray(np.asarray([each.flatten() for each in maximage])) if save_PAA == False: finalmatrix = matmatrix else: finalmatrix = fullmatrix # uncomment below if needed data in pickled form # pickledata(finalmatrix, label, train, datafilename) #gengramImgs(image, paaimage, label, directory) return image, paaimage, matmatrix, fullmatrix, finalmatrix # Reading power dataset (benchmark 1) BENCHMARKING1_RESOURCES_PATH = "benchmarkings/cs446 project-electric-load-identification-using-machine-learning/" size_paa = 32 size_without_paa = 30 # devices to be used in training and testing use_idx = np.array([3,4,6,7,10,11,13,17,19]) label_columns_idx = ["APLIANCE_{}".format(i) for i in use_idx]
phd-thesis/Benchmarking 2 - Identificação de Cargas através de Representação Visual de Séries Temporais-Copy1.ipynb
diegocavalca/Studies
cc0-1.0
Training set
print("Processing train dataset (Series to Images)...") # Train... train_power_chunks = np.load( os.path.join(BENCHMARKING1_RESOURCES_PATH, 'datasets/train_power_chunks.npy') ) train_labels_binary = np.load( os.path.join(BENCHMARKING1_RESOURCES_PATH, 'datasets/train_labels_binary.npy') ) data_paa_train = [] data_without_paa_train = [] #for idx, row in tqdm_notebook(df_power_chunks.iterrows(), total = df_power_chunks.shape[0]): for idx, power_chunk in tqdm_notebook(enumerate(train_power_chunks), total = train_power_chunks.shape[0]): #serie = row[attr_columns_idx].tolist() #print(serie) #labels = row[label_columns_idx].astype('int').astype('str').tolist() serie = power_chunk labels = train_labels_binary[idx, :].astype('str').tolist() labels_str = ''.join(labels) for g_Type in ['GASF', 'GADF']: #image, paaimage, matmatrix, fullmatrix, finalmatrix = serie2image(serie, g_Type) image, paaimage, _, _, _ = serie2image(serie, g_Type, scaling=True) # Persist image data files (PAA - noPAA) np.save( os.path.join( BENCHMARKING_RESOURCES_PATH, "GeneratedMatrixImages", "{}_WITHOUTPAA_{}_train_{}.npy".format(idx, g_Type, labels_str) ), image ) # x is the array you want to save imsave( os.path.join( BENCHMARKING_RESOURCES_PATH, "GeneratedImages", "{}_WITHOUTPAA_{}_train_{}.png".format(idx, g_Type, labels_str) ), image ) data_without_paa_train.append( list([idx, g_Type]) + list(image.flatten()) + list(labels) ) np.save( os.path.join( BENCHMARKING_RESOURCES_PATH, "GeneratedMatrixImages", "{}_PAA_{}_train_{}.npy".format(idx, g_Type, labels_str) ), paaimage ) imsave( os.path.join( BENCHMARKING_RESOURCES_PATH, "GeneratedImages", "{}_PAA_{}_train_{}.png".format(idx, g_Type, labels_str) ), paaimage ) data_paa_train.append( list([idx, g_Type]) + list(paaimage.flatten()) + list(labels) ) # VIsualizgin some results... plt.figure(figsize=(8,6)); plt.suptitle(g_Type + ' series'); ax1 = plt.subplot(121); plt.title(g_Type + ' without PAA'); plt.imshow(image); divider = make_axes_locatable(ax1); cax = divider.append_axes("right", size="2.5%", pad=0.2); plt.colorbar(cax=cax); ax2 = plt.subplot(122); plt.title(g_Type + ' with PAA'); plt.imshow(paaimage); print('Saving processed data...') df_without_paa_train = pd.DataFrame( data = data_without_paa_train, columns = list(["IDX", "TYPE"]) + ["DIMESION_{}".format(d) for d in range(size_without_paa*size_without_paa)] + list(label_columns_idx) ) df_without_paa_train.to_csv(os.path.join( BENCHMARKING_RESOURCES_PATH, "datasets", "df_without_paa_train.csv")) df_paa_train = pd.DataFrame( data = data_paa_train, columns = list(["IDX", "TYPE"]) + ["DIMESION_{}".format(d) for d in range(size_paa*size_paa)] + list(label_columns_idx) ) df_paa_train.to_csv(os.path.join( BENCHMARKING_RESOURCES_PATH, "datasets", "df_paa_train.csv"))
phd-thesis/Benchmarking 2 - Identificação de Cargas através de Representação Visual de Séries Temporais-Copy1.ipynb
diegocavalca/Studies
cc0-1.0
Test set
print("Processing test dataset (Series to Images)...") # Test... test_power_chunks = np.load( os.path.join(BENCHMARKING1_RESOURCES_PATH, 'datasets/test_power_chunks.npy') ) test_labels_binary = np.load( os.path.join(BENCHMARKING1_RESOURCES_PATH, 'datasets/test_labels_binary.npy') ) data_paa_test = [] data_without_paa_test = [] #for idx, row in tqdm_notebook(df_power_chunks.iterrows(), total = df_power_chunks.shape[0]): for idx, power_chunk in tqdm_notebook(enumerate(test_power_chunks), total = test_power_chunks.shape[0]): #serie = row[attr_columns_idx].tolist() #print(serie) #labels = row[label_columns_idx].astype('int').astype('str').tolist() serie = power_chunk labels = test_labels_binary[idx, :].astype('str').tolist() labels_str = ''.join(labels) for g_Type in ['GASF', 'GADF']: #image, paaimage, matmatrix, fullmatrix, finalmatrix = serie2image(serie, g_Type) image, paaimage, _, _, _ = serie2image(serie, g_Type, scaling=True) # Persist image data files (PAA - noPAA) np.save( os.path.join( BENCHMARKING_RESOURCES_PATH, "GeneratedMatrixImages", "{}_WITHOUTPAA_{}_test_{}.npy".format(idx, g_Type, labels_str) ), image ) # x is the array you want to save imsave( os.path.join( BENCHMARKING_RESOURCES_PATH, "GeneratedImages", "{}_WITHOUTPAA_{}_test_{}.png".format(idx, g_Type, labels_str) ), image ) data_without_paa_test.append( list([idx, g_Type]) + list(image.flatten()) + list(labels) ) np.save( os.path.join( BENCHMARKING_RESOURCES_PATH, "GeneratedMatrixImages", "{}_PAA_{}_test_{}.npy".format(idx, g_Type, labels_str) ), paaimage ) imsave( os.path.join( BENCHMARKING_RESOURCES_PATH, "GeneratedImages", "{}_PAA_{}_test_{}.png".format(idx, g_Type, labels_str) ), paaimage ) data_paa_test.append( list([idx, g_Type]) + list(paaimage.flatten()) + list(labels) ) # VIsualizgin some results... plt.figure(figsize=(8,6)); plt.suptitle(g_Type + ' series'); ax1 = plt.subplot(121); plt.title(g_Type + ' without PAA'); plt.imshow(image); divider = make_axes_locatable(ax1); cax = divider.append_axes("right", size="2.5%", pad=0.2); plt.colorbar(cax=cax); ax2 = plt.subplot(122); plt.title(g_Type + ' with PAA'); plt.imshow(paaimage); print('Saving processed data...') df_without_paa_test = pd.DataFrame( data = data_without_paa_test, columns = list(["IDX", "TYPE"]) + ["DIMESION_{}".format(d) for d in range(size_without_paa*size_without_paa)] + list(label_columns_idx) ) df_without_paa_test.to_csv(os.path.join( BENCHMARKING_RESOURCES_PATH, "datasets", "df_without_paa_test.csv")) df_paa_test = pd.DataFrame( data = data_paa_test, columns = list(["IDX", "TYPE"]) + ["DIMESION_{}".format(d) for d in range(size_paa*size_paa)] + list(label_columns_idx) ) df_paa_test.to_csv(os.path.join( BENCHMARKING_RESOURCES_PATH, "datasets", "df_paa_test.csv"))
phd-thesis/Benchmarking 2 - Identificação de Cargas através de Representação Visual de Séries Temporais-Copy1.ipynb
diegocavalca/Studies
cc0-1.0
Modeling
def metrics(test, predicted): ##CLASSIFICATION METRICS acc = accuracy_score(test, predicted) prec = precision_score(test, predicted) rec = recall_score(test, predicted) f1 = f1_score(test, predicted) f1m = f1_score(test, predicted, average='macro') # print('f1:',f1) # print('acc: ',acc) # print('recall: ',rec) # print('precision: ',prec) # # to copy paste print #print("{:.4}\t{:.4}\t{:.4}\t{:.4}\t{:.4}".format(acc, prec, rec, f1, f1m)) # ##REGRESSION METRICS # mae = mean_absolute_error(test_Y,pred) # print('mae: ',mae) # E_pred = sum(pred) # E_ground = sum(test_Y) # rete = abs(E_pred-E_ground)/float(max(E_ground,E_pred)) # print('relative error total energy: ',rete) return acc, prec, rec, f1, f1m def plot_predicted_and_ground_truth(test, predicted): #import matplotlib.pyplot as plt plt.plot(predicted.flatten(), label = 'pred') plt.plot(test.flatten(), label= 'Y') plt.show() return def embedding_images(images, model): # Feature extraction process with VGG16 vgg16_feature_list = [] # Attributes array (vgg16 embedding) y = [] # Extract labels from name of image path[] for path in tqdm_notebook(images): img = keras_image.load_img(path, target_size=(100, 100)) x = keras_image.img_to_array(img) x = np.expand_dims(x, axis=0) x = preprocess_input(x) # "Extracting" features... vgg16_feature = vgg16_model.predict(x) vgg16_feature_np = np.array(vgg16_feature) vgg16_feature_list.append(vgg16_feature_np.flatten()) # Image (chuncked serie) file_name = path.split("\\")[-1].split(".")[0] image_labels = [int(l) for l in list(file_name.split("_")[-1])] y.append(image_labels) X = np.array(vgg16_feature_list) return X, y
phd-thesis/Benchmarking 2 - Identificação de Cargas através de Representação Visual de Séries Temporais-Copy1.ipynb
diegocavalca/Studies
cc0-1.0
Benchmarking (replicating the study)
# Building dnn model (feature extraction)
vgg16_model = VGG16(
    include_top=False,
    weights='imagenet',
    input_tensor=None,
    input_shape=(100, 100, 3),
    pooling='avg',
    classes=1000
)
phd-thesis/Benchmarking 2 - Identificação de Cargas através de Representação Visual de Séries Temporais-Copy1.ipynb
diegocavalca/Studies
cc0-1.0
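A hedged smoke test of the configuration above: with include_top=False and pooling='avg', VGG16 collapses its final 512-channel feature maps into a single 512-dimensional vector per image, which is the embedding size used downstream. The random input below is purely illustrative:

```
import numpy as np

# One random 100x100 RGB "image" through the frozen VGG16 base.
dummy = np.random.rand(1, 100, 100, 3).astype("float32")
features = vgg16_model.predict(dummy)
print(features.shape)   # expected: (1, 512)
```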
Embedding the training images
# GADF images with PAA (train)
images = sorted(glob(
    os.path.join(BENCHMARKING_RESOURCES_PATH, "GeneratedImages", "*_PAA_GADF_train_*.png")
))
X_train, y_train = embedding_images(images, vgg16_model)

# Data persistence
np.save(os.path.join(BENCHMARKING_RESOURCES_PATH, 'datasets/X_train.npy'), X_train)
np.save(os.path.join(BENCHMARKING_RESOURCES_PATH, 'datasets/y_train.npy'), y_train)
phd-thesis/Benchmarking 2 - Identificação de Cargas através de Representação Visual de Séries Temporais-Copy1.ipynb
diegocavalca/Studies
cc0-1.0
Embedding the test images
# GADF images with PAA (test)
images = sorted(glob(
    os.path.join(BENCHMARKING_RESOURCES_PATH, "GeneratedImages", "*_PAA_GADF_test_*.png")
))
X_test, y_test = embedding_images(images, vgg16_model)

# Data persistence
np.save(os.path.join(BENCHMARKING_RESOURCES_PATH, 'datasets/X_test.npy'), X_test)
np.save(os.path.join(BENCHMARKING_RESOURCES_PATH, 'datasets/y_test.npy'), y_test)
phd-thesis/Benchmarking 2 - Identificação de Cargas através de Representação Visual de Séries Temporais-Copy1.ipynb
diegocavalca/Studies
cc0-1.0
Training a supervised classifier
# Training supervised classifier
clf = DecisionTreeClassifier(max_depth=15)

# Train classifier
clf.fit(X_train, y_train)

# Save classifier for future use
#joblib.dump(clf, 'Tree'+'-'+device+'-redd-all.joblib')
phd-thesis/Benchmarking 2 - Identificação de Cargas através de Representação Visual de Séries Temporais-Copy1.ipynb
diegocavalca/Studies
cc0-1.0
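One detail worth noting: y_train here is a 2-D array with one ON/OFF column per appliance, and scikit-learn's DecisionTreeClassifier supports such multi-output targets natively. A small hedged illustration with synthetic data:

```
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in: 100 samples, 8 features, 3 binary outputs.
rng = np.random.RandomState(0)
X_demo = rng.rand(100, 8)
y_demo = (rng.rand(100, 3) > 0.5).astype(int)

demo_clf = DecisionTreeClassifier(max_depth=5).fit(X_demo, y_demo)
print(demo_clf.predict(X_demo[:2]).shape)   # (2, 3): one column per output/appliance
```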
Evaluating the classifier
# Predict test data y_pred = clf.predict(X_test) # Print metrics final_performance = [] y_test = np.array(y_test) y_pred = np.array(y_pred) print("") print("RESULT ANALYSIS\n\n") print("ON/OFF State Charts") print("-" * 115) for i in range(y_test.shape[1]): fig = plt.figure(figsize=(15, 2)) plt.title("Appliance #{}".format( label_columns_idx[i])) plt.plot(y_test[:, i].flatten(), label = "True Y") plt.plot( y_pred[:, i].flatten(), label = "Predicted Y") plt.xlabel('Sample') plt.xticks(range(0, y_test.shape[0], 50)) plt.xlim(0, y_test.shape[0]) plt.ylabel('Status') plt.yticks([0, 1]) plt.ylim(0,1) plt.legend() plt.show() acc, prec, rec, f1, f1m = metrics(y_test[:, i], y_pred[:, i]) final_performance.append([ label_columns_idx[i], round(acc*100, 2), round(prec*100, 2), round(rec*100, 2), round(f1*100, 2), round(f1m*100, 2) ]) print("-" * 115) print("") print("FINAL PERFORMANCE BY APPLIANCE (LABEL):") df_metrics = pd.DataFrame( data = final_performance, columns = ["Appliance", "Accuracy", "Precision", "Recall", "F1-score", "F1-macro"] ) display(df_metrics) print("") print("OVERALL AVERAGE PERFORMANCE:") final_performance = np.mean(np.array(final_performance)[:, 1:].astype(float), axis = 0) display(pd.DataFrame( data = { "Metric": ["Accuracy", "Precision", "Recall", "F1-score", "F1-macro"], "Result (%)": [round(p, 2) for p in final_performance] } )) # print("-----------------") # print("Accuracy : {0:.2f}%".format( final_performance[0] )) # print("Precision : {0:.2f}%".format( final_performance[1] )) # print("Recall : {0:.2f}%".format( final_performance[2] )) # print("F1-score : {0:.2f}%".format( final_performance[3] )) # print("F1-macro : {0:.2f}%".format( final_performance[4] )) # print("-----------------")
phd-thesis/Benchmarking 2 - Identificação de Cargas através de Representação Visual de Séries Temporais-Copy1.ipynb
diegocavalca/Studies
cc0-1.0
TF.Text Metrics

<table class="tfo-notebook-buttons" align="left">
  <td><a target="_blank" href="https://www.tensorflow.org/text/tutorials/text_similarity"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a></td>
  <td><a target="_blank" href="https://colab.research.google.com/github/tensorflow/text/blob/master/docs/tutorials/text_similarity.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a></td>
  <td><a target="_blank" href="https://github.com/tensorflow/text/blob/master/docs/tutorials/text_similarity.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View on GitHub</a></td>
  <td><a href="https://storage.googleapis.com/tensorflow_docs/text/docs/tutorials/text_similarity.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a></td>
</table>

Overview

TensorFlow Text provides a collection of text-metrics-related classes and ops ready to use with TensorFlow 2.0. The library contains implementations of text-similarity metrics such as ROUGE-L, required for automatic evaluation of text generation models. The benefit of using these ops in evaluating your models is that they are compatible with TPU evaluation and work nicely with TF streaming metric APIs.

Setup
!pip install -q "tensorflow-text==2.8.*"

import tensorflow as tf
import tensorflow_text as text
docs/tutorials/text_similarity.ipynb
tensorflow/text
apache-2.0
ROUGE-L

The Rouge-L metric is a score from 0 to 1 indicating how similar two sequences are, based on the length of the longest common subsequence (LCS). In particular, Rouge-L is the weighted harmonic mean (or f-measure) combining the LCS precision (the percentage of the hypothesis sequence covered by the LCS) and the LCS recall (the percentage of the reference sequence covered by the LCS).

Source: https://www.microsoft.com/en-us/research/publication/rouge-a-package-for-automatic-evaluation-of-summaries/

The TF.Text implementation returns the F-measure, Precision, and Recall for each (hypothesis, reference) pair. Consider the following hypothesis/reference pair:
hypotheses = tf.ragged.constant([['captain', 'of', 'the', 'delta', 'flight'],
                                 ['the', '1990', 'transcript']])
references = tf.ragged.constant([['delta', 'air', 'lines', 'flight'],
                                 ['this', 'concludes', 'the', 'transcript']])
docs/tutorials/text_similarity.ipynb
tensorflow/text
apache-2.0
The hypotheses and references are expected to be tf.RaggedTensors of tokens. Tokens are required instead of raw sentences because no single tokenization strategy fits all tasks. Now we can call text.metrics.rouge_l and get our result back:
result = text.metrics.rouge_l(hypotheses, references)
print('F-Measure: %s' % result.f_measure)
print('P-Measure: %s' % result.p_measure)
print('R-Measure: %s' % result.r_measure)
docs/tutorials/text_similarity.ipynb
tensorflow/text
apache-2.0
ROUGE-L has an additional hyperparameter, alpha, which determines the weight of the harmonic mean used for computing the F-Measure. Values closer to 0 treat Recall as more important and values closer to 1 treat Precision as more important. alpha defaults to .5, which corresponds to equal weight for Precision and Recall.
# Compute ROUGE-L with alpha=0
result = text.metrics.rouge_l(hypotheses, references, alpha=0)
print('F-Measure (alpha=0): %s' % result.f_measure)
print('P-Measure (alpha=0): %s' % result.p_measure)
print('R-Measure (alpha=0): %s' % result.r_measure)

# Compute ROUGE-L with alpha=1
result = text.metrics.rouge_l(hypotheses, references, alpha=1)
print('F-Measure (alpha=1): %s' % result.f_measure)
print('P-Measure (alpha=1): %s' % result.p_measure)
print('R-Measure (alpha=1): %s' % result.r_measure)
docs/tutorials/text_similarity.ipynb
tensorflow/text
apache-2.0
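For intuition, here is a hedged pure-Python recomputation of these quantities for the first hypothesis/reference pair. It follows the standard LCS-based definition sketched above; the exact weighting convention inside TF.Text may differ in detail:

```
def lcs_len(a, b):
    # Dynamic-programming longest common subsequence length.
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i - 1][j - 1] + 1 if x == y else max(dp[i - 1][j], dp[i][j - 1])
    return dp[-1][-1]

hyp = ['captain', 'of', 'the', 'delta', 'flight']
ref = ['delta', 'air', 'lines', 'flight']

lcs = lcs_len(hyp, ref)
precision = lcs / len(hyp)   # fraction of the hypothesis covered by the LCS
recall = lcs / len(ref)      # fraction of the reference covered by the LCS

alpha = 0.5                  # equal weighting, as in the default above
f_measure = precision * recall / (alpha * recall + (1 - alpha) * precision)
print(precision, recall, f_measure)   # roughly 0.4, 0.5, 0.444
```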
Load in house sales data

The dataset is from house sales in King County, the region where the city of Seattle, WA is located.
sales = graphlab.SFrame('kc_house_data.gl/')
Regression/examples/week-2-multiple-regression-assignment-1-blank.ipynb
Weenkus/Machine-Learning-University-of-Washington
mit
Split data into training and testing. We use seed=0 so that everyone running this notebook gets the same results. In practice, you may set a random seed (or let GraphLab Create pick a random seed for you).
train_data,test_data = sales.random_split(.8,seed=0)
Regression/examples/week-2-multiple-regression-assignment-1-blank.ipynb
Weenkus/Machine-Learning-University-of-Washington
mit
Learning a multiple regression model

Recall we can learn a multiple regression model predicting 'price' based on the features example_features = ['sqft_living', 'bedrooms', 'bathrooms'] on training data with the following code. (Aside: We set validation_set = None to ensure that the results are always the same.)
example_features = ['sqft_living', 'bedrooms', 'bathrooms']
example_model = graphlab.linear_regression.create(train_data, target = 'price',
                                                  features = example_features,
                                                  validation_set = None)
Regression/examples/week-2-multiple-regression-assignment-1-blank.ipynb
Weenkus/Machine-Learning-University-of-Washington
mit
Now that we have fitted the model we can extract the regression weights (coefficients) as an SFrame as follows:
example_weight_summary = example_model.get("coefficients")
print(example_weight_summary)
Regression/examples/week-2-multiple-regression-assignment-1-blank.ipynb
Weenkus/Machine-Learning-University-of-Washington
mit
Making Predictions

In the gradient descent notebook we used numpy to do our regression. In this notebook we will use existing GraphLab Create functions to analyze multiple regressions. Recall that once a model is built we can use the .predict() function to find the predicted values for data we pass. For example, using the example model above:
example_predictions = example_model.predict(train_data)
print(example_predictions[0])  # should be 271789.505878
Regression/examples/week-2-multiple-regression-assignment-1-blank.ipynb
Weenkus/Machine-Learning-University-of-Washington
mit
Compute RSS

Now that we can make predictions given the model, let's write a function to compute the RSS of the model. Complete the function below to calculate RSS given the model, data, and the outcome.
def get_residual_sum_of_squares(model, data, outcome):
    # First get the predictions
    # Then compute the residuals/errors
    # Then square and add them up
    return(RSS)
Regression/examples/week-2-multiple-regression-assignment-1-blank.ipynb
Weenkus/Machine-Learning-University-of-Washington
mit
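For reference, one possible completion of the function, as a hedged sketch (the assignment intends for you to write this yourself; it relies only on .predict() and SArray arithmetic already used in this notebook):

```
def get_residual_sum_of_squares(model, data, outcome):
    # First get the predictions
    predictions = model.predict(data)
    # Then compute the residuals/errors
    residuals = outcome - predictions
    # Then square and add them up
    RSS = (residuals * residuals).sum()
    return(RSS)
```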
Test your function by computing the RSS on TEST data for the example model:
rss_example_train = get_residual_sum_of_squares(example_model, test_data, test_data['price'])
print(rss_example_train)  # should be 2.7376153833e+14
Regression/examples/week-2-multiple-regression-assignment-1-blank.ipynb
Weenkus/Machine-Learning-University-of-Washington
mit
Create some new features

Although we often think of multiple regression as including multiple different features (e.g. # of bedrooms, squarefeet, and # of bathrooms), we can also consider transformations of existing features, e.g. the log of the squarefeet, or even "interaction" features such as the product of bedrooms and bathrooms. You will use the logarithm function to create a new feature, so first you should import it from the math library.
from math import log
Regression/examples/week-2-multiple-regression-assignment-1-blank.ipynb
Weenkus/Machine-Learning-University-of-Washington
mit
Next create the following 4 new features as columns in both TEST and TRAIN data:
* bedrooms_squared = bedrooms*bedrooms
* bed_bath_rooms = bedrooms*bathrooms
* log_sqft_living = log(sqft_living)
* lat_plus_long = lat + long

As an example, here's the first one:
train_data['bedrooms_squared'] = train_data['bedrooms'].apply(lambda x: x**2)
test_data['bedrooms_squared'] = test_data['bedrooms'].apply(lambda x: x**2)

# create the remaining 3 features in both TEST and TRAIN data
Regression/examples/week-2-multiple-regression-assignment-1-blank.ipynb
Weenkus/Machine-Learning-University-of-Washington
mit
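A hedged sketch of the remaining three features (the notebook expects you to fill these in yourself; column arithmetic and .apply() on SArrays are used the same way as in the cell above):

```
for data in (train_data, test_data):
    data['bed_bath_rooms'] = data['bedrooms'] * data['bathrooms']
    data['log_sqft_living'] = data['sqft_living'].apply(lambda x: log(x))
    data['lat_plus_long'] = data['lat'] + data['long']
```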
Squaring bedrooms will increase the separation between not many bedrooms (e.g. 1) and lots of bedrooms (e.g. 4) since 1^2 = 1 but 4^2 = 16. Consequently this feature will mostly affect houses with many bedrooms. Bedrooms times bathrooms gives what's called an "interaction" feature: it is large when both of them are large. Taking the log of squarefeet has the effect of bringing large values closer together and spreading out small values. Adding latitude to longitude is totally nonsensical, but we will do it anyway (you'll see why).

Quiz Question: What is the mean (arithmetic average) value of your 4 new features on TEST data? (round to 2 digits)

Learning Multiple Models

Now we will learn the weights for three (nested) models for predicting house prices. The first model will have the fewest features, the second model will add one more feature, and the third will add a few more:
* Model 1: squarefeet, # bedrooms, # bathrooms, latitude & longitude
* Model 2: add bedrooms*bathrooms
* Model 3: Add log squarefeet, bedrooms squared, and the (nonsensical) latitude + longitude
model_1_features = ['sqft_living', 'bedrooms', 'bathrooms', 'lat', 'long']
model_2_features = model_1_features + ['bed_bath_rooms']
model_3_features = model_2_features + ['bedrooms_squared', 'log_sqft_living', 'lat_plus_long']
Regression/examples/week-2-multiple-regression-assignment-1-blank.ipynb
Weenkus/Machine-Learning-University-of-Washington
mit
Now that you have the features, learn the weights for the three different models for predicting target = 'price' using graphlab.linear_regression.create() and look at the value of the weights/coefficients:
# Learn the three models: (don't forget to set validation_set = None)

# Examine/extract each model's coefficients:
Regression/examples/week-2-multiple-regression-assignment-1-blank.ipynb
Weenkus/Machine-Learning-University-of-Washington
mit
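A hedged sketch of what the three calls could look like (again with validation_set = None so results stay deterministic; the assignment expects you to write your own version):

```
model_1 = graphlab.linear_regression.create(train_data, target='price',
                                            features=model_1_features, validation_set=None)
model_2 = graphlab.linear_regression.create(train_data, target='price',
                                            features=model_2_features, validation_set=None)
model_3 = graphlab.linear_regression.create(train_data, target='price',
                                            features=model_3_features, validation_set=None)

print(model_1.get("coefficients"))
print(model_2.get("coefficients"))
print(model_3.get("coefficients"))
```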
Quiz Question: What is the sign (positive or negative) for the coefficient/weight for 'bathrooms' in model 1?

Quiz Question: What is the sign (positive or negative) for the coefficient/weight for 'bathrooms' in model 2? Think about what this means.

Comparing multiple models

Now that you've learned three models and extracted the model weights we want to evaluate which model is best. First use your functions from earlier to compute the RSS on TRAINING Data for each of the three models.
# Compute the RSS on TRAINING data for each of the three models and record the values:
Regression/examples/week-2-multiple-regression-assignment-1-blank.ipynb
Weenkus/Machine-Learning-University-of-Washington
mit
Quiz Question: Which model (1, 2 or 3) has lowest RSS on TRAINING Data? Is this what you expected? Now compute the RSS on TEST data for each of the three models.
# Compute the RSS on TESTING data for each of the three models and record the values:
Regression/examples/week-2-multiple-regression-assignment-1-blank.ipynb
Weenkus/Machine-Learning-University-of-Washington
mit
4. Provide one or two visualizations that show the distribution of the sample data. Write one or two sentences noting what you observe about the plot or plots.
plt.hist(
    x=[initialData[CONGRUENT], initialData[INCONGRUENT]],
    normed=False,
    range=(min(initialData[CONGRUENT]), max(initialData[INCONGRUENT])),
    bins=10,
    label='Time to name'
)

plt.hist(
    x=initialData[CONGRUENT],
    normed=False,
    range=(min(initialData[CONGRUENT]), max(initialData[CONGRUENT])),
    bins=10,
    label='Time to name'
)

plt.hist(
    x=initialData[INCONGRUENT],
    normed=False,
    range=(min(initialData[INCONGRUENT]), max(initialData[INCONGRUENT])),
    bins=10,
    label='Time to name',
    color='Green'
)

plt.hist(
    x=dataDifference,
    normed=False,
    range=(min(dataDifference), max(dataDifference)),
    bins=10,
    label='Time to name',
    color='Red'
)
P1/P1_Cassio.ipynb
cassiogreco/udacity-data-analyst-nanodegree
mit
From analyzing the histograms of both the Congruent and Incongruent datasets we can visually see that the Incongruent dataset contains a greater number of higher time-to-name values than the Congruent dataset. This is also evident from the mean values of the two datasets calculated previously (14.051125 and 22.0159166667 for the Congruent and Incongruent datasets, respectively).

5. Now, perform the statistical test and report your results. What is your confidence level and your critical statistic value? Do you reject the null hypothesis or fail to reject it? Come to a conclusion in terms of the experiment task. Did the results match up with your expectations?
degreesOfFreedom = len(initialData[CONGRUENT]) - 1

def standardError(standardDeviation, sampleSize):
    return standardDeviation / math.sqrt(sampleSize)

def getTValue(mean, se):
    return mean / se

se = standardError(standardDeviation(variance(valuesToPower(valuesMinusMean(dataDifference), 2))), len(dataDifference))
tValue = getTValue(differenceMean, se)

def marginOfError(t, standardError):
    return t * standardError

def getConfidenceInterval(mean, t, standardError):
    return (mean - marginOfError(t, standardError), mean + marginOfError(t, standardError))

# The t value falls in the critical region only if it lies beyond either critical value.
inCriticalRegion = tValue >= TCRITICAL or tValue <= -TCRITICAL

print('Degrees of Freedom:', degreesOfFreedom)
print('Standard Error:', se)
print('T Value:', tValue)
print('T Critical Regions: Less than', -TCRITICAL, 'and Greater than', TCRITICAL)
print('Is the T Value inside of the critical region?', inCriticalRegion)
print('Is p < 0.005?', inCriticalRegion)
print('Confidence Interval:', getConfidenceInterval(differenceMean, TCRITICAL, se))
P1/P1_Cassio.ipynb
cassiogreco/udacity-data-analyst-nanodegree
mit
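As a hedged cross-check (not part of the original submission), SciPy's paired t-test should reproduce the hand-computed statistic, assuming initialData is the DataFrame with the two timing columns used above:

```
from scipy import stats

# Paired (dependent-samples) t-test on the same two columns; the statistic
# should match the hand-computed t value above up to sign and rounding.
t_stat, p_value = stats.ttest_rel(initialData[INCONGRUENT], initialData[CONGRUENT])
print('t =', t_stat, 'p =', p_value)
```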
Introduction to Tethne: Working with data from the Web of Science

Now that we have the basics down, in this notebook we'll begin working with data from the JSTOR Data-for-Research (DfR) portal. The JSTOR DfR portal gives researchers access to bibliographic data and N-grams for the entire JSTOR database. Tethne can use DfR data to generate coauthorship networks, and to improve metadata for Web of Science records. Tethne is also able to use N-gram counts to add information to networks, and can interface with MALLET to perform LDA topic modeling.

Methods in Digital & Computational Humanities

This notebook is part of a cluster of learning resources developed by the Laubichler Lab and the Digital Innovation Group at Arizona State University as part of an initiative for digital and computational humanities (d+cH). For more information, see our evolving online methods course at https://diging.atlassian.net/wiki/display/DCH.

Getting Help

Development of the Tethne project is led by Erick Peirson. To get help, first check our issue tracking system on GitHub. There, you can search for questions and problems reported by other users, or ask a question of your own. You can also reach Erick via e-mail at [email protected].

Getting bibliographic data from JSTOR Data-for-Research

For the purpose of this tutorial, you can use the sample dataset from https://www.dropbox.com/s/q2jy87pmy9r6bsa/tethne_workshop_data.zip?dl=0.

Access the DfR portal at http://dfr.jstor.org/ If you don't already have an account, you will need to create a new account. After you've logged in, perform a search using whatever criteria you please. When you have achieved the result that you desire, create a new dataset request. Under the "Dataset Request" menu in the upper-right corner of the page, click "Submit new request".

On the Download Options page, select your desired Data Type. If you do not intend to make use of the contents of the papers themselves, then "Citations Only" is sufficient. Otherwise, choose word counts, bigrams, etc. Output Format should be set to XML. Give your request a title, and set the maximum number of articles. Note that the maximum documents allowed per request is 1,000. Setting Maximum Articles to a value less than the number of search results will yield a random sample of your results.

Your request should now appear in your list of Data Requests. When your request is ready (hours to days later), you will receive an e-mail with a download link. When downloading from the Data Requests list, be sure to use the link in the full dataset column.

When your dataset download is complete, unzip it. The contents should look something like those shown below. citations.XML contains bibliographic data in XML format. The bigrams, trigrams, wordcounts folders contain N-gram counts for each document.
If you were to open one of the XML files in the wordcounts folder, say, you would see some XML that looks like this:

```
<?xml version="1.0" encoding="UTF-8"?>
<article id="10.2307/4330482" >
<wordcount weight="21" > of </wordcount>
<wordcount weight="16" > the </wordcount>
<wordcount weight="10" > university </wordcount>
<wordcount weight="10" > a </wordcount>
<wordcount weight="9" > s </wordcount>
<wordcount weight="9" > d </wordcount>
<wordcount weight="9" > harvard </wordcount>
<wordcount weight="8" > m </wordcount>
<wordcount weight="7" > and </wordcount>
<wordcount weight="6" > u </wordcount>
<wordcount weight="6" > press </wordcount>
<wordcount weight="5" > cambridge </wordcount>
<wordcount weight="5" > massachusetts </wordcount>
<wordcount weight="5" > journal </wordcount>
<wordcount weight="4" > by </wordcount>
...
<wordcount weight="1" > stephen </wordcount>
<wordcount weight="1" > liver </wordcount>
<wordcount weight="1" > committee </wordcount>
<wordcount weight="1" > school </wordcount>
<wordcount weight="1" > lewontin </wordcount>
<wordcount weight="1" > canguilhem </wordcount>
<wordcount weight="1" > assistant </wordcount>
<wordcount weight="1" > jay </wordcount>
<wordcount weight="1" > state </wordcount>
<wordcount weight="1" > morgan </wordcount>
<wordcount weight="1" > advertising </wordcount>
<wordcount weight="1" > animal </wordcount>
<wordcount weight="1" > is </wordcount>
<wordcount weight="1" > species </wordcount>
<wordcount weight="1" > claude </wordcount>
<wordcount weight="1" > review </wordcount>
<wordcount weight="1" > hunt </wordcount>
<wordcount weight="1" > founder </wordcount>
</article>
```

Each word is represented by a &lt;wordcount&gt;&lt;/wordcount&gt; tag. The "weight" attribute gives the number of times that the word occurs in the document, and the word itself is between the tags. We'll come back to this in just a moment.

Parsing DfR datasets

Just as for WoS data, there is a module in tethne.readers for working with DfR data. We can import it with:
from tethne.readers import dfr
2. Working with data from JSTOR Data-for-Research.ipynb
diging/tethne-notebooks
gpl-3.0
Once again, read() accepts a string containing a path to either a single DfR dataset, or a directory containing several. Here, "DfR dataset" refers to the folder containing the file "citations.xml", and the contents of that folder. This will take considerably more time than loading a WoS dataset. The reason is that Tethne automatically detects and parses all of the wordcount data.
dfr_corpus = dfr.read('/Users/erickpeirson/Dropbox/HSS ThatCamp Workshop/sample_data/DfR')
2. Working with data from JSTOR Data-for-Research.ipynb
diging/tethne-notebooks
gpl-3.0
Combining DfR and WoS data

We can combine our datasets using the merge() function. First, we load our WoS data in a separate Corpus:
from tethne.readers import wos
wos_corpus = wos.read('/Users/erickpeirson/Dropbox/HSS ThatCamp Workshop/sample_data/wos')
2. Working with data from JSTOR Data-for-Research.ipynb
diging/tethne-notebooks
gpl-3.0
Both of these datasets are for the Journal of the History of Biology. But note that the WoS and DfR corpora have different numbers of Papers:
len(dfr_corpus), len(wos_corpus)
2. Working with data from JSTOR Data-for-Research.ipynb
diging/tethne-notebooks
gpl-3.0
Then import merge() from tethne.readers:
from tethne.readers import merge
2. Working with data from JSTOR Data-for-Research.ipynb
diging/tethne-notebooks
gpl-3.0
We then create a new Corpus by passing both Corpus objects to merge(). If there is conflicting information in the two corpora, the first Corpus gets priority.
corpus = merge(dfr_corpus, wos_corpus)
2. Working with data from JSTOR Data-for-Research.ipynb
diging/tethne-notebooks
gpl-3.0
merge() has combined data where possible, and discarded any duplicates in the original datasets.
len(corpus)
2. Working with data from JSTOR Data-for-Research.ipynb
diging/tethne-notebooks
gpl-3.0
FeatureSets

Our wordcount data are represented by a FeatureSet. A FeatureSet is a description of how certain sets of elements are distributed across a Corpus. This is kind of like an inversion of an index. For example, we might be interested in which words (elements) are found in which Papers. We can think of authors as a FeatureSet, too. All of the available FeatureSets are stored in the features attribute (a dictionary) of our Corpus. We can see the available FeatureSets by inspecting it:
corpus.features
2. Working with data from JSTOR Data-for-Research.ipynb
diging/tethne-notebooks
gpl-3.0
Note that citations and authors are also FeatureSets. In fact, the majority of network-building functions in Tethne operate on FeatureSets -- including the coauthors() and bibliographic_coupling() functions that we used in the WoS notebook. Each FeatureSet has several attributes. The features attribute contains the distribution data itself. These data themselves are (element, value) tuples. In this case, the elements are words, and the values are wordcounts.
corpus.features['wordcounts'].features.items()[0] # Just show data for the first Paper.
2. Working with data from JSTOR Data-for-Research.ipynb
diging/tethne-notebooks
gpl-3.0
The index contains our "vocabulary":
print 'There are %i words in the wordcounts featureset' % len(corpus.features['wordcounts'].index)
2. Working with data from JSTOR Data-for-Research.ipynb
diging/tethne-notebooks
gpl-3.0
We can use the feature_distribution() method of our Corpus to look at the distribution of words over time. In the example below I used MatPlotLib to visualize the distribution.
import matplotlib.pyplot as plt  # imported here in case it isn't already in scope

plt.figure(figsize=(10, 5))
plt.bar(*corpus.feature_distribution('wordcounts', 'evolutionary'))  # <-- The action.
plt.ylabel('Frequency of the word ``evolutionary`` in this Corpus')
plt.xlabel('Publication Date')
plt.show()
2. Working with data from JSTOR Data-for-Research.ipynb
diging/tethne-notebooks
gpl-3.0
If we add the argument mode='documentCounts', we get the number of documents in which 'evolutionary' occurs.
plt.figure(figsize=(10, 5))
plt.bar(*corpus.feature_distribution('wordcounts', 'evolutionary', mode='documentCounts'))  # <-- The action.
plt.ylabel('Documents containing ``evolutionary``')
plt.xlabel('Publication Date')
plt.show()
2. Working with data from JSTOR Data-for-Research.ipynb
diging/tethne-notebooks
gpl-3.0
Note that we can look at how the documents themselves are distributed using the distribution() method.
plt.figure(figsize=(10, 5))
plt.bar(*corpus.distribution())  # <-- The action.
plt.ylabel('Number of Documents')
plt.xlabel('Publication Date')
plt.show()
2. Working with data from JSTOR Data-for-Research.ipynb
diging/tethne-notebooks
gpl-3.0
So, putting these together, we can normalize our feature_distribution() data to get a sense of the relative use of the word 'evolutionary'.
dates, N_evolution = corpus.feature_distribution('wordcounts', 'evolutionary', mode='documentCounts')
dates, N = corpus.distribution()
# float() guards against integer division under Python 2.
normalized_frequency = [float(f) / N[i] for i, f in enumerate(N_evolution)]

plt.figure(figsize=(10, 5))
plt.bar(dates, normalized_frequency)  # <-- The action.
plt.ylabel('Proportion of documents containing ``evolutionary``')
plt.xlabel('Publication Date')
plt.show()
2. Working with data from JSTOR Data-for-Research.ipynb
diging/tethne-notebooks
gpl-3.0
Topic Modeling with DfR wordcounts

Latent Dirichlet Allocation is a popular approach to discovering latent "topics" in large corpora. Many digital humanists use a software package called MALLET to fit LDA to text data. Tethne uses MALLET to fit LDA topic models.

Before we use LDA, however, we need to do some preprocessing. "Preprocessing" refers to anything that we do to filter or transform our FeatureSet prior to analysis.

Pre-processing

Two important preprocessing steps are:
1. Removing "stopwords" -- common words like "the", "and", "but", "for", that don't yield much insight into the contents of documents.
2. Removing words that are too common or too rare. These include typos or OCR artifacts.

We can do both of these by using the transform() method on our FeatureSet. First, we need a stoplist. NLTK provides a great stoplist.
from nltk.corpus import stopwords
stoplist = stopwords.words()  # no argument returns stop words for all languages NLTK ships; stopwords.words('english') would restrict to English
2. Working with data from JSTOR Data-for-Research.ipynb
diging/tethne-notebooks
gpl-3.0
We then need to define what elements to keep, and what elements to discard. We will use a function that evaluates whether or not a word should be retained. The function should take four arguments: f -- the feature itself (the word) v -- the number of instances of that feature in a specific document c -- the number of instances of that feature in the whole FeatureSet dc -- the number of documents that contain that feature This function will be applied to each word in each document. If it returns 0 or None, the word will be excluded. Otherwise, it should return a numeric value (in this case, the count for that document). In addition to applying the stoplist, we'll also exclude any word that occurs in more than 500 documents or in fewer than 3 documents, or that is fewer than 4 characters long.
def apply_stoplist(f, v, c, dc):
    if f in stoplist or dc > 500 or dc < 3 or len(f) < 4:
        return None  # Discard the element.
    return v
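As a quick sanity check, we can call apply_stoplist() directly on a couple of hand-picked examples (the words and counts here are hypothetical, not taken from the corpus):
print apply_stoplist('the', 12, 5000, 450)       # None -- 'the' is in the stoplist (and shorter than 4 characters)
print apply_stoplist('evolutionism', 2, 40, 25)  # 2 -- kept, so its per-document count passes through unchanged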
2. Working with data from JSTOR Data-for-Research.ipynb
diging/tethne-notebooks
gpl-3.0
We apply the stoplist using the transform() method. FeatureSets are not modified in place; instead, a new FeatureSet is generated that reflects the specified changes. We'll call the new FeatureSet 'wordcounts_filtered'.
corpus.features['wordcounts_filtered'] = corpus.features['wordcounts'].transform(apply_stoplist)
2. Working with data from JSTOR Data-for-Research.ipynb
diging/tethne-notebooks
gpl-3.0
There should be significantly fewer words in our new "wordcounts_filtered" FeatureSet.
print 'There are %i words in the wordcounts featureset' % len(corpus.features['wordcounts'].index)
print 'There are %i words in the wordcounts_filtered featureset' % len(corpus.features['wordcounts_filtered'].index)
2. Working with data from JSTOR Data-for-Research.ipynb
diging/tethne-notebooks
gpl-3.0
The LDA topic model Tethne provides a class called LDAModel. You should be able to import it directly from the tethne package:
from tethne import LDAModel
2. Working with data from JSTOR Data-for-Research.ipynb
diging/tethne-notebooks
gpl-3.0
Now we'll create a new LDAModel for our Corpus. The featureset_name parameter tells the LDAModel which FeatureSet we want to use. We'll use our filtered wordcounts.
model = LDAModel(corpus, featureset_name='wordcounts_filtered')
2. Working with data from JSTOR Data-for-Research.ipynb
diging/tethne-notebooks
gpl-3.0
Next we'll fit the model. We need to tell MALLET how many topics to fit (the hyperparameter Z), and how many iterations (max_iter) to perform. This step may take a little while, depending on the size of your corpus.
model.fit(Z=50, max_iter=500)
2. Working with data from JSTOR Data-for-Research.ipynb
diging/tethne-notebooks
gpl-3.0
You can inspect the inferred topics using the model's print_topics() method. By default, this will print the top ten words for each topic.
model.print_topics()
2. Working with data from JSTOR Data-for-Research.ipynb
diging/tethne-notebooks
gpl-3.0
We can also look at the representation of a topic over time using the topic_over_time() method. In the example below we'll plot the first five topics on the same figure.
plt.figure(figsize=(15, 5))
for k in xrange(5):    # Generates numbers k in [0, 4].
    x, y = model.topic_over_time(k)  # Gets topic number k.
    plt.plot(x, y, label='topic {0}'.format(k), lw=2, alpha=0.7)
plt.legend(loc='best')
plt.show()
2. Working with data from JSTOR Data-for-Research.ipynb
diging/tethne-notebooks
gpl-3.0
Generating networks from topic models The features module in the tethne.networks subpackage contains some useful methods for visualizing topic models as networks. You can import it just like the authors or papers modules.
from tethne.networks import topics
2. Working with data from JSTOR Data-for-Research.ipynb
diging/tethne-notebooks
gpl-3.0
The terms function generates a network of words connected on the basis of shared affinity with a topic. If two words i and j are both associated with a topic z with $\Phi(i|z) \geq 0.01$ and $\Phi(j|z) \geq 0.01$, then an edge is drawn between them.
termGraph = topics.terms(model, threshold=0.01)
termGraph.order(), termGraph.size()
termGraph.name = ''

from tethne.writers.graph import to_graphml
to_graphml(termGraph, '/Users/erickpeirson/Desktop/topic_terms.graphml')
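Tethne's network functions build NetworkX graph objects, so the usual NetworkX operations apply. As a rough look at which terms are most connected, we can rank nodes by degree -- a sketch, assuming termGraph behaves like a networkx.Graph (wrapping degree() in dict() covers both the dict returned by NetworkX 1.x and the view returned by 2.x):
degrees = dict(termGraph.degree())
top_terms = sorted(degrees.items(), key=lambda pair: pair[1], reverse=True)
print top_terms[:10]  # the ten best-connected terms in the graph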
2. Working with data from JSTOR Data-for-Research.ipynb
diging/tethne-notebooks
gpl-3.0
topicCoupling = topics.topic_coupling(model, threshold=0.2)
print '%i nodes and %i edges' % (topicCoupling.order(), topicCoupling.size())
to_graphml(topicCoupling, '/Users/erickpeirson/Desktop/lda_topicCoupling.graphml')
2. Working with data from JSTOR Data-for-Research.ipynb
diging/tethne-notebooks
gpl-3.0
Pandas
import pandas as pd

pand_tmp = pd.DataFrame(data, columns=['x{0}'.format(i) for i in range(data.shape[1])])
pand_tmp.head()
exercises/03_aggregation.ipynb
milroy/Spark-Meetup
mit
What is the row sum?
pand_tmp.sum(axis=1)
exercises/03_aggregation.ipynb
milroy/Spark-Meetup
mit
Column sum?
pand_tmp.sum(axis=0)
pand_tmp.to_csv('numbers.csv', index=False)
exercises/03_aggregation.ipynb
milroy/Spark-Meetup
mit
Spark
import findspark
import os
findspark.init()  # you need that before import pyspark.
import pyspark

sc = pyspark.SparkContext('local[4]', 'pyspark')

lines = sc.textFile('numbers.csv', 18)
for l in lines.take(3):
    print l
lines.take(3)
type(lines.take(1))
exercises/03_aggregation.ipynb
milroy/Spark-Meetup
mit
How do we skip the header? How about using find()? Keep in mind that find() returns an index, not a Boolean: it returns 0 when the match is at the start of the string and -1 when there is no match.
lines = lines.filter(lambda x: x.find('x') != 0)
for l in lines.take(2):
    print l

data = lines.map(lambda x: x.split(','))
data.take(3)
exercises/03_aggregation.ipynb
milroy/Spark-Meetup
mit
Row Sum Cast to integer and sum!
def row_sum(x):
    int_x = map(lambda v: int(v), x)
    return sum(int_x)

data_row_sum = data.map(row_sum)

print data_row_sum.collect()
print data_row_sum.count()
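The same row sums can also be expressed without the named helper, as a single lambda over each row; this is just a more compact sketch of the identical computation:
data_row_sum_alt = data.map(lambda row: sum(int(v) for v in row))
print data_row_sum_alt.take(5)  # should match the first five row sums above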
exercises/03_aggregation.ipynb
milroy/Spark-Meetup
mit
Column Sum This one's a bit trickier, and portends ill for large, complex data sets (like example 5)... Let's enumerate the list comprising each RDD "line" such that each value is indexed by the corresponding column number.
def col_key(x):
    for i, value in enumerate(x):
        yield (i, int(value))

tmp = data.flatMap(col_key)
tmp.take(15)
exercises/03_aggregation.ipynb
milroy/Spark-Meetup
mit
Notice how flatMap works here: col_key is applied to each row and the resulting generators are flattened into a single RDD of (column index, value) pairs, so the first element of each tuple cycles through the column indices row after row.
tmp.take(3)

tmp = tmp.groupByKey()
for i in tmp.take(2):
    print i, type(i)

data_col_sum = tmp.map(lambda x: sum(x[1]))
for i in data_col_sum.take(2):
    print i

print data_col_sum.collect()
print data_col_sum.count()
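groupByKey() ships every (index, value) pair for a key across the network before the sum runs. For a plain sum, reduceByKey() is generally the better-performing choice because it combines values within each partition before the shuffle; here is a sketch of the same column sums built that way (re-deriving the pairs, since tmp was reassigned above):
data_col_sum_alt = data.flatMap(col_key).reduceByKey(lambda a, b: a + b)
print sorted(data_col_sum_alt.collect())  # (column index, column sum) pairs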
exercises/03_aggregation.ipynb
milroy/Spark-Meetup
mit
Column sum with a pyspark.sql DataFrame
from pyspark.sql import SQLContext
sqlContext = SQLContext(sc)
sc

pyspark_df = sqlContext.createDataFrame(pand_tmp)
pyspark_df.take(2)
exercises/03_aggregation.ipynb
milroy/Spark-Meetup
mit
groupBy() without arguments places all rows in a single group, so the aggregation runs over the entire DataFrame
for i in pyspark_df.columns:
    print pyspark_df.groupBy().sum(i).collect()
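The loop above launches one Spark job per column. The same sums can be requested in a single aggregation using pyspark.sql.functions; a sketch:
from pyspark.sql import functions as F

# A single job: sum every column at once.
print pyspark_df.groupBy().agg(*[F.sum(c) for c in pyspark_df.columns]).collect()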
exercises/03_aggregation.ipynb
milroy/Spark-Meetup
mit
Manufacturer import Many report files from various adsorption device manufacturers can be imported directly using pyGAPS. Here are some examples.
cfld = base_path / "commercial"

micromeritics = pgp.isotherm_from_commercial(cfld / "mic" / "Sample_A.xls", 'mic', 'xl')
belsorp_dat = pgp.isotherm_from_commercial(cfld / "bel" / "BF010_DUT-13_CH4_111K_run2.DAT", 'bel', 'dat')
belsorp_xl = pgp.isotherm_from_commercial(cfld / "bel" / "Sample_C.xls", 'bel', 'xl')
threeP_xl = pgp.isotherm_from_commercial(cfld / "3p" / "AC_ref_filter_Ar_87K_run 3_rep.xlsx", '3p', 'xl')
docs/examples/parsing.ipynb
pauliacomi/pyGAPS
mit
AIF Parsing AIF Import Adsorption information files are fully supported in pyGAPS, both for import and export. Isotherms can be imported from an .aif as:
# Import all
isotherms = [pgp.isotherm_from_aif(path) for path in aif_file_paths]

# Display an example file
print(isotherms[1])
docs/examples/parsing.ipynb
pauliacomi/pyGAPS
mit
AIF Export Similarly, an isotherm can be exported as an AIF file or a string, depending on whether a path is passed. For this purpose use either the module-level pygaps.isotherm_to_aif() function or the convenience class method to_aif().
# module function
for isotherm in isotherms:
    filename = f'{isotherm.material} {isotherm.adsorbate} {isotherm.temperature}.aif'
    pgp.isotherm_to_aif(isotherm, base_path / 'aif' / filename)

# save to file with convenience function
isotherms[0].to_aif('isotherm.aif')

# string
isotherm_string = isotherms[0].to_aif()
docs/examples/parsing.ipynb
pauliacomi/pyGAPS
mit
JSON Parsing JSON Import Isotherms can be imported either from a json file or from a json string. The same function is used in both cases.
# Import them
isotherms = [pgp.isotherm_from_json(path) for path in json_file_paths]

# Display an example file
print(isotherms[1])
docs/examples/parsing.ipynb
pauliacomi/pyGAPS
mit
JSON Export Exporting to JSON can be done to a file or a string, depending on whether a path is passed. For this purpose use either the module-level pygaps.isotherm_to_json() function or the convenience class method to_json().
# module function
for isotherm in isotherms:
    filename = f'{isotherm.material} {isotherm.adsorbate} {isotherm.temperature}.json'
    pgp.isotherm_to_json(isotherm, base_path / 'json' / filename)

# save to file with convenience function
isotherms[0].to_json('isotherm.json')

# string
isotherm_string = isotherms[0].to_json()
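Because isotherm_from_json() also accepts a JSON string, export and import compose into a simple round trip, which doubles as a consistency check; a sketch using only the functions shown above:
# Serialise to a JSON string, then parse it back into an isotherm object.
roundtrip = pgp.isotherm_from_json(isotherms[0].to_json())
print(roundtrip)  # should display the same isotherm as isotherms[0]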
docs/examples/parsing.ipynb
pauliacomi/pyGAPS
mit
Excel Parsing Excel does not have to be installed on the system in use. Excel Import
# Import them
isotherms = [pgp.isotherm_from_xl(path) for path in xl_file_paths]

# Display an example file
print(isotherms[1])
isotherms[1].plot()
docs/examples/parsing.ipynb
pauliacomi/pyGAPS
mit
Excel Export
# Export each isotherm in turn
for isotherm in isotherms:
    filename = ' '.join([str(isotherm.material), str(isotherm.adsorbate), str(isotherm.temperature)]) + '.xls'
    pgp.isotherm_to_xl(isotherm, base_path / 'excel' / filename)

# save to file with convenience function
isotherms[0].to_xl('isotherm.xls')
docs/examples/parsing.ipynb
pauliacomi/pyGAPS
mit
CSV Parsing CSV Import Like JSON, isotherms can be imported either from a CSV file or from a CSV string. The same function is used in both cases.
# Import them
isotherms = [pgp.isotherm_from_csv(path) for path in csv_file_paths]

# Display an example file
print(isotherms[0])
docs/examples/parsing.ipynb
pauliacomi/pyGAPS
mit
CSV Export
# Export each isotherm in turn
for isotherm in isotherms:
    filename = ' '.join([str(isotherm.material), str(isotherm.adsorbate), str(isotherm.temperature)]) + '.csv'
    pgp.isotherm_to_csv(isotherm, base_path / 'csv' / filename)

# save to file with convenience function
isotherms[0].to_csv('isotherm.csv')

# string representation
isotherm_string = isotherms[0].to_csv()
docs/examples/parsing.ipynb
pauliacomi/pyGAPS
mit
Compute the $SVD$:
# Compute the singular value decomposition.
U, sigma, Vt = np.linalg.svd(imgmatriz)
MNO/proyecto_final/MNO_2017/proyectos/equipos/equipo_6/avance_22_05_2017/code/Clase_SVD_Imagen.ipynb
csampez/analisis-numerico-computo-cientifico
apache-2.0
Print the resulting matrices $U$, $\Sigma$, $V^T$:
print("U:") print(U) print("sigma:") print(sigma) print("Vt:") print(Vt) #TOTAL DE bytes DEL ARREGLO (solo sigma) sigma.nbytes
MNO/proyecto_final/MNO_2017/proyectos/equipos/equipo_6/avance_22_05_2017/code/Clase_SVD_Imagen.ipynb
csampez/analisis-numerico-computo-cientifico
apache-2.0
Visualize $\Sigma$ as a diagonal matrix:
S = np.zeros(imgmatriz.shape, "float")
S[:min(imgmatriz.shape), :min(imgmatriz.shape)] = np.diag(sigma)
print(S)
MNO/proyecto_final/MNO_2017/proyectos/equipos/equipo_6/avance_22_05_2017/code/Clase_SVD_Imagen.ipynb
csampez/analisis-numerico-computo-cientifico
apache-2.0
Computation and reconstruction: we compute an approximation of the image using the first column of $U$ and the first row of $V^T$; in this rank-1 reconstruction, each column of pixels is a weighting of the same original vector $\vec{u}_1$:
# Rank-1 approximation: first column of U, first singular value, first row of Vt.
reconstimg = np.matrix(U[:, :1]) * np.diag(sigma[:1]) * np.matrix(Vt[:1, :])
plt.figure(figsize=(6,6))
plt.imshow(reconstimg, cmap='gray');

# Reconstruct with 8 and 9 vectors.
for i in range(8, 10):
    reconstimg = np.matrix(U[:, :i]) * np.diag(sigma[:i]) * np.matrix(Vt[:i, :])
    plt.imshow(reconstimg, cmap='gray')
    title = "n = %s" % i
    plt.title(title)
    plt.show()

# Reconstruct in steps of 10 vectors to see when the result becomes similar to the original image...
for i in range(10, 50, 10):
    reconstimg = np.matrix(U[:, :i]) * np.diag(sigma[:i]) * np.matrix(Vt[:i, :])
    plt.imshow(reconstimg, cmap='gray')
    title = "n = %s" % i
    plt.title(title)
    plt.show()
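A quick way to judge how many singular vectors are worth keeping is to look at the cumulative "energy" -- the share of the sum of squared singular values captured by the first n components. A short sketch using only the sigma array computed above (it assumes the image has at least 50 singular values):
energy = sigma**2
cumulative = np.cumsum(energy) / np.sum(energy)
for n in (1, 10, 50):
    print("n = %3d -> %.1f%% of the energy" % (n, 100 * cumulative[n - 1]))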
MNO/proyecto_final/MNO_2017/proyectos/equipos/equipo_6/avance_22_05_2017/code/Clase_SVD_Imagen.ipynb
csampez/analisis-numerico-computo-cientifico
apache-2.0
Reconstruction of the original matrix:
np.dot(U, np.dot(S, Vt))  # note that Vt is used, not V
imgmatriz
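To confirm that the full decomposition reproduces the original matrix up to floating-point error, we can compare the two arrays numerically instead of by eye; a minimal check with NumPy:
print(np.allclose(np.dot(U, np.dot(S, Vt)), imgmatriz))  # expected: True (agreement within floating-point tolerance)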
MNO/proyecto_final/MNO_2017/proyectos/equipos/equipo_6/avance_22_05_2017/code/Clase_SVD_Imagen.ipynb
csampez/analisis-numerico-computo-cientifico
apache-2.0
We will use the class TwoLayerNet in the file cs231n/classifiers/neural_net.py to represent instances of our network. The network parameters are stored in the instance variable self.params where keys are string parameter names and values are numpy arrays. Below, we initialize toy data and a toy model that we will use to develop your implementation.
# Create a small net and some toy data to check your implementations.
# Note that we set the random seed for repeatable experiments.

input_size = 4
hidden_size = 10
num_classes = 3
num_inputs = 5

def init_toy_model():
    np.random.seed(0)
    return TwoLayerNet(input_size, hidden_size, num_classes, std=1e-1)

def init_toy_data():
    np.random.seed(1)
    X = 10 * np.random.randn(num_inputs, input_size)
    y = np.array([0, 1, 2, 2, 1])
    return X, y

net = init_toy_model()
X, y = init_toy_data()
assignment1/two_layer_net.ipynb
zlpure/CS231n
mit
Forward pass: compute scores Open the file cs231n/classifiers/neural_net.py and look at the method TwoLayerNet.loss. This function is very similar to the loss functions you have written for the SVM and Softmax exercises: It takes the data and weights and computes the class scores, the loss, and the gradients on the parameters. Implement the first part of the forward pass which uses the weights and biases to compute the scores for all inputs.
scores = net.loss(X)
print 'Your scores:'
print scores
print
print 'correct scores:'
correct_scores = np.asarray([
  [-0.81233741, -1.27654624, -0.70335995],
  [-0.17129677, -1.18803311, -0.47310444],
  [-0.51590475, -1.01354314, -0.8504215 ],
  [-0.15419291, -0.48629638, -0.52901952],
  [-0.00618733, -0.12435261, -0.15226949]])
print correct_scores
print

# The difference should be very small. We get < 1e-7
print 'Difference between your scores and correct scores:'
print np.sum(np.abs(scores - correct_scores))
assignment1/two_layer_net.ipynb
zlpure/CS231n
mit
Forward pass: compute loss In the same function, implement the second part that computes the data and regularization loss.
loss, _ = net.loss(X, y, reg=0.1)
correct_loss = 1.30378789133

# should be very small, we get < 1e-12
print 'Difference between your loss and correct loss:'
print np.sum(np.abs(loss - correct_loss))
assignment1/two_layer_net.ipynb
zlpure/CS231n
mit
Backward pass Implement the rest of the function. This will compute the gradient of the loss with respect to the variables W1, b1, W2, and b2. Now that you (hopefully!) have a correctly implemented forward pass, you can debug your backward pass using a numeric gradient check:
from cs231n.gradient_check import eval_numerical_gradient

# Use numeric gradient checking to check your implementation of the backward pass.
# If your implementation is correct, the difference between the numeric and
# analytic gradients should be less than 1e-8 for each of W1, W2, b1, and b2.

loss, grads = net.loss(X, y, reg=0.1)

# these should all be less than 1e-8 or so
for param_name in grads:
    f = lambda W: net.loss(X, y, reg=0.1)[0]
    param_grad_num = eval_numerical_gradient(f, net.params[param_name], verbose=False)
    print '%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name]))
assignment1/two_layer_net.ipynb
zlpure/CS231n
mit
Train the network To train the network we will use stochastic gradient descent (SGD), similar to the SVM and Softmax classifiers. Look at the function TwoLayerNet.train and fill in the missing sections to implement the training procedure. This should be very similar to the training procedure you used for the SVM and Softmax classifiers. You will also have to implement TwoLayerNet.predict, as the training process periodically performs prediction to keep track of accuracy over time while the network trains. Once you have implemented the method, run the code below to train a two-layer network on toy data. You should achieve a training loss less than 0.2.
net = init_toy_model()
stats = net.train(X, y, X, y,
                  learning_rate=1e-1, reg=1e-5,
                  num_iters=100, verbose=False)

print 'Final training loss: ', stats['loss_history'][-1]

# plot the loss history
plt.plot(stats['loss_history'])
plt.xlabel('iteration')
plt.ylabel('training loss')
plt.title('Training Loss history')
plt.show()
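Besides watching the loss fall, it is worth checking the predictions on the toy data directly. This sketch assumes TwoLayerNet.predict(X) returns an array of predicted class labels, as described above:
train_acc = (net.predict(X) == y).mean()
print 'Toy-data training accuracy: ', train_acc  # should be high once the loss drops below ~0.2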
assignment1/two_layer_net.ipynb
zlpure/CS231n
mit