Make predictions for a genetic sequence
model = Enformer(model_path) fasta_extractor = FastaStringExtractor(fasta_file) # @title Make predictions for a genomic example interval target_interval = kipoiseq.Interval('chr11', 35_082_742, 35_197_430) # @param sequence_one_hot = one_hot_encode(fasta_extractor.extract(target_interval.resize(SEQUENCE_LENGTH))) predictions = model.predict_on_batch(sequence_one_hot[np.newaxis])['human'][0] # @title Plot tracks tracks = {'DNASE:CD14-positive monocyte female': predictions[:, 41], 'DNASE:keratinocyte female': predictions[:, 42], 'CHIP:H3K27ac:keratinocyte female': predictions[:, 706], 'CAGE:Keratinocyte - epidermal': np.log10(1 + predictions[:, 4799])} plot_tracks(tracks, target_interval)
enformer/enformer-usage.ipynb
deepmind/deepmind-research
apache-2.0
Contribution scores example
# @title Compute contribution scores target_interval = kipoiseq.Interval('chr12', 54_223_589, 54_338_277) # @param sequence_one_hot = one_hot_encode(fasta_extractor.extract(target_interval.resize(SEQUENCE_LENGTH))) predictions = model.predict_on_batch(sequence_one_hot[np.newaxis])['human'][0] target_mask = np.zeros_like(predictions) for idx in [447, 448, 449]: target_mask[idx, 4828] = 1 target_mask[idx, 5111] = 1 # This will take some time since tf.function needs to get compiled. contribution_scores = model.contribution_input_grad(sequence_one_hot.astype(np.float32), target_mask).numpy() pooled_contribution_scores = tf.nn.avg_pool1d(np.abs(contribution_scores)[np.newaxis, :, np.newaxis], 128, 128, 'VALID')[0, :, 0].numpy()[1088:-1088] tracks = {'CAGE predictions': predictions[:, 4828], 'Enformer gradient*input': np.minimum(pooled_contribution_scores, 0.03)} plot_tracks(tracks, target_interval);
enformer/enformer-usage.ipynb
deepmind/deepmind-research
apache-2.0
Variant scoring example
# @title Score the variant variant = kipoiseq.Variant('chr16', 57025062, 'C', 'T', id='rs11644125') # @param # Center the interval at the variant interval = kipoiseq.Interval(variant.chrom, variant.start, variant.start).resize(SEQUENCE_LENGTH) seq_extractor = kipoiseq.extractors.VariantSeqExtractor(reference_sequence=fasta_extractor) center = interval.center() - interval.start reference = seq_extractor.extract(interval, [], anchor=center) alternate = seq_extractor.extract(interval, [variant], anchor=center) # Make predictions for the reference and alternate allele reference_prediction = model.predict_on_batch(one_hot_encode(reference)[np.newaxis])['human'][0] alternate_prediction = model.predict_on_batch(one_hot_encode(alternate)[np.newaxis])['human'][0] # @title Visualize some tracks variant_track = np.zeros_like(reference_prediction[:, 0], dtype=bool) variant_track[variant_track.shape[0] // 2] = True tracks = {'variant': variant_track, 'CAGE/neutrophils ref': reference_prediction[:, 4767], 'CAGE/neutrophils alt-ref': alternate_prediction[:, 4767] - reference_prediction[:, 4767], 'CHIP:H3K27ac:neutrophil ref': reference_prediction[:, 2280], 'CHIP:H3K27ac:neutrophil alt-ref': alternate_prediction[:, 2280] - reference_prediction[:, 2280], } plot_tracks(tracks, interval.resize(reference_prediction.shape[0] * 128), height=1)
enformer/enformer-usage.ipynb
deepmind/deepmind-research
apache-2.0
Score variants in a VCF file: report the top 20 PCs
enformer_score_variants = EnformerScoreVariantsPCANormalized(model_path, transform_path, num_top_features=20) # Score the first 5 variants from ClinVar # Lower-dimensional scores (20 PCs) it = variant_centered_sequences(clinvar_vcf, sequence_length=SEQUENCE_LENGTH, gzipped=True, chr_prefix='chr') example_list = [] for i, example in enumerate(it): if i >= 5: break variant_scores = enformer_score_variants.predict_on_batch( {k: v[tf.newaxis] for k,v in example['inputs'].items()})[0] variant_scores = {f'PC{i}': score for i, score in enumerate(variant_scores)} example_list.append({**example['metadata'], **variant_scores}) if i % 2 == 0: print(f'Done {i}') df = pd.DataFrame(example_list) df
enformer/enformer-usage.ipynb
deepmind/deepmind-research
apache-2.0
Report all 5,313 features (z-score normalized)
enformer_score_variants_all = EnformerScoreVariantsNormalized(model_path, transform_path) # Score the first 5 variants from ClinVar # All Scores it = variant_centered_sequences(clinvar_vcf, sequence_length=SEQUENCE_LENGTH, gzipped=True, chr_prefix='chr') example_list = [] for i, example in enumerate(it): if i >= 5: break variant_scores = enformer_score_variants_all.predict_on_batch( {k: v[tf.newaxis] for k,v in example['inputs'].items()})[0] variant_scores = {f'{i}_{name[:20]}': score for i, (name, score) in enumerate(zip(df_targets.description, variant_scores))} example_list.append({**example['metadata'], **variant_scores}) if i % 2 == 0: print(f'Done {i}') df = pd.DataFrame(example_list) df
enformer/enformer-usage.ipynb
deepmind/deepmind-research
apache-2.0
https://vincentarelbundock.github.io/Rdatasets/doc/cluster/plantTraits.html

Usage: `data(plantTraits)`

Format: a data frame with 136 observations on the following 31 variables.

- pdias: Diaspore mass (mg)
- longindex: Seed bank longevity
- durflow: Flowering duration
- height: Plant height, an ordered factor with levels 1 < 2 < ... < 8.
- begflow: Time of first flowering, an ordered factor with levels 1 < 2 < 3 < 4 < 5 < 6 < 7 < 8 < 9.
- mycor: Mycorrhizas, an ordered factor with levels 0 never < 1 sometimes < 2 always.
- vegaer: aerial vegetative propagation, an ordered factor with levels 0 never < 1 present but limited < 2 important.
- vegsout: underground vegetative propagation, an ordered factor with 3 levels identical to vegaer above.
- autopoll: selfing pollination, an ordered factor with levels 0 never < 1 rare < 2 often < 3 the rule.
- insects: insect pollination, an ordered factor with 5 levels 0 < ... < 4.
- wind: wind pollination, an ordered factor with 5 levels 0 < ... < 4.
- lign: a binary factor with levels 0:1, indicating if the plant is woody.
- piq: a binary factor indicating if the plant is thorny.
- ros: a binary factor indicating if the plant is a rosette.
- semiros: semi-rosette plant, a binary factor (0: no; 1: yes).
- leafy: leafy plant, a binary factor.
- suman: summer annual, a binary factor.
- winan: winter annual, a binary factor.
- monocarp: monocarpic perennial, a binary factor.
- polycarp: polycarpic perennial, a binary factor.
- seasaes: seasonal aestival leaves, a binary factor.
- seashiv: seasonal hibernal leaves, a binary factor.
- seasver: seasonal vernal leaves, a binary factor.
- everalw: leaves always evergreen, a binary factor.
- everparti: leaves partially evergreen, a binary factor.
- elaio: fruits with an elaiosome (dispersed by ants), a binary factor.
- endozoo: endozoochorous fruits, a binary factor.
- epizoo: epizoochorous fruits, a binary factor.
- aquat: aquatic dispersal fruits, a binary factor.
- windgl: wind-dispersed fruits, a binary factor.
- unsp: unspecialized mechanism of seed dispersal, a binary factor.
clusdf = clusdf.drop("Unnamed: 0", axis=1) clusdf.head() clusdf.info() #missing values clusdf.apply(lambda x: sum(x.isnull().values), axis = 0) clusdf.head(20) clusdf=clusdf.fillna(clusdf.mean())
Clustering.ipynb
poethacker/hello
apache-2.0
To measure the quality of clustering results, there are two kinds of validity indices: external indices and internal indices. An external index is a measure of agreement between two partitions where the first partition is the a priori known clustering structure, and the second results from the clustering procedure (Dudoit et al., 2002). Internal indices are used to measure the goodness of a clustering structure without external information (Tseng et al., 2005).
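Before applying clustering to the plantTraits data below, here is a minimal, hedged sketch contrasting the two kinds of indices on synthetic blobs (not the plantTraits frame): an external index such as the adjusted Rand index needs the known labels, while an internal index such as the silhouette coefficient needs only the data and the fitted cluster labels.

```python
# Sketch only: an external index needs ground-truth labels, an internal
# index needs only the data and the fitted cluster labels.
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs
from sklearn.metrics import adjusted_rand_score, silhouette_score

X, y_true = make_blobs(n_samples=300, centers=4, random_state=0)
labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(X)

print("external (adjusted Rand):", adjusted_rand_score(y_true, labels))
print("internal (silhouette):  ", silhouette_score(X, labels))
```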
from sklearn.decomposition import PCA from sklearn.preprocessing import scale clusdf_scale = scale(clusdf) n_samples, n_features = clusdf_scale.shape n_samples, n_features reduced_data = PCA(n_components=2).fit_transform(clusdf_scale) #assuming height to be Y variable to be predicted #n_digits = len(np.unique(clusdf.height)) #From R Cluster sizes: #[1] "26 29 5 32" n_digits=4 kmeans = KMeans(init='k-means++', n_clusters=n_digits, n_init=10) kmeans.fit(reduced_data) clusdf.head(20) # Plot the decision boundary. For that, we will assign a color to each h=0.02 x_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1 y_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) # Obtain labels for each point in mesh. Use last trained model. Z = kmeans.predict(np.c_[xx.ravel(), yy.ravel()]) # Put the result into a color plot Z = Z.reshape(xx.shape) plt.figure(1) plt.clf() plt.imshow(Z, interpolation='nearest', extent=(xx.min(), xx.max(), yy.min(), yy.max()), cmap=plt.cm.Paired, aspect='auto', origin='lower') plt.plot(reduced_data[:, 0], reduced_data[:, 1], 'k.', markersize=2) # Plot the centroids as a white X centroids = kmeans.cluster_centers_ plt.scatter(centroids[:, 0], centroids[:, 1], marker='x', s=169, linewidths=3, color='w', zorder=10) plt.title('K-means clustering on the digits dataset (PCA-reduced data)\n' 'Centroids are marked with white cross') plt.xlim(x_min, x_max) plt.ylim(y_min, y_max) plt.xticks(()) plt.yticks(()) plt.show() kmeans = KMeans(n_clusters=4, random_state=0).fit(reduced_data) kmeans.labels_ np.unique(kmeans.labels_, return_counts=True) import matplotlib.pyplot as plt %matplotlib inline plt.hist(kmeans.labels_) plt.show() kmeans.cluster_centers_ metrics.silhouette_score(reduced_data, kmeans.labels_, metric='euclidean')
Clustering.ipynb
poethacker/hello
apache-2.0
Given the knowledge of the ground-truth class assignments labels_true and our clustering algorithm's assignments of the same samples labels_pred, MI-based measures quantify the agreement between the two assignments. Drawbacks: contrary to inertia, MI-based measures require knowledge of the ground-truth classes, which is almost never available in practice or requires manual assignment by human annotators (as in the supervised learning setting). Hierarchical clustering Hierarchical clustering is a general family of clustering algorithms that build nested clusters by merging or splitting them successively. This hierarchy of clusters is represented as a tree (or dendrogram). The root of the tree is the unique cluster that gathers all the samples, the leaves being the clusters with only one sample. The AgglomerativeClustering object performs a hierarchical clustering using a bottom-up approach: each observation starts in its own cluster, and clusters are successively merged together. The linkage criterion determines the metric used for the merge strategy: Ward minimizes the sum of squared differences within all clusters. It is a variance-minimizing approach and in this sense is similar to the k-means objective function but tackled with an agglomerative hierarchical approach. Maximum or complete linkage minimizes the maximum distance between observations of pairs of clusters. Average linkage minimizes the average of the distances between all observations of pairs of clusters. Single linkage minimizes the distance between the closest observations of pairs of clusters.
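As a hedged sketch of how those four linkage criteria are selected in scikit-learn (on synthetic blobs rather than the PCA-reduced plantTraits data used in the next cell):

```python
# Sketch: selecting each of the four linkage criteria described above.
import numpy as np
from sklearn.cluster import AgglomerativeClustering
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=200, centers=3, random_state=0)
for linkage in ("ward", "complete", "average", "single"):
    model = AgglomerativeClustering(n_clusters=3, linkage=linkage).fit(X)
    print(linkage, "cluster sizes:", np.bincount(model.labels_))
```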
clustering = AgglomerativeClustering(n_clusters=4).fit(reduced_data) clustering clustering.labels_ np.unique(clustering.labels_, return_counts=True) from scipy.cluster.hierarchy import dendrogram, linkage Z = linkage(reduced_data) dendrogram(Z) #dn1 = hierarchy.dendrogram(Z, ax=axes[0], above_threshold_color='y',orientation='top') plt.show() metrics.silhouette_score(reduced_data, clustering.labels_, metric='euclidean')
Clustering.ipynb
poethacker/hello
apache-2.0
DBSCAN The DBSCAN algorithm views clusters as areas of high density separated by areas of low density. Due to this rather generic view, clusters found by DBSCAN can be any shape, as opposed to k-means which assumes that clusters are convex shaped. The central component to the DBSCAN is the concept of core samples, which are samples that are in areas of high density. A cluster is therefore a set of core samples, each close to each other (measured by some distance measure) and a set of non-core samples that are close to a core sample (but are not themselves core samples). There are two parameters to the algorithm, min_samples and eps, which define formally what we mean when we say dense. Higher min_samples or lower eps indicate higher density necessary to form a cluster. More formally, we define a core sample as being a sample in the dataset such that there exist min_samples other samples within a distance of eps, which are defined as neighbors of the core sample. This tells us that the core sample is in a dense area of the vector space. A cluster is a set of core samples that can be built by recursively taking a core sample, finding all of its neighbors that are core samples, finding all of their neighbors that are core samples, and so on. A cluster also has a set of non-core samples, which are samples that are neighbors of a core sample in the cluster but are not themselves core samples. Intuitively, these samples are on the fringes of a cluster.
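The cell below runs DBSCAN with its default settings; as a hedged sketch, the two parameters and the core-sample bookkeeping described above can be made explicit like this (shown on synthetic two-moons data, with eps and min_samples values chosen only for illustration):

```python
# Sketch: explicit eps / min_samples, plus the core-sample bookkeeping.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

X, _ = make_moons(n_samples=200, noise=0.05, random_state=0)
db = DBSCAN(eps=0.3, min_samples=5).fit(X)

n_clusters = len(set(db.labels_)) - (1 if -1 in db.labels_ else 0)
print("clusters:", n_clusters)
print("core samples:", len(db.core_sample_indices_))
print("noise points:", int(np.sum(db.labels_ == -1)))
```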
db = DBSCAN().fit(reduced_data) db db.labels_ clusdf.shape reduced_data.shape reduced_data[:10,:2] for i in range(0, reduced_data.shape[0]): if db.labels_[i] == 0: c1 = plt.scatter(reduced_data[i,0],reduced_data[i,1],c='r',marker='+') elif db.labels_[i] == 1: c2 = plt.scatter(reduced_data[i,0],reduced_data[i,1],c='g',marker='o') elif db.labels_[i] == -1:c3 = plt.scatter(reduced_data[i,0],reduced_data[i,1],c='b',marker='*') plt.legend([c1, c2, c3], ['Cluster 1', 'Cluster 2','Noise']) plt.title('DBSCAN finds 2 clusters and noise') plt.show()
Clustering.ipynb
poethacker/hello
apache-2.0
Gaussian mixture models a mixture model is a probabilistic model for representing the presence of subpopulations within an overall population, without requiring that an observed data set should identify the sub-population to which an individual observation belongs. Formally a mixture model corresponds to the mixture distribution that represents the probability distribution of observations in the overall population. However, while problems associated with "mixture distributions" relate to deriving the properties of the overall population from those of the sub-populations, "mixture models" are used to make statistical inferences about the properties of the sub-populations given only observations on the pooled population, without sub-population identity information. sklearn.mixture is a package which enables one to learn Gaussian Mixture Models (diagonal, spherical, tied and full covariance matrices supported), sample them, and estimate them from data. Facilities to help determine the appropriate number of components are also provided. A Gaussian mixture model is a probabilistic model that assumes all the data points are generated from a mixture of a finite number of Gaussian distributions with unknown parameters. One can think of mixture models as generalizing k-means clustering to incorporate information about the covariance structure of the data as well as the centers of the latent Gaussians. Scikit-learn implements different classes to estimate Gaussian mixture models, that correspond to different estimation strategies. cite- https://jakevdp.github.io/PythonDataScienceHandbook/05.12-gaussian-mixtures.html
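One caveat about the cell below: it uses the older sklearn.mixture.GMM class (and its covars_ attribute), which was deprecated and later removed from scikit-learn; the modern equivalent is GaussianMixture with covariances_. A minimal sketch with the current API, on synthetic data rather than reduced_data, might look like this:

```python
# Sketch with the current scikit-learn API: GaussianMixture replaces GMM,
# and covariances_ replaces covars_. Synthetic blobs are used here.
from sklearn.datasets import make_blobs
from sklearn.mixture import GaussianMixture

X, _ = make_blobs(n_samples=300, centers=4, random_state=42)
gmm = GaussianMixture(n_components=4, covariance_type='full',
                      random_state=42).fit(X)

labels = gmm.predict(X)        # hard assignments
probs = gmm.predict_proba(X)   # soft, probabilistic assignments
print(probs[:5].round(3))
print("covariances shape:", gmm.covariances_.shape)
```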
%matplotlib inline import matplotlib.pyplot as plt import seaborn as sns; sns.set() import numpy as np clusdf.head() reduced_data # Plot the data with K Means Labels from sklearn.cluster import KMeans kmeans = KMeans(4, random_state=0) labels = kmeans.fit(reduced_data).predict(reduced_data) plt.scatter(reduced_data[:, 0], reduced_data[:, 1], c=labels, s=40, cmap='viridis'); X=reduced_data from sklearn.cluster import KMeans from scipy.spatial.distance import cdist def plot_kmeans(kmeans, X, n_clusters=4, rseed=0, ax=None): labels = kmeans.fit_predict(X) # plot the input data ax = ax or plt.gca() ax.axis('equal') ax.scatter(X[:, 0], X[:, 1], c=labels, s=40, cmap='viridis', zorder=2) # plot the representation of the KMeans model centers = kmeans.cluster_centers_ radii = [cdist(X[labels == i], [center]).max() for i, center in enumerate(centers)] for c, r in zip(centers, radii): ax.add_patch(plt.Circle(c, r, fc='#CCCCCC', lw=3, alpha=0.5, zorder=1)) kmeans = KMeans(n_clusters=4, random_state=0) plot_kmeans(kmeans, X) rng = np.random.RandomState(13) X_stretched = np.dot(X, rng.randn(2, 2)) kmeans = KMeans(n_clusters=4, random_state=0) plot_kmeans(kmeans, X_stretched) from sklearn.mixture import GMM gmm = GMM(n_components=4).fit(X) labels = gmm.predict(X) plt.scatter(X[:, 0], X[:, 1], c=labels, s=40, cmap='viridis'); probs = gmm.predict_proba(X) print(probs[:5].round(3)) size = 50 * probs.max(1) ** 2 # square emphasizes differences plt.scatter(X[:, 0], X[:, 1], c=labels, cmap='viridis', s=size); from matplotlib.patches import Ellipse def draw_ellipse(position, covariance, ax=None, **kwargs): """Draw an ellipse with a given position and covariance""" ax = ax or plt.gca() # Convert covariance to principal axes if covariance.shape == (2, 2): U, s, Vt = np.linalg.svd(covariance) angle = np.degrees(np.arctan2(U[1, 0], U[0, 0])) width, height = 2 * np.sqrt(s) else: angle = 0 width, height = 2 * np.sqrt(covariance) # Draw the Ellipse for nsig in range(1, 4): ax.add_patch(Ellipse(position, nsig * width, nsig * height, angle, **kwargs)) def plot_gmm(gmm, X, label=True, ax=None): ax = ax or plt.gca() labels = gmm.fit(X).predict(X) if label: ax.scatter(X[:, 0], X[:, 1], c=labels, s=40, cmap='viridis', zorder=2) else: ax.scatter(X[:, 0], X[:, 1], s=40, zorder=2) ax.axis('equal') w_factor = 0.2 / gmm.weights_.max() for pos, covar, w in zip(gmm.means_, gmm.covars_, gmm.weights_): draw_ellipse(pos, covar, alpha=w * w_factor) gmm = GMM(n_components=4, random_state=42) plot_gmm(gmm, X) gmm = GMM(n_components=4, covariance_type='full', random_state=42) plot_gmm(gmm, X_stretched) from sklearn.datasets import make_moons Xmoon, ymoon = make_moons(200, noise=.05, random_state=0) plt.scatter(Xmoon[:, 0], Xmoon[:, 1]); gmm2 = GMM(n_components=2, covariance_type='full', random_state=0) plot_gmm(gmm2, Xmoon) gmm16 = GMM(n_components=16, covariance_type='full', random_state=0) plot_gmm(gmm16, Xmoon, label=False)
Clustering.ipynb
poethacker/hello
apache-2.0
A mixture of 16 Gaussians serves not to find separated clusters of data, but rather to model the overall distribution of the input data.
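Because such a mixture is a density model, it can also generate new samples. A short sketch, assuming the two-moons array Xmoon from the previous cell and using the current GaussianMixture API rather than the deprecated GMM class:

```python
# Sketch: a fitted mixture is a density model, so it can generate new data.
# Assumes Xmoon from the previous cell; uses GaussianMixture rather than
# the deprecated GMM class.
import matplotlib.pyplot as plt
from sklearn.mixture import GaussianMixture

gmm16 = GaussianMixture(n_components=16, covariance_type='full',
                        random_state=0).fit(Xmoon)
Xnew, _ = gmm16.sample(400)  # draw 400 points from the fitted density

plt.scatter(Xmoon[:, 0], Xmoon[:, 1], s=10, label='original')
plt.scatter(Xnew[:, 0], Xnew[:, 1], s=10, alpha=0.5, label='sampled')
plt.legend()
plt.show()
```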
%matplotlib inline n_components = np.arange(1, 21) models = [GMM(n, covariance_type='full', random_state=0).fit(Xmoon) for n in n_components] plt.plot(n_components, [m.bic(Xmoon) for m in models], label='BIC') plt.plot(n_components, [m.aic(Xmoon) for m in models], label='AIC') plt.legend(loc='best') plt.xlabel('n_components') plt.show()
Clustering.ipynb
poethacker/hello
apache-2.0
The optimal number of clusters is the value that minimizes the AIC or BIC, depending on which approximation we wish to use. Here it is 8. BIRCH The Birch algorithm (Balanced Iterative Reducing and Clustering using Hierarchies) builds a tree called the Characteristic Feature Tree (CFT) for the given data. The data is essentially lossy-compressed to a set of Characteristic Feature nodes (CF Nodes). The CF Nodes have a number of subclusters called Characteristic Feature subclusters (CF Subclusters), and these CF Subclusters located in the non-terminal CF Nodes can have CF Nodes as children. The CF Subclusters hold the necessary information for clustering, which removes the need to hold the entire input data in memory. This information includes: the number of samples in a subcluster; the Linear Sum - an n-dimensional vector holding the sum of all samples; the Squared Sum - the sum of the squared L2 norm of all samples; the Centroids - to avoid recalculation, computed as linear sum / n_samples; and the squared norm of the centroids. It is a memory-efficient, online-learning algorithm provided as an alternative to MiniBatchKMeans. It constructs a tree data structure with the cluster centroids being read off the leaf. These can be either the final cluster centroids or can be provided as input to another clustering algorithm such as AgglomerativeClustering, as in the sketch below.
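A hedged sketch of that last idea, using Birch as an online reducer whose subcluster centroids are then clustered globally (synthetic data, parameter values for illustration only):

```python
# Sketch: Birch as an online reducer; its subcluster centroids are then
# clustered globally with AgglomerativeClustering. Synthetic data only.
from sklearn.cluster import AgglomerativeClustering, Birch
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=1000, centers=4, random_state=0)

reducer = Birch(threshold=0.5, branching_factor=50, n_clusters=None).fit(X)
centroids = reducer.subcluster_centers_
print("reduced", len(X), "samples to", len(centroids), "subcluster centroids")

global_labels = AgglomerativeClustering(n_clusters=4).fit_predict(centroids)
print("global cluster labels for the centroids:", set(global_labels))
```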
from sklearn.cluster import Birch X = reduced_data brc = Birch(branching_factor=50, n_clusters=None, threshold=0.5,compute_labels=True) brc.fit(X) brc.predict(X) labels = brc.predict(X) plt.scatter(reduced_data[:, 0], reduced_data[:, 1], c=labels, s=40, cmap='viridis'); plt.show()
Clustering.ipynb
poethacker/hello
apache-2.0
# Mini Batch K-Means The MiniBatchKMeans is a variant of the KMeans algorithm which uses mini-batches to reduce the computation time, while still attempting to optimise the same objective function. Mini-batches are subsets of the input data, randomly sampled in each training iteration. These mini-batches drastically reduce the amount of computation required to converge to a local solution. In contrast to other algorithms that reduce the convergence time of k-means, mini-batch k-means produces results that are generally only slightly worse than the standard algorithm. The algorithm iterates between two major steps, similar to vanilla k-means. In the first step, samples are drawn randomly from the dataset, to form a mini-batch. These are then assigned to the nearest centroid. In the second step, the centroids are updated. In contrast to k-means, this is done on a per-sample basis.
from sklearn.cluster import MiniBatchKMeans import numpy as np X = reduced_data # manually fit on batches kmeans = MiniBatchKMeans(n_clusters=2,random_state=0,batch_size=6) kmeans = kmeans.partial_fit(X[0:6,:]) kmeans = kmeans.partial_fit(X[6:12,:]) kmeans.cluster_centers_ kmeans.predict(X) # fit on the whole data kmeans = MiniBatchKMeans(n_clusters=4,random_state=0,batch_size=6,max_iter=10).fit(X) kmeans.cluster_centers_ kmeans.predict(X) # Plot the decision boundary. For that, we will assign a color to each h=0.02 x_min, x_max = reduced_data[:, 0].min() - 1, reduced_data[:, 0].max() + 1 y_min, y_max = reduced_data[:, 1].min() - 1, reduced_data[:, 1].max() + 1 xx, yy = np.meshgrid(np.arange(x_min, x_max, h), np.arange(y_min, y_max, h)) # Obtain labels for each point in mesh. Use last trained model. Z = kmeans.predict(np.c_[xx.ravel(), yy.ravel()]) # Put the result into a color plot Z = Z.reshape(xx.shape) plt.figure(1) plt.clf() plt.imshow(Z, interpolation='nearest', extent=(xx.min(), xx.max(), yy.min(), yy.max()), cmap=plt.cm.Paired, aspect='auto', origin='lower') plt.plot(reduced_data[:, 0], reduced_data[:, 1], 'k.', markersize=2) # Plot the centroids as a white X centroids = kmeans.cluster_centers_ plt.scatter(centroids[:, 0], centroids[:, 1], marker='x', s=169, linewidths=3, color='w', zorder=10) plt.title('K-means clustering on the digits dataset (PCA-reduced data)\n' 'Centroids are marked with white cross') plt.xlim(x_min, x_max) plt.ylim(y_min, y_max) plt.xticks(()) plt.yticks(()) plt.show()
Clustering.ipynb
poethacker/hello
apache-2.0
Mean Shift MeanShift clustering, using a flat kernel, aims to discover “blobs” in a smooth density of samples. It is a centroid-based algorithm, which works by updating candidates for centroids to be the mean of the points within a given region. These candidates are then filtered in a post-processing stage to eliminate near-duplicates to form the final set of centroids. Seeding is performed using a binning technique for scalability.
print(__doc__) import numpy as np from sklearn.cluster import MeanShift, estimate_bandwidth from sklearn.datasets.samples_generator import make_blobs # ############################################################################# # Generate sample data centers = [[1, 1], [-1, -1], [1, -1]] X = reduced_data # ############################################################################# # Compute clustering with MeanShift # The following bandwidth can be automatically detected using bandwidth = estimate_bandwidth(X, quantile=0.2, n_samples=500) ms = MeanShift(bandwidth=bandwidth, bin_seeding=True) ms.fit(X) labels = ms.labels_ cluster_centers = ms.cluster_centers_ labels_unique = np.unique(labels) n_clusters_ = len(labels_unique) print("number of estimated clusters : %d" % n_clusters_) # ############################################################################# # Plot result import matplotlib.pyplot as plt from itertools import cycle plt.figure(1) plt.clf() colors = cycle('bgrcmykbgrcmykbgrcmykbgrcmyk') for k, col in zip(range(n_clusters_), colors): my_members = labels == k cluster_center = cluster_centers[k] plt.plot(X[my_members, 0], X[my_members, 1], col + '.') plt.plot(cluster_center[0], cluster_center[1], 'o', markerfacecolor=col, markeredgecolor='k', markersize=14) plt.title('Estimated number of clusters: %d' % n_clusters_) plt.show()
Clustering.ipynb
poethacker/hello
apache-2.0
These measures assume knowledge of the ground-truth class assignments labels_true and our clustering algorithm's assignments of the same samples labels_pred (see https://scikit-learn.org/stable/modules/clustering.html#clustering-performance-evaluation). The adjusted Rand index is a function that measures the similarity of the two assignments; the Mutual Information is a function that measures the agreement of the two assignments, ignoring permutations. There are two desirable objectives for any cluster assignment: - homogeneity: each cluster contains only members of a single class. - completeness: all members of a given class are assigned to the same cluster. We can turn those concepts into scores, homogeneity_score and completeness_score; both are bounded below by 0.0 and above by 1.0 (higher is better). Their harmonic mean, called V-measure, is computed by v_measure_score. The Silhouette Coefficient is defined for each sample and is composed of two scores: a: the mean distance between a sample and all other points in the same class. b: the mean distance between a sample and all other points in the next nearest cluster.
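A small worked illustration of the difference between homogeneity and completeness, with hand-made labels (independent of the iris example in the next cell):

```python
# Tiny hand-made example: every predicted cluster is pure, so homogeneity
# is perfect, but class 0 is split across two clusters, so completeness
# (and therefore the V-measure) drops below 1.
from sklearn import metrics

labels_true = [0, 0, 0, 1, 1, 1]
labels_pred = [0, 0, 1, 2, 2, 2]

print("homogeneity :", metrics.homogeneity_score(labels_true, labels_pred))
print("completeness:", metrics.completeness_score(labels_true, labels_pred))
print("v-measure   :", metrics.v_measure_score(labels_true, labels_pred))
```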
from sklearn import metrics from sklearn.metrics import pairwise_distances from sklearn import datasets dataset = datasets.load_iris() X = dataset.data y = dataset.target import numpy as np from sklearn.cluster import KMeans kmeans_model = KMeans(n_clusters=3, random_state=1).fit(X) labels = kmeans_model.labels_ labels_true=y labels_pred=labels from sklearn import metrics metrics.adjusted_rand_score(labels_true, labels_pred) from sklearn import metrics metrics.adjusted_mutual_info_score(labels_true, labels_pred) metrics.homogeneity_score(labels_true, labels_pred) metrics.completeness_score(labels_true, labels_pred) metrics.v_measure_score(labels_true, labels_pred) metrics.silhouette_score(X, labels, metric='euclidean')
Clustering.ipynb
poethacker/hello
apache-2.0
Question: I want to know how similar 2 styles are. I really like Apricot Blondes, and I want to see what other styles Apricot would go in. Perhaps it would be good in a German Pils. How to get there: The dataset shows the percentage of votes that said a style-addition combo would likely taste good. So, we can compare the votes on each addition for any two styles and see how similar they are.
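Before running this on the real data in the next cell, here is a toy illustration of the same similarity measure; the styles, additions, and vote percentages below are invented for the example and are not taken from the wtb dataset:

```python
# Toy illustration of the similarity measure; these vote percentages are
# invented for the example, not taken from the wtb dataset.
import numpy as np
import pandas as pd

toy = pd.DataFrame(
    {"Blonde Ale":  [0.90, 0.40, 0.75],
     "German Pils": [0.60, 0.35, 0.50],
     "Stout":       [0.20, 0.80, 0.10]},
    index=["Apricot", "Coffee", "Citrus"])  # additions as rows

def similarity(style_a, style_b):
    # Mean squared difference of the vote shares: lower means more similar.
    return np.square(toy[style_a] - toy[style_b]).mean()

print("Blonde Ale vs German Pils:", round(similarity("Blonde Ale", "German Pils"), 4))
print("Blonde Ale vs Stout:      ", round(similarity("Blonde Ale", "Stout"), 4))
```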
import math # Square the difference of each row, and then return the mean of the column. # This is the average difference between the two. # It will be higher if they are different, and lower if they are similar def similarity(styleA, styleB): diff = np.square(wtb[styleA] - wtb[styleB]) return diff.mean() res = [] # Loop through each addition pair wtb = wtb.T for styleA in wtb.columns: for styleB in wtb.columns: # Skip if styleA and combo B are the same. # To prevent duplicates, skip if A is after B alphabetically if styleA != styleB and styleA < styleB: res.append([styleA, styleB, similarity(styleA, styleB)]) df = pd.DataFrame(res, columns=["styleA", "styleB", "similarity"])
notebooks/Style Similarity.ipynb
jamesnw/wtb-data
mit
Top 10 most similar styles
df.sort_values("similarity").head(10)
notebooks/Style Similarity.ipynb
jamesnw/wtb-data
mit
10 Least Similar styles
df.sort_values("similarity", ascending=False).head(10)
notebooks/Style Similarity.ipynb
jamesnw/wtb-data
mit
Similarity of a specific combo
def comboSimilarity(styleA, styleB): # styleA needs to be before styleB alphabetically if styleA > styleB: addition_temp = styleA styleA = styleB styleB = addition_temp return df.loc[df['styleA'] == styleA].loc[df['styleB'] == styleB] comboSimilarity('Blonde Ale', 'German Pils')
notebooks/Style Similarity.ipynb
jamesnw/wtb-data
mit
We can see that Blonde Ales and German Pils are right between the mean and 50th percentile, so it's not a bad idea, but it's not a good idea either. We can also take a look at this visually to confirm.
%matplotlib inline import matplotlib import matplotlib.pyplot as plt n, bins, patches = plt.hist(df['similarity'], bins=50) similarity = float(comboSimilarity('Blonde Ale', 'German Pils')['similarity']) # Find the histogram bin that holds the similarity between the two target = np.argmax(bins>similarity) patches[target].set_fc('r') plt.show()
notebooks/Style Similarity.ipynb
jamesnw/wtb-data
mit
Is it working? Let's see! TODO 1.a: Run the decode_img function and plot it to see a happy looking daisy.
img = tf.io.read_file( "gs://cloud-ml-data/img/flower_photos/daisy/754296579_30a9ae018c_n.jpg" ) # Uncomment to see the image string. # print(img) img = decode_img(img, [IMG_WIDTH, IMG_HEIGHT]) plt.imshow(img.numpy());
notebooks/image_models/solutions/3_tf_hub_transfer_learning.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Note: It may take 4-5 minutes to see the results of different batches. MobileNetV2 These flower photos are much larger than the handwriting recognition images in MNIST. They have about 10 times as many pixels per axis and there are three color channels, making the information here over 200 times larger! How do our current techniques stand up? Copy your best model architecture over from the <a href="2_mnist_models.ipynb">MNIST models lab</a> and see how well it does after training for 5 epochs of 50 steps. TODO 2.a Copy over the most accurate model from 2_mnist_models.ipynb or build a new CNN Keras model.
eval_path = "gs://cloud-ml-data/img/flower_photos/eval_set.csv" nclasses = len(CLASS_NAMES) hidden_layer_1_neurons = 400 hidden_layer_2_neurons = 100 dropout_rate = 0.25 num_filters_1 = 64 kernel_size_1 = 3 pooling_size_1 = 2 num_filters_2 = 32 kernel_size_2 = 3 pooling_size_2 = 2 layers = [ Conv2D( num_filters_1, kernel_size=kernel_size_1, activation="relu", input_shape=(IMG_WIDTH, IMG_HEIGHT, IMG_CHANNELS), ), MaxPooling2D(pooling_size_1), Conv2D(num_filters_2, kernel_size=kernel_size_2, activation="relu"), MaxPooling2D(pooling_size_2), Flatten(), Dense(hidden_layer_1_neurons, activation="relu"), Dense(hidden_layer_2_neurons, activation="relu"), Dropout(dropout_rate), Dense(nclasses), Softmax(), ] old_model = Sequential(layers) old_model.compile( optimizer="adam", loss="categorical_crossentropy", metrics=["accuracy"] ) train_ds = load_dataset(train_path, BATCH_SIZE) eval_ds = load_dataset(eval_path, BATCH_SIZE, training=False) old_model.fit( train_ds, epochs=5, steps_per_epoch=5, validation_data=eval_ds, validation_steps=VALIDATION_STEPS, )
notebooks/image_models/solutions/3_tf_hub_transfer_learning.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
If your model is like mine, it learns a little bit, slightly better than random, but ugh, it's too slow! With a batch size of 32, 5 epochs of 5 steps is only getting through about a quarter of our images. Not to mention, this is a much larger problem than MNIST, so wouldn't we need a larger model? But how big do we need to make it? Enter Transfer Learning. Why not take advantage of someone else's hard work? We can take the layers of a model that's been trained on a similar problem to ours and splice it into our own model. TensorFlow Hub is a database of models, many of which can be used for Transfer Learning. We'll use a model called MobileNet which is an architecture optimized for image classification on mobile devices, which can be done with TensorFlow Lite. Let's see how a model trained on ImageNet data compares to one built from scratch. The tensorflow_hub Python package has a function to include a Hub model as a layer in Keras. We'll set the weights of this model as un-trainable. Even though this is a compressed version of full scale image classification models, it still has over four hundred thousand parameters! Training all these would not only add to our computation, but it is also prone to over-fitting. We'll add some L2 regularization and Dropout to prevent that from happening to our trainable weights. TODO 2.b: Add a Hub Keras Layer at the top of the model using the handle provided.
module_selection = "mobilenet_v2_100_224" module_handle = "https://tfhub.dev/google/imagenet/{}/feature_vector/4".format( module_selection ) transfer_model = tf.keras.Sequential( [ hub.KerasLayer(module_handle, trainable=False), tf.keras.layers.Dropout(rate=0.2), tf.keras.layers.Dense( nclasses, activation="softmax", kernel_regularizer=tf.keras.regularizers.l2(0.0001), ), ] ) transfer_model.build((None,) + (IMG_HEIGHT, IMG_WIDTH, IMG_CHANNELS)) transfer_model.summary()
notebooks/image_models/solutions/3_tf_hub_transfer_learning.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
We need to define a model and a cost function
# Perceptron model (or Linear regression) Y_ = X*W + B def distance(y, y_): return tf.abs(y-y_) # cost = distance(Y_, tf.sin(X)) cost = tf.reduce_mean(distance(Y_, Y)) optimizer = tf.train.GradientDescentOptimizer(learning_rate = 0.01).minimize(cost)
Training a network with TensorFlow/.ipynb_checkpoints/Sine wave predictor-checkpoint.ipynb
aliasvishnu/TensorFlow-Creative-Applications
gpl-3.0
We can group data by more than one factor. Let's say we're interested in how levels of ADHD interact with groupStatus (multitasking: high or low). We will first make a factor for ADHD (median-split), and add it as a grouping variable using the cut() function in pandas:
df["adhdF"] = pd.cut(df["adhd"],bins=2,labels=["Low","High"])
public/tutorials/python/3_descriptives/lesson.ipynb
monicathieu/cu-psych-r-tutorial
mit
Generic, untyped memoization It is surprisingly short!
def memo(f): memoire = {} # dictionnaire vide, {} ou dict() def memo_f(n): # nouvelle fonction if n not in memoire: # verification memoire[n] = f(n) # stockage return memoire[n] # lecture return memo_f # ==> f memoisée !
agreg/Mémoisation_en_Python_et_OCaml.ipynb
Naereen/notebooks
mit
Tests
memo_f1 = memo(f1) print("3 secondes...") print(memo_f1(10)) # 13, 3 secondes après print("0 secondes !") print(memo_f1(10)) # instantanné ! # différent de ces deux lignes ! print("3 secondes...") print(memo(f1)(10)) print("3 secondes...") print(memo(f1)(10)) # 3 secondes aussi ! %timeit memo_f1(10) # instantanné !
agreg/Mémoisation_en_Python_et_OCaml.ipynb
Naereen/notebooks
mit
And:
memo_f2 = memo(f2) print("4 secondes...") print(memo_f2(10)) # 100, 4 secondes après print("0 secondes !") print(memo_f2(10)) # instantanné ! %timeit memo_f2(10) # instantanné !
agreg/Mémoisation_en_Python_et_OCaml.ipynb
Naereen/notebooks
mit
Generic, typed memoization Typing the memoization is not much more complicated.
def memo_avec_type(f): memoire = {} # dictionnaire vide, {} ou dict() def memo_f_avec_type(n): if (type(n), n) not in memoire: memoire[(type(n), n)] = f(n) return memoire[(type(n), n)] return memo_f_avec_type
agreg/Mémoisation_en_Python_et_OCaml.ipynb
Naereen/notebooks
mit
Advantage: we get a result that is more consistent "in terms of reproducibility of the results", for example:
def fonction_sur_entiers_ou_flottants(n): if isinstance(n, int): return 'Int' elif isinstance(n, float): return 'Float' else: return '?' test0 = fonction_sur_entiers_ou_flottants print(test0(1)) print(test0(1.0)) # résultat correct ! print(test0("1")) test1 = memo(fonction_sur_entiers_ou_flottants) print(test1(1)) print(test1(1.0)) # résultat incorrect ! print(test1("1")) test2 = memo_avec_type(fonction_sur_entiers_ou_flottants) print(test2(1)) print(test2(1.0)) # résultat correct ! print(test2("1"))
agreg/Mémoisation_en_Python_et_OCaml.ipynb
Naereen/notebooks
mit
Bonus: we can use Python's decorator syntax
def fibo(n): if n <= 1: return 1 else: return fibo(n-1) + fibo(n-2) print("Test de fibo() non mémoisée :") for n in range(10): print("F_{} = {}".format(n, fibo(n)))
agreg/Mémoisation_en_Python_et_OCaml.ipynb
Naereen/notebooks
mit
This recursive function is terribly slow!
%timeit fibo(35) # version plus rapide ! @memo def fibo2(n): if n <= 1: return 1 else: return fibo2(n-1) + fibo2(n-2) print("Test de fibo() mémoisée (plus rapide) :") for n in range(10): print("F_{} = {}".format(n, fibo2(n))) %timeit fibo2(35)
agreg/Mémoisation_en_Python_et_OCaml.ipynb
Naereen/notebooks
mit
Another example, where the time saving is less significant.
def factorielle(n): if n <= 0: return 0 elif n == 1: return 1 else: return n * factorielle(n-1) print("Test de factorielle() non mémoisée :") for n in range(10): print("{}! = {}".format(n, factorielle(n))) %timeit factorielle(30) @memo def factorielle2(n): if n <= 0: return 0 elif n == 1: return 1 else: return n * factorielle2(n-1) print("Test de factorielle() mémoisée :") for n in range(10): print("{}! = {}".format(n, factorielle2(n))) %timeit factorielle2(30)
agreg/Mémoisation_en_Python_et_OCaml.ipynb
Naereen/notebooks
mit
Conclusion In Python it is easy, with generic dictionaries and a convenient decorator syntax. Bonus: such a decorator is available in the standard library, in the functools module!
from functools import lru_cache # lru = least recently updated @lru_cache(maxsize=None) def fibo3(n): if n <= 1: return 1 else: return fibo3(n-1) + fibo3(n-2) print("Test de fibo() mémoisée avec functools.lru_cache (plus rapide) :") for n in range(10): print("F_{} = {}".format(n, fibo3(n))) %timeit fibo2(35) %timeit fibo3(35) %timeit fibo2(70) %timeit fibo3(70)
agreg/Mémoisation_en_Python_et_OCaml.ipynb
Naereen/notebooks
mit
(We get almost the same performance as our manual implementation.) In OCaml I work through exactly the same examples. I am experimenting with using two different Jupyter kernels to show code examples written in two languages in the same notebook... It is not very clean, but it works. Preliminaries A few functions needed for these examples:
let print = Format.printf;; let sprintf = Format.sprintf;; let time = Unix.time;; let sleep n = Sys.command (sprintf "sleep %i" n);; let timeit (repet : int) (f : 'a -> 'a) (x : 'a) () : float = let time0 = time () in for _ = 1 to repet do ignore (f x); done; let time1 = time () in (time1 -. time0 ) /. (float_of_int repet) ;;
agreg/Mémoisation_en_Python_et_OCaml.ipynb
Naereen/notebooks
mit
Examples of functions to memoize
let f1 n = ignore (sleep 3); n + 2 ;; let _ = f1 10;; (* 13, après 3 secondes *) timeit 3 f1 10 ();; (* 3 secondes *)
agreg/Mémoisation_en_Python_et_OCaml.ipynb
Naereen/notebooks
mit
And another similar example:
let f2 n = ignore (sleep 4); n * n ;; let _ = f2 10;; (* 100, après 3 secondes *) timeit 3 f2 10 ();; (* 4 secondes *)
agreg/Mémoisation_en_Python_et_OCaml.ipynb
Naereen/notebooks
mit
Memoization for functions of one argument We use the Hashtbl module from the standard library.
let memo f = let memoire = Hashtbl.create 128 in (* taille 128 par defaut *) let memo_f n = if Hashtbl.mem memoire n then (* lecture *) Hashtbl.find memoire n else begin let res = f n in (* calcul *) Hashtbl.add memoire n res; (* stockage *) res end in memo_f (* nouvelle fonction *) ;;
agreg/Mémoisation_en_Python_et_OCaml.ipynb
Naereen/notebooks
mit
Tests Two examples:
let memo_f1 = memo f1 ;; let _ = memo_f1 10 ;; (* 3 secondes *) let _ = memo_f1 10 ;; (* instantanné *) timeit 100 memo_f1 20 ();; (* 0.03 secondes *) let memo_f2 = memo f2 ;; let _ = memo_f2 10 ;; (* 4 secondes *) let _ = memo_f2 10 ;; (* instantanné *) timeit 100 memo_f2 20 ();; (* 0.04 secondes *)
agreg/Mémoisation_en_Python_et_OCaml.ipynb
Naereen/notebooks
mit
My timeit function performs a configurable number of repetitions on non-random inputs, so the observed average time depends on the number of repetitions!
timeit 10000 memo_f2 50 ();; (* 0.04 secondes *)
agreg/Mémoisation_en_Python_et_OCaml.ipynb
Naereen/notebooks
mit
Example: the Fibonacci sequence
let rec fibo = function | 0 | 1 -> 1 | n -> (fibo (n - 1)) + (fibo (n - 2)) ;; fibo 40;; timeit 10 fibo 40 ();; (* 4.2 secondes ! *)
agreg/Mémoisation_en_Python_et_OCaml.ipynb
Naereen/notebooks
mit
And with automatic memoization:
let memo_fibo = memo fibo;; memo_fibo 40;; timeit 10 memo_fibo 41 ();; (* 0.7 secondes ! *)
agreg/Mémoisation_en_Python_et_OCaml.ipynb
Naereen/notebooks
mit
Given the variables: planet = "Earth" diameter = 12742 Use .format() to print the following string: The diameter of Earth is 12742 kilometers.
planet = "Earth" diameter = 12742 'The diameter of {} is {} kilometers.'.format(planet,diameter)
Python-Crash-Course/Python Crash Course Exercises .ipynb
iannesbitt/ml_bootcamp
mit
Given this nested dictionary grab the word "hello". Be prepared, this will be annoying/tricky
d = {'k1':[1,2,3,{'tricky':['oh','man','inception',{'target':[1,2,3,'hello']}]}]} d['k1'][3]['tricky'][3]['target'][3]
Python-Crash-Course/Python Crash Course Exercises .ipynb
iannesbitt/ml_bootcamp
mit
What is the main difference between a tuple and a list?
# Tuple is immutable, list items can be changed
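A quick demonstration of that difference:

```python
# Lists are mutable, tuples are not.
lst = [1, 2, 3]
tup = (1, 2, 3)

lst[0] = 99  # fine: list items can be reassigned
try:
    tup[0] = 99  # tuples do not support item assignment
except TypeError as err:
    print("TypeError:", err)
```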
Python-Crash-Course/Python Crash Course Exercises .ipynb
iannesbitt/ml_bootcamp
mit
Create a function that grabs the email website domain from a string in the form: [email protected] So for example, passing "[email protected]" would return: domain.com
def domainGet(inp): return inp.split('@')[1] domainGet('[email protected]')
Python-Crash-Course/Python Crash Course Exercises .ipynb
iannesbitt/ml_bootcamp
mit
Create a basic function that returns True if the word 'dog' is contained in the input string. Don't worry about edge cases like a punctuation being attached to the word dog, but do account for capitalization.
def findDog(inp): return 'dog' in inp.lower().split() findDog('Is there a dog here?')
Python-Crash-Course/Python Crash Course Exercises .ipynb
iannesbitt/ml_bootcamp
mit
Create a function that counts the number of times the word "dog" occurs in a string. Again ignore edge cases.
def countDog(inp): dog = 0 for x in inp.lower().split(): if x == 'dog': dog += 1 return dog countDog('This dog runs faster than the other dog dude!')
Python-Crash-Course/Python Crash Course Exercises .ipynb
iannesbitt/ml_bootcamp
mit
Use lambda expressions and the filter() function to filter out words from a list that don't start with the letter 's'. For example: seq = ['soup','dog','salad','cat','great'] should be filtered down to: ['soup','salad']
seq = ['soup','dog','salad','cat','great'] list(filter(lambda item:item[0]=='s',seq))
Python-Crash-Course/Python Crash Course Exercises .ipynb
iannesbitt/ml_bootcamp
mit
Final Problem You are driving a little too fast, and a police officer stops you. Write a function to return one of 3 possible results: "No ticket", "Small ticket", or "Big Ticket". If your speed is 60 or less, the result is "No Ticket". If speed is between 61 and 80 inclusive, the result is "Small Ticket". If speed is 81 or more, the result is "Big Ticket". Unless it is your birthday (encoded as a boolean value in the parameters of the function) -- on your birthday, your speed can be 5 higher in all cases.
def caught_speeding(speed, is_birthday): if is_birthday: speed = speed - 5 if speed > 80: return 'Big Ticket' elif speed > 60: return 'Small Ticket' else: return 'No Ticket' caught_speeding(81,True) caught_speeding(81,False)
Python-Crash-Course/Python Crash Course Exercises .ipynb
iannesbitt/ml_bootcamp
mit
Simple 3D Visualizations of a neuron Create 3D visualizations
# Specify param.image size to work with our models input, must be a multiple of 400. param_f = lambda: param.image(120, h=120, channels=3) # std_transforms = [ # pad(2, mode="constant", constant_value=.5), # jitter(2)] # transforms = std_transforms + [crop_or_pad_to(*model.image_shape[:2])] transforms = [] # Specify the objective # neuron = lambda n: objectives.neuron(LAYERS['pool1'][0], n) # obj = neuron(0) channel = lambda n: objectives.channel(LAYERS['pool1'][0], n) obj = channel(0) # Specify the number of optimzation steps, will output image at each step thresholds = (1, 2, 4, 8, 16, 32, 64, 128, 256, 512) # Render the objevtive imgs = render.render_vis(model, obj, param_f, thresholds=thresholds, transforms=transforms) show([nd.zoom(img[0], [1,1,1], order=0) for img in imgs]) # test = np.array(imgs) # test = test.reshape(400) # test = test[0:400:1] # fig = plt.figure(frameon=False); # ax = plt.Axes(fig, [0, 0, 1, 1]); # ax.set_axis_off(); # fig.add_axes(ax); # ax.plot(test, 'black'); # ax.set(xlim=(0, 400)); # ax.set(ylim=(0,1))
lucid_work/notebooks/feature_visualization.ipynb
davidparks21/qso_lya_detection_pipeline
mit
Simple 1D visualizations
# Specify param.image size param_f = lambda: param.image(400, h=1, channels=1) transforms = [] # Specify the objective # neuron = lambda n: objectives.neuron(LAYERS['pool1'][0], n) # obj = neuron(0) channel = lambda n: objectives.channel(LAYERS['pool1'][0], n) obj = channel(0) # Specify the number of optimzation steps, thresholds = (128,) # Render the objevtive imgs = render.render_vis(model, obj, param_f, thresholds=thresholds, transforms=transforms, verbose=False) # Display visualization test = np.array(imgs) test = test.reshape(400) test = test[0:400:1] fig = plt.figure(frameon=False); ax = plt.Axes(fig, [0, 0, 1, 1]); ax.set_axis_off(); fig.add_axes(ax); ax.plot(test, 'black'); ax.set(xlim=(0, 400)); ax.set(ylim=(0,1))
lucid_work/notebooks/feature_visualization.ipynb
davidparks21/qso_lya_detection_pipeline
mit
Visualize the process using plt.plot with t on the x-axis and W(t) on the y-axis. Label your x and y axes.
plt.plot(t,W) plt.xlabel("$t$") plt.ylabel("$W(t)$") assert True # this is for grading
assignments/assignment03/NumpyEx03.ipynb
aschaffn/phys202-2015-work
mit
Use np.diff to compute the changes at each step of the motion, dW, and then compute the mean and standard deviation of those differences.
dW = np.diff(W) dW.mean(), dW.std() assert len(dW)==len(W)-1 assert dW.dtype==np.dtype(float)
assignments/assignment03/NumpyEx03.ipynb
aschaffn/phys202-2015-work
mit
Write a function that takes $W(t)$ and converts it to geometric Brownian motion using the equation: $$ X(t) = X_0 e^{((\mu - \sigma^2/2)t + \sigma W(t))} $$ Use Numpy ufuncs and no loops in your function.
def geo_brownian(t, W, X0, mu, sigma): """Return X(t) for geometric brownian motion with drift mu, volatility sigma.""" exponent = (mu - 0.5 * sigma**2) * t + sigma * W return X0 * np.exp(exponent) assert True # leave this for grading
assignments/assignment03/NumpyEx03.ipynb
aschaffn/phys202-2015-work
mit
Use your function to simulate geometric brownian motion, $X(t)$ for $X_0=1.0$, $\mu=0.5$ and $\sigma=0.3$ with the Wiener process you computed above. Visualize the process using plt.plot with t on the x-axis and X(t) on the y-axis. Label your x and y axes.
plt.plot(t, geo_brownian(t, W, 1.0, 0.5, 0.3)) plt.xlabel("$t$") plt.ylabel("$X(t)$") assert True # leave this for grading
assignments/assignment03/NumpyEx03.ipynb
aschaffn/phys202-2015-work
mit
<div class="alert alert-info"> **Note** Typecasting `int(response)` converted the string `response` to integer. If user enters anything other than integer, `ValueError` is raised </div> if-else statement Usage: python if condition: statement_1 statement_2 ... statement_n else: statement_1 statement_2 ... statement_n Example:
response = input("Enter an integer : ") num = int(response) if num % 2 == 0: print("{} is an even number".format(num)) else: print("{} is an odd number".format(num))
doc/Langauge/04-Control Structures.ipynb
OpenWeavers/openanalysis
gpl-3.0
Single Line if-else This serves as a replacement for the ternary operator available in C Usage: C ternary c result = (condition) ? value_true : value_false Python single-line if-else python result = value_true if condition else value_false Example:
response = input("Enter an integer : ") num = int(response) result = "even" if num % 2 == 0 else "odd" print("{} is {} number".format(num,result))
doc/Langauge/04-Control Structures.ipynb
OpenWeavers/openanalysis
gpl-3.0
if-else ladder Usage: python if condition_1: statements_1 elif condition_2: statements_2 elif condition_3: statements_3 ... ... ... elif condition_n: statements_n else: statements_last <div class="alert alert-info"> **Note** `Python` uses `elif` instead of `else if` like in `C`,`Java` or `C#` </div> Example:
response = input("Enter an integer (+ve or -ve) : ") num = int(response) if num > 0: print("{} is +ve".format(num)) elif num == 0: print("Zero") else: print("{} is -ve".format(num))
doc/Langauge/04-Control Structures.ipynb
OpenWeavers/openanalysis
gpl-3.0
<div class="alert alert-info"> **Note**: No `switch-case` There is no `switch-case` structure in Python. It can be realized using `if-else ladder` or any other ways </div> while loop Usage: python while condition: statement_1 statement_2 ... statement_n Example:
response = input("Enter an integer : ") num = int(response) prev,current = 0,1 i = 0 while i < num: prev,current = current,prev + current print('Fib[{}] = {}'.format(i,current),end=',') i += 1
doc/Langauge/04-Control Structures.ipynb
OpenWeavers/openanalysis
gpl-3.0
<div class="alert alert-info"> **Note** - Multiple assignments in single statement can be done -`Python` doesn't support `++` and `--` operators as in `C` - There is no `do-while` loop in Python </div> for loop Usage: python for object in collection: do_something_with_object <div class="alert alert-info"> **Notes** - `C` like `for(init;test;modify)` is not supported in Python - Python provides `range` object for iterating over numbers Usage of `range` object: ```python x = range(start = 0,stop,step = 1) ``` now `x` can be iterated, and it generates numbers including `start` excluding `stop` differing in the steps of `step` </div> Example:
for i in range(10): print(i, end=',') for i in range(2,10,3): print(i, end=',') response = input("Enter an integer : ") num = int(response) prev,current = 0,1 for i in range(num): prev,current = current,prev + current print('Fib[{}] = {}'.format(i,current),end=',')
doc/Langauge/04-Control Structures.ipynb
OpenWeavers/openanalysis
gpl-3.0
SVC Parameter Settings
# default parameters for SVC # ========================== default_svc_params = {} default_svc_params['C'] = 1.0 # penalty default_svc_params['class_weight'] = None # Set the parameter C of class i to class_weight[i]*C # set to 'auto' for unbalanced classes default_svc_params['gamma'] = 0.0 # Kernel coefficient for 'rbf', 'poly' and 'sigmoid' default_svc_params['kernel'] = 'rbf' # 'linear', 'poly', 'rbf', 'sigmoid', 'precomputed' or a callable # use of 'sigmoid' is discouraged default_svc_params['shrinking'] = True # Whether to use the shrinking heuristic. default_svc_params['probability'] = False # Whether to enable probability estimates. default_svc_params['tol'] = 0.001 # Tolerance for stopping criterion. default_svc_params['cache_size'] = 200 # size of the kernel cache (in MB). default_svc_params['max_iter'] = -1 # limit on iterations within solver, or -1 for no limit. default_svc_params['verbose'] = False default_svc_params['degree'] = 3 # 'poly' only default_svc_params['coef0'] = 0.0 # 'poly' and 'sigmoid' only # set the parameters for the classifier # ===================================== svc_params = dict(default_svc_params) svc_params['cache_size'] = 2000 svc_params['probability'] = True svc_params['kernel'] = 'poly' svc_params['C'] = 1.0 svc_params['gamma'] = 0.1112 svc_params['degree'] = 3 svc_params['coef0'] = 1 # create the classifier itself # ============================ svc_clf = SVC(**svc_params)
svm.scikit/svm_poly_pca.scikit_benchmark.ipynb
grfiv/MNIST
mit
Learning Curves See http://scikit-learn.org/stable/auto_examples/model_selection/plot_learning_curve.html. The score is the model accuracy. The red line shows how well the model fits the data it was trained on: a high score indicates low bias (the model does fit the training data), and it's not unusual for the red line to start at 1.00 and decline slightly; a low score indicates the model does not fit the training data, in which case more predictor variables or a different model are usually indicated. The green line shows how well the model predicts the test data: if it's rising, it means more data to train on will produce better predictions.
t0 = time.time() from sklearn.learning_curve import learning_curve from sklearn.cross_validation import ShuffleSplit def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None, n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)): """ Generate a simple plot of the test and training learning curve. Parameters ---------- estimator : object type that implements the "fit" and "predict" methods An object of that type which is cloned for each validation. title : string Title for the chart. X : array-like, shape (n_samples, n_features) Training vector, where n_samples is the number of samples and n_features is the number of features. y : array-like, shape (n_samples) or (n_samples, n_features), optional Target relative to X for classification or regression; None for unsupervised learning. ylim : tuple, shape (ymin, ymax), optional Defines minimum and maximum yvalues plotted. cv : integer, cross-validation generator, optional If an integer is passed, it is the number of folds (defaults to 3). Specific cross-validation objects can be passed, see sklearn.cross_validation module for the list of possible objects n_jobs : integer, optional Number of jobs to run in parallel (default 1). """ plt.figure(figsize=(8, 6)) plt.title(title) if ylim is not None: plt.ylim(*ylim) plt.xlabel("Training examples") plt.ylabel("Score") train_sizes, train_scores, test_scores = learning_curve( estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes) train_scores_mean = np.mean(train_scores, axis=1) train_scores_std = np.std(train_scores, axis=1) test_scores_mean = np.mean(test_scores, axis=1) test_scores_std = np.std(test_scores, axis=1) plt.grid() plt.fill_between(train_sizes, train_scores_mean - train_scores_std, train_scores_mean + train_scores_std, alpha=0.1, color="r") plt.fill_between(train_sizes, test_scores_mean - test_scores_std, test_scores_mean + test_scores_std, alpha=0.1, color="g") plt.plot(train_sizes, train_scores_mean, 'o-', color="r", label="Training score") plt.plot(train_sizes, test_scores_mean, 'o-', color="g", label="Cross-validation score") plt.tight_layout() plt.legend(loc="best") return plt C_gamma = "C="+str(np.round(svc_params['C'],4))+", gamma="+str(np.round(svc_params['gamma'],6)) title = "Learning Curves (SVM, Poly, " + C_gamma + ")" plot_learning_curve(estimator = svc_clf, title = title, X = trainX, y = trainY, ylim = (0.85, 1.01), cv = ShuffleSplit(n = trainX.shape[0], n_iter = 5, test_size = 0.2, random_state=0), n_jobs = 8) plt.show() print("\ntime in minutes {0:.2f}".format((time.time()-t0)/60))
svm.scikit/svm_poly_pca.scikit_benchmark.ipynb
grfiv/MNIST
mit
Custom training and batch prediction <table align="left"> <td> <a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/custom/sdk-custom-image-classification-batch.ipynb"> <img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab </a> </td> <td> <a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/official/custom/sdk-custom-image-classification-batch.ipynb"> <img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo"> View on GitHub </a> </td> </table> <br/><br/><br/> Overview This tutorial demonstrates how to use the Vertex SDK for Python to train and deploy a custom image classification model for batch prediction. Dataset The dataset used for this tutorial is the cifar10 dataset from TensorFlow Datasets. The version of the dataset you will use is built into TensorFlow. The trained model predicts which type of class an image is from ten classes: airplane, automobile, bird, cat, deer, dog, frog, horse, ship, truck. Objective In this notebook, you create a custom-trained model from a Python script in a Docker container using the Vertex SDK for Python, and then do a prediction on the deployed model by sending data. Alternatively, you can create custom-trained models using gcloud command-line tool, or online using the Cloud Console. The steps performed include: Create a Vertex AI custom job for training a model. Train a TensorFlow model. Make a batch prediction. Cleanup resources. Costs This tutorial uses billable components of Google Cloud (GCP): Vertex AI Cloud Storage Learn about Vertex AI pricing and Cloud Storage pricing, and use the Pricing Calculator to generate a cost estimate based on your projected usage. Installation Install the latest (preview) version of Vertex SDK for Python.
import os # The Google Cloud Notebook product has specific requirements IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version") # Google Cloud Notebook requires dependencies to be installed with '--user' USER_FLAG = "" if IS_GOOGLE_CLOUD_NOTEBOOK: USER_FLAG = "--user" ! pip install {USER_FLAG} --upgrade google-cloud-aiplatform
notebooks/official/custom/sdk-custom-image-classification-batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Authenticate your Google Cloud account If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step. If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth. Otherwise, follow these steps: In the Cloud Console, go to the Create service account key page. Click Create service account. In the Service account name field, enter a name, and click Create. In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex AI" into the filter box, and select Vertex AI Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin. Click Create. A JSON file that contains your key downloads to your local environment. Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
import sys # If you are running this notebook in Colab, run this cell and follow the # instructions to authenticate your GCP account. This provides access to your # Cloud Storage bucket and lets you submit training jobs and prediction # requests. # The Google Cloud Notebook product has specific requirements IS_GOOGLE_CLOUD_NOTEBOOK = os.path.exists("/opt/deeplearning/metadata/env_version") # If on Google Cloud Notebooks, then don't execute this code if not IS_GOOGLE_CLOUD_NOTEBOOK: if "google.colab" in sys.modules: from google.colab import auth as google_auth google_auth.authenticate_user() # If you are running this notebook locally, replace the string below with the # path to your service account key and run this cell to authenticate your GCP # account. elif not os.getenv("IS_TESTING"): %env GOOGLE_APPLICATION_CREDENTIALS ''
notebooks/official/custom/sdk-custom-image-classification-batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Create a Cloud Storage bucket The following steps are required, regardless of your notebook environment. When you submit a training job using the Cloud SDK, you upload a Python package containing your training code to a Cloud Storage bucket. Vertex AI runs the code from this package. In this tutorial, Vertex AI also saves the trained model that results from your job in the same bucket. Using this model artifact, you can then create Vertex AI model resources. Set the name of your Cloud Storage bucket below. It must be unique across all Cloud Storage buckets. You may also change the REGION variable, which is used for operations throughout the rest of this notebook. Make sure to choose a region where Vertex AI services are available. You may not use a Multi-Regional Storage bucket for training with Vertex AI.
BUCKET_NAME = "gs://[your-bucket-name]"  # @param {type:"string"}
REGION = "[your-region]"  # @param {type:"string"}

if BUCKET_NAME == "" or BUCKET_NAME is None or BUCKET_NAME == "gs://[your-bucket-name]":
    # Build a default, unique bucket name from the project ID and timestamp
    BUCKET_NAME = "gs://" + PROJECT_ID + "-aip-" + TIMESTAMP
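PROJECT_ID and TIMESTAMP are assumed to be set earlier in the notebook, and the bucket itself still has to be created before training. A minimal sketch, assuming the gsutil CLI is available in the notebook environment:

# Create the bucket in the chosen region if it does not exist yet
! gsutil mb -l $REGION $BUCKET_NAME

# Verify access by listing its (initially empty) contents
! gsutil ls -al $BUCKET_NAME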
notebooks/official/custom/sdk-custom-image-classification-batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Send the prediction request To make a batch prediction request, call the model object's batch_predict method with the following parameters:
- instances_format: The format of the batch prediction request file: "jsonl", "csv", "bigquery", "tf-record", "tf-record-gzip" or "file-list"
- predictions_format: The format of the batch prediction response file: "jsonl", "csv", "bigquery", "tf-record", "tf-record-gzip" or "file-list"
- job_display_name: The human-readable name for the prediction job.
- gcs_source: A list of one or more Cloud Storage paths to your batch prediction requests.
- gcs_destination_prefix: The Cloud Storage path that the service will write the predictions to.
- model_parameters: Additional filtering parameters for serving prediction results.
- machine_type: The type of machine to use for serving batch predictions.
- accelerator_type: The hardware accelerator type.
- accelerator_count: The number of accelerators to attach to a worker replica.
- starting_replica_count: The number of compute instances to initially provision.
- max_replica_count: The maximum number of compute instances to scale to.

In this tutorial, only one instance is provisioned.

Compute instance scaling You can specify a single instance (or node) to process your batch prediction request. This tutorial uses a single node, so the variables MIN_NODES and MAX_NODES are both set to 1. If you want to use multiple nodes to process your batch prediction request, set MAX_NODES to the maximum number of nodes you want to use. Vertex AI autoscales the number of nodes used to serve your predictions, up to the maximum number you set. Refer to the pricing page to understand the costs of autoscaling with multiple nodes.
MIN_NODES = 1 MAX_NODES = 1 # The name of the job BATCH_PREDICTION_JOB_NAME = "cifar10_batch-" + TIMESTAMP # Folder in the bucket to write results to DESTINATION_FOLDER = "batch_prediction_results" # The Cloud Storage bucket to upload results to BATCH_PREDICTION_GCS_DEST_PREFIX = BUCKET_NAME + "/" + DESTINATION_FOLDER # Make SDK batch_predict method call batch_prediction_job = model.batch_predict( instances_format="jsonl", predictions_format="jsonl", job_display_name=BATCH_PREDICTION_JOB_NAME, gcs_source=BATCH_PREDICTION_GCS_SOURCE, gcs_destination_prefix=BATCH_PREDICTION_GCS_DEST_PREFIX, model_parameters=None, machine_type=DEPLOY_COMPUTE, accelerator_type=DEPLOY_GPU, accelerator_count=DEPLOY_NGPU, starting_replica_count=MIN_NODES, max_replica_count=MAX_NODES, sync=True, )
notebooks/official/custom/sdk-custom-image-classification-batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Evaluate results You can then run a quick evaluation on the prediction results:
- np.argmax: Convert each list of confidence levels to a label
- Compare the predicted labels to the actual labels
- Calculate accuracy as correct/total

To improve the accuracy, try training for a higher number of epochs.
y_predicted = [np.argmax(result["prediction"]) for result in results]

correct = sum(y_predicted == np.array(y_test))
total = len(y_predicted)

print(
    f"Correct predictions = {correct}, Total predictions = {total}, Accuracy = {correct/total}"
)
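The evaluation above assumes a results list of prediction dictionaries and the y_test labels from the CIFAR-10 test split, both prepared earlier in the notebook. A sketch of how results could be collected from the job's JSONL output (assumptions: the job has finished, and its output lands in a timestamped sub-folder of the destination prefix used above):

import json
import tensorflow as tf

results = []
# Each output file contains one JSON object per line with an "instance" and a "prediction" field
for path in tf.io.gfile.glob(BATCH_PREDICTION_GCS_DEST_PREFIX + "/*/prediction.results-*"):
    with tf.io.gfile.GFile(path, "r") as f:
        for line in f:
            results.append(json.loads(line))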
notebooks/official/custom/sdk-custom-image-classification-batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Cleaning up To clean up all Google Cloud resources used in this project, you can delete the Google Cloud project you used for the tutorial. Otherwise, you can delete the individual resources you created in this tutorial:
- Training Job
- Model
- Cloud Storage Bucket
delete_training_job = True delete_model = True # Warning: Setting this to true will delete everything in your bucket delete_bucket = False # Delete the training job job.delete() # Delete the model model.delete() if delete_bucket and "BUCKET_NAME" in globals(): ! gsutil -m rm -r $BUCKET_NAME
notebooks/official/custom/sdk-custom-image-classification-batch.ipynb
GoogleCloudPlatform/vertex-ai-samples
apache-2.0
Basics Set up a simple run with a constant linear bed. We will first define the bed: Glacier bed
# This is the bedrock, linearly decreasing from 3400 m altitude to 1400 m, in 200 steps
nx = 200
bed_h = np.linspace(3400, 1400, nx)

# At the beginning, there is no glacier so our glacier surface is at the bed altitude
surface_h = bed_h

# Let's set the model grid spacing to 100 m (needed later)
map_dx = 100

# plot this
plt.plot(bed_h, color='k', label='Bedrock')
plt.plot(surface_h, label='Initial glacier')
plt.xlabel('Grid points')
plt.ylabel('Altitude (m)')
plt.legend(loc='best');
docs/notebooks/flowline_model.ipynb
jlandmann/oggm
gpl-3.0
Now we have to decide how wide our glacier is, and what the shape of its bed is. For a start, we will use a "u-shaped" bed (see the documentation), with a constant width of 300 m:
# The units of widths is in "grid points", i.e. 3 grid points = 300 m in our case widths = np.zeros(nx) + 3. # Define our bed init_flowline = VerticalWallFlowline(surface_h=surface_h, bed_h=bed_h, widths=widths, map_dx=map_dx)
docs/notebooks/flowline_model.ipynb
jlandmann/oggm
gpl-3.0
The init_flowline variable now contains all the geometrical information needed by the model. It gives access to some attributes, which are not very informative for a glacier that does not exist yet:
print('Glacier length:', init_flowline.length_m) print('Glacier area:', init_flowline.area_km2) print('Glacier volume:', init_flowline.volume_km3)
docs/notebooks/flowline_model.ipynb
jlandmann/oggm
gpl-3.0
Mass balance Then we will need a mass balance model. In our case this will be a simple linear mass-balance, defined by the equilibrium line altitude and an altitude gradient (in [mm m$^{-1}$]):
# ELA at 3000m a.s.l., gradient 4 mm m-1 mb_model = LinearMassBalanceModel(3000, grad=4)
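Conceptually, this linear model is just mb(z) = grad · (z − ELA). A quick sanity check of the numbers used above (a sketch; it assumes the gradient is expressed in mm of ice per metre of elevation and per year):

ela, grad = 3000., 4.
z_top, z_bottom = 3400., 1400.
print('mb at the glacier top:   ', grad * (z_top - ela) / 1000., 'm of ice per year')     # +1.6
print('mb at the glacier bottom:', grad * (z_bottom - ela) / 1000., 'm of ice per year')  # -6.4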
docs/notebooks/flowline_model.ipynb
jlandmann/oggm
gpl-3.0
The mass-balance model gives you the mass-balance for any altitude you want, in units [m s$^{-1}$]. Let us compute the annual mass-balance along the glacier profile:
annual_mb = mb_model.get_mb(surface_h) * SEC_IN_YEAR # Plot it plt.plot(annual_mb, bed_h, color='C2', label='Mass-balance') plt.xlabel('Annual mass-balance (m yr-1)') plt.ylabel('Altitude (m)') plt.legend(loc='best');
docs/notebooks/flowline_model.ipynb
jlandmann/oggm
gpl-3.0
Model run Now that we have all the ingredients to run the model, we just have to initialize it:
# The model requires the initial glacier bed, a mass-balance model, and an initial time (the year y0) model = FlowlineModel(init_flowline, mb_model=mb_model, y0=0.)
docs/notebooks/flowline_model.ipynb
jlandmann/oggm
gpl-3.0
We can now run the model for 150 years and see what the output looks like:
model.run_until(150)

# Plot the initial conditions first:
plt.plot(init_flowline.bed_h, color='k', label='Bedrock')
plt.plot(init_flowline.surface_h, label='Initial glacier')
# Then get the modelled flowline (model.fls[-1]) and plot its new surface
plt.plot(model.fls[-1].surface_h, label='Glacier after {} years'.format(model.yr))
plt.xlabel('Grid points')
plt.ylabel('Altitude (m)')
plt.legend(loc='best');
docs/notebooks/flowline_model.ipynb
jlandmann/oggm
gpl-3.0
Let's print out some information about our glacier:
print('Year:', model.yr) print('Glacier length (m):', model.length_m) print('Glacier area (km2):', model.area_km2) print('Glacier volume (km3):', model.volume_km3)
docs/notebooks/flowline_model.ipynb
jlandmann/oggm
gpl-3.0
Note that the model time is now 150. Running the model again with the same input will do nothing:
model.run_until(150) print('Year:', model.yr) print('Glacier length (m):', model.length_m)
docs/notebooks/flowline_model.ipynb
jlandmann/oggm
gpl-3.0
If we want to run the model for longer, we have to set the desired end year:
model.run_until(500)

# Plot the initial conditions first:
plt.plot(init_flowline.bed_h, color='k', label='Bedrock')
plt.plot(init_flowline.surface_h, label='Initial glacier')
# Then get the modelled flowline (model.fls[-1]) and plot its new surface
plt.plot(model.fls[-1].surface_h, label='Glacier after {} years'.format(model.yr))
plt.xlabel('Grid points')
plt.ylabel('Altitude (m)')
plt.legend(loc='best');

print('Year:', model.yr)
print('Glacier length (m):', model.length_m)
print('Glacier area (km2):', model.area_km2)
print('Glacier volume (km3):', model.volume_km3)
docs/notebooks/flowline_model.ipynb
jlandmann/oggm
gpl-3.0
Note that in order to store some intermediate steps of the evolution of the glacier, it might be useful to make a loop:
# Reinitialize the model
model = FlowlineModel(init_flowline, mb_model=mb_model, y0=0.)

# Year 0 to 600 in 5-year steps
yrs = np.arange(0, 600, 5)

# Arrays to fill with data
nsteps = len(yrs)
length = np.zeros(nsteps)
vol = np.zeros(nsteps)

# Loop
for i, yr in enumerate(yrs):
    model.run_until(yr)
    length[i] = model.length_m
    vol[i] = model.volume_km3

# I store the final results for later use
simple_glacier_h = model.fls[-1].surface_h
docs/notebooks/flowline_model.ipynb
jlandmann/oggm
gpl-3.0
We can now plot the evolution of the glacier length and volume with time:
f, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 5)) ax1.plot(yrs, length); ax1.set_xlabel('Years') ax1.set_ylabel('Length (m)'); ax2.plot(yrs, vol); ax2.set_xlabel('Years') ax2.set_ylabel('Volume (km3)');
docs/notebooks/flowline_model.ipynb
jlandmann/oggm
gpl-3.0
A first experiment OK, now we have seen the basics. We will now define a simple experiment, in which we make the glacier wider at the top (in the accumulation area). This is a common situation for valley glaciers.
# We define the widths as before:
widths = np.zeros(nx) + 3.
# But we now make our glacier 600 m wide for the first 15 grid points:
widths[0:15] = 6

# Define our new bed
wider_flowline = VerticalWallFlowline(surface_h=surface_h, bed_h=bed_h, widths=widths, map_dx=map_dx)
docs/notebooks/flowline_model.ipynb
jlandmann/oggm
gpl-3.0
We will now run our model with the new initial conditions, and store the output in a new variable for comparison:
# Reinitialize the model with the new input model = FlowlineModel(wider_flowline, mb_model=mb_model, y0=0.) # Array to fill with data nsteps = len(yrs) length_w = np.zeros(nsteps) vol_w = np.zeros(nsteps) # Loop for i, yr in enumerate(yrs): model.run_until(yr) length_w[i] = model.length_m vol_w[i] = model.volume_km3 # I store the final results for later use wider_glacier_h = model.fls[-1].surface_h
docs/notebooks/flowline_model.ipynb
jlandmann/oggm
gpl-3.0
Compare the results:
# Plot the initial conditions first: plt.plot(init_flowline.bed_h, color='k', label='Bedrock') # Then the final result plt.plot(simple_glacier_h, label='Simple glacier') plt.plot(wider_glacier_h, label='Wider glacier') plt.xlabel('Grid points') plt.ylabel('Altitude (m)') plt.legend(loc='best'); f, (ax1, ax2) = plt.subplots(1, 2, figsize=(14, 5)) ax1.plot(yrs, length, label='Simple glacier'); ax1.plot(yrs, length_w, label='Wider glacier'); ax1.legend(loc='best') ax1.set_xlabel('Years') ax1.set_ylabel('Length (m)'); ax2.plot(yrs, vol, label='Simple glacier'); ax2.plot(yrs, vol_w, label='Wider glacier'); ax2.legend(loc='best') ax2.set_xlabel('Years') ax2.set_ylabel('Volume (km3)');
docs/notebooks/flowline_model.ipynb
jlandmann/oggm
gpl-3.0
Ice flow parameters The ice flow parameters are going to have a strong influence on the behavior of the glacier. The default in OGGM is to set Glen's creep parameter A to the "standard value" defined by Cuffey and Paterson:
# Default in OGGM print(A)
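For reference only (the exact number printed above depends on the OGGM version): the "standard" creep parameter recommended by Cuffey and Paterson (2010) is on the order of 2.4 × 10⁻²⁴ s⁻¹ Pa⁻³, so a quick comparison could look like this:

# Reference value from Cuffey & Paterson (2010), used here only for comparison
A_reference = 2.4e-24  # s-1 Pa-3
print(A / A_reference)  # close to 1 if OGGM uses the textbook default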
docs/notebooks/flowline_model.ipynb
jlandmann/oggm
gpl-3.0
We can change this and see what happens:
# Reinitialize the model with the new parameter model = FlowlineModel(init_flowline, mb_model=mb_model, y0=0., glen_a=A / 10) # Array to fill with data nsteps = len(yrs) length_s1 = np.zeros(nsteps) vol_s1 = np.zeros(nsteps) # Loop for i, yr in enumerate(yrs): model.run_until(yr) length_s1[i] = model.length_m vol_s1[i] = model.volume_km3 # I store the final results for later use stiffer_glacier_h = model.fls[-1].surface_h # And again model = FlowlineModel(init_flowline, mb_model=mb_model, y0=0., glen_a=A * 10) # Array to fill with data nsteps = len(yrs) length_s2 = np.zeros(nsteps) vol_s2 = np.zeros(nsteps) # Loop for i, yr in enumerate(yrs): model.run_until(yr) length_s2[i] = model.length_m vol_s2[i] = model.volume_km3 # I store the final results for later use softer_glacier_h = model.fls[-1].surface_h # Plot the initial conditions first: plt.plot(init_flowline.bed_h, color='k', label='Bedrock') # Then the final result plt.plot(simple_glacier_h, label='Default A') plt.plot(stiffer_glacier_h, label='A / 10') plt.plot(softer_glacier_h, label='A * 10') plt.xlabel('Grid points') plt.ylabel('Altitude (m)') plt.legend(loc='best');
docs/notebooks/flowline_model.ipynb
jlandmann/oggm
gpl-3.0
In his seminal paper, Oerlemans also uses a so-called "sliding parameter" representing basal sliding. In OGGM this parameter is set to 0 by default, but it can be modified as desired:
# Change sliding to use Oerlemans value: model = FlowlineModel(init_flowline, mb_model=mb_model, y0=0., glen_a=A, fs=5.7e-20) # Array to fill with data nsteps = len(yrs) length_s3 = np.zeros(nsteps) vol_s3 = np.zeros(nsteps) # Loop for i, yr in enumerate(yrs): model.run_until(yr) length_s3[i] = model.length_m vol_s3[i] = model.volume_km3 # I store the final results for later use sliding_glacier_h = model.fls[-1].surface_h # Plot the initial conditions first: plt.plot(init_flowline.bed_h, color='k', label='Bedrock') # Then the final result plt.plot(simple_glacier_h, label='Default') plt.plot(sliding_glacier_h, label='Sliding glacier') plt.xlabel('Grid points') plt.ylabel('Altitude (m)') plt.legend(loc='best');
docs/notebooks/flowline_model.ipynb
jlandmann/oggm
gpl-3.0
Purpose:
- upload sketches to S3
- build stimulus dictionary and write to database

Upload sketches to S3
upload_dir = './sketch'

import os
import boto

runThis = 0
if runThis:
    conn = boto.connect_s3()
    b = conn.create_bucket('sketchpad_basic_pilot2_sketches')
    all_files = [i for i in os.listdir(upload_dir) if i != '.DS_Store']
    for a in all_files:
        print(a)
        k = b.new_key(a)
        k.set_contents_from_filename(os.path.join(upload_dir, a))
        k.set_acl('public-read')
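The cell above relies on the legacy boto (version 2) API. With the now-standard boto3 client, an equivalent upload might look like this (a sketch, assuming the same bucket name and AWS credentials configured locally):

import os
import boto3

s3 = boto3.client('s3')
bucket = 'sketchpad_basic_pilot2_sketches'
for fname in os.listdir('./sketch'):
    if fname == '.DS_Store':
        continue
    # upload and make the sketch publicly readable, as set_acl('public-read') does above
    s3.upload_file(os.path.join('./sketch', fname), bucket, fname,
                   ExtraArgs={'ACL': 'public-read'})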
experiments/recog/preprocess_sketches.ipynb
judithfan/graphcomm
mit
build stimulus dictionary
## read in experimental metadata file
path_to_metadata = '../../analysis/sketchpad_basic_pilot2_group_data.csv'
meta = pd.read_csv(path_to_metadata)

## clean up and add filename column
meta2 = meta.drop(['svg','png','Unnamed: 0'],axis=1)
filename = []
games = []
for i,row in meta2.iterrows():
    filename.append('gameID_{}_trial_{}.png'.format(row['gameID'],row['trialNum']))
    games.append([])
meta2['filename'] = filename
meta2['games'] = games

## write out metadata to json file
stimdict = meta2.to_dict(orient='records')
import json
with open('sketchpad_basic_recog_meta.js', 'w') as fout:
    json.dump(stimdict, fout)

J = json.loads(open('sketchpad_basic_recog_meta.js', mode='r').read())
assert len(J)==len(meta2)

'{} unique games.'.format(len(np.unique(meta2.gameID.values)))
experiments/recog/preprocess_sketches.ipynb
judithfan/graphcomm
mit
upload stim dictionary to mongo (db = 'stimuli', collection='sketchpad_basic_recog')
# set vars auth = pd.read_csv('auth.txt', header = None) # this auth.txt file contains the password for the sketchloop user pswd = auth.values[0][0] user = 'sketchloop' host = 'rxdhawkins.me' ## cocolab ip address # have to fix this to be able to analyze from local import pymongo as pm conn = pm.MongoClient('mongodb://sketchloop:' + pswd + '@127.0.0.1') db = conn['stimuli'] coll = db['sketchpad_basic_pilot2_sketches'] ## actually add data now to the database for (i,j) in enumerate(J): if i%100==0: print ('%d of %d' % (i,len(J))) coll.insert_one(j) ## How many sketches have been retrieved at least once? equivalent to: coll.find({'numGames':{'$exists':1}}).count() coll.find({'numGames':{'$gte':0}}).count() ## stashed away handy querying things # coll.find({'numGames':{'$gte':1}}).sort('trialNum')[0] # from bson.objectid import ObjectId # coll.find({'_id':ObjectId('5a9a003d47e3d54db0bf33cc')}).count()
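The numGames/games bookkeeping queried above is filled in later by the recognition experiment itself; a hypothetical update of those fields (the filename and game identifier below are made up for illustration) could look like:

# Hypothetical example: increment a sketch's retrieval counter and record the game that used it
coll.update_one({'filename': 'gameID_XXXX_trial_0.png'},      # placeholder filename
                {'$inc': {'numGames': 1},
                 '$push': {'games': 'some_recog_gameID'}})    # placeholder game identifier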
experiments/recog/preprocess_sketches.ipynb
judithfan/graphcomm
mit
crop 3d objects
import os from PIL import Image def RGBA2RGB(image, color=(255, 255, 255)): """Alpha composite an RGBA Image with a specified color. Simpler, faster version than the solutions above. Source: http://stackoverflow.com/a/9459208/284318 Keyword Arguments: image -- PIL RGBA Image object color -- Tuple r, g, b (default 255, 255, 255) """ image.load() # needed for split() background = Image.new('RGB', image.size, color) background.paste(image, mask=image.split()[3]) # 3 is the alpha channel return background def load_and_crop_image(path, dest='object_cropped', imsize=224): im = Image.open(path) # if np.array(im).shape[-1] == 4: # im = RGBA2RGB(im) # crop to sketch only arr = np.asarray(im) if len(arr.shape)==2: w,h = np.where(arr!=127) else: w,h,d = np.where(arr!=127) # where the image is not white if len(h)==0: print(path) xlb = min(h) xub = max(h) ylb = min(w) yub = max(w) lb = min([xlb,ylb]) ub = max([xub,yub]) im = im.crop((lb, lb, ub, ub)) im = im.resize((imsize, imsize), Image.ANTIALIAS) objname = path.split('/')[-1] if not os.path.exists(dest): os.makedirs(dest) im.save(os.path.join(dest,objname)) run_this = 0 if run_this: ## actually crop images now data_dir = './object' allobjs = ['./object/' + i for i in os.listdir(data_dir)] for o in allobjs: load_and_crop_image(o) run_this = 0 if run_this: ## rename objects in folder data_dir = './object' allobjs = [data_dir + '/' + i for i in os.listdir(data_dir) if i != '.DS_Store'] for o in allobjs: if len(o.split('_'))==4: os.rename(o, os.path.join(data_dir, o.split('/')[-1].split('_')[2] + '.png'))
experiments/recog/preprocess_sketches.ipynb
judithfan/graphcomm
mit
&larr; Back to Index Audio Representation In performance, musicians convert sheet music representations into sound which is transmitted through the air as air pressure oscillations. In essence, sound is simply air vibrating (Wikipedia). Sound vibrates through the air as longitudinal waves, i.e. the oscillations are parallel to the direction of propagation. Audio refers to the production, transmission, or reception of sounds that are audible by humans. An audio signal is a representation of sound that represents the fluctuation in air pressure caused by the vibration as a function of time. Unlike sheet music or symbolic representations, audio representations encode everything that is necessary to reproduce an acoustic realization of a piece of music. However, note parameters such as onsets, durations, and pitches are not encoded explicitly. This makes converting from an audio representation to a symbolic representation a difficult and ill-defined task. Waveforms and the Time Domain The basic representation of an audio signal is in the time domain. Let's listen to a file:
x, sr = librosa.load('audio/c_strum.wav') ipd.Audio(x, rate=sr)
audio_representation.ipynb
stevetjoa/stanford-mir
mit
(If you get an error using librosa.load, you may need to install ffmpeg.) The change in air pressure at a certain time is graphically represented by a pressure-time plot, or simply waveform. To plot a waveform, use librosa.display.waveplot:
plt.figure(figsize=(15, 5)) librosa.display.waveplot(x, sr, alpha=0.8)
audio_representation.ipynb
stevetjoa/stanford-mir
mit
Digital computers can only capture this data at discrete moments in time. The rate at which a computer captures audio data is called the sampling frequency (often abbreviated fs) or sampling rate (often abbreviated sr). For this workshop, we will mostly work with a sampling frequency of 44100 Hz, the sampling rate of CD recordings. Timbre: Temporal Indicators Timbre is the quality of sound that distinguishes the tone of different instruments and voices even if the sounds have the same pitch and loudness. One characteristic of timbre is its temporal evolution. The envelope of a signal is a smooth curve that approximates the amplitude extremes of a waveform over time. Envelopes are often modeled by the ADSR model (Wikipedia) which describes four phases of a sound: attack, decay, sustain, release. During the attack phase, the sound builds up, usually with noise-like components over a broad frequency range. Such a noise-like short-duration sound at the start of a sound is often called a transient. During the decay phase, the sound stabilizes and reaches a steady periodic pattern. During the sustain phase, the energy remains fairly constant. During the release phase, the sound fades away. The ADSR model is a simplification and does not necessarily model the amplitude envelopes of all sounds.
ipd.Image("https://upload.wikimedia.org/wikipedia/commons/thumb/e/ea/ADSR_parameter.svg/640px-ADSR_parameter.svg.png")
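As an illustration of the ADSR idea (not part of the original notebook), a piecewise-linear envelope can be synthesized and applied to a tone; the phase durations and sustain level below are arbitrary choices:

import numpy
import IPython.display as ipd

sr = 22050
attack, decay, sustain_level, sustain_time, release = 0.05, 0.1, 0.6, 0.5, 0.3  # arbitrary values
env = numpy.concatenate([
    numpy.linspace(0, 1, int(attack*sr)),               # attack: ramp up to full amplitude
    numpy.linspace(1, sustain_level, int(decay*sr)),    # decay: settle to the sustain level
    sustain_level*numpy.ones(int(sustain_time*sr)),     # sustain: hold
    numpy.linspace(sustain_level, 0, int(release*sr)),  # release: fade out
])
t = numpy.arange(len(env))/sr
x_adsr = 0.1*numpy.sin(2*numpy.pi*440*t)*env             # enveloped 440 Hz tone
ipd.Audio(x_adsr, rate=sr)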
audio_representation.ipynb
stevetjoa/stanford-mir
mit
Timbre: Spectral Indicators Another property used to characterize timbre is the existence of partials and their relative strengths. Partials are the dominant frequencies in a musical tone with the lowest partial being the fundamental frequency. The partials of a sound are visualized with a spectrogram. A spectrogram shows the intensity of frequency components over time. (See Fourier Transform and Short-Time Fourier Transform for more.) Pure Tone Let's synthesize a pure tone at 1047 Hz, concert C6:
T = 2.0 # seconds f0 = 1047.0 sr = 22050 t = numpy.linspace(0, T, int(T*sr), endpoint=False) # time variable x = 0.1*numpy.sin(2*numpy.pi*f0*t) ipd.Audio(x, rate=sr)
audio_representation.ipynb
stevetjoa/stanford-mir
mit
Display the spectrum of the pure tone:
X = scipy.fft(x[:4096]) X_mag = numpy.absolute(X) # spectral magnitude f = numpy.linspace(0, sr, 4096) # frequency variable plt.figure(figsize=(14, 5)) plt.plot(f[:2000], X_mag[:2000]) # magnitude spectrum plt.xlabel('Frequency (Hz)')
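A note on compatibility: in recent SciPy releases scipy.fft is a submodule rather than a callable, so the scipy.fft(...) calls in this and the following cells would fail there. An equivalent that works on both old and new versions (a sketch):

import numpy

# numpy's FFT gives the same result and avoids the scipy.fft module/function ambiguity
X = numpy.fft.fft(x[:4096])
X_mag = numpy.absolute(X)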
audio_representation.ipynb
stevetjoa/stanford-mir
mit
Oboe Let's listen to an oboe playing a C6:
x, sr = librosa.load('audio/oboe_c6.wav') ipd.Audio(x, rate=sr) print(x.shape)
audio_representation.ipynb
stevetjoa/stanford-mir
mit
Display the spectrum of the oboe:
X = scipy.fft(x[10000:14096]) X_mag = numpy.absolute(X) plt.figure(figsize=(14, 5)) plt.plot(f[:2000], X_mag[:2000]) # magnitude spectrum plt.xlabel('Frequency (Hz)')
audio_representation.ipynb
stevetjoa/stanford-mir
mit
Clarinet Let's listen to a clarinet playing a concert C6:
x, sr = librosa.load('audio/clarinet_c6.wav') ipd.Audio(x, rate=sr) print(x.shape) X = scipy.fft(x[10000:14096]) X_mag = numpy.absolute(X) plt.figure(figsize=(14, 5)) plt.plot(f[:2000], X_mag[:2000]) # magnitude spectrum plt.xlabel('Frequency (Hz)')
audio_representation.ipynb
stevetjoa/stanford-mir
mit
Dependence on how many states are added Here we examine whether the distribution of synaptic influences depends on how many states are stored in the network. First we start with only two
n_dim = 400 nn = Hopfield(n_dim=n_dim, T=T, prng=prng) list_of_patterns = nn.generate_random_patterns(n_dim) nn.train(list_of_patterns, normalize=normalize)
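To actually look at the distribution of synaptic influences, one could histogram the trained connectivity matrix; this is only a sketch, and the attribute name w for the weight matrix is an assumption about the Hopfield class used here:

import matplotlib.pyplot as plt

# Assumption: the trained weights are exposed as nn.w (an n_dim x n_dim array)
plt.hist(nn.w.flatten(), bins=50)
plt.xlabel('Synaptic weight')
plt.ylabel('Count');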
notebooks/2016-12-11(Study of connectivity distribution).ipynb
h-mayorquin/hopfield_sequences
mit