(These are long novels!) We can also group and slice our dataframe to do further analyses.
###Ex: print the average novel length for male authors and female authors
###### What conclusions might you draw from this?
###Ex: graph the average novel length by gender
##Ex: Add error bars to your graph
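If you get stuck, here is one possible sketch. The column names 'text_length' and 'author_gender' are assumptions, not necessarily the names used in this notebook; substitute whatever columns your dataframe actually has.

# Sketch solution -- 'text_length' and 'author_gender' are assumed column names.
import matplotlib.pyplot as plt

by_gender = df.groupby('author_gender')['text_length']
print(by_gender.mean())  # average novel length for male vs. female authors

# bar chart of the averages, with standard-deviation error bars
by_gender.mean().plot(kind='bar', yerr=by_gender.std())
plt.ylabel('average number of words')
plt.show()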
03-Pandas_and_DTM/00-PandasAndTextAnalysis.ipynb
lknelson/text-analysis-2017
bsd-3-clause
Gold star exercise This one is a bit tricky. If you're not quite there, no worries! We'll work through it together. Ex: plot the average novel length by year, with error bars. Your x-axis should be year, and your y-axis number of words. HINT: Copy and paste what we did above with gender, and then change the necessary variables and options. By my count, you should only have to change one variable, and one graph option.
#Write your exercise solution here
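If you want to check your work, one possible sketch follows; as the hint says, it mirrors the gender version with one variable and one graph option changed. The column names 'year' and 'text_length' are assumptions.

# Sketch solution -- 'year' and 'text_length' are assumed column names.
import matplotlib.pyplot as plt

by_year = df.groupby('year')['text_length']
by_year.mean().plot(kind='bar', yerr=by_year.std(), figsize=(12, 4))
plt.ylabel('average number of words')
plt.show()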
03-Pandas_and_DTM/00-PandasAndTextAnalysis.ipynb
lknelson/text-analysis-2017
bsd-3-clause
<a id='lambda'></a> 4. Applying NLTK Functions and the lambda function If we want to apply nltk functions we can do so using .apply(). If we want to use list comprehension on the split text, we have to introduce one more Python trick: the lambda function. This simply allows us to write our own function to apply to each row in our dataframe. For example, we may want to tokenize our text instead of splitting on the white space. To do this we can use the lambda function. Note: If you want to explore lambda functions more, see the notebook titled A-Bonus_LambdaFunctions.ipynb in this folder. Because of the length of the novels, tokenizing the full text takes a bit of time, so we'll instead tokenize the title only.
df['title_tokens'] = df['title'].apply(nltk.word_tokenize)
df['title_tokens']
03-Pandas_and_DTM/00-PandasAndTextAnalysis.ipynb
lknelson/text-analysis-2017
bsd-3-clause
With this tokenized list we might want to, for example, remove punctuation. Again, we can use the lambda function, with list comprehension.
df['title_tokens_clean'] = df['title_tokens'].apply(lambda x: [word for word in x if word not in list(string.punctuation)])
df['title_tokens_clean']
03-Pandas_and_DTM/00-PandasAndTextAnalysis.ipynb
lknelson/text-analysis-2017
bsd-3-clause
<a id='extract'></a> 5. Extracting Text from a Dataframe We may want to extract the text from our dataframe, to do further analyses on the text only. We can do this using the tolist() function and the join() function.
novels = df['text'].tolist()
print(novels[:1])

#turn all of the novels into one long string using the join function
cat_novels = ''.join(n for n in novels)
print(cat_novels[:100])
03-Pandas_and_DTM/00-PandasAndTextAnalysis.ipynb
lknelson/text-analysis-2017
bsd-3-clause
<a id='exercise'></a> 6. Exercise: Average TTR (if time, otherwise do on your own) Motivating Question: Is there a difference in the average TTR for male and female authors? To answer this, go step by step. For computational reasons we will use the list we created by splitting on white space rather than the tokenized text, so this is approximate only. We first need to count the number of token types in each novel. We can do this in two steps. First, create a column that contains a list of the unique token types, by applying the set function.
##Ex: create a new column, 'text_type', which contains a list of unique token types
##Ex: create a new column, 'type_count', which is a count of the token types in each novel
##Ex: create a new column, 'ttr', which contains the type-token ratio for each novel
##Ex: Print the average ttr by author gender
##Ex: Graph this result with error bars
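A sketch of one possible solution; it assumes the whitespace-split text lives in a column called 'text_split' and the gender label in 'author_gender' (adjust to the actual names used earlier in this notebook).

# Sketch solution -- 'text_split' and 'author_gender' are assumed column names.
import matplotlib.pyplot as plt

df['text_type'] = df['text_split'].apply(lambda x: list(set(x)))   # unique token types
df['type_count'] = df['text_type'].apply(len)                      # number of types
df['token_count'] = df['text_split'].apply(len)                    # number of tokens
df['ttr'] = df['type_count'] / df['token_count']                   # type-token ratio

by_gender = df.groupby('author_gender')['ttr']
print(by_gender.mean())
by_gender.mean().plot(kind='bar', yerr=by_gender.std())
plt.ylabel('type-token ratio')
plt.show()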
03-Pandas_and_DTM/00-PandasAndTextAnalysis.ipynb
lknelson/text-analysis-2017
bsd-3-clause
Basic usage The simplest option is to call the diagnose_tcr_ecs_tcre method of the MAGICC instance and read out the results.
with MAGICC6() as magicc:
    # you can tweak whatever parameters you want in
    # MAGICC6/run/MAGCFG_DEFAULTALL.CFG, here's a few
    # examples that might be of interest
    results = magicc.diagnose_tcr_ecs_tcre(
        CORE_CLIMATESENSITIVITY=2.75,
        CORE_DELQ2XCO2=3.65,
        CORE_HEATXCHANGE_LANDOCEAN=1.5,
    )
    print(
        "TCR is {tcr:.4f}, ECS is {ecs:.4f} and TCRE is {tcre:.6f}".format(
            **results
        )
    )
notebooks/Diagnose-TCR-ECS-TCRE.ipynb
openclimatedata/pymagicc
agpl-3.0
If we wish, we can alter the MAGICC instance's parameters before calling the diagnose_tcr_ecs_tcre method.
with MAGICC6() as magicc:
    results_default = magicc.diagnose_tcr_ecs_tcre()
    results_low_ecs = magicc.diagnose_tcr_ecs_tcre(CORE_CLIMATESENSITIVITY=1.5)
    results_high_ecs = magicc.diagnose_tcr_ecs_tcre(
        CORE_CLIMATESENSITIVITY=4.5
    )
    print(
        "Default TCR is {tcr:.4f}, ECS is {ecs:.4f} and TCRE is {tcre:.6f}".format(
            **results_default
        )
    )
    print(
        "Low TCR is {tcr:.4f}, ECS is {ecs:.4f} and TCRE is {tcre:.6f}".format(
            **results_low_ecs
        )
    )
    print(
        "High TCR is {tcr:.4f}, ECS is {ecs:.4f} and TCRE is {tcre:.6f}".format(
            **results_high_ecs
        )
    )
notebooks/Diagnose-TCR-ECS-TCRE.ipynb
openclimatedata/pymagicc
agpl-3.0
Making a plot The output also includes the timeseries that were used in the diagnosis experiment. Hence we can use the output to make a plot.
# NBVAL_IGNORE_OUTPUT
join_year = 1900
pdf = (
    results["timeseries"]
    .filter(region="World")
    .to_iamdataframe()
    .swap_time_for_year()
    .data
)
for variable, df in pdf.groupby("variable"):
    fig, axes = plt.subplots(1, 2, sharey=True, figsize=(16, 4.5))
    unit = df["unit"].unique()[0]
    for scenario, scdf in df.groupby("scenario"):
        scdf.plot(x="year", y="value", ax=axes[0], label=scenario)
        scdf.plot(x="year", y="value", ax=axes[1], label=scenario)
    axes[0].set_xlim([1750, join_year])
    axes[0].set_ylabel("{} ({})".format(variable, unit))
    axes[1].set_xlim(left=join_year)
    axes[1].legend_.remove()
    fig.tight_layout()

# NBVAL_IGNORE_OUTPUT
results["timeseries"].filter(
    scenario="abrupt-2xCO2", region="World", year=range(1795, 1905)
).timeseries()
notebooks/Diagnose-TCR-ECS-TCRE.ipynb
openclimatedata/pymagicc
agpl-3.0
3D curves and data sets To be able to display a figure in 3D, the environment has to be prepared first. Displaying 3D figures and setting their properties is a bit more cumbersome than for 2D figures. The most striking difference is that the figures are organized around so-called axes objects (roughly, think of the coordinate axes here...), and the plots themselves are created as properties of these objects, or as functions applied to them. As an example, let's plot a simple parametric 3D curve! Let this curve be the following spiral: \begin{equation} \mathbf{r}(t)=\left(\begin{array}{c} \cos(3t)\\ \sin(3t)\\ t \end{array}\right) \end{equation} First, let's generate the sampling points of the parameter $t$ in the interval $[0,2\pi]$:
t=linspace(0,2*pi,100) # 100 points between 0 and 2*pi
notebooks/Package04/3D.ipynb
oroszl/szamprob
gpl-3.0
Two things will happen in the next code cell. First, we create an axes object named ax, explicitly specifying that it should be a 3D coordinate system. Then, acting on this object with the plot function, we create the figure itself. Note that the plot function now expects three input parameters!
ax=subplot(1,1,1,projection='3d') # create a 3D coordinate axes object
ax.plot(cos(3*t),sin(3*t),t)
notebooks/Package04/3D.ipynb
oroszl/szamprob
gpl-3.0
As we saw with 2D figures, the plot function can also be used here to plot irregularly sampled data.
ax=subplot(1,1,1,projection='3d')
ax.plot(rand(10),rand(10),rand(10),'o')
notebooks/Package04/3D.ipynb
oroszl/szamprob
gpl-3.0
Style definitions are handled via keyword arguments, just as for 2D figures! Let's see an example of this too:
ax=subplot(1,1,1,projection='3d') # create a 3D coordinate axes object
ax.plot(cos(3*t),sin(3*t),t,color='green',linestyle='dashed',linewidth=3)
notebooks/Package04/3D.ipynb
oroszl/szamprob
gpl-3.0
A recurring issue when displaying 3D figures is getting a good viewing direction. The viewpoint of the figure can be set with the view_init function. The two parameters of view_init specify the viewpoint in an equatorial spherical coordinate system: the declination and the azimuth angle, both measured in degrees. For example, a figure viewed from the direction of the $x$-axis can be made like this:
ax=subplot(1,1,1,projection='3d') # create a 3D coordinate axes object
ax.plot(cos(3*t),sin(3*t),t)
ax.view_init(0,0)
notebooks/Package04/3D.ipynb
oroszl/szamprob
gpl-3.0
And from the direction of the $y$-axis like this:
ax=subplot(1,1,1,projection='3d') # create a 3D coordinate axes object
ax.plot(cos(3*t),sin(3*t),t)
ax.view_init(0,90)
notebooks/Package04/3D.ipynb
oroszl/szamprob
gpl-3.0
If we use interactive functions, the viewpoint can be changed interactively as follows:
def forog(th,phi):
    ax=subplot(1,1,1,projection='3d')
    ax.plot(sin(3*t),cos(3*t),t)
    ax.view_init(th,phi)
interact(forog,th=(-90,90),phi=(0,360));
notebooks/Package04/3D.ipynb
oroszl/szamprob
gpl-3.0
Two-variable functions and surfaces One advantage of 3D figures is that we can also display surfaces. The simplest case of this is the height-map-like plotting of two-variable functions $$z=f(x,y)$$ As usual, the first task is sampling and evaluating the function. Below, let's examine the function $$z=-[\sin(x)^{10} + \cos(10 + y x) \cos(x)]\exp((-x^2-y^2)/4)$$
x,y = meshgrid(linspace(-3,3,250),linspace(-5,5,250)) # generate the sampling points
z = -(sin(x) ** 10 + cos(10 + y * x) * cos(x))*exp((-x**2-y**2)/4) # evaluate the function
notebooks/Package04/3D.ipynb
oroszl/szamprob
gpl-3.0
We can display this function with the plot_surface function.
ax = subplot(111, projection='3d')
ax.plot_surface(x, y, z)
notebooks/Package04/3D.ipynb
oroszl/szamprob
gpl-3.0
It is often illustrative to color the drawn surface according to some color scale. We can do this with the cmap keyword, in the way already familiar from 2D figures.
ax = subplot(111, projection='3d')
ax.plot_surface(x, y, z,cmap='viridis')
notebooks/Package04/3D.ipynb
oroszl/szamprob
gpl-3.0
The most general way to specify a surface in space is via a two-parameter, vector-valued function, i.e. \begin{equation} \mathbf{r}(u,v)=\left(\begin{array}{c} f(u,v)\\ g(u,v)\\ h(u,v) \end{array}\right) \end{equation} Let's look at an example of this, where the surface to be displayed is a torus! One possible parametrization of the torus is the following: \begin{equation} \mathbf{r}(\theta,\varphi)=\left(\begin{array}{c} (R_1 + R_2 \cos \theta) \cos{\varphi}\\ (R_1 + R_2 \cos \theta) \sin{\varphi} \\ R_2 \sin \theta \end{array}\right) \end{equation} Here $R_1$ and $R_2$ are the two radius parameters of the torus, while $\theta$ and $\varphi$ both run over the interval $[0,2\pi]$. Let $R_1=4$ and $R_2=1$. Let's draw this surface! As a first step, let's generate the points of the surface to be plotted:
theta,phi=meshgrid(linspace(0,2*pi,250),linspace(0,2*pi,250))
x=(4 + 1*cos(theta))*cos(phi)
y=(4 + 1*cos(theta))*sin(phi)
z=1*sin(theta)
notebooks/Package04/3D.ipynb
oroszl/szamprob
gpl-3.0
We can again plot it using the plot_surface function:
ax = subplot(111, projection='3d')
ax.plot_surface(x, y, z)
notebooks/Package04/3D.ipynb
oroszl/szamprob
gpl-3.0
We can make the figure above a bit better proportioned by adjusting the display aspect ratio of the axes and the axis limits. This can be done with the set_aspect, set_xlim, set_ylim and set_zlim functions:
ax = subplot(111, projection='3d')
ax.plot_surface(x, y, z)
ax.set_aspect('equal');
ax.set_xlim(-5,5);
ax.set_ylim(-5,5);
ax.set_zlim(-5,5);
notebooks/Package04/3D.ipynb
oroszl/szamprob
gpl-3.0
Finally, let's make this figure interactive as well:
def forog(th,ph):
    ax = subplot(111, projection='3d')
    ax.plot_surface(x, y, z)
    ax.view_init(th,ph)
    ax.set_aspect('equal');
    ax.set_xlim(-5,5);
    ax.set_ylim(-5,5);
    ax.set_zlim(-5,5);
interact(forog,th=(-90,90),ph=(0,360));
notebooks/Package04/3D.ipynb
oroszl/szamprob
gpl-3.0
Force fields in 3D Vector fields in space, i.e. functions that assign a three-dimensional vector to every point of space, can be displayed with the quiver command, just as we saw for 2D figures. In the example below, we draw a radially pointing vector at each of 100 points on the surface of the unit sphere:
phiv,thv=(2*pi*rand(100),pi*rand(100))                     # these two lines pick 100 random points
xv,yv,zv=(cos(phiv)*sin(thv),sin(phiv)*sin(thv),cos(thv))  # on the surface of the unit sphere
uv,vv,wv=(xv,yv,zv)                                        # and this assigns a radial vector to each point
ax = subplot(111, projection='3d')
ax.quiver(xv, yv, zv, uv, vv, wv, length=0.3,color='darkcyan')
ax.set_aspect('equal')
notebooks/Package04/3D.ipynb
oroszl/szamprob
gpl-3.0
How can we find out the type of a variable? (a) Use the print() function and determine the type by looking at the output. (b) Use the type() function. (c) Use it in an expression and call print() on the result. (d) Look at the place where the variable was declared. If a="10" and b="Diez", we can say that a and b: (a) Are of the same type. (b) Can be multiplied. (c) Are equal. (d) Are of different types.
a='10'
b='Diez'
type(b)
Teaching Materials/Programming/Python/Python3Espanol/1_Introduccion/02. Variables, tipos y operaciones.ipynb
astro4dev/OAD-Data-Science-Toolkit
gpl-3.0
Operations between types
int(3.14)
int(3.9999)  # Does it round?
int?
int
int(3.0)
int(3)
int("12")
int("twelve")  # this one raises a ValueError
float(3)
float?
str(3)
str(3.0)
str(int(2.9999))
Teaching Materials/Programming/Python/Python3Espanol/1_Introduccion/02. Variables, tipos y operaciones.ipynb
astro4dev/OAD-Data-Science-Toolkit
gpl-3.0
Variable names
hola=10
hola
mi variable=10
mi_variable=10
mi-variable=10
mi.variable=10
mi$variable=10
variable_1=34
1_variable=34
pi=3.1315
def=10
Teaching Materials/Programming/Python/Python3Espanol/1_Introduccion/02. Variables, tipos y operaciones.ipynb
astro4dev/OAD-Data-Science-Toolkit
gpl-3.0
Model 2: Apply data cleanup Recall that we did some data cleanup in the previous lab. Let's apply the same cleanup before training. This is a dataset that we will need quite frequently in this notebook, so let's extract it into a table first.
%%bigquery
CREATE OR REPLACE TABLE serverlessml.cleaned_training_data AS
SELECT
  (tolls_amount + fare_amount) AS fare_amount,
  pickup_longitude AS pickuplon,
  pickup_latitude AS pickuplat,
  dropoff_longitude AS dropofflon,
  dropoff_latitude AS dropofflat,
  passenger_count*1.0 AS passengers
FROM `nyc-tlc.yellow.trips`
WHERE ABS(MOD(FARM_FINGERPRINT(CAST(pickup_datetime AS STRING)), 100000)) = 1
  AND trip_distance > 0
  AND fare_amount >= 2.5
  AND pickup_longitude > -78
  AND pickup_longitude < -70
  AND dropoff_longitude > -78
  AND dropoff_longitude < -70
  AND pickup_latitude > 37
  AND pickup_latitude < 45
  AND dropoff_latitude > 37
  AND dropoff_latitude < 45
  AND passenger_count > 0

%%bigquery
-- LIMIT 0 is a free query, this allows us to check that the table exists.
SELECT * FROM serverlessml.cleaned_training_data
LIMIT 0

%%bigquery
CREATE OR REPLACE MODEL serverlessml.model2_cleanup
OPTIONS(input_label_cols=['fare_amount'], model_type='linear_reg') AS
SELECT * FROM serverlessml.cleaned_training_data

%%bigquery
SELECT SQRT(mean_squared_error) AS rmse
FROM ML.EVALUATE(MODEL serverlessml.model2_cleanup)
notebooks/launching_into_ml/solutions/2_first_model.ipynb
GoogleCloudPlatform/asl-ml-immersion
apache-2.0
Note that, despite their completely different appearance, both affinity matrices contain the same values, but with a different order of rows and columns. For this dataset, the sorted affinity matrix is almost block diagonal. Note, also, that the block-wise form of this matrix depends on the parameter $\gamma$. Exercise 2: Modify the selection of $\gamma$, and check its effect on the appearance of the sorted similarity matrix. Write down the values for which you consider that the structure of the matrix better resembles the number of clusters in the dataset. Outside the diagonal blocks, similarities are close to zero. We can enforce a block-diagonal structure by setting the small similarity values to zero. For instance, by thresholding ${\bf K}_s$ with threshold $t$, we get the truncated (and sorted) affinity matrix $$ \overline{K}_{s,ij} = K_{s,ij} \cdot \text{u}(K_{s,ij} - t) $$ (where $\text{u}(\cdot)$ is the step function), which is block diagonal. Exercise 3: Compute the truncated and sorted affinity matrix with $t=0.001$
t = 0.001
# Kt = <FILL IN>    # Truncated affinity matrix
Kt = K*(K>t)        # Truncated affinity matrix
# Kst = <FILL IN>   # Truncated and sorted affinity matrix
Kst = Ks*(Ks>t)     # Truncated and sorted affinity matrix
# </SOL>
U2.SpectralClustering/.ipynb_checkpoints/SpecClustering-checkpoint.ipynb
ML4DS/ML4all
mit
Note that, although the eigenvector components cannot be used as a straightforward cluster indicator, they are strongly informative of the clustering structure. All points in the same cluster have similar values of the corresponding eigenvector components $(v_{n0}, \ldots, v_{n,c-1})$. Points from different clusters have different values of the corresponding eigenvector components $(v_{n0}, \ldots, v_{n,c-1})$. Therefore we can define vectors ${\bf z}^{(n)} = (v_{n0}, \ldots, v_{n,c-1})$ and apply a centroid-based algorithm (like $K$-means) to identify all points with similar eigenvector components. The corresponding samples in ${\bf X}$ become the final clusters of the spectral clustering algorithm. One possible way to identify the cluster structure is to apply a $K$-means algorithm over the eigenvector coordinates. 5. A spectral clustering (graph cutting) algorithm 5.1. The steps of the spectral clustering algorithm. Summarizing, the steps of the spectral clustering algorithm for a data matrix ${\bf X}$ are the following: Compute the affinity matrix, ${\bf K}$. Optionally, truncate the smallest components to zero. Compute the Laplacian matrix, ${\bf L}$. Compute the $c$ orthogonal eigenvectors with smallest eigenvalues, ${\bf v}_0,\ldots,{\bf v}_{c-1}$. Construct the sample set ${\bf Z}$ with rows ${\bf z}^{(n)} = (v_{n0}, \ldots, v_{n,c-1})$. Apply the $K$-means algorithm over ${\bf Z}$ with $K=c$ centroids. Assign samples in ${\bf X}$ to clusters: if ${\bf z}^{(n)}$ is assigned by $K$-means to cluster $i$, assign sample ${\bf x}^{(n)}$ in ${\bf X}$ to cluster $i$. Exercise 7: In this exercise we will apply the spectral clustering algorithm to the two-rings dataset ${\bf X}_2$, using $\gamma = 20$, $t=0.1$ and $c = 2$ clusters. Complete step 1, and plot the graph induced by ${\bf K}$
# <SOL>
g = 20
t = 0.1
K2 = rbf_kernel(X2, X2, gamma=g)
K2t = K2*(K2>t)
G2 = nx.from_numpy_matrix(K2t)
graphplot = nx.draw(G2, X2, node_size=40, width=0.5)
plt.axis('equal')
plt.show()
# </SOL>
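For reference, the remaining steps (2-6) can be sketched as follows. This is a minimal sketch built on the K2t matrix computed above, using an unnormalized Laplacian and scikit-learn's KMeans; the original notebook may use a different (e.g. normalized) variant.

# Sketch of steps 2-6 of the spectral clustering algorithm (assumes K2t from above).
import numpy as np
from sklearn.cluster import KMeans

c = 2                                     # number of clusters
D = np.diag(K2t.sum(axis=1))              # degree matrix
L = D - K2t                               # unnormalized graph Laplacian

# eigenvectors of the symmetric Laplacian, sorted by increasing eigenvalue
eigvals, eigvecs = np.linalg.eigh(L)
Z = eigvecs[:, :c]                        # rows are z^(n) = (v_{n0}, ..., v_{n,c-1})

# K-means on the eigenvector coordinates; the labels are the spectral clusters of X2
labels = KMeans(n_clusters=c, n_init=10).fit_predict(Z)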
U2.SpectralClustering/.ipynb_checkpoints/SpecClustering-checkpoint.ipynb
ML4DS/ML4all
mit
Visualize the CelebA Data The CelebA dataset contains over 200,000 celebrity images with annotations. Since you're going to be generating faces, you won't need the annotations, you'll only need the images. Note that these are color images with 3 color channels (RGB) each. Pre-process and Load the Data Since the project's main focus is on building the GANs, we've done some of the pre-processing for you. Each of the CelebA images has been cropped to remove parts of the image that don't include a face, then resized down to 64x64x3 NumPy images. This pre-processed dataset is a smaller subset of the very large CelebA data. There are a few other steps that you'll need to transform this data and create a DataLoader. Exercise: Complete the following get_dataloader function, such that it satisfies these requirements: Your images should be square, Tensor images of size image_size x image_size in the x and y dimension. Your function should return a DataLoader that shuffles and batches these Tensor images. ImageFolder To create a dataset given a directory of images, it's recommended that you use PyTorch's ImageFolder wrapper, with a root directory processed_celeba_small/ and data transformation passed in.
# necessary imports
import torch
from torchvision import datasets
from torchvision import transforms

def get_dataloader(batch_size, image_size, data_dir='processed_celeba_small/'):
    """
    Batch the neural network data using DataLoader
    :param batch_size: The size of each batch; the number of images in a batch
    :param img_size: The square size of the image data (x, y)
    :param data_dir: Directory where image data is located
    :return: DataLoader with batched data
    """
    # TODO: Implement function and return a dataloader
    
    return None
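If you want a starting point, one possible implementation sketch is below (one of many valid ones); it follows the ImageFolder + transforms recipe described above and is not the official solution.

# Sketch implementation (assumes processed_celeba_small/ works with ImageFolder).
from torch.utils.data import DataLoader

def get_dataloader_sketch(batch_size, image_size, data_dir='processed_celeba_small/'):
    transform = transforms.Compose([
        transforms.Resize(image_size),       # shrink so the short side is image_size
        transforms.CenterCrop(image_size),   # square crop of size image_size x image_size
        transforms.ToTensor()                # [0, 1] Tensor images
    ])
    dataset = datasets.ImageFolder(data_dir, transform=transform)
    return DataLoader(dataset, batch_size=batch_size, shuffle=True)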
DEEP LEARNING/Pytorch from scratch/TODO/GAN/project-face-generation/dlnd_face_generation.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Create a DataLoader Exercise: Create a DataLoader celeba_train_loader with appropriate hyperparameters. Call the above function and create a dataloader to view images. * You can decide on any reasonable batch_size parameter * Your image_size must be 32. Resizing the data to a smaller size will make for faster training, while still creating convincing images of faces!
# Define function hyperparameters
batch_size = 
img_size = 

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# Call your function and get a dataloader
celeba_train_loader = get_dataloader(batch_size, img_size)
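For example (the batch size is just a reasonable guess; only the 32-pixel image size is required):

# Example hyperparameters -- batch_size is a free choice, img_size must be 32.
batch_size = 128
img_size = 32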
DEEP LEARNING/Pytorch from scratch/TODO/GAN/project-face-generation/dlnd_face_generation.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Next, you can view some images! You should see square images of somewhat-centered faces. Note: You'll need to convert the Tensor images into a NumPy type and transpose the dimensions to correctly display an image; suggested imshow code is below, but it may not be perfect.
# helper display function
def imshow(img):
    npimg = img.numpy()
    plt.imshow(np.transpose(npimg, (1, 2, 0)))

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# obtain one batch of training images
dataiter = iter(celeba_train_loader)
images, _ = dataiter.next() # _ for no labels

# plot the images in the batch, along with the corresponding labels
fig = plt.figure(figsize=(20, 4))
plot_size=20
for idx in np.arange(plot_size):
    ax = fig.add_subplot(2, plot_size/2, idx+1, xticks=[], yticks=[])
    imshow(images[idx])
DEEP LEARNING/Pytorch from scratch/TODO/GAN/project-face-generation/dlnd_face_generation.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Exercise: Pre-process your image data and scale it to a pixel range of -1 to 1 You need to do a bit of pre-processing; you know that the output of a tanh activated generator will contain pixel values in a range from -1 to 1, and so, we need to rescale our training images to a range of -1 to 1. (Right now, they are in a range from 0-1.)
# TODO: Complete the scale function
def scale(x, feature_range=(-1, 1)):
    ''' Scale takes in an image x and returns that image, scaled
       with a feature_range of pixel values from -1 to 1.
       This function assumes that the input x is already scaled from 0-1.'''
    # assume x is scaled to (0, 1)
    # scale to feature_range and return scaled x
    
    return x

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
# check scaled range
# should be close to -1 to 1
img = images[0]
scaled_img = scale(img)

print('Min: ', scaled_img.min())
print('Max: ', scaled_img.max())
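A possible implementation of the scaling step, assuming the inputs really are in [0, 1]:

# Sketch implementation of scale (assumes x is already in [0, 1]).
def scale_sketch(x, feature_range=(-1, 1)):
    low, high = feature_range
    return x * (high - low) + low   # maps [0, 1] -> [low, high], e.g. [-1, 1]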
DEEP LEARNING/Pytorch from scratch/TODO/GAN/project-face-generation/dlnd_face_generation.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Define the Model A GAN is comprised of two adversarial networks, a discriminator and a generator. Discriminator Your first task will be to define the discriminator. This is a convolutional classifier like you've built before, only without any maxpooling layers. To deal with this complex data, it's suggested you use a deep network with normalization. You are also allowed to create any helper functions that may be useful. Exercise: Complete the Discriminator class The inputs to the discriminator are 32x32x3 tensor images The output should be a single value that will indicate whether a given image is real or fake
import torch.nn as nn
import torch.nn.functional as F

class Discriminator(nn.Module):

    def __init__(self, conv_dim):
        """
        Initialize the Discriminator Module
        :param conv_dim: The depth of the first convolutional layer
        """
        super(Discriminator, self).__init__()

        # complete init function

    def forward(self, x):
        """
        Forward propagation of the neural network
        :param x: The input to the neural network
        :return: Discriminator logits; the output of the neural network
        """
        # define feedforward behavior

        return x

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_discriminator(Discriminator)
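One possible architecture sketch (not the official solution): three strided convolutions with batch norm after the first layer, followed by a fully-connected layer that outputs a single logit.

# Sketch of a DCGAN-style discriminator for 32x32x3 inputs (one of many valid designs).
class DiscriminatorSketch(nn.Module):
    def __init__(self, conv_dim):
        super(DiscriminatorSketch, self).__init__()
        self.conv_dim = conv_dim
        self.conv1 = nn.Conv2d(3, conv_dim, 4, stride=2, padding=1)                 # 32 -> 16
        self.conv2 = nn.Conv2d(conv_dim, conv_dim * 2, 4, stride=2, padding=1)      # 16 -> 8
        self.bn2 = nn.BatchNorm2d(conv_dim * 2)
        self.conv3 = nn.Conv2d(conv_dim * 2, conv_dim * 4, 4, stride=2, padding=1)  # 8 -> 4
        self.bn3 = nn.BatchNorm2d(conv_dim * 4)
        self.fc = nn.Linear(conv_dim * 4 * 4 * 4, 1)                                # single logit

    def forward(self, x):
        x = F.leaky_relu(self.conv1(x), 0.2)
        x = F.leaky_relu(self.bn2(self.conv2(x)), 0.2)
        x = F.leaky_relu(self.bn3(self.conv3(x)), 0.2)
        x = x.view(-1, self.conv_dim * 4 * 4 * 4)
        return self.fc(x)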
DEEP LEARNING/Pytorch from scratch/TODO/GAN/project-face-generation/dlnd_face_generation.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Generator The generator should upsample an input and generate a new image of the same size as our training data 32x32x3. This should be mostly transpose convolutional layers with normalization applied to the outputs. Exercise: Complete the Generator class The inputs to the generator are vectors of some length z_size The output should be a image of shape 32x32x3
class Generator(nn.Module):

    def __init__(self, z_size, conv_dim):
        """
        Initialize the Generator Module
        :param z_size: The length of the input latent vector, z
        :param conv_dim: The depth of the inputs to the *last* transpose convolutional layer
        """
        super(Generator, self).__init__()

        # complete init function

    def forward(self, x):
        """
        Forward propagation of the neural network
        :param x: The input to the neural network
        :return: A 32x32x3 Tensor image as output
        """
        # define feedforward behavior

        return x

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_generator(Generator)
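A matching generator sketch (again, just one reasonable design): a fully-connected layer reshaped to a 4x4 feature map, then three transpose convolutions up to 32x32x3 with a tanh output.

# Sketch of a DCGAN-style generator producing 32x32x3 images (one of many valid designs).
class GeneratorSketch(nn.Module):
    def __init__(self, z_size, conv_dim):
        super(GeneratorSketch, self).__init__()
        self.conv_dim = conv_dim
        self.fc = nn.Linear(z_size, conv_dim * 4 * 4 * 4)                                       # z -> 4x4 map
        self.t_conv1 = nn.ConvTranspose2d(conv_dim * 4, conv_dim * 2, 4, stride=2, padding=1)   # 4 -> 8
        self.bn1 = nn.BatchNorm2d(conv_dim * 2)
        self.t_conv2 = nn.ConvTranspose2d(conv_dim * 2, conv_dim, 4, stride=2, padding=1)       # 8 -> 16
        self.bn2 = nn.BatchNorm2d(conv_dim)
        self.t_conv3 = nn.ConvTranspose2d(conv_dim, 3, 4, stride=2, padding=1)                  # 16 -> 32

    def forward(self, x):
        x = self.fc(x)
        x = x.view(-1, self.conv_dim * 4, 4, 4)
        x = F.relu(self.bn1(self.t_conv1(x)))
        x = F.relu(self.bn2(self.t_conv2(x)))
        return torch.tanh(self.t_conv3(x))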
DEEP LEARNING/Pytorch from scratch/TODO/GAN/project-face-generation/dlnd_face_generation.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Initialize the weights of your networks To help your models converge, you should initialize the weights of the convolutional and linear layers in your model. The original DCGAN paper says: All weights were initialized from a zero-centered Normal distribution with standard deviation 0.02. So, your next task will be to define a weight initialization function that does just this! You can refer back to the lesson on weight initialization or even consult existing model code, such as the networks.py file in the CycleGAN Github repository, to help you complete this function. Exercise: Complete the weight initialization function This should initialize only convolutional and linear layers Initialize the weights to a normal distribution, centered around 0, with a standard deviation of 0.02. The bias terms, if they exist, may be left alone or set to 0.
def weights_init_normal(m):
    """
    Applies initial weights to certain layers in a model.
    The weights are taken from a normal distribution
    with mean = 0, std dev = 0.02.
    :param m: A module or layer in a network
    """
    # classname will be something like:
    # `Conv`, `BatchNorm2d`, `Linear`, etc.
    classname = m.__class__.__name__

    # TODO: Apply initial weights to convolutional and linear layers
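A sketch of the weight initialization, checking the class name for Conv or Linear layers:

# Sketch implementation (initializes Conv* and Linear layers from N(0, 0.02), bias to 0).
def weights_init_normal_sketch(m):
    classname = m.__class__.__name__
    if 'Conv' in classname or 'Linear' in classname:
        nn.init.normal_(m.weight.data, 0.0, 0.02)
        if hasattr(m, 'bias') and m.bias is not None:
            nn.init.constant_(m.bias.data, 0.0)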
DEEP LEARNING/Pytorch from scratch/TODO/GAN/project-face-generation/dlnd_face_generation.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Build complete network Define your models' hyperparameters and instantiate the discriminator and generator from the classes defined above. Make sure you've passed in the correct input arguments.
""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ def build_network(d_conv_dim, g_conv_dim, z_size): # define discriminator and generator D = Discriminator(d_conv_dim) G = Generator(z_size=z_size, conv_dim=g_conv_dim) # initialize model weights D.apply(weights_init_normal) G.apply(weights_init_normal) print(D) print() print(G) return D, G
DEEP LEARNING/Pytorch from scratch/TODO/GAN/project-face-generation/dlnd_face_generation.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Exercise: Define model hyperparameters
# Define model hyperparams
d_conv_dim = 
g_conv_dim = 
z_size = 

"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
D, G = build_network(d_conv_dim, g_conv_dim, z_size)
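Example values (these are common DCGAN-style choices, not required ones):

# Example hyperparameters -- reasonable defaults, not the only valid choice.
d_conv_dim = 64
g_conv_dim = 64
z_size = 100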
DEEP LEARNING/Pytorch from scratch/TODO/GAN/project-face-generation/dlnd_face_generation.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Training on GPU Check if you can train on GPU. Here, we'll set this as a boolean variable train_on_gpu. Later, you'll be responsible for making sure that Models, Model inputs, and Loss function arguments Are moved to GPU, where appropriate.
""" DON'T MODIFY ANYTHING IN THIS CELL """ import torch # Check for a GPU train_on_gpu = torch.cuda.is_available() if not train_on_gpu: print('No GPU found. Please use a GPU to train your neural network.') else: print('Training on GPU!')
DEEP LEARNING/Pytorch from scratch/TODO/GAN/project-face-generation/dlnd_face_generation.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Discriminator and Generator Losses Now we need to calculate the losses for both types of adversarial networks. Discriminator Losses For the discriminator, the total loss is the sum of the losses for real and fake images, d_loss = d_real_loss + d_fake_loss. Remember that we want the discriminator to output 1 for real images and 0 for fake images, so we need to set up the losses to reflect that. Generator Loss The generator loss will look similar only with flipped labels. The generator's goal is to get the discriminator to think its generated images are real. Exercise: Complete real and fake loss functions You may choose to use either cross entropy or a least squares error loss to complete the following real_loss and fake_loss functions.
def real_loss(D_out):
    '''Calculates how close discriminator outputs are to being real.
       param, D_out: discriminator logits
       return: real loss'''
    loss = 
    return loss

def fake_loss(D_out):
    '''Calculates how close discriminator outputs are to being fake.
       param, D_out: discriminator logits
       return: fake loss'''
    loss = 
    return loss
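A sketch using binary cross-entropy with logits (a least squares loss would be an equally acceptable choice); train_on_gpu is the flag defined earlier.

# Sketch loss functions using BCE-with-logits (least squares is also valid).
def real_loss_sketch(D_out):
    labels = torch.ones(D_out.size(0))           # real images get label 1
    if train_on_gpu:
        labels = labels.cuda()
    criterion = nn.BCEWithLogitsLoss()
    return criterion(D_out.squeeze(), labels)

def fake_loss_sketch(D_out):
    labels = torch.zeros(D_out.size(0))          # fake images get label 0
    if train_on_gpu:
        labels = labels.cuda()
    criterion = nn.BCEWithLogitsLoss()
    return criterion(D_out.squeeze(), labels)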
DEEP LEARNING/Pytorch from scratch/TODO/GAN/project-face-generation/dlnd_face_generation.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Optimizers Exercise: Define optimizers for your Discriminator (D) and Generator (G) Define optimizers for your models with appropriate hyperparameters.
import torch.optim as optim

# Create optimizers for the discriminator D and generator G
d_optimizer = 
g_optimizer = 
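A typical choice, following the DCGAN paper's Adam settings; the learning rate and betas below are common values, not mandated by the project.

# Example optimizers -- Adam with DCGAN-style hyperparameters (assumed, not required values).
lr = 0.0002
beta1, beta2 = 0.5, 0.999
d_optimizer = optim.Adam(D.parameters(), lr, [beta1, beta2])
g_optimizer = optim.Adam(G.parameters(), lr, [beta1, beta2])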
DEEP LEARNING/Pytorch from scratch/TODO/GAN/project-face-generation/dlnd_face_generation.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Training Training will involve alternating between training the discriminator and the generator. You'll use your functions real_loss and fake_loss to help you calculate the discriminator losses. You should train the discriminator by alternating on real and fake images Then the generator, which tries to trick the discriminator and should have an opposing loss function Saving Samples You've been given some code to print out some loss statistics and save some generated "fake" samples. Exercise: Complete the training function Keep in mind that, if you've moved your models to GPU, you'll also have to move any model inputs to GPU.
def train(D, G, n_epochs, print_every=50): '''Trains adversarial networks for some number of epochs param, D: the discriminator network param, G: the generator network param, n_epochs: number of epochs to train for param, print_every: when to print and record the models' losses return: D and G losses''' # move models to GPU if train_on_gpu: D.cuda() G.cuda() # keep track of loss and generated, "fake" samples samples = [] losses = [] # Get some fixed data for sampling. These are images that are held # constant throughout training, and allow us to inspect the model's performance sample_size=16 fixed_z = np.random.uniform(-1, 1, size=(sample_size, z_size)) fixed_z = torch.from_numpy(fixed_z).float() # move z to GPU if available if train_on_gpu: fixed_z = fixed_z.cuda() # epoch training loop for epoch in range(n_epochs): # batch training loop for batch_i, (real_images, _) in enumerate(celeba_train_loader): batch_size = real_images.size(0) real_images = scale(real_images) # =============================================== # YOUR CODE HERE: TRAIN THE NETWORKS # =============================================== # 1. Train the discriminator on real and fake images d_loss = # 2. Train the generator with an adversarial loss g_loss = # =============================================== # END OF YOUR CODE # =============================================== # Print some loss stats if batch_i % print_every == 0: # append discriminator loss and generator loss losses.append((d_loss.item(), g_loss.item())) # print discriminator and generator loss print('Epoch [{:5d}/{:5d}] | d_loss: {:6.4f} | g_loss: {:6.4f}'.format( epoch+1, n_epochs, d_loss.item(), g_loss.item())) ## AFTER EACH EPOCH## # this code assumes your generator is named G, feel free to change the name # generate and save sample, fake images G.eval() # for generating samples samples_z = G(fixed_z) samples.append(samples_z) G.train() # back to training mode # Save training generator samples with open('train_samples.pkl', 'wb') as f: pkl.dump(samples, f) # finally return losses return losses
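The two marked steps inside the batch loop might look roughly like this; it is a sketch that assumes the real_loss/fake_loss helpers and the z_size hyperparameter defined above, and it is not the only correct ordering.

# Sketch of the two training steps inside the batch loop (uses real_loss/fake_loss from above).

# 1. Train the discriminator on real and fake images
d_optimizer.zero_grad()
if train_on_gpu:
    real_images = real_images.cuda()
d_real = D(real_images)
z = torch.from_numpy(np.random.uniform(-1, 1, size=(batch_size, z_size))).float()
if train_on_gpu:
    z = z.cuda()
fake_images = G(z)
d_fake = D(fake_images)
d_loss = real_loss(d_real) + fake_loss(d_fake)
d_loss.backward()
d_optimizer.step()

# 2. Train the generator with an adversarial loss (flipped labels)
g_optimizer.zero_grad()
z = torch.from_numpy(np.random.uniform(-1, 1, size=(batch_size, z_size))).float()
if train_on_gpu:
    z = z.cuda()
fake_images = G(z)
g_loss = real_loss(D(fake_images))   # we want the discriminator to call these real
g_loss.backward()
g_optimizer.step()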
DEEP LEARNING/Pytorch from scratch/TODO/GAN/project-face-generation/dlnd_face_generation.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Set your number of training epochs and train your GAN!
# set number of epochs
n_epochs = 

"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# call training function
losses = train(D, G, n_epochs=n_epochs)
DEEP LEARNING/Pytorch from scratch/TODO/GAN/project-face-generation/dlnd_face_generation.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Training loss Plot the training losses for the generator and discriminator, recorded after each epoch.
fig, ax = plt.subplots()
losses = np.array(losses)
plt.plot(losses.T[0], label='Discriminator', alpha=0.5)
plt.plot(losses.T[1], label='Generator', alpha=0.5)
plt.title("Training Losses")
plt.legend()
DEEP LEARNING/Pytorch from scratch/TODO/GAN/project-face-generation/dlnd_face_generation.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Generator samples from training View samples of images from the generator, and answer a question about the strengths and weaknesses of your trained models.
# helper function for viewing a list of passed in sample images def view_samples(epoch, samples): fig, axes = plt.subplots(figsize=(16,4), nrows=2, ncols=8, sharey=True, sharex=True) for ax, img in zip(axes.flatten(), samples[epoch]): img = img.detach().cpu().numpy() img = np.transpose(img, (1, 2, 0)) img = ((img + 1)*255 / (2)).astype(np.uint8) ax.xaxis.set_visible(False) ax.yaxis.set_visible(False) im = ax.imshow(img.reshape((32,32,3))) # Load samples from generator, taken while training with open('train_samples.pkl', 'rb') as f: samples = pkl.load(f) _ = view_samples(-1, samples)
DEEP LEARNING/Pytorch from scratch/TODO/GAN/project-face-generation/dlnd_face_generation.ipynb
Diyago/Machine-Learning-scripts
apache-2.0
Let us perform our analysis on 2 selected days
gjw.store.window = TimeFrame(start='2015-09-03 00:00:00+01:00', end='2015-09-05 00:00:00+01:00')
gjw.set_window = TimeFrame(start='2015-09-03 00:00:00+01:00', end='2015-09-05 00:00:00+01:00')
elec = gjw.buildings[building_number].elec
mains = elec.mains()
mains.plot()
#plt.show()
house = elec['fridge'] #only one meter so any selection will do
df = house.load().next() #load the first chunk of data into a dataframe
#df.info() #check that the data is what we want (optional)
#note the data has two columns and a time index
#df.head()
#df.tail()
#df.plot()
#plt.show()
notebooks/disaggregation-hart-CO-active_only.ipynb
gjwo/nilm_gjw_data
apache-2.0
Hart Training We'll now do the training from the aggregate data. The algorithm segments the time series data into steady and transient states. Thus, we'll first figure out the transient and the steady states. Next, we'll try and pair the on and the off transitions based on their proximity in time and value.
#df.ix['2015-09-03 11:00:00+01:00':'2015-09-03 12:00:00+01:00'].plot() # select a time range and plot it
#plt.show()
h = Hart85()
h.train(mains,cols=[('power','active')])
h.steady_states
ax = mains.plot()
h.steady_states['active average'].plot(style='o', ax = ax);
plt.ylabel("Power (W)")
plt.xlabel("Time");
#plt.show()
notebooks/disaggregation-hart-CO-active_only.ipynb
gjwo/nilm_gjw_data
apache-2.0
Hart Disaggregation
disag_filename = join(data_dir, 'disag_gjw_hart.hdf5')
output = HDFDataStore(disag_filename, 'w')
h.disaggregate(mains,output,sample_period=1)
output.close()
disag_hart = DataSet(disag_filename)
disag_hart
disag_hart_elec = disag_hart.buildings[building_number].elec
disag_hart_elec
notebooks/disaggregation-hart-CO-active_only.ipynb
gjwo/nilm_gjw_data
apache-2.0
Combinatorial Optimisation training
co = CombinatorialOptimisation()
co.train(mains,cols=[('power','active')])
co.steady_states
ax = mains.plot()
co.steady_states['active average'].plot(style='o', ax = ax);
plt.ylabel("Power (W)")
plt.xlabel("Time");
disag_filename = join(data_dir, 'disag_gjw_co.hdf5')
output = HDFDataStore(disag_filename, 'w')
co.disaggregate(mains,output,sample_period=1)
output.close()
notebooks/disaggregation-hart-CO-active_only.ipynb
gjwo/nilm_gjw_data
apache-2.0
We can't compute an F1 score here because there is no submetered test data to compare the disaggregation against.
from nilmtk.metrics import f1_score
f1_hart= f1_score(disag_hart_elec, test_elec)
f1_hart.index = disag_hart_elec.get_labels(f1_hart.index)
f1_hart.plot(kind='barh')
plt.ylabel('appliance');
plt.xlabel('f-score');
plt.title("Hart");
notebooks/disaggregation-hart-CO-active_only.ipynb
gjwo/nilm_gjw_data
apache-2.0
Uniform Sample A uniform sample is a sample drawn at random without replacement
def sample(num_sample, top):
    """
    Create a random sample from a table (drawn with replacement)

    Attributes
    ---------
    num_sample: int
    top: dataframe

    Returns a random subset of table index
    """
    df_index = []
    for i in np.arange(0, num_sample, 1):
        # pick randomly from the whole table
        sample_index = np.random.randint(0, len(top))
        # store index
        df_index.append(sample_index)
    return df_index

def sample_no_replacement(num_sample, top):
    """
    Create a random sample from a table (drawn without replacement)

    Attributes
    ---------
    num_sample: int
    top: dataframe

    Returns a random subset of table index
    """
    df_index = []
    lst = np.arange(0, len(top), 1)
    for i in np.arange(0, num_sample, 1):
        # pick randomly from the remaining indices
        sample_index = np.random.choice(lst)
        lst = np.setdiff1d(lst,[sample_index])
        df_index.append(sample_index)
    return df_index
Data/data_Stats_3_ChanceModels.ipynb
omoju/Fundamentals
gpl-3.0
We can simulate the act of rolling dice by just pulling out rows
index_ = sample(3, die) df = die.ix[index_, :] df index_ = sample(1, coin) df = coin.ix[index_, :] df def sum_draws( n, box ): """ Construct histogram for the sum of n draws from a box with replacement Attributes ----------- n: int (number of draws) box: dataframe (the box model) """ data = numpy.zeros(shape=(n,1)) if n > 0: for i in range(n): index_ = np.random.randint(0, len(box), n) df = box.ix[index_, :] data[i] = df.Content.sum() bins = np.arange(data.min()-0.5, data.max()+1, 1) pyplt.hist(data, bins=bins, normed=True) pyplt.ylabel('percent per unit') pyplt.xlabel('Number on ticket') pyplt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.); else: raise ValueError('n has to be greater than 0') box = pd.DataFrame() box["Content"] = [0,1,2,3,4] pyplt.rcParams['figure.figsize'] = (4, 3) sum_draws(100, box) pyplt.rcParams['figure.figsize'] = (4, 3) low, high = box.Content.min() - 0.5, box.Content.max() + 1 bins = np.arange(low, high, 1) box.plot.hist(bins=bins, normed=True) pyplt.ylabel('percent per unit') pyplt.xlabel('Number on ticket') pyplt.legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.); sum_draws(1000, box)
Data/data_Stats_3_ChanceModels.ipynb
omoju/Fundamentals
gpl-3.0
Modeling the Law of Averages The law of averages states that as the number of draws increases, the absolute difference between the observed and the expected number (the chance error) tends to increase, even though it shrinks as a percentage of the number of draws. $$ Chance \ Error = Observed - Expected $$ In the case of coin tosses, as the number of tosses goes up, so does the absolute chance error.
def number_of_heads( n, box ): """ The number of heads in n tosses Attributes ----------- n: int (number of draws) box: dataframe (the coin box model) """ data = numpy.zeros(shape=(n,1)) if n > 0: value = np.random.randint(0, len(box), n) data = value else: raise ValueError('n has to be greater than 0') return data.sum() box = pd.DataFrame() box["Content"] = [0,1] low, high, step = 100, 10000, 2 length = len(range(low, high, step)) num_tosses = numpy.zeros(shape=(length,1)) num_heads = numpy.zeros(shape=(length,1)) chance_error = numpy.zeros(shape=(length,1)) percentage_difference = numpy.zeros(shape=(length,1)) i= 0 for n in range(low, high, step): observed = number_of_heads(n, box) expected = n//2 num_tosses[i] = n num_heads[i] = observed chance_error[i] = math.fabs(expected - observed) percentage_difference[i] = math.fabs(((num_heads[i] / num_tosses[i]) * 100) - 50) i += 1 avg_heads = pd.DataFrame(index= range(low, high, step) ) avg_heads['num_tosses'] = num_tosses avg_heads['num_heads'] = num_heads avg_heads['chance_error'] = chance_error avg_heads['percentage_difference'] = percentage_difference avg_heads.reset_index(inplace=True) pyplt.rcParams['figure.figsize'] = (8, 3) pyplt.plot(avg_heads.chance_error, 'ro', markersize=1) pyplt.ylim(-50, 500) pyplt.title('Modeling the Law of Averages') pyplt.ylabel('Difference between \nObserved versus Expected') pyplt.xlabel('Number of Tosses'); pyplt.rcParams['figure.figsize'] = (8, 4) ax = pyplt.plot(avg_heads.percentage_difference, 'bo', markersize=1) pyplt.ylim(-5, 20) pyplt.ylabel('The Percentage Difference\n Between Observed and Expected') pyplt.xlabel('Number of Tosses'); pyplt.rcParams['figure.figsize'] = (4, 3)
Data/data_Stats_3_ChanceModels.ipynb
omoju/Fundamentals
gpl-3.0
Statistical analysis on Allsides bias rating: No sources from the image boxes were rated in the Allsides bias rating dataset. Therefore comparisons between bias of baseline sources versus image box sources could not be performed. Statistical analysis on Facebook Study bias rating: Hillary Clinton Image Box images versus Baseline images source bias according to Facebook bias ratings:
print("Baseline skew: ", stats.skew(HC_baseline.facebookbias_rating[HC_baseline.facebookbias_rating<3]))
print("Image Box skew: ", stats.skew(HC_imagebox.facebookbias_rating[HC_imagebox.facebookbias_rating<3]))
Statistics.ipynb
comp-journalism/Baseline_Problem_for_Algorithm_Audits
mit
from the stats page "For normally distributed data, the skewness should be about 0. A skewness value > 0 means that there is more weight in the left tail of the distribution. The function skewtest can be used to determine if the skewness value is close enough to 0, statistically speaking."
print("Baseline skew: ", stats.skewtest(HC_baseline.facebookbias_rating[HC_baseline.facebookbias_rating<3]))
print("Image Box skew: ", stats.skewtest(HC_imagebox.facebookbias_rating[HC_imagebox.facebookbias_rating<3]))

stats.ks_2samp(HC_baseline.facebookbias_rating[HC_baseline.facebookbias_rating<3],
               HC_imagebox.facebookbias_rating[HC_imagebox.facebookbias_rating<3])

HC_imagebox.facebookbias_rating.plot.hist(alpha=0.5, bins=20, range=(-1,1), color='blue')
HC_baseline.facebookbias_rating.plot.hist(alpha=0.5, bins=20, range=(-1,1), color='green')
Statistics.ipynb
comp-journalism/Baseline_Problem_for_Algorithm_Audits
mit
Donald Trump Image Box images versus Baseline images source bias according to Facebook bias ratings:
print("Baseline skew: ", stats.skew(DT_baseline.facebookbias_rating[DT_baseline.facebookbias_rating<3]))
print("Image Box skew: ", stats.skew(DT_imagebox.facebookbias_rating[DT_imagebox.facebookbias_rating<3]))

stats.ks_2samp(DT_baseline.facebookbias_rating[DT_baseline.facebookbias_rating<3],
               DT_imagebox.facebookbias_rating[DT_imagebox.facebookbias_rating<3])

DT_imagebox.facebookbias_rating.plot.hist(alpha=0.5, bins=20, range=(-1,1), color='red')
DT_baseline.facebookbias_rating.plot.hist(alpha=0.5, bins=20, range=(-1,1), color='green')

print("Number of missing ratings for Hillary Clinton Baseline data: ", len(HC_baseline[HC_baseline.facebookbias_rating == 999]))
print("Number of missing ratings for Hillary Clinton Image Box data: ", len(HC_imagebox[HC_imagebox.facebookbias_rating == 999]))
print("Number of missing ratings for Donald Trump Baseline data: ", len(DT_baseline[DT_baseline.facebookbias_rating == 999]))
print("Number of missing ratings for Donald Trump Image Box data: ", len(DT_imagebox[DT_imagebox.facebookbias_rating == 999]))
Statistics.ipynb
comp-journalism/Baseline_Problem_for_Algorithm_Audits
mit
The Kolmogorov-Smirnov analysis shows that the distribution of political representation across image sources is different between the baseline images and those found in the image box. Statistical analysis on Allsides + Facebook + MondoTimes + my bias ratings: Convert strings to integers:
def convert_to_ints(col):
    if col == 'Left':
        return -1
    elif col == 'Center':
        return 0
    elif col == 'Right':
        return 1
    else:
        return np.nan

HC_imagebox['final_rating_ints'] = HC_imagebox.final_rating.apply(convert_to_ints)
DT_imagebox['final_rating_ints'] = DT_imagebox.final_rating.apply(convert_to_ints)
HC_baseline['final_rating_ints'] = HC_baseline.final_rating.apply(convert_to_ints)
DT_baseline['final_rating_ints'] = DT_baseline.final_rating.apply(convert_to_ints)

HC_imagebox.final_rating_ints.value_counts()
DT_imagebox.final_rating_ints.value_counts()
Statistics.ipynb
comp-journalism/Baseline_Problem_for_Algorithm_Audits
mit
Prepare data for chi squared test
HC_baseline_counts = HC_baseline.final_rating.value_counts()
HC_imagebox_counts = HC_imagebox.final_rating.value_counts()
DT_baseline_counts = DT_baseline.final_rating.value_counts()
DT_imagebox_counts = DT_imagebox.final_rating.value_counts()
HC_baseline_counts.head()

normalised_bias_ratings = pd.DataFrame({'HC_ImageBox': HC_imagebox_counts,
                                        'HC_Baseline': HC_baseline_counts,
                                        'DT_ImageBox': DT_imagebox_counts,
                                        'DT_Baseline': DT_baseline_counts})
normalised_bias_ratings
Statistics.ipynb
comp-journalism/Baseline_Problem_for_Algorithm_Audits
mit
Remove Unknown / unreliable row
normalised_bias_ratings = normalised_bias_ratings[:3]
Statistics.ipynb
comp-journalism/Baseline_Problem_for_Algorithm_Audits
mit
Calculate percentages for plotting purposes
normalised_bias_ratings.loc[:,'HC_Baseline_pcnt'] = normalised_bias_ratings.HC_Baseline/normalised_bias_ratings.HC_Baseline.sum()*100
normalised_bias_ratings.loc[:,'HC_ImageBox_pcnt'] = normalised_bias_ratings.HC_ImageBox/normalised_bias_ratings.HC_ImageBox.sum()*100
normalised_bias_ratings.loc[:,'DT_Baseline_pcnt'] = normalised_bias_ratings.DT_Baseline/normalised_bias_ratings.DT_Baseline.sum()*100
normalised_bias_ratings.loc[:,'DT_ImageBox_pcnt'] = normalised_bias_ratings.DT_ImageBox/normalised_bias_ratings.DT_ImageBox.sum()*100
normalised_bias_ratings
normalised_bias_ratings.columns

HC_percentages = normalised_bias_ratings[['HC_Baseline_pcnt', 'HC_ImageBox_pcnt']]
DT_percentages = normalised_bias_ratings[['DT_Baseline_pcnt', 'DT_ImageBox_pcnt']]
Statistics.ipynb
comp-journalism/Baseline_Problem_for_Algorithm_Audits
mit
Test Hillary Clinton Image Box images against Baseline images:
stats.chisquare(f_exp=normalised_bias_ratings.HC_Baseline, f_obs=normalised_bias_ratings.HC_ImageBox)
HC_percentages.plot.bar()
Statistics.ipynb
comp-journalism/Baseline_Problem_for_Algorithm_Audits
mit
Test Donald Trump Image Box images against Baseline images:
stats.chisquare(f_exp=normalised_bias_ratings.DT_Baseline, f_obs=normalised_bias_ratings.DT_ImageBox)
DT_percentages.plot.bar()
Statistics.ipynb
comp-journalism/Baseline_Problem_for_Algorithm_Audits
mit
The storage bucket we create will, by default, be named using the project id.
storage_bucket = 'gs://' + datalab.Context.default().project_id + '-datalab-workspace/'
storage_region = 'us-central1'

workspace_path = os.path.join(storage_bucket, 'census')

# We will rely on outputs from data preparation steps in the previous notebook.
local_workspace_path = '/content/datalab/workspace/census'

!gsutil mb -c regional -l {storage_region} {storage_bucket}
samples/ML Toolbox/Regression/Census/2 Service Preprocess.ipynb
googledatalab/notebooks
apache-2.0
NOTE: If you have previously run this notebook, and want to start from scratch, then run the next cell to delete previous outputs.
!gsutil -m rm -rf {workspace_path}
samples/ML Toolbox/Regression/Census/2 Service Preprocess.ipynb
googledatalab/notebooks
apache-2.0
Data To get started, we will copy the data into this workspace from the local workspace created in the previous notebook. Generally, in your own work, you will have existing data to work with that you may or may not need to copy around, depending on its current location.
!gsutil -q cp {local_workspace_path}/data/train.csv {workspace_path}/data/train.csv
!gsutil -q cp {local_workspace_path}/data/eval.csv {workspace_path}/data/eval.csv
!gsutil -q cp {local_workspace_path}/data/schema.json {workspace_path}/data/schema.json
!gsutil ls -r {workspace_path}
samples/ML Toolbox/Regression/Census/2 Service Preprocess.ipynb
googledatalab/notebooks
apache-2.0
DataSets
train_data_path = os.path.join(workspace_path, 'data/train.csv')
eval_data_path = os.path.join(workspace_path, 'data/eval.csv')
schema_path = os.path.join(workspace_path, 'data/schema.json')

train_data = ml.CsvDataSet(file_pattern=train_data_path, schema_file=schema_path)
eval_data = ml.CsvDataSet(file_pattern=eval_data_path, schema_file=schema_path)
samples/ML Toolbox/Regression/Census/2 Service Preprocess.ipynb
googledatalab/notebooks
apache-2.0
Data Analysis When building a model, a number of pieces of information about the training data are required - for example, the list of entries or vocabulary of a categorical/discrete column, or aggregate statistics like min and max for numerical columns. These require a full pass over the training data; the analysis is usually done once, and only needs to be repeated if you change the schema in a future iteration. On the Cloud, this analysis is done with BigQuery, by referencing the csv data in storage as external data sources. The output of this analysis will be stored into storage. In the analyze() call below, notice the use of cloud=True to move data analysis from happening locally to happening in the cloud.
analysis_path = os.path.join(workspace_path, 'analysis')
regression.analyze(dataset=train_data, output_dir=analysis_path, cloud=True)
samples/ML Toolbox/Regression/Census/2 Service Preprocess.ipynb
googledatalab/notebooks
apache-2.0
As in the local notebook, the output of the analysis is a stats file containing statistics for the numerical columns, and a vocab file for each categorical column.
!gsutil ls {analysis_path}
samples/ML Toolbox/Regression/Census/2 Service Preprocess.ipynb
googledatalab/notebooks
apache-2.0
Let's inspect one of the files; in particular the numerical analysis, since it will also tell us some interesting statistics about the income column, the value we want to predict.
!gsutil cat {analysis_path}/stats.json
samples/ML Toolbox/Regression/Census/2 Service Preprocess.ipynb
googledatalab/notebooks
apache-2.0
Exploring dataset Please see this notebook for more context on this problem and how the features were chosen.
#%writefile babyweight/trainer/model.py # Copyright 2018 Google Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License.
blogs/sklearn/babyweight_skl.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
<h2> Creating a ML dataset using BigQuery </h2> We can use BigQuery to create the training and evaluation datasets. Because of the masking (ultrasound vs. no ultrasound), the query itself is a little complex.
#%writefile -a babyweight/trainer/model.py def create_queries(): query_all = """ WITH with_ultrasound AS ( SELECT weight_pounds AS label, CAST(is_male AS STRING) AS is_male, mother_age, CAST(plurality AS STRING) AS plurality, gestation_weeks, FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth FROM publicdata.samples.natality WHERE year > 2000 AND gestation_weeks > 0 AND mother_age > 0 AND plurality > 0 AND weight_pounds > 0 ), without_ultrasound AS ( SELECT weight_pounds AS label, 'Unknown' AS is_male, mother_age, IF(plurality > 1, 'Multiple', 'Single') AS plurality, gestation_weeks, FARM_FINGERPRINT(CONCAT(CAST(YEAR AS STRING), CAST(month AS STRING))) AS hashmonth FROM publicdata.samples.natality WHERE year > 2000 AND gestation_weeks > 0 AND mother_age > 0 AND plurality > 0 AND weight_pounds > 0 ), preprocessed AS ( SELECT * from with_ultrasound UNION ALL SELECT * from without_ultrasound ) SELECT label, is_male, mother_age, plurality, gestation_weeks FROM preprocessed """ train_query = "{} WHERE ABS(MOD(hashmonth, 4)) < 3".format(query_all) eval_query = "{} WHERE ABS(MOD(hashmonth, 4)) = 3".format(query_all) return train_query, eval_query print create_queries()[0] #%writefile -a babyweight/trainer/model.py def query_to_dataframe(query): import pandas as pd import pkgutil privatekey = pkgutil.get_data(KEYDIR, 'privatekey.json') print(privatekey[:200]) return pd.read_gbq(query, project_id=PROJECT, dialect='standard', private_key=privatekey) def create_dataframes(frac): # small dataset for testing if frac > 0 and frac < 1: sample = " AND RAND() < {}".format(frac) else: sample = "" train_query, eval_query = create_queries() train_query = "{} {}".format(train_query, sample) eval_query = "{} {}".format(eval_query, sample) train_df = query_to_dataframe(train_query) eval_df = query_to_dataframe(eval_query) return train_df, eval_df train_df, eval_df = create_dataframes(0.001) train_df.describe() eval_df.head()
blogs/sklearn/babyweight_skl.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
<h2> Creating a scikit-learn model using random forests </h2> Let's train the model locally
#%writefile -a babyweight/trainer/model.py def input_fn(indf): import copy import pandas as pd df = copy.deepcopy(indf) # one-hot encode the categorical columns df["plurality"] = df["plurality"].astype(pd.api.types.CategoricalDtype( categories=["Single","Multiple","1","2","3","4","5"])) df["is_male"] = df["is_male"].astype(pd.api.types.CategoricalDtype( categories=["Unknown","false","true"])) # features, label label = df['label'] del df['label'] features = pd.get_dummies(df) return features, label train_x, train_y = input_fn(train_df) print(train_x[:5]) print(train_y[:5]) from sklearn.ensemble import RandomForestRegressor estimator = RandomForestRegressor(max_depth=5, n_estimators=100, random_state=0) estimator.fit(train_x, train_y) import numpy as np eval_x, eval_y = input_fn(eval_df) eval_pred = estimator.predict(eval_x) print(eval_pred[1000:1005]) print(eval_y[1000:1005]) print(np.sqrt(np.mean((eval_pred-eval_y)*(eval_pred-eval_y)))) #%writefile -a babyweight/trainer/model.py def train_and_evaluate(frac, max_depth=5, n_estimators=100): import numpy as np # get data train_df, eval_df = create_dataframes(frac) train_x, train_y = input_fn(train_df) # train from sklearn.ensemble import RandomForestRegressor estimator = RandomForestRegressor(max_depth=max_depth, n_estimators=n_estimators, random_state=0) estimator.fit(train_x, train_y) # evaluate eval_x, eval_y = input_fn(eval_df) eval_pred = estimator.predict(eval_x) rmse = np.sqrt(np.mean((eval_pred-eval_y)*(eval_pred-eval_y))) print("Eval rmse={}".format(rmse)) return estimator, rmse #%writefile -a babyweight/trainer/model.py def save_model(estimator, gcspath, name): from sklearn.externals import joblib import os, subprocess, datetime model = 'model.joblib' joblib.dump(estimator, model) model_path = os.path.join(gcspath, datetime.datetime.now().strftime( 'export_%Y%m%d_%H%M%S'), model) subprocess.check_call(['gsutil', 'cp', model, model_path]) return model_path saved = save_model(estimator, 'gs://{}/babyweight/sklearn'.format(BUCKET), 'babyweight') print saved
blogs/sklearn/babyweight_skl.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Packaging up as a Python package Note the %writefile in the cells above. I uncommented those and ran the cells to write out a model.py. The following cell writes out a task.py.
%writefile babyweight/trainer/task.py # Copyright 2018 Google Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. import argparse import os import hypertune import model if __name__ == '__main__': parser = argparse.ArgumentParser() parser.add_argument( '--bucket', help = 'GCS path to output.', required = True ) parser.add_argument( '--frac', help = 'Fraction of input to process', type = float, required = True ) parser.add_argument( '--maxDepth', help = 'Depth of trees', type = int, default = 5 ) parser.add_argument( '--numTrees', help = 'Number of trees', type = int, default = 100 ) parser.add_argument( '--projectId', help = 'ID (not name) of your project', required = True ) parser.add_argument( '--job-dir', help = 'output directory for model, automatically provided by gcloud', required = True ) args = parser.parse_args() arguments = args.__dict__ model.PROJECT = arguments['projectId'] model.KEYDIR = 'trainer' estimator, rmse = model.train_and_evaluate(arguments['frac'], arguments['maxDepth'], arguments['numTrees'] ) loc = model.save_model(estimator, arguments['job_dir'], 'babyweight') print("Saved model to {}".format(loc)) # this is for hyperparameter tuning hpt = hypertune.HyperTune() hpt.report_hyperparameter_tuning_metric( hyperparameter_metric_tag='rmse', metric_value=rmse, global_step=0) # done !pip freeze | grep pandas %writefile babyweight/setup.py # Copyright 2018 Google Inc. All Rights Reserved. # # Licensed under the Apache License, Version 2.0 (the "License"); # you may not use this file except in compliance with the License. # You may obtain a copy of the License at # # http://www.apache.org/licenses/LICENSE-2.0 # # Unless required by applicable law or agreed to in writing, software # distributed under the License is distributed on an "AS IS" BASIS, # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. # See the License for the specific language governing permissions and # limitations under the License. from setuptools import setup setup(name='trainer', version='1.0', description='Natality, with sklearn', url='http://github.com/GoogleCloudPlatform/training-data-analyst', author='Google', author_email='[email protected]', license='Apache2', packages=['trainer'], ## WARNING! Do not upload this package to PyPI ## BECAUSE it contains a private key package_data={'': ['privatekey.json']}, install_requires=[ 'pandas-gbq==0.3.0', 'urllib3', 'google-cloud-bigquery==0.29.0', 'cloudml-hypertune' ], zip_safe=False)
blogs/sklearn/babyweight_skl.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
Try out the package on a subset of the data.
%bash export PYTHONPATH=${PYTHONPATH}:${PWD}/babyweight python -m trainer.task \ --bucket=${BUCKET} --frac=0.001 --job-dir=gs://${BUCKET}/babyweight/sklearn --projectId $PROJECT
blogs/sklearn/babyweight_skl.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
<h2> Training on Cloud ML Engine </h2> Submit the code to the ML Engine service
%bash
RUNTIME_VERSION="1.8"
PYTHON_VERSION="2.7"
JOB_NAME=babyweight_skl_$(date +"%Y%m%d_%H%M%S")
JOB_DIR="gs://$BUCKET/babyweight/sklearn/${JOB_NAME}"
gcloud ml-engine jobs submit training $JOB_NAME \
  --job-dir $JOB_DIR \
  --package-path $(pwd)/babyweight/trainer \
  --module-name trainer.task \
  --region us-central1 \
  --runtime-version=$RUNTIME_VERSION \
  --python-version=$PYTHON_VERSION \
  -- \
  --bucket=${BUCKET} --frac=0.1 --projectId $PROJECT
blogs/sklearn/babyweight_skl.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
The training finished in 20 minutes with an RMSE of 1.05 lbs. <h2> Deploying the trained model </h2> <p> Deploying the trained model to act as a REST web service is a simple gcloud call.
%bash gsutil ls gs://${BUCKET}/babyweight/sklearn/ | tail -1 %bash MODEL_NAME="babyweight" MODEL_VERSION="skl" MODEL_LOCATION=$(gsutil ls gs://${BUCKET}/babyweight/sklearn/ | tail -1) echo "Deleting and deploying $MODEL_NAME $MODEL_VERSION from $MODEL_LOCATION ... this will take a few minutes" #gcloud ml-engine versions delete ${MODEL_VERSION} --model ${MODEL_NAME} #gcloud ml-engine models delete ${MODEL_NAME} #gcloud ml-engine models create ${MODEL_NAME} --regions $REGION gcloud alpha ml-engine versions create ${MODEL_VERSION} --model ${MODEL_NAME} --origin ${MODEL_LOCATION} \ --framework SCIKIT_LEARN --runtime-version 1.8 --python-version=2.7
blogs/sklearn/babyweight_skl.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
<h2> Using the model to predict </h2> <p> Send a JSON request to the endpoint of the service to make it predict a baby's weight ... Note that we need to send in an array of numbers in the same order as when we trained the model. You could avoid this client-side preprocessing by wrapping the encoding in sklearn's Pipeline, but we did our preprocessing with Pandas, so that is not an option here (a sketch of the Pipeline alternative appears after the prediction example below). <p> So, let's find the order of columns:
import json  # used by json.dumps below, in case it has not been imported earlier

data = []
for i in range(2):
  data.append([])
  for col in eval_x:
    # convert from numpy integers to standard integers
    data[i].append(int(np.uint64(eval_x[col][i]).item()))

print(eval_x.columns)
print(json.dumps(data))
blogs/sklearn/babyweight_skl.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
As long as you send in the data in that order, it will work:
from googleapiclient import discovery
from oauth2client.client import GoogleCredentials
import json

credentials = GoogleCredentials.get_application_default()
api = discovery.build('ml', 'v1', credentials=credentials)

request_data = {'instances':
  # Column order, from eval_x.columns above:
  # [u'mother_age', u'gestation_weeks', u'is_male_Unknown', u'is_male_false',
  #  u'is_male_true', u'plurality_Single', u'plurality_Multiple',
  #  u'plurality_1', u'plurality_2', u'plurality_3', u'plurality_4',
  #  u'plurality_5']
  [[24, 38, 1, 0, 0, 1, 0, 0, 0, 0, 0, 0],
   [34, 39, 0, 0, 1, 0, 0, 1, 0, 0, 0, 0]]
}

parent = 'projects/%s/models/%s/versions/%s' % (PROJECT, 'babyweight', 'skl')
response = api.projects().predict(body=request_data, name=parent).execute()
print "response={0}".format(response)
blogs/sklearn/babyweight_skl.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
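For reference, here is a rough sketch of the Pipeline alternative mentioned above: folding the one-hot encoding into the estimator so that callers pass raw is_male/plurality strings instead of a hand-ordered numeric array. This is an illustration, not the notebook's actual code; it assumes scikit-learn 0.20+ (for ColumnTransformer) and the train_df returned by create_dataframes() earlier, and whether such a pipeline deploys unchanged on ML Engine's scikit-learn serving (which sends plain value lists) is a separate question.

from sklearn.compose import ColumnTransformer
from sklearn.ensemble import RandomForestRegressor
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder

categorical = ['is_male', 'plurality']
numeric = ['mother_age', 'gestation_weeks']

# One-hot encode the categorical columns inside the model itself
pipeline = Pipeline([
    ('prep', ColumnTransformer([
        ('cat', OneHotEncoder(handle_unknown='ignore'), categorical),
        ('num', 'passthrough', numeric),
    ])),
    ('rf', RandomForestRegressor(max_depth=5, n_estimators=100, random_state=0)),
])

# Example usage (assumes train_df/eval_df from create_dataframes() above):
# pipeline.fit(train_df.drop('label', axis=1), train_df['label'])
# pipeline.predict(eval_df.drop('label', axis=1))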
Hyperparameter tuning
Let's do a bunch of parallel trials to find good values for maxDepth and numTrees.
%writefile hyperparam.yaml
trainingInput:
  hyperparameters:
    goal: MINIMIZE
    maxTrials: 100
    maxParallelTrials: 5
    hyperparameterMetricTag: rmse
    params:
    - parameterName: maxDepth
      type: INTEGER
      minValue: 2
      maxValue: 8
      scaleType: UNIT_LINEAR_SCALE
    - parameterName: numTrees
      type: INTEGER
      minValue: 50
      maxValue: 150
      scaleType: UNIT_LINEAR_SCALE

%bash
RUNTIME_VERSION="1.8"
PYTHON_VERSION="2.7"
JOB_NAME=babyweight_skl_$(date +"%Y%m%d_%H%M%S")
JOB_DIR="gs://$BUCKET/babyweight/sklearn/${JOB_NAME}"
gcloud ml-engine jobs submit training $JOB_NAME \
  --job-dir $JOB_DIR \
  --package-path $(pwd)/babyweight/trainer \
  --module-name trainer.task \
  --region us-central1 \
  --runtime-version=$RUNTIME_VERSION \
  --python-version=$PYTHON_VERSION \
  --config=hyperparam.yaml \
  -- \
  --bucket=${BUCKET} --frac=0.01 --projectId $PROJECT
blogs/sklearn/babyweight_skl.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
If you go to the GCP console and click on the job, you will see the trial information start to populate, with the lowest-RMSE trial listed first. I got the best performance with these settings:
<pre>
"hyperparameters": {
  "maxDepth": "8",
  "numTrees": "90"
},
"finalMetric": {
  "trainingStep": "1",
  "objectiveValue": 1.03123724461
}
</pre>
Train on full dataset
Let's train on the full dataset with these hyperparameters. I am using a larger machine (8 CPUs, 52 GB of memory).
%writefile largemachine.yaml
trainingInput:
  scaleTier: CUSTOM
  masterType: large_model

%bash
RUNTIME_VERSION="1.8"
PYTHON_VERSION="2.7"
JOB_NAME=babyweight_skl_$(date +"%Y%m%d_%H%M%S")
JOB_DIR="gs://$BUCKET/babyweight/sklearn/${JOB_NAME}"
gcloud ml-engine jobs submit training $JOB_NAME \
  --job-dir $JOB_DIR \
  --package-path $(pwd)/babyweight/trainer \
  --module-name trainer.task \
  --region us-central1 \
  --runtime-version=$RUNTIME_VERSION \
  --python-version=$PYTHON_VERSION \
  --scale-tier=CUSTOM \
  --config=largemachine.yaml \
  -- \
  --bucket=${BUCKET} --frac=1 --projectId $PROJECT --maxDepth 8 --numTrees 90
blogs/sklearn/babyweight_skl.ipynb
GoogleCloudPlatform/training-data-analyst
apache-2.0
In a Tropical Semiring
The following example is taken from mohri.2009.hwa, Figure 12.
%%automaton --strip a context = "lal_char, zmin" $ -> 0 0 -> 1 <0>a, <1>b, <5>c 0 -> 2 <0>d, <1>e 1 -> 3 <0>e, <1>f 2 -> 3 <4>e, <5>f 3 -> $ a.push_weights()
doc/notebooks/automaton.push_weights.ipynb
pombredanne/https-gitlab.lrde.epita.fr-vcsn-vcsn
gpl-3.0
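To make the operation concrete, here is a small stand-alone sketch (plain Python rather than vcsn, with the transition table transcribed from the automaton above) of what push_weights computes in the tropical (min, +) semiring: for every state, the shortest distance d(q) to the final state, after which each transition weight w(p -> q) becomes w + d(q) - d(p) and d(initial state) is folded into the initial weight.

# state -> list of (label, weight, destination); state 3 is final with weight 0
transitions = {
    0: [('a', 0, 1), ('b', 1, 1), ('c', 5, 1), ('d', 0, 2), ('e', 1, 2)],
    1: [('e', 0, 3), ('f', 1, 3)],
    2: [('e', 4, 3), ('f', 5, 3)],
    3: [],
}
final = {3: 0}

# shortest distance from each state to the final state (Bellman-Ford style relaxation)
d = {q: float('inf') for q in transitions}
d.update(final)
for _ in transitions:
    for p, outs in transitions.items():
        for _, w, q in outs:
            d[p] = min(d[p], w + d[q])

# push: reweight every transition by the potentials d
pushed = {p: [(a, w + d[q] - d[p], q) for a, w, q in outs]
          for p, outs in transitions.items()}
print(d)       # potentials: {0: 0, 1: 0, 2: 4, 3: 0}
print(pushed)  # e.g. 0 -d-> 2 now carries weight 4, and 2 -e-> 3 carries weight 0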
Note that weight pushing improves the "minimizability" of weighted automata: after pushing, states 1 and 2 of the example carry identical outgoing transitions (e with weight 0, f with weight 1), so minimization can merge them.
a.minimize() a.push_weights().minimize()
doc/notebooks/automaton.push_weights.ipynb
pombredanne/https-gitlab.lrde.epita.fr-vcsn-vcsn
gpl-3.0
In $\mathbb{Q}$
Again, the following example is taken from mohri.2009.hwa, Figure 12 (subfigure 12.d lacks two transitions), but computed in $\mathbb{Q}$ rather than $\mathbb{R}$ to make the results more readable.
%%automaton --strip a context = "lal_char, q" $ -> 0 0 -> 1 <0>a, <1>b, <5>c 0 -> 2 <0>d, <1>e 1 -> 3 <0>e, <1>f 2 -> 3 <4>e, <5>f 3 -> $ a.push_weights()
doc/notebooks/automaton.push_weights.ipynb
pombredanne/https-gitlab.lrde.epita.fr-vcsn-vcsn
gpl-3.0
1. Load blast hits
#Load blast hits blastp_hits = pd.read_csv("2_blastp_hits.tsv",sep="\t",quotechar='"') blastp_hits.head() #Filter out Metahit 2010 hits, keep only Metahit 2014 blastp_hits = blastp_hits[blastp_hits.db != "metahit_pep"]
phage_assembly/5_annotation/asm_v1.2/orf_160621/3b_select_reliable_orfs.ipynb
maubarsom/ORFan-proteins
mit
2.4.3 Write out filtered blast hits
filt_blastp_hits = blastp_hits_annot[ blastp_hits_annot.query_id.apply(lambda x: x in reliable_orfs.query_id.tolist())] filt_blastp_hits.to_csv("3_filtered_orfs/d9539_asm_v1.2_orf_filt_blastp.tsv",sep="\t",quotechar='"') filt_blastp_hits.head()
phage_assembly/5_annotation/asm_v1.2/orf_160621/3b_select_reliable_orfs.ipynb
maubarsom/ORFan-proteins
mit
train test split
sku_id_groups = np.load(npz_sku_ids_group_kmeans) for key, val in sku_id_groups.iteritems(): print key, ",", val.shape # gp_predictor = GaussianProcessPricePredictorForCluster(npz_sku_ids_group_kmeans=npz_sku_ids_group_kmeans, # mobs_norm_path=mobs_norm_path, # price_history_csv=price_history_csv, # input_min_len=input_min_len, # target_len=target_len) %%time #gp_predictor.prepare(chosen_cluster=9) %%time #dtw_mean = gp_predictor.train_validate() #dtw_mean # Do not run this again unless you have enough space in the disk and lots of memory # with open('cur_gp.pickle', 'w') as fp: # Python 3: open(..., 'wb') # pickle.dump(gp, fp)
02_preprocessing/exploration11-price_history_gaussian_process_regressor_clustered_cross_valid.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Cross Validation
# filenames/paths we will be writing to
bayes_opt_dir = data_path + '/gp_regressor'
assert isdir(bayes_opt_dir)

pairs_ts_npy_filename = 'pairs_ts'
cv_score_dict_npy_filename = 'dtw_scores'
res_gp_filename = 'res_gp_opt'
02_preprocessing/exploration11-price_history_gaussian_process_regressor_clustered_cross_valid.ipynb
pligor/predicting-future-product-prices
agpl-3.0
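The per-cluster cells below all follow the same pattern, wrapped inside GaussianProcessPricePredictorGpOpt (whose internals are not shown here): Bayesian optimisation over the RBF length scale of a Gaussian process regressor, scored by cross-validation. As a rough, self-contained sketch of that pattern (an assumption about the setup, using synthetic stand-in data instead of the real price histories, and mean squared error instead of the project's DTW score), it could look like this:

import numpy as np
from skopt import gp_minimize
from skopt.space import Real
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import cross_val_score

rng = np.random.RandomState(0)
X = rng.uniform(0, 10, size=(200, 1))          # stand-in for the price-history features
y = np.sin(X).ravel() + 0.1 * rng.randn(200)   # stand-in target

def objective(params):
    (length_scale,) = params
    gp = GaussianProcessRegressor(kernel=RBF(length_scale=length_scale),
                                  n_restarts_optimizer=2, random_state=0)
    # cross_val_score returns negative MSE, so negate it to get a quantity to minimise
    return -cross_val_score(gp, X, y, cv=3, scoring='neg_mean_squared_error').mean()

res = gp_minimize(objective, [Real(0.1, 10.0, name='length_scale')],
                  n_random_starts=5, n_calls=15, random_state=0)
print(res.x, res.fun)   # best length scale and its cross-validation score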
Best length scale found per cluster (with the n_restarts_optimizer setting used for each run):

Cluster 6: best length scale 1.2593471510883105, n_restarts_optimizer 5
Cluster 4: best length scale 2.5249662383238189, n_restarts_optimizer 4
Cluster 0: best length scale 4.2180911518619402, n_restarts_optimizer 3
Cluster 1: best length scale 0.90557520548216341, n_restarts_optimizer 2
Cluster 7: best length scale 0.86338778478034262, n_restarts_optimizer 2
Cluster 5: best length scale 0.65798759657324202, n_restarts_optimizer 2
Cluster 3: best length scale 0.92860995029528248, n_restarts_optimizer 1
Cluster 2: best length scale 1.0580280512277951, n_restarts_optimizer 10
Cluster 9: best length scale ???

Cluster 9
%%time cur_gp_opt = GaussianProcessPricePredictorGpOpt(chosen_cluster=9, bayes_opt_dir=bayes_opt_dir, cv_score_dict_npy_filename=cv_score_dict_npy_filename, pairs_ts_npy_filename=pairs_ts_npy_filename, res_gp_filename=res_gp_filename, npz_sku_ids_group_kmeans=npz_sku_ids_group_kmeans, mobs_norm_path=mobs_norm_path, price_history_csv=price_history_csv, input_min_len=input_min_len, target_len=target_len, random_state=random_state, verbose = True, n_restarts_optimizer=1) opt_res = cur_gp_opt.run_opt(n_random_starts=5, n_calls=10) plot_res_gp(opt_res) opt_res.best_params
02_preprocessing/exploration11-price_history_gaussian_process_regressor_clustered_cross_valid.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Cluster 2
%%time cur_gp_opt = GaussianProcessPricePredictorGpOpt(chosen_cluster=2, bayes_opt_dir=bayes_opt_dir, cv_score_dict_npy_filename=cv_score_dict_npy_filename, pairs_ts_npy_filename=pairs_ts_npy_filename, res_gp_filename=res_gp_filename, npz_sku_ids_group_kmeans=npz_sku_ids_group_kmeans, mobs_norm_path=mobs_norm_path, price_history_csv=price_history_csv, input_min_len=input_min_len, target_len=target_len, random_state=random_state, verbose = True, n_restarts_optimizer=10) opt_res = cur_gp_opt.run_opt(n_random_starts=15, n_calls=30) plot_res_gp(opt_res) opt_res.best_params
02_preprocessing/exploration11-price_history_gaussian_process_regressor_clustered_cross_valid.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Cluster 3
%%time cur_gp_opt = GaussianProcessPricePredictorGpOpt(chosen_cluster=3, bayes_opt_dir=bayes_opt_dir, cv_score_dict_npy_filename=cv_score_dict_npy_filename, pairs_ts_npy_filename=pairs_ts_npy_filename, res_gp_filename=res_gp_filename, npz_sku_ids_group_kmeans=npz_sku_ids_group_kmeans, mobs_norm_path=mobs_norm_path, price_history_csv=price_history_csv, input_min_len=input_min_len, target_len=target_len, random_state=random_state, verbose = True, n_restarts_optimizer=1) opt_res = cur_gp_opt.run_opt(n_random_starts=5, n_calls=10) plot_res_gp(opt_res) opt_res.best_params
02_preprocessing/exploration11-price_history_gaussian_process_regressor_clustered_cross_valid.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Cluster 5
%%time cur_gp_opt = GaussianProcessPricePredictorGpOpt(chosen_cluster=5, bayes_opt_dir=bayes_opt_dir, cv_score_dict_npy_filename=cv_score_dict_npy_filename, pairs_ts_npy_filename=pairs_ts_npy_filename, res_gp_filename=res_gp_filename, npz_sku_ids_group_kmeans=npz_sku_ids_group_kmeans, mobs_norm_path=mobs_norm_path, price_history_csv=price_history_csv, input_min_len=input_min_len, target_len=target_len, random_state=random_state, verbose = True, n_restarts_optimizer=2) opt_res = cur_gp_opt.run_opt(n_random_starts=9, n_calls=20) plot_res_gp(opt_res) opt_res.best_params
02_preprocessing/exploration11-price_history_gaussian_process_regressor_clustered_cross_valid.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Cluster 7
%%time cur_gp_opt = GaussianProcessPricePredictorGpOpt(chosen_cluster=7, bayes_opt_dir=bayes_opt_dir, cv_score_dict_npy_filename=cv_score_dict_npy_filename, pairs_ts_npy_filename=pairs_ts_npy_filename, res_gp_filename=res_gp_filename, npz_sku_ids_group_kmeans=npz_sku_ids_group_kmeans, mobs_norm_path=mobs_norm_path, price_history_csv=price_history_csv, input_min_len=input_min_len, target_len=target_len, random_state=random_state, verbose = True, n_restarts_optimizer=2) opt_res = cur_gp_opt.run_opt(n_random_starts=6, n_calls=13) plot_res_gp(opt_res) opt_res.best_params
02_preprocessing/exploration11-price_history_gaussian_process_regressor_clustered_cross_valid.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Cluster 1
%%time cur_gp_opt = GaussianProcessPricePredictorGpOpt(chosen_cluster=1, bayes_opt_dir=bayes_opt_dir, cv_score_dict_npy_filename=cv_score_dict_npy_filename, pairs_ts_npy_filename=pairs_ts_npy_filename, res_gp_filename=res_gp_filename, npz_sku_ids_group_kmeans=npz_sku_ids_group_kmeans, mobs_norm_path=mobs_norm_path, price_history_csv=price_history_csv, input_min_len=input_min_len, target_len=target_len, random_state=random_state, verbose = True, n_restarts_optimizer=2) opt_res = cur_gp_opt.run_opt(n_random_starts=7, n_calls=15) plot_res_gp(opt_res) opt_res.best_params
02_preprocessing/exploration11-price_history_gaussian_process_regressor_clustered_cross_valid.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Cluster 6
%%time cur_gp_opt = GaussianProcessPricePredictorGpOpt(chosen_cluster=6, bayes_opt_dir=bayes_opt_dir, cv_score_dict_npy_filename=cv_score_dict_npy_filename, pairs_ts_npy_filename=pairs_ts_npy_filename, res_gp_filename=res_gp_filename, npz_sku_ids_group_kmeans=npz_sku_ids_group_kmeans, mobs_norm_path=mobs_norm_path, price_history_csv=price_history_csv, input_min_len=input_min_len, target_len=target_len, random_state=random_state, verbose = False, n_restarts_optimizer=5) opt_res = cur_gp_opt.run_opt(n_random_starts=3, n_calls=10) plot_res_gp(opt_res) opt_res.best_params
02_preprocessing/exploration11-price_history_gaussian_process_regressor_clustered_cross_valid.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Cluster 4
%%time cur_gp_opt = GaussianProcessPricePredictorGpOpt(chosen_cluster=4, bayes_opt_dir=bayes_opt_dir, cv_score_dict_npy_filename=cv_score_dict_npy_filename, pairs_ts_npy_filename=pairs_ts_npy_filename, res_gp_filename=res_gp_filename, npz_sku_ids_group_kmeans=npz_sku_ids_group_kmeans, mobs_norm_path=mobs_norm_path, price_history_csv=price_history_csv, input_min_len=input_min_len, target_len=target_len, random_state=random_state, verbose = True, n_restarts_optimizer=4) opt_res = cur_gp_opt.run_opt(n_random_starts=5, n_calls=20) plot_res_gp(opt_res) opt_res.best_params
02_preprocessing/exploration11-price_history_gaussian_process_regressor_clustered_cross_valid.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Cluster 0
%%time cur_gp_opt = GaussianProcessPricePredictorGpOpt(chosen_cluster=0, bayes_opt_dir=bayes_opt_dir, cv_score_dict_npy_filename=cv_score_dict_npy_filename, pairs_ts_npy_filename=pairs_ts_npy_filename, res_gp_filename=res_gp_filename, npz_sku_ids_group_kmeans=npz_sku_ids_group_kmeans, mobs_norm_path=mobs_norm_path, price_history_csv=price_history_csv, input_min_len=input_min_len, target_len=target_len, random_state=random_state, verbose = True, n_restarts_optimizer=3) opt_res = cur_gp_opt.run_opt(n_random_starts=10, n_calls=20) plot_res_gp(opt_res) opt_res.best_params
02_preprocessing/exploration11-price_history_gaussian_process_regressor_clustered_cross_valid.ipynb
pligor/predicting-future-product-prices
agpl-3.0
Target data for feature selection
Average all data for each compound
# load the training data data = pd.read_csv(os.path.abspath('__file__' + "/../../../../data/TrainSet.txt"),sep='\t') data.drop(['Intensity','Odor','Replicate','Dilution'],axis=1, inplace=1) data.columns = ['#oID', 'individual'] + list(data.columns)[2:] data.head() # load leaderboard data and reshape them to match the training data LB_data_high = pd.read_csv(os.path.abspath('__file__' + "/../../../../data/LBs1.txt"),sep='\t') LB_data_high = LB_data_high.pivot_table(index=['#oID','individual'],columns='descriptor',values='value') LB_data_high.reset_index(level=[0,1],inplace=1) LB_data_high.rename(columns={' CHEMICAL':'CHEMICAL'}, inplace=True) LB_data_high = LB_data_high[data.columns] LB_data_high.head() # load leaderboard low intensity data and reshape them to match the training data LB_data_low = pd.read_csv(os.path.abspath('__file__' + "/../../../../data/leaderboard_set_Low_Intensity.txt"),sep='\t') LB_data_low = LB_data_low.pivot_table(index=['#oID','individual'],columns='descriptor',values='value') LB_data_low.reset_index(level=[0,1],inplace=1) LB_data_low.rename(columns={' CHEMICAL':'CHEMICAL'}, inplace=True) LB_data_low = LB_data_low[data.columns] LB_data_low.head() # put them all together selection_data = pd.concat((data,LB_data_high,LB_data_low),ignore_index=True) # replace descriptor data with np.nan if intensity is zero for descriptor in [u'VALENCE/PLEASANTNESS', u'BAKERY', u'SWEET', u'FRUIT', u'FISH', u'GARLIC', u'SPICES', u'COLD', u'SOUR', u'BURNT', u'ACID', u'WARM', u'MUSKY', u'SWEATY', u'AMMONIA/URINOUS', u'DECAYED', u'WOOD', u'GRASS', u'FLOWER', u'CHEMICAL']: selection_data.loc[(selection_data['INTENSITY/STRENGTH'] == 0),descriptor] = np.nan #average them all selection_data = selection_data.groupby('#oID').mean() selection_data.drop('individual',1,inplace=1) selection_data.to_csv('targets_for_feature_selection.csv') selection_data.head()
opc_python/hulab/collaboration/target_data_preparation.ipynb
dream-olfaction/olfaction-prediction
mit
Target data for training
Filter out the relevant data for each compound
# load the train data data = pd.read_csv(os.path.abspath('__file__' + "/../../../../data/TrainSet.txt"),sep='\t') data.drop(['Odor','Replicate'],axis=1, inplace=1) data.columns = [u'#oID','Intensity','Dilution', u'individual', u'INTENSITY/STRENGTH', u'VALENCE/PLEASANTNESS', u'BAKERY', u'SWEET', u'FRUIT', u'FISH', u'GARLIC', u'SPICES', u'COLD', u'SOUR', u'BURNT', u'ACID', u'WARM', u'MUSKY', u'SWEATY', u'AMMONIA/URINOUS', u'DECAYED', u'WOOD', u'GRASS', u'FLOWER', u'CHEMICAL'] data.head() #load LB data LB_data_high = pd.read_csv(os.path.abspath('__file__' + "/../../../../data/LBs1.txt"),sep='\t') LB_data_high = LB_data_high.pivot_table(index=['#oID','individual'],columns='descriptor',values='value') LB_data_high.reset_index(level=[0,1],inplace=1) LB_data_high.rename(columns={' CHEMICAL':'CHEMICAL'}, inplace=True) LB_data_high['Dilution'] = '1/1,000 ' LB_data_high['Intensity'] = 'high ' LB_data_high = LB_data_high[data.columns] LB_data_high.head() # put them together data = pd.concat((data,LB_data_high),ignore_index=True) # replace descriptor data with np.nan if intensity is zero for descriptor in [u'VALENCE/PLEASANTNESS', u'BAKERY', u'SWEET', u'FRUIT', u'FISH', u'GARLIC', u'SPICES', u'COLD', u'SOUR', u'BURNT', u'ACID', u'WARM', u'MUSKY', u'SWEATY', u'AMMONIA/URINOUS', u'DECAYED', u'WOOD', u'GRASS', u'FLOWER', u'CHEMICAL']: data.loc[(data['INTENSITY/STRENGTH'] == 0),descriptor] = np.nan # average the duplicates data = data.groupby(['individual','#oID','Dilution','Intensity']).mean() data.reset_index(level=[2,3], inplace=True) #filter out data for intensity prediction data_int = data[data.Dilution == '1/1,000 '] # filter out data for everything else data = data[data.Intensity == 'high '] # replace the Intensity data with the data_int intensity values data['INTENSITY/STRENGTH'] = data_int['INTENSITY/STRENGTH'] data.drop(['Dilution','Intensity'],inplace=1,axis=1) data.reset_index(level=[0,1], inplace=True) data.head() data = data.groupby('#oID').mean() data.shape #save it data.to_csv('target.csv')
opc_python/hulab/collaboration/target_data_preparation.ipynb
dream-olfaction/olfaction-prediction
mit
Enhance Image
import cv2
# Load image as grayscale (illustrative path) and enhance its contrast
image = cv2.imread('images/plane_256x256.jpg', cv2.IMREAD_GRAYSCALE)
image_enhanced = cv2.equalizeHist(image)
machine-learning/enhance_contrast_of_greyscale_image.ipynb
tpin3694/tpin3694.github.io
mit
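Histogram equalization as used above operates on a single-channel (grayscale) image. A common variant for colour images, shown here as an illustration rather than part of the original cell (the file path is a placeholder), is to convert to the YUV colour space and equalize only the luminance channel:

import cv2

# Load a colour image (placeholder path) and equalize only its luminance channel
image_bgr = cv2.imread('images/plane.jpg')
image_yuv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YUV)
image_yuv[:, :, 0] = cv2.equalizeHist(image_yuv[:, :, 0])
image_enhanced_color = cv2.cvtColor(image_yuv, cv2.COLOR_YUV2BGR)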