markdown (string, 0-1.02M chars) | code (string, 0-832k chars) | output (string, 0-1.02M chars) | license (string, 3-36 chars) | path (string, 6-265 chars) | repo_name (string, 6-127 chars)
Writing the final submission file to Kaggle output disk
submission_data.to_csv('submission.csv', index=False)
_____no_output_____
MIT
hm-recbole-ifs.ipynb
ManashJKonwar/Kaggle-HM-Recommender
Lambda School Data Science - Quantile Regression. Regressing towards the median - or any quantile - as a way to mitigate outliers and control risk. Lecture: let's look at data that has a bit of a skew to it: http://archive.ics.uci.edu/ml/datasets/Beijing+PM2.5+Data
import numpy as np import pandas as pd from sklearn.linear_model import LinearRegression import statsmodels.formula.api as smf df = pd.read_csv('http://archive.ics.uci.edu/ml/machine-learning-databases/' '00381/PRSA_data_2010.1.1-2014.12.31.csv') df.head() df.describe() df['pm2.5'].plot.hist(); np.log(df['pm2.5']) # How does linear regression handle it? from sklearn.linear_model import LinearRegression # Let's drop NAs and limit to numeric values df = df._get_numeric_data().dropna() X = df.drop('pm2.5', axis='columns') y = df['pm2.5'] linear_reg = LinearRegression().fit(X, y) linear_reg.score(X, y) # Not bad - but what if we wanted to model the distribution more conservatively? # Let's try quantile import statsmodels.formula.api as smf # Different jargon/API in StatsModel documentation # "endogenous" response var is dependent (y), it is "inside" # "exogenous" variables are independent (X), it is "outside" # Bonus points - talk about "exogenous shocks" and you're a bona fide economist # ~ style formulas look like what R uses # y ~ x1 + x2 + ... # they also support * for interaction terms, polynomiasl # Also, these formulas break with . in variable name, so lets change that df = df.rename(index=str, columns={'pm2.5': 'pm25'}) # Now let's construct the formula string using all columns quant_formula = 'pm25 ~ ' + ' + '.join(df.drop('pm25', axis='columns').columns) print(quant_formula) quant_mod = smf.quantreg(quant_formula, data=df) quant_reg = quant_mod.fit(q=.5) quant_reg.summary() # "summary" is another very R-thing
pm25 ~ No + year + month + day + hour + DEWP + TEMP + PRES + cbwd + Iws + Is + Ir
MIT
module3-quantile-regression/Copy_of_LS_DS1_233_Quantile_Regression.ipynb
quinn-dougherty/DS-Unit-2-Sprint-3-Advanced-Regression
That was a fit to the median (q=0.5), also called "Least Absolute Deviation." The pseudo-R^2 isn't really directly comparable to the R^2 from linear regression, but it clearly isn't dramatically improved. Can we make it better?
help(quant_mod.fit) quantiles = (.05, .96, .1) for quantile in quantiles: print(quant_mod.fit(q=quantile).summary())
QuantReg Regression Results ============================================================================== Dep. Variable: pm25 Pseudo R-squared: 0.04130 Model: QuantReg Bandwidth: 8.908 Method: Least Squares Sparsity: 120.7 Date: Sun, 20 Jan 2019 No. Observations: 41757 Time: 23:01:53 Df Residuals: 41745 Df Model: 11 ============================================================================== coef std err t P>|t| [0.025 0.975] ------------------------------------------------------------------------------ Intercept 3.072e-05 6.4e-06 4.803 0.000 1.82e-05 4.33e-05 No -6.994e-05 9.59e-06 -7.292 0.000 -8.87e-05 -5.11e-05 year 0.0998 0.012 8.275 0.000 0.076 0.123 month -0.4536 0.034 -13.419 0.000 -0.520 -0.387 day 0.1143 0.015 7.862 0.000 0.086 0.143 hour 0.3777 0.020 19.013 0.000 0.339 0.417 DEWP 0.7720 0.014 55.266 0.000 0.745 0.799 TEMP -0.8346 0.020 -41.621 0.000 -0.874 -0.795 PRES -0.1734 0.024 -7.290 0.000 -0.220 -0.127 Iws -0.0364 0.002 -17.462 0.000 -0.040 -0.032 Is 1.4573 0.195 7.466 0.000 1.075 1.840 Ir -1.2952 0.071 -18.209 0.000 -1.435 -1.156 ============================================================================== The condition number is large, 3.67e+10. This might indicate that there are strong multicollinearity or other numerical problems. QuantReg Regression Results ============================================================================== Dep. Variable: pm25 Pseudo R-squared: 0.2194 Model: QuantReg Bandwidth: 10.41 Method: Least Squares Sparsity: 1322. Date: Sun, 20 Jan 2019 No. Observations: 41757 Time: 23:01:55 Df Residuals: 41745 Df Model: 11 ============================================================================== coef std err t P>|t| [0.025 0.975] ------------------------------------------------------------------------------ Intercept 0.0004 6.87e-05 5.306 0.000 0.000 0.000 No 7.821e-05 0.000 0.696 0.486 -0.000 0.000 year 1.0580 0.124 8.539 0.000 0.815 1.301 month -3.9661 0.446 -8.895 0.000 -4.840 -3.092 day 1.0816 0.136 7.936 0.000 0.814 1.349 hour 2.3661 0.192 12.354 0.000 1.991 2.741 DEWP 7.5176 0.235 32.004 0.000 7.057 7.978 TEMP -11.6991 0.302 -38.691 0.000 -12.292 -11.106 PRES -1.7121 0.244 -7.003 0.000 -2.191 -1.233 Iws -0.4151 0.034 -12.339 0.000 -0.481 -0.349 Is -5.7267 1.580 -3.624 0.000 -8.824 -2.630 Ir -9.3197 1.457 -6.397 0.000 -12.175 -6.464 ============================================================================== The condition number is large, 3.67e+10. This might indicate that there are strong multicollinearity or other numerical problems. QuantReg Regression Results ============================================================================== Dep. Variable: pm25 Pseudo R-squared: 0.06497 Model: QuantReg Bandwidth: 8.092 Method: Least Squares Sparsity: 104.4 Date: Sun, 20 Jan 2019 No. 
Observations: 41757 Time: 23:01:57 Df Residuals: 41745 Df Model: 11 ============================================================================== coef std err t P>|t| [0.025 0.975] ------------------------------------------------------------------------------ Intercept 5.214e-05 7.84e-06 6.650 0.000 3.68e-05 6.75e-05 No -9.232e-05 1.17e-05 -7.888 0.000 -0.000 -6.94e-05 year 0.1521 0.015 10.386 0.000 0.123 0.181 month -0.5581 0.042 -13.138 0.000 -0.641 -0.475 day 0.1708 0.017 9.893 0.000 0.137 0.205 hour 0.4604 0.024 19.350 0.000 0.414 0.507 DEWP 1.2350 0.017 70.845 0.000 1.201 1.269 TEMP -1.3088 0.024 -54.101 0.000 -1.356 -1.261 PRES -0.2652 0.029 -9.183 0.000 -0.322 -0.209 Iws -0.0436 0.003 -16.919 0.000 -0.049 -0.039 Is 1.0745 0.231 4.653 0.000 0.622 1.527 Ir -1.9619 0.087 -22.504 0.000 -2.133 -1.791 ============================================================================== The condition number is large, 3.67e+10. This might indicate that there are strong multicollinearity or other numerical problems.
MIT
module3-quantile-regression/Copy_of_LS_DS1_233_Quantile_Regression.ipynb
quinn-dougherty/DS-Unit-2-Sprint-3-Advanced-Regression
"Strong multicollinearity", eh? In other words - maybe we shouldn't throw every variable in our formula. Let's hand-craft a smaller one, picking the features with the largest magnitude t-statistics for their coefficients. Let's also search for more quantile cutoffs to see what's most effective.
quant_formula = 'pm25 ~ DEWP + TEMP + Ir + hour + Iws' quant_mod = smf.quantreg(quant_formula, data=df) for quantile in range(50, 100): quantile /= 100 quant_reg = quant_mod.fit(q=quantile) print((quantile, quant_reg.prsquared)) # Okay, this data seems *extremely* skewed # Let's trying logging import numpy as np df['pm25'] = np.log(1 + df['pm25']) quant_mod = smf.quantreg(quant_formula, data=df) quant_reg = quant_mod.fit(q=.25) quant_reg.summary() # "summary" is another very R-thing
_____no_output_____
MIT
module3-quantile-regression/Copy_of_LS_DS1_233_Quantile_Regression.ipynb
quinn-dougherty/DS-Unit-2-Sprint-3-Advanced-Regression
Overall - in this case, quantile regression is not *necessarily* superior to linear regression. But it does give us extra flexibility and another thing to tune - namely, which part of the dependent variable's distribution we're actually fitting. The basic case of `q=0.5` (the median) minimizes the absolute value of residuals, while OLS minimizes the squared value. By selecting `q=0.25`, we're targeting a lower quantile and are effectively saying that we only want to over-estimate at most 25% of the time - we're being *risk averse*. Depending on the data you're looking at, and the cost of making a false positive versus a false negative, this sort of flexibility can be extremely useful. Live - let's consider another dataset! Specifically, "SkillCraft" (data on competitive StarCraft players): http://archive.ics.uci.edu/ml/datasets/SkillCraft1+Master+Table+Dataset
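To make the `q=0.5` versus `q=0.25` intuition concrete, here is a minimal sketch (not part of the original lecture code) of the quantile, or "pinball", loss that quantile regression minimizes; the toy arrays below are made up purely for illustration.

```python
import numpy as np

def pinball_loss(y_true, y_pred, q):
    """Quantile ("pinball") loss, the objective quantile regression minimizes.
    At q=0.5 this is proportional to the absolute error (the median / LAD fit);
    at q=0.25 over-predictions cost three times as much as under-predictions,
    which pushes the fitted values down toward the lower quartile."""
    error = y_true - y_pred
    return np.mean(np.maximum(q * error, (q - 1) * error))

# Toy illustration: the same predictions scored at different quantiles.
y_true = np.array([10.0, 12.0, 9.0, 30.0])   # note the high outlier
y_pred = np.array([11.0, 11.0, 11.0, 11.0])
for q in (0.25, 0.5, 0.9):
    print(q, round(pinball_loss(y_true, y_pred, q), 3))
```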
import numpy as np import pandas as pd from sklearn.linear_model import LinearRegression from sklearn.linear_model import LogisticRegression import statsmodels.formula.api as smf url = 'http://archive.ics.uci.edu/ml/machine-learning-databases/00272/SkillCraft1_Dataset.csv' df = pd.read_csv(url) df.head() df = df.replace('?', np.nan) df.shape, df.isna().sum() hasna = ['Age', 'HoursPerWeek', 'TotalHours'] for feat in hasna: df[feat] = pd.to_numeric(df[feat]) df[hasna].head() df.isna().sum(), df.dtypes, df.shape df.describe() from sklearn.linear_model import LinearRegression dff = df._get_numeric_data().dropna() X = dff.drop('APM', axis='columns') y = dff.APM linear_reg = LinearRegression().fit(X,y) linear_reg from sklearn.linear_model import LogisticRegression ## what are the fastest 10% of starcraft players like? ## quantile regression with q=0.9 quant_formula = 'APM ~ ' + ' + '.join(dff.drop('APM', axis='columns').columns) quant_mod = smf.quantreg(quant_formula, data=dff) quant_reg = quant_mod.fit(q=0.9) quant_reg.summary() # TODO Live! # Hint - we may only care about the *top* quantiles here # Another hint - there are missing values, but Pandas won't see them right away np.sqrt(0.4076)
_____no_output_____
MIT
module3-quantile-regression/Copy_of_LS_DS1_233_Quantile_Regression.ipynb
quinn-dougherty/DS-Unit-2-Sprint-3-Advanced-Regression
Assignment - birth weight data. Birth weight is a situation where, while the data itself is actually fairly normal and symmetric, our main goal is actually *not* to model mean weight (via OLS), but rather to identify mothers at risk of having children below a certain "at-risk" threshold weight. Quantile regression gives us just the tool we need. For the data we are using, see: http://people.reed.edu/~jones/141/BirthWgt.html. The columns are: bwt: baby's weight in ounces at birth; gestation: duration of pregnancy in days; parity: parity indicator (first born = 1, later birth = 0); age: mother's age in years; height: mother's height in inches; weight: mother's weight in pounds (during pregnancy); smoke: indicator for whether mother smokes (1=yes, 0=no). Use this data and `statsmodels` to fit a quantile regression, predicting `bwt` (birth weight) as a function of the other covariates. First, identify an appropriate `q` (quantile) to target a cutoff of 90 ounces - babies above that birth weight are generally healthy/safe, babies below are at-risk. Then, fit and iterate your model. Be creative! You may want to engineer features. Hint - mother's age likely is not simply linear in its impact, and the other features may interact as well. At the end, create at least *2* tables and *1* visualization to summarize your best model. Then (in writing) answer the following questions: - What characteristics of a mother indicate the highest likelihood of an at-risk (low weight) baby? - What can expectant mothers be told to help mitigate this risk? Note that the second question is not exactly a data science question - and that's okay! You're not expected to be a medical expert, but it is a good exercise to do a little bit of digging into a particular domain and offer informal but informed opinions.
import numpy as np import pandas as pd from sklearn.linear_model import LinearRegression from sklearn.linear_model import LogisticRegression import statsmodels.formula.api as smf from scipy.stats import percentileofscore from numpy.testing import assert_almost_equal bwt_df = pd.read_csv('http://people.reed.edu/~jones/141/Bwt.dat') bwt_df.head() bwt_df.dtypes, bwt_df.isna().sum(), bwt_df.dtypes, bwt_df.isna().sum() bwt_df.describe() #First, identify an appropriate q (quantile) to target a cutoff of 90 ounces - # babies above that birth weight are generally healthy/safe, babies below are at-risk. q_90_test = 0.055 #percentileofscore(bwt_df.bwt, 90)/100 q_90 = np.divide(percentileofscore(bwt_df.bwt, 90), 100) bwt_df.bwt.quantile(q=q_90) #0.5-distn == q_90 quant_formula = 'bwt ~ ' + ' + '.join(bwt_df.drop(['bwt'], axis='columns').columns) quant_mod = smf.quantreg(quant_formula, data=bwt_df) quant_reg = quant_mod.fit(q=q_90) quant_reg.summary() bwt_df['weirdfeat'] = bwt_df.age * (bwt_df.parity - 2) bwt_df['age_per_weight'] = np.divide(bwt_df.age, bwt_df.weight) bwt_df['apw_times_parityplus1'] = bwt_df.age_per_weight * (bwt_df.parity+1) bwt_df['BMI'] = np.divide(bwt_df.weight, bwt_df.height**2) * 703 bwt_df['weirdishfeat'] = bwt_df.gestation ** (bwt_df.parity+1) bwt_df['gestation_squrd'] = bwt_df.gestation ** 2 #bwt_df['weirdfeat'] = bwt_df.weirdfeat + 3 quant_formula = 'bwt ~ ' + ' + '.join(bwt_df.drop(['bwt', 'weirdfeat', 'apw_times_parityplus1', 'weirdishfeat'], axis='columns').columns) quant_mod = smf.quantreg(quant_formula, data=bwt_df) quant_reg = quant_mod.fit(q=q_90) quant_reg.summary() #bwt_df.weirdfeat from seaborn import pairplot pairplot(data=bwt_df, x_vars=bwt_df.drop(['bwt', 'weirdfeat', 'apw_times_parityplus1', 'weirdishfeat'], axis='columns').columns, y_vars='bwt');
_____no_output_____
MIT
module3-quantile-regression/Copy_of_LS_DS1_233_Quantile_Regression.ipynb
quinn-dougherty/DS-Unit-2-Sprint-3-Advanced-Regression
Gestation is our most reliable feature - its errors are consistently reasonable and its p-value is consistently very small, and adding gestation ** 2 improved the model. BMI is an improvement - the model is better with BMI than without it. Adding BMI made age_per_weight less effective - age_per_weight's error and p-value both went up when BMI was added. Obviously, _don't smoke_.
''' What characteristics of a mother indicate the highest likelihood of an at-risk (low weight) baby? What can expectant mothers be told to help mitigate this risk? '''
_____no_output_____
MIT
module3-quantile-regression/Copy_of_LS_DS1_233_Quantile_Regression.ipynb
quinn-dougherty/DS-Unit-2-Sprint-3-Advanced-Regression
Updating SFRDs: UV data. Thanks to the improvements in observational facilities in the past few years, we are able to compute luminosity functions more accurately. We now use these updated measurements of the luminosity function to update the values of the SFRDs. In the present notebook, we focus on UV luminosity functions, which are described by the classical Schechter function (a detailed description can be found in [this](https://github.com/Jayshil/csfrd/blob/main/p1.ipynb) notebook). We assume a correlation between the Schechter function parameters similar to what is observed in Bouwens et al. (2021) --- that is, at any redshift the correlation is assumed to be the same.
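For reference, here is a minimal sketch (not part of the original notebook) of the classical Schechter function in its absolute-magnitude form; the parameter values used below are purely illustrative placeholders, not the ones adopted in this analysis.

```python
import numpy as np

def schechter_mag(M, phi_star, M_star, alpha):
    """Schechter luminosity function in magnitude form:
    phi(M) = 0.4 ln(10) phi* 10^(0.4 (alpha+1) (M*-M)) exp(-10^(0.4 (M*-M))),
    i.e. a power law of slope alpha with an exponential cutoff brighter than M*."""
    x = 10.0 ** (0.4 * (M_star - M))
    return 0.4 * np.log(10.0) * phi_star * x ** (alpha + 1.0) * np.exp(-x)

# Illustrative (made-up) parameters, just to show the shape of the function.
M_uv = np.linspace(-23.0, -16.0, 50)
phi = schechter_mag(M_uv, phi_star=1e-3, M_star=-21.0, alpha=-1.7)
```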
import numpy as np import matplotlib.pyplot as plt import astropy.constants as con import astropy.units as u from scipy.optimize import minimize as mz from scipy.optimize import curve_fit as cft import utils as utl import os
_____no_output_____
MIT
Results/res1.ipynb
Jayshil/csfrd
We have already computed the SFRDs by using [this](https://github.com/Jayshil/csfrd/blob/main/sfrd_all.py) code -- here we only plot the results.
ppr_uv = np.array(['Khusanova_et_al._2020', 'Ono_et_al._2017', 'Viironen_et_al._2018', 'Finkelstein_et_al._2015', 'Bouwens_et_al._2021', 'Alavi_et_al._2016', 'Livermore_et_al._2017', 'Atek_et_al._2015', 'Parsa_et_al._2016', 'Hagen_et_al._2015', 'Moutard_et_al._2019', 'Pello_et_al._2018', 'Bhatawdekar_et_al._2018']) cols = np.array(['cyan', 'deepskyblue', 'steelblue', 'dodgerblue', 'cornflowerblue', 'royalblue', 'navy', 'blue', 'slateblue', 'darkslateblue', 'blueviolet', 'indigo', 'mediumorchid']) #ppr_uv = np.array(['Khusanova_et_al._2020', 'Ono_et_al._2017', 'Viironen_et_al._2018', 'Finkelstein_et_al._2015', 'Bouwens_et_al._2021', 'Alavi_et_al._2016', 'Livermore_et_al._2017', 'Atek_et_al._2015', 'Parsa_et_al._2016', 'Moutard_et_al._2019', 'Pello_et_al._2018', 'Bhatawdekar_et_al._2018']) # Loading papers ppr_uv1 = np.loadtxt('sfrd_uv_new.dat', usecols=0, unpack=True, dtype=str) zd_uv, zu_uv, sfrd_uv, sfrd_uv_err = np.loadtxt('sfrd_uv_new.dat', usecols=(1,2,3,4), unpack=True) zcen_uv = (zd_uv + zu_uv)/2 zup, zdo = np.abs(zu_uv - zcen_uv), np.abs(zcen_uv - zd_uv) log_sfrd_uv, log_sfrd_uv_err = utl.log_err(sfrd_uv, sfrd_uv_err) plt.figure(figsize=(16, 9)) # Plotting them for i in range(len(ppr_uv)): zc_uv, zp, zn, lg_sf, lg_sfe = np.array([]), np.array([]), np.array([]), np.array([]), np.array([]) for j in range(len(ppr_uv1)): if ppr_uv1[j] == ppr_uv[i]: zc_uv = np.hstack((zc_uv, zcen_uv[j])) lg_sf = np.hstack((lg_sf, log_sfrd_uv[j])) lg_sfe = np.hstack((lg_sfe, log_sfrd_uv_err[j])) zp = np.hstack((zp, zup[j])) zn = np.hstack((zn, zdo[j])) if ppr_uv[i] == 'Hagen_et_al._2015': continue else: plt.errorbar(zc_uv, lg_sf, xerr=[zn, zp], yerr=lg_sfe, c=cols[i], label=ppr_uv[i].replace('_',' ') + '; UV LF', fmt='o', mfc='white', mew=2) #plt.plot(znew, psi2, label='Best fitted function') plt.xlabel('Redshift') plt.ylabel(r'$\log{\psi}$ ($M_\odot year^{-1} Mpc^{-3}$)') plt.grid() plt.ylim([-2.4, -1.2]) plt.xlim([0, 8.5]) plt.legend(loc='best')
_____no_output_____
MIT
Results/res1.ipynb
Jayshil/csfrd
Note that, for most of the values, the SFRD is tightly constrained. We note again that in this calculation we have assumed that the Schechter function parameters are correlated (except at lower redshifts), with the correlation matrix taken from Bouwens et al. (2021). For the lowest redshifts ($z=0$ and $z=1$), however, we assumed independence among the Schechter function parameters. We can now overplot the best-fitted function from Madau & Dickinson (2014) on this plot, $$ \psi(z) = 0.015 \frac{(1+z)^{2.7}}{1 + [(1+z)/2.9]^{5.6}} \ M_\odot \ year^{-1} \ Mpc^{-3}, $$ where the symbols have their usual meanings.
# Defining best-fitted SFRD def psi_md(z): ab = (1+z)**2.7 cd = ((1+z)/2.9)**5.6 ef = 0.015*ab/(1+cd) return ef # Calculating psi(z) znew = np.linspace(0,9,1000) psi1 = psi_md(znew) psi2 = np.log10(psi1) plt.figure(figsize=(16, 9)) # Plotting them for i in range(len(ppr_uv)): zc_uv, zp, zn, lg_sf, lg_sfe = np.array([]), np.array([]), np.array([]), np.array([]), np.array([]) for j in range(len(ppr_uv1)): if ppr_uv1[j] == ppr_uv[i]: zc_uv = np.hstack((zc_uv, zcen_uv[j])) lg_sf = np.hstack((lg_sf, log_sfrd_uv[j])) lg_sfe = np.hstack((lg_sfe, log_sfrd_uv_err[j])) zp = np.hstack((zp, zup[j])) zn = np.hstack((zn, zdo[j])) if ppr_uv[i] == 'Hagen_et_al._2015': continue else: plt.errorbar(zc_uv, lg_sf, xerr=[zn, zp], yerr=lg_sfe, c=cols[i], label=ppr_uv[i].replace('_',' ') + '; UV LF', fmt='o', mfc='white', mew=2) plt.plot(znew, psi2, label='Best fitted function', lw=3, c='silver') plt.xlabel('Redshift') plt.ylabel(r'$\log{\psi}$ ($M_\odot year^{-1} Mpc^{-3}$)') plt.grid() plt.legend(loc='best')
_____no_output_____
MIT
Results/res1.ipynb
Jayshil/csfrd
It can readily be observed from the above figure that the best-fitted function from Madau & Dickinson (2014) does not exactly match our computed SFRDs, which shows the need to correct for dust in these calculations. However, in the present work we are not going to apply dust corrections: we shall compute the SFRDs for UV and IR separately, and then just add them together. Hence, there is no need to fit that exact function to the data. What we do instead is treat every numerical constant in the best-fitted function from Madau & Dickinson (2014) as a free parameter, and fit the result to the data. Essentially, the function that we want to fit to the data is the following: $$ \psi(z) = A \frac{(1+z)^{B}}{1 + [(1+z)/C]^{D}} \ M_\odot \ year^{-1} \ Mpc^{-3}, $$ where $A$, $B$, $C$ and $D$ are free parameters. We use the `scipy.optimize.minimize` function to perform this task; the idea is to maximize the likelihood (by minimizing the negative log-likelihood).
# New model def psi_new(z, aa, bb, cc, dd): ab = (1+z)**bb cd = ((1+z)/cc)**dd ef = aa*ab/(1+cd) return ef # Negative likelihood function def min_log_likelihood(x): model = psi_new(zcen_uv, x[0], x[1], x[2], x[3]) chi2 = (sfrd_uv - model)/sfrd_uv_err chi22 = np.sum(chi2**2) yy = 0.5*chi22 + np.sum(np.log(sfrd_uv_err)) return yy #xinit, pcov = cft(psi_new, zcen_uv, sfrd_uv, sigma=sfrd_uv_err) #xinit = np.array([0.015, 2.7, 2.9, 5.6]) xinit = np.array([0.01, 3., 3., 6.]) soln = mz(min_log_likelihood, xinit, method='L-BFGS-B') soln
_____no_output_____
MIT
Results/res1.ipynb
Jayshil/csfrd
So, the fit converged; that's good! Let's see what this new fit looks like...
best_fit_fun = psi_new(znew, *soln.x) log_best_fit = np.log10(best_fit_fun) plt.figure(figsize=(16,9)) plt.errorbar(zcen_uv, log_sfrd_uv, xerr=[zup, zdo], yerr=log_sfrd_uv_err, fmt='o', c='cornflowerblue') plt.plot(znew, log_best_fit, label='Best fitted function', lw=2, c='orangered') plt.xlabel('Redshift') plt.ylabel(r'$\log{\psi}$ ($M_\odot year^{-1} Mpc^{-3}$)') plt.grid()
_____no_output_____
MIT
Results/res1.ipynb
Jayshil/csfrd
That looks about right. Here, the fitted function is $$ \psi(z) = 0.006 \frac{(1+z)^{1.37}}{1 + [(1+z)/4.95]^{5.22}} \ M_\odot \ year^{-1} \ Mpc^{-3}. $$ One note, though: there are some points in the plot which have large errorbars. Those are from Hagen et al. (2015); from a quick look at the paper, it seems these large errorbars arise from the large errorbars on $\phi_*$. In the following, I try removing those points from the data to see whether the shape of the best-fitted function changes.
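As a quick sanity check (a minimal sketch, not part of the original notebook), the rounded numbers quoted above can be read off `soln.x` and the fitted curve evaluated with the `psi_new` function defined earlier; this assumes both objects are still in scope.

```python
import numpy as np

# Assumes `psi_new` and `soln` from the fitting cell above are in scope.
A, B, C, D = soln.x
print('Fitted parameters (A, B, C, D):', np.round(soln.x, 2))

# Evaluate the fitted SFRD (in log10, as plotted) at a few redshifts.
for z in (0.0, 2.0, 6.0):
    print(z, np.log10(psi_new(z, A, B, C, D)))
```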
# Loading new data sfrd1, sfrd_err1 = np.array([]), np.array([]) log_sfrd1, log_sfrd_err1 = np.array([]), np.array([]) zcen1, zdo1, zup1 = np.array([]), np.array([]), np.array([]) for i in range(len(ppr_uv1)): if ppr_uv1[i] != 'Hagen_et_al._2015': sfrd1 = np.hstack((sfrd1, sfrd_uv[i])) sfrd_err1 = np.hstack((sfrd_err1, sfrd_uv_err[i])) log_sfrd1 = np.hstack((log_sfrd1, log_sfrd_uv[i])) log_sfrd_err1 = np.hstack((log_sfrd_err1, log_sfrd_uv_err[i])) zcen1 = np.hstack((zcen1, zcen_uv[i])) zdo1 = np.hstack((zdo1, zdo[i])) zup1 = np.hstack((zup1, zup[i])) # Fitting new data # Negative likelihood function def min_log_likelihood1(x): model = psi_new(zcen1, x[0], x[1], x[2], x[3]) chi2 = (sfrd1 - model)/sfrd_err1 chi22 = np.sum(chi2**2) yy = 0.5*chi22 + np.sum(np.log(sfrd_err1)) return yy #xinit, pcov = cft(psi_new, zcen_uv, sfrd_uv, sigma=sfrd_uv_err) #xinit = np.array([0.015, 2.7, 2.9, 5.6]) xinit1 = np.array([0.01, 3., 3., 6.]) soln1 = mz(min_log_likelihood1, xinit1, method='L-BFGS-B') soln1 best_fit_fun1 = psi_new(znew, *soln1.x) log_best_fit1 = np.log10(best_fit_fun1) plt.figure(figsize=(16,9)) plt.plot(znew, log_best_fit1, label='Best fitted function', lw=2, c='silver') # Plotting Data for i in range(len(ppr_uv)): zc_uv, zp, zn, lg_sf, lg_sfe = np.array([]), np.array([]), np.array([]), np.array([]), np.array([]) for j in range(len(ppr_uv1)): if ppr_uv1[j] == ppr_uv[i]: zc_uv = np.hstack((zc_uv, zcen_uv[j])) lg_sf = np.hstack((lg_sf, log_sfrd_uv[j])) lg_sfe = np.hstack((lg_sfe, log_sfrd_uv_err[j])) zp = np.hstack((zp, zup[j])) zn = np.hstack((zn, zdo[j])) if ppr_uv[i] == 'Hagen_et_al._2015': continue else: plt.errorbar(zc_uv, lg_sf, xerr=[zn, zp], yerr=lg_sfe, c=cols[i], label=ppr_uv[i].replace('_',' ') + '; UV LF', fmt='o', mfc='white', mew=2) plt.xlabel('Redshift') plt.ylabel(r'$\log{\psi}$ ($M_\odot year^{-1} Mpc^{-3}$)') plt.ylim([-2.4, -1.2]) plt.xlim([0, 8.5]) plt.legend(loc='best') plt.grid()
_____no_output_____
MIT
Results/res1.ipynb
Jayshil/csfrd
Why do we care? Supply chains have grown to span the globe. Multiple functions are now combined (warehousing, inventory, transportation, demand planning and procurement). A supply chain runs within and across a firm, acting as a "bridge" and "shock absorber" from customers to suppliers; it has to adapt, adjust and be flexible, and predicting the future is hard. Using diesel fuel as an example: #2 diesel is used as fuel for truckload and less-than-truckload transportation.
!wget https://www.eia.gov/petroleum/gasdiesel/xls/pswrgvwall.xls -O ./data/pswrgvwall.xls import pandas as pd diesel = pd.read_excel("./data/pswrgvwall.xls", sheet_name=None) diesel.keys() diesel['Data 1'] %matplotlib inline no_head = diesel['Data 1'] no_head.columns = no_head.iloc[1] no_head = no_head.iloc[2:] no_head['Date'] = pd.to_datetime(no_head['Date']) no_head.set_index('Date', inplace=True) no_head.plot(title="Diesel price $USD over time", figsize=(20,12))
_____no_output_____
MIT
SC0x/Unit 1 - Supply Chain Management Overview.ipynb
fhk/MITx_CTL_SCx
Question: What is the impact of such variability on a supply chain? If the price were fixed you could design a supply chain to last 10 years. But it isn't, so you need to have a "shock absorber"; these uncertainties and types of factors need to be considered. By using a data-driven and metrics-based approach, organizations can pursue efficiencies that will pay dividends when prices are high and when they are low. Observation: supply chains have gained efficiencies such that their % contribution to GDP has been reduced. From the Deloitte report the category is sitting at around 3%. https://www2.deloitte.com/us/en/insights/economy/spotlight/economics-insights-analysis-07-2019.html
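To put a rough number on the price variability just discussed, here is a minimal sketch (not part of the original notebook) that summarizes week-over-week swings in the diesel series loaded above. It assumes the `no_head` dataframe from the earlier cell is in scope and simply uses its first price column, whichever region that happens to be.

```python
import pandas as pd

# Assumes `no_head` (weekly diesel prices with a Date index) from the cell above.
price = no_head.iloc[:, 0].astype(float).dropna()  # first price column, as an example

# Week-over-week percentage changes and a 52-week rolling volatility.
weekly_change = price.pct_change().dropna()
rolling_vol = weekly_change.rolling(52).std()

print("Price range: {:.2f} to {:.2f}".format(price.min(), price.max()))
print("Largest weekly move: {:.1%}".format(weekly_change.abs().max()))
print("Average 52-week volatility: {:.1%}".format(rolling_vol.mean()))
```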
from IPython.display import Image from IPython.core.display import HTML Image(url= "https://www2.deloitte.com/content/dam/insights/us/articles/5024_Economics-Spotlight-July2019/figures/5024_Fig2.jpg")
_____no_output_____
MIT
SC0x/Unit 1 - Supply Chain Management Overview.ipynb
fhk/MITx_CTL_SCx
Earth Engine Python API Colab Setup. This notebook demonstrates how to set up the Earth Engine Python API in Colab and provides several examples of how to print and visualize Earth Engine processed data. Import API and get credentials: the Earth Engine API is installed by default in Google Colaboratory, so it requires only importing and authenticating. These steps must be completed for each new Colab session, if you restart your Colab kernel, or if your Colab virtual machine is recycled due to inactivity. Import the API: run the following cell to import the API into your session.
import ee
_____no_output_____
CC0-1.0
ee_api_colab_setup.ipynb
pahdsn/SENSE_2020_GEE
Authenticate and initialize. Run the `ee.Authenticate` function to authenticate your access to Earth Engine servers and `ee.Initialize` to initialize it. Upon running the following cell you'll be asked to grant Earth Engine access to your Google account. Follow the instructions printed to the cell.
# Trigger the authentication flow. ee.Authenticate() # Initialize the library. ee.Initialize()
_____no_output_____
CC0-1.0
ee_api_colab_setup.ipynb
pahdsn/SENSE_2020_GEE
Test the API. Test the API by printing the elevation of Mount Everest.
# Print the elevation of Mount Everest. dem = ee.Image('USGS/SRTMGL1_003') xy = ee.Geometry.Point([86.9250, 27.9881]) elev = dem.sample(xy, 30).first().get('elevation').getInfo() print('Mount Everest elevation (m):', elev)
_____no_output_____
CC0-1.0
ee_api_colab_setup.ipynb
pahdsn/SENSE_2020_GEE
Map visualization. `ee.Image` objects can be displayed to notebook output cells. The following two examples demonstrate displaying a static image and an interactive map. Static image: the `IPython.display` module contains the `Image` function, which can display the results of a URL representing an image generated from a call to the Earth Engine `getThumbUrl` function. The following cell will display a thumbnail of the global elevation model.
# Import the Image function from the IPython.display module. from IPython.display import Image # Display a thumbnail of global elevation. Image(url = dem.updateMask(dem.gt(0)) .getThumbURL({'min': 0, 'max': 4000, 'dimensions': 512, 'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5']}))
_____no_output_____
CC0-1.0
ee_api_colab_setup.ipynb
pahdsn/SENSE_2020_GEE
Interactive map. The [`folium`](https://python-visualization.github.io/folium/) library can be used to display `ee.Image` objects on an interactive [Leaflet](https://leafletjs.com/) map. Folium has no default method for handling tiles from Earth Engine, so one must be defined and added to the `folium.Map` module before use. The following cell provides an example of adding a method for handling Earth Engine tiles and using it to display an elevation model on a Leaflet map.
# Import the Folium library. import folium # Define a method for displaying Earth Engine image tiles to folium map. def add_ee_layer(self, ee_image_object, vis_params, name): map_id_dict = ee.Image(ee_image_object).getMapId(vis_params) folium.raster_layers.TileLayer( tiles = map_id_dict['tile_fetcher'].url_format, attr = 'Map Data &copy; <a href="https://earthengine.google.com/">Google Earth Engine</a>', name = name, overlay = True, control = True ).add_to(self) # Add EE drawing method to folium. folium.Map.add_ee_layer = add_ee_layer # Set visualization parameters. vis_params = { 'min': 0, 'max': 4000, 'palette': ['006633', 'E5FFCC', '662A00', 'D8D8D8', 'F5F5F5']} # Create a folium map object. my_map = folium.Map(location=[20, 0], zoom_start=3, height=500) # Add the elevation model to the map object. my_map.add_ee_layer(dem.updateMask(dem.gt(0)), vis_params, 'DEM') # Add a layer control panel to the map. my_map.add_child(folium.LayerControl()) # Display the map. display(my_map)
_____no_output_____
CC0-1.0
ee_api_colab_setup.ipynb
pahdsn/SENSE_2020_GEE
Chart visualization. Some Earth Engine functions produce tabular data that can be plotted by data visualization packages such as `matplotlib`. The following example demonstrates the display of tabular data from Earth Engine as a scatter plot. See [Charting in Colaboratory](https://colab.sandbox.google.com/notebooks/charts.ipynb) for more information.
# Import the matplotlib.pyplot module. import matplotlib.pyplot as plt # Fetch a Landsat image. img = ee.Image('LANDSAT/LT05/C01/T1_SR/LT05_034033_20000913') # Select Red and NIR bands, scale them, and sample 500 points. samp_fc = img.select(['B3','B4']).divide(10000).sample(scale=30, numPixels=500) # Arrange the sample as a list of lists. samp_dict = samp_fc.reduceColumns(ee.Reducer.toList().repeat(2), ['B3', 'B4']) samp_list = ee.List(samp_dict.get('list')) # Save server-side ee.List as a client-side Python list. samp_data = samp_list.getInfo() # Display a scatter plot of Red-NIR sample pairs using matplotlib. plt.scatter(samp_data[0], samp_data[1], alpha=0.2) plt.xlabel('Red', fontsize=12) plt.ylabel('NIR', fontsize=12) plt.show()
_____no_output_____
CC0-1.0
ee_api_colab_setup.ipynb
pahdsn/SENSE_2020_GEE
Creating your own dataset from Google Images. *by: Francisco Ingham and Jeremy Howard. Inspired by [Adrian Rosebrock](https://www.pyimagesearch.com/2017/12/04/how-to-create-a-deep-learning-dataset-using-google-images/)* In this tutorial we will see how to easily create an image dataset through Google Images. **Note**: you will have to repeat these steps for any new category you want to Google (e.g. once for dogs and once for cats).
from fastai.vision import *
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
Get a list of URLs. Search and scroll: go to [Google Images](http://images.google.com) and search for the images you are interested in. The more specific you are in your Google Search, the better the results and the less manual pruning you will have to do. Scroll down until you've seen all the images you want to download, or until you see a button that says 'Show more results'. All the images you scrolled past are now available to download. To get more, click on the button, and continue scrolling. The maximum number of images Google Images shows is 700. It is a good idea to put things you want to exclude into the search query, for instance if you are searching for the Eurasian wolf, "canis lupus lupus", it might be a good idea to exclude other variants: "canis lupus lupus" -dog -arctos -familiaris -baileyi -occidentalis. You can also limit your results to show only photos by clicking on Tools and selecting Photos from the Type dropdown. Download into file: now you must run some JavaScript code in your browser which will save the URLs of all the images you want for your dataset. Press Ctrl+Shift+J in Windows/Linux or Cmd+Opt+J on Mac, and a small window, the JavaScript 'Console', will appear. That is where you will paste the JavaScript commands. You will need to get the urls of each of the images. Before running the following commands, you may want to disable ad blocking extensions (uBlock, AdBlockPlus etc.) in Chrome; otherwise the window.open() command doesn't work. Then you can run the following commands: ```javascript urls = Array.from(document.querySelectorAll('.rg_di .rg_meta')).map(el=>JSON.parse(el.textContent).ou);window.open('data:text/csv;charset=utf-8,' + escape(urls.join('\n')));``` Create directory and upload urls file into your server: choose an appropriate name for your labeled images. You can run these steps multiple times to create different labels.
folder = 'black' file = 'urls_black.csv' folder = 'teddys' file = 'urls_teddys.csv' folder = 'grizzly' file = 'urls_grizzly.csv'
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
You will need to run this cell once for each category.
path = Path('data/bears') dest = path/folder dest.mkdir(parents=True, exist_ok=True) path.ls()
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
Finally, upload your urls file. You just need to press 'Upload' in your working directory and select your file, then click 'Upload' for each of the displayed files. ![uploaded file](images/download_images/upload.png) Download images: now you will need to download your images from their respective urls. fast.ai has a function that allows you to do just that. You just have to specify the urls filename as well as the destination folder, and this function will download and save all images that can be opened. If they have some problem being opened, they will not be saved. Let's download our images! Notice you can choose a maximum number of images to be downloaded. In this case we will not download all the urls. You will need to run this line once for every category.
classes = ['teddys','grizzly','black'] download_images(path/file, dest, max_pics=200) # If you have problems download, try with `max_workers=0` to see exceptions: download_images(path/file, dest, max_pics=20, max_workers=0)
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
Then we can remove any images that can't be opened:
for c in classes: print(c) verify_images(path/c, delete=True, max_size=500)
teddys
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
View data
np.random.seed(42) data = ImageDataBunch.from_folder(path, train=".", valid_pct=0.2, ds_tfms=get_transforms(), size=224, num_workers=4).normalize(imagenet_stats) # If you already cleaned your data, run this cell instead of the one before # np.random.seed(42) # data = ImageDataBunch.from_csv(path, folder=".", valid_pct=0.2, csv_labels='cleaned.csv', # ds_tfms=get_transforms(), size=224, num_workers=4).normalize(imagenet_stats)
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
Good! Let's take a look at some of our pictures then.
data.classes data.show_batch(rows=3, figsize=(7,8)) data.classes, data.c, len(data.train_ds), len(data.valid_ds)
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
Train model
learn = cnn_learner(data, models.resnet34, metrics=error_rate) learn.fit_one_cycle(4) learn.save('stage-1') learn.unfreeze() learn.lr_find() learn.recorder.plot() learn.fit_one_cycle(2, max_lr=slice(3e-5,3e-4)) learn.save('stage-2')
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
Interpretation
learn.load('stage-2'); interp = ClassificationInterpretation.from_learner(learn) interp.plot_confusion_matrix()
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
Cleaning Up. Some of our top losses aren't due to bad performance by our model: there are images in our data set that shouldn't be there. Using the `ImageCleaner` widget from `fastai.widgets` we can prune our top losses, removing photos that don't belong.
from fastai.widgets import *
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
First we need to get the file paths from our top_losses. We can do this with `.from_toplosses`. We then feed the top losses indexes and the corresponding dataset to `ImageCleaner`. Notice that the widget will not delete images directly from disk, but it will create a new csv file `cleaned.csv` from which you can create a new ImageDataBunch with the corrected labels to continue training your model. In order to clean the entire set of images, we need to create a new dataset without the split. The video lecture demonstrated the use of the `ds_type` param, which no longer has any effect. See [the thread](https://forums.fast.ai/t/duplicate-widget/30975/10) for more details.
db = (ImageList.from_folder(path) .no_split() .label_from_folder() .transform(get_transforms(), size=224) .databunch() ) # If you already cleaned your data using indexes from `from_toplosses`, # run this cell instead of the one before to proceed with removing duplicates. # Otherwise all the results of the previous step would be overwritten by # the new run of `ImageCleaner`. # db = (ImageList.from_csv(path, 'cleaned.csv', folder='.') # .no_split() # .label_from_df() # .transform(get_transforms(), size=224) # .databunch() # )
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
Then we create a new learner to use our new databunch with all the images.
learn_cln = cnn_learner(db, models.resnet34, metrics=error_rate) learn_cln.load('stage-2'); ds, idxs = DatasetFormatter().from_toplosses(learn_cln)
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
Make sure you're running this notebook in Jupyter Notebook, not Jupyter Lab. That is accessible via [/tree](/tree), not [/lab](/lab). Running the `ImageCleaner` widget in Jupyter Lab is [not currently supported](https://github.com/fastai/fastai/issues/1539).
ImageCleaner(ds, idxs, path)
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
Flag photos for deletion by clicking 'Delete'. Then click 'Next Batch' to delete the flagged photos and keep the rest in that row. `ImageCleaner` will show you a new row of images until there are no more to show; in this case, the widget will show you images until there are none left from `top_losses` (`ImageCleaner(ds, idxs)`). You can also find duplicates in your dataset and delete them! To do this, you need to run `.from_similars` to get the potential duplicates' ids and then run `ImageCleaner` with `duplicates=True`. The API works in a similar way as with misclassified images: just choose the ones you want to delete and click 'Next Batch' until there are no more images left. Make sure to recreate the databunch and `learn_cln` from the `cleaned.csv` file; otherwise the file would be overwritten from scratch, losing all the results from cleaning the data from toplosses.
ds, idxs = DatasetFormatter().from_similars(learn_cln) ImageCleaner(ds, idxs, path, duplicates=True)
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
Remember to recreate your ImageDataBunch from your `cleaned.csv` to include the changes you made in your data! Putting your model in production: first things first, let's export the content of our `Learner` object for production:
learn.export()
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
This will create a file named 'export.pkl' in the directory where we were working that contains everything we need to deploy our model (the model, the weights but also some metadata like the classes or the transforms/normalization used). You probably want to use CPU for inference, except at massive scale (and you almost certainly don't need to train in real-time). If you don't have a GPU that happens automatically. You can test your model on CPU like so:
defaults.device = torch.device('cpu') img = open_image(path/'black'/'00000021.jpg') img
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
We create our `Learner` in the production environment like this; just make sure that `path` contains the file 'export.pkl' from before.
learn = load_learner(path) pred_class,pred_idx,outputs = learn.predict(img) pred_class
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
So you might create a route something like this ([thanks](https://github.com/simonw/cougar-or-not) to Simon Willison for the structure of this code):```python @app.route("/classify-url", methods=["GET"]) async def classify_url(request): bytes = await get_bytes(request.query_params["url"]) img = open_image(BytesIO(bytes)) _,_,losses = learner.predict(img) return JSONResponse({ "predictions": sorted( zip(cat_learner.data.classes, map(float, losses)), key=lambda p: p[1], reverse=True ) })```(This example is for the [Starlette](https://www.starlette.io/) web app toolkit.) Things that can go wrong: most of the time things will train fine with the defaults, and there's not much you really need to tune (despite what you've heard!). The most likely culprits are the learning rate and the number of epochs. Learning rate (LR) too high
learn = cnn_learner(data, models.resnet34, metrics=error_rate) learn.fit_one_cycle(1, max_lr=0.5)
Total time: 00:13 epoch train_loss valid_loss error_rate 1 12.220007 1144188288.000000 0.765957 (00:13)
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
Learning rate (LR) too low
learn = cnn_learner(data, models.resnet34, metrics=error_rate)
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
Previously we had this result:```Total time: 00:57 epoch train_loss valid_loss error_rate 1 1.030236 0.179226 0.028369 (00:14) 2 0.561508 0.055464 0.014184 (00:13) 3 0.396103 0.053801 0.014184 (00:13) 4 0.316883 0.050197 0.021277 (00:15)```
learn.fit_one_cycle(5, max_lr=1e-5) learn.recorder.plot_losses()
_____no_output_____
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
As well as taking a really long time, it's getting too many looks at each image, so it may overfit. Too few epochs
learn = cnn_learner(data, models.resnet34, metrics=error_rate, pretrained=False) learn.fit_one_cycle(1)
Total time: 00:14 epoch train_loss valid_loss error_rate 1 0.602823 0.119616 0.049645 (00:14)
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
Too many epochs
np.random.seed(42) data = ImageDataBunch.from_folder(path, train=".", valid_pct=0.9, bs=32, ds_tfms=get_transforms(do_flip=False, max_rotate=0, max_zoom=1, max_lighting=0, max_warp=0 ),size=224, num_workers=4).normalize(imagenet_stats) learn = cnn_learner(data, models.resnet50, metrics=error_rate, ps=0, wd=0) learn.unfreeze() learn.fit_one_cycle(40, slice(1e-6,1e-4))
Total time: 06:39 epoch train_loss valid_loss error_rate 1 1.513021 1.041628 0.507326 (00:13) 2 1.290093 0.994758 0.443223 (00:09) 3 1.185764 0.936145 0.410256 (00:09) 4 1.117229 0.838402 0.322344 (00:09) 5 1.022635 0.734872 0.252747 (00:09) 6 0.951374 0.627288 0.192308 (00:10) 7 0.916111 0.558621 0.184982 (00:09) 8 0.839068 0.503755 0.177656 (00:09) 9 0.749610 0.433475 0.144689 (00:09) 10 0.678583 0.367560 0.124542 (00:09) 11 0.615280 0.327029 0.100733 (00:10) 12 0.558776 0.298989 0.095238 (00:09) 13 0.518109 0.266998 0.084249 (00:09) 14 0.476290 0.257858 0.084249 (00:09) 15 0.436865 0.227299 0.067766 (00:09) 16 0.457189 0.236593 0.078755 (00:10) 17 0.420905 0.240185 0.080586 (00:10) 18 0.395686 0.255465 0.082418 (00:09) 19 0.373232 0.263469 0.080586 (00:09) 20 0.348988 0.258300 0.080586 (00:10) 21 0.324616 0.261346 0.080586 (00:09) 22 0.311310 0.236431 0.071429 (00:09) 23 0.328342 0.245841 0.069597 (00:10) 24 0.306411 0.235111 0.064103 (00:10) 25 0.289134 0.227465 0.069597 (00:09) 26 0.284814 0.226022 0.064103 (00:09) 27 0.268398 0.222791 0.067766 (00:09) 28 0.255431 0.227751 0.073260 (00:10) 29 0.240742 0.235949 0.071429 (00:09) 30 0.227140 0.225221 0.075092 (00:09) 31 0.213877 0.214789 0.069597 (00:09) 32 0.201631 0.209382 0.062271 (00:10) 33 0.189988 0.210684 0.065934 (00:09) 34 0.181293 0.214666 0.073260 (00:09) 35 0.184095 0.222575 0.073260 (00:09) 36 0.194615 0.229198 0.076923 (00:10) 37 0.186165 0.218206 0.075092 (00:09) 38 0.176623 0.207198 0.062271 (00:10) 39 0.166854 0.207256 0.065934 (00:10) 40 0.162692 0.206044 0.062271 (00:09)
Apache-2.0
nbs/dl1/lesson2-download.ipynb
piggybox/course-v3
Observations and Insights
# Dependencies and Setup import matplotlib.pyplot as plt import pandas as pd import scipy.stats as stats import numpy as np from scipy.stats import linregress # Study data files mouse_metadata_path = "data/Mouse_metadata.csv" study_results_path = "data/Study_results.csv" # Read the mouse data and the study results mouse_metadata = pd.read_csv(mouse_metadata_path) study_results = pd.read_csv(study_results_path) # Combine the data into a single dataset study_data_complete = pd.merge(study_results, mouse_metadata, how="left", on="Mouse ID") # Display the data table for preview study_data_complete.head() # Checking the number of mice. study_data_complete['Mouse ID'].nunique() # Optional: Get all the data for the duplicate mouse ID. study_data_complete[study_data_complete["Mouse ID"] == "g989"] # Create a clean DataFrame by dropping the duplicate mouse by its ID. clean = study_data_complete[study_data_complete["Mouse ID"] != "g989"] clean.head() # Checking the number of mice in the clean DataFrame. clean['Mouse ID'].nunique()
_____no_output_____
ADSL
Pymaceuticals/.ipynb_checkpoints/keely_pymaceuticals_starter-checkpoint.ipynb
keelywright1/matplotlib-challenge
Summary Statistics
clean # Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume # for each regimen grp = clean.groupby('Drug Regimen')['Tumor Volume (mm3)'] pd.DataFrame({'mean':grp.mean(),'median':grp.median(),'var':grp.var(),'std':grp.std(),'sem':grp.sem()}) # Generate a summary statistics table of mean, median, variance, standard deviation, and SEM of the tumor volume for each regimen # Using the aggregation method, produce the same summary statistics in a single line grp.agg(['mean','median','var','std','sem'])
_____no_output_____
ADSL
Pymaceuticals/.ipynb_checkpoints/keely_pymaceuticals_starter-checkpoint.ipynb
keelywright1/matplotlib-challenge
Bar and Pie Charts
# Generate a bar plot showing the total number of measurements taken on each drug regimen using pandas. # plot the mouse counts for each drug using pandas plt.figure(figsize=[15,6]) measurements = clean.groupby('Drug Regimen').Sex.count() measurements.plot(kind='bar',rot=45,title='Total Measurements per Drug') plt.ylabel('Measurements') plt.show() measurements measurements.values # Generate a bar plot showing the total number of measurements taken on each drug regimen using pyplot. # plot the bar graph of mice count per drug regimen plt.figure(figsize=[15,6]) plt.bar(measurements.index,measurements.values) plt.title('Total Measurements per Drug') plt.ylabel('Measurements') plt.xlabel('Drug regimen') plt.show() pd.DataFrame.plot() clean.Sex.value_counts().index # Generate a pie plot showing the distribution of female versus male mice using pandas clean.Sex.value_counts().plot.pie(autopct='%1.1f%%', explode=[.1,0],shadow=True) plt.show() # Generate a pie plot showing the distribution of female versus male mice using pyplot plt.pie(clean.Sex.value_counts(), autopct='%1.1f%%', labels=clean.Sex.value_counts().index,explode=[.1,0],shadow=True) plt.show()
_____no_output_____
ADSL
Pymaceuticals/.ipynb_checkpoints/keely_pymaceuticals_starter-checkpoint.ipynb
keelywright1/matplotlib-challenge
Quartiles, Outliers and Boxplots
# Reset index so drug regimen column persists after inner merge # Start by getting the last (greatest) timepoint for each mouse timemax = clean.groupby('Mouse ID').max().Timepoint.reset_index() # Merge this group df with the original dataframe to get the tumor volume at the last timepoint tumormax = timemax.merge(clean,on=['Mouse ID','Timepoint']) # show all rows of data tumormax # get mouse count per drug tumormax.groupby('Drug Regimen').Timepoint.count() # Calculate the final tumor volume of each mouse across four of the treatment regimens: # Capomulin, Ramicane, Infubinol, and Ceftamin # Put treatments into a list for for loop (and later for plot labels) drugs = ['Capomulin', 'Ramicane', 'Infubinol', 'Ceftamin'] # Create empty list to fill with tumor vol data (for plotting) tumor_list = [] # set drug regimen as index and drop associated regimens while only keeping Capomulin, Ramicane, Infubinol, and Ceftamin for drug in drugs: # add subset # tumor volumes for each Drug Regimen # Locate the rows which contain mice on each drug and get the tumor volumes tumor_data = tumormax[tumormax['Drug Regimen'] == drug]['Tumor Volume (mm3)'] # Calculate the IQR and quantitatively determine if there are any potential outliers. iqr = tumor_data.quantile(.75) - tumor_data.quantile(.25) # Determine outliers using upper and lower bounds lower_bound = tumor_data.quantile(.25) - (1.5*iqr) upper_bound = tumor_data.quantile(.75) + (1.5*iqr) tumor_list.append(tumor_data) # isolated view of just capomulin for later use print(f'{drug} potential outliers: {tumor_data[(tumor_data<lower_bound)|(tumor_data>upper_bound)]}') # Generate a box plot of the final tumor volume of each mouse across four regimens of interest plt.figure(figsize=[10,5]) #set drugs to be analyzed, colors for the plots, and markers plt.boxplot(tumor_list,labels=drugs, flierprops={'markerfacecolor':'red','markersize':30}) plt.ylabel('Final Tumor Valume (mm3)') plt.xticks(fontsize=18) plt.show()
_____no_output_____
ADSL
Pymaceuticals/.ipynb_checkpoints/keely_pymaceuticals_starter-checkpoint.ipynb
keelywright1/matplotlib-challenge
Line and Scatter Plots
# Generate a line plot of tumor volume vs. time point for a mouse treated with Capomulin #change index to mouse ID #remove other mouse IDs so only s185 shows #set the x-axis equal to the Timepoint and y-axis to Tumor Volume plt.figure(figsize=[15,6]) clean[(clean['Drug Regimen']=='Capomulin')&(clean['Mouse ID']=='s185')]\ .set_index('Timepoint')['Tumor Volume (mm3)'].plot() plt.ylabel('Tumor Volume (mm3)') plt.title('Tumor Volume vs. Timepoint for Mouse s185') plt.grid() plt.show() # Generate a scatter plot of average tumor volume vs. mouse weight for the Capomulin regimen # group by mouse ID to find average tumor volume tumor_weight = clean[clean['Drug Regimen']=='Capomulin'].groupby('Mouse ID').mean()\ .set_index('Weight (g)')['Tumor Volume (mm3)'] # establish x-axis value for the weight of the mice # produce scatter plot of the data plt.figure(figsize=[15,6]) plt.scatter(tumor_weight.index,tumor_weight.values) plt.xlabel('Weight (g) Average') plt.ylabel('Tumor Volume (mm3) Average') plt.title('Capomulin Treatment Weight vs Tumor Volume Average') plt.show()
_____no_output_____
ADSL
Pymaceuticals/.ipynb_checkpoints/keely_pymaceuticals_starter-checkpoint.ipynb
keelywright1/matplotlib-challenge
Correlation and Regression
tumor_weight.head() # Calculate the correlation coefficient and linear regression model # for mouse weight and average tumor volume for the Capomulin regimen #establish x and y values and find St. Pearson Correlation Coefficient for Mouse Weight and Tumor Volume Avg linear_corr = stats.pearsonr(tumor_weight.index,tumor_weight.values) # establish linear regression values model = linregress(tumor_weight.index,tumor_weight.values) # linear regression line y_values=tumor_weight.index*model[0]+model[1] # scatter plot of the data plt.figure(figsize=[15,6]) plt.plot(tumor_weight.index,y_values,color='red') plt.xlabel('Weight (g) Average') plt.ylabel('Tumor Volume (mm3) Average') plt.title('Capomulin Treatment Weight vs Tumor Volume Average') plt.scatter(tumor_weight.index,tumor_weight.values) plt.show() #print St. Pearson Correlation Coefficient print(f'The correlation between mouse weight and average tumor volume is {linear_corr[0]:.2f}')
_____no_output_____
ADSL
Pymaceuticals/.ipynb_checkpoints/keely_pymaceuticals_starter-checkpoint.ipynb
keelywright1/matplotlib-challenge
Question Answering. Download and Prepare Data
!wget https://dl.fbaipublicfiles.com/MLQA/MLQA_V1.zip !unzip MLQA_V1.zip
_____no_output_____
MIT
tasks/question_answering/Question_Answering.ipynb
ARBML/tkseem
Prepare Data
import json def read_data(file_path, max_context_size = 100): # Read dataset with open(file_path) as f: data = json.load(f) contexts = [] questions = [] answers = [] labels = [] for i in range(len(data['data'])): paragraph_object = data['data'][i]["paragraphs"] for j in range(len(paragraph_object)): context_object = paragraph_object[j] context_text = context_object['context'] if len(context_text.split()) > max_context_size: continue for k in range(len(context_object['qas'])): question_object = context_object['qas'][k] question_text = question_object['question'] answer_object = question_object['answers'][0] answer_text = answer_object['text'] answer_start = answer_object['answer_start'] answer_end = answer_start + len(answer_text) answer_start = len(context_text[:answer_start].split()) answer_end = answer_start + len(answer_text.split()) if answer_end >= max_context_size: answer_end = max_context_size -1 labels.append([answer_start, answer_end]) questions.append(question_text) contexts.append(context_text) answers.append(answer_text) with open('train_contexts.txt', 'w') as f: f.write(('\n').join(contexts)) with open('train_questions.txt', 'w') as f: f.write(('\n').join(questions)) return {'qas':questions, 'ctx':contexts, 'ans':answers, 'lbl':labels} train_data = read_data('MLQA_V1/test/test-context-ar-question-ar.json') for i in range(10): print(train_data['qas'][i]) print(train_data['ctx'][i]) print(train_data['ans'][i]) print("==============")
ما الذي جعل شريط الاختبار للطائرة؟ بحيرة جرووم كانت تستخدم للقصف المدفعي والتدريب علي المدفعية خلال الحرب العالمية الثانية، ولكن تم التخلي عنها بعد ذلك حتى نيسان / أبريل 1955، عندما تم اختياره من قبل فريق لوكهيد اسكنك كموقع مثالي لاختبار لوكهيد يو-2 - 2 طائرة التجسس. قاع البحيرة قدم الشريط المثالية التي يمكن عمل اختبارات الطائرات المزعجة، ارتفاع سلسلة جبال وادي الإيمجرانت ومحيط NTS يحمي موقع الاختبار من أعين المتطفلين والتدخل الخارجي. قاع البحيرة ============== من كان يرافق طائرة يو -2 عند التسليم؟ شيدت لوكهيد قاعدة مؤقتة في الموقع، ثم عرفت باسم الموقع الثاني أو "المزرعة"، التي تتألف من أكثر بقليل من بضعة مخابئ، وحلقات عمل ومنازل متنقلة لفريقها الصغير. في ثلاثة أشهر فقط شيد مدرج طوله 5000 ودخل الخدمة بحلول تموز / يوليو 1955. حصلت المزرعة على تسليم أول يو 2 في 24 يوليو، 1955 من بوربانك على سي 124 جلوب ماستر الثاني طائرة شحن، يرافقه فنيي وكهيد على دي سي 3. انطلق أول يو - 2 من الجرووم في 4 أغسطس، 1955. بدأت عمليات تحليق أسطول يو 2 تحت سيطرة وكالة المخابرات المركزية الأمريكية في الأجواء السوفياتية بحلول منتصف عام 1956. فنيي وكهيد ============== ما هو نوع العمل الذي يواجهه الطيارون العسكريون إذا انتقلوا إلى \n مناطق محظورة؟ على عكس الكثير من حدود نيليس، والمنطقة المحيطة بها في البحيرة بشكل دائم خارج الحدود سواء على المدنيين وطبيعية حركة الطيران العسكري. محطات الرادار لحماية المنطقة، والأفراد غير مصرح بها سرعان ما تطرد. حتى طيارين التدريب العسكري في خطر NAFR إجراءات التأديبية إذا تواجدوا بطريق الخطأ في "المربع"الحظور للجرووم والأجواء المحيطة بها. إجراءات التأديبية ============== متى تم نشر مقال مجلة الطيران؟ في كانون الثاني 2006، نشر مؤرخ الفضاء دواين أ يوم مقال نشر في المجلة الإلكترونية الطيران والفضاء استعراض بعنوان "رواد الفضاء والمنطقة 51 : حادث سكايلاب". المقال كان مبنيا على مذكرة مكتوبة في عام 1974 إلى مديروكالة المخابرات المركزية يام كولبي من قبل عملاء مجهولين لوكالة الاستخبارات المركزية. وذكرت المذكرة أن رواد الفضاء على متن سكايلاب 4، وذلك كجزء من برنامج أوسع نطاقا، عن غير قصد بالتقاط صور لموقع الذي قالت المذكرة : كانون الثاني 2006 ============== ما هو الموقع الذي أصبح مركزاً للأطباق الطائرة ونظريات المؤامرة؟ لطبيعتها السرية وفيما لا شك فيه بحوث تصنيف الطائرات، إلى جانب تقارير عن الظواهر غير العادية، قد أدت الي ان تصبح منطقة 51 مركزا للاطباق الطائرة الحديثة ونظريات المؤامرة. بعض الأنشطة المذكورة في مثل هذه النظريات في منطقة 51 تشمل ما يلي : منطقة 51 ============== ما كان محور مؤامرة الجسم الغريب الحديثة؟\n لطبيعتها السرية وفيما لا شك فيه بحوث تصنيف الطائرات، إلى جانب تقارير عن الظواهر غير العادية، قد أدت الي ان تصبح منطقة 51 مركزا للاطباق الطائرة الحديثة ونظريات المؤامرة. بعض الأنشطة المذكورة في مثل هذه النظريات في منطقة 51 تشمل ما يلي : منطقة 51 ============== مالذي يُظن بأنه قد تم بنائه في روزويل؟ التخزين، والفحص، والهندسة العكسية للمركبة الفضائية الغريبة المحطمة (بما في ذلك مواد يفترض ان تعافى في روزويل)، ودراسة شاغليها (حية أو ميتة)، وصناعة الطائرات على أساس التكنولوجيا الغريبة. صناعة الطائرات على أساس التكنولوجيا الغريبة ============== متى يقوم Qos بالتفاوض على كيفية عمل الشبكة؟ ويمكن أن تتوافق الشبكة أو البروتوكول الذي يدعم جودة الخدمات على عقد المرور مع تطبيق البرمجيات والقدرة الاحتياطية في عقد الشبكة، على سبيل المثال خلال مرحلة إقامة الدورات. وهي يمكن أن تحقق رصدا لمستوى الأداء خلال الدورة، على سبيل المثال معدل البيانات والتأخير، والتحكم ديناميكيا عن طريق جدولة الأولويات في عقد الشبكة. وقد تفرج عن القدرة الاحتياطية خلال مرحلة الهدم. 
مرحلة إقامة الدورات ============== ما هو أحد الشروط للتجارة الشبكية المتنوعة؟ جودة الخدمة قد تكون مطلوبة لأنواع معينة من حركة مرور الشبكة، على سبيل المثال : جودة الخدمة ============== كم عدد قوائم الانتظار الموجودة على أجهزة توجيه المختلفة؟\n الموجهات لدعم DiffServ استخدام قوائم متعددة للحزم في انتظار انتقال من عرض النطاق الترددي مقيدة (على سبيل المثال، منطقة واسعة) واجهات. راوتر الباعة يوفر قدرات مختلفة لتكوين هذا السلوك، لتشمل عددا من قوائم معتمدة، والأولويات النسبية لقوائم الانتظار، وعرض النطاق الترددي المخصصة لكل قائمة انتظار. متعددة ==============
MIT
tasks/question_answering/Question_Answering.ipynb
ARBML/tkseem
Imports
import re import nltk import time import numpy as np import tkseem as tk import tensorflow as tf import matplotlib.ticker as ticker import matplotlib.pyplot as plt
_____no_output_____
MIT
tasks/question_answering/Question_Answering.ipynb
ARBML/tkseem
Tokenization
qa_tokenizer = tk.WordTokenizer() qa_tokenizer.train('train_questions.txt') print('Vocab size ', qa_tokenizer.vocab_size) cx_tokenizer = tk.WordTokenizer() cx_tokenizer.train('train_contexts.txt') print('Vocab size ', cx_tokenizer.vocab_size) train_inp_data = qa_tokenizer.encode_sentences(train_data['qas']) train_tar_data = cx_tokenizer.encode_sentences(train_data['ctx']) train_tar_lbls = train_data['lbl'] train_inp_data.shape, train_tar_data.shape
Training WordTokenizer ... Vocab size 8883 Training WordTokenizer ... Vocab size 10000
MIT
tasks/question_answering/Question_Answering.ipynb
ARBML/tkseem
Create Dataset
BATCH_SIZE = 64 BUFFER_SIZE = len(train_inp_data) dataset = tf.data.Dataset.from_tensor_slices((train_inp_data, train_tar_data, train_tar_lbls)).shuffle(BUFFER_SIZE) dataset = dataset.batch(BATCH_SIZE, drop_remainder=True)
_____no_output_____
MIT
tasks/question_answering/Question_Answering.ipynb
ARBML/tkseem
Create Encoder and Decoder
class Encoder(tf.keras.Model): def __init__(self, vocab_size, embedding_dim, enc_units, batch_sz): super(Encoder, self).__init__() self.batch_sz = batch_sz self.enc_units = enc_units self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim) self.gru = tf.keras.layers.GRU(self.enc_units, recurrent_initializer='glorot_uniform') def call(self, x, hidden): x = self.embedding(x) output = self.gru(x, initial_state = hidden) return output def initialize_hidden_state(self): return tf.zeros((self.batch_sz, self.enc_units)) class Decoder(tf.keras.Model): def __init__(self, vocab_size, embedding_dim, dec_units, output_sz): super(Decoder, self).__init__() self.dec_units = dec_units self.embedding_dim = embedding_dim self.output_sz = output_sz self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim) self.gru = tf.keras.layers.GRU(self.dec_units, return_sequences=False, recurrent_initializer='glorot_uniform') self.fc11 = tf.keras.layers.Dense(embedding_dim) self.fc12 = tf.keras.layers.Dense(output_sz) self.fc21 = tf.keras.layers.Dense(embedding_dim) self.fc22 = tf.keras.layers.Dense(output_sz) def call(self, x, hidden): x = self.embedding(x) x = self.gru(x, initial_state = hidden) x1 = self.fc11(x) x2 = self.fc21(x) x1 = self.fc12(x1) x2 = self.fc22(x2) return [x1, x2] def loss_fn(true, pred): cross_entropy = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True) return (cross_entropy(true[:,0:1], pred[0]) + cross_entropy(true[:,1:2], pred[1]))/2
_____no_output_____
MIT
tasks/question_answering/Question_Answering.ipynb
ARBML/tkseem
Training
units = 1024 embedding_dim = 256 max_length_inp = train_inp_data.shape[1] max_length_tar = train_tar_data.shape[1] vocab_tar_size = cx_tokenizer.vocab_size vocab_inp_size = qa_tokenizer.vocab_size steps_per_epoch = len(train_inp_data) // BATCH_SIZE decoder = Decoder(vocab_tar_size, embedding_dim, units, max_length_tar) encoder = Encoder(vocab_inp_size, embedding_dim, units, BATCH_SIZE) optim = tf.optimizers.Adam() epochs = 25 for epoch in range(epochs): enc_hidden = encoder.initialize_hidden_state() epoch_loss = 0 for idx, (inp, tar, true) in enumerate(dataset): with tf.GradientTape() as tape: hidden = encoder(inp, enc_hidden) pred = decoder(tar, hidden) loss = loss_fn(true, pred) variables = decoder.trainable_variables + encoder.trainable_variables gradients = tape.gradient(loss, variables) optim.apply_gradients(zip(gradients, variables)) epoch_loss += loss.numpy() print(f"Epoch {epoch} loss: {epoch_loss/steps_per_epoch:.3f}")
Epoch 0 loss: 4.386 Epoch 1 loss: 4.264 Epoch 2 loss: 4.238 Epoch 3 loss: 4.105 Epoch 4 loss: 3.932 Epoch 5 loss: 3.758 Epoch 6 loss: 3.643 Epoch 7 loss: 3.548 Epoch 8 loss: 3.456 Epoch 9 loss: 3.382 Epoch 10 loss: 3.285 Epoch 11 loss: 3.215 Epoch 12 loss: 3.141 Epoch 13 loss: 3.047 Epoch 14 loss: 2.916 Epoch 15 loss: 2.831 Epoch 16 loss: 2.748 Epoch 17 loss: 2.614 Epoch 18 loss: 2.462 Epoch 19 loss: 2.306 Epoch 20 loss: 2.126 Epoch 21 loss: 1.944 Epoch 22 loss: 1.770 Epoch 23 loss: 1.637 Epoch 24 loss: 1.414
MIT
tasks/question_answering/Question_Answering.ipynb
ARBML/tkseem
Evaluation
def answer(question_txt, context_txt, answer_txt_tru): question = qa_tokenizer.encode_sentences([question_txt], out_length = max_length_inp) context = cx_tokenizer.encode_sentences([context_txt], out_length = max_length_tar) question = tf.convert_to_tensor(question) context = tf.convert_to_tensor(context) result = '' hidden = [tf.zeros((1, units))] enc_hidden = encoder(question, hidden) pred = decoder(context, enc_hidden) start = tf.argmax(pred[0], axis = -1).numpy()[0] end = tf.argmax(pred[1], axis = -1).numpy()[0] if start >= len(context_txt.split()): start = len(context_txt.split()) - 1 if end >= len(context_txt.split()): end = len(context_txt.split()) - 1 # if one word prediction if end == start: end += 1 answer_txt = (' ').join(context_txt.split()[start:end]) print("Question : ", question_txt) print("Context : ",context_txt) print("Pred Answer : ",answer_txt) print("True Answer : ", answer_txt_tru) print("======================") answer("في أي عام توفي وليام ؟", "توفي وليام في عام 1990", "1990") answer("ماهي عاصمة البحرين ؟", "عاصمة البحرين هي المنامة", "المنامة") answer("في أي دولة ولد جون ؟", "ولد في فرنسا عام 1988", "فرنسا") answer("أين تركت الهاتف ؟", "تركت الهاتف فوق الطاولة", "فوق الطاولة")
Question : في أي عام توفي وليام ؟ Context : توفي وليام في عام 1990 Pred Answer : 1990 True Answer : 1990 ====================== Question : ماهي عاصمة البحرين ؟ Context : عاصمة البحرين هي المنامة Pred Answer : المنامة True Answer : المنامة ====================== Question : في أي دولة ولد جون ؟ Context : ولد في فرنسا عام 1988 Pred Answer : 1988 True Answer : فرنسا ====================== Question : أين تركت الهاتف ؟ Context : تركت الهاتف فوق الطاولة Pred Answer : الطاولة True Answer : فوق الطاولة ======================
MIT
tasks/question_answering/Question_Answering.ipynb
ARBML/tkseem
Single gene name
geneinfo('USP4')
_____no_output_____
MIT
example.ipynb
kaspermunch/geneinfo
List of names
geneinfo(['LARS2', 'XCR1'])
_____no_output_____
MIT
example.ipynb
kaspermunch/geneinfo
Get all protein coding genes in a (hg38) region
for gene in mg.query('q=chr2:49500000-50000000 AND type_of_gene:protein-coding', species='human', fetch_all=True): geneinfo(gene['symbol'])
Fetching 4 gene(s) . . .
MIT
example.ipynb
kaspermunch/geneinfo
Plot data over gene annotation
chrom, start, end = 'chr3', 49500000, 50600000 ax = geneplot(chrom, start, end, figsize=(10, 5)) ax.plot(np.linspace(start, end, 1000), np.random.random(1000), 'o') ; mpld3.display() geneinfo(['HYAL3', 'IFRD2'])
_____no_output_____
MIT
example.ipynb
kaspermunch/geneinfo
Convolutional Neural NetworksIn this notebook we will implement a convolutional neural network. Rather than doing everything from scratch we will make use of [TensorFlow 2](https://www.tensorflow.org/) and the [Keras](https://keras.io) high-level interface. Installing TensorFlow and KerasTensorFlow and Keras are not included with the base Anaconda install, but can be easily installed by running the following commands on the Anaconda Command Prompt/terminal window:
```
conda install notebook jupyterlab nb_conda_kernels
conda create -n tf tensorflow ipykernel mkl
```
Once this has been done, you should be able to select the `Python [conda env:tf]` kernel from the Kernel->Change Kernel menu item at the top of this notebook. Then, we import the TensorFlow package:
import tensorflow as tf
_____no_output_____
MIT
Neural Networks/Convolutional Neural Networks.ipynb
PeterJamesNee/Examples
Creating a simple network with TensorFlowWe will start by creating a very simple fully connected feedforward network using TensorFlow/Keras. The network will mimic the one we implemented previously, but TensorFlow/Keras will take care of most of the details for us. MNIST DatasetFirst, let us load the MNIST digits dataset that we will be using to train our network. This is available directly within Keras:
(x_train, y_train),(x_test, y_test) = tf.keras.datasets.mnist.load_data()
_____no_output_____
MIT
Neural Networks/Convolutional Neural Networks.ipynb
PeterJamesNee/Examples
The data comes as a set of integers in the range [0,255] representing the shade of gray of a given pixel. Let's first rescale them to be in the range [0,1]:
x_train, x_test = x_train / 255.0, x_test / 255.0
_____no_output_____
MIT
Neural Networks/Convolutional Neural Networks.ipynb
PeterJamesNee/Examples
Now we can build a neural network model using Keras. This uses a very simple high-level modular structure where we only have to specify the layers in our model and the properties of each layer. The layers we will have are as follows:1. Input layer: This will be a 28x28 matrix of numbers.2. `Flatten` layer: Convert our 28x28 pixel image into an array of size 784.3. `Dense` layer: a fully-connected layer of the type we have been using up to now. We will use 30 neurons and the sigmoid activation function.4. `Dense` layer: fully-connected output layer.
model = tf.keras.models.Sequential([ tf.keras.layers.Flatten(input_shape=(28, 28)), tf.keras.layers.Dense(30, activation='sigmoid'), tf.keras.layers.Dense(10, activation='softsign') ]) model.compile(optimizer='adam', loss='mean_squared_logarithmic_error', metrics=['accuracy']) model.fit(x_train, y_train, epochs=5) model.evaluate(x_test, y_test)
313/313 [==============================] - 0s 719us/step - loss: 0.1253 - accuracy: 0.9629
MIT
Neural Networks/Convolutional Neural Networks.ipynb
PeterJamesNee/Examples
ExercisesExperiment with this network:1. Change the number of neurons in the hidden layer.2. Add more hidden layers.3. Change the activation function in the hidden layer to `relu`.4. Change the activation in the output layer to `softmax`.How does the performance of your network change with these modifications? TaskImplement the neural network in "[Gradient-based learning applied to document recognition](http://yann.lecun.com/exdb/publis/pdf/lecun-98.pdf)", by Yann LeCun, Léon Bottou, Yoshua Bengio, and Patrick Haffner.
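Before the LeNet implementation for the Task below, here is a rough sketch of what one answer to the exercises might look like. The layer sizes and the switch to `relu`/`softmax` with `sparse_categorical_crossentropy` are illustrative choices, not the only valid ones:

```python
# Hypothetical exercise variant: wider relu hidden layers and a softmax output.
model_v2 = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(input_shape=(28, 28)),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(64, activation='relu'),
    tf.keras.layers.Dense(10, activation='softmax')
])
model_v2.compile(optimizer='adam',
                 loss='sparse_categorical_crossentropy',
                 metrics=['accuracy'])
model_v2.fit(x_train, y_train, epochs=5)
model_v2.evaluate(x_test, y_test)
```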
model = tf.keras.models.Sequential([ tf.keras.layers.Conv2D(6,5, activation='sigmoid',input_shape=(28, 28,1)), tf.keras.layers.MaxPooling2D(pool_size=(2,2)), tf.keras.layers.Conv2D(16,5, activation='sigmoid'), tf.keras.layers.MaxPooling2D(pool_size=(2,2)), tf.keras.layers.Flatten(), tf.keras.layers.Dense(84, activation='sigmoid'), tf.keras.layers.Dense(10, activation='softmax') ]) model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) model.fit(x_train, y_train, epochs=5)
Epoch 1/5
MIT
Neural Networks/Convolutional Neural Networks.ipynb
PeterJamesNee/Examples
Introduction This notebook was intended to show how nuclear isotope abundances depend on the baryon density. I abandoned it later, though. If you want to pick it up, go ahead. No guarantees, all rights reserved, and you are responsible for your own interpretation of all this, etc. Imports
import numpy as np import matplotlib.pyplot as plt %matplotlib inline plt.rc('font', size=18) plt.rcParams['figure.figsize'] = (10.0, 7.0)
_____no_output_____
Unlicense
Abundances.ipynb
ErikHogenbirk/DMPlots
Read data
def read_datathief(fn): data = np.loadtxt(fn, converters={0: lambda x: x[:-1]}) return data[:, 0], data[:, 1] ab = {} elems = ['he4', 'd2', 'he3', 'li3'] ab['x_he4'], ab['he4'] = read_datathief('data/abundances/He4.txt') ab['x_d2'], ab['d2'] = read_datathief('data/abundances/D2.txt') ab['x_he3'], ab['he3'] = read_datathief('data/abundances/He3.txt') ab['x_li3'], ab['li3'] = read_datathief('data/abundances/Li.txt') ab['li3_c'] = 1.58e-10 ab['li3_c_d'] = 0.3e-10 ab['d2_c'] = 2.53e-5 ab['d2_c_d'] = 0.04e-5 ab['he3_c'] = 1.1e-5 ab['he3_c_d'] = 0.2e-5 ab['he4_c'] = 0.2449 ab['he4_c_d'] = 0.004
_____no_output_____
Unlicense
Abundances.ipynb
ErikHogenbirk/DMPlots
Plots All on one
for el in elems: plt.plot(ab['x_' + el], ab[el]) plt.xscale('log') plt.yscale('log') plt.xlabel('Baryon fraction') plt.ylabel('Abundance') # plt.ylim(1e-10, 1)
_____no_output_____
Unlicense
Abundances.ipynb
ErikHogenbirk/DMPlots
Fancy plot
f, (ax1, ax2, ax3) = plt.subplots(3, 1, sharex=True) axs = [ax1, ax2, ax3] c = {el: 'C%d' % i for i, el in enumerate(elems) } planck_ab = 0.02230 for el in elems: for ax in axs: ax.plot(ab['x_' + el], ab[el], color=c[el], lw=3) ax.axhline(ab[el+'_c'], color=c[el]) ax.axvline(planck_ab) ax1.set_ylim(0.22, 0.26) ax2.set_ylim(1e-6, 1e-3) ax3.set_ylim(1e-10, 1e-8) for ax in axs: ax.set_yscale('log') ax.set_xscale('log') for ax in (ax1, ax2): ax.spines['bottom'].set_visible(False) ax = ax1 ax.xaxis.tick_bottom() ax.xaxis.tick_top() ax.tick_params(labeltop='off') plt.sca(ax2) plt.tick_params( axis='x', # changes apply to the x-axis which='both', # both major and minor ticks are affected bottom='off', # ticks along the bottom edge are off top='off', # ticks along the top edge are off labelbottom='off') for ax in (ax2, ax3): ax.spines['top'].set_visible(False) # ax.tick_params(labeltop='off') # don't put tick labels at the top d = .01 # how big to make the diagonal lines in axes coordinates # arguments to pass to plot, just so we don't keep repeating them ax = ax1 kwargs = dict(transform=ax.transAxes, color='k', clip_on=False) ax.plot((-d, +d), (-d, +d), **kwargs) # top-left diagonal ax.plot((1 - d, 1 + d), (-d, +d), **kwargs) # top-right diagonal ax = ax2 kwargs = dict(transform=ax.transAxes, color='k', clip_on=False) ax.plot((-d, +d), (-d, +d), **kwargs) # top-left diagonal ax.plot((1 - d, 1 + d), (-d, +d), **kwargs) # top-right diagonal kwargs.update(transform=ax.transAxes) # switch to the bottom axes ax.plot((-d, +d), (1 - d, 1 + d), **kwargs) # bottom-left diagonal ax.plot((1 - d, 1 + d), (1 - d, 1 + d), **kwargs) # bottom-right diagonal ax = ax3 kwargs.update(transform=ax.transAxes) # switch to the bottom axes ax.plot((-d, +d), (1 - d, 1 + d), **kwargs) # bottom-left diagonal ax.plot((1 - d, 1 + d), (1 - d, 1 + d), **kwargs) # bottom-right diagonal plt.xlim(4e-3, 3e-2)
_____no_output_____
Unlicense
Abundances.ipynb
ErikHogenbirk/DMPlots
Supervised LearningThis worksheet covers the concepts introduced in the second part of day 2 - Feature Engineering. It should take no more than 40-60 minutes to complete. Please raise your hand if you get stuck. Import the LibrariesFor this exercise, we will be using:* Pandas (http://pandas.pydata.org/pandas-docs/stable/)* Numpy (https://docs.scipy.org/doc/numpy/reference/)* Matplotlib (http://matplotlib.org/api/pyplot_api.html)* Scikit-learn (http://scikit-learn.org/stable/documentation.html)* YellowBrick (http://www.scikit-yb.org/en/latest/)* Seaborn (https://seaborn.pydata.org)* Lime (https://github.com/marcotcr/lime)
# Load Libraries - Make sure to run this cell! import pandas as pd import numpy as np import re from collections import Counter from sklearn import feature_extraction, tree, model_selection, metrics from yellowbrick.classifier import ClassificationReport from yellowbrick.classifier import ConfusionMatrix import matplotlib.pyplot as plt import matplotlib import lime %matplotlib inline
_____no_output_____
BSD-3-Clause
Notebooks/Day 2 - Feature Engineering and Supervised Learning/DGA Detection using Supervised Learning.ipynb
ahouseholder/machine-learning-for-security-professionals
Worksheet - DGA Detection using Machine LearningThis worksheet is a step-by-step guide on how to detect domains that were generated using a "Domain Generation Algorithm" (DGA). We will walk you through the process of transforming raw domain strings into Machine Learning features and creating a decision tree classifier which you will use to determine whether a given domain is legit or not. Once you have implemented the classifier, the worksheet will walk you through evaluating your model. Overview 2 main steps:1. **Feature Engineering** - from raw domain strings to numeric Machine Learning features using DataFrame manipulations2. **Machine Learning Classification** - predict whether a domain is legit or not using a Decision Tree Classifier **DGA - Background**"Various families of malware use domain generation algorithms (DGAs) to generate a large number of pseudo-random domain names to connect to a command and control (C2) server. In order to block DGA C2 traffic, security organizations must first discover the algorithm by reverse engineering malware samples, then generate a list of domains for a given seed. The domains are then either preregistered, sink-holed or published in a DNS blacklist. This process is not only tedious, but can be readily circumvented by malware authors. An alternative approach to stop malware from using DGAs is to intercept DNS queries on a network and predict whether domains are DGA generated. Much of the previous work in DGA detection is based on finding groupings of like domains and using their statistical properties to determine if they are DGA generated. However, these techniques are run over large time windows and cannot be used for real-time detection and prevention. In addition, many of these techniques also use contextual information such as passive DNS and aggregations of all NXDomains throughout a network. Such requirements are not only costly to integrate, they may not be possible due to real-world constraints of many systems (such as endpoint detection). An alternative to these systems is a much harder problem: detect DGA generation on a per-domain basis with no information except for the domain name. Previous work to solve this harder problem exhibits poor performance and many of these systems rely heavily on manual creation of features; a time-consuming process that can easily be circumvented by malware authors..." [Citation: Woodbridge et al. 2016: "Predicting Domain Generation Algorithms with Long Short-Term Memory Networks"]A better alternative for real-world deployment would be to use "featureless deep learning" - We have a separate notebook where you can see how this can be implemented!**However, let's learn the basics first!!!** Feature Engineering Breakpoint: Load Features and LabelsIf you got stuck in Part 1, please simply load the feature matrix we prepared for you, so you can move on to Part 2 and train a Decision Tree Classifier.
df_final = pd.read_csv('../../Data/dga_features_final_df.csv') print(df_final.isDGA.value_counts()) df_final.head() # Load dictionary of common english words from part 1 from six.moves import cPickle as pickle with open('../../Data/d_common_en_words' + '.pickle', 'rb') as f: d = pickle.load(f)
_____no_output_____
BSD-3-Clause
Notebooks/Day 2 - Feature Engineering and Supervised Learning/DGA Detection using Supervised Learning.ipynb
ahouseholder/machine-learning-for-security-professionals
Part 2 - Machine LearningTo learn simple classification procedures using [sklearn](http://scikit-learn.org/stable/) we have split the work flow into 5 steps. Step 1: Prepare Feature matrix and ```target``` vector containing the URL labels- In statistics, the feature matrix is often referred to as ```X```- target is a vector containing the labels for each URL (often also called *y* in statistics)- In sklearn both the input and target can either be a pandas DataFrame/Series or numpy array/vector respectively (can't be lists!)Tasks:- assign 'isDGA' column to a pandas Series named 'target'- drop 'isDGA' column from ```dga``` DataFrame and name the resulting pandas DataFrame 'feature_matrix'
#Your code here ...
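One possible solution sketch for this step, assuming the `df_final` DataFrame loaded at the breakpoint above:

```python
# Possible solution sketch: labels in 'target', everything else as features.
target = df_final['isDGA']
feature_matrix = df_final.drop(['isDGA'], axis='columns')
print(feature_matrix.columns.tolist())
print(target.value_counts())
```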
_____no_output_____
BSD-3-Clause
Notebooks/Day 2 - Feature Engineering and Supervised Learning/DGA Detection using Supervised Learning.ipynb
ahouseholder/machine-learning-for-security-professionals
Step 2: Simple Cross-ValidationTasks:- split your feature matrix X and target vector into train and test subsets using sklearn [model_selection.train_test_split](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html)
# Simple Cross-Validation: Split the data set into training and test data #Your code here ...
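A possible sketch of the hold-out split; the variable names match what the later cells (for example the LIME example) expect, while the 75/25 split and `random_state` are arbitrary choices:

```python
# Possible solution sketch: simple hold-out split (75% train / 25% test).
feature_matrix_train, feature_matrix_test, target_train, target_test = model_selection.train_test_split(
    feature_matrix, target, test_size=0.25, random_state=42)
print(feature_matrix_train.shape, feature_matrix_test.shape)
```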
_____no_output_____
BSD-3-Clause
Notebooks/Day 2 - Feature Engineering and Supervised Learning/DGA Detection using Supervised Learning.ipynb
ahouseholder/machine-learning-for-security-professionals
Step 3: Train the model and make a predictionFinally, we have prepared and segmented the data. Let's start classifying!! Tasks:- Use the sklearn [tree.DecisionTreeClassifier()](http://scikit-learn.org/stable/modules/generated/sklearn.tree.DecisionTreeClassifier.html), create a decision tree with standard parameters, and train it using the ```.fit()``` function with ```X_train``` and ```target_train``` data.- Next, pull a few random rows from the data and see if your classifier got it correct. If you are interested in trying a real unknown domain, you'll have to create a function to generate the features for that domain before you run it through the classifier (see function ```is_dga``` a few cells below).
# Train the decision tree based on the entropy criterion #Your code here ... # For simplicity let's just copy the needed function in here again def H_entropy (x): # Calculate Shannon Entropy prob = [ float(x.count(c)) / len(x) for c in dict.fromkeys(list(x)) ] H = - sum([ p * np.log2(p) for p in prob ]) return H def vowel_consonant_ratio (x): # Calculate vowel to consonant ratio x = x.lower() vowels_pattern = re.compile('([aeiou])') consonants_pattern = re.compile('([b-df-hj-np-tv-z])') vowels = re.findall(vowels_pattern, x) consonants = re.findall(consonants_pattern, x) try: ratio = len(vowels) / len(consonants) except: # catch zero devision exception ratio = 0 return ratio # ngrams: Implementation according to Schiavoni 2014: "Phoenix: DGA-based Botnet Tracking and Intelligence" # http://s2lab.isg.rhul.ac.uk/papers/files/dimva2014.pdf def ngrams(word, n): # Extract all ngrams and return a regular Python list # Input word: can be a simple string or a list of strings # Input n: Can be one integer or a list of integers # if you want to extract multipe ngrams and have them all in one list l_ngrams = [] if isinstance(word, list): for w in word: if isinstance(n, list): for curr_n in n: ngrams = [w[i:i+curr_n] for i in range(0,len(w)-curr_n+1)] l_ngrams.extend(ngrams) else: ngrams = [w[i:i+n] for i in range(0,len(w)-n+1)] l_ngrams.extend(ngrams) else: if isinstance(n, list): for curr_n in n: ngrams = [word[i:i+curr_n] for i in range(0,len(word)-curr_n+1)] l_ngrams.extend(ngrams) else: ngrams = [word[i:i+n] for i in range(0,len(word)-n+1)] l_ngrams.extend(ngrams) # print(l_ngrams) return l_ngrams def ngram_feature(domain, d, n): # Input is your domain string or list of domain strings # a dictionary object d that contains the count for most common english words # finally you n either as int list or simple int defining the ngram length # Core magic: Looks up domain ngrams in english dictionary ngrams and sums up the # respective english dictionary counts for the respective domain ngram # sum is normalized l_ngrams = ngrams(domain, n) # print(l_ngrams) count_sum=0 for ngram in l_ngrams: if d[ngram]: count_sum+=d[ngram] try: feature = count_sum/(len(domain)-n+1) except: feature = 0 return feature def average_ngram_feature(l_ngram_feature): # input is a list of calls to ngram_feature(domain, d, n) # usually you would use various n values, like 1,2,3... 
return sum(l_ngram_feature)/len(l_ngram_feature) def is_dga(domain, clf, d): # Function that takes new domain string, trained model 'clf' as input and # dictionary d of most common english words # returns prediction domain_features = np.empty([1,5]) # order of features is ['length', 'digits', 'entropy', 'vowel-cons', 'ngrams'] domain_features[0,0] = len(domain) pattern = re.compile('([0-9])') domain_features[0,1] = len(re.findall(pattern, domain)) domain_features[0,2] = H_entropy(domain) domain_features[0,3] = vowel_consonant_ratio(domain) domain_features[0,4] = average_ngram_feature([ngram_feature(domain, d, 1), ngram_feature(domain, d, 2), ngram_feature(domain, d, 3)]) pred = clf.predict(domain_features) return pred[0] print('Predictions of domain %s is [0 means legit and 1 dga]: ' %('spardeingeld'), is_dga('spardeingeld', clf, d)) print('Predictions of domain %s is [0 means legit and 1 dga]: ' %('google'), is_dga('google', clf, d)) print('Predictions of domain %s is [0 means legit and 1 dga]: ' %('1vxznov16031kjxneqjk1rtofi6'), is_dga('1vxznov16031kjxneqjk1rtofi6', clf, d)) print('Predictions of domain %s is [0 means legit and 1 dga]: ' %('lthmqglxwmrwex'), is_dga('lthmqglxwmrwex', clf, d))
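A possible sketch of the missing training step, so that the `clf` passed to `is_dga` above exists (in the notebook you would run this before the prediction calls):

```python
# Possible solution sketch: fit a decision tree using the entropy criterion.
clf = tree.DecisionTreeClassifier(criterion='entropy')
clf = clf.fit(feature_matrix_train, target_train)

# Spot-check a few random test rows against their true labels.
sample = feature_matrix_test.sample(3, random_state=0)
print(clf.predict(sample))
print(target_test.loc[sample.index].values)
```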
_____no_output_____
BSD-3-Clause
Notebooks/Day 2 - Feature Engineering and Supervised Learning/DGA Detection using Supervised Learning.ipynb
ahouseholder/machine-learning-for-security-professionals
Step 4: Assess model accuracy with simple cross-validationTasks:- Make predictions on your held-out data. Call the ```.predict()``` method on the clf with your test data ```X_test``` and store the results in a variable called ```target_pred```.- Use sklearn [metrics.accuracy_score](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.accuracy_score.html) to determine your model's accuracy. Detailed Instruction: - Use your trained model to predict the labels of your test data ```X_test```. Run the ```.predict()``` method on the clf with your test data ```X_test``` and store the results in a variable called ```target_pred```. - Then calculate the accuracy using ```target_test``` (which are the true labels/ground truth) AND your model's predictions on the test portion ```target_pred``` as inputs. The advantage here is that you see how your model performs on new data it has not seen during the training phase. The fair approach here is a simple **cross-validation**! - Print out the confusion matrix using [metrics.confusion_matrix](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.confusion_matrix.html)- Use Yellowbrick to visualize the classification report and confusion matrix. (http://www.scikit-yb.org/en/latest/examples/modelselect.html#common-metrics-for-evaluating-classifiers)
# fair approach: make prediction on test data portion #Your code here ... # Classification Report...neat summary #Your code here ...
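A possible solution sketch using the `metrics` module and the YellowBrick visualizers imported at the top of the worksheet; exact numbers will vary with your split:

```python
# Possible solution sketch: evaluate on the held-out test portion.
target_pred = clf.predict(feature_matrix_test)
print('Accuracy:', metrics.accuracy_score(target_test, target_pred))
print(metrics.confusion_matrix(target_test, target_pred))

# YellowBrick visual classification report (classes inferred from the data).
viz = ClassificationReport(clf)
viz.fit(feature_matrix_train, target_train)
viz.score(feature_matrix_test, target_test)
viz.poof()  # use viz.show() in newer YellowBrick versions
```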
_____no_output_____
BSD-3-Clause
Notebooks/Day 2 - Feature Engineering and Supervised Learning/DGA Detection using Supervised Learning.ipynb
ahouseholder/machine-learning-for-security-professionals
Step 5: Assess model accuracy with k-fold cross-validationTasks:- Partition the dataset into *k* different subsets- Create *k* different models by training on *k-1* subsets and testing on the remaining subset- Measure the performance of each of the models and take the average measure.*Short-Cut*All of these steps can be easily achieved by simply using sklearn's [model_selection.KFold()](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.KFold.html) and [model_selection.cross_val_score()](http://scikit-learn.org/stable/modules/generated/sklearn.model_selection.cross_val_score.html) functions.
#Your code here ...
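A possible sketch using `model_selection.KFold` and `model_selection.cross_val_score`; five folds is an arbitrary choice:

```python
# Possible solution sketch: 5-fold cross-validation of a fresh decision tree.
cv = model_selection.KFold(n_splits=5, shuffle=True, random_state=42)
scores = model_selection.cross_val_score(
    tree.DecisionTreeClassifier(criterion='entropy'), feature_matrix, target, cv=cv)
print(scores)
print('Mean accuracy: %.3f (+/- %.3f)' % (scores.mean(), scores.std()))
```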
_____no_output_____
BSD-3-Clause
Notebooks/Day 2 - Feature Engineering and Supervised Learning/DGA Detection using Supervised Learning.ipynb
ahouseholder/machine-learning-for-security-professionals
(Optional) Visualizing your TreeAs an optional step, you can actually visualize your tree. The following code will generate a graph of your decision tree. You will need graphviz (http://www.graphviz.org) and pydotplus (or pydot) installed for this to work. The Griffon VM has this installed already, but if you try this on a Mac or Linux machine you will need to install graphviz.
# These libraries are used to visualize the decision tree and require that you have GraphViz # and pydot or pydotplus installed on your computer. from sklearn.externals.six import StringIO from IPython.core.display import Image import pydotplus as pydot dot_data = StringIO() tree.export_graphviz(clf, out_file=dot_data, feature_names=['length', 'digits', 'entropy', 'vowel-cons', 'ngrams'], filled=True, rounded=True, special_characters=True) graph = pydot.graph_from_dot_data(dot_data.getvalue()) Image(graph.create_png())
_____no_output_____
BSD-3-Clause
Notebooks/Day 2 - Feature Engineering and Supervised Learning/DGA Detection using Supervised Learning.ipynb
ahouseholder/machine-learning-for-security-professionals
Other ModelsNow that you've built a Decision Tree, let's try out three other classifiers and see how they perform on this data. For this next exercise, create classifiers using:* Support Vector Machine* Random Forest* K-Nearest Neighbors (http://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KNeighborsClassifier.html) Once you've done that, run the various performance metrics to determine which classifier works best.
from sklearn import svm from sklearn.ensemble import RandomForestClassifier from sklearn.neighbors import KNeighborsClassifier #Create the Random Forest Classifier #Next, create the SVM classifier #Finally the knn
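A possible sketch of fitting the three extra classifiers; `random_forest_clf` is named to match what the LIME cell further down expects, and the hyperparameters are illustrative:

```python
# Possible solution sketch: train three more classifiers on the same split.
random_forest_clf = RandomForestClassifier(n_estimators=100, random_state=42)
random_forest_clf.fit(feature_matrix_train, target_train)

svm_clf = svm.SVC(probability=True)  # probability=True enables predict_proba for LIME
svm_clf.fit(feature_matrix_train, target_train)

knn_clf = KNeighborsClassifier(n_neighbors=5)
knn_clf.fit(feature_matrix_train, target_train)

for name, m in [('Random Forest', random_forest_clf), ('SVM', svm_clf), ('kNN', knn_clf)]:
    print(name, metrics.accuracy_score(target_test, m.predict(feature_matrix_test)))
```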
_____no_output_____
BSD-3-Clause
Notebooks/Day 2 - Feature Engineering and Supervised Learning/DGA Detection using Supervised Learning.ipynb
ahouseholder/machine-learning-for-security-professionals
Explain a PredictionIn the example below, you can use LIME to explain how a classifier arrived at its prediction. Try running LIME with the various classifiers you've created and various rows to see how it functions.
import lime.lime_tabular explainer = lime.lime_tabular.LimeTabularExplainer(feature_matrix_train, feature_names=['length', 'digits', 'entropy', 'vowel-cons', 'ngrams'], class_names=['legit', 'isDGA'], discretize_continuous=False) exp = explainer.explain_instance(feature_matrix_test.iloc[5], random_forest_clf.predict_proba, num_features=5) exp.show_in_notebook(show_table=True, show_all=True)
_____no_output_____
BSD-3-Clause
Notebooks/Day 2 - Feature Engineering and Supervised Learning/DGA Detection using Supervised Learning.ipynb
ahouseholder/machine-learning-for-security-professionals
k-Nearest Neighbor (kNN) exercise*Complete and hand in this completed worksheet (including its outputs and any supporting code outside of the worksheet) with your assignment submission. For more details see the [assignments page](http://vision.stanford.edu/teaching/cs231n/assignments.html) on the course website.*The kNN classifier consists of two stages:- During training, the classifier takes the training data and simply remembers it- During testing, kNN classifies every test image by comparing to all training images and transfering the labels of the k most similar training examples- The value of k is cross-validatedIn this exercise you will implement these steps and understand the basic Image Classification pipeline, cross-validation, and gain proficiency in writing efficient, vectorized code.
# Run some setup code for this notebook. import random import numpy as np from cs231n.data_utils import load_CIFAR10 import matplotlib.pyplot as plt # This is a bit of magic to make matplotlib figures appear inline in the notebook # rather than in a new window. %matplotlib inline plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots plt.rcParams['image.interpolation'] = 'nearest' plt.rcParams['image.cmap'] = 'gray' # Some more magic so that the notebook will reload external python modules; # see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython %load_ext autoreload %autoreload 2 # Load the raw CIFAR-10 data. cifar10_dir = 'cs231n/datasets/cifar-10-batches-py' X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir) # As a sanity check, we print out the size of the training and test data. print 'Training data shape: ', X_train.shape print 'Training labels shape: ', y_train.shape print 'Test data shape: ', X_test.shape print 'Test labels shape: ', y_test.shape # Visualize some examples from the dataset. # We show a few examples of training images from each class. classes = ['plane', 'car', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck'] num_classes = len(classes) samples_per_class = 7 for y, cls in enumerate(classes): idxs = np.flatnonzero(y_train == y) idxs = np.random.choice(idxs, samples_per_class, replace=False) for i, idx in enumerate(idxs): plt_idx = i * num_classes + y + 1 plt.subplot(samples_per_class, num_classes, plt_idx) plt.imshow(X_train[idx].astype('uint8')) plt.axis('off') if i == 0: plt.title(cls) plt.show() # Subsample the data for more efficient code execution in this exercise num_training = 5000 mask = range(num_training) X_train = X_train[mask] y_train = y_train[mask] num_test = 500 mask = range(num_test) X_test = X_test[mask] y_test = y_test[mask] X_train.shape, X_test.shape # Reshape the image data into rows X_train = np.reshape(X_train, (X_train.shape[0], -1)) X_test = np.reshape(X_test, (X_test.shape[0], -1)) print X_train.shape, X_test.shape from cs231n.classifiers import KNearestNeighbor # Create a kNN classifier instance. # Remember that training a kNN classifier is a noop: # the Classifier simply remembers the data and does no further processing classifier = KNearestNeighbor() classifier.train(X_train, y_train)
_____no_output_____
WTFPL
assignment1/knn.ipynb
kamikat/cs231n
We would now like to classify the test data with the kNN classifier. Recall that we can break down this process into two steps: 1. First we must compute the distances between all test examples and all train examples. 2. Given these distances, for each test example we find the k nearest examples and have them vote for the label. Let's begin with computing the distance matrix between all training and test examples. For example, if there are **Ntr** training examples and **Nte** test examples, this stage should result in an **Nte x Ntr** matrix where each element (i,j) is the distance between the i-th test and j-th train example. First, open `cs231n/classifiers/k_nearest_neighbor.py` and implement the function `compute_distances_two_loops` that uses a (very inefficient) double loop over all pairs of (test, train) examples and computes the distance matrix one element at a time.
# Open cs231n/classifiers/k_nearest_neighbor.py and implement # compute_distances_two_loops. # Test your implementation: dists = classifier.compute_distances_two_loops(X_test) print dists.shape # We can visualize the distance matrix: each row is a single test example and # its distances to training examples plt.imshow(dists, interpolation='none') plt.show()
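For reference, one straightforward way the body of `compute_distances_two_loops` could be written inside `k_nearest_neighbor.py` (a sketch, assuming `self.X_train` holds the flattened training images):

```python
# Hypothetical sketch of the double-loop L2 distance computation.
def compute_distances_two_loops(self, X):
    num_test = X.shape[0]
    num_train = self.X_train.shape[0]
    dists = np.zeros((num_test, num_train))
    for i in range(num_test):
        for j in range(num_train):
            # Euclidean distance between test image i and training image j.
            dists[i, j] = np.sqrt(np.sum((X[i] - self.X_train[j]) ** 2))
    return dists
```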
_____no_output_____
WTFPL
assignment1/knn.ipynb
kamikat/cs231n
**Inline Question 1:** Notice the structured patterns in the distance matrix, where some rows or columns are visibly brighter. (Note that with the default color scheme black indicates low distances while white indicates high distances.)- What in the data is the cause behind the distinctly bright rows?- What causes the columns? **Your Answer**:- A distinctly bright row means the corresponding test image is far (in L2 distance) from every training image, for example an unusually bright or dark image or one with an atypical background.- A distinctly bright column means the corresponding training image is far from all of the test images, for the same reason.
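One way to sanity-check this intuition (a hypothetical diagnostic, not part of the assignment) is to look at the mean distance of each row and column of `dists`; the bright stripes correspond to the largest means:

```python
# Hypothetical diagnostic: test images (rows) and training images (columns)
# that are far from everything show up as bright stripes in the plot above.
row_means = dists.mean(axis=1)  # one value per test image
col_means = dists.mean(axis=0)  # one value per training image
print 'Brightest test rows: ', np.argsort(-row_means)[:5]
print 'Brightest train columns:', np.argsort(-col_means)[:5]
```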
# Now implement the function predict_labels and run the code below: # We use k = 1 (which is Nearest Neighbor). y_test_pred = classifier.predict_labels(dists, k=1) # Compute and print the fraction of correctly predicted examples num_correct = np.sum(y_test_pred == y_test) accuracy = float(num_correct) / num_test print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
Got 137 / 500 correct => accuracy: 0.274000
WTFPL
assignment1/knn.ipynb
kamikat/cs231n
You should expect to see approximately `27%` accuracy. Now let's try out a larger `k`, say `k = 5`:
y_test_pred = classifier.predict_labels(dists, k=5) num_correct = np.sum(y_test_pred == y_test) accuracy = float(num_correct) / num_test print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
Got 139 / 500 correct => accuracy: 0.278000
WTFPL
assignment1/knn.ipynb
kamikat/cs231n
You should expect to see a slightly better performance than with `k = 1`.
# Now lets speed up distance matrix computation by using partial vectorization # with one loop. Implement the function compute_distances_one_loop and run the # code below: dists_one = classifier.compute_distances_one_loop(X_test) # To ensure that our vectorized implementation is correct, we make sure that it # agrees with the naive implementation. There are many ways to decide whether # two matrices are similar; one of the simplest is the Frobenius norm. In case # you haven't seen it before, the Frobenius norm of two matrices is the square # root of the squared sum of differences of all elements; in other words, reshape # the matrices into vectors and compute the Euclidean distance between them. difference = np.linalg.norm(dists - dists_one, ord='fro') print 'Difference was: %f' % (difference, ) if difference < 0.001: print 'Good! The distance matrices are the same' else: print 'Uh-oh! The distance matrices are different' # Now implement the fully vectorized version inside compute_distances_no_loops # and run the code dists_two = classifier.compute_distances_no_loops(X_test) # check that the distance matrix agrees with the one we computed before: difference = np.linalg.norm(dists - dists_two, ord='fro') print 'Difference was: %f' % (difference, ) if difference < 0.001: print 'Good! The distance matrices are the same' else: print 'Uh-oh! The distance matrices are different' # Let's compare how fast the implementations are def time_function(f, *args): """ Call a function f with args and return the time (in seconds) that it took to execute. """ import time tic = time.time() f(*args) toc = time.time() return toc - tic two_loop_time = time_function(classifier.compute_distances_two_loops, X_test) print 'Two loop version took %f seconds' % two_loop_time one_loop_time = time_function(classifier.compute_distances_one_loop, X_test) print 'One loop version took %f seconds' % one_loop_time no_loop_time = time_function(classifier.compute_distances_no_loops, X_test) print 'No loop version took %f seconds' % no_loop_time # you should see significantly faster performance with the fully vectorized implementation
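For reference, the fully vectorized version typically expands the squared distance as ||x - y||^2 = ||x||^2 - 2 x.y + ||y||^2 and uses broadcasting; a sketch of what `compute_distances_no_loops` might look like:

```python
# Hypothetical sketch of the no-loop distance matrix via broadcasting.
def compute_distances_no_loops(self, X):
    test_sq = np.sum(X ** 2, axis=1).reshape(-1, 1)   # shape (num_test, 1)
    train_sq = np.sum(self.X_train ** 2, axis=1)      # shape (num_train,)
    cross = X.dot(self.X_train.T)                     # shape (num_test, num_train)
    # Clamp tiny negative values caused by floating point error before the sqrt.
    return np.sqrt(np.maximum(test_sq - 2 * cross + train_sq, 0))
```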
Two loop version took 27.158314 seconds One loop version took 40.179075 seconds No loop version took 0.529196 seconds
WTFPL
assignment1/knn.ipynb
kamikat/cs231n
Cross-validationWe have implemented the k-Nearest Neighbor classifier but we set the value k = 5 arbitrarily. We will now determine the best value of this hyperparameter with cross-validation.
num_folds = 5 k_choices = [1, 3, 5, 8, 10, 12, 15, 20, 50, 100] X_train_folds = [] y_train_folds = [] ################################################################################ # TODO: # # Split up the training data into folds. After splitting, X_train_folds and # # y_train_folds should each be lists of length num_folds, where # # y_train_folds[i] is the label vector for the points in X_train_folds[i]. # # Hint: Look up the numpy array_split function. # ################################################################################ X_train_folds = np.split(X_train, num_folds) y_train_folds = np.split(y_train, num_folds) ################################################################################ # END OF YOUR CODE # ################################################################################ # A dictionary holding the accuracies for different values of k that we find # when running cross-validation. After running cross-validation, # k_to_accuracies[k] should be a list of length num_folds giving the different # accuracy values that we found when using that value of k. k_to_accuracies = { k: [] for k in k_choices } ################################################################################ # TODO: # # Perform k-fold cross validation to find the best value of k. For each # # possible value of k, run the k-nearest-neighbor algorithm num_folds times, # # where in each case you use all but one of the folds as training data and the # # last fold as a validation set. Store the accuracies for all fold and all # # values of k in the k_to_accuracies dictionary. # ################################################################################ for i in range(num_folds): classifier = KNearestNeighbor() X_test = X_train_folds[i] y_test = y_train_folds[i] X_train = np.concatenate([ fold for j, fold in enumerate(X_train_folds) if j != i ]) y_train = np.concatenate([ fold for j, fold in enumerate(y_train_folds) if j != i ]) classifier.train(X_train, y_train) dists = classifier.compute_distances_no_loops(X_test) for k in k_choices: predict = classifier.predict_labels(dists, k=k) num_correct = np.sum(predict == y_test) accuracy = float(num_correct) / X_test.shape[0] k_to_accuracies[k] += [accuracy] ################################################################################ # END OF YOUR CODE # ################################################################################ # Print out the computed accuracies for k in sorted(k_to_accuracies): for accuracy in k_to_accuracies[k]: print 'k = %d, accuracy = %f' % (k, accuracy) # plot the raw observations for k in k_choices: accuracies = k_to_accuracies[k] plt.scatter([k] * len(accuracies), accuracies) # plot the trend line with error bars that correspond to standard deviation accuracies_mean = np.array([np.mean(v) for k,v in sorted(k_to_accuracies.items())]) accuracies_std = np.array([np.std(v) for k,v in sorted(k_to_accuracies.items())]) plt.errorbar(k_choices, accuracies_mean, yerr=accuracies_std) plt.title('Cross-validation on k') plt.xlabel('k') plt.ylabel('Cross-validation accuracy') plt.show() # Based on the cross-validation results above, choose the best value for k, # retrain the classifier using all the training data, and test it on the test # data. You should be able to get above 28% accuracy on the test data. 
best_k = 20 classifier = KNearestNeighbor() classifier.train(X_train, y_train) y_test_pred = classifier.predict(X_test, k=best_k) # Compute and display the accuracy num_correct = np.sum(y_test_pred == y_test) accuracy = float(num_correct) / num_test print 'Got %d / %d correct => accuracy: %f' % (num_correct, num_test, accuracy)
Got 162 / 500 correct => accuracy: 0.324000
WTFPL
assignment1/knn.ipynb
kamikat/cs231n
Monte Carlo - Forecasting Stock Prices - Part I *Suggested Answers follow (usually there are multiple ways to solve a problem in Python).* Load the data for Microsoft (‘MSFT’) for the period ‘2000-1-1’ until today.
import numpy as np import pandas as pd from pandas_datareader import data as wb import matplotlib.pyplot as plt from scipy.stats import norm %matplotlib inline data = pd.read_csv('D:/Python/MSFT_2000.csv', index_col = 'Date')
_____no_output_____
MIT
Python for Finance - Code Files/103 Monte Carlo - Predicting Stock Prices - Part I/CSV/Python 2 CSV/MC Predicting Stock Prices - Part I - Solution_CSV.ipynb
siddharthjain1611/Python_for_Finance_Investment_Fundamentals-and-Data-Analytics
Use the .pct_change() method (wrapped in np.log) to obtain the log returns of Microsoft for the designated period.
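For reference, the log return computed in the next cell is just the natural log of one plus the simple percentage change:

$$ \ln\left(\frac{P_t}{P_{t-1}}\right) = \ln\left(1 + \frac{P_t - P_{t-1}}{P_{t-1}}\right) $$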
log_returns = np.log(1 + data.pct_change()) log_returns.tail() data.plot(figsize=(10, 6)); log_returns.plot(figsize = (10, 6))
_____no_output_____
MIT
Python for Finance - Code Files/103 Monte Carlo - Predicting Stock Prices - Part I/CSV/Python 2 CSV/MC Predicting Stock Prices - Part I - Solution_CSV.ipynb
siddharthjain1611/Python_for_Finance_Investment_Fundamentals-and-Data-Analytics
Assign the mean value of the log returns to a variable called “u”, and their variance to a variable called “var”.
u = log_returns.mean() u var = log_returns.var() var
_____no_output_____
MIT
Python for Finance - Code Files/103 Monte Carlo - Predicting Stock Prices - Part I/CSV/Python 2 CSV/MC Predicting Stock Prices - Part I - Solution_CSV.ipynb
siddharthjain1611/Python_for_Finance_Investment_Fundamentals-and-Data-Analytics
Calculate the drift, using the following formula: $$drift = u - \frac{1}{2} \cdot var$$
drift = u - (0.5 * var) drift
_____no_output_____
MIT
Python for Finance - Code Files/103 Monte Carlo - Predicting Stock Prices - Part I/CSV/Python 2 CSV/MC Predicting Stock Prices - Part I - Solution_CSV.ipynb
siddharthjain1611/Python_for_Finance_Investment_Fundamentals-and-Data-Analytics
Store the standard deviation of the log returns in a variable, called “stdev”.
stdev = log_returns.std() stdev
_____no_output_____
MIT
Python for Finance - Code Files/103 Monte Carlo - Predicting Stock Prices - Part I/CSV/Python 2 CSV/MC Predicting Stock Prices - Part I - Solution_CSV.ipynb
siddharthjain1611/Python_for_Finance_Investment_Fundamentals-and-Data-Analytics
Siamese Neural Network Recommendation for Friends (for Website) This notebook presents the final code that will be used for the Movinder [website](https://movinder.herokuapp.com/) when `Get recommendation with SiameseNN!` is selected by the user.
import pandas as pd import json import datetime, time from sklearn.model_selection import train_test_split import itertools import os import zipfile import random import numpy as np import requests import matplotlib.pyplot as plt import scipy.sparse as sp from sklearn.metrics import roc_auc_score
_____no_output_____
MIT
movie_recommendation_with_LightFM_friends_WEBAPP.ipynb
LukasSteffensen/movielens-imdb-exploration
--- (1) Read data
movies = json.load(open('movies.json')) friends = json.load(open('friends.json')) ratings = json.load(open('ratings.json')) soup_movie_features = sp.load_npz('soup_movie_features_11.npz') soup_movie_features = soup_movie_features.toarray()
_____no_output_____
MIT
movie_recommendation_with_LightFM_friends_WEBAPP.ipynb
LukasSteffensen/movielens-imdb-exploration
(1.2) Simulate new friends' input The new group of friends needs to provide information that will later be used for training the model and predicting the ratings they would give to other movies. The group gets a new id, `new_friend_id`. Each rating they provide is a dictionary with the following keys: `movie_id_ml` (id of the rated movie), `rating` (rating of that movie on a scale from 1 to 5), and `friend_id` (the group's id, set to `new_friend_id`). In addition to these ratings, the group also provides its average age (`friends_age`) and gender (`friends_gender`).
new_friend_id = len(friends) new_ratings = [{'movie_id_ml': 302.0, 'rating': 4.0, 'friend_id': new_friend_id}, {'movie_id_ml': 304.0, 'rating': 4.0, 'friend_id': new_friend_id}, {'movie_id_ml': 307.0, 'rating': 4.0, 'friend_id': new_friend_id}] new_ratings new_friend = {'friend_id': new_friend_id, 'friends_age': 25.5, 'friends_gender': 0.375} new_friend # extend the existing data with this new information friends.append(new_friend) ratings.extend(new_ratings)
_____no_output_____
MIT
movie_recommendation_with_LightFM_friends_WEBAPP.ipynb
LukasSteffensen/movielens-imdb-exploration
--- (2) Train the LightFM Model We will be using the [LightFM](http://lyst.github.io/lightfm/docs/index.html) implementation of a SiameseNN-style model to train our recommender using the user and item (i.e. movie) features. First, we create `scipy.sparse` matrices from the raw data, which can then be used to fit the LightFM model.
from lightfm.data import Dataset from lightfm import LightFM from lightfm.evaluation import precision_at_k from lightfm.evaluation import auc_score
_____no_output_____
MIT
movie_recommendation_with_LightFM_friends_WEBAPP.ipynb
LukasSteffensen/movielens-imdb-exploration
(2.1) Build ID mappings We create a mapping between the user and item ids from our input data and the indices that will be used internally by the model. This needs to be done since LightFM works with user and item ids that are consecutive non-negative integers. Using `dataset.fit` we assign an internal numerical id to every user and item we passed in.
dataset = Dataset() item_str_for_eval = "x['title'],x['release'], x['unknown'], x['action'], x['adventure'],x['animation'], x['childrens'], x['comedy'], x['crime'], x['documentary'], x['drama'], x['fantasy'], x['noir'], x['horror'], x['musical'],x['mystery'], x['romance'], x['scifi'], x['thriller'], x['war'], x['western'], *soup_movie_features[x['soup_id']]" friend_str_for_eval = "x['friends_age'], x['friends_gender']" dataset.fit(users=(int(x['friend_id']) for x in friends), items=(int(x['movie_id_ml']) for x in movies), item_features=(eval("("+item_str_for_eval+")") for x in movies), user_features=((eval(friend_str_for_eval)) for x in friends)) num_friends, num_items = dataset.interactions_shape() print(f'Mappings - Num friends: {num_friends}, num_items {num_items}.')
Mappings - Num friends: 192, num_items 1251.
MIT
movie_recommendation_with_LightFM_friends_WEBAPP.ipynb
LukasSteffensen/movielens-imdb-exploration
(2.2) Build the interactions and feature matrices The `interactions` matrix contains interactions between `friend_id` and `movie_id_ml`. It puts 1 if friends `friend_id` rated movie `movie_id_ml`, and 0 otherwise.
(interactions, weights) = dataset.build_interactions(((int(x['friend_id']), int(x['movie_id_ml'])) for x in ratings)) print(repr(interactions))
<192x1251 sparse matrix of type '<class 'numpy.int32'>' with 59123 stored elements in COOrdinate format>
MIT
movie_recommendation_with_LightFM_friends_WEBAPP.ipynb
LukasSteffensen/movielens-imdb-exploration
`item_features` is also a sparse matrix; it maps movie ids to their corresponding features. The item features include the movie title, when it was released, all genres it belongs to, and a vectorized representation of its keywords, cast members, and the countries it was released in.
item_features = dataset.build_item_features(((x['movie_id_ml'], [eval("("+item_str_for_eval+")")]) for x in movies) ) print(repr(item_features))
<1251x2487 sparse matrix of type '<class 'numpy.float32'>' with 2502 stored elements in Compressed Sparse Row format>
MIT
movie_recommendation_with_LightFM_friends_WEBAPP.ipynb
LukasSteffensen/movielens-imdb-exploration
`user_features` is a sparse matrix as well, this time mapping friend ids to their corresponding features. The user features include the group's average age and gender.
user_features = dataset.build_user_features(((x['friend_id'], [eval(friend_str_for_eval)]) for x in friends) ) print(repr(user_features))
<192x342 sparse matrix of type '<class 'numpy.float32'>' with 384 stored elements in Compressed Sparse Row format>
MIT
movie_recommendation_with_LightFM_friends_WEBAPP.ipynb
LukasSteffensen/movielens-imdb-exploration
(2.3) Building a model After some hyperparameter tuning, we obtain the best model performance with the following values:- Epochs = 150- Learning rate = 0.015- Max sampled = 11- Loss type = WARP References:- The WARP (Weighted Approximate-Rank Pairwise) loss for implicit feedback learning-to-rank. Originally implemented in the [WSABIE paper](http://www.thespermwhale.com/jaseweston/papers/wsabie-ijcai.pdf).- An extension to recommendation settings in the 2013 k-order statistic loss [paper](http://www.ee.columbia.edu/~ronw/pubs/recsys2013-kaos.pdf) in the form of the k-OS WARP loss, also implemented in LightFM.
epochs = 150 lr = 0.015 max_sampled = 11 loss_type = "warp" # "bpr" model = LightFM(learning_rate=lr, loss=loss_type, max_sampled=max_sampled) model.fit_partial(interactions, epochs=epochs, user_features=user_features, item_features=item_features) train_precision = precision_at_k(model, interactions, k=10, user_features=user_features, item_features=item_features).mean() train_auc = auc_score(model, interactions, user_features=user_features, item_features=item_features).mean() print(f'Precision: {train_precision}, AUC: {train_auc}') def predict_top_k_movies(model, friends_id, k): n_users, n_movies = train.shape if use_features: prediction = model.predict(friends_id, np.arange(n_movies), user_features=friends_features, item_features=item_features)#predict(model, user_id, np.arange(n_movies)) else: prediction = model.predict(friends_id, np.arange(n_movies))#predict(model, user_id, np.arange(n_movies)) movie_ids = np.arange(train.shape[1]) return movie_ids[np.argsort(-prediction)][:k] dfm = pd.DataFrame(movies) dfm = dfm.sort_values(by="movie_id_ml") k = 10 friends_id = new_friend_id movie_ids = np.array(dfm.movie_id_ml.unique())#np.array(list(df_movies.movie_id_ml.unique())) #np.arange(interactions.shape[1]) print(movie_ids.shape) n_users, n_items = interactions.shape scores = model.predict(friends_id, np.arange(n_items), user_features=user_features, item_features=item_features) # scores = model.predict(friends_id, np.arange(n_items)) known_positives = movie_ids[interactions.tocsr()[friends_id].indices] top_items = movie_ids[np.argsort(-scores)] print(f"Friends {friends_id}") print(" Known positives:") for x in known_positives[:k]: print(f" {x} | {dfm[dfm.movie_id_ml==x]['title'].iloc[0]}" ) print(" Recommended:") for x in top_items[:k]: print(f" {x} | {dfm[dfm.movie_id_ml==x]['title'].iloc[0]}" )
(1251,) Friends 191 Known positives: 301 | in & out 302 | l.a. confidential 307 | the devil's advocate Recommended: 48 | hoop dreams 292 | rosewood 255 | my best friend's wedding 286 | the english patient 284 | tin cup 299 | hoodlum 125 | phenomenon 1 | toy story 315 | apt pupil 7 | twelve monkeys
MIT
movie_recommendation_with_LightFM_friends_WEBAPP.ipynb
LukasSteffensen/movielens-imdb-exploration