Dataset columns and string-length ranges: markdown (0 – 1.02M), code (0 – 832k), output (0 – 1.02M), license (3 – 36), path (6 – 265), repo_name (6 – 127).
We can inspect the pipeline definition in JSON format:
import json definition = json.loads(pipeline.definition()) definition
_____no_output_____
Apache-2.0
notebooks/tf-2-workflow-smpipelines.ipynb
yegortokmakov/amazon-sagemaker-workshop
After upserting its definition, we can start the pipeline with the `Pipeline` object's `start` method:
pipeline.upsert(role_arn=role) execution = pipeline.start()
_____no_output_____
Apache-2.0
notebooks/tf-2-workflow-smpipelines.ipynb
yegortokmakov/amazon-sagemaker-workshop
We can now confirm that the pipeline is executing. In the log output below, confirm that `PipelineExecutionStatus` is `Executing`.
execution.describe()
_____no_output_____
Apache-2.0
notebooks/tf-2-workflow-smpipelines.ipynb
yegortokmakov/amazon-sagemaker-workshop
Typically this pipeline should take about 10 minutes to complete. We can wait for completion by invoking `wait()`. After execution is complete, we can list the status of the pipeline steps.
execution.wait() execution.list_steps()
_____no_output_____
Apache-2.0
notebooks/tf-2-workflow-smpipelines.ipynb
yegortokmakov/amazon-sagemaker-workshop
Check the score report. After the batch scoring job in the pipeline is complete, the batch scoring report is uploaded to S3. For simplicity, this report simply states the test MSE, but in general reports can include as much detail as desired. Reports such as these can also be formatted for use in conditional approval steps in SageMaker Pipelines. For example, the pipeline could have a condition step that allows further steps to proceed only if the MSE is lower than some threshold.
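As a rough illustration of that pattern, a condition step in the SageMaker Pipelines SDK might look like the sketch below; the evaluation step name, property file, downstream step, and threshold are all assumptions for illustration and are not part of this notebook.

```python
# Hypothetical sketch of a conditional approval step; the step, property-file and
# threshold names below are illustrative assumptions, not part of this notebook.
from sagemaker.workflow.properties import PropertyFile
from sagemaker.workflow.conditions import ConditionLessThanOrEqualTo
from sagemaker.workflow.condition_step import ConditionStep
from sagemaker.workflow.functions import JsonGet

# A property file that an evaluation step would need to declare and write, e.g.
# {"regression_metrics": {"mse": {"value": ...}}}
evaluation_report = PropertyFile(
    name="EvaluationReport", output_name="evaluation", path="evaluation.json"
)

cond_lte = ConditionLessThanOrEqualTo(
    left=JsonGet(
        step_name="TF2WorkflowEvaluate",          # assumed evaluation step name
        property_file=evaluation_report,
        json_path="regression_metrics.mse.value",
    ),
    right=10.0,                                   # example MSE threshold
)

step_cond = ConditionStep(
    name="CheckMSE",
    conditions=[cond_lte],
    if_steps=[step_register],                     # assumed downstream step(s)
    else_steps=[],
)
```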
report_path = f"{step_batch.outputs[0].destination}/score-report.txt" !aws s3 cp {report_path} ./score-report.txt && cat score-report.txt
_____no_output_____
Apache-2.0
notebooks/tf-2-workflow-smpipelines.ipynb
yegortokmakov/amazon-sagemaker-workshop
ML Lineage Tracking. SageMaker ML Lineage Tracking creates and stores information about the steps of an ML workflow, from data preparation to model deployment. With the tracking information you can reproduce the workflow steps, track model and dataset lineage, and establish model governance and audit standards. Let's now check out the lineage of the model generated by the pipeline above. The lineage table identifies the resources used in training, including the timestamped train and test data sources and the specific version of the TensorFlow 2 container in use during the training job.
from sagemaker.lineage.visualizer import LineageTableVisualizer viz = LineageTableVisualizer(sagemaker.session.Session()) for execution_step in reversed(execution.list_steps()): if execution_step['StepName'] == 'TF2WorkflowTrain': display(viz.show(pipeline_execution_step=execution_step))
_____no_output_____
Apache-2.0
notebooks/tf-2-workflow-smpipelines.ipynb
yegortokmakov/amazon-sagemaker-workshop
Links: https://www.kaggle.com/jindongwang92/crossposition-activity-recognition and https://archive.ics.uci.edu/ml/datasets/pamap2+physical+activity+monitoring
DSADS: Columns 1~405 are features, listed in the order of 'Torso', 'Right Arm', 'Left Arm', 'Right Leg', and 'Left Leg'. Each position contains 81 columns of features. * Column 406 is the activity sequence indicating the execution of activities (usually not used in experiments). * Column 407 is the activity label (1~19). * Column 408 denotes the person (1~8). References: B. Barshan and M. C. Yüksek, "Recognizing daily and sports activities in two open source machine learning environments using body-worn sensor units," The Computer Journal, vol. 57, no. 11, pp. 1649–1667, 2014. Feature extraction by Jindong Wang, Yiqiang Chen, Lisha Hu, Xiaohui Peng, and Philip S. Yu, "Stratified Transfer Learning for Cross-domain Activity Recognition," 2018 IEEE International Conference on Pervasive Computing and Communications (PerCom).
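To make the column layout concrete, here is a minimal sketch of splitting such a matrix in NumPy, assuming it loads as an (N, 408) array laid out as described above; the placeholder data and variable names are illustrative only.

```python
import numpy as np

# Assuming `data` is an (N, 408) array with the layout described above (0-indexed here):
data = np.random.rand(10, 408)      # placeholder stand-in for the real DSADS matrix

features = data[:, :405]            # columns 1~405: 81 features per body position
activity_seq = data[:, 405]         # column 406: activity sequence (usually unused)
activity_label = data[:, 406]       # column 407: activity label, 1~19
person_id = data[:, 407]            # column 408: subject identifier, 1~8
```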
import scipy.io import warnings warnings.filterwarnings('ignore') import numpy as np import pandas as pd import seaborn as sns import matplotlib.pyplot as plt filename = "dsads" mat = scipy.io.loadmat('../Dataset/DASDS/'+filename+".mat") mat raw = pd.DataFrame(mat["data_dsads"]) raw.head() columns = ["Feat"+str(i) for i in range(405)] + ["ActivitySeq", "ActivityID", "PersonID"] raw.columns = columns raw.head() raw["ActivityID"].unique() activityNames = [ "sitting", "standing", "lying on back side", "lying on right side", "ascending stairs", "descending stairs", "standing in an elevator still", "moving around in an elevator", "walking in a parking lot", "walking on a treadmill1", "walking on a treadmill2", "running on a treadmill3", "exercising on a stepper", "exercising on a cross trainer", "cycling in horizontal positions", "cycling in vertical positions", "rowing", "jumping", "playing basketball" ] def add_activityname(x): name = "R"+str(int(x["PersonID"]))+"_"+activityNames[int(x["ActivityID"])-1] name = activityNames[int(x["ActivityID"])-1] return name raw["ActivityName"] = raw.apply(add_activityname, axis=1) df = raw.drop('ActivityID', 1) df = df.drop('PersonID', 1) df = df.drop('ActivitySeq', 1) df.head() # Scale to [0, 1] for i in range(243): f = (df["Feat"+str(i)]+1)/2 df["Feat"+str(i)] = f df.head() df.to_csv(filename+".feat", index=False) df["ActivityName"].unique() activity_labels = df["ActivityName"].unique() ind = np.arange(len(activity_labels)) plt.rcParams['figure.figsize'] = [10, 5] nRow = [] for label in activity_labels: c = len(df[df["ActivityName"]==label]) nRow.append(c) plt.rcParams['figure.figsize'] = [20, 5] p1 = plt.bar(ind, nRow) plt.ylabel('Number of records') plt.title('Number of records in raw data of each activity class') plt.xticks(ind, activity_labels, rotation='vertical') plt.show() from functools import cmp_to_key from matplotlib import colors as mcolors plt.rcParams['figure.figsize'] = [10, 5] vectors = df colors = ["red", "green", "blue", "gold", "yellow"] + list(mcolors.TABLEAU_COLORS.values()) p = vectors["ActivityName"] v = vectors[["ActivityName"]] v["c"] = 1 labels = p.unique() count = v.groupby(['ActivityName']).agg(['count'])[("c", "count")] labels, count def compare(item1, item2): return count[item2] - count[item1] print(labels) labels = sorted(labels, key=cmp_to_key(compare)) sizes = [count[l] for l in labels] fig1, ax1 = plt.subplots() patches, texts = ax1.pie(sizes, colors=colors) ax1.axis('equal') # Equal aspect ratio ensures that pie is drawn as a circle. plt.legend(patches, labels, loc="best") plt.tight_layout() plt.show()
['sitting' 'standing' 'lying on back side' 'lying on right side' 'ascending stairs' 'descending stairs' 'standing in an elevator still' 'moving around in an elevator' 'walking in a parking lot' 'walking on a treadmill1' 'walking on a treadmill2' 'running on a treadmill3' 'exercising on a stepper' 'exercising on a cross trainer' 'cycling in horizontal positions' 'cycling in vertical positions' 'rowing' 'jumping' 'playing basketball']
MIT
Reports/v0/DSADS Dataset.ipynb
hillshadow/continual-learning-for-HAR
#@title Calculation of density of gases #@markdown Demonstration of ideal gas law and equations of state. An introduction to equations of state can be seen in the [EoS Wikipedia pages](https://en.wikipedia.org/wiki/Equation_of_state). #@markdown <br><br>This document is part of the module ["Introduction to Gas Processing using NeqSim in Colab"](https://colab.research.google.com/github/EvenSol/NeqSim-Colab/blob/master/notebooks/examples_of_NeqSim_in_Colab.ipynb#scrollTo=_eRtkQnHpL70). %%capture !pip install neqsim import neqsim from neqsim.thermo.thermoTools import * import matplotlib import numpy as np import matplotlib.pyplot as plt import math plt.style.use('classic') %matplotlib inline #@title Introduction to Gas Laws #@markdown This video gives an introduction to the behaviour of gases as a function of pressure and temperature from IPython.display import YouTubeVideo YouTubeVideo('QhnlyHV8evY', width=600, height=400)
_____no_output_____
Apache-2.0
notebooks/thermodynamics/density_of_gas.ipynb
EvenSol/testneqsim
Comparison of ideal and real gas behaviour. In the following example we use the ideal gas law and the PR/SRK-EoS to calculate the density of a pure-component gas. At low pressure the ideal-gas and real densities are the same; at higher pressures the real gas density is higher, while at very high pressures the ideal gas density is the highest. The reason is that at intermediate pressures the attractive forces dominate, while at very high pressures repulsive forces start to dominate. The ideal gas equation of state is $pV=nRT$, where $R$ is the gas constant, $8.314\,\mathrm{J/(mol\,K)}$. The real gas equation of state is given in the form $pV=ZnRT$, where $Z$ is the gas compressibility factor. The density can be calculated from $\rho=n/V\times M$, where $M$ is the molar mass. Use the form to select the molecule, temperature and pressure range, and calculate the density and compressibility of the gas. Can you find a gas that shifts from gas to liquid when the pressure is increased?
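As a quick back-of-the-envelope check of these formulas, independent of NeqSim, the ideal-gas density follows directly from $\rho = pM/(RT)$. The sketch below uses the notebook's default selection of CO2 at 323 K with an example pressure of 50 bar and a textbook molar mass; it is an added illustration, not part of the original notebook.

```python
# Ideal-gas density of CO2 at 50 bar and 323 K, using rho = p*M/(R*T).
R = 8.314            # gas constant, J/(mol K)
M = 44.01e-3         # molar mass of CO2, kg/mol (textbook value)
p = 50e5             # pressure, Pa (50 bara)
T = 323.0            # temperature, K

rho_ideal = p * M / (R * T)
print(f"ideal-gas density: {rho_ideal:.1f} kg/m3")   # roughly 82 kg/m3

# With a real-gas compressibility factor Z, the same relation becomes rho = p*M/(Z*R*T),
# so Z < 1 (attraction-dominated) gives a higher density than the ideal-gas value.
```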
#@title Select component and equation of state. Set temperature [K] and pressure range [bara]. { run: "auto" } componentName = "CO2" #@param ["methane", "ethane", "propane", "CO2", "nitrogen"] temperature = 323.0 #@param {type:"number"} minPressure = 1.0 #@param {type:"number"} maxPressure = 350.0 #@param {type:"number"} eosname = "srk" #@param ["srk", "pr"] R = 8.314 # J/mol/K # Creating a fluid in neqsim fluid1 = fluid(eosname) #create a fluid using the SRK-EoS fluid1.addComponent(componentName, 1.0) #adding 1 mole to the fluid fluid1.init(0); print('molar mass of ', componentName, ' is ', fluid1.getMolarMass()*1000 , ' kg/mol') def idealgasdensity(pressure, temperature): m3permol = R*temperature/(pressure*1e5) m3perkg = m3permol/fluid1.getMolarMass() return 1.0/m3perkg def realgasdensity(pressure, temperature): fluid1.setPressure(pressure) fluid1.setTemperature(temperature) TPflash(fluid1) fluid1.initPhysicalProperties(); return fluid1.getDensity('kg/m3') def compressibility(pressure, temperature): fluid1.setPressure(pressure) fluid1.setTemperature(temperature) TPflash(fluid1) fluid1.initPhysicalProperties(); return fluid1.getZ() pressure = np.arange(minPressure, maxPressure, int((maxPressure-minPressure)/100)+1) idealdensity = [idealgasdensity(P,temperature) for P in pressure] realdensity = [realgasdensity(P,temperature) for P in pressure] compressibility = [compressibility(P,temperature) for P in pressure] plt.figure() plt.subplot(2, 1, 1) plt.plot(pressure, idealdensity, '--') plt.plot(pressure, realdensity, '-') plt.xlabel('Pressure [bara]') plt.ylabel('Density [kg/m3]') plt.legend(['ideal', 'real']) plt.subplot(2, 1, 2) plt.plot(pressure, compressibility, '-') plt.xlabel('Pressure [bara]') plt.ylabel('Z [-]') plt.legend(['compressibility factor'])
molar mass of CO2 is 44.01 kg/mol
Apache-2.0
notebooks/thermodynamics/density_of_gas.ipynb
EvenSol/testneqsim
Pressure of gas as a function of volume. 1 m3 of methane at 1 bar and 25 C is compressed to 200 bar and cooled to 25 C. What is the volume of the gas? What is the density of the compressed gas?
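Before running the NeqSim calculation, an ideal-gas estimate gives a useful sanity check: at constant temperature $p_1V_1 \approx p_2V_2$, so compressing from 1 bar to 200 bar shrinks the volume by roughly a factor of 200. The sketch below assumes ideal-gas behaviour and a textbook methane molar mass of 16.04 g/mol; it is an added illustration, not part of the notebook.

```python
# Ideal-gas estimate for isothermal compression of 1 m3 of methane from 1 bar to 200 bar.
R = 8.314                     # gas constant, J/(mol K)
T = 298.15                    # 25 C in K
M = 16.04e-3                  # molar mass of methane, kg/mol (textbook value)

n = 1e5 * 1.0 / (R * T)       # moles in 1 m3 at 1 bar
V2_ideal = n * R * T / 200e5  # ideal-gas volume at 200 bar
rho2_ideal = 200e5 * M / (R * T)

print(f"ideal-gas volume at 200 bar: {V2_ideal*1000:.1f} litres")   # about 5 litres
print(f"ideal-gas density at 200 bar: {rho2_ideal:.0f} kg/m3")      # about 129 kg/m3
# The real gas deviates from these numbers because Z is noticeably below 1 at 200 bar.
```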
componentName = "nitrogen" #@param ["methane", "ethane", "propane", "CO2", "nitrogen"] temperature = 298.15 #@param {type:"number"} initialVolume = 1.0 #@param {type:"number"} initialPressure = 1.0 #@param {type:"number"} endPressure = 10.0 #@param {type:"number"} R = 8.314 # J/mol/K initialMoles = initialPressure*1e5*1.0/(R*temperature) # Creating a fluid in neqsim fluid1 = fluid('srk') #create a fluid using the SRK-EoS fluid1.addComponent(componentName, initialMoles) #adding 1 Sm3 to the fluid fluid1.setTemperature(temperature) fluid1.setPressure(initialPressure) TPflash(fluid1) fluid1.initPhysicalProperties() startVolume = fluid1.getVolume('m3/sec') print('initialVolume ', startVolume, 'm3') print('initial gas density ', fluid1.getDensity('kg/m3'), 'kg/m3') print('initial gas compressiility ', fluid1.getZ(), ' [-]') fluid1.setPressure(endPressure) TPflash(fluid1) fluid1.initPhysicalProperties() endVolume = fluid1.getVolume('m3/sec') print('end volume ', fluid1.getVolume('Sm3/sec'), 'm3') print('volume ratio ', endVolume/startVolume, ' m3/m3') print('end gas density ', fluid1.getDensity('kg/m3'), ' kg/m3') print('end gas compressibility ', fluid1.getZ(), ' [-]')
initialVolume 1.0000083601119607 m3 initial gas density 1.1301476964655142 kg/m3 initial gas compressiility 0.9999527817885948 [-] end volume 0.09997979623211148 m3 volume ratio 0.09997896039680884 m3/m3 end gas density 11.30767486281327 kg/m3 end gas compressibility 0.9997423956912075 [-]
Apache-2.0
notebooks/thermodynamics/density_of_gas.ipynb
EvenSol/testneqsim
Calculation of density of LNG. The density of liquefied methane at the boiling point at atmospheric pressure can be calculated as demonstrated in the following example. In this case we use the SRK-EoS and the PR-EoS.
# Creating a fluid in neqsim eos = 'srk' #@param ["srk", "pr"] pressure = 1.01325 #@param {type:"number"} temperature = -162.0 #@param {type:"number"} fluid1 = fluid(eos) #create a fluid using the SRK-EoS fluid1.addComponent('methane', 1.0) fluid1.setTemperature(temperature) fluid1.setPressure(pressure) bubt(fluid1) fluid1.initPhysicalProperties() print('temperature at boiling point ', fluid1.getTemperature()-273.15, 'C') print('LNG density ', fluid1.getDensity('kg/m3'), ' kg/m3')
temperature at boiling point -161.1441471093413 C LNG density 428.1719693971862 kg/m3
Apache-2.0
notebooks/thermodynamics/density_of_gas.ipynb
EvenSol/testneqsim
Accuracy of EoS for calculating the density. The density calculated with any equation of state will have an uncertainty. GERG-2008 is a reference equation of state with high accuracy in the prediction of thermodynamic properties. In the following example we compare the gas density calculations of SRK/PR with the GERG-2008 EoS.
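The comparison below boils down to the relative deviation between the cubic-EoS density and the GERG-2008 reference. A small illustrative helper (not part of the notebook) for that deviation:

```python
def percent_deviation(value, reference):
    """Relative deviation of `value` from `reference`, in percent."""
    return (value - reference) / reference * 100.0

# Example: a cubic-EoS density of 98.5 kg/m3 against a reference of 100.0 kg/m3
print(percent_deviation(98.5, 100.0))   # -1.5, i.e. the EoS underpredicts by 1.5 %
```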
#@title Select component and equation of state. Set temperature [K] and pressure range [bara]. { run: "auto" } componentName = "methane" #@param ["methane", "ethane", "propane", "CO2", "nitrogen"] temperature = 298.0 #@param {type:"number"} minPressure = 1.0 #@param {type:"number"} maxPressure = 500.0 #@param {type:"number"} eosname = "srk" #@param ["srk", "pr"] R = 8.314 # J/mol/K # Creating a fluid in neqsim fluid1 = fluid(eosname) #create a fluid using the SRK-EoS fluid1.addComponent(componentName, 1.0) #adding 1 mole to the fluid fluid1.init(0); def realgasdensity(pressure, temperature): fluid1.setPressure(pressure) fluid1.setTemperature(temperature) TPflash(fluid1) fluid1.initPhysicalProperties(); return fluid1.getDensity('kg/m3') def GERGgasdensity(pressure, temperature): fluid1.setPressure(pressure) fluid1.setTemperature(temperature) TPflash(fluid1) return fluid1.getPhase('gas').getDensity_GERG2008() pressure = np.arange(minPressure, maxPressure, int((maxPressure-minPressure)/100)+1) realdensity = [realgasdensity(P,temperature) for P in pressure] GERG2008density = [GERGgasdensity(P,temperature) for P in pressure] deviation = [((realgasdensity(P,temperature)-GERGgasdensity(P,temperature))/GERGgasdensity(P,temperature)*100.0) for P in pressure] plt.figure() plt.subplot(2, 1, 1) plt.plot(pressure, realdensity, '-') plt.plot(pressure, GERG2008density, '--') plt.xlabel('Pressure [bara]') plt.ylabel('Density [kg/m3]') plt.legend(['EoS', 'GERG-2008']) plt.subplot(2, 1, 2) plt.plot(pressure, deviation) plt.xlabel('Pressure [bara]') plt.ylabel('Deviation [%]')
_____no_output_____
Apache-2.0
notebooks/thermodynamics/density_of_gas.ipynb
EvenSol/testneqsim
Calculation of density and compressibility factor for a natural gas mixture. In the following example we calculate the density of a multicomponent gas mixture.
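For a mixture, the density still follows $\rho = pM_{mix}/(ZRT)$ with the apparent molar mass $M_{mix} = \sum_i y_i M_i$. The hand calculation below uses the composition from the cell that follows, textbook molar masses, and the compressibility factor from the printed output, so it is only an approximate cross-check of the NeqSim result.

```python
# Apparent molar mass of the gas mixture used below (mole fractions sum to 1.0).
composition = {             # (mole fraction, molar mass in g/mol; textbook values)
    "nitrogen": (0.012, 28.01),
    "CO2":      (0.026, 44.01),
    "methane":  (0.858, 16.04),
    "ethane":   (0.070, 30.07),
    "propane":  (0.034, 44.10),
}
M_mix = sum(y * M for y, M in composition.values())      # about 18.85 g/mol

R, T, p, Z = 8.314, 288.15, 100e5, 0.774                 # Z taken from the output below
rho = p * (M_mix / 1000.0) / (Z * R * T)
print(f"M_mix = {M_mix:.2f} g/mol, real-gas density = {rho:.0f} kg/m3")
# about 102 kg/m3, close to the NeqSim value in the output below
```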
#@title Select equation of state and set temperature [C] and pressure [bara] { run: "auto" } temperature = 15.0 #@param {type:"number"} pressure = 100.0 #@param {type:"number"} eosname = "srk" #@param ["srk", "pr"] fluid1 = fluid(eosname) fluid1.addComponent('nitrogen', 1.2) fluid1.addComponent('CO2', 2.6) fluid1.addComponent('methane', 85.8) fluid1.addComponent('ethane', 7.0) fluid1.addComponent('propane', 3.4) fluid1.setTemperature(temperature, 'C') fluid1.setPressure(pressure, 'bara') TPflash(fluid1) fluid1.initProperties() print('gas compressibility ', fluid1.getZ(), ' -') print('gas density ', fluid1.getDensity('kg/m3'), ' kg/m3')
gas compressibility 0.7735308707131694 - gas density 102.2871090151433 kg/m3
Apache-2.0
notebooks/thermodynamics/density_of_gas.ipynb
EvenSol/testneqsim
![image.png](attachment:d6a8cd51-df64-466f-ba92-34ac05fff01f.png)
import torch import h5py import numpy as np import csv
_____no_output_____
MIT
intro_PyTorch/3.1.ipynb
caffeflow/intro_pytorch
Load the data and create a tensor
wine_path = "./data/chapter3/winequality-white.csv" wine_data = np.loadtxt(fname=wine_path,delimiter=';',skiprows=1) # the first row holds the column labels wine_data.shape wine_label = next(csv.reader(open(wine_path),delimiter=';')) wine_label = np.array(wine_label) wine_label.shape # convert the ndarray to a tensor wine_data = torch.from_numpy(wine_data)
_____no_output_____
MIT
intro_PyTorch/3.1.ipynb
caffeflow/intro_pytorch
Preprocess the tensor
# split off the score column as the ground truth wine_content = wine_data[:,:-1] wine_score = wine_data[:,-1] wine_content.shape,wine_score.shape wine_score
_____no_output_____
MIT
intro_PyTorch/3.1.ipynb
caffeflow/intro_pytorch
Feature scaling
# standardization content_mean = wine_content.mean(dim=0) content_var = wine_content.var(dim=0) content_normalized = (wine_content - content_mean)/torch.sqrt(content_var)
_____no_output_____
MIT
intro_PyTorch/3.1.ipynb
caffeflow/intro_pytorch
Data inspection
# split the wines into 3 quality levels content_bad = wine_content[torch.lt(wine_score,6)] content_mid = wine_content[torch.ge(wine_score,6) & torch.lt(wine_score,8)] content_good = wine_content[torch.gt(wine_score,8)] content_bad.shape # average the chemical measurements within each level content_bad = content_bad.mean(dim=0) content_mid = content_mid.mean(dim=0) content_good = content_good.mean(dim=0) content_bad.shape for i,args in enumerate(zip(wine_label,content_bad,content_mid,content_good)): print('{:2} {:20} {:6.2f} {:6.2f} {:6.2f}'.format(i,*args))
0 fixed acidity 6.96 6.81 7.42 1 volatile acidity 0.31 0.26 0.30 2 citric acid 0.33 0.33 0.39 3 residual sugar 7.05 6.08 4.12 4 chlorides 0.05 0.04 0.03 5 free sulfur dioxide 35.34 35.21 33.40 6 total sulfur dioxide 148.60 133.64 116.00 7 density 1.00 0.99 0.99 8 pH 3.17 3.20 3.31 9 sulphates 0.48 0.49 0.47 10 alcohol 9.85 10.80 12.18
MIT
intro_PyTorch/3.1.ipynb
caffeflow/intro_pytorch
**[Python Micro-Course Home Page](https://www.kaggle.com/learn/python)**
---
These exercises accompany the tutorial on [functions and getting help](https://www.kaggle.com/colinmorris/functions-and-getting-help). As before, don't forget to run the setup code below before jumping into question 1.
# SETUP. You don't need to worry for now about what this code does or how it works. from learntools.core import binder; binder.bind(globals()) from learntools.python.ex2 import * print('Setup complete.')
Setup complete.
MIT
Project Notes/Kaggle Learn/01 Python/exercise02 functions and getting help.ipynb
JoaoAnt/Projects
Exercises
1. Complete the body of the following function according to its docstring. HINT: Python has a builtin function `round`.
def round_to_two_places(num): """Return the given number rounded to two decimal places. >>> round_to_two_places(3.14159) 3.14 """ return round(num,2) pass round_to_two_places(3.14) q1.check() # Uncomment the following for a hint #q1.hint() # Or uncomment the following to peek at the solution #q1.solution()
_____no_output_____
MIT
Project Notes/Kaggle Learn/01 Python/exercise02 functions and getting help.ipynb
JoaoAnt/Projects
2. The help for `round` says that `ndigits` (the second argument) may be negative. What do you think will happen when it is? Try some examples in the following cell. Can you think of a case where this would be useful?
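If you want a quick peek before experimenting, a negative `ndigits` rounds to the left of the decimal point, for example:

```python
print(round(105.8555, -1))   # 110.0 -> rounds to the nearest ten
print(round(105.8555, -2))   # 100.0 -> rounds to the nearest hundred
print(round(1234, -3))       # 1000  -> handy for reporting figures to a given magnitude
```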
# Put your test code here round(105.8555, -1) print('Yes') #q2.solution()
_____no_output_____
MIT
Project Notes/Kaggle Learn/01 Python/exercise02 functions and getting help.ipynb
JoaoAnt/Projects
3. In a previous programming problem, the candy-sharing friends Alice, Bob and Carol tried to split candies evenly. For the sake of their friendship, any candies left over would be smashed. For example, if they collectively bring home 91 candies, they'll take 30 each and smash 1. Below is a simple function that will calculate the number of candies to smash for *any* number of total candies. Modify it so that it optionally takes a second argument representing the number of friends the candies are being split between. If no second argument is provided, it should assume 3 friends, as before. Update the docstring to reflect this new behaviour.
def to_smash(total_candies, number_friends=3): """Return the number of leftover candies that must be smashed after distributing the given number of candies evenly between 3 friends. >>> to_smash(91) 1 """ return total_candies % number_friends q3.check() #q3.hint() #q3.solution()
_____no_output_____
MIT
Project Notes/Kaggle Learn/01 Python/exercise02 functions and getting help.ipynb
JoaoAnt/Projects
4. It may not be fun, but reading and understanding error messages will be an important part of your Python career. Each code cell below contains some commented-out buggy code. For each cell...
1. Read the code and predict what you think will happen when it's run.
2. Then uncomment the code and run it to see what happens. (**Tip**: In the kernel editor, you can highlight several lines and press `ctrl`+`/` to toggle commenting.)
3. Fix the code (so that it accomplishes its intended purpose without throwing an exception).
round_to_two_places(9.9999) x = -10 y = 5 # Which of the two variables above has the smallest absolute value? smallest_abs = min(abs(x),abs(y)) def f(x): y = abs(x) return y print(f(5))
5
MIT
Project Notes/Kaggle Learn/01 Python/exercise02 functions and getting help.ipynb
JoaoAnt/Projects
Channel Flow Example
# Written for JHTDB by German G Saltar Rivera (2019) # To use K3D capabilities, use Firefox or Chrome browser. # Safari has trouble with K3D generated objects # import pyJHTDB from pyJHTDB import libJHTDB import time as tt import numpy as np import k3d #https://github.com/K3D-tools/K3D-jupyter import ipywidgets as widgets from ipywidgets import interact, interactive, fixed import math from numpy import sin,cos,pi lJHTDB = libJHTDB() lJHTDB.initialize() #Add token auth_token = "edu.jhu.pha.turbulence.testing-201311" #Replace with your own token here lJHTDB.add_token(auth_token) #Set domain to be queried time = 0 FD4Lag4 = 44 deltax = 0.01 deltay = 0.008 deltaz = 0.008 nx=100 ny=100 nz=100 xmin, xmax = 0, deltax*nx ymin, ymax = -1, -1+deltay*ny zmin, zmax = 0, deltaz*nz #Creates query points and arranges their coordinates into the required (n,3)-type array points = np.zeros((nx*ny*nz,3),dtype='float32') x=np.linspace(xmin,xmax,nx,dtype='float32') y=np.linspace(ymin,ymax,ny,dtype='float32') z=np.linspace(zmin,zmax,nz,dtype='float32') count = 0 for ii in range(np.size(x)): for jj in range(np.size(y)): for kk in range(np.size(z)): points[count,0] = x[ii] points[count,1] = y[jj] points[count,2] = z[kk] count = count + 1 print(np.shape(points)) #Queries the velocity gradient from JHTDB start = tt.time() velgrad = lJHTDB.getData( time, point_coords=points,sinterp = FD4Lag4, data_set = 'channel', getFunction = 'getVelocityGradient') lJHTDB.finalize() end = tt.time() print(end - start) print(velgrad.shape) #Calculates the q-criterion qc = np.zeros((np.size(velgrad[:,0]),1)) qc[:,0] = -0.5*(velgrad[:,0]**2+velgrad[:,4]**2+velgrad[:,8]**2+2*(velgrad[:,1]*velgrad[:,3]+velgrad[:,2]*velgrad[:,6]+velgrad[:,5]*velgrad[:,7])) count2 = 0 qcriterion = np.zeros((nx,ny,nz)) for ii in range(np.size(x)): for jj in range(np.size(y)): for kk in range(np.size(z)): qcriterion[ii,jj,kk] = qc[count2] count2 = count2 + 1 print(qcriterion.shape) #Creates a K3D-volume rendering object #In order to export the plot as html object, in the controls panel, click on "Snapshot" vol = k3d.volume(qcriterion, color_range=[2,100], color_map=np.array(k3d.basic_color_maps.Jet,dtype=np.float32), bounds=[xmin,xmax,ymin,ymax,zmin,zmax], alpha_coef=100,name="Channel Flow Q vizualization") plt = k3d.plot() plt.camera_auto_fit = False plt.camera = [1.5,0.2,1.5,0,-1,-0.5,0,1,0] plt += vol plt.axes = ['x','y','z'] plt.display()
_____no_output_____
Apache-2.0
examples/JHTDB_visualization_with_K3D.ipynb
lento234/pyJHTDB
Forced Isotropic Turbulence Example
#Set domain to be queried #Generates a 3D plot of Q iso-surface with overlayed kinetic energy volume #rendering in a [0,0.5]^3 subcube in isotropic turbulence time1 = 3.00 nx1=80 ny1=80 nz1=80 xmin1, xmax1 = 0, 0.5 ymin1, ymax1 = 0, 0.5 zmin1, zmax1 = 0, 0.5 #Creates query points and arranges their coordinates into the required (n,3)-type array points1 = np.zeros((nx1*ny1*nz1,3),dtype='float32') x1=np.linspace(xmin1,xmax1,nx1,dtype='float32') y1=np.linspace(ymin1,ymax1,ny1,dtype='float32') z1=np.linspace(zmin1,zmax1,nz1,dtype='float32') count = 0 for ii in range(np.size(x1)): for jj in range(np.size(y1)): for kk in range(np.size(z1)): points1[count,0] = x1[ii] points1[count,1] = y1[jj] points1[count,2] = z1[kk] count = count + 1 #Queries the velocity from JHTDB lJHTDB.initialize() start = tt.time() Lag4 = 4 vel1 = lJHTDB.getData( time1, point_coords=points1,sinterp = Lag4, data_set = 'isotropic1024coarse', getFunction = 'getVelocity') end = tt.time() print(end - start) lJHTDB.finalize() print(vel1.shape) #Queries the velocity gradient from JHTDB lJHTDB.initialize() start = tt.time() grad1 = lJHTDB.getData( time1, point_coords=points1,sinterp = FD4Lag4, data_set = 'isotropic1024coarse', getFunction = 'getVelocityGradient') end = tt.time() print(end - start) lJHTDB.finalize() print(grad1.shape) #Calculates the q-criterion q1 = np.zeros((np.size(grad1[:,0]),1)) q1[:,0] = -0.5*(grad1[:,0]**2+grad1[:,4]**2+grad1[:,8]**2+2*(grad1[:,1]*grad1[:,3]+grad1[:,2]*grad1[:,6]+grad1[:,5]*grad1[:,7])) #Calculates the kinetic energy e1 = np.zeros((np.size(vel1[:,0]),1)) e1[:,0] = vel1[:,0]**2 + vel1[:,1]**2 + vel1[:,2]**2 #Arrange 1D arrays into 3D arrays qcrit = np.zeros((nx1,ny1,nz1)) energ = np.zeros((nx1,ny1,nz1)) count2 = 0 for ii in range(nx1): for jj in range(ny1): for kk in range(nz1): qcrit[ii,jj,kk] = q1[count2] energ[ii,jj,kk] = e1[count2] count2 += 1 #Creates a K3D isosurface object isosurface = k3d.marching_cubes(qcrit,xmin=xmin1,xmax=xmax1,ymin=ymin1,ymax=ymax1, zmin=zmin1, zmax=zmax1, level=250, color = 0xf4ea0e,name = 'isotropic: Q Isosurface') #Creates a K3D volume rendering object volume = k3d.volume(energ, color_range=[0.1*np.max(energ),0.8*np.max(energ)], color_map=np.array(k3d.basic_color_maps.Jet,dtype=np.float32), bounds=[xmin1,xmax1,ymin1,ymax1,zmin1,zmax1] ,alpha_coef=15,name="isotropic: kinetic energy") plot = k3d.plot() plot.camera_auto_fit = True plot += volume plot += isosurface plot.axes = ['x','y','z'] plot.display()
_____no_output_____
Apache-2.0
examples/JHTDB_visualization_with_K3D.ipynb
lento234/pyJHTDB
Kobart tokenizer sample. Let's take a quick look at the tokenizer.
from kobart import get_kobart_tokenizer
_____no_output_____
Apache-2.0
src/summarization/2. tokenizer_sample.ipynb
youngerous/kobart-voice-summarization
tokenize
tok = get_kobart_tokenizer() # only tokenize tokenized = tok.tokenize('비정형데이터분석 팀 식사과정입니다. 무야호!') tokenized # convert to indice tok.convert_tokens_to_ids(tokenized) # encode = tokenize + convert_tokens_to_ids tok.encode('비정형데이터분석 팀 식사과정입니다. 무야호!')
_____no_output_____
Apache-2.0
src/summarization/2. tokenizer_sample.ipynb
youngerous/kobart-voice-summarization
check vocab
vocab = dict(sorted(tok.vocab.items(), key=lambda item: item[1])) len(vocab) vocab
_____no_output_____
Apache-2.0
src/summarization/2. tokenizer_sample.ipynb
youngerous/kobart-voice-summarization
DraftKings NFL Constraint Satisfaction
===
This is the companion code to a [blog post](https://zwlevonian.medium.com/integer-linear-programming-with-pulp-optimizing-a-draftkings-nfl-lineup-5e7524dd42d3) I wrote on Medium.
import pandas as pd import pulp
_____no_output_____
MIT
_notebook/DraftKingsNFLConstraintSatisfaction.ipynb
levon003/ml-visualized
Load in the weekly data
df = pd.read_csv('DKSalaries.csv') len(df) df.sample(n=5) # trim any postponed games, since those can't be included in a lineup df = df[df['Game Info'] != 'Postponed'] len(df) exclude_list = ['Dak Prescott'] df = df[~df['Name'].isin(exclude_list)] len(df) # this is equivalent to an extra constraint that requires playing only players with a minimum cost # does not apply to DST, since that's kind of a special category df = df[(df.Salary >= 4000)|(df['Roster Position'] == 'DST')] len(df)
_____no_output_____
MIT
_notebook/DraftKingsNFLConstraintSatisfaction.ipynb
levon003/ml-visualized
Create the constraint problem. Goal: maximize AvgPointsPerGame, subject to:
- TotalPlayers = 9
- TotalSalary <= 50000
- TotalPosition_WR = 3
- TotalPosition_RB = 2
- TotalPosition_TE = 1
- TotalPosition_QB = 1
- TotalPosition_FLEX = 1
- TotalPosition_DST = 1
- Each player in only one position (relevant only for FLEX)
prob = pulp.LpProblem('DK_NFL_weekly', pulp.LpMaximize) player_vars = [pulp.LpVariable(f'player_{row.ID}', cat='Binary') for row in df.itertuples()] # total assigned players constraint prob += pulp.lpSum(player_var for player_var in player_vars) == 9 # position constraints # TODO fix this, currently won't work # as it makes the problem infeasible def get_position_sum(player_vars, df, position): return pulp.lpSum([player_vars[i] * (position in df['Roster Position'].iloc[i]) for i in range(len(df))]) prob += get_position_sum(player_vars, df, 'QB') == 1 prob += get_position_sum(player_vars, df, 'DST') == 1 # to account for the FLEX position, we allow additional selections of the 3 FLEX-eligible roles prob += get_position_sum(player_vars, df, 'RB') >= 2 prob += get_position_sum(player_vars, df, 'WR') >= 3 prob += get_position_sum(player_vars, df, 'TE') >= 1 # total salary constraint prob += pulp.lpSum(df.Salary.iloc[i] * player_vars[i] for i in range(len(df))) <= 50000 # finally, specify the goal prob += pulp.lpSum([df.AvgPointsPerGame.iloc[i] * player_vars[i] for i in range(len(df))]) # solve and print the status prob.solve() print(pulp.LpStatus[prob.status]) # for each of the player variables, total_salary_used = 0 mean_AvgPointsPerGame = 0 for i in range(len(df)): if player_vars[i].value() == 1: row = df.iloc[i] print(row['Roster Position'], row.Name, row.TeamAbbrev, row.Salary, row.AvgPointsPerGame) total_salary_used += row.Salary mean_AvgPointsPerGame += row.AvgPointsPerGame #mean_AvgPointsPerGame /= 9 # divide by total players in roster to get a mean total_salary_used, mean_AvgPointsPerGame
RB/FLEX Dalvin Cook MIN 8200 28.65 QB Russell Wilson SEA 7600 32.01 WR/FLEX Tyler Lockett SEA 6800 22.07 WR/FLEX Corey Davis TEN 5900 17.98 RB/FLEX Melvin Gordon III DEN 5300 15.72 WR/FLEX CeeDee Lamb DAL 4900 14.21 TE/FLEX Hunter Henry LAC 4000 9.63 WR/FLEX Keelan Cole JAX 4000 12.37 DST Colts IND 3300 11.71
MIT
_notebook/DraftKingsNFLConstraintSatisfaction.ipynb
levon003/ml-visualized
Python Crash Course. Master in Data Science - Sapienza University. Homework 2: Python Challenges. A.A. 2017/18. Tutor: Francesco Fabbri
![time_to_code.jpg](attachment:time_to_code.jpg)
Instructions
So guys, here we are! **Finally** you're facing your first **REAL** homework. Are you ready to fight? We're going to apply all the Pythonic stuff seen before AND EVEN MORE... Simple rules:
1. Don't touch the instructions, you **just have to fill the blank rows**.
2. This is supposed to be an exercise for improving your Pythonic skills in a spirit of collaboration, so... of course you can help your classmates and obviously get a really huge amount of help from all the others as well (as the proverb says: "I get help from you and then you help me", right?!...).
3. **RULE OF THUMB** for you during the homework:
 - *1st Step:* try to solve the problem alone
 - *2nd Step:* google for the answer
 - *3rd Step:* ask your colleagues
 - *4th Step:* scream and complain about life
 - *5th Step:* ask the Tutors
And the Prize? The Beer? The glory?! Guys, life is hard... in this Master it's even worse... Soooo, since you seem so smart, I want to test you before the start of all the courses... But not now. You have to come prepared to the challenge, so right now solve these first 6 exercises, then it will be time for **FIGHTING** and (for one of you) **DRINKING**.
![bevehomer.PNG](attachment:bevehomer.PNG)
Warm-up... 1. 12! is equal to...
n=12 x=1 while n>1: x=x*n n=n-1 print(x)
479001600
MIT
02/homework_day2.ipynb
Py101/py101-assignments-andremarco
2. More math... Write a program which finds all the numbers between 0 and 1000 (both included) that are divisible by 7 but are not multiples of 5. The numbers obtained should be printed in a comma-separated sequence on a single line. (range and CFS)
ris=[] for x in range(0,1001): if x%7==0 and x%5!=0: ris.append(str(x)) r2=','.join(ris) print(r2)
7,14,21,28,42,49,56,63,77,84,91,98,112,119,126,133,147,154,161,168,182,189,196,203,217,224,231,238,252,259,266,273,287,294,301,308,322,329,336,343,357,364,371,378,392,399,406,413,427,434,441,448,462,469,476,483,497,504,511,518,532,539,546,553,567,574,581,588,602,609,616,623,637,644,651,658,672,679,686,693,707,714,721,728,742,749,756,763,777,784,791,798,812,819,826,833,847,854,861,868,882,889,896,903,917,924,931,938,952,959,966,973,987,994
MIT
02/homework_day2.ipynb
Py101/py101-assignments-andremarco
2. Count capital letters. In this exercise you're going to deal with YOUR DATA. Indeed, the list below stores your Favorite TV Series. But, as you can see, there is something weird: there are too many CaPITal LeTTErs. Your task is to count the capital letters in all the strings and then print the total number of capital letters in the whole list.
tv_series = ['Game of THRroneS', 'big bang tHeOrY', 'MR robot', 'WesTWorlD', 'fIRefLy', "i haven't", 'HOW I MET your mothER', 'friENds', 'bRon broen', 'gossip girl', 'prISon break', 'breaking BAD'] #alfab=["A","B","C","D","E""F","G","H","I","L","M","N","O","P","Q","R","S","T","U","V","Z","X","Y","K","W"] #cont=0 ris=[] for i in tv_series: k=0 for a in i: if a.isupper(): k=k+1 ris.append(k) ris
_____no_output_____
MIT
02/homework_day2.ipynb
Py101/py101-assignments-andremarco
3. A remark. Using the list above, create a dictionary where the keys are unique IDs and the values are the TV series. You have to do the exercise keeping in mind these 2 constraints: 1. The order of the IDs has to be **dependent on the alphabetical order of the titles**, i.e. 0: first_title_in_alphabetical_order and so on... 2. **Solve the mess** of the capital letters: we want them only at the start of the words ("prISon break" should be "Prison Break").
lista=[] for i in tv_series: lista.append(i.title()) idx=list(range(0+1,12+1)) dic_one=dict(zip(sorted(lista),idx)) print(dic_one)
{'Big Bang Theory': 1, 'Breaking Bad': 2, 'Bron Broen': 3, 'Firefly': 4, 'Friends': 5, 'Game Of Thrrones': 6, 'Gossip Girl': 7, 'How I Met Your Mother': 8, "I Haven'T": 9, 'Mr Robot': 10, 'Prison Break': 11, 'Westworld': 12}
MIT
02/homework_day2.ipynb
Py101/py101-assignments-andremarco
4. Dictionary to its maximum. Invert the keys with the values in the dictionary built before.
inv_dic={v: k for k, v in dic_one.items()} inv_dic
_____no_output_____
MIT
02/homework_day2.ipynb
Py101/py101-assignments-andremarco
Have you done it in **one line of code**? If not, try now!
4. Other boring math. Let's talk about our beloved exams. Starting from the exams and CFU below, are you able to compute their weighted mean? Let's do it and print the result. Description of the data: exams[1] = $(title_1, grade_1)$, cfu[1] = $CFU_1$
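Concretely, the weighted mean asked for here is $\bar{g} = \dfrac{\sum_i g_i \cdot CFU_i}{\sum_i CFU_i}$, where $g_i$ is the grade of exam $i$ and $CFU_i$ its credits (treating "30 e lode" as 30). A compact sketch, assuming hypothetical parallel lists `grades` and `credits` with the lode value already mapped to 30:

```python
# Weighted mean of grades, weighted by credits (hypothetical variable names).
weighted_mean = sum(g * c for g, c in zip(grades, credits)) / sum(credits)
```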
exams = [('BIOINFORMATICS', 29), ('DATA MANAGEMENT FOR DATA SCIENCE', 30), ('DIGITAL EPIDEMIOLOGY', 26), ('NETWORKING FOR BIG DATA AND LABORATORY',28), ('QUANTITATIVE MODELS FOR ECONOMIC ANALYSIS AND MANAGEMENT','30 e lode'), ('DATA MINING TECHNOLOGY FOR BUSINESS AND SOCIETY', 30), ('STATISTICAL LEARNING',30), ('ALGORITHMIC METHODS OF DATA MINING AND LABORATORY',30), ('FUNDAMENTALS OF DATA SCIENCE AND LABORATORY', 29)] cfu = sum([6,6,6,9,6,6,6,9,9]) # create a list in which are stored the marks voti=[] for i in exams: voti.append(i[1]) crediti=[6,6,6,9,6,6,6,9,9] # must transform the "30 e lode" value in integer value prova=[] for n,i in enumerate(voti): if i=="30 e lode": voti[n]=30 fin=[] for x in range(len(crediti)): c=0 c=crediti[x]*voti[x] fin.append(c) average_1=sum(fin)/sum(crediti) print(average_1)
29.095238095238095
MIT
02/homework_day2.ipynb
Py101/py101-assignments-andremarco
5. Palindromic numbers. Write a script which finds all the palindromic numbers in the range [0, **N**] (bounds included). The numbers obtained should be printed in a comma-separated sequence on a single line. What is **N**? Looking at the exercise before: **N** = (Total number of CFU) x (Sum of all the grades). (details: https://en.wikipedia.org/wiki/Palindromic_number)
top=cfu*sum(voti) tot_num=list(range(1,top)) def palindo(s): return str(s)==str(s)[::-1] tt=[] for i in tot_num: c=palindo(i) if c==True: tt.append(str(i)) r6=','.join(tt) print(r6)
1,2,3,4,5,6,7,8,9,11,22,33,44,55,66,77,88,99,101,111,121,131,141,151,161,171,181,191,202,212,222,232,242,252,262,272,282,292,303,313,323,333,343,353,363,373,383,393,404,414,424,434,444,454,464,474,484,494,505,515,525,535,545,555,565,575,585,595,606,616,626,636,646,656,666,676,686,696,707,717,727,737,747,757,767,777,787,797,808,818,828,838,848,858,868,878,888,898,909,919,929,939,949,959,969,979,989,999,1001,1111,1221,1331,1441,1551,1661,1771,1881,1991,2002,2112,2222,2332,2442,2552,2662,2772,2882,2992,3003,3113,3223,3333,3443,3553,3663,3773,3883,3993,4004,4114,4224,4334,4444,4554,4664,4774,4884,4994,5005,5115,5225,5335,5445,5555,5665,5775,5885,5995,6006,6116,6226,6336,6446,6556,6666,6776,6886,6996,7007,7117,7227,7337,7447,7557,7667,7777,7887,7997,8008,8118,8228,8338,8448,8558,8668,8778,8888,8998,9009,9119,9229,9339,9449,9559,9669,9779,9889,9999,10001,10101,10201,10301,10401,10501,10601,10701,10801,10901,11011,11111,11211,11311,11411,11511,11611,11711,11811,11911,12021,12121,12221,12321,12421,12521,12621,12721,12821,12921,13031,13131,13231,13331,13431,13531,13631,13731,13831,13931,14041,14141,14241,14341,14441,14541,14641,14741,14841,14941,15051,15151,15251,15351,15451,15551,15651,15751,15851,15951,16061,16161,16261,16361,16461
MIT
02/homework_day2.ipynb
Py101/py101-assignments-andremarco
6. StackOverflow. Let's start using your new best friend. Now I'm going to give you another task, slightly more difficult, BUT this time, just by googling, you will easily find the answer on www.stackoverflow.com. You can use the code there for solving the exercise, BUT you have to show that you understood the solution by **COMMENTING** the code, showing me the thinking process behind it.
6.A Show me an example of how to use the *Try - Except* statements **PROPERLY**.
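One fuller pattern, as an illustrative sketch rather than the answer cell below, catches a specific exception, uses `else` for the success path, and `finally` for clean-up:

```python
# Illustrative try/except/else/finally pattern: catch only the specific, expected error.
def safe_divide(a, b):
    try:
        result = a / b
    except ZeroDivisionError:
        print("cannot divide by zero")      # handle only the error we expect
        result = None
    else:
        print("division succeeded")         # runs only when no exception was raised
    finally:
        print("done")                       # always runs, success or failure
    return result

safe_divide(10, 2)   # prints "division succeeded" then "done", returns 5.0
safe_divide(10, 0)   # prints "cannot divide by zero" then "done", returns None
```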
# start with a try block: if Python can execute the statement, "HELLO" is printed try: print("HELLO") # otherwise the except block runs; here it is executed only if an ImportError is raised except ImportError: print ("NO module found")
HELLO
MIT
02/homework_day2.ipynb
Py101/py101-assignments-andremarco
6.B Given the list of words below, after copying it into a variable, explain and provide code for obtaining a **Bag of Words** from it. (Hint: use dictionaries and loops) ['theory', 'of', 'bron', 'firefly', 'thrones', 'break', 'bad', 'mother', 'firefly', "haven't", 'prison', 'big', 'friends', 'girl', 'westworld', 'bad', "haven't", 'gossip', 'thrones', 'your', 'big', 'how', 'friends', 'theory', 'your', 'bron', 'bad', 'bad', 'breaking', 'met', 'breaking', 'breaking', 'game', 'bron', 'your', 'breaking', 'met', 'bang', 'how', 'mother', 'bad', 'theory', 'how', 'i', 'friends', "haven't", 'of', 'of', 'gossip', 'i', 'robot', 'of', 'prison', 'bad', 'friends', 'friends', 'i', 'robot', 'bang', 'mother', 'bang', 'i', 'of', 'bad', 'friends', 'theory', 'i', 'friends', 'thrones', 'prison', 'theory', 'theory', 'big', 'of', 'bang', 'how', 'thrones', 'bang', 'theory', 'friends', 'game', 'bang', 'mother', 'broen', 'bad', 'game', 'break', 'break', 'bang', 'big', 'gossip', 'robot', 'met', 'i', 'game', 'your', 'met', 'bad', 'firefly', 'your']
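A bag of words here is just a word-to-count mapping. A minimal sketch with a plain dictionary and a loop (the hint's approach), assuming the list above is stored in a variable called `words`:

```python
# Build a bag of words (word -> number of occurrences) with a dict and a loop.
bag_of_words = {}
for word in words:                 # `words` is assumed to hold the list above
    bag_of_words[word] = bag_of_words.get(word, 0) + 1

print(bag_of_words["bad"])         # e.g. how many times 'bad' appears
# The same result in one line: from collections import Counter; Counter(words)
```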
list_6=['theory', 'of', 'bron', 'firefly', 'thrones', 'break', 'bad', 'mother', 'firefly', "haven't", 'prison', 'big', 'friends', 'girl', 'westworld', 'bad', "haven't", 'gossip', 'thrones', 'your', 'big', 'how', 'friends', 'theory', 'your', 'bron', 'bad', 'bad', 'breaking', 'met', 'breaking', 'breaking', 'game', 'bron', 'your', 'breaking', 'met', 'bang', 'how', 'mother', 'bad', 'theory', 'how', 'i', 'friends', "haven't", 'of', 'of', 'gossip', 'i', 'robot', 'of', 'prison', 'bad', 'friends', 'friends', 'i', 'robot', 'bang', 'mother', 'bang', 'i', 'of', 'bad', 'friends', 'theory', 'i', 'friends', 'thrones', 'prison', 'theory', 'theory', 'big', 'of', 'bang', 'how', 'thrones', 'bang', 'theory', 'friends', 'game', 'bang', 'mother', 'broen', 'bad', 'game', 'break', 'break', 'bang', 'big', 'gossip', 'robot', 'met', 'i', 'game', 'your', 'met', 'bad', 'firefly', 'your'] indice=list(range(0+1,len(list_6)+1)) dic_6=dict(zip(list_6,indice)) print(dic_6)
{'theory': 79, 'of': 74, 'bron': 34, 'firefly': 99, 'thrones': 77, 'break': 88, 'bad': 98, 'mother': 83, "haven't": 46, 'prison': 70, 'big': 90, 'friends': 80, 'girl': 14, 'westworld': 15, 'gossip': 91, 'your': 100, 'how': 76, 'breaking': 36, 'met': 97, 'game': 95, 'bang': 89, 'i': 94, 'robot': 92, 'broen': 84}
MIT
02/homework_day2.ipynb
Py101/py101-assignments-andremarco
6.C And now, write down code which computes the first 10 Fibonacci numbers. (details: https://en.wikipedia.org/wiki/Fibonacci_number)
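For comparison, one common way to generate the sequence starting from 1, 1; this is an illustrative sketch, not the answer cell below:

```python
# First 10 Fibonacci numbers, starting the sequence at 1, 1.
fib = []
a, b = 1, 1
for _ in range(10):
    fib.append(a)
    a, b = b, a + b

print(fib)   # [1, 1, 2, 3, 5, 8, 13, 21, 34, 55]
```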
y=0 z=1 rr=[] for count in range(1,11): v=0 v=z z=y+z y=v count=count+1 rr.append(z) print(rr)
[1, 2, 3, 5, 8, 13, 21, 34, 55, 89]
MIT
02/homework_day2.ipynb
Py101/py101-assignments-andremarco
SVM
# evaluate a logistic regression model using k-fold cross-validation from numpy import mean from numpy import std from sklearn.model_selection import KFold from sklearn.model_selection import cross_val_score from sklearn.model_selection import ShuffleSplit from sklearn.linear_model import LogisticRegression # create dataset #X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=1) # prepare the cross-validation procedure #cv = KFold(n_splits=5, test_size= 0.2, random_state=0) cv = ShuffleSplit(n_splits=5, test_size=0.2, random_state=42) # create model model = SVC(kernel='rbf', C=1, class_weight=class_weight) # evaluate model scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1) # report performance print('Accuracy: %.4f (%.4f)' % (mean(scores), std(scores))) scores import numpy as np import matplotlib.pyplot as plt from sklearn import svm, datasets from sklearn.metrics import auc from sklearn.metrics import plot_roc_curve from sklearn.model_selection import StratifiedKFold # ############################################################################# # Classification and ROC analysis # Run classifier with cross-validation and plot ROC curves cv = ShuffleSplit(n_splits=5, test_size=0.2, random_state=42) classifier = svm.SVC(kernel='rbf', probability=True, class_weight=class_weight, random_state=42) tprs = [] aucs = [] mean_fpr = np.linspace(0, 1, 100) fig, ax = plt.subplots() for i, (train, test) in enumerate(cv.split(X, y)): classifier.fit(X, y) viz = plot_roc_curve(classifier, X, y, name='ROC fold {}'.format(i), alpha=0.3, lw=1, ax=ax) interp_tpr = np.interp(mean_fpr, viz.fpr, viz.tpr) interp_tpr[0] = 0.0 tprs.append(interp_tpr) aucs.append(viz.roc_auc) ax.plot([0, 1], [0, 1], linestyle='--', lw=2, color='r', label='Chance', alpha=.8) mean_tpr = np.mean(tprs, axis=0) mean_tpr[-1] = 1.0 mean_auc = auc(mean_fpr, mean_tpr) std_auc = np.std(aucs) ax.plot(mean_fpr, mean_tpr, color='b', label=r'Mean ROC (AUC = %0.2f $\pm$ %0.2f)' % (mean_auc, std_auc), lw=2, alpha=.8) std_tpr = np.std(tprs, axis=0) tprs_upper = np.minimum(mean_tpr + std_tpr, 1) tprs_lower = np.maximum(mean_tpr - std_tpr, 0) ax.fill_between(mean_fpr, tprs_lower, tprs_upper, color='grey', alpha=.2, label=r'$\pm$ 1 std. dev.') ax.set(xlim=[-0.05, 1.05], ylim=[-0.05, 1.05], title="Receiver operating characteristic") ax.legend(loc="lower right") plt.show()
_____no_output_____
MIT
Diabetics Prediction (ML) CV=5.ipynb
AmitHasanShuvo/Prediction-of-Clinical-Risk-Factors-of-Diabetes-Using-ML-Resolving-Class-Imbalance
LR
from numpy import mean from numpy import std from sklearn.model_selection import KFold from sklearn.model_selection import cross_val_score from sklearn.model_selection import ShuffleSplit from sklearn.linear_model import LogisticRegression # create dataset #X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=1) # prepare the cross-validation procedure #cv = KFold(n_splits=5, test_size= 0.2, random_state=0) cv = ShuffleSplit(n_splits=5, test_size=0.2, random_state=42) # create model model = LogisticRegression(class_weight=class_weight) # evaluate model scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1) # report performance print('Accuracy: %.4f (%.4f)' % (mean(scores), std(scores))) scores import numpy as np import matplotlib.pyplot as plt from sklearn import svm, datasets from sklearn.metrics import auc from sklearn.metrics import plot_roc_curve from sklearn.model_selection import StratifiedKFold # ############################################################################# # Data IO and generation # Import some data to play with #iris = datasets.load_iris() #X = iris.data #y = iris.target #X, y = X[y != 2], y[y != 2] #n_samples, n_features = X.shape # Add noisy features #random_state = np.random.RandomState(0) #X = np.c_[X, random_state.randn(n_samples, 200 * n_features)] # ############################################################################# # Classification and ROC analysis # Run classifier with cross-validation and plot ROC curves cv = ShuffleSplit(n_splits=5, test_size=0.2, random_state=42) classifier = LogisticRegression(class_weight=class_weight,random_state=42) tprs = [] aucs = [] mean_fpr = np.linspace(0, 1, 100) fig, ax = plt.subplots() for i, (train, test) in enumerate(cv.split(X, y)): classifier.fit(X, y) viz = plot_roc_curve(classifier, X, y, name='ROC fold {}'.format(i), alpha=0.3, lw=1, ax=ax) interp_tpr = np.interp(mean_fpr, viz.fpr, viz.tpr) interp_tpr[0] = 0.0 tprs.append(interp_tpr) aucs.append(viz.roc_auc) ax.plot([0, 1], [0, 1], linestyle='--', lw=2, color='r', label='Chance', alpha=.8) mean_tpr = np.mean(tprs, axis=0) mean_tpr[-1] = 1.0 mean_auc = auc(mean_fpr, mean_tpr) std_auc = np.std(aucs) ax.plot(mean_fpr, mean_tpr, color='b', label=r'Mean ROC (AUC = %0.2f $\pm$ %0.2f)' % (mean_auc, std_auc), lw=2, alpha=.8) std_tpr = np.std(tprs, axis=0) tprs_upper = np.minimum(mean_tpr + std_tpr, 1) tprs_lower = np.maximum(mean_tpr - std_tpr, 0) ax.fill_between(mean_fpr, tprs_lower, tprs_upper, color='grey', alpha=.2, label=r'$\pm$ 1 std. dev.') ax.set(xlim=[-0.05, 1.05], ylim=[-0.05, 1.05], title="Receiver operating characteristic example") ax.legend(loc="lower right") plt.show()
_____no_output_____
MIT
Diabetics Prediction (ML) CV=5.ipynb
AmitHasanShuvo/Prediction-of-Clinical-Risk-Factors-of-Diabetes-Using-ML-Resolving-Class-Imbalance
RF
from numpy import mean from numpy import std from sklearn.model_selection import KFold from sklearn.model_selection import cross_val_score from sklearn.ensemble import RandomForestClassifier from sklearn.model_selection import ShuffleSplit # create dataset #X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=1) # prepare the cross-validation procedure #cv = KFold(n_splits=5, test_size= 0.2, random_state=0) cv = ShuffleSplit(n_splits=5, test_size=0.2, random_state=42) # create model model = RandomForestClassifier(class_weight=class_weight) # evaluate model scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1) # report performance print('Accuracy: %.4f (%.4f)' % (mean(scores), std(scores))) scores import numpy as np import matplotlib.pyplot as plt from sklearn import svm, datasets from sklearn.metrics import auc from sklearn.metrics import plot_roc_curve from sklearn.model_selection import StratifiedKFold # ############################################################################# # Data IO and generation # Import some data to play with #iris = datasets.load_iris() #X = iris.data #y = iris.target #X, y = X[y != 2], y[y != 2] #n_samples, n_features = X.shape # Add noisy features #random_state = np.random.RandomState(0) #X = np.c_[X, random_state.randn(n_samples, 200 * n_features)] # ############################################################################# # Classification and ROC analysis # Run classifier with cross-validation and plot ROC curves cv = ShuffleSplit(n_splits=5, test_size=0.2, random_state=42) classifier = RandomForestClassifier(class_weight=class_weight,random_state=42) tprs = [] aucs = [] mean_fpr = np.linspace(0, 1, 100) fig, ax = plt.subplots() for i, (train, test) in enumerate(cv.split(X, y)): classifier.fit(X, y) #viz = plot_roc_curve(classifier, X, y, # name='ROC fold {}'.format(i), # alpha=0.3, lw=1, ax=ax) interp_tpr = np.interp(mean_fpr, viz.fpr, viz.tpr) interp_tpr[0] = 0.0 tprs.append(interp_tpr) aucs.append(viz.roc_auc) ax.plot([0, 1], [0, 1], linestyle='--', lw=2, color='r', label='Chance', alpha=.8) mean_tpr = np.mean(tprs, axis=0) mean_tpr[-1] = 1.0 mean_auc = auc(mean_fpr, mean_tpr) std_auc = np.std(aucs) ax.plot(mean_fpr, mean_tpr, color='b', label=r'Mean ROC (AUC = %0.4f $\pm$ %0.4f)' % (mean_auc, std_auc), lw=2, alpha=.8) std_tpr = np.std(tprs, axis=0) tprs_upper = np.minimum(mean_tpr + std_tpr, 1) tprs_lower = np.maximum(mean_tpr - std_tpr, 0) ax.fill_between(mean_fpr, tprs_lower, tprs_upper, color='grey', alpha=.2, label=r'$\pm$ 1 std. dev.') ax.set(xlim=[-0.05, 1.05], ylim=[-0.05, 1.05], title="Receiver operating characteristic example") ax.legend(loc="lower right") plt.show()
_____no_output_____
MIT
Diabetics Prediction (ML) CV=5.ipynb
AmitHasanShuvo/Prediction-of-Clinical-Risk-Factors-of-Diabetes-Using-ML-Resolving-Class-Imbalance
DT
from numpy import mean from numpy import std from sklearn.model_selection import KFold from sklearn.model_selection import cross_val_score from sklearn.model_selection import ShuffleSplit from sklearn.tree import DecisionTreeClassifier # create dataset #X, y = make_classification(n_samples=1000, n_features=20, n_informative=15, n_redundant=5, random_state=1) # prepare the cross-validation procedure #cv = KFold(n_splits=5, test_size= 0.2, random_state=0) cv = ShuffleSplit(n_splits=5, test_size=0.2, random_state=4200) # create model model = DecisionTreeClassifier(class_weight=class_weight) # evaluate model scores = cross_val_score(model, X, y, scoring='accuracy', cv=cv, n_jobs=-1) # report performance print('Accuracy: %.4f (%.4f)' % (mean(scores), std(scores))) scores import numpy as np import matplotlib.pyplot as plt from sklearn import svm, datasets from sklearn.metrics import auc from sklearn.metrics import plot_roc_curve from sklearn.model_selection import StratifiedKFold # ############################################################################# # Data IO and generation # Import some data to play with #iris = datasets.load_iris() #X = iris.data #y = iris.target #X, y = X[y != 2], y[y != 2] #n_samples, n_features = X.shape # Add noisy features #random_state = np.random.RandomState(0) #X = np.c_[X, random_state.randn(n_samples, 200 * n_features)] # ############################################################################# # Classification and ROC analysis # Run classifier with cross-validation and plot ROC curves cv = ShuffleSplit(n_splits=5, test_size=0.2, random_state=4200) classifier = DecisionTreeClassifier(class_weight=class_weight,random_state=4200) tprs = [] aucs = [] mean_fpr = np.linspace(0, 1, 100) fig, ax = plt.subplots() for i, (train, test) in enumerate(cv.split(X, y)): classifier.fit(X, y) viz = plot_roc_curve(classifier, X, y, name='ROC fold {}'.format(i), alpha=0.3, lw=1, ax=ax) interp_tpr = np.interp(mean_fpr, viz.fpr, viz.tpr) interp_tpr[0] = 0.0 tprs.append(interp_tpr) aucs.append(viz.roc_auc) ax.plot([0, 1], [0, 1], linestyle='--', lw=2, color='r', label='Chance', alpha=.8) mean_tpr = np.mean(tprs, axis=0) mean_tpr[-1] = 1.0 mean_auc = auc(mean_fpr, mean_tpr) std_auc = np.std(aucs) ax.plot(mean_fpr, mean_tpr, color='b', label=r'Mean ROC (AUC = %0.4f $\pm$ %0.4f)' % (mean_auc, std_auc), lw=2, alpha=.8) std_tpr = np.std(tprs, axis=0) tprs_upper = np.minimum(mean_tpr + std_tpr, 1) tprs_lower = np.maximum(mean_tpr - std_tpr, 0) ax.fill_between(mean_fpr, tprs_lower, tprs_upper, color='grey', alpha=.2, label=r'$\pm$ 1 std. dev.') ax.set(xlim=[-0.05, 1.05], ylim=[-0.05, 1.05], title="Receiver operating characteristic example") ax.legend(loc="lower right") plt.show() #from sklearn.model_selection import cross_val_score #from sklearn import svm #clf = svm.SVC(kernel='rbf', C=1, class_weight=class_weight) #scores = cross_val_score(clf, X, y, cv=5) #print("Accuracy: %0.4f (+/- %0.4f)" % (scores.mean(), scores.std() * 2)) #clf.score(X_test, y_test)
_____no_output_____
MIT
Diabetics Prediction (ML) CV=5.ipynb
AmitHasanShuvo/Prediction-of-Clinical-Risk-Factors-of-Diabetes-Using-ML-Resolving-Class-Imbalance
ANN
import keras from keras.models import Sequential from keras.layers import Dense,Dropout classifier=Sequential() classifier.add(Dense(units=256, kernel_initializer='uniform',activation='relu',input_dim=24)) classifier.add(Dense(units=128, kernel_initializer='uniform',activation='relu')) classifier.add(Dropout(p=0.1)) classifier.add(Dense(units=64, kernel_initializer='uniform',activation='relu')) classifier.add(Dropout(p=0.3)) classifier.add(Dense(units=32, kernel_initializer='uniform',activation='relu')) classifier.add(Dense(units=1, kernel_initializer='uniform',activation='sigmoid')) classifier.compile(optimizer='adam',loss="binary_crossentropy",metrics=['accuracy']) classifier.fit(X_train,y_train,batch_size=10,epochs=100,class_weight=class_weight) #clf_svc_rbf.fit(X_train,y_train) from sklearn.metrics import confusion_matrix,classification_report,roc_auc_score,auc,f1_score y_pred = classifier.predict(X_test)>0.8 import matplotlib.pyplot as plt cm = confusion_matrix(y_test,y_pred) #plt.figure(figsize=(5,5)) #sns.heatmap(cm,annot=True) #plt.show() #print(classification_report(y_test,y_pred_clf_svc_rbf)) print(classification_report(y_test, y_pred)) #plot_confusion_matrix(confusion_matrix(y_test, y_pred)) from sklearn.metrics import roc_curve, auc false_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, y_pred) roc_auc = auc(false_positive_rate, true_positive_rate) roc_auc #from sklearn.tree import DecisionTreeClassifier #from sklearn.model_selection import cross_val_score #dt = DecisionTreeClassifier(class_weight=class_weight) #scores = cross_val_score(clf, X, y, cv=5) #print("Accuracy: %0.4f (+/- %0.4f)" % (scores.mean(), scores.std() * 2)) ''' from sklearn.linear_model import LogisticRegression from sklearn.metrics import confusion_matrix,classification_report,roc_auc_score,auc,f1_score lr = LogisticRegression() lr.fit(X_train,y_train) y_pred_logistic = lr.predict(X_test) import matplotlib.pyplot as plt cm = confusion_matrix(y_test,y_pred_logistic) plt.figure(figsize=(5,5)) sns.heatmap(cm,annot=True,linewidths=.3) plt.show() print(classification_report(y_test,y_pred_logistic)) from sklearn.metrics import roc_curve, auc false_positive_rate, true_positive_rate, thresholds = roc_curve(y_test, y_pred_logistic) roc_auc = auc(false_positive_rate, true_positive_rate) roc_auc print(f1_score(y_test, y_pred_logistic,average="macro")) ''' from sklearn import datasets from sklearn.model_selection import cross_val_score from sklearn.linear_model import LogisticRegression from sklearn.naive_bayes import GaussianNB from sklearn.ensemble import RandomForestClassifier from sklearn.ensemble import VotingClassifier clf1 = SVC(kernel='rbf', C=1, class_weight=class_weight,random_state=42) clf2 = LogisticRegression(class_weight=class_weight,random_state=42) clf3 = RandomForestClassifier(class_weight=class_weight,random_state=42) clf4 = DecisionTreeClassifier(class_weight=class_weight,random_state=42) #clf5 = Sequential() eclf = VotingClassifier( estimators=[('svm', clf1), ('lr', clf2), ('rf', clf3), ('dt',clf4)], voting='hard') for clf, label in zip([clf1, clf2, clf3,clf4 ,eclf], ['SVM', 'LR', 'RF','DT', 'Ensemble']): scores = cross_val_score(clf, X, y, scoring='accuracy', cv=5) print("Accuracy: %0.4f (+/- %0.4f) [%s]" % (scores.mean(), scores.std(), label)) scores
Accuracy: 0.8886 (+/- 0.0027) [Ensemble]
MIT
Diabetics Prediction (ML) CV=5.ipynb
AmitHasanShuvo/Prediction-of-Clinical-Risk-Factors-of-Diabetes-Using-ML-Resolving-Class-Imbalance
Define data conversion functions
def peek(iterable): try: first = next(iterable) except StopIteration: return None return first, itertools.chain([first], iterable) def json_to_feather(filename, new_filename_base, records_per_file = 1000000, pipe_func = None): records = map(json.loads, open(filename)) records_per_file = 1000000 file_num = 0 peek_res = peek(records) while peek_res is not None: _, records = peek_res data = pd.DataFrame.from_records(records, nrows = records_per_file) data.to_feather(f"{new_filename_base}_tmp_{file_num}.feather") peek_res = peek(records) file_num += 1 dfs = list() for read_num in range(file_num): tmp_filename = f"{new_filename_base}_tmp_{read_num}.feather" small_df = pd.read_feather(tmp_filename) if pipe_func is not None: small_df = small_df.pipe(pipe_func) dfs.append(small_df) os.remove(tmp_filename) data = pd.concat(dfs, axis = 0).reset_index() data.to_feather(f"{new_filename_base}.feather") def starts_with(df, start_str): mask = df.columns.str.startswith(start_str) columns = list(df.columns[mask]) return(columns) def pipeable_drop(df, labels): return(df.drop(columns = labels)) def pipeable_drop_startswith(df, labels, start): new_df = (df.drop(columns = labels) .pipe(lambda x: x.drop(columns = starts_with(x, start))) ) return(new_df)
_____no_output_____
MIT
capstone.ipynb
jasonbossert/TDI_Capstone
Convert Data to Feather
filename = "yelp_academic_dataset_business.json" new_filename_base = "yelp_business" business_drop = partial(pipeable_drop, labels = ["address", "is_open", "attributes", "hours"]) json_to_feather(filename, new_filename_base, pipe_func = business_drop) filename = "yelp_academic_dataset_user.json" new_filename_base = "yelp_user" users_drop = partial(pipeable_drop_startswith, labels = ["name", "useful", "funny", "cool", "elite", "friends", "fans"], start = "compliment") json_to_feather(filename, new_filename_base, pipe_func = users_drop) filename = "yelp_academic_dataset_review.json" new_filename_base = "yelp_review" review_drop = partial(pipeable_drop, labels = ["text"]) json_to_feather(filename, new_filename_base, pipe_func = review_drop)
_____no_output_____
MIT
capstone.ipynb
jasonbossert/TDI_Capstone
GBDT
regr = GradientBoostingRegressor(max_depth=20, random_state=0, max_features=2, n_estimators=333)
regr.fit(X_train, Y_trainmin/90000)
regr2 = GradientBoostingRegressor(max_depth=20, random_state=0, max_features=2, n_estimators=333)
regr2.fit(X_train, Y_trainmax/100000)

from sklearn.metrics import mean_squared_error
from sklearn.metrics import r2_score

y_pred = regr.predict(X_train.values)
print('Minimum salary training-set RMSE', np.sqrt(mean_squared_error(Y_trainmin.values.T[0], y_pred*90000)))
# y_pred = regr.predict(X_test.values)
# print('Minimum salary test-set RMSE', np.sqrt(mean_squared_error(Y_testmin.values.T[0], y_pred*90000)))
# print('Minimum salary R square', r2_score(Y_testmin.values.T[0], y_pred*90000))  # goodness-of-fit measure

y_pred2 = regr2.predict(X_train.values)
print('Maximum salary training-set RMSE', np.sqrt(mean_squared_error(Y_trainmax.values.T[0], y_pred2*100000)))
# y_pred2 = regr.predict(X_test.values)
# print('Maximum salary test-set RMSE', np.sqrt(mean_squared_error(Y_testmax.values.T[0], y_pred2*100000)))
# print('Maximum salary R square', r2_score(Y_testmax.values.T[0], y_pred2*100000))  # goodness-of-fit measure

minsalary = y_pred*90000
maxsalary = y_pred2*100000

Y_trainmax.columns = ['最高工资']
Y_trainmin.columns = ['最低工资']
Y_trainmax.to_csv('trainmaxsalary.csv', index=None)
Y_trainmin.to_csv('trainminsalary.csv', index=None)

# predicted values
file = open('pretrainmaxsalary.csv', 'a')
for i in range(len(maxsalary)):
    s = str(maxsalary[i]).replace('[', '').replace(']', '')  # strip brackets; these two lines are optional depending on the data
    s = s.replace("'", '').replace(',', '') + '\n'  # strip single quotes and commas, append a newline to each row
    file.write(s)
file.close()

file = open('pretrainminsalary.csv', 'a')
for i in range(len(minsalary)):
    s = str(minsalary[i]).replace('[', '').replace(']', '')  # strip brackets; these two lines are optional depending on the data
    s = s.replace("'", '').replace(',', '') + '\n'  # strip single quotes and commas, append a newline to each row
    file.write(s)
file.close()

maxsalary = pd.read_csv('pretrainmaxsalary.csv', header=None)
minsalary = pd.read_csv('pretrainminsalary.csv', header=None)
# minsalary.columns = ['预测最高工资']
# minsalary.columns = ['预测最低工资']

X_train.columns = ['职位', '城市', '经验']
X_train['预测最低工资'] = minsalary
X_train['预测最高工资'] = maxsalary
X_train.to_csv('result/salaryresult.csv', index=None, encoding="utf_8_sig")

# map the occupation, city and experience codes back to names using the saved dictionaries
city = pd.read_pickle('dict/city.pkl')
experience = pd.read_pickle('dict/experience.pkl')
occupation = pd.read_pickle('dict/occupation.pkl')
salary = pd.read_csv('result/salaryresult.csv')

precity = []
for i in salary['城市']:
    for key, values in city.items():
        if values == i:
            precity.append(key)
salary['城市'] = precity

preexperience = []
for i in salary['经验']:
    for key, values in experience.items():
        if values == i:
            preexperience.append(key)
#print(preexperience)
salary['经验'] = preexperience

preoccupation = []
for i in salary['职位']:
    for key, values in occupation.items():
        if values == i:
            preoccupation.append(key)
#print(preexperience)
salary['职位'] = preoccupation

salary.to_csv('result/Resultsalary.csv', index=None, encoding="utf_8_sig")
_____no_output_____
MIT
src/Prediction/salary/presalary.ipynb
chenshihang/Analysis-of-College-Graduates-Employment-Orientation
T1053.002 - Scheduled Task/Job: At (Windows). Adversaries may abuse the at.exe utility to perform task scheduling for initial or recurring execution of malicious code. The [at](https://attack.mitre.org/software/S0110) utility exists as an executable within Windows for scheduling tasks at a specified time and date. Using [at](https://attack.mitre.org/software/S0110) requires that the Task Scheduler service be running and that the user be logged on as a member of the local Administrators group. An adversary may use at.exe in Windows environments to execute programs at system startup or on a scheduled basis for persistence. [at](https://attack.mitre.org/software/S0110) can also be abused to conduct remote Execution as part of Lateral Movement and/or to run a process under the context of a specified account (such as SYSTEM). Note: the at.exe command line utility has been deprecated in current versions of Windows in favor of schtasks. A hedged Python sketch for spotting at.exe-style jobs appears after the test below. Atomic Tests
#Import the Module before running the tests. # Checkout Jupyter Notebook at https://github.com/cyb3rbuff/TheAtomicPlaybook to run PS scripts. Import-Module /Users/0x6c/AtomicRedTeam/atomics/invoke-atomicredteam/Invoke-AtomicRedTeam.psd1 - Force
_____no_output_____
MIT
playbook/tactics/privilege-escalation/T1053.002.ipynb
haresudhan/The-AtomicPlaybook
Atomic Test 1 - At.exe Scheduled task. Executes cmd.exe. Note: deprecated in Windows 8+. Upon successful execution, cmd.exe will spawn at.exe and create a scheduled task that will spawn cmd at a specific time. **Supported Platforms:** windows. Attack Commands: Run with `command_prompt`: ```command_prompt at 13:20 /interactive cmd```
Invoke-AtomicTest T1053.002 -TestNumbers 1
_____no_output_____
MIT
playbook/tactics/privilege-escalation/T1053.002.ipynb
haresudhan/The-AtomicPlaybook
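Following up on the detection note above, the snippet below is a minimal, hedged sketch (not part of the original playbook) for spotting at.exe-style jobs on a host after the test has run. It assumes a Windows machine where the built-in `schtasks /query /fo CSV /v` command is available with English-locale column headers such as `TaskName` and `Task To Run`; the `At<N>` task-name pattern is the convention at.exe historically used.

```python
# Hypothetical helper, not part of the Atomic Red Team playbook: list scheduled
# tasks via the built-in schtasks utility and flag names that look like the
# "At1", "At2", ... jobs at.exe historically created (an assumption).
import csv
import io
import re
import subprocess

def find_at_style_tasks():
    # /fo CSV /v asks schtasks for verbose CSV output (assumed English locale)
    result = subprocess.run(
        ["schtasks", "/query", "/fo", "CSV", "/v"],
        capture_output=True, text=True, check=True,
    )
    rows = csv.DictReader(io.StringIO(result.stdout))
    suspicious = []
    for row in rows:
        name = (row.get("TaskName") or "").strip()
        # at.exe-created jobs typically show up as \At1, \At2, ...
        if re.fullmatch(r"\\?At\d+", name):
            suspicious.append(row)
    return suspicious

if __name__ == "__main__":
    for task in find_at_style_tasks():
        print(task.get("TaskName"), "->", task.get("Task To Run"))
```

On a host where at.exe was never used, the function should simply return an empty list.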
Image Classification: In this project, you'll classify images from the [CIFAR-10 dataset](https://www.cs.toronto.edu/~kriz/cifar.html). The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images. Get the Data: Run the following cell to download the [CIFAR-10 dataset for python](https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz).
""" DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ from urllib.request import urlretrieve from os.path import isfile, isdir from tqdm import tqdm import problem_unittests as tests import tarfile cifar10_dataset_folder_path = 'cifar-10-batches-py' # Use Floyd's cifar-10 dataset if present floyd_cifar10_location = '/input/cifar-10/python.tar.gz' if isfile(floyd_cifar10_location): tar_gz_path = floyd_cifar10_location else: tar_gz_path = 'cifar-10-python.tar.gz' class DLProgress(tqdm): last_block = 0 def hook(self, block_num=1, block_size=1, total_size=None): self.total = total_size self.update((block_num - self.last_block) * block_size) self.last_block = block_num if not isfile(tar_gz_path): with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar: urlretrieve( 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz', tar_gz_path, pbar.hook) if not isdir(cifar10_dataset_folder_path): with tarfile.open(tar_gz_path) as tar: tar.extractall() tar.close() tests.test_folder_path(cifar10_dataset_folder_path)
All files found!
MIT
image-classification/dlnd_image_classification.ipynb
cfcdavidchan/Deep-Learning-Foundation-Nanodegree
Explore the Data: The dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named `data_batch_1`, `data_batch_2`, etc. Each batch contains the labels and images that are one of the following:* airplane* automobile* bird* cat* deer* dog* frog* horse* ship* truck. Understanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the `batch_id` and `sample_id`. The `batch_id` is the id for a batch (1-5). The `sample_id` is the id for an image and label pair in the batch. Ask yourself "What are all possible labels?", "What is the range of values for the image data?", "Are the labels in order or random?". Answers to questions like these will help you preprocess the data and end up with better predictions.
%matplotlib inline %config InlineBackend.figure_format = 'retina' import helper import numpy as np # Explore the dataset batch_id = 1 sample_id = 5 helper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)
Stats of batch 1: Samples: 10000 Label Counts: {0: 1005, 1: 974, 2: 1032, 3: 1016, 4: 999, 5: 937, 6: 1030, 7: 1001, 8: 1025, 9: 981} First 20 Labels: [6, 9, 9, 4, 1, 1, 2, 7, 8, 3, 4, 7, 7, 2, 9, 9, 9, 3, 2, 6] Example of Image 5: Image - Min Value: 0 Max Value: 252 Image - Shape: (32, 32, 3) Label - Label Id: 1 Name: automobile
MIT
image-classification/dlnd_image_classification.ipynb
cfcdavidchan/Deep-Learning-Foundation-Nanodegree
Implement Preprocess Functions. Normalize: In the cell below, implement the `normalize` function to take in image data, `x`, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as `x`.
def normalize(x): """ Normalize a list of sample image data in the range of 0 to 1 : x: List of image data. The image shape is (32, 32, 3) : return: Numpy array of normalize data """ x_norm = x.reshape(x.size) x_norm = (x_norm - min(x_norm))/(max(x_norm)-min(x_norm)) return x_norm.reshape(x.shape) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_normalize(normalize)
Tests Passed
MIT
image-classification/dlnd_image_classification.ipynb
cfcdavidchan/Deep-Learning-Foundation-Nanodegree
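A side note on the `normalize` implementation above: because it rescales by the minimum and maximum of the whole input array, the output depends on which images happen to be in the batch. Since CIFAR-10 pixels are 8-bit values in 0-255, a fixed scaling is a common alternative; the sketch below is a minimal illustration under that assumption, not the project's required answer.

```python
import numpy as np

def normalize_fixed_range(x):
    # Assumes 8-bit image data in [0, 255]; maps it to [0, 1] without
    # depending on the contents of the particular batch.
    return np.asarray(x, dtype=np.float32) / 255.0
```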
One-hot encode: Just like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the `one_hot_encode` function. The input, `x`, is a list of labels. Implement the function to return the list of labels as a one-hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to `one_hot_encode`. Make sure to save the map of encodings outside the function. Hint: Don't reinvent the wheel.
def one_hot_encode(x): """ One hot encode a list of sample labels. Return a one-hot encoded vector for each label. : x: List of sample Labels : return: Numpy array of one-hot encoded labels """ one_hot_array = np.zeros((len(x), 10)) for index in range(len(x)): val_index = x[index] one_hot_array[index][val_index] = 1 return one_hot_array """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_one_hot_encode(one_hot_encode)
Tests Passed
MIT
image-classification/dlnd_image_classification.ipynb
cfcdavidchan/Deep-Learning-Foundation-Nanodegree
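Picking up the "don't reinvent the wheel" hint from the one-hot encoding task above: the explicit loop works, but numpy fancy indexing (or `sklearn.preprocessing.LabelBinarizer`) does the same thing in one line. A minimal sketch, assuming integer labels 0-9:

```python
import numpy as np

def one_hot_encode_eye(x, n_classes=10):
    # Row i of np.eye(n_classes) is the one-hot vector for class i,
    # so indexing with the label array encodes the whole batch at once.
    return np.eye(n_classes)[np.asarray(x, dtype=int)]

# e.g. one_hot_encode_eye([1, 0, 3]) ->
# [[0,1,0,...], [1,0,0,...], [0,0,0,1,...]]
```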
Randomize Data: As you saw from exploring the data above, the order of the samples is already randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset. Preprocess all the data and save it: Running the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.
""" DON'T MODIFY ANYTHING IN THIS CELL """ # Preprocess Training, Validation, and Testing Data helper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)
_____no_output_____
MIT
image-classification/dlnd_image_classification.ipynb
cfcdavidchan/Deep-Learning-Foundation-Nanodegree
Check Point: This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
""" DON'T MODIFY ANYTHING IN THIS CELL """ import pickle import problem_unittests as tests import helper # Load the Preprocessed Validation data valid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))
_____no_output_____
MIT
image-classification/dlnd_image_classification.ipynb
cfcdavidchan/Deep-Learning-Foundation-Nanodegree
Build the network: For the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.>**Note:** If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the [TensorFlow Layers](https://www.tensorflow.org/api_docs/python/tf/layers) or [TensorFlow Layers (contrib)](https://www.tensorflow.org/api_guides/python/contrib.layers) packages to build each layer, except the layers you build in the "Convolutional and Max Pooling Layer" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pick up.>However, if you would like to get the most out of this course, try to solve all the problems _without_ using anything from the TF Layers packages. You **can** still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the `conv2d` class, [tf.layers.conv2d](https://www.tensorflow.org/api_docs/python/tf/layers/conv2d), you would want to use the TF Neural Network version of `conv2d`, [tf.nn.conv2d](https://www.tensorflow.org/api_docs/python/tf/nn/conv2d). Let's begin! Input: The neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions:* Implement `neural_net_image_input` * Return a [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder) * Set the shape using `image_shape` with batch size set to `None`. * Name the TensorFlow placeholder "x" using the TensorFlow `name` parameter in the [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder).* Implement `neural_net_label_input` * Return a [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder) * Set the shape using `n_classes` with batch size set to `None`. * Name the TensorFlow placeholder "y" using the TensorFlow `name` parameter in the [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder).* Implement `neural_net_keep_prob_input` * Return a [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder) for dropout keep probability. * Name the TensorFlow placeholder "keep_prob" using the TensorFlow `name` parameter in the [TF Placeholder](https://www.tensorflow.org/api_docs/python/tf/placeholder). These names will be used at the end of the project to load your saved model. Note: `None` for shapes in TensorFlow allows for a dynamic size.
import tensorflow as tf def neural_net_image_input(image_shape): """ Return a Tensor for a batch of image input : image_shape: Shape of the images : return: Tensor for image input. """ return tf.placeholder(tf.float32, shape=(None, image_shape[0], image_shape[1], image_shape[2]), name='x') def neural_net_label_input(n_classes): """ Return a Tensor for a batch of label input : n_classes: Number of classes : return: Tensor for label input. """ return tf.placeholder(tf.uint8, (None, n_classes), name='y') def neural_net_keep_prob_input(): """ Return a Tensor for keep probability : return: Tensor for keep probability. """ return tf.placeholder(tf.float32, None, name='keep_prob') """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tf.reset_default_graph() tests.test_nn_image_inputs(neural_net_image_input) tests.test_nn_label_inputs(neural_net_label_input) tests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)
Image Input Tests Passed. Label Input Tests Passed. Keep Prob Tests Passed.
MIT
image-classification/dlnd_image_classification.ipynb
cfcdavidchan/Deep-Learning-Foundation-Nanodegree
Convolution and Max Pooling Layer: Convolution layers have a lot of success with images. For this code cell, you should implement the function `conv2d_maxpool` to apply convolution then max pooling:* Create the weight and bias using `conv_ksize`, `conv_num_outputs` and the shape of `x_tensor`.* Apply a convolution to `x_tensor` using weight and `conv_strides`. * We recommend you use same padding, but you're welcome to use any padding.* Add bias* Add a nonlinear activation to the convolution.* Apply Max Pooling using `pool_ksize` and `pool_strides`. * We recommend you use same padding, but you're welcome to use any padding.**Note:** You **can't** use [TensorFlow Layers](https://www.tensorflow.org/api_docs/python/tf/layers) or [TensorFlow Layers (contrib)](https://www.tensorflow.org/api_guides/python/contrib.layers) for **this** layer, but you can still use TensorFlow's [Neural Network](https://www.tensorflow.org/api_docs/python/tf/nn) package. You may still use the shortcut option for all the **other** layers.
def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides): """ Apply convolution then max pooling to x_tensor :param x_tensor: TensorFlow Tensor :param conv_num_outputs: Number of outputs for the convolutional layer :param conv_ksize: kernal size 2-D Tuple for the convolutional layer :param conv_strides: Stride 2-D Tuple for convolution :param pool_ksize: kernal size 2-D Tuple for pool :param pool_strides: Stride 2-D Tuple for pool : return: A tensor that represents convolution and max pooling of x_tensor """ x_tensor_dims = x_tensor._shape.ndims channel_num = x_tensor._shape.dims[x_tensor_dims - 1].value mu = 0 sigma = 0.1 conv_weight = tf.Variable(tf.truncated_normal(shape=(conv_ksize[0], conv_ksize[1], channel_num, conv_num_outputs), mean=mu, stddev=sigma)) conv_bias = tf.Variable(tf.zeros(conv_num_outputs)) conv = tf.nn.conv2d(x_tensor, conv_weight, strides=[1, conv_strides[0], conv_strides[1], 1], padding='SAME') + conv_bias conv = tf.nn.relu(conv) return tf.nn.max_pool(conv, ksize=[1, pool_ksize[1], pool_ksize[1], 1], strides=[1, pool_strides[0], pool_strides[1], 1], padding='SAME') """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_con_pool(conv2d_maxpool)
Tests Passed
MIT
image-classification/dlnd_image_classification.ipynb
cfcdavidchan/Deep-Learning-Foundation-Nanodegree
Flatten Layer: Implement the `flatten` function to change the dimension of `x_tensor` from a 4-D tensor to a 2-D tensor. The output should be the shape (*Batch Size*, *Flattened Image Size*). Shortcut option: you can use classes from the [TensorFlow Layers](https://www.tensorflow.org/api_docs/python/tf/layers) or [TensorFlow Layers (contrib)](https://www.tensorflow.org/api_guides/python/contrib.layers) packages for this layer. For more of a challenge, only use other TensorFlow packages.
def flatten(x_tensor): """ Flatten x_tensor to (Batch Size, Flattened Image Size) : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions. : return: A tensor of size (Batch Size, Flattened Image Size). """ shaped = x_tensor.get_shape().as_list() reshaped = tf.reshape(x_tensor, [-1, shaped[1] * shaped[2] * shaped[3]]) return reshaped """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_flatten(flatten)
Tests Passed
MIT
image-classification/dlnd_image_classification.ipynb
cfcdavidchan/Deep-Learning-Foundation-Nanodegree
Fully-Connected Layer: Implement the `fully_conn` function to apply a fully connected layer to `x_tensor` with the shape (*Batch Size*, *num_outputs*). Shortcut option: you can use classes from the [TensorFlow Layers](https://www.tensorflow.org/api_docs/python/tf/layers) or [TensorFlow Layers (contrib)](https://www.tensorflow.org/api_guides/python/contrib.layers) packages for this layer. For more of a challenge, only use other TensorFlow packages.
def fully_conn(x_tensor, num_outputs): """ Apply a fully connected layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of output that the new tensor should be. : return: A 2-D tensor where the second dimension is num_outputs. """ weight = tf.Variable(tf.truncated_normal(shape=[x_tensor.get_shape().as_list()[1], num_outputs], mean=0.0, stddev=0.1)) bias = tf.Variable(tf.zeros(shape=num_outputs)) return tf.nn.relu(tf.matmul(x_tensor, weight) + bias) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_fully_conn(fully_conn)
Tests Passed
MIT
image-classification/dlnd_image_classification.ipynb
cfcdavidchan/Deep-Learning-Foundation-Nanodegree
Output Layer: Implement the `output` function to apply a fully connected layer to `x_tensor` with the shape (*Batch Size*, *num_outputs*). Shortcut option: you can use classes from the [TensorFlow Layers](https://www.tensorflow.org/api_docs/python/tf/layers) or [TensorFlow Layers (contrib)](https://www.tensorflow.org/api_guides/python/contrib.layers) packages for this layer. For more of a challenge, only use other TensorFlow packages. **Note:** Activation, softmax, or cross entropy should **not** be applied to this.
def output(x_tensor, num_outputs): """ Apply a output layer to x_tensor using weight and bias : x_tensor: A 2-D tensor where the first dimension is batch size. : num_outputs: The number of output that the new tensor should be. : return: A 2-D tensor where the second dimension is num_outputs. """ weight = tf.Variable(tf.truncated_normal(shape=[x_tensor.get_shape().as_list()[1], num_outputs], mean=0.0, stddev=0.1)) bias = tf.Variable(tf.zeros(shape=num_outputs)) return tf.matmul(x_tensor, weight) + bias """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_output(output)
Tests Passed
MIT
image-classification/dlnd_image_classification.ipynb
cfcdavidchan/Deep-Learning-Foundation-Nanodegree
Create Convolutional Model: Implement the function `conv_net` to create a convolutional neural network model. The function takes in a batch of images, `x`, and outputs logits. Use the layers you created above to create this model:* Apply 1, 2, or 3 Convolution and Max Pool layers* Apply a Flatten Layer* Apply 1, 2, or 3 Fully Connected Layers* Apply an Output Layer* Return the output* Apply [TensorFlow's Dropout](https://www.tensorflow.org/api_docs/python/tf/nn/dropout) to one or more layers in the model using `keep_prob`.
def conv_net(x, keep_prob): """ Create a convolutional neural network model : x: Placeholder tensor that holds image data. : keep_prob: Placeholder tensor that hold dropout keep probability. : return: Tensor that represents logits """ # Play around with different number of outputs, kernel size and stride # Function Definition from Above: # conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides) conv_num_outputs = 10 conv_ksize = (3, 3) conv_strides = (1, 1) pool_ksize = (2, 2) pool_strides = (2, 2) x_tensor = conv2d_maxpool(x, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides) # Function Definition from Above: # flatten(x_tensor) x_tensor = flatten(x_tensor) # Play around with different number of outputs # Function Definition from Above: # fully_conn(x_tensor, num_outputs) num_outputs = 100 x_tensor = fully_conn(x_tensor, num_outputs) x_tensor = tf.nn.dropout(x_tensor, keep_prob) # Set this to the number of classes # Function Definition from Above: # output(x_tensor, num_outputs) num_outputs = 10 model = output(x_tensor, num_outputs) # TODO: return output return model """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ ############################## ## Build the Neural Network ## ############################## # Remove previous weights, bias, inputs, etc.. tf.reset_default_graph() # Inputs x = neural_net_image_input((32, 32, 3)) y = neural_net_label_input(10) keep_prob = neural_net_keep_prob_input() # Model logits = conv_net(x, keep_prob) # Name logits Tensor, so that is can be loaded from disk after training logits = tf.identity(logits, name='logits') # Loss and Optimizer cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y)) optimizer = tf.train.AdamOptimizer().minimize(cost) # Accuracy correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1)) accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy') tests.test_conv_net(conv_net)
Neural Network Built!
MIT
image-classification/dlnd_image_classification.ipynb
cfcdavidchan/Deep-Learning-Foundation-Nanodegree
Train the Neural Network. Single Optimization: Implement the function `train_neural_network` to do a single optimization. The optimization should use `optimizer` to optimize in `session` with a `feed_dict` of the following:* `x` for image input* `y` for labels* `keep_prob` for keep probability for dropout. This function will be called for each batch, so `tf.global_variables_initializer()` has already been called. Note: Nothing needs to be returned. This function is only optimizing the neural network.
def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch): """ Optimize the session on a batch of images and labels : session: Current TensorFlow session : optimizer: TensorFlow optimizer function : keep_probability: keep probability : feature_batch: Batch of Numpy image data : label_batch: Batch of Numpy label data """ feed_dict = { x: feature_batch, y: label_batch, keep_prob: keep_probability} session.run(optimizer, feed_dict=feed_dict) """ DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE """ tests.test_train_nn(train_neural_network)
Tests Passed
MIT
image-classification/dlnd_image_classification.ipynb
cfcdavidchan/Deep-Learning-Foundation-Nanodegree
Show Stats: Implement the function `print_stats` to print loss and validation accuracy. Use the global variables `valid_features` and `valid_labels` to calculate validation accuracy. Use a keep probability of `1.0` to calculate the loss and validation accuracy.
def print_stats(session, feature_batch, label_batch, cost, accuracy): """ Print information about loss and validation accuracy : session: Current TensorFlow session : feature_batch: Batch of Numpy image data : label_batch: Batch of Numpy label data : cost: TensorFlow cost function : accuracy: TensorFlow accuracy function """ current_cost = session.run(cost,feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.}) valid_accuracy = session.run(accuracy,feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.}) print('Loss: {:<8.3} Valid Accuracy: {:<5.3}'.format(current_cost,valid_accuracy))
_____no_output_____
MIT
image-classification/dlnd_image_classification.ipynb
cfcdavidchan/Deep-Learning-Foundation-Nanodegree
Hyperparameters: Tune the following parameters:* Set `epochs` to the number of iterations until the network stops learning or starts overfitting.* Set `batch_size` to the highest number that your machine has memory for. Most people set it to common memory-friendly sizes: * 64 * 128 * 256 * ...* Set `keep_probability` to the probability of keeping a node when using dropout.
# TODO: Tune Parameters epochs = 20 batch_size = 128 keep_probability = 0.5
_____no_output_____
MIT
image-classification/dlnd_image_classification.ipynb
cfcdavidchan/Deep-Learning-Foundation-Nanodegree
Train on a Single CIFAR-10 Batch: Instead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.
""" DON'T MODIFY ANYTHING IN THIS CELL """ print('Checking the Training on a Single Batch...') with tf.Session() as sess: # Initializing the variables sess.run(tf.global_variables_initializer()) # Training cycle for epoch in range(epochs): batch_i = 1 for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size): train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels) print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='') print_stats(sess, batch_features, batch_labels, cost, accuracy)
Checking the Training on a Single Batch... Epoch 1, CIFAR-10 Batch 1: Loss: 2.09 Valid Accuracy: 0.321 Epoch 2, CIFAR-10 Batch 1: Loss: 1.93 Valid Accuracy: 0.397 Epoch 3, CIFAR-10 Batch 1: Loss: 1.81 Valid Accuracy: 0.432 Epoch 4, CIFAR-10 Batch 1: Loss: 1.68 Valid Accuracy: 0.458 Epoch 5, CIFAR-10 Batch 1: Loss: 1.58 Valid Accuracy: 0.461 Epoch 6, CIFAR-10 Batch 1: Loss: 1.48 Valid Accuracy: 0.478 Epoch 7, CIFAR-10 Batch 1: Loss: 1.38 Valid Accuracy: 0.484 Epoch 8, CIFAR-10 Batch 1: Loss: 1.3 Valid Accuracy: 0.495 Epoch 9, CIFAR-10 Batch 1: Loss: 1.25 Valid Accuracy: 0.498 Epoch 10, CIFAR-10 Batch 1: Loss: 1.14 Valid Accuracy: 0.504 Epoch 11, CIFAR-10 Batch 1: Loss: 1.1 Valid Accuracy: 0.512 Epoch 12, CIFAR-10 Batch 1: Loss: 1.04 Valid Accuracy: 0.513 Epoch 13, CIFAR-10 Batch 1: Loss: 0.968 Valid Accuracy: 0.526 Epoch 14, CIFAR-10 Batch 1: Loss: 0.916 Valid Accuracy: 0.525 Epoch 15, CIFAR-10 Batch 1: Loss: 0.92 Valid Accuracy: 0.522 Epoch 16, CIFAR-10 Batch 1: Loss: 0.854 Valid Accuracy: 0.526 Epoch 17, CIFAR-10 Batch 1: Loss: 0.794 Valid Accuracy: 0.525 Epoch 18, CIFAR-10 Batch 1: Loss: 0.773 Valid Accuracy: 0.528 Epoch 19, CIFAR-10 Batch 1: Loss: 0.723 Valid Accuracy: 0.532 Epoch 20, CIFAR-10 Batch 1: Loss: 0.684 Valid Accuracy: 0.536
MIT
image-classification/dlnd_image_classification.ipynb
cfcdavidchan/Deep-Learning-Foundation-Nanodegree
Fully Train the Model: Now that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.
""" DON'T MODIFY ANYTHING IN THIS CELL """ save_model_path = './image_classification' print('Training...') with tf.Session() as sess: # Initializing the variables sess.run(tf.global_variables_initializer()) # Training cycle for epoch in range(epochs): # Loop over all batches n_batches = 5 for batch_i in range(1, n_batches + 1): for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size): train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels) print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='') print_stats(sess, batch_features, batch_labels, cost, accuracy) # Save Model saver = tf.train.Saver() save_path = saver.save(sess, save_model_path)
Training... Epoch 1, CIFAR-10 Batch 1: Loss: 2.1 Valid Accuracy: 0.31 Epoch 1, CIFAR-10 Batch 2: Loss: 1.8 Valid Accuracy: 0.391 Epoch 1, CIFAR-10 Batch 3: Loss: 1.64 Valid Accuracy: 0.411 Epoch 1, CIFAR-10 Batch 4: Loss: 1.61 Valid Accuracy: 0.438 Epoch 1, CIFAR-10 Batch 5: Loss: 1.65 Valid Accuracy: 0.467 Epoch 2, CIFAR-10 Batch 1: Loss: 1.74 Valid Accuracy: 0.468 Epoch 2, CIFAR-10 Batch 2: Loss: 1.39 Valid Accuracy: 0.491 Epoch 2, CIFAR-10 Batch 3: Loss: 1.29 Valid Accuracy: 0.499 Epoch 2, CIFAR-10 Batch 4: Loss: 1.33 Valid Accuracy: 0.514 Epoch 2, CIFAR-10 Batch 5: Loss: 1.49 Valid Accuracy: 0.525 Epoch 3, CIFAR-10 Batch 1: Loss: 1.55 Valid Accuracy: 0.517 Epoch 3, CIFAR-10 Batch 2: Loss: 1.2 Valid Accuracy: 0.529 Epoch 3, CIFAR-10 Batch 3: Loss: 1.15 Valid Accuracy: 0.531 Epoch 3, CIFAR-10 Batch 4: Loss: 1.27 Valid Accuracy: 0.533 Epoch 3, CIFAR-10 Batch 5: Loss: 1.38 Valid Accuracy: 0.55 Epoch 4, CIFAR-10 Batch 1: Loss: 1.41 Valid Accuracy: 0.545 Epoch 4, CIFAR-10 Batch 2: Loss: 1.09 Valid Accuracy: 0.556 Epoch 4, CIFAR-10 Batch 3: Loss: 1.09 Valid Accuracy: 0.546 Epoch 4, CIFAR-10 Batch 4: Loss: 1.15 Valid Accuracy: 0.556 Epoch 4, CIFAR-10 Batch 5: Loss: 1.31 Valid Accuracy: 0.556 Epoch 5, CIFAR-10 Batch 1: Loss: 1.28 Valid Accuracy: 0.558 Epoch 5, CIFAR-10 Batch 2: Loss: 1.03 Valid Accuracy: 0.566 Epoch 5, CIFAR-10 Batch 3: Loss: 0.96 Valid Accuracy: 0.565 Epoch 5, CIFAR-10 Batch 4: Loss: 1.08 Valid Accuracy: 0.566 Epoch 5, CIFAR-10 Batch 5: Loss: 1.23 Valid Accuracy: 0.581 Epoch 6, CIFAR-10 Batch 1: Loss: 1.18 Valid Accuracy: 0.576 Epoch 6, CIFAR-10 Batch 2: Loss: 0.946 Valid Accuracy: 0.574 Epoch 6, CIFAR-10 Batch 3: Loss: 0.943 Valid Accuracy: 0.576 Epoch 6, CIFAR-10 Batch 4: Loss: 0.993 Valid Accuracy: 0.581 Epoch 6, CIFAR-10 Batch 5: Loss: 1.14 Valid Accuracy: 0.581 Epoch 7, CIFAR-10 Batch 1: Loss: 1.15 Valid Accuracy: 0.587 Epoch 7, CIFAR-10 Batch 2: Loss: 0.884 Valid Accuracy: 0.59 Epoch 7, CIFAR-10 Batch 3: Loss: 0.855 Valid Accuracy: 0.589 Epoch 7, CIFAR-10 Batch 4: Loss: 0.966 Valid Accuracy: 0.586 Epoch 7, CIFAR-10 Batch 5: Loss: 1.07 Valid Accuracy: 0.596 Epoch 8, CIFAR-10 Batch 1: Loss: 1.02 Valid Accuracy: 0.596 Epoch 8, CIFAR-10 Batch 2: Loss: 0.827 Valid Accuracy: 0.597 Epoch 8, CIFAR-10 Batch 3: Loss: 0.826 Valid Accuracy: 0.593 Epoch 8, CIFAR-10 Batch 4: Loss: 0.896 Valid Accuracy: 0.594 Epoch 8, CIFAR-10 Batch 5: Loss: 1.02 Valid Accuracy: 0.596 Epoch 9, CIFAR-10 Batch 1: Loss: 0.968 Valid Accuracy: 0.598 Epoch 9, CIFAR-10 Batch 2: Loss: 0.782 Valid Accuracy: 0.601 Epoch 9, CIFAR-10 Batch 3: Loss: 0.746 Valid Accuracy: 0.602 Epoch 9, CIFAR-10 Batch 4: Loss: 0.849 Valid Accuracy: 0.596 Epoch 9, CIFAR-10 Batch 5: Loss: 0.958 Valid Accuracy: 0.597 Epoch 10, CIFAR-10 Batch 1: Loss: 0.984 Valid Accuracy: 0.591 Epoch 10, CIFAR-10 Batch 2: Loss: 0.736 Valid Accuracy: 0.603 Epoch 10, CIFAR-10 Batch 3: Loss: 0.672 Valid Accuracy: 0.608 Epoch 10, CIFAR-10 Batch 4: Loss: 0.794 Valid Accuracy: 0.601 Epoch 10, CIFAR-10 Batch 5: Loss: 0.92 Valid Accuracy: 0.607 Epoch 11, CIFAR-10 Batch 1: Loss: 0.895 Valid Accuracy: 0.606 Epoch 11, CIFAR-10 Batch 2: Loss: 0.711 Valid Accuracy: 0.61 Epoch 11, CIFAR-10 Batch 3: Loss: 0.663 Valid Accuracy: 0.61 Epoch 11, CIFAR-10 Batch 4: Loss: 0.762 Valid Accuracy: 0.608 Epoch 11, CIFAR-10 Batch 5: Loss: 0.887 Valid Accuracy: 0.603 Epoch 12, CIFAR-10 Batch 1: Loss: 0.902 Valid Accuracy: 0.613 Epoch 12, CIFAR-10 Batch 2: Loss: 0.671 Valid Accuracy: 0.606 Epoch 12, CIFAR-10 Batch 3: Loss: 0.568 Valid Accuracy: 0.615 Epoch 12, CIFAR-10 Batch 4: 
Loss: 0.736 Valid Accuracy: 0.604 Epoch 12, CIFAR-10 Batch 5: Loss: 0.855 Valid Accuracy: 0.611 Epoch 13, CIFAR-10 Batch 1: Loss: 0.899 Valid Accuracy: 0.615 Epoch 13, CIFAR-10 Batch 2: Loss: 0.648 Valid Accuracy: 0.616 Epoch 13, CIFAR-10 Batch 3: Loss: 0.587 Valid Accuracy: 0.614 Epoch 13, CIFAR-10 Batch 4: Loss: 0.694 Valid Accuracy: 0.613 Epoch 13, CIFAR-10 Batch 5: Loss: 0.818 Valid Accuracy: 0.611 Epoch 14, CIFAR-10 Batch 1: Loss: 0.865 Valid Accuracy: 0.619 Epoch 14, CIFAR-10 Batch 2: Loss: 0.581 Valid Accuracy: 0.618 Epoch 14, CIFAR-10 Batch 3: Loss: 0.565 Valid Accuracy: 0.618 Epoch 14, CIFAR-10 Batch 4: Loss: 0.651 Valid Accuracy: 0.615 Epoch 14, CIFAR-10 Batch 5: Loss: 0.761 Valid Accuracy: 0.621 Epoch 15, CIFAR-10 Batch 1: Loss: 0.852 Valid Accuracy: 0.62 Epoch 15, CIFAR-10 Batch 2: Loss: 0.575 Valid Accuracy: 0.624 Epoch 15, CIFAR-10 Batch 3: Loss: 0.533 Valid Accuracy: 0.619 Epoch 15, CIFAR-10 Batch 4: Loss: 0.604 Valid Accuracy: 0.608 Epoch 15, CIFAR-10 Batch 5: Loss: 0.783 Valid Accuracy: 0.605 Epoch 16, CIFAR-10 Batch 1: Loss: 0.826 Valid Accuracy: 0.618 Epoch 16, CIFAR-10 Batch 2: Loss: 0.607 Valid Accuracy: 0.623 Epoch 16, CIFAR-10 Batch 3: Loss: 0.536 Valid Accuracy: 0.609 Epoch 16, CIFAR-10 Batch 4: Loss: 0.546 Valid Accuracy: 0.617 Epoch 16, CIFAR-10 Batch 5: Loss: 0.72 Valid Accuracy: 0.608 Epoch 17, CIFAR-10 Batch 1: Loss: 0.791 Valid Accuracy: 0.622 Epoch 17, CIFAR-10 Batch 2: Loss: 0.538 Valid Accuracy: 0.621 Epoch 17, CIFAR-10 Batch 3: Loss: 0.486 Valid Accuracy: 0.621 Epoch 17, CIFAR-10 Batch 4: Loss: 0.571 Valid Accuracy: 0.623 Epoch 17, CIFAR-10 Batch 5: Loss: 0.68 Valid Accuracy: 0.616 Epoch 18, CIFAR-10 Batch 1: Loss: 0.783 Valid Accuracy: 0.621 Epoch 18, CIFAR-10 Batch 2: Loss: 0.536 Valid Accuracy: 0.624 Epoch 18, CIFAR-10 Batch 3: Loss: 0.462 Valid Accuracy: 0.625 Epoch 18, CIFAR-10 Batch 4: Loss: 0.555 Valid Accuracy: 0.622 Epoch 18, CIFAR-10 Batch 5: Loss: 0.642 Valid Accuracy: 0.617 Epoch 19, CIFAR-10 Batch 1: Loss: 0.728 Valid Accuracy: 0.622 Epoch 19, CIFAR-10 Batch 2: Loss: 0.507 Valid Accuracy: 0.625 Epoch 19, CIFAR-10 Batch 3: Loss: 0.442 Valid Accuracy: 0.631 Epoch 19, CIFAR-10 Batch 4: Loss: 0.495 Valid Accuracy: 0.624 Epoch 19, CIFAR-10 Batch 5: Loss: 0.606 Valid Accuracy: 0.62 Epoch 20, CIFAR-10 Batch 1: Loss: 0.726 Valid Accuracy: 0.624 Epoch 20, CIFAR-10 Batch 2: Loss: 0.493 Valid Accuracy: 0.626 Epoch 20, CIFAR-10 Batch 3: Loss: 0.466 Valid Accuracy: 0.623 Epoch 20, CIFAR-10 Batch 4: Loss: 0.487 Valid Accuracy: 0.616 Epoch 20, CIFAR-10 Batch 5: Loss: 0.592 Valid Accuracy: 0.622
MIT
image-classification/dlnd_image_classification.ipynb
cfcdavidchan/Deep-Learning-Foundation-Nanodegree
Checkpoint: The model has been saved to disk. Test Model: Test your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.
""" DON'T MODIFY ANYTHING IN THIS CELL """ %matplotlib inline %config InlineBackend.figure_format = 'retina' import tensorflow as tf import pickle import helper import random # Set batch size if not already set try: if batch_size: pass except NameError: batch_size = 64 save_model_path = './image_classification' n_samples = 4 top_n_predictions = 3 def test_model(): """ Test the saved model against the test dataset """ test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb')) loaded_graph = tf.Graph() with tf.Session(graph=loaded_graph) as sess: # Load model loader = tf.train.import_meta_graph(save_model_path + '.meta') loader.restore(sess, save_model_path) # Get Tensors from loaded model loaded_x = loaded_graph.get_tensor_by_name('x:0') loaded_y = loaded_graph.get_tensor_by_name('y:0') loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0') loaded_logits = loaded_graph.get_tensor_by_name('logits:0') loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0') # Get accuracy in batches for memory limitations test_batch_acc_total = 0 test_batch_count = 0 for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size): test_batch_acc_total += sess.run( loaded_acc, feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0}) test_batch_count += 1 print('Testing Accuracy: {}\n'.format(test_batch_acc_total/test_batch_count)) # Print Random Samples random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples))) random_test_predictions = sess.run( tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions), feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0}) helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions) test_model()
INFO:tensorflow:Restoring parameters from ./image_classification Testing Accuracy: 0.6132318037974683
MIT
image-classification/dlnd_image_classification.ipynb
cfcdavidchan/Deep-Learning-Foundation-Nanodegree
Learn the standard library to at least know what's there. `itertools` and `collections` have very useful features: - chain - product - permutations - combinations - izip
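A few quick illustrations of the itertools features listed above (written for Python 3, where `izip` has become the lazy built-in `zip`):

```python
from itertools import chain, combinations, permutations, product

# chain: iterate over several iterables as one stream
print(list(chain([1, 2], "ab")))            # [1, 2, 'a', 'b']

# product: cartesian product, handy for parameter grids
print(list(product([0, 1], "xy")))          # [(0, 'x'), (0, 'y'), (1, 'x'), (1, 'y')]

# permutations / combinations: orderings vs. subsets
print(list(permutations("abc", 2)))         # [('a','b'), ('a','c'), ('b','a'), ...]
print(list(combinations("abc", 2)))         # [('a','b'), ('a','c'), ('b','c')]

# izip lived in itertools in Python 2; in Python 3 the built-in zip is lazy
print(list(zip([1, 2, 3], "abc")))          # [(1, 'a'), (2, 'b'), (3, 'c')]
```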
%matplotlib inline %config InlineBackend.figure_format='retina' import matplotlib.pyplot as plt import seaborn as sns sns.set_context('talk') sns.set_style('darkgrid') plt.rcParams['figure.figsize'] = 12, 8 # plotsize import numpy as np import pandas as pd # plot residuals from itertools import groupby # NOT REGULAR GROUPBY from itertools import product, cycle, izip import re # regular expressions
_____no_output_____
MIT
notebooks/09-Extras.ipynb
jbwhit/WSP-312-Tips-and-Tricks
Challenge (Easy): Write a function to return the total number of digits in a given string, and those digits.
test_string = """de3456yghj87654edfghuio908ujhgyuY^YHJUi8ytgh gtyujnh y7""" count = 0 digits = [] for x in test_string: try: int(x) count += 1 digits.append(int(x)) except: pass print("Number of digits:", str(count) + ";") print("They are:", digits)
_____no_output_____
MIT
notebooks/09-Extras.ipynb
jbwhit/WSP-312-Tips-and-Tricks
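The try/except solution above works; as a hedged alternative sketch (not the notebook's original answer), the same result can be had by filtering with `str.isdigit`:

```python
def count_digits(text):
    # Keep only characters that are decimal digits and convert them to ints.
    digits = [int(ch) for ch in text if ch.isdigit()]
    return len(digits), digits

count, digits = count_digits("de3456yghj87654edfghuio908ujhgyuY^YHJUi8ytgh gtyujnh y7")
print("Number of digits:", count)
print("They are:", digits)
```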
Challenge (Tricky): Same as above -- but where consecutive digits are available, return them as a single number. Ex. "2a78b123" returns "3 numbers, they are: 2, 78, 123"
test_string groups = [] uniquekeys = [] for k, g in groupby(test_string, lambda x: x.isdigit()): groups.append(list(g)) uniquekeys.append(k) print(groups) print(uniquekeys) numbers = [] for x, y in izip(groups, uniquekeys): if y: numbers.append(int(''.join([j for j in x]))) print("Number:", np.sum(uniquekeys)) print("They are:", numbers) # In one cell def solution_2(test_string): groups = [] uniquekeys = [] for k, g in groupby(test_string, lambda x: x.isdigit()): if k: groups.append(int(''.join([j for j in g]))) return len(groups), groups print(solution_2(test_string))
_____no_output_____
MIT
notebooks/09-Extras.ipynb
jbwhit/WSP-312-Tips-and-Tricks
Challenge (Tricky): Same as above, but do it a second way.
def solution_3(test_string): """Regular expressions can be a very powerful and useful tool.""" groups = [int(j) for j in re.findall(r'\d+', test_string)] return len(groups), groups solution_3(test_string)
_____no_output_____
MIT
notebooks/09-Extras.ipynb
jbwhit/WSP-312-Tips-and-Tricks
Challenge (Hard): Same as above, but for all valid numbers expressed in digits, commas, and decimal points. Ex. "a23.42dx9,331nm87,55" -> 4; 23.42, 9331, 87, 55. Left as an exercise :) Don't spend much time on this one (a hedged sketch follows the generator example below). Generators
def ex1(num): """A stupid example generator to prove a point.""" while num > 1: num += 1 yield num hey = ex1(5) hey.next() hey.next()
_____no_output_____
MIT
notebooks/09-Extras.ipynb
jbwhit/WSP-312-Tips-and-Tricks
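Coming back to the Challenge (Hard) above, one possible sketch uses a single regular expression, under the assumption that commas only count as thousands separators of exactly three digits:

```python
import re

def find_numbers(text):
    # \d+            leading digit run
    # (?:,\d{3})*    optional comma-separated groups of exactly three digits
    # (?:\.\d+)?     optional decimal part
    matches = re.findall(r'\d+(?:,\d{3})*(?:\.\d+)?', text)
    numbers = [float(m.replace(',', '')) if '.' in m else int(m.replace(',', ''))
               for m in matches]
    return len(numbers), numbers

print(find_numbers("a23.42dx9,331nm87,55"))   # (4, [23.42, 9331, 87, 55])
```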
Gotchas: Modifying a dictionary's keys while iterating over it. ```python for key in dictionary: if key == "bat": del dictionary[key]``` If you have to do something like this, iterate over a copy of the keys instead: ```python list_of_keys = dictionary.keys() for key in list_of_keys: if key == "bat": del dictionary[key]```
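A runnable demonstration of the gotcha above (a small sketch; with this insertion order, CPython raises `RuntimeError: dictionary changed size during iteration` for the broken version):

```python
animals = {"bat": 1, "cat": 2, "rat": 3}

# Broken: mutating the dict while iterating over it raises RuntimeError.
try:
    for key in animals:
        if key == "bat":
            del animals[key]
except RuntimeError as err:
    print("broken version:", err)

# Safe: iterate over a snapshot of the keys instead.
animals = {"bat": 1, "cat": 2, "rat": 3}
for key in list(animals):
    if key == "bat":
        del animals[key]
print("safe version:", animals)   # {'cat': 2, 'rat': 3}
```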
even_better_name = 5 even_better_name = 5 even_better_name = 5 even_better_name = 5 even_better_name = 5 even_better_name = 5
_____no_output_____
MIT
notebooks/09-Extras.ipynb
jbwhit/WSP-312-Tips-and-Tricks
1. Store .csv files into dataframes individually
# import .csv into dataframe of big cities health data # organized the big cities health data in the excel sheet big_cities_csv_file = "data/Big_Cities_Health_Data_Inventory.csv" big_cities_health_df = pd.read_csv(big_cities_csv_file) big_cities_health_df.head() # import 2010 hospital beds by ownership types; test results in 1,000 populations # added a year for each hospital dataset manually in the excel worksheet, since these are pretty small data hospital_beds_2010_csv_file = "data/2010_hospital_1000population_beds_ownership_type.csv" df_2010 = pd.read_csv(hospital_beds_2010_csv_file) df_2010.head() # import 2011 hospital beds by ownership types; test results in 1,000 populations # added a year for each hospital dataset manually in the excel worksheet, since these are pretty small data hospital_beds_2011_csv_file = "data/2011_hospital_1000population_beds_ownership_type.csv" df_2011 = pd.read_csv(hospital_beds_2011_csv_file) df_2011.head() # import 2012 hospital beds by ownership types; test results in 1,000 populations # added a year for each hospital dataset manually in the excel worksheet, since these are pretty small data hospital_beds_2012_csv_file = "data/2012_hospital_1000population_beds_ownership_type.csv" df_2012 = pd.read_csv(hospital_beds_2012_csv_file) df_2012.head() # import 2013 hospital beds by ownership types; test results in 1,000 populations # added a year for each hospital dataset manually in the excel worksheet, since these are pretty small data hospital_beds_2013_csv_file = "data/2013_hospital_1000population_beds_ownership_type.csv" df_2013 = pd.read_csv(hospital_beds_2013_csv_file) df_2013.head() # import 2014 hospital beds by ownership types; test results in 1,000 populations # added a year for each hospital dataset manually in the excel worksheet, since these are pretty small data hospital_beds_2014_csv_file = "data/2014_hospital_1000population_beds_ownership_type.csv" df_2014 = pd.read_csv(hospital_beds_2014_csv_file) df_2014.head() # import 2015 hospital beds by ownership types; test results in 1,000 populations # added a year for each hospital dataset manually in the excel worksheet, since these are pretty small data hospital_beds_2015_csv_file = "data/2015_hospital_1000population_beds_ownership_type.csv" df_2015 = pd.read_csv(hospital_beds_2015_csv_file) df_2015.head() # check the rows of the hospital data for year 2010, 2011, 2012, 2013, 2014, 2015 dataframe, they should have 51 rows for each state. df_2015.shape df_2014.shape df_2013.shape df_2012.shape df_2011.shape df_2010.shape # combine all hospital data by years hosp_df=pd.concat([df_2010,df_2011,df_2012,df_2013,df_2014,df_2015], axis=0) hosp_df.head() # check the rows again to be make sure we get the combined data of the year and state of hospital data. hosp_df.shape
_____no_output_____
MIT
ETL_Cities_Health_Data_Project.ipynb
erikayi/Project-2-ETL-Cities-Health-Data
2. Extract the data sources A. cleaning the Big Cities Health Data for year and state; plus, any information that needs to be cleaned out before loading into the sql database
# for references of created dataframe for big cities health data: # big_cities_csv_file = "data/Big_Cities_Health_Data_Inventory.csv" # big_cities_health_df = pd.read_csv(big_cities_csv_file) # big_cities_health_df.head() # Extract information of the cities data by year, category, indicator, gender, race/ethnicity, value, place health_cities = big_cities_health_df[['Year', 'Indicator Category', 'Indicator', 'Gender', 'Race/ Ethnicity', 'Value', 'Place']] # health_cities.head() # rename the big health data in cities and correct them in relation to the information they given new_health_cities = health_cities.rename(columns={'Year':'year', 'Indicator Category':'category', 'Indicator':'cause_of_death', 'Gender':'gender', 'Race/ Ethnicity':'race_ethnicity', 'Value':'death_rate', 'Place':'city_state'}) new_health_cities.head() # split the city and the state new_health_cities[['city','state']] = new_health_cities.city_state.str.split(expand=True, pat=",") new_health_cities.head() # drop the city_state and city columns for cleaner look new_health_cities_df = new_health_cities.drop(columns=['city_state', 'city']) new_health_cities_df.head() ordered_health_data = new_health_cities_df.sort_values("year", ascending=True) ordered_health_data.head() ordered_health_data.shape # split the year min.year and max.year of the data for the year included '-' # previous city health dataframe working on # new_health_cities[['year']] = new_health_cities.year.str.split(expand=True, pat="-") # new city dataframe we worked on ordered_health_data[['Year1','Year2']] = ordered_health_data['year'].str.split('-',expand=True) ordered_health_data.head() # end of the city data set ordered_health_data.tail() # check the city data rows to make sure we don't lose any information ordered_health_data.shape # store the extracted years in previous steps into max year ordered_health_data['Max_Year'] = np.where((ordered_health_data['Year2'].isnull()), ordered_health_data['Year1'], ordered_health_data['Year2']) ordered_health_data.head() ordered_health_data.tail() # drop the unnecessary year data in the city dataframe new_ordered_health_data = ordered_health_data.drop(columns=['year', 'Year1', 'Year2']) new_ordered_health_data.head() # check rows once again new_ordered_health_data.shape # rename the max year into year to be constant with other dataframe which it will be joined together at the end city_data = new_ordered_health_data.rename(columns={'Max_Year':'year'}) city_data.head() city_data.tail() # sort by year in ascending order in city data sort_city_data = city_data.sort_values(by='year',ascending=True, inplace=False) sort_city_data # save this data in csv file as 'new_health_data.csv' sort_city_data.to_csv(r'D:\Github\Project-2-ETL-Cities-Health-Data\data\new_health_data.csv', index=True)
_____no_output_____
MIT
ETL_Cities_Health_Data_Project.ipynb
erikayi/Project-2-ETL-Cities-Health-Data
B. Cleaning the hospital beds data by year and state, plus any necessary cleanup, such as dropping null values
# show hospital bed rate in dataframe structure that we made earlier hosp_df.head() hosp_df.tail() # check the rows if it's matching same as earlier after joining them together hosp_df.shape # rename each columns for hospital bed data for each year and state, state local gov, non-profit, for-profit, and total new_hosp_df = hosp_df.rename(columns={'Year':'year', 'Location':'state', 'State/Local Government':'state_local_gov', 'Non-Profit':'non_profit', 'For-Profit':'profit', 'Total':'total'}) new_hosp_df.head() new_hosp_df.tail() new_hosp_df.shape # drop the null values if it exists, and they happened to have some null values after I joined them together, # so I went back to this part to fix them. hospital_df = new_hosp_df.dropna() hospital_df.head() hospital_df.tail() # as you can see here, the number of rows decreased from 306 to 253. This indicates there are some null values within the data. hospital_df.shape # make a dictionary of the state for converting state name into abbreviation # so, it will match with the other data set of cities # and so, we can join both of the tables by each state and a year, which this was our goal us_state_abbrev = { 'Alabama': 'AL', 'Alaska': 'AK', 'Arizona': 'AZ', 'Arkansas': 'AR', 'California': 'CA', 'Colorado': 'CO', 'Connecticut': 'CT', 'Delaware': 'DE', 'Florida': 'FL', 'Georgia': 'GA', 'Hawaii': 'HI', 'Idaho': 'ID', 'Illinois': 'IL', 'Indiana': 'IN', 'Iowa': 'IA', 'Kansas': 'KS', 'Kentucky': 'KY', 'Louisiana': 'LA', 'Maine': 'ME', 'Maryland': 'MD', 'Massachusetts': 'MA', 'Michigan': 'MI', 'Minnesota': 'MN', 'Mississippi': 'MS', 'Missouri': 'MO', 'Montana': 'MT', 'Nebraska': 'NE', 'Nevada': 'NV', 'New Hampshire': 'NH', 'New Jersey': 'NJ', 'New Mexico': 'NM', 'New York': 'NY', 'North Carolina': 'NC', 'North Dakota': 'ND', 'Ohio': 'OH', 'Oklahoma': 'OK', 'Oregon': 'OR', 'Pennsylvania': 'PA', 'Rhode Island': 'RI', 'South Carolina': 'SC', 'South Dakota': 'SD', 'Tennessee': 'TN', 'Texas': 'TX', 'Utah': 'UT', 'Vermont': 'VT', 'Virginia': 'VA', 'Washington': 'WA', 'West Virginia': 'WV', 'Wisconsin': 'WI', 'Wyoming': 'WY'} # replace the name of the state into abbreviation hospital_df['state'] = hospital_df['state'].map(us_state_abbrev).fillna(hospital_df['state']) hospital_df.head() hospital_df.tail() # check the rows using .shape function. # it is same as previous one. # so there is no errors on adding extra values into the dataframe table. whew! hospital_df.shape
_____no_output_____
MIT
ETL_Cities_Health_Data_Project.ipynb
erikayi/Project-2-ETL-Cities-Health-Data
C. Cleaning the data before joining
# look into the data where the columns type sort_city_data.info() # convert the year object into integers sort_city_data['year'] = sort_city_data['year'].astype(int) # check if the year turned into integer sort_city_data.info() # check hospital bed data column type hospital_df.info() # convert the column type into string for both of the data set sort_city_data['state']=sort_city_data['state'].str.strip() hospital_df['state']=hospital_df['state'].str.strip() # drop the duplicates of the city data to be make sure if they have anything repeated to avoid errors sort_city_data.drop_duplicates(keep='first') # drop any null values left if exists cleaned_city = sort_city_data.dropna() cleaned_city.head() # check the shape (rows) of the dataframe # 13512 rows matches with the cleaned dataframe on the previous part of the project cleaned_city.shape # merge the both dataframees together at 'state' and 'year' on both left and right join # save it as joined_df joined_df= pd.merge(cleaned_city,hospital_df, how='inner', left_on=['state','year'], right_on=['state','year']) joined_df.head() joined_df.tail() joined_df # check if the null values dropped joined_df.shape # save the csv as new hospital data joined_df.to_csv(r'D:\Github\Project-2-ETL-Cities-Health-Data\data\new_hospital_data.csv', index=True) # below are the work that I saved each dataframe for hospital beds data by the year. # I commented out the rest of the individual dataframe of hospital beds data by the year because # it was unnecessary to use, since we already have joined dataframe for hospital beds data # beds_2010 = hospital_beds_2010_csv_file_df[['Year','Location', 'State/Local Government', 'Non-Profit', 'For-Profit']].copy() # beds_2010.head() # # rename each labels with year 2010 # new_beds_2010 = beds_2010.rename(columns={'Year':'year', # 'Location':'state', # 'State/Local Government':'state_local_2010', # 'Non-Profit':'nonprofit_2010', # 'For-Profit':'profit_2010'}) # new_beds_2010.head() # beds_2011 = hospital_beds_2011_csv_file_df[['Year','Location', 'State/Local Government', 'Non-Profit', 'For-Profit']].copy() # # beds_2010.head() # # rename each labels with year 2011 # new_beds_2011 = beds_2011.rename(columns={'Year':'year', # 'Location':'state', # 'State/Local Government':'state_local_2011', # 'Non-Profit':'nonprofit_2011', # 'For-Profit':'profit_2011'}) # new_beds_2011.head() # beds_2012 = hospital_beds_2012_csv_file_df[['Year','Location', 'State/Local Government', 'Non-Profit', 'For-Profit']].copy() # # beds_2010.head() # # rename each labels with year 2012 # new_beds_2012 = beds_2012.rename(columns={'Year':'year', # 'Location':'state', # 'State/Local Government':'state_local_2012', # 'Non-Profit':'nonprofit_2012', # 'For-Profit':'profit_2012'}) # new_beds_2012.head() # beds_2013 = hospital_beds_2013_csv_file_df[['Year','Location', 'State/Local Government', 'Non-Profit', 'For-Profit']].copy() # # beds_2010.head() # # rename each labels with year 2013 # new_beds_2013 = beds_2013.rename(columns={'Year':'year', # 'Location':'state', # 'State/Local Government':'state_local_2013', # 'Non-Profit':'nonprofit_2013', # 'For-Profit':'profit_2013'}) # new_beds_2013.head() # beds_2014 = hospital_beds_2014_csv_file_df[['Year','Location', 'State/Local Government', 'Non-Profit', 'For-Profit']].copy() # # beds_2010.head() # # rename each labels with year 2013 # new_beds_2014 = beds_2014.rename(columns={'Year':'year', # 'Location':'state', # 'State/Local Government':'state_local_2014', # 'Non-Profit':'nonprofit_2014', # 
'For-Profit':'profit_2014'}) # new_beds_2014.head() # beds_2015 = hospital_beds_2015_csv_file_df[['Year','Location', 'State/Local Government', 'Non-Profit', 'For-Profit']].copy() # # beds_2010.head() # # rename each labels with year 2015 # new_beds_2015 = beds_2015.rename(columns={'Year':'year', # 'Location':'state', # 'State/Local Government':'state_local_2015', # 'Non-Profit':'nonprofit_2015', # 'For-Profit':'profit_2015'}) # new_beds_2015.head()
_____no_output_____
MIT
ETL_Cities_Health_Data_Project.ipynb
erikayi/Project-2-ETL-Cities-Health-Data
Loading the extracted data into the SQL database
# import dependencies from pin import username, password # make a connection string for the database on localhost, and create engine for the database we made rds_connection_string = (f"{username}:{password}@localhost:5432/healthcities_db") engine = create_engine(f'postgresql://{rds_connection_string}')
_____no_output_____
MIT
ETL_Cities_Health_Data_Project.ipynb
erikayi/Project-2-ETL-Cities-Health-Data
Check the table names
engine.table_names()
_____no_output_____
MIT
ETL_Cities_Health_Data_Project.ipynb
erikayi/Project-2-ETL-Cities-Health-Data
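A side note, hedged on the SQLAlchemy version in use: engine.table_names() works on older releases but is deprecated in newer ones. A sketch of the inspector-based alternative, assuming the same engine object created above:

# list tables via the inspector API instead of the deprecated engine.table_names()
from sqlalchemy import inspect

inspector = inspect(engine)
print(inspector.get_table_names())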
Use Pandas to load the CSV-converted DataFrames into the SQL database
cleaned_city.to_sql(name='health_cities', con=engine, if_exists='append', index=False) cleaned_city.shape hospital_df.to_sql(name='hospital_data', con=engine, if_exists='append', index=False) hospital_df.shape # this is where I previously loaded each year's converted DataFrame into the SQL database and tried to join them there # instead of joining in a pandas dataframe. However, it kept failing with errors and needed a lot of troubleshooting, # so my colleague helped me out and suggested joining the pandas dataframes first to make life easier. # And, whew! That worked without any errors. So, think simple and fail fast! # new_beds_2010.to_sql(name='hospital_data_2010', con=engine, if_exists='append', index=False) # new_beds_2011.to_sql(name='hospital_data_2011', con=engine, if_exists='append', index=False) # new_beds_2012.to_sql(name='hospital_data_2012', con=engine, if_exists='append', index=False) # new_beds_2013.to_sql(name='hospital_data_2013', con=engine, if_exists='append', index=False) # new_beds_2014.to_sql(name='hospital_data_2014', con=engine, if_exists='append', index=False) # new_beds_2015.to_sql(name='hospital_data_2015', con=engine, if_exists='append', index=False)
_____no_output_____
MIT
ETL_Cities_Health_Data_Project.ipynb
erikayi/Project-2-ETL-Cities-Health-Data
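If the dataframes were much larger, the same load could be done in batches. This is only an illustrative sketch, shown as an alternative to the plain to_sql call above rather than something to run in addition to it; the chunksize and method values are assumptions, not part of the original project:

# write the city data in batches of 1000 rows using multi-row INSERT statements
cleaned_city.to_sql(
    name='health_cities',
    con=engine,
    if_exists='append',
    index=False,
    chunksize=1000,   # rows per batch
    method='multi',   # batch rows into multi-value INSERTs
)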
Confirm that the data has been loaded into the SQL database successfully by querying the health_cities and hospital_data tables.
cities_df = pd.read_sql_query('SELECT * FROM health_cities', con=engine) cities_df.head() cities_df.shape cities_df.tail() cleaned_hospital_df = pd.read_sql_query('SELECT * FROM hospital_data', con=engine) cleaned_hospital_df.head() cleaned_hospital_df.tail() cleaned_hospital_df.shape # These are the individual queries I previously used to load each year's hospital beds data from its own table. # However, these queries made life more complicated, with plenty of errors and troubleshooting. # I commented them out just to show where I made a mistake. # hospitals_2010 = pd.read_sql_query('SELECT * FROM hospital_data_2010', con=engine) # hospitals_2010.head() # hospitals_2011 = pd.read_sql_query('SELECT * FROM hospital_data_2011', con=engine) # hospitals_2011.head() # hospitals_2012 = pd.read_sql_query('SELECT * FROM hospital_data_2012', con=engine) # hospitals_2012.head() # hospitals_2013 = pd.read_sql_query('SELECT * FROM hospital_data_2013', con=engine) # hospitals_2013.head() # hospitals_2014 = pd.read_sql_query('SELECT * FROM hospital_data_2014', con=engine) # hospitals_2014.head() # hospitals_2015 = pd.read_sql_query('SELECT * FROM hospital_data_2015', con=engine) # hospitals_2015.head()
_____no_output_____
MIT
ETL_Cities_Health_Data_Project.ipynb
erikayi/Project-2-ETL-Cities-Health-Data
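To double-check the loads on the database side (a minimal sketch, assuming the same engine and table names as above), we can ask PostgreSQL for the row counts directly and compare them with the .shape results:

# row counts straight from the database, for comparison with the pandas shapes
import pandas as pd

print(pd.read_sql_query('SELECT COUNT(*) AS row_count FROM health_cities', con=engine))
print(pd.read_sql_query('SELECT COUNT(*) AS row_count FROM hospital_data', con=engine))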
Merging the data together using the SQL database
merged_health_data = pd.read_sql_query('SELECT hc.category, hc.cause_of_death, hc.gender, hc.race_ethnicity, hc.death_rate, ho.state, hc.year, ho.state_local_gov, ho.non_profit, ho.profit, ho.total FROM health_cities hc INNER JOIN hospital_data ho ON hc.year = ho.year AND hc.state = ho.state ORDER BY hc.year ASC', con=engine) merged_health_data.head() merged_health_data.tail() # always check the shape for the correct number of rows # I ran into a problem where I ended up with more rows than expected after joining these two tables, # and my colleague suggested checking the row count to make sure I am inserting and joining the tables correctly. merged_health_data.shape
_____no_output_____
MIT
ETL_Cities_Health_Data_Project.ipynb
erikayi/Project-2-ETL-Cities-Health-Data
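As one more cross-check (a sketch, not part of the original workflow, assuming the cities_df, cleaned_hospital_df, and merged_health_data frames from the cells above), the same inner join can be repeated on the pandas side and the row counts compared:

# repeat the join in pandas and compare row counts with the SQL result
pandas_join = cities_df.merge(cleaned_hospital_df, on=['state', 'year'], how='inner')
print(len(pandas_join), len(merged_health_data))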
Sources we used
# Data Source 1- Health Status across US urban cities # https://data.world/health/big-cities-health # Data Source 2 - Hospital Data # https://www.kff.org/other/state-indicator/beds-by-ownership/?currentTimeframe=10&sortModel=%7B%22colId%22:%22Location%22,%22sort%22:%22asc%22%7D
_____no_output_____
MIT
ETL_Cities_Health_Data_Project.ipynb
erikayi/Project-2-ETL-Cities-Health-Data
1. Inspecting transfusion.data file
Blood transfusion saves lives - from replacing lost blood during major surgery or a serious injury to treating various illnesses and blood disorders. Ensuring that there's enough blood in supply whenever needed is a serious challenge for health professionals. According to WebMD, "about 5 million Americans need a blood transfusion every year".
Our dataset is from a mobile blood donation vehicle in Taiwan. The Blood Transfusion Service Center drives to different universities and collects blood as part of a blood drive. We want to predict whether or not a donor will give blood the next time the vehicle comes to campus.
The data is stored in datasets/transfusion.data and it is structured according to the RFMTC marketing model (a variation of RFM). We'll explore what that means later in this notebook. First, let's inspect the data.
# Print out the first 5 lines from the transfusion.data file !head -n 5 datasets/transfusion.data
Recency (months),Frequency (times),Monetary (c.c. blood),Time (months),"whether he/she donated blood in March 2007" 2 ,50,12500,98 ,1 0 ,13,3250,28 ,1 1 ,16,4000,35 ,1 2 ,20,5000,45 ,1
MIT
DataCamp/Give Life: Predict Blood Donations/notebook.ipynb
lukzmu/data-courses
2. Loading the blood donations data
We now know that we are working with a typical CSV file (i.e., the delimiter is ',', etc.). We proceed to loading the data into memory.
# Import pandas import pandas as pd # Read in dataset transfusion = pd.read_csv('datasets/transfusion.data') # Print out the first rows of our dataset transfusion.head()
_____no_output_____
MIT
DataCamp/Give Life: Predict Blood Donations/notebook.ipynb
lukzmu/data-courses
3. Inspecting transfusion DataFrame
Let's briefly return to our discussion of the RFM model. RFM stands for Recency, Frequency and Monetary Value and it is commonly used in marketing for identifying your best customers. In our case, our customers are blood donors.
RFMTC is a variation of the RFM model. Below is a description of what each column means in our dataset:
R (Recency - months since the last donation)
F (Frequency - total number of donations)
M (Monetary - total blood donated in c.c.)
T (Time - months since the first donation)
a binary variable representing whether he/she donated blood in March 2007 (1 stands for donating blood; 0 stands for not donating blood)
It looks like every column in our DataFrame has the numeric type, which is exactly what we want when building a machine learning model. Let's verify our hypothesis.
# Print a concise summary of transfusion DataFrame transfusion.info()
<class 'pandas.core.frame.DataFrame'> RangeIndex: 748 entries, 0 to 747 Data columns (total 5 columns): Recency (months) 748 non-null int64 Frequency (times) 748 non-null int64 Monetary (c.c. blood) 748 non-null int64 Time (months) 748 non-null int64 whether he/she donated blood in March 2007 748 non-null int64 dtypes: int64(5) memory usage: 29.3 KB
MIT
DataCamp/Give Life: Predict Blood Donations/notebook.ipynb
lukzmu/data-courses
4. Creating target column
We are aiming to predict the value in the 'whether he/she donated blood in March 2007' column. Let's rename it to target so that it's more convenient to work with.
# Rename target column as 'target' for brevity transfusion.rename( columns={'whether he/she donated blood in March 2007': 'target'}, inplace=True ) # Print out the first 2 rows transfusion.head(2)
_____no_output_____
MIT
DataCamp/Give Life: Predict Blood Donations/notebook.ipynb
lukzmu/data-courses
5. Checking target incidence
We want to predict whether or not the same donor will give blood the next time the vehicle comes to campus. The model for this is a binary classifier, meaning that there are only 2 possible outcomes:
0 - the donor will not give blood
1 - the donor will give blood
Target incidence is defined as the number of cases of each individual target value in a dataset. That is, how many 0s in the target column compared to how many 1s? Target incidence gives us an idea of how balanced (or imbalanced) our dataset is.
# Print target incidence proportions, rounding output to 3 decimal places transfusion.target.value_counts(normalize=True).round(3)
_____no_output_____
MIT
DataCamp/Give Life: Predict Blood Donations/notebook.ipynb
lukzmu/data-courses
6. Splitting transfusion into train and test datasets
We'll now use the train_test_split() method to split the transfusion DataFrame.
Target incidence informed us that in our dataset 0s appear 76% of the time. We want to keep the same structure in the train and test datasets, i.e., both datasets must have a 0-target incidence of 76%. This is very easy to do using the train_test_split() method from the scikit-learn library - all we need to do is specify the stratify parameter. In our case, we'll stratify on the target column.
# Import train_test_split method from sklearn.model_selection import train_test_split # Split transfusion DataFrame into # X_train, X_test, y_train and y_test datasets, # stratifying on the `target` column X_train, X_test, y_train, y_test = train_test_split( transfusion.drop(columns='target'), transfusion.target, test_size=0.25, random_state=42, stratify=transfusion['target'], ) # Print out the first 2 rows of X_train X_train.head(2)
_____no_output_____
MIT
DataCamp/Give Life: Predict Blood Donations/notebook.ipynb
lukzmu/data-courses
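Since the whole point of stratifying was to preserve the 76/24 split, a quick check (a small sketch using the y_train and y_test objects from the cell above) confirms that both pieces kept roughly the same target incidence:

# target incidence in the train and test splits should both be close to 0.76 / 0.24
print(y_train.value_counts(normalize=True).round(3))
print(y_test.value_counts(normalize=True).round(3))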
7. Selecting model using TPOT
TPOT is a Python Automated Machine Learning tool that optimizes machine learning pipelines using genetic programming.
TPOT will automatically explore hundreds of possible pipelines to find the best one for our dataset. Note, the outcome of this search will be a scikit-learn pipeline, meaning it will include any pre-processing steps as well as the model.
We are using TPOT to help us zero in on one model that we can then explore and optimize further.
# Import TPOTClassifier and roc_auc_score from tpot import TPOTClassifier from sklearn.metrics import roc_auc_score # Instantiate TPOTClassifier tpot = TPOTClassifier( generations=5, population_size=20, verbosity=2, scoring='roc_auc', random_state=42, disable_update_check=True, config_dict='TPOT light' ) tpot.fit(X_train, y_train) # AUC score for tpot model tpot_auc_score = roc_auc_score(y_test, tpot.predict_proba(X_test)[:, 1]) print(f'\nAUC score: {tpot_auc_score:.4f}') # Print best pipeline steps print('\nBest pipeline steps:', end='\n') for idx, (name, transform) in enumerate(tpot.fitted_pipeline_.steps, start=1): # Print idx and transform print(f'{idx}. {transform}')
_____no_output_____
MIT
DataCamp/Give Life: Predict Blood Donations/notebook.ipynb
lukzmu/data-courses
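Because the outcome of the TPOT search is a scikit-learn pipeline, it can also be exported as a standalone script for later reuse. This is an optional sketch, assuming the fitted tpot object above; the filename is just an example:

# write the best pipeline found by TPOT to a Python file
tpot.export('tpot_transfusion_pipeline.py')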
8. Checking the variance
TPOT picked LogisticRegression as the best model for our dataset with no pre-processing steps, giving us the AUC score of 0.7850. This is a great starting point. Let's see if we can make it better.
One of the assumptions for linear regression models is that the data and the features we are giving it are related in a linear fashion, or can be measured with a linear distance metric. If a feature in our dataset has a high variance that's an order of magnitude or more greater than the other features, this could impact the model's ability to learn from other features in the dataset.
Correcting for high variance is called normalization. It is one of the possible transformations you do before training a model. Let's check the variance to see if such a transformation is needed.
# X_train's variance, rounding the output to 3 decimal places X_train.var().round(3)
_____no_output_____
MIT
DataCamp/Give Life: Predict Blood Donations/notebook.ipynb
lukzmu/data-courses
9. Log normalization
Monetary (c.c. blood)'s variance is very high in comparison to any other column in the dataset. This means that, unless accounted for, this feature may get more weight by the model (i.e., be seen as more important) than any other feature.
One way to correct for high variance is to use log normalization.
# Import numpy import numpy as np # Copy X_train and X_test into X_train_normed and X_test_normed X_train_normed, X_test_normed = X_train.copy(), X_test.copy() # Specify which column to normalize col_to_normalize = 'Monetary (c.c. blood)' # Log normalization for df_ in [X_train_normed, X_test_normed]: # Add log normalized column df_['monetary_log'] = np.log(df_[col_to_normalize]) # Drop the original column df_.drop(columns=[col_to_normalize], inplace=True) # Check the variance for X_train_normed X_train_normed.var().round(3)
_____no_output_____
MIT
DataCamp/Give Life: Predict Blood Donations/notebook.ipynb
lukzmu/data-courses
10. Training the logistic regression model
The variance looks much better now. Notice that now Time (months) has the largest variance, but it's not orders of magnitude higher than the rest of the variables, so we'll leave it as is.
We are now ready to train the logistic regression model.
# Importing modules from sklearn import linear_model # Instantiate LogisticRegression logreg = linear_model.LogisticRegression( solver='liblinear', random_state=42 ) # Train the model logreg.fit(X_train_normed, y_train) # AUC score for tpot model logreg_auc_score = roc_auc_score(y_test, logreg.predict_proba(X_test_normed)[:, 1]) print(f'\nAUC score: {logreg_auc_score:.4f}')
AUC score: 0.7891
MIT
DataCamp/Give Life: Predict Blood Donations/notebook.ipynb
lukzmu/data-courses
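Because logistic regression is interpretable, we can look at the learned coefficients directly. A minimal sketch, assuming the fitted logreg and the X_train_normed frame from the cell above:

# pair each coefficient with its feature name and sort by effect size
import pandas as pd

coefficients = pd.Series(logreg.coef_[0], index=X_train_normed.columns)
print(coefficients.sort_values(ascending=False))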
11. Conclusion
The demand for blood fluctuates throughout the year. As one prominent example, blood donations slow down during busy holiday seasons. An accurate forecast of the future supply of blood allows appropriate action to be taken ahead of time, therefore saving more lives.
In this notebook, we explored automatic model selection using TPOT, and the AUC score we got was 0.7850. This is better than simply choosing 0 all the time (the target incidence suggests that such a model would have a 76% success rate). We then log normalized our training data and improved the AUC score by about 0.5%. In the field of machine learning, even small improvements in accuracy can be important, depending on the purpose.
Another benefit of using a logistic regression model is that it is interpretable. We can analyze how much of the variance in the response variable (target) can be explained by other variables in our dataset.
# Importing itemgetter from operator import itemgetter # Sort models based on their AUC score from highest to lowest sorted( [('tpot', tpot_auc_score), ('logreg', logreg_auc_score)], key=itemgetter(1), reverse=True, )
_____no_output_____
MIT
DataCamp/Give Life: Predict Blood Donations/notebook.ipynb
lukzmu/data-courses
Introduction
In this chapter, we will use Game of Thrones as a case study to practice our newly learnt skills of network analysis.
It is surprising, right? What is the relationship between a fantasy TV show/novel and network science or Python (not dragons)?
If you haven't heard of Game of Thrones, then you must be really good at hiding. Game of Thrones is a hugely popular television series by HBO based on the (also) hugely popular book series A Song of Ice and Fire by George R.R. Martin. In this notebook, we will analyze the co-occurrence network of the characters in the Game of Thrones books. Here, two characters are considered to co-occur if their names appear within 15 words of one another in the books.
The figure below is a precursor of what we will analyse in this chapter.
![](images/got.png)
The dataset is publicly available for the 5 books at https://github.com/mathbeveridge/asoiaf. It is an interaction network, created by connecting two characters whenever their names (or nicknames) appeared within 15 words of one another in one of the books. The edge weight corresponds to the number of interactions. Blog: https://networkofthrones.wordpress.com
from nams import load_data as cf books = cf.load_game_of_thrones_data()
_____no_output_____
MIT
notebooks/05-casestudies/01-gameofthrones.ipynb
khanin-th/Network-Analysis-Made-Simple
The resulting DataFrame books has 5 columns: Source, Target, Type, weight, and book. Source and Target are the two nodes that are linked by an edge. As we know, a network can have directed or undirected edges, and in this network all the edges are undirected. The weight attribute of every edge tells us the number of interactions the characters have had over the book, and the book column tells us the book number.
Let's have a look at the data.
# We also add this weight_inv to our dataset. # Why? we will discuss it in a later section. books['weight_inv'] = 1/books.weight books.head()
_____no_output_____
MIT
notebooks/05-casestudies/01-gameofthrones.ipynb
khanin-th/Network-Analysis-Made-Simple
From the above data we can see that the characters Addam Marbrand and Tywin Lannister have interacted 6 times in the first book.
We can investigate this data by using the pandas DataFrame. Let's find all the interactions of Robb Stark in the third book.
robbstark = ( books.query("book == 3") .query("Source == 'Robb-Stark' or Target == 'Robb-Stark'") ) robbstark.head()
_____no_output_____
MIT
notebooks/05-casestudies/01-gameofthrones.ipynb
khanin-th/Network-Analysis-Made-Simple
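To push the same idea a little further (a small sketch, assuming the robbstark frame built above), we can rank Robb Stark's book-3 relationships by how often the characters interact:

# sort Robb Stark's interactions in book 3 by edge weight, strongest first
robbstark.sort_values('weight', ascending=False).head(10)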
As you can see, this data easily translates to a network problem. Now it's time to create a network.
We create a graph for each book. It's possible to create one `MultiGraph` (a graph with multiple edges between nodes) instead of 5 graphs, but it is easier to analyse and manipulate individual `Graph` objects than a `MultiGraph`.
# example of creating a MultiGraph # all_books_multigraph = nx.from_pandas_edgelist( # books, source='Source', target='Target', # edge_attr=['weight', 'book'], # create_using=nx.MultiGraph) # we create a list of graph objects using # nx.from_pandas_edgelist and specifying # the edge attributes. graphs = [nx.from_pandas_edgelist( books[books.book==i], source='Source', target='Target', edge_attr=['weight', 'weight_inv']) for i in range(1, 6)] # The Graph object associated with the first book. graphs[0] # To access the relationship edges in the graph with # the edge attribute weight data (data=True) relationships = list(graphs[0].edges(data=True)) relationships[0:3]
_____no_output_____
MIT
notebooks/05-casestudies/01-gameofthrones.ipynb
khanin-th/Network-Analysis-Made-Simple
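As a quick first look at the five graphs (a small sketch, assuming the graphs list created above), we can print how many characters and edges each book's network contains:

# number of nodes (characters) and edges (co-occurrence relationships) per book
for i, g in enumerate(graphs, start=1):
    print(f"Book {i}: {g.number_of_nodes()} characters, {g.number_of_edges()} edges")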