Dataset columns and their string-length ranges:

| Column | Type | Length (min–max) |
|---|---|---|
| markdown | string | 0 – 1.02M |
| code | string | 0 – 832k |
| output | string | 0 – 1.02M |
| license | string | 3 – 36 |
| path | string | 6 – 265 |
| repo_name | string | 6 – 127 |
Setup TMVA
TMVA.Tools.Instance()
_____no_output_____
CC-BY-4.0
MVA/TMVA_tutorial_classification_tmva_app.py.nbconvert.ipynb
LailinXu/hepstat-tutorial
Reader. One reader for each application.
reader = TMVA.Reader("Color:!Silent")
reader_S = TMVA.Reader("Color:!Silent")
reader_B = TMVA.Reader("Color:!Silent")
_____no_output_____
CC-BY-4.0
MVA/TMVA_tutorial_classification_tmva_app.py.nbconvert.ipynb
LailinXu/hepstat-tutorial
Inputs. Load data: an unknown sample.
trfile = "Zp2TeV_ttbar.root" data = TFile.Open(trfile) tree = data.Get('tree')
_____no_output_____
CC-BY-4.0
MVA/TMVA_tutorial_classification_tmva_app.py.nbconvert.ipynb
LailinXu/hepstat-tutorial
Known signal
trfile_S = "Zp1TeV_ttbar.root" data_S = TFile.Open(trfile_S) tree_S = data_S.Get('tree')
_____no_output_____
CC-BY-4.0
MVA/TMVA_tutorial_classification_tmva_app.py.nbconvert.ipynb
LailinXu/hepstat-tutorial
Known background
trfile_B = "SM_ttbar.root" data_B = TFile.Open(trfile_B) tree_B = data_B.Get('tree')
_____no_output_____
CC-BY-4.0
MVA/TMVA_tutorial_classification_tmva_app.py.nbconvert.ipynb
LailinXu/hepstat-tutorial
Set input variables. Do this for each reader
branches = {}
for branch in tree.GetListOfBranches():
    branchName = branch.GetName()
    branches[branchName] = array('f', [-999])
    tree.SetBranchAddress(branchName, branches[branchName])
    if branchName not in ["mtt_truth", "weight", "nlep", "njets"]:
        reader.AddVariable(branchName, branches[branchName])

branches_S = {}
for branch in tree_S.GetListOfBranches():
    branchName = branch.GetName()
    branches_S[branchName] = array('f', [-999])
    tree_S.SetBranchAddress(branchName, branches_S[branchName])
    if branchName not in ["mtt_truth", "weight", "nlep", "njets"]:
        reader_S.AddVariable(branchName, branches_S[branchName])

branches_B = {}
for branch in tree_B.GetListOfBranches():
    branchName = branch.GetName()
    branches_B[branchName] = array('f', [-999])
    tree_B.SetBranchAddress(branchName, branches_B[branchName])
    if branchName not in ["mtt_truth", "weight", "nlep", "njets"]:
        reader_B.AddVariable(branchName, branches_B[branchName])
_____no_output_____
CC-BY-4.0
MVA/TMVA_tutorial_classification_tmva_app.py.nbconvert.ipynb
LailinXu/hepstat-tutorial
Book method(s): BDT
methodName1 = "BDT" weightfile = 'dataset/weights/TMVAClassification_{0}.weights.xml'.format(methodName1) reader.BookMVA( methodName1, weightfile ) reader_S.BookMVA( methodName1, weightfile ) reader_B.BookMVA( methodName1, weightfile )
_____no_output_____
CC-BY-4.0
MVA/TMVA_tutorial_classification_tmva_app.py.nbconvert.ipynb
LailinXu/hepstat-tutorial
BDTG
methodName2 = "BDTG" weightfile = 'dataset/weights/TMVAClassification_{0}.weights.xml'.format(methodName2) reader.BookMVA( methodName2, weightfile ) reader_S.BookMVA( methodName2, weightfile ) reader_B.BookMVA( methodName2, weightfile )
_____no_output_____
CC-BY-4.0
MVA/TMVA_tutorial_classification_tmva_app.py.nbconvert.ipynb
LailinXu/hepstat-tutorial
Loop events for evaluation. Book histograms.
nbins, xmin, xmax = 20, -1, 1
_____no_output_____
CC-BY-4.0
MVA/TMVA_tutorial_classification_tmva_app.py.nbconvert.ipynb
LailinXu/hepstat-tutorial
Signal
tag = "S" hname="BDT_{0}".format(tag) h1 = TH1F(hname, hname, nbins, xmin, xmax) h1.Sumw2() hname="BDTG_{0}".format(tag) h2 = TH1F(hname, hname, nbins, xmin, xmax) h2.Sumw2() nevents = tree_S.GetEntries() for i in range(nevents): tree_S.GetEntry(i) BDT = reader_S.EvaluateMVA(methodName1) BDTG = reader_S.EvaluateMVA(methodName2) h1.Fill(BDT) h2.Fill(BDTG)
_____no_output_____
CC-BY-4.0
MVA/TMVA_tutorial_classification_tmva_app.py.nbconvert.ipynb
LailinXu/hepstat-tutorial
Background
tag = "B" hname="BDT_{0}".format(tag) h3 = TH1F(hname, hname, nbins, xmin, xmax) h3.Sumw2() hname="BDTG_{0}".format(tag) h4 = TH1F(hname, hname, nbins, xmin, xmax) h4.Sumw2() nevents = tree_B.GetEntries() for i in range(nevents): tree_B.GetEntry(i) BDT = reader_B.EvaluateMVA(methodName1) BDTG = reader_B.EvaluateMVA(methodName2) h3.Fill(BDT) h4.Fill(BDTG)
_____no_output_____
CC-BY-4.0
MVA/TMVA_tutorial_classification_tmva_app.py.nbconvert.ipynb
LailinXu/hepstat-tutorial
New sample
tag = "N" hname="BDT_{0}".format(tag) h5 = TH1F(hname, hname, nbins, xmin, xmax) h5.Sumw2() hname="BDTG_{0}".format(tag) h6 = TH1F(hname, hname, nbins, xmin, xmax) h6.Sumw2() nevents = tree.GetEntries() for i in range(nevents): tree.GetEntry(i) BDT = reader.EvaluateMVA(methodName1) BDTG = reader.EvaluateMVA(methodName2) h5.Fill(BDT) h6.Fill(BDTG)
_____no_output_____
CC-BY-4.0
MVA/TMVA_tutorial_classification_tmva_app.py.nbconvert.ipynb
LailinXu/hepstat-tutorial
Helper function to normalize hists
def norm_hists(h):
    h_new = h.Clone()
    hname = h.GetName() + "_normalized"
    h_new.SetName(hname)
    h_new.SetTitle(hname)
    ntot = h.Integral()
    if ntot != 0:
        h_new.Scale(1. / ntot)
    return h_new
_____no_output_____
CC-BY-4.0
MVA/TMVA_tutorial_classification_tmva_app.py.nbconvert.ipynb
LailinXu/hepstat-tutorial
Plotting
myc = TCanvas("c", "c", 800, 600)
myc.SetFillColor(0)
myc.cd()
_____no_output_____
CC-BY-4.0
MVA/TMVA_tutorial_classification_tmva_app.py.nbconvert.ipynb
LailinXu/hepstat-tutorial
Compare the performance for BDT
nh1 = norm_hists(h1)
nh1.GetXaxis().SetTitle("BDT")
nh1.GetYaxis().SetTitle("A.U.")
nh1.Draw("hist")

nh3 = norm_hists(h3)
nh3.SetLineColor(2)
nh3.SetMarkerColor(2)
nh3.Draw("same hist")

nh5 = norm_hists(h5)
nh5.SetLineColor(4)
nh5.SetMarkerColor(4)
nh5.Draw("same")

ymin = 0
ymax = max(nh1.GetMaximum(), nh3.GetMaximum(), nh5.GetMaximum())
nh1.GetYaxis().SetRangeUser(ymin, ymax * 1.5)
_____no_output_____
CC-BY-4.0
MVA/TMVA_tutorial_classification_tmva_app.py.nbconvert.ipynb
LailinXu/hepstat-tutorial
Draw legends
lIy = 0.92
lg = TLegend(0.60, lIy - 0.25, 0.85, lIy)
lg.SetBorderSize(0)
lg.SetFillStyle(0)
lg.SetTextFont(42)
lg.SetTextSize(0.04)
lg.AddEntry(nh1, "Signal 1 TeV", "l")
lg.AddEntry(nh3, "Background", "l")
lg.AddEntry(nh5, "Signal 2 TeV", "l")
lg.Draw()

myc.Draw()
myc.SaveAs("TMVA_tutorial_cla_app_1.png")
Info in <TCanvas::Print>: png file TMVA_tutorial_cla_app_1.png has been created
CC-BY-4.0
MVA/TMVA_tutorial_classification_tmva_app.py.nbconvert.ipynb
LailinXu/hepstat-tutorial
Compare the performance for BDTG
nh1 = norm_hists(h2)
nh1.GetXaxis().SetTitle("BDTG")
nh1.GetYaxis().SetTitle("A.U.")
nh1.Draw("hist")

nh3 = norm_hists(h4)
nh3.SetLineColor(2)
nh3.SetMarkerColor(2)
nh3.Draw("same hist")

nh5 = norm_hists(h6)
nh5.SetLineColor(4)
nh5.SetMarkerColor(4)
nh5.Draw("same")

ymin = 0
ymax = max(nh1.GetMaximum(), nh3.GetMaximum(), nh5.GetMaximum())
nh1.GetYaxis().SetRangeUser(ymin, ymax * 1.5)
_____no_output_____
CC-BY-4.0
MVA/TMVA_tutorial_classification_tmva_app.py.nbconvert.ipynb
LailinXu/hepstat-tutorial
Draw legends
lIy = 0.92
lg = TLegend(0.60, lIy - 0.25, 0.85, lIy)
lg.SetBorderSize(0)
lg.SetFillStyle(0)
lg.SetTextFont(42)
lg.SetTextSize(0.04)
lg.AddEntry(nh1, "Signal 1 TeV", "l")
lg.AddEntry(nh3, "Background", "l")
lg.AddEntry(nh5, "Signal 2 TeV", "l")
lg.Draw()

myc.Draw()
myc.SaveAs("TMVA_tutorial_cla_app_2.png")
Info in <TCanvas::Print>: png file TMVA_tutorial_cla_app_2.png has been created
CC-BY-4.0
MVA/TMVA_tutorial_classification_tmva_app.py.nbconvert.ipynb
LailinXu/hepstat-tutorial
Draw all canvases
from ROOT import gROOT
gROOT.GetListOfCanvases().Draw()
_____no_output_____
CC-BY-4.0
MVA/TMVA_tutorial_classification_tmva_app.py.nbconvert.ipynb
LailinXu/hepstat-tutorial
Data Processing
%pylab inline matplotlib.rcParams['figure.figsize'] = [20, 10] import pandas as pd import numpy as np import warnings warnings.filterwarnings("ignore") # All variables we concern about columnNames1 = ["releaseNum", "1968ID", "personNumber", "gender", "marriage", "familyNumber", "sequenceNum", "relationToHead", "age", 'employmentStatus', "education", "nonHeadlaborIncome"] columnNames2 = ["releaseNum", "1968ID", "personNumber", "gender", "marriage", "familyNumber", "sequenceNum", "relationToHead", "age", 'employmentStatus', "education"] FcolumnNames1999_2001 = ['releaseNum', 'familyID', 'composition', 'headCount', 'ageHead', 'maritalStatus', 'employmentStatus', 'liquidWealth', 'race', 'industry' ,'geoCode','incomeHead', "incomeWife", 'foodCost', 'houseCost', 'transCost', 'educationCost', 'childCost', 'healthCost', 'education', 'participation', 'investmentAmount', 'annuityIRA', 'wealthWithoutHomeEquity', "wealthWithHomeEquity"] FcolumnNames2003_2007 = ['releaseNum', 'familyID', 'composition', 'headCount', 'ageHead', 'maritalStatus', 'employmentStatus', 'liquidWealth', 'race', 'industry', 'incomeHead', "incomeWife", 'foodCost', 'houseCost', 'transCost', 'educationCost', 'childCost', 'healthCost', 'geoCode', 'education', 'participation', 'investmentAmount', 'annuityIRA', 'wealthWithoutHomeEquity', "wealthWithHomeEquity"] FcolumnNames2019 = ['releaseNum', 'familyID', 'composition', 'headCount', 'ageHead', 'maritalStatus', 'employmentStatus', 'liquidWealth', 'race', 'industry' ,'incomeHead', 'incomeWife', 'participation', 'investmentAmount', 'annuityIRA', 'wealthWithoutHomeEquity', 'wealthWithHomeEquity', 'foodCost', 'houseCost', 'transCost', 'educationCost', 'childCost', 'healthCost', 'geoCode', 'education'] # The timeline we care about years = [1999, 2001, 2003, 2005, 2007, 2009, 2011, 2013, 2015, 2017] # The function used to complile all years data into one dataFrame, # the input "features" is a list of features. def compile_data_with_features(features, years): df = pd.DataFrame() # Loading the data through years for year in years: df_sub = pd.read_excel("individual/" + str(year) + ".xlsx") if year >= 2005: df_sub.columns = columnNames1 df_sub['year'] = year df = pd.concat([df, df_sub[['year'] + features + ["nonHeadlaborIncome"]]]) else: df_sub.columns = columnNames2 df_sub['year'] = year df = pd.concat([df, df_sub[['year'] + features]]) df = df.reset_index(drop = True) return df def Fcompile_data_with_features(features, years): df = pd.DataFrame() # Loading the data through years for year in years: df_sub = pd.read_excel("family/" + str(year) + ".xlsx") if year >= 1999 and year <= 2001: df_sub.columns = FcolumnNames1999_2001 elif year >= 2003 and year <= 2007: df_sub.columns = FcolumnNames2003_2007 else: df_sub.columns = FcolumnNames2019 df_sub['year'] = year df = pd.concat([df, df_sub[['familyID','year'] + features]]) df = df.reset_index(drop = True) return df # The function is used to drop the values we do not like in the dataFrame, # the input "features" and "values" are both list def drop_values(features, values, df): for feature in features: for value in values: df = df[df[feature] != value] df = df.reset_index(drop = True) return df
_____no_output_____
MIT
20201120/20201116/empirical/.ipynb_checkpoints/DataProcessing-checkpoint.ipynb
dongxulee/lifeCycle
Individual Data
Idf = compile_data_with_features(["1968ID", "personNumber", "familyNumber","gender", "marriage", "age", 'employmentStatus', "education", "relationToHead"], years) Idf["ID"] = Idf["1968ID"]* 1000 + Idf["personNumber"] # pick out the head in the individual df_head = Idf[Idf["relationToHead"] == 10] df_head = df_head.reset_index(drop = True) # compile individuals with all 10 years data. completeIndividualData = [] for ID, value in df_head.groupby("ID"): if len(value) == len(years): completeIndividualData.append(value) print("Number of heads with complete data: ", len(completeIndividualData)) # prepare the combined dataset and set up dummy variables for qualitative data df = Fcompile_data_with_features(['composition', 'headCount', 'ageHead', 'maritalStatus', 'employmentStatus', 'liquidWealth', 'race', 'industry' ,'geoCode','incomeHead', "incomeWife", 'foodCost', 'houseCost', 'transCost', 'educationCost', 'childCost', 'healthCost', 'education', 'participation', 'investmentAmount', 'annuityIRA', 'wealthWithoutHomeEquity', "wealthWithHomeEquity"], years)
_____no_output_____
MIT
20201120/20201116/empirical/.ipynb_checkpoints/DataProcessing-checkpoint.ipynb
dongxulee/lifeCycle
Family Data
# prepare the combined dataset and set up dummy variables for qualitative data df = Fcompile_data_with_features(['composition', 'headCount', 'ageHead', 'maritalStatus', 'employmentStatus', 'liquidWealth', 'race', 'industry' ,'geoCode','incomeHead', "incomeWife", 'foodCost', 'houseCost', 'transCost', 'educationCost', 'childCost', 'healthCost', 'education', 'participation', 'investmentAmount', 'annuityIRA', 'wealthWithoutHomeEquity', "wealthWithHomeEquity"], years) df = drop_values(["ageHead"],[999], df) df = drop_values(["maritalStatus"],[8,9], df) df = drop_values(["employmentStatus"],[0, 22, 98, 99], df) df = drop_values(["liquidWealth"],[999999998,999999999], df) df = drop_values(["race"],[0,8,9], df) df = drop_values(["industry"],[999,0], df) df = drop_values(["education"],[99,0], df) df["totalExpense"] = df[['foodCost', 'houseCost', 'transCost', 'educationCost', 'childCost', 'healthCost']].sum(axis = 1) df["laborIncome"] = df["incomeHead"] + df["incomeWife"] df["costPerPerson"] = df["totalExpense"]/df["headCount"] maritalStatus = ["Married", "neverMarried", "Widowed", "Divorced", "Separated"] employmentStatus = ["Working", "temporalLeave", "unemployed", "retired", "disabled", "keepHouse", "student", "other"] race = ["White", "Black","AmericanIndian","Asian","Latino","otherBW","otherRace"] # Education # < 8th grade: middle school # >= 8 and < 12: high scho0l # >=12 and < 15: college # >= 15 post graduate education = ["middleSchool", "highSchool", "college", "postGraduate"] # Industry # < 400 manufacturing # >= 400 and < 500 publicUtility # >= 500 and < 680 retail # >= 680 and < 720 finance # >= 720 and < 900 service # >= 900 otherIndustry industry = ["manufacturing", "publicUtility", "retail", "finance", "service", "otherIndustry"] data = [] for i in range(len(df)): dataCollect = [] # marital status dataCollect.append(maritalStatus[int(df.iloc[i]["maritalStatus"]-1)]) # employment dataCollect.append(employmentStatus[int(df.iloc[i]["employmentStatus"]-1)]) # race dataCollect.append(race[int(df.iloc[i]["race"] - 1)]) # Education variable if df.iloc[i]["education"] < 8: dataCollect.append(education[0]) elif df.iloc[i]["education"] >= 8 and df.iloc[i]["education"] < 12: dataCollect.append(education[1]) elif df.iloc[i]["education"] >= 12 and df.iloc[i]["education"] < 15: dataCollect.append(education[2]) else: dataCollect.append(education[3]) # industry variable if df.iloc[i]["industry"] < 400: dataCollect.append(industry[0]) elif df.iloc[i]["industry"] >= 400 and df.iloc[i]["industry"] < 500: dataCollect.append(industry[1]) elif df.iloc[i]["industry"] >= 500 and df.iloc[i]["industry"] < 680: dataCollect.append(industry[2]) elif df.iloc[i]["industry"] >= 680 and df.iloc[i]["industry"] < 720: dataCollect.append(industry[3]) elif df.iloc[i]["industry"] >= 720 and df.iloc[i]["industry"] < 900: dataCollect.append(industry[4]) else: dataCollect.append(industry[5]) data.append(dataCollect) # Categorical dataFrame df_cat = pd.DataFrame(data, columns = ["maritalStatus", "employmentStatus", "race", "education", "industry"]) Fdf = pd.concat([df[["familyID", "year",'composition', 'headCount', 'ageHead', 'liquidWealth', 'laborIncome', "costPerPerson","totalExpense", 'participation', 'investmentAmount', 'annuityIRA', 'wealthWithoutHomeEquity', "wealthWithHomeEquity"]], df_cat[["maritalStatus", "employmentStatus", "education","race", "industry"]]], axis=1) # Adjust for inflation. 
years = [1999, 2001, 2003, 2005, 2007, 2009, 2011, 2013, 2015, 2017] values_at2020 = np.array([1.55, 1.46, 1.40, 1.32, 1.24, 1.20, 1.15, 1.11, 1.09, 1.05]) values_at2005 = values_at2020/1.32 values_at2005 quantVariables = ['annuityIRA', 'investmentAmount', 'liquidWealth', 'laborIncome', 'costPerPerson','costPerPerson', 'totalExpense', 'wealthWithoutHomeEquity', 'wealthWithHomeEquity'] for i in range(len(Fdf)): for variable in quantVariables: Fdf.at[i, variable] = round(Fdf.at[i, variable] * values_at2005[years.index(Fdf.at[i,"year"])], 2)
_____no_output_____
MIT
20201120/20201116/empirical/.ipynb_checkpoints/DataProcessing-checkpoint.ipynb
dongxulee/lifeCycle
Link Family Data with Individual Head Data
completeFamilyData = [] for individual in completeIndividualData: idf = pd.DataFrame() for i in range(len(individual)): idf = pd.concat([idf, Fdf[(Fdf.year == individual.iloc[i].year)& (Fdf.familyID == individual.iloc[i].familyNumber)]]) completeFamilyData.append(idf.set_index("year", drop = True)) FamilyData = [f for f in completeFamilyData if len(f) == len(years)] len(FamilyData) # skilled definition with college and postGraduate skilled_index = [] for i in range(1973): if "postGraduate" in FamilyData[i].education.values or "college" in FamilyData[i].education.values: skilled_index.append(i) len(skilled_index) # skilled definition with postGraduate skilled_index = [] for i in range(1973): if "postGraduate" in FamilyData[i].education.values: skilled_index.append(i) len(skilled_index) # working in the finance industry finance_index = [] for i in range(1973): if "finance" in FamilyData[i].industry.values: finance_index.append(i) len(finance_index) a = FamilyData[randint(0, 1973)] a
_____no_output_____
MIT
20201120/20201116/empirical/.ipynb_checkpoints/DataProcessing-checkpoint.ipynb
dongxulee/lifeCycle
Individual plot
def inFeaturePlot(FamilyData, feature, n):
    plt.figure()
    for i in range(n[0], n[1]):
        FamilyData[i][feature].plot(marker='o')
    plt.show()

def plotFeatureVsAge(FamilyData, feature, n):
    plt.figure()
    for i in range(n[0], n[1]):
        plt.plot(FamilyData[i].ageHead, FamilyData[i][feature], marker='o')
    plt.show()

inFeaturePlot(FamilyData, "laborIncome", [1, 100])
_____no_output_____
MIT
20201120/20201116/empirical/.ipynb_checkpoints/DataProcessing-checkpoint.ipynb
dongxulee/lifeCycle
Average variable plot
def plotFeature(FamilyData, feature):
    df = FamilyData[0][feature] * 0
    for i in range(len(FamilyData)):
        df = df + FamilyData[i][feature]
    df = df / len(FamilyData)
    df.plot(marker='o')
    print(df)

# laborIncome
plotFeature(FamilyData, "laborIncome")
# investmentAmount
plotFeature(FamilyData, "investmentAmount")
# Expenditure
plotFeature(FamilyData, "totalExpense")
# wealthWithoutHomeEquity
plotFeature(FamilyData, "wealthWithoutHomeEquity")
# wealthWithHomeEquity
plotFeature(FamilyData, "wealthWithHomeEquity")
# annuityIRA
plotFeature(FamilyData, "annuityIRA")
year 1999 31462.509377 2001 34869.478459 2003 31720.994932 2005 40220.458186 2007 50619.510897 2009 41815.880892 2011 62674.657881 2013 65190.544349 2015 78211.521034 2017 84070.374050 Name: annuityIRA, dtype: float64
MIT
20201120/20201116/empirical/.ipynb_checkpoints/DataProcessing-checkpoint.ipynb
dongxulee/lifeCycle
Compare The Distribution Over Age
df = Fdf[(Fdf["ageHead"]>=20) & (Fdf["ageHead"]<=80)] df[['liquidWealth', 'laborIncome', 'costPerPerson', 'totalExpense','investmentAmount', 'annuityIRA', 'wealthWithoutHomeEquity', 'wealthWithHomeEquity']] = df[['liquidWealth', 'laborIncome', 'costPerPerson', 'totalExpense','investmentAmount', 'annuityIRA', 'wealthWithoutHomeEquity', 'wealthWithHomeEquity']]/1000 df.shape df.columns ww = df.groupby("ageHead")["liquidWealth"].mean() nn = df.groupby("ageHead")["annuityIRA"].mean() cc = df.groupby("ageHead")["totalExpense"].mean() kk = df.groupby("ageHead")["investmentAmount"].mean() ytyt = df.groupby("ageHead")["laborIncome"].mean() plt.figure(figsize = [14,8]) plt.plot(ww, label = "wealth") plt.plot(cc, label = "Consumption") plt.plot(kk, label = "Stock") plt.legend() plt.plot(nn, label = "IRA") np.save('nn',nn)
_____no_output_____
MIT
20201120/20201116/empirical/.ipynb_checkpoints/DataProcessing-checkpoint.ipynb
dongxulee/lifeCycle
Data
path = Path('/content/drive/My Drive/Archieve/ValueLabs'); path.ls()
train_df = pd.read_csv(path/'Train.csv'); train_df.head()
bs = 24
_____no_output_____
MIT
Colab Notebooks/competition/valuelabs_ml_hiring_challenge.ipynb
ankschoubey/notes
LM
data_lm = (TextList .from_df(train_df,cols=['question', 'answer_text', 'distractor']) .split_by_rand_pct(0.1) .label_for_lm() .databunch(bs=bs) ) data_lm.save(path/'lm.pkl') data_lm = load_data(path, 'lm.pkl', bs=bs) data_lm.show_batch() learn = language_model_learner(data_lm, AWD_LSTM, drop_mult=0.3) learn.lr_find() learn.recorder.plot(skip_end=10) learn.fit_one_cycle(1, 1e-2, moms=(0.8,0.7)) learn.save(path/'fit_head') learn.load(path/'fit_head') #@title Default title text variable_name = "hello" #@param {type:"string"} learn.fit_one_cycle(10, 1e-3, moms=(0.8,0.7)) learn.save(path/'fine_tuned') learn.load(path/'fine_tuned') learn.predict("hi what is the problem",20) test_df = pd.read_csv(path/'Test.csv');test_df.head() combined = test_df['question']+test_df['answer_text'] combined.head() combined.shape output = [] for index, value in combined.iteritems(): if index % 100 == 0: print(index) output.append(learn.predict(value)) import pickle with open(path/'output_list','w') as f: pickle.dumps(output, f) output[:5]
_____no_output_____
MIT
Colab Notebooks/competition/valuelabs_ml_hiring_challenge.ipynb
ankschoubey/notes
RadarCOVID-Report Data Extraction
import datetime
import json
import logging
import os
import shutil
import tempfile
import textwrap
import uuid

import matplotlib.pyplot as plt
import matplotlib.ticker
import numpy as np
import pandas as pd
import retry
import seaborn as sns

%matplotlib inline

current_working_directory = os.environ.get("PWD")
if current_working_directory:
    os.chdir(current_working_directory)

sns.set()
matplotlib.rcParams["figure.figsize"] = (15, 6)

extraction_datetime = datetime.datetime.utcnow()
extraction_date = extraction_datetime.strftime("%Y-%m-%d")
extraction_previous_datetime = extraction_datetime - datetime.timedelta(days=1)
extraction_previous_date = extraction_previous_datetime.strftime("%Y-%m-%d")
extraction_date_with_hour = datetime.datetime.utcnow().strftime("%Y-%m-%d@%H")
current_hour = datetime.datetime.utcnow().hour
are_today_results_partial = current_hour != 23
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2020-11-07.ipynb
pvieito/Radar-STATS
Constants
from Modules.ExposureNotification import exposure_notification_io

spain_region_country_code = "ES"
germany_region_country_code = "DE"
default_backend_identifier = spain_region_country_code

backend_generation_days = 7 * 2
daily_summary_days = 7 * 4 * 3
daily_plot_days = 7 * 4
tek_dumps_load_limit = daily_summary_days + 1
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2020-11-07.ipynb
pvieito/Radar-STATS
Parameters
environment_backend_identifier = os.environ.get("RADARCOVID_REPORT__BACKEND_IDENTIFIER")
if environment_backend_identifier:
    report_backend_identifier = environment_backend_identifier
else:
    report_backend_identifier = default_backend_identifier
report_backend_identifier

environment_enable_multi_backend_download = \
    os.environ.get("RADARCOVID_REPORT__ENABLE_MULTI_BACKEND_DOWNLOAD")
if environment_enable_multi_backend_download:
    report_backend_identifiers = None
else:
    report_backend_identifiers = [report_backend_identifier]
report_backend_identifiers

environment_invalid_shared_diagnoses_dates = \
    os.environ.get("RADARCOVID_REPORT__INVALID_SHARED_DIAGNOSES_DATES")
if environment_invalid_shared_diagnoses_dates:
    invalid_shared_diagnoses_dates = environment_invalid_shared_diagnoses_dates.split(",")
else:
    invalid_shared_diagnoses_dates = []
invalid_shared_diagnoses_dates
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2020-11-07.ipynb
pvieito/Radar-STATS
COVID-19 Cases
report_backend_client = \ exposure_notification_io.get_backend_client_with_identifier( backend_identifier=report_backend_identifier) @retry.retry(tries=10, delay=10, backoff=1.1, jitter=(0, 10)) def download_cases_dataframe_from_ecdc(): return pd.read_csv( "https://opendata.ecdc.europa.eu/covid19/casedistribution/csv/data.csv") confirmed_df_ = download_cases_dataframe_from_ecdc() confirmed_df = confirmed_df_.copy() confirmed_df = confirmed_df[["dateRep", "cases", "geoId"]] confirmed_df.rename( columns={ "dateRep":"sample_date", "cases": "new_cases", "geoId": "country_code", }, inplace=True) confirmed_df["sample_date"] = pd.to_datetime(confirmed_df.sample_date, dayfirst=True) confirmed_df["sample_date"] = confirmed_df.sample_date.dt.strftime("%Y-%m-%d") confirmed_df.sort_values("sample_date", inplace=True) confirmed_df.tail() def sort_source_regions_for_display(source_regions: list) -> list: if report_backend_identifier in source_regions: source_regions = [report_backend_identifier] + \ list(sorted(set(source_regions).difference([report_backend_identifier]))) else: source_regions = list(sorted(source_regions)) return source_regions confirmed_days = pd.date_range( start=confirmed_df.iloc[0].sample_date, end=extraction_datetime) source_regions_at_date_df = pd.DataFrame(data=confirmed_days, columns=["sample_date"]) source_regions_at_date_df["source_regions_at_date"] = \ source_regions_at_date_df.sample_date.apply( lambda x: report_backend_client.source_regions_for_date(date=x)) source_regions_at_date_df.sort_values("sample_date", inplace=True) source_regions_at_date_df["_source_regions_group"] = source_regions_at_date_df. \ source_regions_at_date.apply(lambda x: ",".join(sort_source_regions_for_display(x))) source_regions_at_date_df["sample_date_string"] = \ source_regions_at_date_df.sample_date.dt.strftime("%Y-%m-%d") source_regions_at_date_df.tail() source_regions_for_summary_df = \ source_regions_at_date_df[["sample_date", "_source_regions_group"]].copy() source_regions_for_summary_df.rename(columns={"_source_regions_group": "source_regions"}, inplace=True) source_regions_for_summary_df.head() confirmed_output_columns = ["sample_date", "new_cases", "covid_cases"] confirmed_output_df = pd.DataFrame(columns=confirmed_output_columns) for source_regions_group, source_regions_group_series in \ source_regions_at_date_df.groupby("_source_regions_group"): source_regions_set = set(source_regions_group.split(",")) confirmed_source_regions_set_df = \ confirmed_df[confirmed_df.country_code.isin(source_regions_set)].copy() confirmed_source_regions_group_df = \ confirmed_source_regions_set_df.groupby("sample_date").new_cases.sum() \ .reset_index().sort_values("sample_date") confirmed_source_regions_group_df["covid_cases"] = \ confirmed_source_regions_group_df.new_cases.rolling(7, min_periods=0).mean().round() confirmed_source_regions_group_df = \ confirmed_source_regions_group_df[confirmed_output_columns] confirmed_source_regions_group_df.fillna(method="ffill", inplace=True) confirmed_source_regions_group_df = \ confirmed_source_regions_group_df[ confirmed_source_regions_group_df.sample_date.isin( source_regions_group_series.sample_date_string)] confirmed_output_df = confirmed_output_df.append(confirmed_source_regions_group_df) confirmed_df = confirmed_output_df.copy() confirmed_df.tail() confirmed_df.rename(columns={"sample_date": "sample_date_string"}, inplace=True) confirmed_df.sort_values("sample_date_string", inplace=True) confirmed_df.tail() confirmed_df[["new_cases", "covid_cases"]].plot()
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2020-11-07.ipynb
pvieito/Radar-STATS
Extract API TEKs
raw_zip_path_prefix = "Data/TEKs/Raw/" fail_on_error_backend_identifiers = [report_backend_identifier] multi_backend_exposure_keys_df = \ exposure_notification_io.download_exposure_keys_from_backends( backend_identifiers=report_backend_identifiers, generation_days=backend_generation_days, fail_on_error_backend_identifiers=fail_on_error_backend_identifiers, save_raw_zip_path_prefix=raw_zip_path_prefix) multi_backend_exposure_keys_df["region"] = multi_backend_exposure_keys_df["backend_identifier"] multi_backend_exposure_keys_df.rename( columns={ "generation_datetime": "sample_datetime", "generation_date_string": "sample_date_string", }, inplace=True) multi_backend_exposure_keys_df.head() early_teks_df = multi_backend_exposure_keys_df[ multi_backend_exposure_keys_df.rolling_period < 144].copy() early_teks_df["rolling_period_in_hours"] = early_teks_df.rolling_period / 6 early_teks_df[early_teks_df.sample_date_string != extraction_date] \ .rolling_period_in_hours.hist(bins=list(range(24))) early_teks_df[early_teks_df.sample_date_string == extraction_date] \ .rolling_period_in_hours.hist(bins=list(range(24))) multi_backend_exposure_keys_df = multi_backend_exposure_keys_df[[ "sample_date_string", "region", "key_data"]] multi_backend_exposure_keys_df.head() active_regions = \ multi_backend_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist() active_regions multi_backend_summary_df = multi_backend_exposure_keys_df.groupby( ["sample_date_string", "region"]).key_data.nunique().reset_index() \ .pivot(index="sample_date_string", columns="region") \ .sort_index(ascending=False) multi_backend_summary_df.rename( columns={"key_data": "shared_teks_by_generation_date"}, inplace=True) multi_backend_summary_df.rename_axis("sample_date", inplace=True) multi_backend_summary_df = multi_backend_summary_df.fillna(0).astype(int) multi_backend_summary_df = multi_backend_summary_df.head(backend_generation_days) multi_backend_summary_df.head() def compute_keys_cross_sharing(x): teks_x = x.key_data_x.item() common_teks = set(teks_x).intersection(x.key_data_y.item()) common_teks_fraction = len(common_teks) / len(teks_x) return pd.Series(dict( common_teks=common_teks, common_teks_fraction=common_teks_fraction, )) multi_backend_exposure_keys_by_region_df = \ multi_backend_exposure_keys_df.groupby("region").key_data.unique().reset_index() multi_backend_exposure_keys_by_region_df["_merge"] = True multi_backend_exposure_keys_by_region_combination_df = \ multi_backend_exposure_keys_by_region_df.merge( multi_backend_exposure_keys_by_region_df, on="_merge") multi_backend_exposure_keys_by_region_combination_df.drop( columns=["_merge"], inplace=True) if multi_backend_exposure_keys_by_region_combination_df.region_x.nunique() > 1: multi_backend_exposure_keys_by_region_combination_df = \ multi_backend_exposure_keys_by_region_combination_df[ multi_backend_exposure_keys_by_region_combination_df.region_x != multi_backend_exposure_keys_by_region_combination_df.region_y] multi_backend_exposure_keys_cross_sharing_df = \ multi_backend_exposure_keys_by_region_combination_df \ .groupby(["region_x", "region_y"]) \ .apply(compute_keys_cross_sharing) \ .reset_index() multi_backend_cross_sharing_summary_df = \ multi_backend_exposure_keys_cross_sharing_df.pivot_table( values=["common_teks_fraction"], columns="region_x", index="region_y", aggfunc=lambda x: x.item()) multi_backend_cross_sharing_summary_df multi_backend_without_active_region_exposure_keys_df = \ 
multi_backend_exposure_keys_df[multi_backend_exposure_keys_df.region != report_backend_identifier] multi_backend_without_active_region = \ multi_backend_without_active_region_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist() multi_backend_without_active_region exposure_keys_summary_df = multi_backend_exposure_keys_df[ multi_backend_exposure_keys_df.region == report_backend_identifier] exposure_keys_summary_df.drop(columns=["region"], inplace=True) exposure_keys_summary_df = \ exposure_keys_summary_df.groupby(["sample_date_string"]).key_data.nunique().to_frame() exposure_keys_summary_df = \ exposure_keys_summary_df.reset_index().set_index("sample_date_string") exposure_keys_summary_df.sort_index(ascending=False, inplace=True) exposure_keys_summary_df.rename(columns={"key_data": "shared_teks_by_generation_date"}, inplace=True) exposure_keys_summary_df.head()
/opt/hostedtoolcache/Python/3.8.6/x64/lib/python3.8/site-packages/pandas/core/frame.py:4110: SettingWithCopyWarning: A value is trying to be set on a copy of a slice from a DataFrame See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy return super().drop(
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2020-11-07.ipynb
pvieito/Radar-STATS
Dump API TEKs
tek_list_df = multi_backend_exposure_keys_df[ ["sample_date_string", "region", "key_data"]].copy() tek_list_df["key_data"] = tek_list_df["key_data"].apply(str) tek_list_df.rename(columns={ "sample_date_string": "sample_date", "key_data": "tek_list"}, inplace=True) tek_list_df = tek_list_df.groupby( ["sample_date", "region"]).tek_list.unique().reset_index() tek_list_df["extraction_date"] = extraction_date tek_list_df["extraction_date_with_hour"] = extraction_date_with_hour tek_list_path_prefix = "Data/TEKs/" tek_list_current_path = tek_list_path_prefix + f"/Current/RadarCOVID-TEKs.json" tek_list_daily_path = tek_list_path_prefix + f"Daily/RadarCOVID-TEKs-{extraction_date}.json" tek_list_hourly_path = tek_list_path_prefix + f"Hourly/RadarCOVID-TEKs-{extraction_date_with_hour}.json" for path in [tek_list_current_path, tek_list_daily_path, tek_list_hourly_path]: os.makedirs(os.path.dirname(path), exist_ok=True) tek_list_df.drop(columns=["extraction_date", "extraction_date_with_hour"]).to_json( tek_list_current_path, lines=True, orient="records") tek_list_df.drop(columns=["extraction_date_with_hour"]).to_json( tek_list_daily_path, lines=True, orient="records") tek_list_df.to_json( tek_list_hourly_path, lines=True, orient="records") tek_list_df.head()
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2020-11-07.ipynb
pvieito/Radar-STATS
Load TEK Dumps
import glob def load_extracted_teks(mode, region=None, limit=None) -> pd.DataFrame: extracted_teks_df = pd.DataFrame(columns=["region"]) file_paths = list(reversed(sorted(glob.glob(tek_list_path_prefix + mode + "/RadarCOVID-TEKs-*.json")))) if limit: file_paths = file_paths[:limit] for file_path in file_paths: logging.info(f"Loading TEKs from '{file_path}'...") iteration_extracted_teks_df = pd.read_json(file_path, lines=True) extracted_teks_df = extracted_teks_df.append( iteration_extracted_teks_df, sort=False) extracted_teks_df["region"] = \ extracted_teks_df.region.fillna(spain_region_country_code).copy() if region: extracted_teks_df = \ extracted_teks_df[extracted_teks_df.region == region] return extracted_teks_df daily_extracted_teks_df = load_extracted_teks( mode="Daily", region=report_backend_identifier, limit=tek_dumps_load_limit) daily_extracted_teks_df.head() exposure_keys_summary_df_ = daily_extracted_teks_df \ .sort_values("extraction_date", ascending=False) \ .groupby("sample_date").tek_list.first() \ .to_frame() exposure_keys_summary_df_.index.name = "sample_date_string" exposure_keys_summary_df_["tek_list"] = \ exposure_keys_summary_df_.tek_list.apply(len) exposure_keys_summary_df_ = exposure_keys_summary_df_ \ .rename(columns={"tek_list": "shared_teks_by_generation_date"}) \ .sort_index(ascending=False) exposure_keys_summary_df = exposure_keys_summary_df_ exposure_keys_summary_df.head()
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2020-11-07.ipynb
pvieito/Radar-STATS
Daily New TEKs
tek_list_df = daily_extracted_teks_df.groupby("extraction_date").tek_list.apply( lambda x: set(sum(x, []))).reset_index() tek_list_df = tek_list_df.set_index("extraction_date").sort_index(ascending=True) tek_list_df.head() def compute_teks_by_generation_and_upload_date(date): day_new_teks_set_df = tek_list_df.copy().diff() try: day_new_teks_set = day_new_teks_set_df[ day_new_teks_set_df.index == date].tek_list.item() except ValueError: day_new_teks_set = None if pd.isna(day_new_teks_set): day_new_teks_set = set() day_new_teks_df = daily_extracted_teks_df[ daily_extracted_teks_df.extraction_date == date].copy() day_new_teks_df["shared_teks"] = \ day_new_teks_df.tek_list.apply(lambda x: set(x).intersection(day_new_teks_set)) day_new_teks_df["shared_teks"] = \ day_new_teks_df.shared_teks.apply(len) day_new_teks_df["upload_date"] = date day_new_teks_df.rename(columns={"sample_date": "generation_date"}, inplace=True) day_new_teks_df = day_new_teks_df[ ["upload_date", "generation_date", "shared_teks"]] day_new_teks_df["generation_to_upload_days"] = \ (pd.to_datetime(day_new_teks_df.upload_date) - pd.to_datetime(day_new_teks_df.generation_date)).dt.days day_new_teks_df = day_new_teks_df[day_new_teks_df.shared_teks > 0] return day_new_teks_df shared_teks_generation_to_upload_df = pd.DataFrame() for upload_date in daily_extracted_teks_df.extraction_date.unique(): shared_teks_generation_to_upload_df = \ shared_teks_generation_to_upload_df.append( compute_teks_by_generation_and_upload_date(date=upload_date)) shared_teks_generation_to_upload_df \ .sort_values(["upload_date", "generation_date"], ascending=False, inplace=True) shared_teks_generation_to_upload_df.tail() today_new_teks_df = \ shared_teks_generation_to_upload_df[ shared_teks_generation_to_upload_df.upload_date == extraction_date].copy() today_new_teks_df.tail() if not today_new_teks_df.empty: today_new_teks_df.set_index("generation_to_upload_days") \ .sort_index().shared_teks.plot.bar() generation_to_upload_period_pivot_df = \ shared_teks_generation_to_upload_df[ ["upload_date", "generation_to_upload_days", "shared_teks"]] \ .pivot(index="upload_date", columns="generation_to_upload_days") \ .sort_index(ascending=False).fillna(0).astype(int) \ .droplevel(level=0, axis=1) generation_to_upload_period_pivot_df.head() new_tek_df = tek_list_df.diff().tek_list.apply( lambda x: len(x) if not pd.isna(x) else None).to_frame().reset_index() new_tek_df.rename(columns={ "tek_list": "shared_teks_by_upload_date", "extraction_date": "sample_date_string",}, inplace=True) new_tek_df.tail() shared_teks_uploaded_on_generation_date_df = shared_teks_generation_to_upload_df[ shared_teks_generation_to_upload_df.generation_to_upload_days == 0] \ [["upload_date", "shared_teks"]].rename( columns={ "upload_date": "sample_date_string", "shared_teks": "shared_teks_uploaded_on_generation_date", }) shared_teks_uploaded_on_generation_date_df.head() estimated_shared_diagnoses_df = shared_teks_generation_to_upload_df \ .groupby(["upload_date"]).shared_teks.max().reset_index() \ .sort_values(["upload_date"], ascending=False) \ .rename(columns={ "upload_date": "sample_date_string", "shared_teks": "shared_diagnoses", }) invalid_shared_diagnoses_dates_mask = \ estimated_shared_diagnoses_df.sample_date_string.isin(invalid_shared_diagnoses_dates) estimated_shared_diagnoses_df[invalid_shared_diagnoses_dates_mask] = 0 estimated_shared_diagnoses_df.head()
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2020-11-07.ipynb
pvieito/Radar-STATS
Hourly New TEKs
hourly_extracted_teks_df = load_extracted_teks( mode="Hourly", region=report_backend_identifier, limit=25) hourly_extracted_teks_df.head() hourly_new_tek_count_df = hourly_extracted_teks_df \ .groupby("extraction_date_with_hour").tek_list. \ apply(lambda x: set(sum(x, []))).reset_index().copy() hourly_new_tek_count_df = hourly_new_tek_count_df.set_index("extraction_date_with_hour") \ .sort_index(ascending=True) hourly_new_tek_count_df["new_tek_list"] = hourly_new_tek_count_df.tek_list.diff() hourly_new_tek_count_df["new_tek_count"] = hourly_new_tek_count_df.new_tek_list.apply( lambda x: len(x) if not pd.isna(x) else 0) hourly_new_tek_count_df.rename(columns={ "new_tek_count": "shared_teks_by_upload_date"}, inplace=True) hourly_new_tek_count_df = hourly_new_tek_count_df.reset_index()[[ "extraction_date_with_hour", "shared_teks_by_upload_date"]] hourly_new_tek_count_df.head() hourly_summary_df = hourly_new_tek_count_df.copy() hourly_summary_df.set_index("extraction_date_with_hour", inplace=True) hourly_summary_df = hourly_summary_df.fillna(0).astype(int).reset_index() hourly_summary_df["datetime_utc"] = pd.to_datetime( hourly_summary_df.extraction_date_with_hour, format="%Y-%m-%d@%H") hourly_summary_df.set_index("datetime_utc", inplace=True) hourly_summary_df = hourly_summary_df.tail(-1) hourly_summary_df.head()
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2020-11-07.ipynb
pvieito/Radar-STATS
Data Merge
result_summary_df = exposure_keys_summary_df.merge( new_tek_df, on=["sample_date_string"], how="outer") result_summary_df.head() result_summary_df = result_summary_df.merge( shared_teks_uploaded_on_generation_date_df, on=["sample_date_string"], how="outer") result_summary_df.head() result_summary_df = result_summary_df.merge( estimated_shared_diagnoses_df, on=["sample_date_string"], how="outer") result_summary_df.head() result_summary_df = confirmed_df.tail(daily_summary_days).merge( result_summary_df, on=["sample_date_string"], how="left") result_summary_df.head() result_summary_df["sample_date"] = pd.to_datetime(result_summary_df.sample_date_string) result_summary_df = result_summary_df.merge(source_regions_for_summary_df, how="left") result_summary_df.set_index(["sample_date", "source_regions"], inplace=True) result_summary_df.drop(columns=["sample_date_string"], inplace=True) result_summary_df.sort_index(ascending=False, inplace=True) result_summary_df.head() with pd.option_context("mode.use_inf_as_na", True): result_summary_df = result_summary_df.fillna(0).astype(int) result_summary_df["teks_per_shared_diagnosis"] = \ (result_summary_df.shared_teks_by_upload_date / result_summary_df.shared_diagnoses).fillna(0) result_summary_df["shared_diagnoses_per_covid_case"] = \ (result_summary_df.shared_diagnoses / result_summary_df.covid_cases).fillna(0) result_summary_df.head(daily_plot_days) weekly_result_summary_df = result_summary_df \ .sort_index(ascending=True).fillna(0).rolling(7).agg({ "covid_cases": "sum", "shared_teks_by_generation_date": "sum", "shared_teks_by_upload_date": "sum", "shared_diagnoses": "sum" }).sort_index(ascending=False) with pd.option_context("mode.use_inf_as_na", True): weekly_result_summary_df = weekly_result_summary_df.fillna(0).astype(int) weekly_result_summary_df["teks_per_shared_diagnosis"] = \ (weekly_result_summary_df.shared_teks_by_upload_date / weekly_result_summary_df.shared_diagnoses).fillna(0) weekly_result_summary_df["shared_diagnoses_per_covid_case"] = \ (weekly_result_summary_df.shared_diagnoses / weekly_result_summary_df.covid_cases).fillna(0) weekly_result_summary_df.head() last_7_days_summary = weekly_result_summary_df.to_dict(orient="records")[1] last_7_days_summary
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2020-11-07.ipynb
pvieito/Radar-STATS
Report Results
display_column_name_mapping = { "sample_date": "Sample\u00A0Date\u00A0(UTC)", "source_regions": "Source Countries", "datetime_utc": "Timestamp (UTC)", "upload_date": "Upload Date (UTC)", "generation_to_upload_days": "Generation to Upload Period in Days", "region": "Backend", "region_x": "Backend\u00A0(A)", "region_y": "Backend\u00A0(B)", "common_teks": "Common TEKs Shared Between Backends", "common_teks_fraction": "Fraction of TEKs in Backend (A) Available in Backend (B)", "covid_cases": "COVID-19 Cases in Source Countries (7-day Rolling Average)", "shared_teks_by_generation_date": "Shared TEKs by Generation Date", "shared_teks_by_upload_date": "Shared TEKs by Upload Date", "shared_diagnoses": "Shared Diagnoses (Estimation)", "teks_per_shared_diagnosis": "TEKs Uploaded per Shared Diagnosis", "shared_diagnoses_per_covid_case": "Usage Ratio (Fraction of Cases in Source Countries Which Shared Diagnosis)", "shared_teks_uploaded_on_generation_date": "Shared TEKs Uploaded on Generation Date", } summary_columns = [ "covid_cases", "shared_teks_by_generation_date", "shared_teks_by_upload_date", "shared_teks_uploaded_on_generation_date", "shared_diagnoses", "teks_per_shared_diagnosis", "shared_diagnoses_per_covid_case", ]
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2020-11-07.ipynb
pvieito/Radar-STATS
Daily Summary Table
result_summary_df_ = result_summary_df.copy()
result_summary_df = result_summary_df[summary_columns]

result_summary_with_display_names_df = result_summary_df \
    .rename_axis(index=display_column_name_mapping) \
    .rename(columns=display_column_name_mapping)
result_summary_with_display_names_df
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2020-11-07.ipynb
pvieito/Radar-STATS
Daily Summary Plots
result_plot_summary_df = result_summary_df.head(daily_plot_days)[summary_columns] \ .droplevel(level=["source_regions"]) \ .rename_axis(index=display_column_name_mapping) \ .rename(columns=display_column_name_mapping) summary_ax_list = result_plot_summary_df.sort_index(ascending=True).plot.bar( title=f"Daily Summary", rot=45, subplots=True, figsize=(15, 22), legend=False) ax_ = summary_ax_list[-1] ax_.get_figure().tight_layout() ax_.get_figure().subplots_adjust(top=0.95) ax_.yaxis.set_major_formatter(matplotlib.ticker.PercentFormatter(1.0)) _ = ax_.set_xticklabels(sorted(result_plot_summary_df.index.strftime("%Y-%m-%d").tolist()))
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2020-11-07.ipynb
pvieito/Radar-STATS
Daily Generation to Upload Period Table
display_generation_to_upload_period_pivot_df = \ generation_to_upload_period_pivot_df \ .head(backend_generation_days) display_generation_to_upload_period_pivot_df \ .head(backend_generation_days) \ .rename_axis(columns=display_column_name_mapping) \ .rename_axis(index=display_column_name_mapping) fig, generation_to_upload_period_pivot_table_ax = plt.subplots( figsize=(12, 1 + 0.6 * len(display_generation_to_upload_period_pivot_df))) generation_to_upload_period_pivot_table_ax.set_title( "Shared TEKs Generation to Upload Period Table") sns.heatmap( data=display_generation_to_upload_period_pivot_df .rename_axis(columns=display_column_name_mapping) .rename_axis(index=display_column_name_mapping), fmt=".0f", annot=True, ax=generation_to_upload_period_pivot_table_ax) generation_to_upload_period_pivot_table_ax.get_figure().tight_layout()
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2020-11-07.ipynb
pvieito/Radar-STATS
Hourly Summary Plots
hourly_summary_ax_list = hourly_summary_df \ .rename_axis(index=display_column_name_mapping) \ .rename(columns=display_column_name_mapping) \ .plot.bar( title=f"Last 24h Summary", rot=45, subplots=True, legend=False) ax_ = hourly_summary_ax_list[-1] ax_.get_figure().tight_layout() ax_.get_figure().subplots_adjust(top=0.9) _ = ax_.set_xticklabels(sorted(hourly_summary_df.index.strftime("%Y-%m-%d@%H").tolist()))
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2020-11-07.ipynb
pvieito/Radar-STATS
Publish Results
def get_temporary_image_path() -> str: return os.path.join(tempfile.gettempdir(), str(uuid.uuid4()) + ".png") def save_temporary_plot_image(ax): if isinstance(ax, np.ndarray): ax = ax[0] media_path = get_temporary_image_path() ax.get_figure().savefig(media_path) return media_path def save_temporary_dataframe_image(df): import dataframe_image as dfi media_path = get_temporary_image_path() dfi.export(df, media_path) return media_path github_repository = os.environ.get("GITHUB_REPOSITORY") if github_repository is None: github_repository = "pvieito/Radar-STATS" github_project_base_url = "https://github.com/" + github_repository display_formatters = { display_column_name_mapping["teks_per_shared_diagnosis"]: lambda x: f"{x:.2f}", display_column_name_mapping["shared_diagnoses_per_covid_case"]: lambda x: f"{x:.2%}", } daily_summary_table_html = result_summary_with_display_names_df \ .head(daily_plot_days) \ .rename_axis(index=display_column_name_mapping) \ .rename(columns=display_column_name_mapping) \ .to_html(formatters=display_formatters) multi_backend_summary_table_html = multi_backend_summary_df \ .head(daily_plot_days) \ .rename_axis(columns=display_column_name_mapping) \ .rename(columns=display_column_name_mapping) \ .rename_axis(index=display_column_name_mapping) \ .to_html(formatters=display_formatters) def format_multi_backend_cross_sharing_fraction(x): if pd.isna(x): return "-" elif round(x * 100, 1) == 0: return "" else: return f"{x:.1%}" multi_backend_cross_sharing_summary_table_html = multi_backend_cross_sharing_summary_df \ .rename_axis(columns=display_column_name_mapping) \ .rename(columns=display_column_name_mapping) \ .rename_axis(index=display_column_name_mapping) \ .to_html( classes="table-center", formatters=display_formatters, float_format=format_multi_backend_cross_sharing_fraction) multi_backend_cross_sharing_summary_table_html = \ multi_backend_cross_sharing_summary_table_html \ .replace("<tr>","<tr style=\"text-align: center;\">") extraction_date_result_summary_df = \ result_summary_df[result_summary_df.index.get_level_values("sample_date") == extraction_date] extraction_date_result_hourly_summary_df = \ hourly_summary_df[hourly_summary_df.extraction_date_with_hour == extraction_date_with_hour] covid_cases = \ extraction_date_result_summary_df.covid_cases.sum() shared_teks_by_generation_date = \ extraction_date_result_summary_df.shared_teks_by_generation_date.sum() shared_teks_by_upload_date = \ extraction_date_result_summary_df.shared_teks_by_upload_date.sum() shared_diagnoses = \ extraction_date_result_summary_df.shared_diagnoses.sum() teks_per_shared_diagnosis = \ extraction_date_result_summary_df.teks_per_shared_diagnosis.sum() shared_diagnoses_per_covid_case = \ extraction_date_result_summary_df.shared_diagnoses_per_covid_case.sum() shared_teks_by_upload_date_last_hour = \ extraction_date_result_hourly_summary_df.shared_teks_by_upload_date.sum().astype(int) report_source_regions = extraction_date_result_summary_df.index \ .get_level_values("source_regions").item().split(",") display_source_regions = ", ".join(report_source_regions) if len(report_source_regions) == 1: display_brief_source_regions = report_source_regions[0] else: display_brief_source_regions = f"{len(report_source_regions)} 🇪🇺" summary_plots_image_path = save_temporary_plot_image( ax=summary_ax_list) summary_table_image_path = save_temporary_dataframe_image( df=result_summary_with_display_names_df) hourly_summary_plots_image_path = save_temporary_plot_image( ax=hourly_summary_ax_list) 
multi_backend_summary_table_image_path = save_temporary_dataframe_image( df=multi_backend_summary_df) generation_to_upload_period_pivot_table_image_path = save_temporary_plot_image( ax=generation_to_upload_period_pivot_table_ax)
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2020-11-07.ipynb
pvieito/Radar-STATS
Save Results
report_resources_path_prefix = "Data/Resources/Current/RadarCOVID-Report-" result_summary_df.to_csv( report_resources_path_prefix + "Summary-Table.csv") result_summary_df.to_html( report_resources_path_prefix + "Summary-Table.html") hourly_summary_df.to_csv( report_resources_path_prefix + "Hourly-Summary-Table.csv") multi_backend_summary_df.to_csv( report_resources_path_prefix + "Multi-Backend-Summary-Table.csv") multi_backend_cross_sharing_summary_df.to_csv( report_resources_path_prefix + "Multi-Backend-Cross-Sharing-Summary-Table.csv") generation_to_upload_period_pivot_df.to_csv( report_resources_path_prefix + "Generation-Upload-Period-Table.csv") _ = shutil.copyfile( summary_plots_image_path, report_resources_path_prefix + "Summary-Plots.png") _ = shutil.copyfile( summary_table_image_path, report_resources_path_prefix + "Summary-Table.png") _ = shutil.copyfile( hourly_summary_plots_image_path, report_resources_path_prefix + "Hourly-Summary-Plots.png") _ = shutil.copyfile( multi_backend_summary_table_image_path, report_resources_path_prefix + "Multi-Backend-Summary-Table.png") _ = shutil.copyfile( generation_to_upload_period_pivot_table_image_path, report_resources_path_prefix + "Generation-Upload-Period-Table.png")
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2020-11-07.ipynb
pvieito/Radar-STATS
Publish Results as JSON
summary_results_api_df = result_summary_df.reset_index() summary_results_api_df["sample_date_string"] = \ summary_results_api_df["sample_date"].dt.strftime("%Y-%m-%d") summary_results_api_df["source_regions"] = \ summary_results_api_df["source_regions"].apply(lambda x: x.split(",")) today_summary_results_api_df = \ summary_results_api_df.to_dict(orient="records")[0] summary_results = dict( backend_identifier=report_backend_identifier, source_regions=report_source_regions, extraction_datetime=extraction_datetime, extraction_date=extraction_date, extraction_date_with_hour=extraction_date_with_hour, last_hour=dict( shared_teks_by_upload_date=shared_teks_by_upload_date_last_hour, shared_diagnoses=0, ), today=today_summary_results_api_df, last_7_days=last_7_days_summary, daily_results=summary_results_api_df.to_dict(orient="records")) summary_results = \ json.loads(pd.Series([summary_results]).to_json(orient="records"))[0] with open(report_resources_path_prefix + "Summary-Results.json", "w") as f: json.dump(summary_results, f, indent=4)
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2020-11-07.ipynb
pvieito/Radar-STATS
Publish on README
with open("Data/Templates/README.md", "r") as f: readme_contents = f.read() readme_contents = readme_contents.format( extraction_date_with_hour=extraction_date_with_hour, github_project_base_url=github_project_base_url, daily_summary_table_html=daily_summary_table_html, multi_backend_summary_table_html=multi_backend_summary_table_html, multi_backend_cross_sharing_summary_table_html=multi_backend_cross_sharing_summary_table_html, display_source_regions=display_source_regions) with open("README.md", "w") as f: f.write(readme_contents)
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2020-11-07.ipynb
pvieito/Radar-STATS
Publish on Twitter
enable_share_to_twitter = os.environ.get("RADARCOVID_REPORT__ENABLE_PUBLISH_ON_TWITTER") github_event_name = os.environ.get("GITHUB_EVENT_NAME") if enable_share_to_twitter and github_event_name == "schedule" and \ (shared_teks_by_upload_date_last_hour or not are_today_results_partial): import tweepy twitter_api_auth_keys = os.environ["RADARCOVID_REPORT__TWITTER_API_AUTH_KEYS"] twitter_api_auth_keys = twitter_api_auth_keys.split(":") auth = tweepy.OAuthHandler(twitter_api_auth_keys[0], twitter_api_auth_keys[1]) auth.set_access_token(twitter_api_auth_keys[2], twitter_api_auth_keys[3]) api = tweepy.API(auth) summary_plots_media = api.media_upload(summary_plots_image_path) summary_table_media = api.media_upload(summary_table_image_path) generation_to_upload_period_pivot_table_image_media = api.media_upload(generation_to_upload_period_pivot_table_image_path) media_ids = [ summary_plots_media.media_id, summary_table_media.media_id, generation_to_upload_period_pivot_table_image_media.media_id, ] if are_today_results_partial: today_addendum = " (Partial)" else: today_addendum = "" status = textwrap.dedent(f""" #RadarCOVID – {extraction_date_with_hour} Source Countries: {display_brief_source_regions} Today{today_addendum}: - Uploaded TEKs: {shared_teks_by_upload_date:.0f} ({shared_teks_by_upload_date_last_hour:+d} last hour) - Shared Diagnoses: ≤{shared_diagnoses:.0f} - Usage Ratio: ≤{shared_diagnoses_per_covid_case:.2%} Last 7 Days: - Shared Diagnoses: ≤{last_7_days_summary["shared_diagnoses"]:.0f} - Usage Ratio: ≤{last_7_days_summary["shared_diagnoses_per_covid_case"]:.2%} Info: {github_project_base_url}#documentation """) status = status.encode(encoding="utf-8") api.update_status(status=status, media_ids=media_ids)
_____no_output_____
Apache-2.0
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2020-11-07.ipynb
pvieito/Radar-STATS
Custom conversions. Here we show how custom conversions can be passed to OpenSCM-Units' `ScmUnitRegistry`.
# NBVAL_IGNORE_OUTPUT import traceback import pandas as pd from openscm_units import ScmUnitRegistry
_____no_output_____
BSD-3-Clause
notebooks/custom-conversions.ipynb
openscm/openscm-units
Custom conversions DataFrame. On initialisation, a `pd.DataFrame` can be provided which contains the custom conversions. This `pd.DataFrame` should be formatted as shown below, with an index that contains the different species and columns that contain the conversion factors for the different metrics.
metric_conversions_custom = pd.DataFrame([
    {
        "Species": "CH4",
        "Custom1": 20,
        "Custom2": 25,
    },
    {
        "Species": "N2O",
        "Custom1": 341,
        "Custom2": 300,
    },
]).set_index("Species")
metric_conversions_custom
_____no_output_____
BSD-3-Clause
notebooks/custom-conversions.ipynb
openscm/openscm-units
With such a `pd.DataFrame`, we can use custom conversions in our unit registry as shown.
# initialise the unit registry with custom conversions
unit_registry = ScmUnitRegistry(metric_conversions=metric_conversions_custom)
# add standard conversions before moving on
unit_registry.add_standards()

# start with e.g. N2O
nitrous_oxide = unit_registry("N2O")
display(f"N2O: {nitrous_oxide}")

# our unit registry allows us to make conversions using the
# conversion factors we previously defined
with unit_registry.context("Custom1"):
    display(f"N2O in CO2-equivalent: {nitrous_oxide.to('CO2')}")
_____no_output_____
BSD-3-Clause
notebooks/custom-conversions.ipynb
openscm/openscm-units
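For completeness, the second custom metric defined above can be used in exactly the same way. A minimal sketch, reusing the `unit_registry` and `nitrous_oxide` objects created in the previous cell (only the context name changes):

# select the "Custom2" conversions defined in the custom DataFrame above
with unit_registry.context("Custom2"):
    display(f"N2O in CO2-equivalent (Custom2): {nitrous_oxide.to('CO2')}")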
Interesting to note that the loss of high-frequency information had almost no effect on the SVM classifier. Also interesting that shot noise was more harmful to the vRNN than to SNN methods. Not surprising that additive white noise had the greatest effect on accuracy for all classification methods.
mf1 = pd.read_hdf('results/whitenoise_mag_exp_res_11_03_42.h5') mf2 = pd.read_hdf('results/whitenoise_mag_exp_res_10_35_19.h5') mf = pd.concat((mf1, mf2)) mf.columns mf_filt = mf[mf['noise magnitude'] < 0.15] ff = sns.lmplot("noise magnitude", "accuracy", hue="approach", data=mf_filt, x_estimator=np.mean, scatter_kws={"alpha": 0.5}, fit_reg=False, legend=False) ax = ff.axes[0][0] ax.set_title("Effect of White Noise Magnitude on Accuracy") ax.set_xlabel("Noise Magnitude") ax.set_ylabel("Accuracy") leg = ax.legend(title="Approach", bbox_to_anchor=(1, 0.6), frameon=True, fancybox=True, shadow=True, framealpha=1) fl = leg.get_frame() fl.set_facecolor('white') fl.set_edgecolor('black') ax.set_xlim((0.0, 0.11)) ff.fig.savefig("noise_mag.pdf", format="pdf")
_____no_output_____
MIT
plot_noise_results.ipynb
Seanny123/rnn-comparison
Table of Contents
- Tower of Hanoi
- Learning Outcomes
- Demo: How to Play Tower of Hanoi
- Definitions
- Demo: How to Play Tower of Hanoi
- Student Activity
- Reflection
- Did any group derive a formula for the minimum number of moves?
- Questions?
- Tower of Hanoi as an RL problem
- Tower of Hanoi Solutions
- Greedy Tower of Hanoi
- Tower of Hanoi Solutions
- THERE MUST BE A BETTER WAY!
- RECURSION
- 2 Requirements for Recursion
- Think, Pair, & Share
- Recursion Steps to Solve Tower of Hanoi
- Illustrated Recursive Steps to Solve Tower of Hanoi
- Check for understanding (x3)
- Takeaways
- Bonus Material: Dynamic Programming
- What would Dynamic Programming look like for Tower of Hanoi?
- Tower of Hanoi for Final Project
- Further Study

Tower of Hanoi

Learning Outcomes

__By the end of this session, you should be able to__:
- Solve Tower of Hanoi by hand.
- Explain how to solve Tower of Hanoi with recursion in your own words.

Demo: How to Play Tower of Hanoi

The Goal: Move all disks from start to finish.

Rules:
1. Only one disk may be moved at a time.
2. Each move consists of taking the upper disk from one of the rods and sliding it onto another rod, on top of the other disks that may already be present on that rod.
3. No disk may be placed on top of a smaller disk.

__My nephew enjoys the Tower of Hanoi.__

Definitions
- Rod: the vertical shaft.
- Disks: the items on the rod.

Demo: How to Play Tower of Hanoi
1) Solve with 1 disc.
2) Solve with 2 discs.

> If you can't write it down in English, you can't code it.
> — Peter Halpern

Student Activity

In small groups, solve Tower of Hanoi: https://www.mathsisfun.com/games/towerofhanoi-flash.html

Record the minimal number of steps for each number of discs:
1. 3 discs
2. 4 discs
3. 5 discs
4. 6 discs

If someone has never solved the puzzle, they should lead. If you have solved it, only give hints when the team is stuck.

Reflection

How difficult was each version? How many more steps were needed with each additional disk? Could you write a formula for the minimum number of moves as the number of disks increases?
%reset -fs
from IPython.display import YouTubeVideo

# 3 rings
YouTubeVideo('S4HOSbrS4bY')

# 6 rings
YouTubeVideo('iFV821yY7Ns')
_____no_output_____
Apache-2.0
01_rl_introduction__markov_decision_process/2_tower_of_hanoi_intro.ipynb
loftiskg/rl-course
Did any group derive a formula for the minimum number of moves?
# Calculate the optimal number of moves print(f"{'# disks':>7} | {'# moves':>10}") for n_disks in range(1, 21): n_moves = (2 ** n_disks)-1 print(f"{n_disks:>7} {n_moves:>10,}")
# disks | # moves 1 1 2 3 3 7 4 15 5 31 6 63 7 127 8 255 9 511 10 1,023 11 2,047 12 4,095 13 8,191 14 16,383 15 32,767 16 65,535 17 131,071 18 262,143 19 524,287 20 1,048,575
Apache-2.0
01_rl_introduction__markov_decision_process/2_tower_of_hanoi_intro.ipynb
loftiskg/rl-course
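The 2**n - 1 count above can also be cross-checked with a recursive solver. A minimal sketch, not part of the original lesson code (the rod names and function name are arbitrary):

def hanoi(n, source="A", target="C", spare="B", moves=None):
    # Move n disks from `source` to `target`, using `spare` as the intermediate rod
    if moves is None:
        moves = []
    if n == 1:
        moves.append((source, target))
        return moves
    hanoi(n - 1, source, spare, target, moves)   # clear the n-1 smaller disks out of the way
    moves.append((source, target))               # move the largest disk
    hanoi(n - 1, spare, target, source, moves)   # re-stack the smaller disks on top of it
    return moves

print(len(hanoi(3)))  # 7, matching 2**3 - 1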
Auto detection to main + 4 cropped images

**Pipeline:**
1. Load cropped image csv file
2. Apply prediction
3. Save prediction result back to csv file
   * pred_value
   * pred_cat
   * pred_bbox
# Import libraries %matplotlib inline from pycocotools.coco import COCO from keras.models import load_model # from utils.utils import * # from utils.bbox import * # from utils.image import load_image_pixels from keras.preprocessing.image import load_img from keras.preprocessing.image import img_to_array import numpy as np import pandas as pd import skimage.io as io import matplotlib.pyplot as plt import pylab import torchvision.transforms.functional as TF import PIL import os import json from urllib.request import urlretrieve pylab.rcParams['figure.figsize'] = (8.0, 10.0) # Define image directory projectDir=os.getcwd() dataDir='.' dataType='val2017' imageDir='{}/images/'.format(dataDir) annFile='{}/images/{}_selected/annotations/instances_{}.json'.format(dataDir,dataType,dataType)
_____no_output_____
MIT
thesis_code/auto_detection.ipynb
hhodac/keras-yolo3
Utilities
class BoundBox: def __init__(self, xmin, ymin, xmax, ymax, objness = None, classes = None): self.xmin = xmin self.ymin = ymin self.xmax = xmax self.ymax = ymax self.objness = objness self.classes = classes self.label = -1 self.score = -1 def get_label(self): if self.label == -1: self.label = np.argmax(self.classes) return self.label def get_score(self): if self.score == -1: self.score = self.classes[self.get_label()] return self.score def _sigmoid(x): return 1. / (1. + np.exp(-x)) def decode_netout(netout, anchors, obj_thresh, net_h, net_w): grid_h, grid_w = netout.shape[:2] nb_box = 3 netout = netout.reshape((grid_h, grid_w, nb_box, -1)) nb_class = netout.shape[-1] - 5 boxes = [] netout[..., :2] = _sigmoid(netout[..., :2]) netout[..., 4:] = _sigmoid(netout[..., 4:]) netout[..., 5:] = netout[..., 4][..., np.newaxis] * netout[..., 5:] netout[..., 5:] *= netout[..., 5:] > obj_thresh for i in range(grid_h*grid_w): row = i // grid_w col = i % grid_w for b in range(nb_box): # 4th element is objectness score objectness = netout[int(row)][int(col)][b][4] if(objectness.all() <= obj_thresh): continue # first 4 elements are x, y, w, and h x, y, w, h = netout[int(row)][int(col)][b][:4] x = (col + x) / grid_w # center position, unit: image width y = (row + y) / grid_h # center position, unit: image height w = anchors[2 * b + 0] * np.exp(w) / net_w # unit: image width h = anchors[2 * b + 1] * np.exp(h) / net_h # unit: image height # last elements are class probabilities classes = netout[int(row)][col][b][5:] box = BoundBox(x-w/2, y-h/2, x+w/2, y+h/2, objectness, classes) boxes.append(box) return boxes def correct_yolo_boxes(boxes, image_h, image_w, net_h, net_w): new_w, new_h = net_w, net_h for i in range(len(boxes)): x_offset, x_scale = (net_w - new_w)/2./net_w, float(new_w)/net_w y_offset, y_scale = (net_h - new_h)/2./net_h, float(new_h)/net_h boxes[i].xmin = int((boxes[i].xmin - x_offset) / x_scale * image_w) boxes[i].xmax = int((boxes[i].xmax - x_offset) / x_scale * image_w) boxes[i].ymin = int((boxes[i].ymin - y_offset) / y_scale * image_h) boxes[i].ymax = int((boxes[i].ymax - y_offset) / y_scale * image_h) def _interval_overlap(interval_a, interval_b): x1, x2 = interval_a x3, x4 = interval_b if x3 < x1: if x4 < x1: return 0 else: return min(x2,x4) - x1 else: if x2 < x3: return 0 else: return min(x2,x4) - x3 def bbox_iou(box1, box2): intersect_w = _interval_overlap([box1.xmin, box1.xmax], [box2.xmin, box2.xmax]) intersect_h = _interval_overlap([box1.ymin, box1.ymax], [box2.ymin, box2.ymax]) intersect = intersect_w * intersect_h w1, h1 = box1.xmax-box1.xmin, box1.ymax-box1.ymin w2, h2 = box2.xmax-box2.xmin, box2.ymax-box2.ymin union = w1*h1 + w2*h2 - intersect return float(intersect) / union def do_nms(boxes, nms_thresh): if len(boxes) > 0: nb_class = len(boxes[0].classes) else: return for c in range(nb_class): sorted_indices = np.argsort([-box.classes[c] for box in boxes]) for i in range(len(sorted_indices)): index_i = sorted_indices[i] if boxes[index_i].classes[c] == 0: continue for j in range(i+1, len(sorted_indices)): index_j = sorted_indices[j] if bbox_iou(boxes[index_i], boxes[index_j]) >= nms_thresh: boxes[index_j].classes[c] = 0 # load and prepare an image def load_image_pixels(filename, shape): # load the image to get its shape image = load_img(filename) width, height = image.size # load the image with the required size image = load_img(filename, target_size=shape) # convert to numpy array image = img_to_array(image) # scale pixel values to [0, 1] image = image.astype('float32') image /= 
255.0 # add a dimension so that we have one sample image = np.expand_dims(image, 0) return image, width, height # get all of the results above a threshold def get_boxes(boxes, labels, thresh): v_boxes, v_labels, v_scores = list(), list(), list() # enumerate all boxes for box in boxes: # enumerate all possible labels for i in range(len(labels)): # check if the threshold for this label is high enough if box.classes[i] > thresh: v_boxes.append(box) v_labels.append(labels[i]) v_scores.append(box.classes[i]*100) # don't break, many labels may trigger for one box return v_boxes, v_labels, v_scores # draw all results def draw_boxes(filename, v_boxes, v_labels, v_scores): # load the image data = plt.imread(filename) # plot the image plt.imshow(data) # get the context for drawing boxes ax = plt.gca() # plot each box for i in range(len(v_boxes)): box = v_boxes[i] # get coordinates y1, x1, y2, x2 = box.ymin, box.xmin, box.ymax, box.xmax # calculate width and height of the box width, height = x2 - x1, y2 - y1 # create the shape rect = plt.Rectangle((x1, y1), width, height, fill=False, color='white') # draw the box ax.add_patch(rect) # draw text and score in top left corner label = "%s (%.3f)" % (v_labels[i], v_scores[i]) plt.text(x1, y1, label, color='white') # show the plot plt.show()
_____no_output_____
MIT
thesis_code/auto_detection.ipynb
hhodac/keras-yolo3
Load model
# load yolov3 model model = load_model('yolov3_model.h5') # define the expected input shape for the model input_w, input_h = 416, 416 # define the anchors anchors = [[116,90, 156,198, 373,326], [30,61, 62,45, 59,119], [10,13, 16,30, 33,23]] # define the probability threshold for detected objects class_threshold = 0.6 # define the labels labels = ["person", "bicycle", "car", "motorbike", "airplane", "bus", "train", "truck", "boat", "traffic light", "fire hydrant", "stop sign", "parking meter", "bench", "bird", "cat", "dog", "horse", "sheep", "cow", "elephant", "bear", "zebra", "giraffe", "backpack", "umbrella", "handbag", "tie", "suitcase", "frisbee", "skis", "snowboard", "sports ball", "kite", "baseball bat", "baseball glove", "skateboard", "surfboard", "tennis racket", "bottle", "wine glass", "cup", "fork", "knife", "spoon", "bowl", "banana", "apple", "sandwich", "orange", "broccoli", "carrot", "hot dog", "pizza", "donut", "cake", "chair", "couch", "pottedplant", "bed", "diningtable", "toilet", "tvmonitor", "laptop", "mouse", "remote", "keyboard", "cell phone", "microwave", "oven", "toaster", "sink", "refrigerator", "book", "clock", "vase", "scissors", "teddy bear", "hair drier", "toothbrush"]
WARNING:tensorflow:From /Users/haiho/PycharmProjects/yolov3_huynhngocanh/venv/lib/python3.5/site-packages/tensorflow_core/python/ops/resource_variable_ops.py:1630: calling BaseResourceVariable.__init__ (from tensorflow.python.ops.resource_variable_ops) with constraint is deprecated and will be removed in a future version. Instructions for updating: If using Keras pass *_constraint arguments to layers.
MIT
thesis_code/auto_detection.ipynb
hhodac/keras-yolo3
Gather & concatenate all csv files
all_files = [] cat = 'book' for subdir, dirs, files in os.walk(os.path.join(imageDir,cat)): for filename in files: filepath = subdir + os.sep + filename if filepath.endswith(".csv"): all_files.append(filepath) print(filepath) li = [] for filename in all_files: df = pd.read_csv(filename, index_col=None, header=0) li.append(df) df_images = pd.concat(li, axis=0, ignore_index=True) df_images.head()
_____no_output_____
MIT
thesis_code/auto_detection.ipynb
hhodac/keras-yolo3
Apply prediction to multiple images
df_pred = pd.DataFrame(columns=['pred','pred_cat','pred_bbox']) iou_threshold = 0.5 for idx, item in df_images.iterrows(): file_path = os.path.join(item['path'], item['filename']) image, image_w, image_h = load_image_pixels(file_path, (input_w, input_h)) yhat = model.predict(image) boxes = list() for i in range(len(yhat)): # decode the output of the network boxes += decode_netout(yhat[i][0], anchors[i], class_threshold, input_h, input_w) # correct the sizes of the bounding boxes for the shape of the image correct_yolo_boxes(boxes, image_h, image_w, input_h, input_w) # suppress non-maximal boxes do_nms(boxes, 0.5) # get the details of the detected objects v_boxes, v_labels, v_scores = get_boxes(boxes, labels, class_threshold) ########## # summarize what we found # for i in range(len(v_boxes)): # print(v_labels[i], v_scores[i]) # draw what we found # draw_boxes(file_path, v_boxes, v_labels, v_scores) ########## boxes = item['bbox'].lstrip("[") boxes = boxes.rstrip("]") boxes = boxes.strip() x, y, w, h = list(map(int,boxes.split(","))) _box = BoundBox(x, y, x+w, y+h) is_detected = False for i, box in enumerate(v_boxes): # y1, x1, y2, x2 = box.ymin, box.xmin, box.ymax, box.xmax # print(bbox_iou(box, _box)) # print(bbox_iou(_box, box)) iou = bbox_iou(box, _box) if iou > iou_threshold: df_pred = df_pred.append({ 'pred': v_scores[i], 'pred_cat': v_labels[i], 'pred_bbox': [box.xmin, box.ymin, box.xmax-box.xmin, box.ymax-box.ymin] }, ignore_index=True) is_detected=True break if not is_detected: df_pred = df_pred.append({ 'pred': np.nan, 'pred_cat': np.nan, 'pred_bbox': np.nan }, ignore_index=True) df = pd.concat([df_images, df_pred], axis=1) df.info() df.head() df.to_csv(imageDir+cat+"/prediction_results.csv", index=False)
_____no_output_____
MIT
thesis_code/auto_detection.ipynb
hhodac/keras-yolo3
Generate speaker labels from max raw audio magnitudes
data_dir = '/Users/cgn/Dropbox (Facebook)/EGOCOM/raw_audio/wav/' fn_dict = {} for fn in sorted(os.listdir(data_dir)): key = fn[9:23] + fn[32:37] if 'part' in fn else fn[9:21] fn_dict[key] = fn_dict[key] + [fn] if key in fn_dict else [fn] samplerate = 44100 window = 1 # Averages signals with windows of N seconds. window_length = int(samplerate * window) labels = {} for key in list(fn_dict.keys()): print(key, end = " | ") fns = fn_dict[key] wavs = [wavfile.read(data_dir + fn)[1] for fn in fns] duration = min(len(w) for w in wavs) wavs = np.stack([w[:duration] for w in wavs]) # Only use the magnitudes of both left and right for each audio wav. mags = abs(wavs).sum(axis = 2) # DOWNSAMPLED (POOLED) Discretized/Fast (no overlap) gaussian smoothing with one-second time window. kwargs = { 'pool_size': window_length, 'weights': gaussian_kernel(kernel_length=window_length), 'filler': False, } pooled_mags = np.apply_along_axis(audio.avg_pool_1d, 1, mags, **kwargs) # Create noisy speaker labels threshold = np.percentile(pooled_mags, 10, axis = 1) no_one_speaking = (pooled_mags > np.expand_dims(threshold, axis = 1)).sum(axis = 0) == 0 speaker_labels = np.argmax(pooled_mags, axis = 0) speaker_labels[no_one_speaking] = -1 # User 1-based indexing for speaker labels (ie increase by 1) speaker_labels = [z if z < 0 else z + 1 for z in speaker_labels] # Store results labels[key] = speaker_labels # Write result to file loc = '/Users/cgn/Dropbox (Facebook)/EGOCOM/raw_audio_speaker_labels_{}.json'.format(str(window)) def default(o): if isinstance(o, np.int64): return int(o) raise TypeError import json with open(loc, 'w') as fp: json.dump(labels, fp, default = default) fp.close() # Read result into a dict import json with open(loc, 'r') as fp: labels = json.load(fp) fp.close()
_____no_output_____
MIT
paper_experiments_work_log/speaker_recognition.ipynb
cgnorthcutt/EgoCom-Dataset
Generate ground truth speaker labels
def create_gt_speaker_labels( df_times_speaker, duration_in_seconds, time_window_seconds = 0.5, ): stack = rev_times[::-1] stack_time = stack.pop() label_times = np.arange(0, duration_in_seconds, time_window_seconds) result = [-1] * len(label_times) for i, t in enumerate(label_times): while stack_time['endTime'] > t and stack_time['endTime'] <= t + time_window_seconds: result[i] = stack_time['speaker'] if len(stack) == 0: break stack_time = stack.pop() return result df = pd.read_csv("/Users/cgn/Dropbox (Facebook)/EGOCOM/ground_truth_transcriptions.csv")[ ["key", "endTime", "speaker", ] ].dropna() gt_speaker_labels = {} for key, sdf in df.groupby('key'): print(key, end = " | ") wavs = [wavfile.read(data_dir + fn)[1] for fn in fn_dict[key]] duration = min(len(w) for w in wavs) DL = sdf[["endTime", "speaker"]].to_dict('list') rev_times = [dict(zip(DL,t)) for t in zip(*DL.values())] duration_in_seconds = np.ceil(duration / float(samplerate)) gt_speaker_labels[key] = create_gt_speaker_labels(rev_times, duration_in_seconds, window) # Write result to file loc = '/Users/cgn/Dropbox (Facebook)/EGOCOM/rev_ground_truth_speaker_labels_{}.json'.format(str(window)) with open(loc, 'w') as fp: json.dump(gt_speaker_labels, fp, default = default) fp.close() # Read result into a dict with open(loc, 'r') as fp: gt_speaker_labels = json.load(fp) fp.close() scores = [] for key in labels.keys(): true = gt_speaker_labels[key] pred = labels[key] if len(true) > len(pred): true = true[:-1] # diff = round(accuracy_score(true[:-1], pred) - accuracy_score(true[1:], pred), 3) # scores.append(diff) # print(key, accuracy_score(true[1:], pred), accuracy_score(true[:-1], pred), diff) score = accuracy_score(true, pred) scores.append(score) print(key, np.round(score, 3)) print('Average accuracy:', str(np.round(np.mean(scores), 3)* 100) + '%') loc = '/Users/cgn/Dropbox (Facebook)/EGOCOM/subtitles/' for key in labels.keys(): gt = gt_speaker_labels[key] est = labels[key] with open(loc + "speaker_" + key + '.srt', 'w') as f: print(key, end = " | ") for t, s_est in enumerate(est): s_gt = gt[t] print(t + 1, file = f) print(async_srt_format_timestamp(t*window), end = "", file = f) print(' --> ', end = '', file = f) print(async_srt_format_timestamp(t*window+window), file = f) print('Rev.com Speaker:', end = " ", file = f) if s_gt == -1: print('No one is speaking', file = f) elif s_gt == 1: print('Curtis', file = f) else: print('Speaker ' + str(s_gt), file = f) print('MaxMag Speaker:', end = " ", file = f) if s_est == -1: print('No one is speaking', file = f) elif s_est == 1: print('Curtis', file = f) else: print('Speaker ' + str(s_est), file = f) print(file = f)
day_1__con_1__part1 | day_1__con_1__part2 | day_1__con_1__part3 | day_1__con_1__part4 | day_1__con_1__part5 | day_1__con_2__part1 | day_1__con_2__part2 | day_1__con_2__part3 | day_1__con_2__part4 | day_1__con_2__part5 | day_1__con_3__part1 | day_1__con_3__part2 | day_1__con_3__part3 | day_1__con_3__part4 | day_1__con_4__part1 | day_1__con_4__part2 | day_1__con_4__part3 | day_1__con_4__part4 | day_1__con_5__part1 | day_1__con_5__part2 | day_1__con_5__part3 | day_1__con_5__part4 | day_1__con_5__part5 | day_2__con_1__part1 | day_2__con_1__part2 | day_2__con_1__part3 | day_2__con_1__part4 | day_2__con_1__part5 | day_2__con_2__part1 | day_2__con_2__part2 | day_2__con_2__part3 | day_2__con_2__part4 | day_2__con_3 | day_2__con_4 | day_2__con_5 | day_2__con_6 | day_2__con_7 | day_3__con_1 | day_3__con_2 | day_3__con_3 | day_3__con_4 | day_3__con_5 | day_3__con_6 | day_4__con_1 | day_4__con_2 | day_4__con_3 | day_4__con_4 | day_4__con_5 | day_4__con_6 | day_5__con_1 | day_5__con_2 | day_5__con_3 | day_5__con_4 | day_5__con_5 | day_5__con_6 | day_5__con_7 | day_5__con_8 | day_6__con_1 | day_6__con_2 | day_6__con_3 | day_6__con_4 | day_6__con_5 | day_6__con_6 |
MIT
paper_experiments_work_log/speaker_recognition.ipynb
cgnorthcutt/EgoCom-Dataset
Generate subtitles
for key in labels.keys(): gt = labels[key] with open("subtitles/est_" + key + '.srt', 'w') as f: for t, s in enumerate(gt): print(t + 1, file = f) print(async_srt_format_timestamp(t*window), end = "", file = f) print(' --> ', end = '', file = f) print(async_srt_format_timestamp(t*window+window), file = f) print('Max mag of wavs speaker id', file = f) if s == -1: print('No one is speaking', file = f) elif s == 1: print('Curtis', file = f) else: print('Speaker ' + str(s), file = f) print(file = f)
_____no_output_____
MIT
paper_experiments_work_log/speaker_recognition.ipynb
cgnorthcutt/EgoCom-Dataset
Cannibals and missionaries, solved with breadth-first search.
from copy import deepcopy from collections import deque import sys # (m, c, b) hace referencia a el número de misioneros, canibales y el bote class Estado(object): def __init__(self, misioneros, canibales, bote): self.misioneros = misioneros self.canibales = canibales self.bote = bote #se establecen los movimientos que estos tendran def siguientes(self): if self.bote == 1: signo = -1 direction = "Ida" else: signo = 1 direction = "Vuelta" for m in range(3): for c in range(3): nuevo_Estado = Estado(self.misioneros+signo*m, self.canibales+signo*c, self.bote+signo*1); if m+c >= 1 and m+c <= 2 and nuevo_Estado.validar(): # comprobar si la acción y el estado resultante son válidos accion = " %d misioneros y %d canibales %s. %r" % ( m, c, direction, nuevo_Estado) yield accion, nuevo_Estado def validar(self): # validacion inicial if self.misioneros < 0 or self.canibales < 0 or self.misioneros > 3 or self.canibales > 3 or (self.bote != 0 and self.bote != 1): return False # luego verifica si los misioneros superan en número a los canibales # más canibales que misioneros en la orilla original if self.canibales > self.misioneros and self.misioneros > 0: return False # más canibales que misioneros en otra orilla if self.canibales < self.misioneros and self.misioneros < 3: return False return True # valida estado objetivo def estado_final(self): return self.canibales == 0 and self.misioneros == 0 and self.bote == 0 # funcion para devolver los estados en los que se encuentra def __repr__(self): return "< Estado (%d, %d, %d) >" % (self.misioneros, self.canibales, self.bote) # clase nodo class Nodo(object): #se crea la clase nodo para hacer la relacion con sus adyacencias def __init__(self, nodo_pariente, estado, accion, profundidad): self.nodo_pariente = nodo_pariente self.estado = estado self.accion = accion self.profundidad = profundidad # metodo expandir #Busca lo nodos adyacentes def expandir(self): for (accion, estado_sig) in self.estado.siguientes(): nodo_sig = Nodo( nodo_pariente=self, estado=estado_sig, accion=accion, profundidad=self.profundidad + 1) yield nodo_sig # funcion para guardar y devolver la solucion del problema def devolver_solucion(self): solucion = [] nodo = self while nodo.nodo_pariente is not None: solucion.append(nodo.accion) nodo = nodo.nodo_pariente solucion.reverse() return solucion # metodo BFS - busqueda por anchura def BFS(Estado_inicial): nodo_inicial = Nodo( nodo_pariente=None, estado=Estado_inicial, accion=None, profundidad=0) # Se establece la conexion con los nodos hijos cola = deque([nodo_inicial]) #se crea la cola profundidad_maxima = -1 while True: if not cola: return None nodo = cola.popleft() if nodo.profundidad > profundidad_maxima: #se defienen los nuevos nodos profundidad_maxima = nodo.profundidad if nodo.estado.estado_final(): solucion = nodo.devolver_solucion() return solucion cola.extend(nodo.expandir()) def main(): #Caso prueba Estado_inicial = Estado(3,3,1) solucion = BFS(Estado_inicial) if solucion is None: print("no tiene solucion") else: for pasos in solucion: print("%s" % pasos) if __name__ == "__main__": main()
0 misioneros y 2 canibales Ida. < Estado (3, 1, 0) > 0 misioneros y 1 canibales Vuelta. < Estado (3, 2, 1) > 0 misioneros y 2 canibales Ida. < Estado (3, 0, 0) > 0 misioneros y 1 canibales Vuelta. < Estado (3, 1, 1) > 2 misioneros y 0 canibales Ida. < Estado (1, 1, 0) > 1 misioneros y 1 canibales Vuelta. < Estado (2, 2, 1) > 2 misioneros y 0 canibales Ida. < Estado (0, 2, 0) > 0 misioneros y 1 canibales Vuelta. < Estado (0, 3, 1) > 0 misioneros y 2 canibales Ida. < Estado (0, 1, 0) > 0 misioneros y 1 canibales Vuelta. < Estado (0, 2, 1) > 0 misioneros y 2 canibales Ida. < Estado (0, 0, 0) >
MIT
Artificial-Int.ipynb
danielordonezg/Machine-Learning-Algorithms
A Python function that receives a graph, a start node, and an end node, and returns the route from the start node to the end node using depth-first search. The classes Grafo and Nodo must be created with their respective methods and attributes. The function must return None if no route is possible.
class Enlace: #pendientes def __init__(self, a=None, b=None): self.a = a self.b = b def __eq__(self, other): return self.a == other.a and self.b == other.b def __str__(self): return "(" + str(self.a) + "," + str(self.b) + ")" def __repr__(self): return self.__str__() class Nodo: def __init__(self, padre=None, nombre=""): self.padre = padre self.nombre = nombre def __eq__(self, other): return self.nombre == other.nombre def __str__(self): return self.nombre def __repr__(self): return self.__str__() class Grafo: #se crea el grafo con los nodos y enlaces entre si def __init__(self, nodos=[], enlaces=[]): self.nodos = nodos self.enlaces = enlaces def __str__(self): return "Nodos : " + str(self.nodos) + " Enlaces : " + str(self.enlaces) def __repr__(self): return self.__str__() def hallarRuta(grafo, nodoInicial, nodoFinal): #Se halla el recorrido final del grafo actual = nodoInicial p=[actual] estadoFinal = nodoFinal visitados = [actual] rutaFinal=[] while len(p) > 0: # se verifica que el grafo no este vacio, que a este llegen los nodos adyacentes if (actual!=estadoFinal): siguiente = generar_siguiente(actual, grafo, visitados) if siguiente != Nodo(nombre=""): p.append(siguiente) actual = p[len(p)-1] visitados.append(actual)#se toma cómo agrega a la "ruta" de visitados el nodo en el que nos hallamos else: p.pop() actual = p[len(p)-1] else: while len(p) > 0: rutaFinal.insert(0,p.pop())#finalmente se le insertan todos todos los visitados a rutaFinal break print("La ruta final es") print(rutaFinal) def generar_siguiente(actual, grafo, visitados):# una ves visitado uno de los nodos, pasa al siguiente nodo, al siguiente hijo #comprueba si este nodo ya fue visitado o no for i in range(len(grafo.enlaces)): if actual.nombre == grafo.enlaces[i].a: nodoSiguiente = Nodo(padre=actual.nombre, nombre=grafo.enlaces[i].b) if nodoSiguiente not in visitados: return nodoSiguiente break return Nodo(nombre="") #EJEMPLO #acontinuación se definen las adyacencia del grafo a consultar el recorrrido nodos = [Nodo(nombre="A"),Nodo(nombre="B"),Nodo(nombre="C"),Nodo(nombre="D"),Nodo(nombre="F"),Nodo(nombre="E")] E1 = Enlace(a = "A", b = "C" ) E2 = Enlace(a = "A", b = "D" ) E3 = Enlace(a = "A", b = "F" ) E4 = Enlace(a = "B", b = "E" ) E5 = Enlace(a = "C", b = "F" ) E6 = Enlace(a = "D", b = "E" ) E7 = Enlace(a = "E", b = "F" ) enlaces = [E1,E2,E3,E4,E5,E6,E7] grafo = Grafo(nodos=nodos,enlaces=enlaces) nodoA = Nodo(nombre="A") nodoB = Nodo(nombre="F") ruta = hallarRuta(grafo, nodoA, nodoB) print(ruta)
La ruta final es [A, C, F] None
MIT
Artificial-Int.ipynb
danielordonezg/Machine-Learning-Algorithms
Develop a Python program that solves the 8-number sliding puzzle (8-puzzle) using breadth-first search. The program must read the initial state from a file. Some configurations have no solution.
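Before the class below, the claim that some configurations have no solution can be tested with a standard parity check. A minimal sketch, not part of the assignment code (the function name is my own):

def is_solvable(matrix):
    # For the 3x3 sliding puzzle, a state can reach the ordered goal
    # exactly when its number of inversions (ignoring the blank, 0) is even.
    flat = [v for row in matrix for v in row if v != 0]
    inversions = sum(
        1
        for i in range(len(flat))
        for j in range(i + 1, len(flat))
        if flat[i] > flat[j]
    )
    return inversions % 2 == 0

print(is_solvable([[1, 2, 0], [3, 4, 5], [6, 7, 8]]))  # True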
class Estados():
    def __init__(self, Mat, Npadre=None):
        self.Npadre = Npadre
        self.Mat = Mat  # e.g. Mat = [[0, 1, 2], [3, 4, 5], [6, 7, 8]]

    def BuscZ(self):
        # Search for the 0 (blank tile) inside the matrix; returns [row, column]
        Pos = []
        for y, i in enumerate(self.Mat):
            if 0 in i:
                Pos.append(y)
                Pos.append(i.index(0))
        return Pos

    def BuscaH(self):
        # Search for children (moves): up, down, right, left
        # (left unimplemented in the original notebook)
        pass

# prueba = Estados([[1, 2, 0], [3, 4, 5], [6, 7, 8]])
# prueba.BuscZ()
_____no_output_____
MIT
Artificial-Int.ipynb
danielordonezg/Machine-Learning-Algorithms
Develop a Python program that finds the exit route in a maze represented by a matrix of 0s and 1s. A 0 means the cell can be passed through, a 1 means there is a wall in that cell, and a 2 marks the exit. The program must read the maze configuration from a file, ask the user for the initial state, and draw the maze with the exit route. Depth-first search must be used.
#creacion de ambiente from IPython.display import display import ipywidgets as widgets import time import random # se crea el tablero class Tablero: def __init__(self, tamanoCelda=(40, 40), nCeldas=(5,5)): #dimensiones del tablero self.out = widgets.HTML() display(self.out) self.tamanoCelda = tamanoCelda self.nCeldas = nCeldas def dibujar(self, agente , trashes, obstaculos,pila=[]): tablero = "<h1 style='color:green'>Encuentra la salida </h1>" tablero+="<br>" tablero += "<table border='1' >{}</table>" filas = "" for i in range(self.nCeldas[0]): s = "" for j in range(self.nCeldas[1]): agenteaux = Agente(x = j, y = i) if agente == agenteaux: contenido = \ "<span style='font-size:{tamanoEmoticon}px;'>{emoticon}</span>".\ format(tamanoEmoticon=agente.tamanoEmoticon, emoticon=agente.emoticon) elif agenteaux in trashes: index = trashes.index(agenteaux) contenido = \ "<span style='font-size:{tamanoEmoticon}px;'>{emoticon}</span>".\ format(tamanoEmoticon=trashes[index].tamanoEmoticon, emoticon=trashes[index].emoticon) elif agenteaux in obstaculos: index = obstaculos.index(agenteaux) contenido = \ "<span style='font-size:{tamanoEmoticon}px;'>{emoticon}</span>".\ format(tamanoEmoticon=obstaculos[index].tamanoEmoticon, emoticon=obstaculos[index].emoticon) elif Nodo(x=j,y=i) in pila: contenido = \ "<span style='font-size:{tamanoEmoticon}px;'>{emoticon}</span>".\ format(tamanoEmoticon=30, emoticon="👣") else: contenido="" s += "<td style='height:{alto}px;width:{ancho}px'>{contenido}</td>".\ format(alto=self.tamanoCelda[0], ancho=self.tamanoCelda[1], contenido=contenido) filas += "<tr>{}</tr>".format(s) tablero = tablero.format(filas) self.out.value=tablero return trashes #Agente #se crea la clase agente y los movimientos que este tendra class Agente: def __init__(self, x=2, y=2, emoticon="🧍‍♂️", tamanoEmoticon=30): #estado inicial del agente self.x = x self.y = y self.emoticon = emoticon self.tamanoEmoticon = tamanoEmoticon def __eq__(self, other): return self.x == other.x and self.y == other.y def __str__(self): return "("+str(self.x)+","+str(self.y)+","+self.emoticon+")" def __repr__(self): return self.__str__() #se definen los movimientos que tendra el agente #movimientos en "X" y en "Y" respectivamente def Abajo(self): # movimiento hacia abajo self.y += 1 def Arriba(self): #movimiento hacia arriba self.y -= 1 def Derecha(self): #movimiento hacia la derecha self.x += 1 def Izquierda(self): #Movimiento hacia la izquierda self.x -= 1 class Nodo: # se define la clase nodo def __init__(self, x=0, y=0): self.x = x self.y = y def __eq__(self, other): return self.x == other.x and self.y==other.y def __str__(self): return str("(posX: "+str(self.x)+", posY: "+str(self.y)+")") def __repr__(self): return self.__str__() def generar_siguiente(actual, visitados): #se comprueba si la posicion ya fue visitado o no, si ya lo fue busca otro nodo posx=int(actual.x) posy=int(actual.y) if posy < len(nombres)-1: #Movimiento hacia abajo Abajo=int(arreglo[posy+1][posx]) if Abajo==0 or Abajo==2: Siguiente=Nodo(x=posx,y=posy+1) if Siguiente not in visitados: return Siguiente #Movimiento hacia la derecha if posx < len(nombres)-1 : Derecha=int(arreglo[posy][posx+1]) if Derecha==0 or Derecha==2: Siguiente=Nodo(x=posx+1,y=posy) if Siguiente not in visitados: return Siguiente if posy > 0 : #MOVIMIENTO HACIA ARRIBA, MIENTRAS QUE NI SE SAlGA SE LAS DIMENSIONES Arriba=int(arreglo[posy-1][posx]) if Arriba==0 or Arriba==2: Siguiente=Nodo(x=posx,y=posy-1) if Siguiente not in visitados: return Siguiente if posx>0: #MOVIMIENTO HACIA LA 
IZQUIERDA, HASTA EL BORDE DEL ESCENARIO Izq=int(arreglo[posy][posx-1]) if Izq==0 or Izq==2: Siguiente=Nodo(x=posx-1,y=posy) if Siguiente not in visitados: return Siguiente return Nodo(x=-1,y=-1) def HallarRuta(agente , elementos): # SE DEFINE LA FUNCION, LA CUAL NOS AYUDARA A BUSCAR LA SALIDA DEL LABERINTO escenario=elementos[0] # print(arreglo) for i in range(len(arreglo)):# se recorre nombres.append(i) #rSE RECORRE TODO EL GRAFO PARA SABER EL NODO FINAL estadoFinal=Nodo() for i in range(len(nombres)): arreglito=arreglo[i] for j in range(len(arreglito)): if int(arreglito[j])==2: estadoFinal=Nodo(x=i,y=j) actual = Nodo(x=agente.x,y=agente.y) visitados = [actual] pila=[actual] while len(pila)!=0 and actual != estadoFinal: # MIENTRAS LA PILA EN LA QUE SE ALMACENAN LOS NO CONSULTADOS NO SEA 0 #Se busca el siguiente nodo, ignorando los ya visitados siguienteNodo=generar_siguiente(actual,visitados) if siguienteNodo != Nodo(x=-1,y=-1): pila.append(siguienteNodo)# el nodo actual pasa a ser visitado actual=siguienteNodo # se busca un nuevo nodo agente.x=int(actual.x) agente.y=int(actual.y) visitados.append(actual) else: pila.pop() actual=pila[len(pila)-1] agente.x=int(actual.x) agente.y=int(actual.y) escenario.dibujar(agente, elementos[1], elementos[2],pila) # se dibuja el escenario time.sleep(1) # tiempo de retraso #Importar para leer archivo csv from google.colab import files files.upload() import csv, operator def obtenerArregloArchivo(): Trashes=[] Obstaculos=[] cont=0 with open('EscenarioPunto#4.csv') as csvarchivo: entrada = csv.reader(csvarchivo) arreglo= [] for reg in entrada: arreglo.append(reg) if cont == 0: cantfilas = len(reg) for i in range(len(reg)): if len(reg) == 1: agente.energia = int(reg[0]) else: if reg[i] == '1': Obstaculos.append(Agente(x=i , y=cont , emoticon="⛓️" )) if reg[i] == '2': Trashes.append(Agente(x=i , y=cont , emoticon="🥇" )) cont+=1 escenario = Tablero(nCeldas=(cont,cantfilas)) elementos = [escenario , Trashes, Obstaculos, agente, arreglo] return elementos #crearEscenarioArchivo() #Posicion inicil del agente posx = input(" X: ") posy = input(" Y: ") nombres=[] agente = Agente(x=int(posx), y = int(posy)) elementos = obtenerArregloArchivo() arreglo=elementos[4] ruta = HallarRuta(agente , elementos)
_____no_output_____
MIT
Artificial-Int.ipynb
danielordonezg/Machine-Learning-Algorithms
Prelim Exam

Question 1. A 4 x 4 matrix whose diagonal elements are all ones (1s).
import numpy as np A = np.array([1,1,1,1,]) C = np.diag(A) print (C)
[[1 0 0 0] [0 1 0 0] [0 0 1 0] [0 0 0 1]]
Apache-2.0
Prelim_Exam.ipynb
MishcaGestoso/Linear-Algebra-58019
Question 2. Double all the values of each element.
import numpy as np
A = np.array([1, 1, 1, 1])
C = np.diag(A)   # rebuild the diagonal matrix from Question 1
print(C * 2)     # double the value of every element of array C
[[2 0 0 0] [0 2 0 0] [0 0 2 0] [0 0 0 2]]
Apache-2.0
Prelim_Exam.ipynb
MishcaGestoso/Linear-Algebra-58019
Question 3. The cross product of the vectors A = [2,7,4] and B = [3,9,8].
import numpy as np A = np.array([2, 7, 4]) B = np.array([3, 9, 8]) #To compute the cross of arrays A and B cross = np.cross(A,B) print(cross)
[20 -4 -3]
Apache-2.0
Prelim_Exam.ipynb
MishcaGestoso/Linear-Algebra-58019
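As a quick sanity check (not part of the original exam answer), the cross product should be orthogonal to both input vectors, so both dot products below come out as zero:

import numpy as np
A = np.array([2, 7, 4])
B = np.array([3, 9, 8])
cross = np.cross(A, B)
# Both results are 0 because the cross product is perpendicular to A and B
print(np.dot(cross, A), np.dot(cross, B))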
SkyScan Config

Make temporary changes to a running SkyScan instance. The settings revert to the values in the environment file when SkyScan is restarted.
broker="192.168.1.47" # update with the IP for the Raspberry PI !pip install paho-mqtt import paho.mqtt.client as mqtt import json client = mqtt.Client("notebook-config")
_____no_output_____
Apache-2.0
Config.ipynb
rcaudill/SkyScan
Camera Zoom

This is how much the camera is zoomed in. It is an Int number between 0-9999 (max).
client.connect(broker) data = {} data['cameraZoom'] = 9999 # Update this Value json_data = json.dumps(data) client.publish("skyscan/config/json",json_data)
_____no_output_____
Apache-2.0
Config.ipynb
rcaudill/SkyScan
Camera Delay

Float value for the number of seconds to wait after sending the camera move API command, before sending the take picture API command.
client.connect(broker) data = {} data['cameraDelay'] = 0.25 # Update this Value json_data = json.dumps(data) client.publish("skyscan/config/json",json_data)
_____no_output_____
Apache-2.0
Config.ipynb
rcaudill/SkyScan
Camera Move Speed

This is how fast the camera will move. It is an Int number between 0-99 (max).
client.connect(broker) data = {} data['cameraMoveSpeed'] = 99 # Update this Value json_data = json.dumps(data) client.publish("skyscan/config/json",json_data)
_____no_output_____
Apache-2.0
Config.ipynb
rcaudill/SkyScan
Camera Lead

This is how far the tracker should move the center point in front of the currently tracked plane. It is a float measured in seconds, for example 0.25. It is based on the plane's current heading and how fast it is going.
client.connect(broker) data = {} data['cameraLead'] = 0.45 # Update this Value json_data = json.dumps(data) client.publish("skyscan/config/json",json_data)
_____no_output_____
Apache-2.0
Config.ipynb
rcaudill/SkyScan
Camera Bearing

This is a float to correct the camera's heading to help it better align with True North. It can be from -180 to 180.
client.connect(broker) data = {} data['cameraBearing'] = -2 # Update this Value json_data = json.dumps(data) client.publish("skyscan/config/json",json_data)
_____no_output_____
Apache-2.0
Config.ipynb
rcaudill/SkyScan
Minimum Elevation

The minimum elevation above the horizon at which the Tracker will follow an airplane. Int value between 0-90 degrees.
client.connect(broker) data = {} data['minElevation'] = 45 # Update this Value json_data = json.dumps(data) client.publish("skyscan/config/json",json_data)
_____no_output_____
Apache-2.0
Config.ipynb
rcaudill/SkyScan
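To adjust several of the settings above in one go, the same publish pattern can simply be repeated in a loop. A minimal sketch reusing the `client`, `broker`, and `json` objects defined earlier; the values are purely illustrative, not recommendations:

client.connect(broker)
# One message per setting, mirroring the single-key pattern used above
for key, value in {"cameraZoom": 5000, "cameraDelay": 0.25, "cameraMoveSpeed": 50}.items():
    client.publish("skyscan/config/json", json.dumps({key: value}))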
Running and Plotting Coeval Cubes

The aim of this tutorial is to introduce you to how `21cmFAST` does the most basic operations: producing single coeval cubes, and visually verifying them. It is a great place to get started with `21cmFAST`.
%matplotlib inline import matplotlib.pyplot as plt import os # We change the default level of the logger so that # we can see what's happening with caching. import logging, sys, os logger = logging.getLogger('21cmFAST') logger.setLevel(logging.INFO) import py21cmfast as p21c # For plotting the cubes, we use the plotting submodule: from py21cmfast import plotting # For interacting with the cache from py21cmfast import cache_tools print(f"Using 21cmFAST version {p21c.__version__}")
Using 21cmFAST version 3.0.2
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
Clear the cache so that we get the same results for the notebook every time (don't worry about this for now). Also, set the default output directory to `_cache/`:
if not os.path.exists('_cache'): os.mkdir('_cache') p21c.config['direc'] = '_cache' cache_tools.clear_cache(direc="_cache")
2020-10-02 09:51:10,651 | INFO | Removed 0 files from cache.
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
Basic Usage

The simplest (and typically most efficient) way to produce a coeval cube is simply to use the `run_coeval` method. This consistently performs all steps of the calculation, re-using any data that it can without re-computation or increased memory overhead.
coeval8, coeval9, coeval10 = p21c.run_coeval( redshift = [8.0, 9.0, 10.0], user_params = {"HII_DIM": 100, "BOX_LEN": 100, "USE_INTERPOLATION_TABLES": True}, cosmo_params = p21c.CosmoParams(SIGMA_8=0.8), astro_params = p21c.AstroParams({"HII_EFF_FACTOR":20.0}), random_seed=12345 )
_____no_output_____
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
There are a number of possible inputs for `run_coeval`, which you can check out either in the [API reference](../reference/py21cmfast.html) or by calling `help(p21c.run_coeval)`. Notably, the `redshift` must be given: it can be a single number, or a list of numbers, defining the redshift at which the output coeval cubes will be defined. Other params we've given here are `user_params`, `cosmo_params` and `astro_params`. These are all used for defining input parameters into the backend C code (there's also another possible input of this kind: `flag_options`). These can be given either as a dictionary (as `user_params` has been), or directly as a relevant object (like `cosmo_params` and `astro_params`). If creating the object directly, the parameters can be passed individually or via a single dictionary. So there's a lot of flexibility there! Nevertheless we *encourage* you to use the basic dictionary. The other ways of passing the information are there so we can use pre-defined objects later on. For more information about these "input structs", see the [API docs](../reference/_autosummary/py21cmfast.inputs.html).

We've also given a `direc` option: this is the directory in which to search for cached data (and also where cached data should be written). Throughout this notebook we're going to set this directly to the `_cache` folder, which allows us to manage it directly. By default, the cache location is set in the global configuration in `~/.21cmfast/config.yml`. You'll learn more about caching further on in this tutorial. Finally, we've given a random seed. This sets all the random phases for the simulation, and ensures that we can exactly reproduce the same results on every run.

The output of `run_coeval` is a list of `Coeval` instances, one for each input redshift (it's just a single object if a single redshift was passed, not a list). They store *everything* related to that simulation, so that it can be completely compared to other simulations. For example, the input parameters:
print("Random Seed: ", coeval8.random_seed) print("Redshift: ", coeval8.redshift) print(coeval8.user_params)
Random Seed: 12345 Redshift: 8.0 UserParams(BOX_LEN:100, DIM:300, HII_DIM:100, HMF:1, POWER_SPECTRUM:0, USE_FFTW_WISDOM:False, USE_RELATIVE_VELOCITIES:False)
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
This is where the utility of being able to pass a *class instance* for the parameters arises: we could run another iteration of coeval cubes, with the same user parameters, simply by doing `p21c.run_coeval(user_params=coeval8.user_params, ...)`.

Also in the `Coeval` instance are the various outputs from the different steps of the computation. You'll see more about what these steps are further on in the tutorial. But for now, we show that various boxes are available:
print(coeval8.hires_density.shape) print(coeval8.brightness_temp.shape)
(300, 300, 300) (100, 100, 100)
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
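Returning to the earlier point about re-using input parameters: a previous run's parameter objects can be fed straight back into `run_coeval`. A minimal sketch (the new redshift here is an arbitrary choice for illustration):

coeval11 = p21c.run_coeval(
    redshift=11.0,
    user_params=coeval8.user_params,
    random_seed=coeval8.random_seed,
)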
Along with these, full instances of the output from each step are available as attributes that end with "struct". These instances themselves contain the `numpy` arrays of the data cubes, and some other attributes that make them easier to work with:
coeval8.brightness_temp_struct.global_Tb
_____no_output_____
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
By default, each of the components of the cube is cached to disk (in our `_cache/` folder) as we run it. However, the `Coeval` cube itself is _not_ written to disk by default. Writing it to disk incurs some redundancy, since that data probably already exists in the cache directory in separate files. Let's save to disk. The save method by default writes in the current directory (not the cache!):
filename = coeval8.save(direc='_cache')
_____no_output_____
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
The filename of the saved file is returned:
print(os.path.basename(filename))
Coeval_z8.0_a3c7dea665420ae9c872ba2fab1b3d7d_r12345.h5
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
Such files can be read in:
new_coeval8 = p21c.Coeval.read(filename, direc='.')
_____no_output_____
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
Some convenient plotting functions exist in the `plotting` module. These can work directly on `Coeval` objects, or any of the output structs (as we'll see further on in the tutorial). By default the `coeval_sliceplot` function will plot the `brightness_temp`, using the standard traditional colormap:
fig, ax = plt.subplots(1,3, figsize=(14,4)) for i, (coeval, redshift) in enumerate(zip([coeval8, coeval9, coeval10], [8,9,10])): plotting.coeval_sliceplot(coeval, ax=ax[i], fig=fig); plt.title("z = %s"%redshift) plt.tight_layout()
_____no_output_____
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
Any 3D field can be plotted, by setting the `kind` argument. For example, we could alternatively have plotted the dark matter density cubes perturbed to each redshift:
fig, ax = plt.subplots(1,3, figsize=(14,4)) for i, (coeval, redshift) in enumerate(zip([coeval8, coeval9, coeval10], [8,9,10])): plotting.coeval_sliceplot(coeval, kind='density', ax=ax[i], fig=fig); plt.title("z = %s"%redshift) plt.tight_layout()
_____no_output_____
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
To see more options for the plotting routines, see the [API Documentation](../reference/_autosummary/py21cmfast.plotting.html). `Coeval` instances are not cached themselves -- they are containers for data that is itself cached (i.e. each of the `_struct` attributes of `Coeval`). See the [api docs](../reference/_autosummary/py21cmfast.outputs.html) for more detailed information on these. You can see the filename of each of these structs (or the filename it would have if it were cached -- you can opt to *not* write out any given dataset):
coeval8.init_struct.filename
_____no_output_____
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
You can also write the struct anywhere you'd like on the filesystem. A file saved this way will not automatically be picked up as part of the cache, but it can be useful for sharing files with colleagues.
coeval8.init_struct.save(fname='my_init_struct.h5')
_____no_output_____
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
This brief example covers most of the basic usage of `21cmFAST` (at least with `Coeval` objects -- there are also `Lightcone` objects for which there is a separate tutorial). For the rest of the tutorial, we'll cover a more advanced usage, in which each step of the calculation is done independently. Advanced Step-by-Step Usage Most users most of the time will want to use the high-level `run_coeval` function from the previous section. However, there are several independent steps when computing the brightness temperature field, and these can be performed one-by-one, adding any other effects between them if desired. This means that the new `21cmFAST` is much more flexible. In this section, we'll go through in more detail how to use the lower-level methods.Each step in the chain will receive a number of input-parameter classes which define how the calculation should run. These are the `user_params`, `cosmo_params`, `astro_params` and `flag_options` that we saw in the previous section.Conversely, each step is performed by running a function which will return a single object. Every major function returns an object of the same fundamental class (an ``OutputStruct``) which has various methods for reading/writing the data, and ensuring that it's in the right state to receive/pass to and from C.These are the objects stored as `init_box_struct` etc. in the `Coeval` class.As we move through each step, we'll outline some extra details, hints and tips about using these inputs and outputs. Initial Conditions The first step is to get the initial conditions, which defines the *cosmological* density field before any redshift evolution is applied.
initial_conditions = p21c.initial_conditions( user_params = {"HII_DIM": 100, "BOX_LEN": 100}, cosmo_params = p21c.CosmoParams(SIGMA_8=0.8), random_seed=54321 )
_____no_output_____
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
We've already come across all these parameters as inputs to the `run_coeval` function. Indeed, most of the steps have very similar interfaces, and are able to take a random seed and parameters for where to look for the cache. We use a different seed than in the previous section so that all our boxes are "fresh" (we'll show how the caching works in a later section).

These initial conditions have 100 cells per side, and a box length of 100 Mpc. Note again that they can either be passed as a dictionary containing the input parameters, or an actual instance of the class. While the former is the suggested way, one benefit of the latter is that it can be queried for the relevant parameters (by using ``help`` or a post-fixed ``?``), or even queried for defaults:
p21c.CosmoParams._defaults_
_____no_output_____
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
(these defaults correspond to the Planck15 cosmology contained in Astropy). So what is in the ``initial_conditions`` object? It is what we call an ``OutputStruct``, and we have seen it before, as the `init_box_struct` attribute of `Coeval`. It contains a number of arrays specifying the density and velocity fields of our initial conditions, as well as the defining parameters. For example, we can easily show the cosmology parameters that are used (note the non-default $\sigma_8$ that we passed):
initial_conditions.cosmo_params
_____no_output_____
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
A handy tip is that the ``CosmoParams`` class also has a reference to a corresponding Astropy cosmology, which can be used more broadly:
initial_conditions.cosmo_params.cosmo
_____no_output_____
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
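Because `.cosmo` is a regular Astropy cosmology object, its usual methods are available. A small sketch, assuming Astropy's standard API, e.g. the comoving distance to the redshift we simulate in this tutorial:

# returns an Astropy Quantity in Mpc
initial_conditions.cosmo_params.cosmo.comoving_distance(8.0)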
Merely printing the initial conditions object gives a useful representation of its dependent parameters:
print(initial_conditions)
InitialConditions(UserParams(BOX_LEN:100, DIM:300, HII_DIM:100, HMF:1, POWER_SPECTRUM:0, USE_FFTW_WISDOM:False, USE_RELATIVE_VELOCITIES:False); CosmoParams(OMb:0.04897468161869667, OMm:0.30964144154550644, POWER_INDEX:0.9665, SIGMA_8:0.8, hlittle:0.6766); random_seed:54321)
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
(side-note: the string representation of the object is used to uniquely define it in order to save it to the cache... which we'll explore soon!).

To see which arrays are defined in the object, access the ``fieldnames`` (this is true for *all* `OutputStruct` objects):
initial_conditions.fieldnames
_____no_output_____
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
The `coeval_sliceplot` function also works on `OutputStruct` objects (as well as the `Coeval` object as we've already seen). It takes the object, and a specific field name. By default, the field it plots is the _first_ field in `fieldnames` (for any `OutputStruct`).
plotting.coeval_sliceplot(initial_conditions, "hires_density");
_____no_output_____
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
Perturbed Field

After obtaining the initial conditions, we need to *perturb* the field to a given redshift (i.e. the redshift we care about). This step clearly requires the results of the previous step, which we can easily just pass in. Let's do that:
perturbed_field = p21c.perturb_field( redshift = 8.0, init_boxes = initial_conditions )
_____no_output_____
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
Note that we didn't need to pass in any input parameters, because they are all contained in the `initial_conditions` object itself. The random seed is also taken from this object.

Again, the output is an `OutputStruct`, so we can view its fields:
perturbed_field.fieldnames
_____no_output_____
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
This time, it has only density and velocity (the velocity direction is chosen without loss of generality). Let's view the perturbed density field:
plotting.coeval_sliceplot(perturbed_field, "density");
_____no_output_____
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
It is clear here that the density used is the *low*-res density, but the overall structure of the field looks very similar. Ionization Field Next, we need to ionize the box. This is where things get a little more tricky. In the simplest case (which, let's be clear, is what we're going to do here) the ionization occurs at the *saturated limit*, which means we can safely ignore the contribution of the spin temperature. This means we can directly calculate the ionization on the density/velocity fields that we already have. A few more parameters are needed here, and so two more "input parameter dictionaries" are available, ``astro_params`` and ``flag_options``. Again, a reminder that their parameters can be viewed by using eg. `help(p21c.AstroParams)`, or by looking at the [API docs](../reference/_autosummary/py21cmfast.inputs.html). For now, let's leave everything as default. In that case, we can just do:
ionized_field = p21c.ionize_box( perturbed_field = perturbed_field )
2020-02-29 15:10:43,902 | INFO | Existing init_boxes found and read in (seed=54321).
MIT
docs/tutorials/coeval_cubes.ipynb
daviesje/21cmFAST
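If you did want non-default astrophysics at this step, the text above implies that `astro_params` (and `flag_options`) can be passed to `ionize_box` just as they were to `run_coeval`. A sketch under that assumption; the tutorial itself continues with the defaults:

# Assumption: ionize_box accepts the same astro_params input that run_coeval does
ionized_field_custom = p21c.ionize_box(
    perturbed_field=perturbed_field,
    astro_params={"HII_EFF_FACTOR": 30.0},
)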