Columns: markdown (stringlengths 0–1.02M) · code (stringlengths 0–832k) · output (stringlengths 0–1.02M) · license (stringlengths 3–36) · path (stringlengths 6–265) · repo_name (stringlengths 6–127)
Variable YearsCode:
data_test['YearsCode'] = data_test['YearsCode'].replace(['More than 50 years'], 50)
data_test['YearsCode'] = data_test['YearsCode'].replace(['Less than 1 year'], 1)
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
Variable YearsCodePro:
data_test['YearsCodePro'] = data_test['YearsCodePro'].replace(['More than 50 years'], 50)
data_test['YearsCodePro'] = data_test['YearsCodePro'].replace(['Less than 1 year'], 1)
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
Variable OpSys:
data_test['OpSys'].value_counts()
data_test['OpSys'] = data_test['OpSys'].replace(['Windows Subsystem for Linux (WSL)'], 'Windows')
data_test['OpSys'] = data_test['OpSys'].replace(['Linux-based'], 'Linux')
data_test['OpSys'] = data_test['OpSys'].replace(['Other (please specify)'], 'Otro')
data_test['OpSys'].value_counts()
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
Variable Age:
data_test['Age'].value_counts()
data_test['Age'] = data_test['Age'].replace(['25-34 years old'], '25-34')
data_test['Age'] = data_test['Age'].replace(['35-44 years old'], '35-44')
data_test['Age'] = data_test['Age'].replace(['18-24 years old'], '18-24')
data_test['Age'] = data_test['Age'].replace(['45-54 years old'], '45-54')
data_test['Age'] = data_test['Age'].replace(['55-64 years old'], '55-64')
data_test['Age'] = data_test['Age'].replace(['Under 18 years old'], '< 18')
data_test['Age'] = data_test['Age'].replace(['65 years or older'], '>= 65')
data_test['Age'] = data_test['Age'].replace(['Prefer not to say'], 'No definido')
data_test['Age'].value_counts()
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
Variable Gender:
data_test['Gender'].value_counts()
gender_map = {
    'Man': 'Hombre',
    'Woman': 'Mujer',
    'Non-binary, genderqueer, or gender non-conforming': 'No binario u otro',
    'Man;Non-binary, genderqueer, or gender non-conforming': 'No binario u otro',
    'Man;Or, in your own words:': 'Hombre',
    'Or, in your own words:': 'No definido',
    'Woman;Non-binary, genderqueer, or gender non-conforming': 'No binario u otro',
    'Man;Woman': 'No definido',
    'Man;Woman;Non-binary, genderqueer, or gender non-conforming;Or, in your own words:': 'No binario u otro',
    'Non-binary, genderqueer, or gender non-conforming;Or, in your own words:': 'No binario u otro',
    'Man;Woman;Non-binary, genderqueer, or gender non-conforming': 'No binario u otro',
    'Prefer not to say': 'No definido',
}
data_test['Gender'] = data_test['Gender'].replace(gender_map)
data_test['Gender'].value_counts()
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
Variable Trans:
data_test['Trans'].value_counts()
data_test['Trans'] = data_test['Trans'].replace(['Yes'], 'Si')
data_test['Trans'] = data_test['Trans'].replace(['Prefer not to say'], 'No definido')
data_test['Trans'] = data_test['Trans'].replace(['Or, in your own words:'], 'No definido')
data_test['Trans'].value_counts()
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
Variable MentalHealth:
data_test['MentalHealth'].value_counts()

from re import search

def choose_mental_health(cell_mental_health):
    # Keep the free-text marker as-is; otherwise take the first of any ';'-separated values
    val_mental_health_exceptions = ["Or, in your own words:"]
    if cell_mental_health == "Or, in your own words:":
        return val_mental_health_exceptions[0]
    if search(";", cell_mental_health):
        row_mental_health_values = cell_mental_health.split(';', 10)
        first_val = row_mental_health_values[0]
        return first_val
    else:
        return cell_mental_health

data_test['MentalHealth'] = data_test['MentalHealth'].apply(choose_mental_health)
data_test['MentalHealth'].value_counts()
data_test['MentalHealth'] = data_test['MentalHealth'].replace(['None of the above'], 'Ninguna de las mencionadas')
data_test['MentalHealth'] = data_test['MentalHealth'].replace(['I have a concentration and/or memory disorder (e.g. ADHD)'], 'Desorden de concentración o memoria')
data_test['MentalHealth'] = data_test['MentalHealth'].replace(['I have a mood or emotional disorder (e.g. depression, bipolar disorder)'], 'Desorden emocional')
data_test['MentalHealth'] = data_test['MentalHealth'].replace(['I have an anxiety disorder'], 'Desorden de ansiedad')
data_test['MentalHealth'] = data_test['MentalHealth'].replace(['Prefer not to say'], 'No definido')
data_test['MentalHealth'] = data_test['MentalHealth'].replace(["I have autism / an autism spectrum disorder (e.g. Asperger's)"], 'Tipo de autismo')
data_test['MentalHealth'] = data_test['MentalHealth'].replace(['Or, in your own words:'], 'No definido')
data_test['MentalHealth'].value_counts()
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
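A quick sanity check of `choose_mental_health` on a few made-up answer strings (hypothetical inputs, not survey rows) makes the first-value rule visible:

# Hypothetical inputs to illustrate choose_mental_health
print(choose_mental_health("I have an anxiety disorder"))  # single value passes through
print(choose_mental_health("I have an anxiety disorder;Prefer not to say"))  # first value is kept
print(choose_mental_health("Or, in your own words:"))  # free-text marker is preserved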
2. Field selection for the sub-datasets. The appropriate fields will be selected to answer each of the questions posed in the first part of the assignment. 2.1. Based on self-reported ethnicity, which ethnicity has the highest annual salary? The appropriate fields will be selected to answer this question.
data_etnia = data_test[['Country', 'Ethnicity', 'ConvertedCompYearly']]
data_etnia.head()
df_data_etnia = data_etnia.copy()

def remove_outliers(df, q=0.05):
    # Keep only values strictly between the q and 1-q quantiles
    upper = df.quantile(1 - q)
    lower = df.quantile(q)
    mask = (df < upper) & (df > lower)
    return mask

mask = remove_outliers(df_data_etnia['ConvertedCompYearly'], 0.1)
print(df_data_etnia[mask])
df_data_etnia_no_outliers = df_data_etnia[mask]
df_data_etnia_no_outliers = df_data_etnia_no_outliers.copy()
df_data_etnia_no_outliers['ConvertedCompYearlyCategorical'] = 'ALTO'
df_data_etnia_no_outliers.loc[(df_data_etnia_no_outliers['ConvertedCompYearly'] >= 0) & (df_data_etnia_no_outliers['ConvertedCompYearly'] <= 32747), 'ConvertedCompYearlyCategorical'] = 'BAJO'
df_data_etnia_no_outliers.loc[(df_data_etnia_no_outliers['ConvertedCompYearly'] > 32747) & (df_data_etnia_no_outliers['ConvertedCompYearly'] <= 90000), 'ConvertedCompYearlyCategorical'] = 'MEDIO'
print(df_data_etnia_no_outliers)
df_data_etnia_alto = df_data_etnia_no_outliers[df_data_etnia_no_outliers['ConvertedCompYearlyCategorical'] == 'ALTO']
df_data_etnia_alto = df_data_etnia_alto[['Ethnicity', 'ConvertedCompYearlyCategorical']]
df_flourish = df_data_etnia_alto['Ethnicity'].value_counts().to_frame('counts').reset_index()
df_flourish
df_flourish.to_csv('001_df_flourish.csv', index=False)
df_data_etnia_alto.to_csv('001_df_data_etnia_alto.csv', index=False)
df_data_etnia.to_csv('001_data_etnia_categorical.csv', index=False)
data_etnia.to_csv('001_data_etnia.csv', index=False)
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
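The quantile mask plus ALTO/MEDIO/BAJO binning above is repeated almost verbatim in sections 2.4 through 2.15, so a reusable helper is worth sketching. This is my own refactor, not part of the notebook; `categorize_income` is a hypothetical name, and the 32747/90000 cut points come from the cells themselves:

def categorize_income(df, col='ConvertedCompYearly', q=0.1):
    # Drop values outside the (q, 1-q) quantile band, then bin into BAJO/MEDIO/ALTO
    mask = remove_outliers(df[col], q)
    out = df[mask].copy()
    out['ConvertedCompYearlyCategorical'] = 'ALTO'
    out.loc[out[col].between(0, 32747), 'ConvertedCompYearlyCategorical'] = 'BAJO'
    out.loc[(out[col] > 32747) & (out[col] <= 90000), 'ConvertedCompYearlyCategorical'] = 'MEDIO'
    return out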
2.2. What percentages of developers work full-time, part-time, or freelance? The appropriate fields will be selected to answer this question.
data_time_work_dev = data_test[['Country', 'Employment', 'ConvertedCompYearly', 'EdLevel', 'Age']]
data_time_work_dev.head()
df_flourish_002 = data_time_work_dev['Employment'].value_counts().to_frame('counts').reset_index()
df_flourish_002
df_flourish_002['counts'] = (df_flourish_002['counts'] * 100) / data_time_work_dev.shape[0]
df_flourish_002
df_flourish_002['counts'] = df_flourish_002['counts'].round(2)
df_flourish_002
df_flourish_002.to_csv('002_df_flourish.csv', index=False)
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.3. Which countries have the largest number of professional developers who are active in the Stack Overflow community? The appropriate fields will be selected to answer this question.
data_pro_dev_active_so = data_test[['Country', 'Employment', 'MainBranch', 'EdLevel', 'DevType', 'Age']]
data_pro_dev_active_so.head()
df_flourish_003 = data_pro_dev_active_so['Country'].value_counts().sort_values(ascending=False).head(10)
df_flourish_003 = df_flourish_003.to_frame()
df_flourish_003 = df_flourish_003.reset_index()
df_flourish_003.columns = ["País", "# Programadores Profesionales"]
df_flourish_003.to_csv('003_df_flourish_003.csv', index=False)
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.4. Which education level records the highest income among respondents? The appropriate fields will be selected to answer this question.
data_edlevel_income = data_test[['ConvertedCompYearly', 'EdLevel']]
data_edlevel_income.head()
df_data_edlevel_income = data_edlevel_income.copy()

def remove_outliers(df, q=0.05):
    upper = df.quantile(1 - q)
    lower = df.quantile(q)
    mask = (df < upper) & (df > lower)
    return mask

mask = remove_outliers(df_data_edlevel_income['ConvertedCompYearly'], 0.1)
print(df_data_edlevel_income[mask])
df_data_edlevel_income = df_data_edlevel_income[mask]
df_data_edlevel_income['ConvertedCompYearlyCategorical'] = 'ALTO'
df_data_edlevel_income.loc[(df_data_edlevel_income['ConvertedCompYearly'] >= 0) & (df_data_edlevel_income['ConvertedCompYearly'] <= 32747), 'ConvertedCompYearlyCategorical'] = 'BAJO'
df_data_edlevel_income.loc[(df_data_edlevel_income['ConvertedCompYearly'] > 32747) & (df_data_edlevel_income['ConvertedCompYearly'] <= 90000), 'ConvertedCompYearlyCategorical'] = 'MEDIO'
print(df_data_edlevel_income)
df_data_edlevel_income = df_data_edlevel_income[df_data_edlevel_income['ConvertedCompYearlyCategorical'] == 'ALTO']
df_data_edlevel_income = df_data_edlevel_income[['EdLevel', 'ConvertedCompYearlyCategorical']]
df_flourish_004 = df_data_edlevel_income['EdLevel'].value_counts().to_frame('counts').reset_index()
df_flourish_004
df_flourish_004.to_csv('004_df_flourish.csv', index=False)
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.5. Is there a wage gap between men and women or other genders, and how large is the difference? Which countries are the worst in terms of wage gap? Which countries have reduced this wage gap among developers? The appropriate fields will be selected to answer this question.
data_wage_gap = data_test[['Country', 'ConvertedCompYearly', 'Gender']]
data_wage_gap.head()
df_data_wage_gap = data_wage_gap.copy()

def remove_outliers(df, q=0.05):
    upper = df.quantile(1 - q)
    lower = df.quantile(q)
    mask = (df < upper) & (df > lower)
    return mask

mask = remove_outliers(df_data_wage_gap['ConvertedCompYearly'], 0.1)
print(df_data_wage_gap[mask])
df_data_wage_gap = df_data_wage_gap[mask]
df_data_wage_gap['ConvertedCompYearlyCategorical'] = 'ALTO'
df_data_wage_gap.loc[(df_data_wage_gap['ConvertedCompYearly'] >= 0) & (df_data_wage_gap['ConvertedCompYearly'] <= 32747), 'ConvertedCompYearlyCategorical'] = 'BAJO'
df_data_wage_gap.loc[(df_data_wage_gap['ConvertedCompYearly'] > 32747) & (df_data_wage_gap['ConvertedCompYearly'] <= 90000), 'ConvertedCompYearlyCategorical'] = 'MEDIO'
print(df_data_wage_gap)
df_data_wage_gap = df_data_wage_gap[df_data_wage_gap['ConvertedCompYearlyCategorical'].isin(['ALTO', 'MEDIO'])]
df_data_wage_gap = df_data_wage_gap[['Country', 'Gender', 'ConvertedCompYearlyCategorical']]
df_data_wage_gap.to_csv('005_df_data_wage_gap.csv', index=False)
df_data_wage_gap['ConvertedCompYearlyCategorical'].drop_duplicates().sort_values()
df_data_wage_gap['Gender'].drop_duplicates().sort_values()
df_data_wage_gap['Country'].drop_duplicates().sort_values()
df_data_wage_gap1 = df_data_wage_gap.copy()
df_flourish_005 = df_data_wage_gap1.groupby(['Country', 'Gender']).size().unstack(fill_value=0).sort_values('Hombre')
df_flourish_005 = df_flourish_005.apply(lambda x: pd.concat([x.head(40), x.tail(5)]))
df_flourish_005.to_csv('005_flourish_data.csv', index=True)
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.6. What are the average incomes by age range? Which age range has the best and the worst income? The appropriate fields will be selected to answer this question.
data_age_income = data_test[['ConvertedCompYearly', 'Age']]
data_age_income.head()
df_data_age_income = data_age_income.copy()

def remove_outliers(df, q=0.05):
    upper = df.quantile(1 - q)
    lower = df.quantile(q)
    mask = (df < upper) & (df > lower)
    return mask

mask = remove_outliers(df_data_age_income['ConvertedCompYearly'], 0.1)
print(df_data_age_income[mask])
df_data_age_income = df_data_age_income[mask]
df_data_age_income1 = df_data_age_income.copy()
df_data_age_income1.to_csv('006_df_data_age_income1.csv', index=False)
grouped_df = df_data_age_income1.groupby("Age")
average_df = grouped_df.mean()
average_df
df_flourish_006 = average_df.copy()
df_flourish_006.to_csv('006_df_flourish_006.csv', index=True)
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.7. Which technologies allow for a better annual salary? The appropriate fields will be selected to answer this question.
data_techs_best_income1 = data_test[['ConvertedCompYearly', 'LanguageHaveWorkedWith', 'DatabaseHaveWorkedWith', 'PlatformHaveWorkedWith', 'WebframeHaveWorkedWith', 'MiscTechHaveWorkedWith', 'ToolsTechHaveWorkedWith', 'NEWCollabToolsHaveWorkedWith']]
data_techs_best_income1.head()
data_techs_best_income1['AllTechs'] = (
    data_techs_best_income1['LanguageHaveWorkedWith'].map(str) + ';' +
    data_techs_best_income1['DatabaseHaveWorkedWith'].map(str) + ';' +
    data_techs_best_income1['PlatformHaveWorkedWith'].map(str) + ';' +
    data_techs_best_income1['WebframeHaveWorkedWith'].map(str) + ';' +
    data_techs_best_income1['MiscTechHaveWorkedWith'].map(str) + ';' +
    data_techs_best_income1['ToolsTechHaveWorkedWith'].map(str) + ';' +
    data_techs_best_income1['NEWCollabToolsHaveWorkedWith'].map(str)
)
print(data_techs_best_income1)
df_data_techs_best_income = data_techs_best_income1[['ConvertedCompYearly', 'AllTechs']].copy()
df_data_techs_best_income1 = df_data_techs_best_income.copy()

def remove_outliers(df, q=0.05):
    upper = df.quantile(1 - q)
    lower = df.quantile(q)
    mask = (df < upper) & (df > lower)
    return mask

mask = remove_outliers(df_data_techs_best_income1['ConvertedCompYearly'], 0.1)
print(df_data_techs_best_income1[mask])
df_data_techs_best_income1 = df_data_techs_best_income1[mask]
df_data_techs_best_income1['ConvertedCompYearlyCategorical'] = 'ALTO'
df_data_techs_best_income1.loc[(df_data_techs_best_income1['ConvertedCompYearly'] >= 0) & (df_data_techs_best_income1['ConvertedCompYearly'] <= 32747), 'ConvertedCompYearlyCategorical'] = 'BAJO'
df_data_techs_best_income1.loc[(df_data_techs_best_income1['ConvertedCompYearly'] > 32747) & (df_data_techs_best_income1['ConvertedCompYearly'] <= 90000), 'ConvertedCompYearlyCategorical'] = 'MEDIO'
print(df_data_techs_best_income1)
df_data_techs_best_income1 = df_data_techs_best_income1[df_data_techs_best_income1['ConvertedCompYearlyCategorical'].isin(['ALTO', 'MEDIO'])]
df_data_techs_best_income1['AllTechs'] = df_data_techs_best_income1['AllTechs'].str.replace(' ', '')
df_data_techs_best_income1['AllTechs'] = df_data_techs_best_income1['AllTechs'].str.replace(';', ' ')
df_counts = df_data_techs_best_income1['AllTechs'].str.split(expand=True).stack().value_counts().rename_axis('Tech').reset_index(name='Count')
df_counts.head(10)
df_data_techs_best_income_007 = df_counts.head(10)
df_data_techs_best_income_007.to_csv('007_df_data_techs_best_income.csv', index=False)
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
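One caveat with the count above: stripping spaces and then splitting on whitespace mangles multi-word names such as 'Google Cloud Platform'. A sketch of an alternative (assuming pandas >= 0.25 for `Series.explode`) that splits on ';' directly and keeps tokens intact:

# Count ';'-separated technologies without collapsing the spaces inside names
tech_counts = (
    df_data_techs_best_income1['AllTechs']
    .str.split(';')
    .explode()
    .str.strip()
    .value_counts()
    .rename_axis('Tech')
    .reset_index(name='Count')
)
tech_counts.head(10)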
2.8. How many technologies does a professional developer master on average? The appropriate fields will be selected to answer this question.
data_techs_dev_pro1 = data_test[['DevType', 'LanguageHaveWorkedWith', 'DatabaseHaveWorkedWith', 'PlatformHaveWorkedWith', 'WebframeHaveWorkedWith', 'MiscTechHaveWorkedWith', 'ToolsTechHaveWorkedWith', 'NEWCollabToolsHaveWorkedWith']]
data_techs_dev_pro1.head()
data_techs_dev_pro1['AllTechs'] = (
    data_techs_dev_pro1['LanguageHaveWorkedWith'].map(str) + ';' +
    data_techs_dev_pro1['DatabaseHaveWorkedWith'].map(str) + ';' +
    data_techs_dev_pro1['PlatformHaveWorkedWith'].map(str) + ';' +
    data_techs_dev_pro1['WebframeHaveWorkedWith'].map(str) + ';' +
    data_techs_dev_pro1['MiscTechHaveWorkedWith'].map(str) + ';' +
    data_techs_dev_pro1['ToolsTechHaveWorkedWith'].map(str) + ';' +
    data_techs_dev_pro1['NEWCollabToolsHaveWorkedWith'].map(str)
)
print(data_techs_dev_pro1)
df_data_techs_dev_pro = data_techs_dev_pro1[['DevType', 'AllTechs']].copy()
df_data_techs_dev_pro = df_data_techs_dev_pro[df_data_techs_dev_pro['DevType'].isin(['Desarrollador full-stack', 'Desarrollador front-end', 'Desarrollador móvil', 'Desarrollador back-end', 'Desarrollador Escritorio', 'Desarrollador de QA o Test', 'Desarrollador de aplicaciones embebidas', 'Administrador de base de datos', 'Desarrollador de juegos o gráfico'])]
df_data_techs_dev_pro.info()
df_data_techs_dev_pro1 = df_data_techs_dev_pro.copy()
df_data_techs_dev_pro1.to_csv('008_df_data_techs_dev_pro1.csv', index=True)

def convert_row_to_list(lst):
    return lst.split(';')

df_data_techs_dev_pro1['ListTechs'] = df_data_techs_dev_pro1['AllTechs'].apply(convert_row_to_list)
df_data_techs_dev_pro1['LenListTechs'] = df_data_techs_dev_pro1['ListTechs'].map(len)
df_flourish_008 = df_data_techs_dev_pro1[['DevType', 'LenListTechs']].copy()
df_flourish_008
grouped_df = df_flourish_008.groupby("DevType")
average_df_008 = round(grouped_df.mean())
df_flourish_008 = average_df_008.copy()
df_flourish_008.to_csv('008_df_flourish_008.csv', index=True)
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.9. In which age range did most developers start programming? The appropriate fields will be selected to answer this question.
data_age1stcode_dev_pro1 = data_test[['Age1stCode']]
data_age1stcode_dev_pro1.head()
data_age1stcode_dev_pro1 = data_age1stcode_dev_pro1['Age1stCode'].value_counts().to_frame('counts').reset_index()
data_age1stcode_dev_pro1.to_csv('009_flourish_data.csv', index=False)
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.10. How many years as a developer are required to earn a high salary? The appropriate fields will be selected to answer this question.
data_yearscode_high_income1 = data_test[['ConvertedCompYearly', 'YearsCode']]
data_yearscode_high_income1.head()
df_data_yearscode_high_income = data_yearscode_high_income1.copy()

def remove_outliers(df, q=0.05):
    upper = df.quantile(1 - q)
    lower = df.quantile(q)
    mask = (df < upper) & (df > lower)
    return mask

mask = remove_outliers(df_data_yearscode_high_income['ConvertedCompYearly'], 0.1)
print(df_data_yearscode_high_income[mask])
df_data_yearscode_high_income = df_data_yearscode_high_income[mask]
df_data_yearscode_high_income['ConvertedCompYearlyCategorical'] = 'ALTO'
df_data_yearscode_high_income.loc[(df_data_yearscode_high_income['ConvertedCompYearly'] >= 0) & (df_data_yearscode_high_income['ConvertedCompYearly'] <= 32747), 'ConvertedCompYearlyCategorical'] = 'BAJO'
df_data_yearscode_high_income.loc[(df_data_yearscode_high_income['ConvertedCompYearly'] > 32747) & (df_data_yearscode_high_income['ConvertedCompYearly'] <= 90000), 'ConvertedCompYearlyCategorical'] = 'MEDIO'
print(df_data_yearscode_high_income)
df_data_yearscode_high_income.to_csv('010_df_flourish.csv', index=False)
df_data_yearscode_high_income['ConvertedCompYearlyCategorical'].value_counts()
df_flourish_010 = df_data_yearscode_high_income[['YearsCode', 'ConvertedCompYearlyCategorical']].copy()
df_flourish_010.head()
df_flourish_010['YearsCode'] = pd.to_numeric(df_flourish_010['YearsCode'])
df_flourish_010.info()
grouped_df_010 = df_flourish_010.groupby("ConvertedCompYearlyCategorical")
average_df_010 = round(grouped_df_010.mean())
average_df_010
average_df_010.to_csv('010_flourish_data.csv', index=True)
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.11. Which profiles record the best incomes? The appropriate fields will be selected to answer this question.
data_profiles_dev_high_income1 = data_test[['ConvertedCompYearly', 'DevType']].copy()
data_profiles_dev_high_income1.head()
df_data_profiles_dev_high_income = data_profiles_dev_high_income1.copy()

def remove_outliers(df, q=0.05):
    upper = df.quantile(1 - q)
    lower = df.quantile(q)
    mask = (df < upper) & (df > lower)
    return mask

mask = remove_outliers(df_data_profiles_dev_high_income['ConvertedCompYearly'], 0.1)
print(df_data_profiles_dev_high_income[mask])
df_data_profiles_dev_high_income = df_data_profiles_dev_high_income[mask]
df_data_profiles_dev_high_income['ConvertedCompYearlyCategorical'] = 'ALTO'
df_data_profiles_dev_high_income.loc[(df_data_profiles_dev_high_income['ConvertedCompYearly'] >= 0) & (df_data_profiles_dev_high_income['ConvertedCompYearly'] <= 32747), 'ConvertedCompYearlyCategorical'] = 'BAJO'
df_data_profiles_dev_high_income.loc[(df_data_profiles_dev_high_income['ConvertedCompYearly'] > 32747) & (df_data_profiles_dev_high_income['ConvertedCompYearly'] <= 90000), 'ConvertedCompYearlyCategorical'] = 'MEDIO'
print(df_data_profiles_dev_high_income)
df_data_profiles_dev_high_income['ConvertedCompYearlyCategorical'].value_counts()
df_flourish_011 = df_data_profiles_dev_high_income[['DevType', 'ConvertedCompYearlyCategorical']].copy()
df_flourish_011 = df_flourish_011[df_flourish_011['ConvertedCompYearlyCategorical'].isin(['ALTO'])]
df_flourish_011.info()
df_data_flourish_011 = df_flourish_011['DevType'].value_counts().to_frame('counts').reset_index()
df_data_flourish_011 = df_data_flourish_011.head(10)
df_data_flourish_011
df_data_flourish_011.to_csv('011_flourish_data.csv', index=False)
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.12. What are the 10 most used technologies among developers by country? The appropriate fields will be selected to answer this question.
data_10_techs_popular_dev_countries = data_test[['Country', 'LanguageHaveWorkedWith', 'DatabaseHaveWorkedWith', 'PlatformHaveWorkedWith', 'WebframeHaveWorkedWith', 'MiscTechHaveWorkedWith', 'ToolsTechHaveWorkedWith', 'NEWCollabToolsHaveWorkedWith']]
data_10_techs_popular_dev_countries.head()
data_10_techs_popular_dev_countries['AllTechs'] = (
    data_10_techs_popular_dev_countries['LanguageHaveWorkedWith'].map(str) + ';' +
    data_10_techs_popular_dev_countries['DatabaseHaveWorkedWith'].map(str) + ';' +
    data_10_techs_popular_dev_countries['PlatformHaveWorkedWith'].map(str) + ';' +
    data_10_techs_popular_dev_countries['WebframeHaveWorkedWith'].map(str) + ';' +
    data_10_techs_popular_dev_countries['MiscTechHaveWorkedWith'].map(str) + ';' +
    data_10_techs_popular_dev_countries['ToolsTechHaveWorkedWith'].map(str) + ';' +
    data_10_techs_popular_dev_countries['NEWCollabToolsHaveWorkedWith'].map(str)
)
print(data_10_techs_popular_dev_countries)
df_data_10_techs_popular_dev_countries = data_10_techs_popular_dev_countries[['Country', 'AllTechs']].copy()
df_data_10_techs_popular_dev_countries.head()
df_data_10_techs_popular_dev_countries['AllTechs'] = df_data_10_techs_popular_dev_countries['AllTechs'].str.replace(' ', '')
df_data_10_techs_popular_dev_countries['AllTechs'] = df_data_10_techs_popular_dev_countries['AllTechs'].str.replace(';', ' ')
df_counts = df_data_10_techs_popular_dev_countries['AllTechs'].str.split(expand=True).stack().value_counts().rename_axis('Tech').reset_index(name='Count')
df_counts
data_10_techs_popular_dev_countries.to_csv('012_data_10_techs_popular_dev_countries.csv', index=False)
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.13. What is the most used operating system among respondents? The appropriate fields will be selected to answer this question.
df_data_so_devs = data_test[['OpSys']].copy()
df_data_so_devs.tail()
df_data_so_devs['OpSys'].drop_duplicates().sort_values()
df_data_so_devs['OpSys'] = df_data_so_devs['OpSys'].replace(['Other (please specify):'], 'Otro')
df_data_so_devs['OpSys'].value_counts()
df_counts = df_data_so_devs['OpSys'].str.split(expand=True).stack().value_counts().rename_axis('OS').reset_index(name='Count')
df_counts
df_counts.to_csv('013_flourish_data.csv', index=False)
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.14. What proportion of developers has some mental disorder, by country? The appropriate fields will be selected to answer this question.
data_devs_mental_health_countries = data_test[['Country', 'MentalHealth']]
data_devs_mental_health_countries.head()
data_devs_mental_health_countries['MentalHealth'].value_counts()
df_data_devs_mental_health_countries = data_devs_mental_health_countries.copy()
df_data_devs_mental_health_countries = df_data_devs_mental_health_countries[df_data_devs_mental_health_countries['MentalHealth'].isin(['Desorden de concentración o memoria', 'Desorden emocional', 'Desorden de ansiedad', 'Tipo de autismo'])]
df_data_devs_mental_health_countries.head()
df_data_flourish_014 = df_data_devs_mental_health_countries['Country'].value_counts().to_frame('counts').reset_index()
df_data_flourish_014 = df_data_flourish_014.head(10)
df_data_flourish_014
df_data_flourish_014_best_ten = df_data_devs_mental_health_countries[df_data_devs_mental_health_countries['Country'].isin(['United States of America', 'United Kingdom of Great Britain and Northern Ireland', 'Brazil', 'Canada', 'India', 'Germany', 'Australia', 'Netherlands', 'Poland', 'Turkey'])]
df = df_data_flourish_014_best_ten.copy()
df
df1 = pd.crosstab(df['Country'], df['MentalHealth'])
df1
(df_data_devs_mental_health_countries.groupby(['Country', 'MentalHealth']).size()
    .sort_values(ascending=False)
    .reset_index(name='count')
    .drop_duplicates(subset='Country'))
df_flourish_data_014 = (df_data_devs_mental_health_countries.groupby(['Country', 'MentalHealth']).size()
    .sort_values(ascending=False)
    .reset_index(name='count'))
df_flourish_data_014 = df_flourish_data_014.sort_values('Country')
# Note: both exports below target the same file, so the crosstab (df1) overwrites the head(10) counts
df_data_flourish_014.head(10).to_csv('014_flourish_data_014.csv', index=False)
df1.to_csv('014_flourish_data_014.csv', index=True)
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.15. Which countries have the best salaries among developers? The appropriate fields will be selected to answer this question.
df_best_incomes_countries = data_test[['Country', 'ConvertedCompYearly']].copy()
df_best_incomes_countries

def remove_outliers(df, q=0.05):
    upper = df.quantile(1 - q)
    lower = df.quantile(q)
    mask = (df < upper) & (df > lower)
    return mask

mask = remove_outliers(df_best_incomes_countries['ConvertedCompYearly'], 0.1)
print(df_best_incomes_countries[mask])
df_best_incomes_countries_no_outliers = df_best_incomes_countries[mask]
df_best_incomes_countries_no_outliers1 = df_best_incomes_countries_no_outliers.copy()
df_best_incomes_countries_no_outliers1['ConvertedCompYearlyCategorical'] = 'ALTO'
df_best_incomes_countries_no_outliers1.loc[(df_best_incomes_countries_no_outliers1['ConvertedCompYearly'] >= 0) & (df_best_incomes_countries_no_outliers1['ConvertedCompYearly'] <= 32747), 'ConvertedCompYearlyCategorical'] = 'BAJO'
df_best_incomes_countries_no_outliers1.loc[(df_best_incomes_countries_no_outliers1['ConvertedCompYearly'] > 32747) & (df_best_incomes_countries_no_outliers1['ConvertedCompYearly'] <= 90000), 'ConvertedCompYearlyCategorical'] = 'MEDIO'
print(df_best_incomes_countries_no_outliers1)
df_best_incomes_countries_no_outliers1['ConvertedCompYearlyCategorical'].value_counts()
df_best_incomes_countries_alto = df_best_incomes_countries_no_outliers1[df_best_incomes_countries_no_outliers1['ConvertedCompYearlyCategorical'] == 'ALTO']
df_alto = df_best_incomes_countries_alto[['Country', 'ConvertedCompYearlyCategorical']].copy()
df_flourish_015 = df_alto['Country'].value_counts().to_frame('counts').reset_index()
df_flourish_015.head(10)
df_flourish_015.head(10).to_csv('015_flourish_data.csv', index=False)
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.16. What are the 10 most used programming languages among developers? The appropriate fields will be selected to answer this question.
df_10_prog_languages_devs = data_test[['LanguageHaveWorkedWith']].copy()
df_10_prog_languages_devs.head()
df_10_prog_languages_devs['LanguageHaveWorkedWith'] = df_10_prog_languages_devs['LanguageHaveWorkedWith'].str.replace(';', ' ')
df_counts_016 = df_10_prog_languages_devs['LanguageHaveWorkedWith'].str.split(expand=True).stack().value_counts().rename_axis('Languages').reset_index(name='Count')
df_counts_016.head(10)
df_counts_016.head(10).to_csv('016_flourish_data.csv', index=False)
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.17. What are the most used databases among developers? The appropriate fields will be selected to answer this question.
df_10_databases = data_test[['DatabaseHaveWorkedWith']].copy()
df_10_databases.head()
df_10_databases['DatabaseHaveWorkedWith'] = df_10_databases['DatabaseHaveWorkedWith'].str.replace(' ', '')
df_10_databases['DatabaseHaveWorkedWith'] = df_10_databases['DatabaseHaveWorkedWith'].str.replace(';', ' ')
df_counts_017 = df_10_databases['DatabaseHaveWorkedWith'].str.split(expand=True).stack().value_counts().rename_axis('Databases').reset_index(name='Count')
df_counts_017.head(10)
df_counts_017.head(10).to_csv('017_flourish_data.csv', index=False)
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.18. What are the most used platforms among developers? The appropriate fields will be selected to answer this question.
df_10_platforms = data_test[['PlatformHaveWorkedWith']].copy()
df_10_platforms.head()
df_10_platforms['PlatformHaveWorkedWith'] = df_10_platforms['PlatformHaveWorkedWith'].str.replace(' ', '')
df_10_platforms['PlatformHaveWorkedWith'] = df_10_platforms['PlatformHaveWorkedWith'].str.replace(';', ' ')
df_counts_018 = df_10_platforms['PlatformHaveWorkedWith'].str.split(expand=True).stack().value_counts().rename_axis('Platform').reset_index(name='Count')
df_counts_018.head(10)
df_counts_018.to_csv('018_flourish_data.csv', index=False)
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.19. What are the most used web frameworks among developers? The appropriate fields will be selected to answer this question.
df_10_web_frameworks = data_test[['WebframeHaveWorkedWith']].copy()
df_10_web_frameworks.head()
df_10_web_frameworks['WebframeHaveWorkedWith'] = df_10_web_frameworks['WebframeHaveWorkedWith'].str.replace(' ', '')
df_10_web_frameworks['WebframeHaveWorkedWith'] = df_10_web_frameworks['WebframeHaveWorkedWith'].str.replace(';', ' ')
df_counts_019 = df_10_web_frameworks['WebframeHaveWorkedWith'].str.split(expand=True).stack().value_counts().rename_axis('Web framework').reset_index(name='Count')
df_counts_019.head(10)
df_counts_019.to_csv('019_flourish_data.csv', index=False)
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.20. What are the most used technology tools among developers? The appropriate fields will be selected to answer this question.
df_10_data_misc_techs = data_test[['MiscTechHaveWorkedWith', 'ToolsTechHaveWorkedWith']].copy()
df_10_data_misc_techs.head()
df_10_data_misc_techs['AllMiscTechs'] = df_10_data_misc_techs['MiscTechHaveWorkedWith'].map(str) + ';' + df_10_data_misc_techs['ToolsTechHaveWorkedWith'].map(str)
df_10_data_misc_techs.head()
df_10_data_misc_techs['AllMiscTechs'] = df_10_data_misc_techs['AllMiscTechs'].str.replace(' ', '')
df_10_data_misc_techs['AllMiscTechs'] = df_10_data_misc_techs['AllMiscTechs'].str.replace(';', ' ')
df_counts_020 = df_10_data_misc_techs['AllMiscTechs'].str.split(expand=True).stack().value_counts().rename_axis('Tecnología').reset_index(name='# Programadores')
df_counts_020.head(10)
df_counts_020.head(10).to_csv('020_flourish_data.csv', index=False)
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.21. What are the most used collaborative tools among developers? The appropriate fields will be selected to answer this question.
df_10_colab = data_test[['NEWCollabToolsHaveWorkedWith']].copy()
df_10_colab.head()
df_10_colab['NEWCollabToolsHaveWorkedWith'] = df_10_colab['NEWCollabToolsHaveWorkedWith'].str.replace(' ', '')
df_10_colab['NEWCollabToolsHaveWorkedWith'] = df_10_colab['NEWCollabToolsHaveWorkedWith'].str.replace(';', ' ')
df_counts_021 = df_10_colab['NEWCollabToolsHaveWorkedWith'].str.split(expand=True).stack().value_counts().rename_axis('Herramienta Colaborativa').reset_index(name='# Programadores')
df_counts_021.head(10)
df_counts_021.head(10).to_csv('021_flourish_data.csv', index=False)
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
2.22. Which countries have the largest number of developers working full-time? The appropriate fields will be selected to answer this question.
df_fulltime_employment = data_test[['Country', 'Employment']].copy()
df_fulltime_employment.head()
df_fulltime_employment.info()
df_fulltime_only = df_fulltime_employment[df_fulltime_employment['Employment'] == 'Tiempo completo']
df_fulltime_only.head()
df_flourish_022 = df_fulltime_only['Country'].value_counts().to_frame('# Programadores').reset_index()
df_flourish_022.head(10)
df_flourish_022.head(10).to_csv('022_flourish_data.csv', index=False)
_____no_output_____
MIT
M2.859_20211_A9_gbonillas.ipynb
gpbonillas/stackoverflow_2021_wrangling_data
PyBer Analysis. 4.3 Loading and Reading CSV files
# Add Matplotlib inline magic command
%matplotlib inline

# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import matplotlib.dates as mdates

# File to Load (Remember to change these)
city_data_to_load = "Resources/city_data.csv"
ride_data_to_load = "Resources/ride_data.csv"

# Read the City and Ride Data
city_data_df = pd.read_csv(city_data_to_load)
ride_data_df = pd.read_csv(ride_data_to_load)
_____no_output_____
Apache-2.0
PyBer_analysis_code.ipynb
rfwilliams92/Pyber_Ridesharing_Analysis
Merge the DataFrames
# Combine the data into a single dataset
pyber_data_df = pd.merge(ride_data_df, city_data_df, how="left", on="city")

# Display the data table for preview
pyber_data_df
_____no_output_____
Apache-2.0
PyBer_analysis_code.ipynb
rfwilliams92/Pyber_Ridesharing_Analysis
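Since each ride should match exactly one city, the left join can be made self-checking. This is an optional hardening sketch, not part of the original notebook; `validate="m:1"` makes pandas raise if `city_data_df` ever contained duplicate city rows:

# Defensive variant of the merge: fails loudly on duplicate cities in city_data_df
pyber_data_checked_df = pd.merge(ride_data_df, city_data_df, how="left", on="city", validate="m:1")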
Deliverable 1: Get a Summary DataFrame
# 1. Get the total rides for each city type
tot_rides_by_type = pyber_data_df.groupby(["type"]).count()["ride_id"]
tot_rides_by_type

# 2. Get the total drivers for each city type
tot_drivers_by_type = city_data_df.groupby(["type"]).sum()["driver_count"]
tot_drivers_by_type

# 3. Get the total amount of fares for each city type
tot_fares_by_type = pyber_data_df.groupby(["type"]).sum()["fare"]
tot_fares_by_type

# 4. Get the average fare per ride for each city type.
avg_fare_by_type = round((tot_fares_by_type / tot_rides_by_type), 2)
avg_fare_by_type

# 5. Get the average fare per driver for each city type.
avg_fare_per_driver_by_type = round((tot_fares_by_type / tot_drivers_by_type), 2)
avg_fare_per_driver_by_type

# 6. Create a PyBer summary DataFrame.
pyber_summary_df = pd.DataFrame({
    "Total Rides": tot_rides_by_type,
    "Total Drivers": tot_drivers_by_type,
    "Total Fares": tot_fares_by_type,
    "Average Fare per Ride": avg_fare_by_type,
    "Average Fare per Driver": avg_fare_per_driver_by_type,
})
pyber_summary_df.dtypes

# 7. Clean up the DataFrame: delete the index name
pyber_summary_df.index.name = None
pyber_summary_df

# 8. Format the columns.
pyber_summary_df['Total Rides'] = pyber_summary_df['Total Rides'].map('{:,}'.format)
pyber_summary_df['Total Drivers'] = pyber_summary_df['Total Drivers'].map('{:,}'.format)
pyber_summary_df['Total Fares'] = pyber_summary_df['Total Fares'].map('${:,}'.format)
pyber_summary_df['Average Fare per Ride'] = pyber_summary_df['Average Fare per Ride'].map('${:,}'.format)
pyber_summary_df['Average Fare per Driver'] = pyber_summary_df['Average Fare per Driver'].map('${:,}'.format)
pyber_summary_df
_____no_output_____
Apache-2.0
PyBer_analysis_code.ipynb
rfwilliams92/Pyber_Ridesharing_Analysis
Deliverable 2. Create a multiple-line plot that shows the total weekly fares for each type of city.
# 1. Read the merged DataFrame
pyber_data_df

# 2. Use groupby() to create a new DataFrame showing the sum of the fares
#    for each date, where the indices are the city type and date.
tot_fares_by_date_df = pd.DataFrame(pyber_data_df.groupby(["type", "date"]).sum()["fare"])
tot_fares_by_date_df

# 3. Reset the index on the DataFrame created in #2. This is needed to use the pivot() function.
tot_fares_by_date_df = tot_fares_by_date_df.reset_index()
tot_fares_by_date_df

# 4. Create a pivot table with 'date' as the index, columns='type', and values='fare'
#    to get the total fares for each type of city by date.
pyber_pivot = tot_fares_by_date_df.pivot(index="date", columns="type", values="fare")
pyber_pivot

# 5. Create a new DataFrame from the pivot table using loc on the given dates, '2019-01-01':'2019-04-29'.
pyber_pivot_df = pyber_pivot.loc['2019-01-01':'2019-04-29']
pyber_pivot_df

# 6. Set the "date" index to the datetime datatype. This is necessary to use resample() in step 8.
pyber_pivot_df.index = pd.to_datetime(pyber_pivot_df.index)

# 7. Check that the datatype of the index is datetime using df.info()
pyber_pivot_df.info()

# 8. Create a new DataFrame using resample() by week 'W' and get the sum of the fares for each week.
tot_fares_by_week_df = pyber_pivot_df.resample('W').sum()
tot_fares_by_week_df

# 9. Using the object-oriented interface method, plot the resampled DataFrame with df.plot().
# Import the style from Matplotlib.
from matplotlib import style
# Use the graph style fivethirtyeight.
style.use('fivethirtyeight')

fig, ax = plt.subplots()
tot_fares_by_week_df.plot(figsize=(20, 7), ax=ax)
ax.set_title("Total Fares by City Type")
ax.set_ylabel("Fares($USD)")
ax.set_xlabel("Month(Weekly Fare Totals)")
ax.legend(labels=["Rural", "Suburban", "Urban"], loc="center")
plt.savefig("analysis/PyBer_fare_summary.png")
plt.show()
_____no_output_____
Apache-2.0
PyBer_analysis_code.ipynb
rfwilliams92/Pyber_Ridesharing_Analysis
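To see what step 8's `resample('W')` does in isolation, here is a tiny self-contained example on fabricated daily values (the dates and numbers are made up):

import pandas as pd

# Ten days of fake daily fares summed into weekly buckets (weeks end on Sunday by default)
daily = pd.Series(range(10), index=pd.date_range("2019-01-01", periods=10, freq="D"))
print(daily.resample("W").sum())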
Import Packages
from ndfinance.brokers.backtest import *
from ndfinance.core import BacktestEngine
from ndfinance.analysis.backtest import BacktestAnalyzer
from ndfinance.strategies import PeriodicRebalancingStrategy
from ndfinance.visualizers.backtest_visualizer import BasicVisualizer

%matplotlib inline
import matplotlib.pyplot as plt
2020-10-10 13:22:13,815 INFO resource_spec.py:212 -- Starting Ray with 15.38 GiB memory available for workers and up to 7.7 GiB for objects. You can adjust these settings with ray.init(memory=<bytes>, object_store_memory=<bytes>).
2020-10-10 13:22:14,051 WARNING services.py:923 -- Redis failed to start, retrying now.
2020-10-10 13:22:14,252 INFO services.py:1165 -- View the Ray dashboard at localhost:8265
MIT
examples/all_weather_portfolio.ipynb
gomtinQQ/NDFinance
build strategy
class AllWeatherPortfolio(PeriodicRebalancingStrategy):
    def __init__(self, weight_dict, rebalance_period):
        super(AllWeatherPortfolio, self).__init__(rebalance_period)
        self.weight_dict = weight_dict

    def _logic(self):
        self.broker.order(Rebalance(self.weight_dict.keys(), self.weight_dict.values()))
_____no_output_____
MIT
examples/all_weather_portfolio.ipynb
gomtinQQ/NDFinance
set portfolio elements, weights, and rebalance period. You can adjust these and experiment on your own!
PORTFOLIO = {
    "GLD": 0.05,
    "SPY": 0.5,
    "SPTL": 0.15,
    "BWZ": 0.15,
    "SPHY": 0.15,
}
REBALANCE_PERIOD = TimeFrames.day * 365
_____no_output_____
MIT
examples/all_weather_portfolio.ipynb
gomtinQQ/NDFinance
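A cheap sanity check before running the backtest (my addition, not part of the original notebook) is to confirm the weights describe a fully allocated portfolio:

# The five weights above should add up to 1.0 (fully invested, no implicit cash)
assert abs(sum(PORTFOLIO.values()) - 1.0) < 1e-9, sum(PORTFOLIO.values())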
Make data provider
dp = BacktestDataProvider()
dp.add_yf_tickers(*PORTFOLIO.keys())
_____no_output_____
MIT
examples/all_weather_portfolio.ipynb
gomtinQQ/NDFinance
Make time indexer
indexer = TimeIndexer(dp.get_shortest_timestamp_seq())
dp.set_indexer(indexer)
dp.cut_data()
_____no_output_____
MIT
examples/all_weather_portfolio.ipynb
gomtinQQ/NDFinance
Make broker and add assets
brk = BacktestBroker(dp, initial_margin=10000)
_ = [brk.add_asset(Asset(ticker=ticker)) for ticker in PORTFOLIO.keys()]
_____no_output_____
MIT
examples/all_weather_portfolio.ipynb
gomtinQQ/NDFinance
Initialize strategy
strategy = AllWeatherPortfolio(PORTFOLIO, rebalance_period=REBALANCE_PERIOD)
_____no_output_____
MIT
examples/all_weather_portfolio.ipynb
gomtinQQ/NDFinance
Initialize backtest engine
engine = BacktestEngine()
engine.register_broker(brk)
engine.register_strategy(strategy)
_____no_output_____
MIT
examples/all_weather_portfolio.ipynb
gomtinQQ/NDFinance
run
log = engine.run()
[ENGINE]: 100%|██████████| 2090/2090 [00:00<00:00, 11637.76it/s]
MIT
examples/all_weather_portfolio.ipynb
gomtinQQ/NDFinance
run analysis
analyzer = BacktestAnalyzer(log)
analyzer.print()
--------------------------------------------------
[BACKTEST RESULT]
--------------------------------------------------
CAGR: 10.644
MDD: 19.49
CAGR_MDD_ratio: 0.546
win_trade_count: 24
lose_trade_count: 11
total_trade_count: 75
win_rate_percentage: 32.0
lose_rate_percentage: 14.667
sharpe_ratio: 0.279
sortino_ratio: 0.361
pnl_ratio_sum: 13.677
pnl_ratio: 6.269
average_realized_pnl: 90.262
max_realized_pnl: 1255.897
min_realized_pnl: -232.054
average_realized_pnl_percentage: 2.427
max_realized_pnl_percentage: 26.859
min_realized_pnl_percentage: -13.873
average_realized_pnl_percentage_weighted: 0.661
max_realized_pnl_percentage_weighted: 9.432
min_realized_pnl_percentage_weighted: -1.999
average_portfolio_value_total: 12706.284
max_portfolio_value_total: 18406.706
min_portfolio_value_total: 9613.284
average_portfolio_value: 12706.284
max_portfolio_value: 18406.706
min_portfolio_value: 9613.284
average_leverage: 0.88
max_leverage: 1.0
min_leverage: 0.0
average_leverage_total: 0.88
max_leverage_total: 1.0
min_leverage_total: 0.0
average_cash_weight_percentage: 11.962
max_cash_weight_percentage: 100.0
min_cash_weight_percentage: 0.0
average_cash_weight_percentage_total: 11.962
max_cash_weight_percentage_total: 100.0
min_cash_weight_percentage_total: 0.0
average_unrealized_pnl_percentage: 2.343
max_unrealized_pnl_percentage: 10.526
min_unrealized_pnl_percentage: -11.134
average_unrealized_pnl_percentage_total: 2.343
max_unrealized_pnl_percentage_total: 10.526
min_unrealized_pnl_percentage_total: -11.134
average_1M_pnl_percentage: 0.59
max_1M_pnl_percentage: 8.165
min_1M_pnl_percentage: -6.083
average_1D_pnl_percentage: 0.0
max_1D_pnl_percentage: 0.0
min_1D_pnl_percentage: 0.0
average_1W_pnl_percentage: 0.107
max_1W_pnl_percentage: 5.255
min_1W_pnl_percentage: -9.881
MIT
examples/all_weather_portfolio.ipynb
gomtinQQ/NDFinance
visualize
visualizer = BasicVisualizer()
visualizer.plot_log(log)
_____no_output_____
MIT
examples/all_weather_portfolio.ipynb
gomtinQQ/NDFinance
Export
# EXPORT_PATH is not defined in the cells shown; based on the output below it points at the results directory
EXPORT_PATH = "./bt_results/all_weather_portfolio"

visualizer.export(EXPORT_PATH)
analyzer.export(EXPORT_PATH)
--------------------------------------------------
[EXPORTING FIGURES]
--------------------------------------------------
exporting figure to: ./bt_results/all_weather_portfolio/plot/mdd.png
exporting figure to: ./bt_results/all_weather_portfolio/plot/cagr.png
exporting figure to: ./bt_results/all_weather_portfolio/plot/sharpe.png
exporting figure to: ./bt_results/all_weather_portfolio/plot/sortino.png
exporting figure to: ./bt_results/all_weather_portfolio/plot/portfolio_value.png
exporting figure to: ./bt_results/all_weather_portfolio/plot/portfolio_value_cum_pnl_perc.png
exporting figure to: ./bt_results/all_weather_portfolio/plot/portfolio_value_total.png
exporting figure to: ./bt_results/all_weather_portfolio/plot/portfolio_value_total_cum_pnl_perc.png
exporting figure to: ./bt_results/all_weather_portfolio/plot/realized_pnl_percentage_hist.png
exporting figure to: ./bt_results/all_weather_portfolio/plot/realized_pnl_percentage_weighted_hist.png
exporting figure to: ./bt_results/all_weather_portfolio/plot/1M_pnl_hist.png
exporting figure to: ./bt_results/all_weather_portfolio/plot/1D_pnl_hist.png
exporting figure to: ./bt_results/all_weather_portfolio/plot/1W_pnl_hist.png
exporting figure to: ./bt_results/all_weather_portfolio/plot/1M_pnl_bar.png
exporting figure to: ./bt_results/all_weather_portfolio/plot/1D_pnl_bar.png
exporting figure to: ./bt_results/all_weather_portfolio/plot/1W_pnl_bar.png
--------------------------------------------------
[EXPORTING RESULT/LOG]
--------------------------------------------------
saving log: ./bt_results/all_weather_portfolio/broker_log.csv
saving log: ./bt_results/all_weather_portfolio/portfolio_log.csv
saving result to: ./bt_results/all_weather_portfolio/result.json
MIT
examples/all_weather_portfolio.ipynb
gomtinQQ/NDFinance
Descriptive statistics and data visualization. This notebook presents descriptive statistics of the dataset, with visualizations. We will analyze the behavior of some characteristics that are crucial when buying/selling used vehicles.
from Utils import *
from tqdm import tqdm
from matplotlib import pyplot as plt
import seaborn as sns

pd.set_option('display.max_colwidth', 100)
DATASET = "../datasets/clean_vehicles_2.csv"
df = pd.read_csv(DATASET)
df.describe()
_____no_output_____
MIT
Parte 1/notebooks/3-descriptive_stats.ipynb
mbs8/IF679-ciencia-de-dados
Univariate statistics. Here we will analyze how some of the variables are distributed. Manufacturing year
# Mean, standard deviation, median, IQR, and mode of the manufacturing year
print(
    "Ano do veículo:\n"
    "Média: " + floatStr(df['year'].mean()) + "\n" +
    "Desvio padrão: " + floatStr(df['year'].std()) + "\n" +
    "Mediana: " + floatStr(df['year'].median()) + "\n" +
    "IQR: " + floatStr(df['year'].describe()[6] - df['year'].describe()[4]) + "\n" +
    "Moda: " + floatStr(df['year'].mode().loc[0])
)
Ano do veículo:
Média: 2010.26
Desvio padrão: 8.67
Mediana: 2012.0
IQR: 9.0
Moda: 2017.0
MIT
Parte 1/notebooks/3-descriptive_stats.ipynb
mbs8/IF679-ciencia-de-dados
Here we notice a median larger than the mean, which suggests that this quantity does not follow a normal distribution. It indicates that some very old cars are being sold, giving the curve an asymmetric (skewed) shape. To verify this, let's plot the histogram.
# Plot the histogram of the distribution of manufacturing years
bars = df[df['year'] > 0].year.max() - df[df['year'] > 0].year.min()
df[df['year'] > 0].year.hist(bins=int(bars))
_____no_output_____
MIT
Parte 1/notebooks/3-descriptive_stats.ipynb
mbs8/IF679-ciencia-de-dados
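The skewness coefficient gives a numeric confirmation of that asymmetry; a one-line sketch using pandas' sample skewness (a negative value here means a long left tail of very old cars):

# Sample skewness of the manufacturing year; negative = long tail toward older cars
print(df[df['year'] > 0]['year'].skew())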
However, this plot does not give us a good view. The listings include some cars aimed at collectors, which is not the profile we want to study. So, taking the year 1985 as a threshold, we analyze the histogram of the distribution of marketable, "normal-use" cars. Now we can see that most of the cars sold were manufactured after 2000.
# Plot the histogram of manufacturing years, limited to 1985 onwards
bars = df['year'].max() - 1985
df[df['year'] > 1985].year.hist(bins=int(bars))
_____no_output_____
MIT
Parte 1/notebooks/3-descriptive_stats.ipynb
mbs8/IF679-ciencia-de-dados
Vehicle resale price
# Univariate statistics of the vehicle price values
print(
    "Preço do veículo:\n"
    "Média: " + floatStr(df[df['price'] > 0].price.mean()) + "\n" +
    "Desvio padrão: " + floatStr(df[df['price'] > 0].price.std()) + "\n" +
    "Mediana: " + floatStr(df[df['price'] > 0].price.median()) + "\n" +
    "IQR: " + floatStr(df['price'].describe()[6] - df['price'].describe()[4]) + "\n" +
    "Moda: " + floatStr(df[df['price'] > 0].price.mode().loc[0])
)
Preço do veículo:
Média: 36809.65
Desvio padrão: 6571953.45
Mediana: 11495.0
IQR: 13000.0
Moda: 7995
MIT
Parte 1/notebooks/3-descriptive_stats.ipynb
mbs8/IF679-ciencia-de-dados
Here we find very large differences in these data, which suggests a highly varied and asymmetric price distribution. Because of this characteristic, we cannot see a histogram with all the data. We can work around this in two ways:
* We could use log10 to get a sense of the order of magnitude, but we would not extract much information, since most values would fall at log10(x) = 4.
* Another alternative is to plot a subset of the prices. So, after some analysis, we plot only values from 0 to $100,000.
sns.distplot(df[(df['price'] > 0) & (df['price'] < 100000)].price, bins=100, norm_hist=False, hist=True, kde=False)
_____no_output_____
MIT
Parte 1/notebooks/3-descriptive_stats.ipynb
mbs8/IF679-ciencia-de-dados
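For completeness, the log10 alternative mentioned above could look like the following sketch; it mostly confirms that the bulk of prices sits around 10^4, i.e. roughly $10,000:

import numpy as np

# Histogram of log10(price) for strictly positive prices
np.log10(df[df['price'] > 0]['price']).hist(bins=50)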
Current odometer reading (miles driven by the vehicle)
# Univariate statistics of the odometer readings
# Note that null values are discarded for this analysis
print(
    "Odômetro do veículo:\n"
    "Média: " + floatStr(df[df['odometer'] > 0].odometer.mean()) + "\n" +
    "Desvio padrão: " + floatStr(df[df['odometer'] > 0].odometer.std()) + "\n" +
    "Mediana: " + floatStr(df[df['odometer'] > 0].odometer.median()) + "\n" +
    "IQR: " + floatStr(df['odometer'].describe()[6] - df['odometer'].describe()[4]) + "\n" +
    "Moda: " + floatStr(df[df['odometer'] > 0].odometer.mode().loc[0])
)
Odômetro do veículo:
Média: 99705.09
Desvio padrão: 111570.94
Mediana: 92200.0
IQR: 92054.0
Moda: 150000.0
MIT
Parte 1/notebooks/3-descriptive_stats.ipynb
mbs8/IF679-ciencia-de-dados
Here we also have a wide range of values. Only 492 of them are above 800,000 recorded miles, so for the analysis we will use this interval.
sns.distplot(df[(df['odometer'] > 0) & (df['odometer'] < 400000)].odometer, bins=100, norm_hist=False, hist=True, kde=False)
_____no_output_____
MIT
Parte 1/notebooks/3-descriptive_stats.ipynb
mbs8/IF679-ciencia-de-dados
Visualizing the number of listings per vehicle manufacturer. We perform a visual analysis to try to identify which brands are the most popular in the used-car market.
# Plot the brands with the most listings
manufacturers = df['manufacturer'].value_counts().drop(df['manufacturer'].value_counts().index[8]).drop(df['manufacturer'].value_counts().index[13:])
sns.set()
plt.figure(figsize=(10, 5))
sns.barplot(x=manufacturers.index, y=manufacturers)
print("The 3 most advertised brands (Ford, Chevrolet, Toyota) account for "
      + str(round(sum(df['manufacturer'].value_counts().values[0:3]) / df['manufacturer'].count() * 100, 2))
      + "% of this market.")

filter_list = ['ford', 'chevrolet', 'toyota', 'nissan', 'honda']
filtereddf = df[df.manufacturer.isin(filter_list)]
ax = sns.boxplot(x="manufacturer", y="price", data=filtereddf[filtereddf['price'] < 40000])
_____no_output_____
MIT
Parte 1/notebooks/3-descriptive_stats.ipynb
mbs8/IF679-ciencia-de-dados
Visualizing the relationship between price and drivetrain. Here we can compare how prices vary according to the vehicle's drivetrain:
* 4wd: four-wheel drive
* rwd: rear-wheel drive
* fwd: front-wheel drive
We compare mean, median, and count. However, we already saw in previous analyses that the median gives a more reasonable value, which is why we sort by it.
df[df['drive'] != 'undefined'].groupby(['drive']).agg(['mean','median','count'])['price'].sort_values(by='median', ascending=False)
_____no_output_____
MIT
Parte 1/notebooks/3-descriptive_stats.ipynb
mbs8/IF679-ciencia-de-dados
Bivariate statistics. Here we try to find out whether the numeric quantities have any correlation. First we analyze Spearman's method, then Pearson's; next, we try some visualizations of these relationships.
# Apply some bounds before computing correlations between the variables
car = df[(df['odometer'] > 0) & (df['odometer'] < 400000)]
car = car[(car['price'] > 0) & (car['price'] < 100000)]
car = car[car['year'] >= 1985]
car = car.drop(['lat', 'long'], axis=1)
car.cov()
car.corr(method='spearman')
car.corr(method='pearson')

# Price vs. miles driven for the 3 most popular brands
filter_list = ['ford', 'chevrolet', 'toyota']
car[car['manufacturer'].isin(filter_list)].plot.scatter(x='odometer', y='price')

g = sns.FacetGrid(car[car['manufacturer'] != 'undefined'], col="manufacturer", hue='drive')
g.map(sns.scatterplot, "year", "price")
g.add_legend()
# Click on the small image to expand
_____no_output_____
MIT
Parte 1/notebooks/3-descriptive_stats.ipynb
mbs8/IF679-ciencia-de-dados
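A correlation matrix is usually easier to scan as a heatmap; a minimal sketch over the filtered `car` frame built above:

# Visual summary of the Pearson correlation matrix computed above
plt.figure(figsize=(8, 6))
sns.heatmap(car.corr(method='pearson'), annot=True, cmap='coolwarm', center=0)
plt.show()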
Welcome to the introductory template of the Python Graph Gallery. Here is how to proceed to add a new `.ipynb` file that will be converted to a blogpost in the gallery!

Notebook Metadata
It is very important to add the following fields to your notebook. It helps building the page later on:
- **slug**: the URL of the blogpost. It should be exactly the same as the file title. Example: `70-basic-density-plot-with-seaborn`
- **chartType**: the chart type, like density or heatmap. For a complete list see [here](https://github.com/holtzy/The-Python-Graph-Gallery/blob/master/src/util/sectionDescriptions.js); it must be one of the `id` options.
- **title**: what will be written in big on top of the blogpost! Use html syntax there.
- **description**: what will be written just below the title, centered text.
- **keyword**: list of keywords related to the blogpost
- **seoDescription**: a description for the blogpost meta. Should be a bit shorter than the description and must not contain any html syntax.

Add a chart description
A chart example always comes with some explanation. It must:
- contain keywords
- link to related pages like the parent page (graph section)
- give explanations: in depth for complicated charts, high level for beginner-level charts

Add a chart
import seaborn as sns, numpy as np

np.random.seed(0)
x = np.random.randn(100)
ax = sns.distplot(x)
_____no_output_____
0BSD
src/notebooks/255-percentage-stacked-area-chart.ipynb
nrslt/The-Python-Graph-Gallery
Airbnb - Rio de Janeiro
* Download the [data](http://insideairbnb.com/get-the-data.html)
* We downloaded `listings.csv` for all monthly dates available

Questions
1. What was the price and supply behavior before and during the pandemic?
2. Does a title in English or Portuguese impact the price?
3. What features correlate with the price? Can we predict a price? Which features matter?
import numpy as np
import pandas as pd
import seaborn as sns

import glob
import re

import pendulum
import tqdm

import matplotlib.pyplot as plt

import langid
langid.set_languages(['en', 'pt'])
_____no_output_____
MIT
airbnb-rj-1/Data Treatment.ipynb
reneoctavio/analysis
Read files. Read all 30 files and get their dates.
files = sorted(glob.glob('data/listings*.csv'))

df = []
for f in files:
    # Extract the scraping date embedded in each filename
    date = pendulum.from_format(re.findall(r"\d{4}_\d{2}_\d{2}", f)[0], fmt="YYYY_MM_DD").naive()
    csv = pd.read_csv(f)
    csv["date"] = date
    df.append(csv)
df = pd.concat(df)
df
_____no_output_____
MIT
airbnb-rj-1/Data Treatment.ipynb
reneoctavio/analysis
Deal with NaNs
* Drop `neighbourhood_group`, as it is all NaNs
* Fill `reviews_per_month` with zeros (if there is no review, then reviews per month are zero)
* Keep `name` for now
* Drop the `host_name` column, as there is not any null `host_id`
* Keep `last_review` too, as there are rooms with no review
df.isna().any()
df = df.drop(["host_name", "neighbourhood_group"], axis=1)
df["reviews_per_month"] = df["reviews_per_month"].fillna(0.)
df.head()
_____no_output_____
MIT
airbnb-rj-1/Data Treatment.ipynb
reneoctavio/analysis
Detect `name` language
* Clean strings for evaluation
* Remove common neighbourhood names in Portuguese from the `name` column to reduce misprediction
* Remove several non-alphanumeric characters
* Detect language using [langid](https://github.com/saffsd/langid.py)
* We restricted it to pt and en; there are very few rooms listed in other languages
* Drop the `name` column
import unicodedata

# Build a stopword set from neighbourhood names (plus accent-stripped variants
# and a few common Rio-related words) so they do not bias language detection.
stopwords = pd.unique(df["neighbourhood"])
stopwords = [re.sub(r"[\(\)]", "", x.lower().strip()).split() for x in stopwords]
stopwords = [x for item in stopwords for x in item]
stopwords += [unicodedata.normalize("NFKD", x).encode('ASCII', 'ignore').decode() for x in stopwords]
stopwords += ["rio", "janeiro", "copa", "arpoador", "pepê", "pepe", "lapa", "morro", "corcovado"]
stopwords = set(stopwords)

docs = [re.sub(r"[\-\_\\\/\,\;\:\!\+\’\%\&\d\*\#\"\´\`\.\|\(\)\[\]\@\'\»\«\>\<\❤️\…]", " ", str(x)) for x in df["name"].tolist()]
docs = [" ".join(x.lower().strip().split()) for x in docs]
# Keep only alphanumeric characters and spaces
docs = ["".join(e for e in x if (e.isalnum() or e == " ")) for x in docs]

ndocs = []
for doc in tqdm.tqdm(docs):
    ndocs.append(" ".join([x for x in doc.split() if x not in stopwords]))
docs = ndocs

results = []
for d in tqdm.tqdm(docs):
    results.append(langid.classify(d)[0])
df["language"] = results

# Because we transformed NaNs into strings, set those detections back to NaN too
df.loc[df["name"].isna(), "language"] = pd.NA
_____no_output_____
MIT
airbnb-rj-1/Data Treatment.ipynb
reneoctavio/analysis
* To test accuracy, manually label a sample of 383 out of 88191 unique names (95% confidence level, 5% margin of error)
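As a sanity check on that sample size, here is a quick sketch of the standard proportion sample-size formula with a finite-population correction (assuming the worst-case proportion p = 0.5):

import math

z, p, e, N = 1.96, 0.5, 0.05, 88191   # 95% confidence level, 5% margin of error
n0 = z**2 * p * (1 - p) / e**2        # ~384.16 for an infinite population
n = n0 / (1 + (n0 - 1) / N)           # finite-population correction
print(math.ceil(n))                   # 383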
df.loc[~df["name"].isna()].drop_duplicates("name").shape df.loc[~df["name"].isna()].drop_duplicates("name")[["name", "language"]].sample(n=383, random_state=42).to_csv("lang_pred_1.csv") lang_pred = pd.read_csv("lang_pred.csv", index_col=0) lang_pred.head() overall_accuracy = (lang_pred["pred"] == lang_pred["true"]).sum() / lang_pred.shape[0] pt_accuracy = (lang_pred[lang_pred["true"] == "pt"]["true"] == lang_pred[lang_pred["true"] == "pt"]["pred"]).sum() / lang_pred[lang_pred["true"] == "pt"].shape[0] en_accuracy = (lang_pred[lang_pred["true"] == "en"]["true"] == lang_pred[lang_pred["true"] == "en"]["pred"]).sum() / lang_pred[lang_pred["true"] == "en"].shape[0] print(f"Overall accuracy: {overall_accuracy*100}%") print(f"Portuguese accuracy: {pt_accuracy*100}%") print(f"English accuracy: {en_accuracy*100}%") df = df.drop("name", axis=1) df.head() df["language"].value_counts()
_____no_output_____
MIT
airbnb-rj-1/Data Treatment.ipynb
reneoctavio/analysis
Calculate how many times a room appeared
* There are 30 months of data, and rooms appear multiple times
* Calculate, for a specific date, how many times the same room appeared up to that date
df = df.set_index(["id", "date"]) df["appearances"] = df.groupby(["id", "date"])["host_id"].count().unstack().cumsum(axis=1).stack() df = df.reset_index() df.head()
_____no_output_____
MIT
airbnb-rj-1/Data Treatment.ipynb
reneoctavio/analysis
Days since last review
* Calculate the days since the last review
* Then categorize them by how long ago that review happened
df.loc[:, "last_review"] = pd.to_datetime(df["last_review"], format="%Y/%m/%d") # For each scraping date, consider the last date to serve as comparision as the maximum date last_date = df.groupby("date")["last_review"].max() df["last_date"] = df.apply(lambda row: last_date.loc[row["date"]], axis=1) df["days_last_review"] = (df["last_date"] - df["last_review"]).dt.days df = df.drop("last_date", axis=1) df.head() df["days_last_review"].describe() def categorize_last_review(days_last_review): """Transform days since last review into categories Transform days since last review into one of those categories: last_week, last_month, last_half_year, last_year, last_two_years, long_time_ago, or never Args: days_last_review (int): Days since the last review Returns: str: A string with the category name. """ if days_last_review <= 7: return "last_week" elif days_last_review <= 30: return "last_month" elif days_last_review <= 182: return "last_half_year" elif days_last_review <= 365: return "last_year" elif days_last_review <= 730: return "last_two_years" elif days_last_review > 730: return "long_time_ago" else: return "never" df.loc[:, "last_review"] = df.apply(lambda row: categorize_last_review(row["days_last_review"]), axis=1) df = df.drop(["days_last_review"], axis=1) df.head() df = df.set_index(["id", "date"]) df.loc[:, "appearances"] = df["appearances"].astype(int) df.loc[:, "host_id"] = df["host_id"].astype("category") df.loc[:, "neighbourhood"] = df["neighbourhood"].astype("category") df.loc[:, "room_type"] = df["room_type"].astype("category") df.loc[:, "last_review"] = df["last_review"].astype("category") df.loc[:, "language"] = df["language"].astype("category") df df.to_pickle("data.pkl")
_____no_output_____
MIT
airbnb-rj-1/Data Treatment.ipynb
reneoctavio/analysis
Distributions
* Check the distribution of features
df = pd.read_pickle("data.pkl") df.head() df["latitude"].hist(bins=250) df["longitude"].hist(bins=250) df["price"].hist(bins=250) df["minimum_nights"].hist(bins=250) df["number_of_reviews"].hist() df["reviews_per_month"].hist(bins=250) df["calculated_host_listings_count"].hist(bins=250) df["availability_365"].hist() df["appearances"].hist(bins=29) df.describe()
_____no_output_____
MIT
airbnb-rj-1/Data Treatment.ipynb
reneoctavio/analysis
Limits
* We are analyzing mostly for touristic purposes, so keep short-term rentals only
* Prices between 10 and 10000 (the luxury Copacabana Palace penthouse at 8000, for example)
* Short-term rentals (minimum_nights < 31)
* It is impossible to have more than 31 reviews per month
df = pd.read_pickle("data.pkl") total_records = len(df) outbound_values = (df["price"] < 10) | (df["price"] > 10000) df = df[~outbound_values] print(f"Removed values {outbound_values.sum()}, {outbound_values.sum()*100/total_records}%") long_term = df["minimum_nights"] >= 31 df = df[~long_term] print(f"Removed values {long_term.sum()}, {long_term.sum()*100/total_records}%") reviews_limit = df["reviews_per_month"] > 31 df = df[~reviews_limit] print(f"Removed values {reviews_limit.sum()}, {reviews_limit.sum()*100/total_records}%")
Removed values 2, 0.00019089597982611286%
MIT
airbnb-rj-1/Data Treatment.ipynb
reneoctavio/analysis
Log-transform skewed variables
* Most numerical variables are skewed, so log-transform them
df.describe()

# number_of_reviews, reviews_per_month, availability_365 have zeros, thus sum one to all
df["number_of_reviews"] = np.log(df["number_of_reviews"] + 1)
df["reviews_per_month"] = np.log(df["reviews_per_month"] + 1)
df["availability_365"] = np.log(df["availability_365"] + 1)
df["price"] = np.log(df["price"])
df["minimum_nights"] = np.log(df["minimum_nights"])
df["calculated_host_listings_count"] = np.log(df["calculated_host_listings_count"])
df["appearances"] = np.log(df["appearances"])
df.describe()
_____no_output_____
MIT
airbnb-rj-1/Data Treatment.ipynb
reneoctavio/analysis
Extreme outliers
* Most outliers are clearly mistyped values (one can check these room ids on the website)
* First remove extreme outliers based on large deviations within the same `id` (eliminating rate jumps for the same room)
* Then remove those relative to the same scraping `date`, `neighbourhood` and `room_type`
df = df.reset_index()

# Extreme-outlier fence per room id (Tukey): anything at or above Q3 + 3*IQR
q25 = df.groupby(["id"])["price"].quantile(0.25)
q75 = df.groupby(["id"])["price"].quantile(0.75)
ext = q75 + 3 * (q75 - q25)
ext = ext[(q75 - q25) > 0.]

affected_rows = []
multiple_id = df[df["id"].isin(ext.index)]
for row in tqdm.tqdm(multiple_id.itertuples(), total=len(multiple_id)):
    if row.price >= ext.loc[row.id]:
        affected_rows.append(row.Index)
df = df.drop(affected_rows)
print(f"Removed values {len(affected_rows)}, {len(affected_rows)*100/total_records}%")

# Remove extreme outliers per neighbourhood, room_type and scraping date
q25 = df.groupby(["date", "neighbourhood", "room_type"])["price"].quantile(0.25)
q75 = df.groupby(["date", "neighbourhood", "room_type"])["price"].quantile(0.75)
ext = q75 + 3 * (q75 - q25)
ext

affected_rows = []
for row in tqdm.tqdm(df.itertuples(), total=len(df)):
    if row.price >= ext.loc[(row.date, row.neighbourhood, row.room_type)]:
        affected_rows.append(row.Index)
df = df.drop(affected_rows)
print(f"Removed values {len(affected_rows)}, {len(affected_rows)*100/total_records}%")

df.describe()
df["price"].hist()
df.to_pickle("treated_data.pkl")
_____no_output_____
MIT
airbnb-rj-1/Data Treatment.ipynb
reneoctavio/analysis
[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/real-itu/modern-ai-course/blob/master/lecture-04/lab.ipynb)

Lab 4 - Math Stats
You're given the following dataset:
import random random.seed(0) x = [random.gauss(0, 1)**2 for _ in range(20)] print(x)
[0.8868279034128675, 1.9504304025306558, 0.46201173092655257, 0.13727289350107572, 1.0329650747173147, 0.00520129768651921, 0.032111381051647944, 0.6907259056240523, 1.713578821550704, 0.037592456206679545, 0.9865449735727571, 0.418585230265908, 0.11133432341026718, 2.7082355435792898, 0.3123577703699347, 0.26435707416151544, 5.779789763931721, 2.344213906200638, 0.6343578347545124, 4.014607380283022]
MIT
lecture-04/lab.ipynb
LuxTheDude/modern-ai-course
> Compute the min, max, mean, median, standard deviation and variance of x
# Your code here
import math

# min
mi = min(x)
print("min: " + str(mi))

# max
ma = max(x)
print("max: " + str(ma))

# mean
mean = sum(x)/len(x)
print("mean: " + str(mean))

# median (note: for an even-length list this takes the upper of the two middle
# values; the textbook median would average sorted(x)[9] and sorted(x)[10])
median = sorted(x)[int(len(x)/2)]
print("median: " + str(median))

# standard deviation and (population) variance
sq_dev_sum = 0
for v in x:
    sq_dev_sum += math.pow(v - mean, 2)
variance = sq_dev_sum / len(x)
stddv = math.sqrt(variance)
print("standard deviation: " + str(stddv))
print("variance: " + str(variance))
min: 0.00520129768651921 max: 5.779789763931721 mean: 1.2261550833868817 median: 0.6907259056240523 standard deviation: 1.4717408201314568 variance: 2.166021041641213
MIT
lecture-04/lab.ipynb
LuxTheDude/modern-ai-course
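For comparison, Python's built-in `statistics` module computes the textbook median (averaging the two middle values of an even-length list) and offers both variance conventions, so its numbers differ slightly from the ones printed above:

import statistics

print("median:", statistics.median(x))             # averages sorted(x)[9] and sorted(x)[10]
print("pop. variance:", statistics.pvariance(x))   # n denominator, matches the result above
print("sample variance:", statistics.variance(x))  # n-1 denominator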
Vectors
You're given the two 3-dimensional vectors a and b below.
a = [1, 3, 5] b = [2, 9, 13]
_____no_output_____
MIT
lecture-04/lab.ipynb
LuxTheDude/modern-ai-course
> Compute
> 1. $a + b$
> 2. $2a-3b$
> 3. $ab$ - the inner product
# Your code here
first = [ai + bi for ai, bi in zip(a, b)]
print(first)

second = [2*ai - 3*bi for ai, bi in zip(a, b)]
print(second)

third = sum(ai * bi for ai, bi in zip(a, b))
print(third)
[3, 12, 18] [-4, -21, -29] 94
MIT
lecture-04/lab.ipynb
LuxTheDude/modern-ai-course
Gradients

Given the function $f(x,y) = 3x^2 + 6y$

> Compute the partial gradients $\frac{df}{dx}$ and $\frac{df}{dy}$

Your answer here

$\frac{df}{dx} = 6x$

$\frac{df}{dy} = 6$

The function above corresponds to the following computational graph.

*(Image `sol (1).png` omitted: the graph has nodes $a = x^2$, $b = 3a$, $c = 6y$, and $f = b + c$.)*

> Denote each arrow with the corresponding partial gradient, e.g. $\frac{df}{dc} = 1$ between $f$ and $c$, and use the generalized chain rule on graphs to compute the gradients $\frac{df}{dc}$, $\frac{df}{db}$, $\frac{df}{da}$, $\frac{df}{dx}$, $\frac{df}{dy}$.
Your answer here

$\frac{df}{dc} = 1$ $\frac{dc}{d6} = y$ $\frac{dc}{dy} = 6$ $\frac{df}{db} = 1$ $\frac{db}{d3} = a$ $\frac{db}{da} = 3$ $\frac{da}{dx} = 2x$

----------------------------------

$\frac{df}{da} = \frac{df}{db} * \frac{db}{da} = 1 * 3 = 3$

$\frac{df}{dx} = \frac{df}{db} * \frac{db}{da} * \frac{da}{dx} = 1 * 3 * 2x = 6x$

$\frac{df}{dy} = \frac{df}{dc} * \frac{dc}{dy} = 1 * 6 = 6$

Autodiff
This exercise is quite hard. It's OK if you don't finish it, but you should try your best! You are given the following function (pseudo-code):
def parents_grads(node):
    """
    returns parents of node and the gradients of node w.r.t each parent
    e.g. in the example graph above parents_grads(f) would return:
    [(b, df/db), (c, df/dc)]
    """
_____no_output_____
MIT
lecture-04/lab.ipynb
LuxTheDude/modern-ai-course
> Complete the `backprop` method below to create a recursive algorithm such that calling `backward(node)` computes the gradient of `node` w.r.t. every (upstream - to the left) node in the computational graph. Every node has a `node.grad` attribute, its numerical gradient, which is initialized to `0.0`. The algorithm should modify this property directly; it should not return anything. Assume the gradients from `parents_grads` can be treated like real numbers, so you can e.g. multiply and add them.
def backprop(node, df_dnode):
    node.grad += df_dnode
    # Your code here
    # Chain rule: the gradient reaching each parent is the local edge gradient
    # multiplied by the gradient that reached this node.
    for parent, grad in parents_grads(node):
        backprop(parent, grad * df_dnode)

def backward(node):
    """
    Computes the gradient of every (upstream) node in the computational graph w.r.t. node.
    """
    backprop(node, 1.0) # The gradient of a node w.r.t. itself is 1 by definition.
_____no_output_____
MIT
lecture-04/lab.ipynb
LuxTheDude/modern-ai-course
Ok, now let's try to actually make it work! We'll define a class `Node` which contains the node value, gradient and parents and their gradients
from typing import Sequence, Tuple

class Node:
    def __init__(self, value: float, parents_grads: Sequence[Tuple['Node', float]]):
        self.value = value
        self.grad = 0.0
        self.parents_grads = parents_grads

    def __repr__(self):
        return "Node(value=%.4f, grad=%.4f)" % (self.value, self.grad)
_____no_output_____
MIT
lecture-04/lab.ipynb
LuxTheDude/modern-ai-course
So far no magic. We still haven't defined how we get the `parents_grads`, but we'll get there. Now move the `backprop` and `backward` functions into the class, and modify them so they work with the class.
# Your code here
from typing import Sequence, Tuple

class Node:
    def __init__(self, value: float, parents_grads: Sequence[Tuple['Node', float]]):
        self.value = value
        self.grad = 0.0
        self.parents_grads = parents_grads

    def __repr__(self):
        return "Node(value=%.4f, grad=%.4f)" % (self.value, self.grad)

    def backprop(self, df_dnode):
        self.grad += df_dnode
        for parent, grad in self.parents_grads:
            parent.backprop(grad * df_dnode)

    def backward(self):
        """
        Computes the gradient of every (upstream) node in the computational graph w.r.t. node.
        """
        self.backprop(1.0) # The gradient of a node w.r.t. itself is 1 by definition.
_____no_output_____
MIT
lecture-04/lab.ipynb
LuxTheDude/modern-ai-course
Now let's create a simple graph: $y = x^2$, and compute it for $x=2$. We'll set the `parents_grads` directly based on our knowledge that $\frac{dx^2}{dx}=2x$
x = Node(2.0, []) y = Node(x.value**2, parents_grads=[(x, 2*x.value)])
_____no_output_____
MIT
lecture-04/lab.ipynb
LuxTheDude/modern-ai-course
And print the two nodes
print("x", x, "y", y)
x Node(value=2.0000, grad=0.0000) y Node(value=4.0000, grad=0.0000)
MIT
lecture-04/lab.ipynb
LuxTheDude/modern-ai-course
> Verify that the `y.backward()` call below computes the correct gradients
y.backward() print("x", x, "y", y)
x Node(value=2.0000, grad=4.0000) y Node(value=4.0000, grad=1.0000)
MIT
lecture-04/lab.ipynb
LuxTheDude/modern-ai-course
$\frac{dy}{dx}$ should be 4 and $\frac{dy}{dy}$ should be 1.

Ok, so it seems to work, but it's not very easy to use, since you have to define all the `parents_grads` whenever you're creating new nodes. **Here's the trick.** We can make a function `square(node: Node) -> Node` which can square any Node. See below
def square(node: Node) -> Node:
    return Node(node.value**2, [(node, 2*node.value)])
_____no_output_____
MIT
lecture-04/lab.ipynb
LuxTheDude/modern-ai-course
Let's verify that it works
x = Node(3.0, []) y = square(x) print("x", x, "y", y) y.backward() print("x", x, "y", y)
x Node(value=3.0000, grad=0.0000) y Node(value=9.0000, grad=0.0000) x Node(value=3.0000, grad=6.0000) y Node(value=9.0000, grad=1.0000)
MIT
lecture-04/lab.ipynb
LuxTheDude/modern-ai-course
Now we're getting somewhere. These calls to square can of course be chained
x = Node(3.0, []) y = square(x) z = square(y) print("x", x, "y", y, "z", z) z.backward() print("x", x, "y", y,"z", z)
x Node(value=3.0000, grad=0.0000) y Node(value=9.0000, grad=0.0000) z Node(value=81.0000, grad=0.0000) x Node(value=3.0000, grad=108.0000) y Node(value=9.0000, grad=18.0000) z Node(value=81.0000, grad=1.0000)
MIT
lecture-04/lab.ipynb
LuxTheDude/modern-ai-course
> Compute the $\frac{dz}{dx}$ gradient by hand and verify that it's correct

Your answer here

$\frac{dz}{dx} = \frac{dz}{dy} * \frac{dy}{dx} = 2y * 2x = 2 (x^2) * 2x = 2 * 3^2 * 2 * 3 = 108$

Similarly we can create functions like this for all the common operators: plus, minus, multiplication, etc. With enough base operators like this we can create any computation we want, and compute the gradients automatically with `.backward()`.

> Finish the plus function below and verify that it works
def plus(a: Node, b: Node) -> Node:
    """
    Computes a+b
    """
    # Your code here
    return Node(a.value + b.value, [(a, 1), (b, 1)])

x = Node(4.0, [])
y = Node(5.0, [])
z = plus(x, y)
print("x", x, "y", y, "z", z)
z.backward()
print("x", x, "y", y, "z", z)
x Node(value=4.0000, grad=0.0000) y Node(value=5.0000, grad=0.0000) z Node(value=9.0000, grad=0.0000) x Node(value=4.0000, grad=1.0000) y Node(value=5.0000, grad=1.0000) z Node(value=9.0000, grad=1.0000)
MIT
lecture-04/lab.ipynb
LuxTheDude/modern-ai-course
> Finish the multiply function below and verify that it works:
def multiply(a: Node, b: Node) -> Node:
    """
    Computes a*b
    """
    # Your code here
    return Node(a.value*b.value, [(a, b.value), (b, a.value)])

x = Node(4.0, [])
y = Node(5.0, [])
z = multiply(x, y)
print("x", x, "y", y, "z", z)
z.backward()
print("x", x, "y", y, "z", z)
x Node(value=4.0000, grad=0.0000) y Node(value=5.0000, grad=0.0000) z Node(value=20.0000, grad=0.0000) x Node(value=4.0000, grad=5.0000) y Node(value=5.0000, grad=4.0000) z Node(value=20.0000, grad=1.0000)
MIT
lecture-04/lab.ipynb
LuxTheDude/modern-ai-course
We'll stop here, but with just a few more functions we could compute a lot of common computations, and get their gradients automatically!

This is super nice, but it's kind of annoying having to write `plus(a,b)`. Wouldn't it be nice if we could just write `a+b`? With python operator overloading we can! If we define the `__add__` method on `Node`, this will be executed instead of the regular plus operation when we add something to a `Node`.

> Modify the `Node` class so that it overloads the plus, `__add__(self, other)`, and multiplication, `__mul__(self, other)`, operators and run the code below to verify that it works.
# Your code here
class Node:
    def __init__(self, value: float, parents_grads: Sequence[Tuple['Node', float]]):
        self.value = value
        self.grad = 0.0
        self.parents_grads = parents_grads

    def __repr__(self):
        return "Node(value=%.4f, grad=%.4f)" % (self.value, self.grad)

    def backprop(self, df_dnode):
        self.grad += df_dnode
        for parent, grad in self.parents_grads:
            parent.backprop(grad * df_dnode)

    def backward(self):
        """
        Computes the gradient of every (upstream) node in the computational graph w.r.t. node.
        """
        self.backprop(1.0) # The gradient of a node w.r.t. itself is 1 by definition.

    def __add__(self, other):
        return Node(self.value + other.value, [(self, 1), (other, 1)])

    def __mul__(self, other):
        return Node(self.value * other.value, [(self, other.value), (other, self.value)])

a = Node(2.0, [])
b = Node(3.0, [])
c = Node(4.0, [])
d = a*b + c # Behold the magic of operator overloading!
print("a", a, "b", b, "c", c, "d", d)
d.backward()
print("a", a, "b", b, "c", c, "d", d)
a Node(value=2.0000, grad=0.0000) b Node(value=3.0000, grad=0.0000) c Node(value=4.0000, grad=0.0000) d Node(value=10.0000, grad=0.0000) a Node(value=2.0000, grad=3.0000) b Node(value=3.0000, grad=2.0000) c Node(value=4.0000, grad=1.0000) d Node(value=10.0000, grad=1.0000)
MIT
lecture-04/lab.ipynb
LuxTheDude/modern-ai-course
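As a postscript to the lab above, here is a minimal sketch of how a couple more primitives could be built on the same `Node` class (the `exp` and `sub` helpers are our own additions, not part of the lab):

import math

def exp(node: Node) -> Node:
    # d/dx e^x = e^x, so the edge gradient equals the output value
    v = math.exp(node.value)
    return Node(v, [(node, v)])

def sub(a: Node, b: Node) -> Node:
    # d(a-b)/da = 1 and d(a-b)/db = -1
    return Node(a.value - b.value, [(a, 1.0), (b, -1.0)])

x = Node(1.0, [])
y = exp(sub(x, Node(0.5, [])))  # y = e^(x - 0.5)
y.backward()
print(x)  # dy/dx = e^0.5, roughly 1.6487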
To Do
- Download a dataset from Domain
- Convert all string columns to unique integers ---> could use hashes
import syft as sy

domain_node = sy.login(email="[email protected]", password="changethis", port=8081)
domain_node.store.pandas

import pandas as pd
canada = pd.read_csv("../../trade_demo/datasets/ca - feb 2021.csv")
canada.head()

import hashlib
hashlib.algorithms_available

test_string = "February 2021"
hashlib.md5(test_string.encode("utf-8"))
int(hashlib.sha256(test_string.encode("utf-8")).hexdigest(), 16) % 10**8

def convert_string(s: str, to_int: bool = True, digits: int = 15):
    """Maps a string to a SHA-256 hash; returns the hex digest, or an int
    truncated to `digits` digits (truncation makes collisions more likely)."""
    if to_int:
        return int(hashlib.sha256(s.encode("utf-8")).hexdigest(), 16) % 10**digits
    else:
        return hashlib.sha256(s.encode("utf-8")).hexdigest()

convert_string("Canada", to_int=False)
convert_string("Canada", to_int=True, digits=10)
convert_string("Canada", to_int=True, digits=260)

canada.columns
#domain_node.load_dataset(canada)
canada.shape
domain_node.datasets.pandas
canada['Trade Flow']
domain_node.store.pandas
_____no_output_____
Apache-2.0
notebooks/Experimental/Ishan/ADP Demo/Old Versions/DataFrame to NumPy.ipynb
Noob-can-Compile/PySyft
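As a hypothetical sketch of the to-do above (the column loop is our own illustration, not from the notebook), the string columns could be converted like this:

# Map every object (string) column through convert_string.
for col in canada.select_dtypes(include="object").columns:
    canada[col] = canada[col].astype(str).map(lambda s: convert_string(s, to_int=True, digits=10))
canada.head()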
```{note}
This feature requires MPI, and may not be able to be run on Colab.
```

Distributed Variables
At times when you need to perform a computation using large input arrays, you may want to perform that computation in multiple processes, where each process operates on some subset of the input values. This may be done purely for performance reasons, or it may be necessary because the entire input will not fit in the memory of a single machine. In any case, this can be accomplished in OpenMDAO by declaring those inputs and outputs as distributed. By definition, a variable is distributed if each process contains only a part of the whole variable. Conversely, when a variable is not distributed (i.e., serial), each process contains a copy of the entire variable. A component that has at least one distributed variable can also be called a distributed component.

We've already seen that by using [src_indices](connect-with-src-indices), we can connect an input to only a subset of an output variable. By giving different values for src_indices in each MPI process, we can distribute computations on a distributed output across the processes. All of the scenarios that involve connecting distributed and serial variables are detailed in [Connections involving distributed variables](../working_with_groups/dist_serial.ipynb).

Example: Simple Component with Distributed Input and Output
The following example shows how to create a simple component, *SimpleDistrib*, that takes a distributed variable as an input and computes a distributed output. The calculation is divided across the available processes, but the details of that division are not contained in the component. In fact, the input is sized based on its connected source using the "shape_by_conn" argument.
%%px import numpy as np import openmdao.api as om class SimpleDistrib(om.ExplicitComponent): def setup(self): # Distributed Input self.add_input('in_dist', shape_by_conn=True, distributed=True) # Distributed Output self.add_output('out_dist', copy_shape='in_dist', distributed=True) def compute(self, inputs, outputs): x = inputs['in_dist'] # "Computationally Intensive" operation that we wish to parallelize. f_x = x**2 - 2.0*x + 4.0 outputs['out_dist'] = f_x
_____no_output_____
Apache-2.0
openmdao/docs/openmdao_book/features/core_features/working_with_components/distributed_components.ipynb
markleader/OpenMDAO
In the next part of the example, we take the `SimpleDistrib` component, place it into a model, and run it. Suppose the vector of data we want to process has 7 elements. We have 4 processors available for computation, so if we distribute them as evenly as we can, 3 procs can handle 2 elements each, and the 4th processor picks up the last one. OpenMDAO's utilities include the `evenly_distrib_idxs` function, which computes the sizes and offsets for all ranks. The sizes determine how much of the array to allocate on any specific rank. The offsets indicate where the local portion of the array starts, and, in this example, they are used to set the initial values properly. In this case, the initial value for the full distributed input "in_dist" is a vector of 7 values between 3.0 and 9.0, and each processor holds a 1- or 2-element piece of it.
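To see the split logic in isolation before the full example, here is a minimal sketch of the 4-rank, 7-element case described above (the commented values are what that description implies):

```python
from openmdao.utils.array_utils import evenly_distrib_idxs

# Split 7 elements across 4 ranks; the remainder goes to the earlier ranks.
sizes, offsets = evenly_distrib_idxs(4, 7)
print(sizes)    # expected: [2 2 2 1]
print(offsets)  # expected: [0 2 4 6]
```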
%%px from openmdao.utils.array_utils import evenly_distrib_idxs from openmdao.utils.mpi import MPI size = 7 if MPI: comm = MPI.COMM_WORLD rank = comm.rank sizes, offsets = evenly_distrib_idxs(comm.size, size) else: # When running in serial, the entire variable is on rank 0. rank = 0 sizes = {rank : size} offsets = {rank : 0} prob = om.Problem() model = prob.model # Create a distributed source for the distributed input. ivc = om.IndepVarComp() ivc.add_output('x_dist', np.zeros(sizes[rank]), distributed=True) model.add_subsystem("indep", ivc) model.add_subsystem("D1", SimpleDistrib()) model.connect('indep.x_dist', 'D1.in_dist') prob.setup() # Set initial values of distributed variable. x_dist_init = 3.0 + np.arange(size)[offsets[rank]:offsets[rank] + sizes[rank]] prob.set_val('indep.x_dist', x_dist_init) prob.run_model() # Values on each rank. for var in ['indep.x_dist', 'D1.out_dist']: print(var, prob.get_val(var)) # Full gathered values. for var in ['indep.x_dist', 'D1.out_dist']: print(var, prob.get_val(var, get_remote=True)) print('') %%px from openmdao.utils.assert_utils import assert_near_equal assert_near_equal(prob.get_val(var, get_remote=True), np.array([7., 12., 19., 28., 39., 52., 67.]))
_____no_output_____
Apache-2.0
openmdao/docs/openmdao_book/features/core_features/working_with_components/distributed_components.ipynb
markleader/OpenMDAO
Note that we created a connection source 'x_dist' that passes its value to 'D1.in_dist'. OpenMDAO requires a source for non-constant inputs, and usually creates one automatically as an output of a component referred to as an 'Auto-IVC'. However, the automatic creation is not supported for distributed variables. We must manually create an `IndepVarComp` and connect it to our input. When using distributed variables, OpenMDAO can't always size the component inputs based on the shape of the connected source. In this example, the component determines its own split using `evenly_distrib_idxs`. This requires that the component know the full vector size, which is passed in via the option 'vec_size'.
%%px import numpy as np import openmdao.api as om from openmdao.utils.array_utils import evenly_distrib_idxs from openmdao.utils.mpi import MPI class SimpleDistrib(om.ExplicitComponent): def initialize(self): self.options.declare('vec_size', types=int, default=1, desc="Total size of vector.") def setup(self): comm = self.comm rank = comm.rank size = self.options['vec_size'] sizes, _ = evenly_distrib_idxs(comm.size, size) mysize = sizes[rank] # Distributed Input self.add_input('in_dist', np.ones(mysize, float), distributed=True) # Distributed Output self.add_output('out_dist', np.ones(mysize, float), distributed=True) def compute(self, inputs, outputs): x = inputs['in_dist'] # "Computationally Intensive" operation that we wish to parallelize. f_x = x**2 - 2.0*x + 4.0 outputs['out_dist'] = f_x size = 7 if MPI: comm = MPI.COMM_WORLD rank = comm.rank sizes, offsets = evenly_distrib_idxs(comm.size, size) else: # When running in serial, the entire variable is on rank 0. rank = 0 sizes = {rank : size} offsets = {rank : 0} prob = om.Problem() model = prob.model # Create a distributed source for the distributed input. ivc = om.IndepVarComp() ivc.add_output('x_dist', np.zeros(sizes[rank]), distributed=True) model.add_subsystem("indep", ivc) model.add_subsystem("D1", SimpleDistrib(vec_size=size)) model.connect('indep.x_dist', 'D1.in_dist') prob.setup() # Set initial values of distributed variable. x_dist_init = 3.0 + np.arange(size)[offsets[rank]:offsets[rank] + sizes[rank]] prob.set_val('indep.x_dist', x_dist_init) prob.run_model() # Values on each rank. for var in ['indep.x_dist', 'D1.out_dist']: print(var, prob.get_val(var)) # Full gathered values. for var in ['indep.x_dist', 'D1.out_dist']: print(var, prob.get_val(var, get_remote=True)) print('') %%px from openmdao.utils.assert_utils import assert_near_equal assert_near_equal(prob.get_val(var, get_remote=True), np.array([7., 12., 19., 28., 39., 52., 67.]))
_____no_output_____
Apache-2.0
openmdao/docs/openmdao_book/features/core_features/working_with_components/distributed_components.ipynb
markleader/OpenMDAO
Example: Distributed I/O and a Serial Input. OpenMDAO supports both serial and distributed I/O on the same component, so in this example, we expand the problem to include a serial input. In this case, the serial input also has a vector width of 7, but those values will be the same on each processor. This serial input is included in the computation by summing a function of its elements and adding that sum to the distributed output.
%%px import numpy as np import openmdao.api as om from openmdao.utils.array_utils import evenly_distrib_idxs from openmdao.utils.mpi import MPI class MixedDistrib1(om.ExplicitComponent): def setup(self): # Distributed Input self.add_input('in_dist', shape_by_conn=True, distributed=True) # Serial Input self.add_input('in_serial', shape_by_conn=True) # Distributed Output self.add_output('out_dist', copy_shape='in_dist', distributed=True) def compute(self, inputs, outputs): x = inputs['in_dist'] y = inputs['in_serial'] # "Computationally Intensive" operation that we wish to parallelize. f_x = x**2 - 2.0*x + 4.0 # This operation is repeated on all procs. f_y = y ** 0.5 outputs['out_dist'] = f_x + np.sum(f_y) size = 7 if MPI: comm = MPI.COMM_WORLD rank = comm.rank sizes, offsets = evenly_distrib_idxs(comm.size, size) else: # When running in serial, the entire variable is on rank 0. rank = 0 sizes = {rank : size} offsets = {rank : 0} prob = om.Problem() model = prob.model # Create a distributed source for the distributed input. ivc = om.IndepVarComp() ivc.add_output('x_dist', np.zeros(sizes[rank]), distributed=True) ivc.add_output('x_serial', np.zeros(size)) model.add_subsystem("indep", ivc) model.add_subsystem("D1", MixedDistrib1()) model.connect('indep.x_dist', 'D1.in_dist') model.connect('indep.x_serial', 'D1.in_serial') prob.setup() # Set initial values of distributed variable. x_dist_init = 3.0 + np.arange(size)[offsets[rank]:offsets[rank] + sizes[rank]] prob.set_val('indep.x_dist', x_dist_init) # Set initial values of serial variable. x_serial_init = 1.0 + 2.0*np.arange(size) prob.set_val('indep.x_serial', x_serial_init) prob.run_model() # Values on each rank. for var in ['indep.x_dist', 'indep.x_serial', 'D1.out_dist']: print(var, prob.get_val(var)) # Full gathered values. for var in ['indep.x_dist', 'indep.x_serial', 'D1.out_dist']: print(var, prob.get_val(var, get_remote=True)) print('') %%px assert_near_equal(prob.get_val(var, get_remote=True), np.array([24.53604616, 29.53604616, 36.53604616, 45.53604616, 56.53604616, 69.53604616, 84.53604616]), 1e-6)
_____no_output_____
Apache-2.0
openmdao/docs/openmdao_book/features/core_features/working_with_components/distributed_components.ipynb
markleader/OpenMDAO
Example: Distributed I/O and a Serial Output. You can also create a component with distributed inputs and outputs and a serial output. This situation tends to be trickier and usually requires you to perform some MPI operations in your component's `compute` method. If the serial output is only a function of the serial inputs, then you can handle that variable just like you do on any other component. However, this example extends the previous component to include a serial output that is a function of both the serial and distributed inputs. In this case, it is a function of the sum of the square root of each element in the full distributed vector. Since the data is not all on any local processor, we use an MPI operation, in this case `Allreduce`, to compute the summation across the distributed vector and gather the answer back to each processor. The MPI operation and your implementation will vary, but consider this to be a general example.
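Before the full component, here is a minimal standalone sketch of the `Allreduce` pattern (assumes mpi4py is installed; `allreduce_demo.py` is a hypothetical filename, and the script would be launched with something like `mpiexec -n 4 python allreduce_demo.py`):

```python
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD

# Each rank contributes a local partial sum; here, just its own rank number.
local_sum = np.array([float(comm.rank)])
total_sum = np.zeros(1)
comm.Allreduce(local_sum, total_sum, op=MPI.SUM)

# After Allreduce, every rank holds the same total: 0 + 1 + ... + (n - 1).
print("rank %d: total = %f" % (comm.rank, total_sum[0]))
```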
%%px

import numpy as np

import openmdao.api as om
from openmdao.utils.array_utils import evenly_distrib_idxs
from openmdao.utils.mpi import MPI


class MixedDistrib2(om.ExplicitComponent):

    def setup(self):
        # Distributed Input
        self.add_input('in_dist', shape_by_conn=True, distributed=True)

        # Serial Input
        self.add_input('in_serial', shape_by_conn=True)

        # Distributed Output
        self.add_output('out_dist', copy_shape='in_dist', distributed=True)

        # Serial Output
        self.add_output('out_serial', copy_shape='in_serial')

    def compute(self, inputs, outputs):
        x = inputs['in_dist']
        y = inputs['in_serial']

        # "Computationally Intensive" operation that we wish to parallelize.
        f_x = x**2 - 2.0*x + 4.0

        # These operations are repeated on all procs.
        f_y = y ** 0.5
        g_y = y**2 + 3.0*y - 5.0

        # Compute square root of our portion of the distributed input.
        g_x = x ** 0.5

        # Distributed output
        outputs['out_dist'] = f_x + np.sum(f_y)

        # Serial output. Use the component's own communicator (self.comm) rather
        # than a module-level global.
        if MPI and self.comm.size > 1:

            # We need to gather the summed values to compute the total sum over all procs.
            local_sum = np.array(np.sum(g_x))
            total_sum = local_sum.copy()
            self.comm.Allreduce(local_sum, total_sum, op=MPI.SUM)

            outputs['out_serial'] = g_y + total_sum
        else:
            # Recommended to make sure your code can run in serial too, for testing.
            outputs['out_serial'] = g_y + np.sum(g_x)


size = 7

if MPI:
    comm = MPI.COMM_WORLD
    rank = comm.rank
    sizes, offsets = evenly_distrib_idxs(comm.size, size)
else:
    # When running in serial, the entire variable is on rank 0.
    rank = 0
    sizes = {rank : size}
    offsets = {rank : 0}

prob = om.Problem()
model = prob.model

# Create a distributed source for the distributed input.
ivc = om.IndepVarComp()
ivc.add_output('x_dist', np.zeros(sizes[rank]), distributed=True)
ivc.add_output('x_serial', np.zeros(size))

model.add_subsystem("indep", ivc)
model.add_subsystem("D1", MixedDistrib2())

model.connect('indep.x_dist', 'D1.in_dist')
model.connect('indep.x_serial', 'D1.in_serial')

prob.setup()

# Set initial values of distributed variable.
x_dist_init = 3.0 + np.arange(size)[offsets[rank]:offsets[rank] + sizes[rank]]
prob.set_val('indep.x_dist', x_dist_init)

# Set initial values of serial variable.
x_serial_init = 1.0 + 2.0*np.arange(size)
prob.set_val('indep.x_serial', x_serial_init)

prob.run_model()

# Values on each rank.
for var in ['indep.x_dist', 'indep.x_serial', 'D1.out_dist', 'D1.out_serial']:
    print(var, prob.get_val(var))

# Full gathered values.
for var in ['indep.x_dist', 'indep.x_serial', 'D1.out_dist', 'D1.out_serial']:
    print(var, prob.get_val(var, get_remote=True))
    print('')
%%px
assert_near_equal(prob.get_val(var, get_remote=True),
                  np.array([15.89178696, 29.89178696, 51.89178696, 81.89178696,
                            119.89178696, 165.89178696, 219.89178696]),
                  1e-6)
_____no_output_____
Apache-2.0
openmdao/docs/openmdao_book/features/core_features/working_with_components/distributed_components.ipynb
markleader/OpenMDAO
```{note}In this example, we introduce a new component called an [IndepVarComp](indepvarcomp.ipynb). If you used OpenMDAO prior to version 3.2, then you are familiar with this component. It is used to define an independent variable. You usually do not have to define these because OpenMDAO defines and uses them automatically for all unconnected inputs in your model. This automatically-created `IndepVarComp` is called an Auto-IVC. However, when we define a distributed input, we often use the "src_indices" attribute to determine the allocation of that input to the processors that the component sees. For some sets of these indices, it isn't possible to easily determine the full size of the corresponding independent variable, and the *IndepVarComp* cannot be created automatically. So, for unconnected inputs on a distributed component, you must manually create one, as we did in this example.``` Derivatives with Distributed Variables. In the following examples, we show how to add analytic derivatives to the distributed examples given above. In most cases it is straightforward, but when you have a serial output and a distributed input, the [matrix-free](matrix-free-api) format is required. Derivatives: Distributed I/O and a Serial Input. In this example, we have a distributed input, a distributed output, and a serial input. The derivative of 'out_dist' with respect to 'in_dist' has a diagonal Jacobian, so we use sparse declaration and each processor gives `declare_partials` the local number of rows and columns. The derivatives are verified against complex step using `check_totals`, since our component is complex-safe.
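To make the Jacobian structure explicit: with $x$ = 'in_dist' and $y$ = 'in_serial', the component below computes $\text{out\_dist}_j = x_j^2 - 2x_j + 4 + \sum_i \sqrt{y_i}$, so

$$
\frac{\partial\,\text{out\_dist}_j}{\partial\,\text{in\_dist}_k} = (2x_j - 2)\,\delta_{jk},
\qquad
\frac{\partial\,\text{out\_dist}_j}{\partial\,\text{in\_serial}_i} = \frac{1}{2\sqrt{y_i}}.
$$

The first partial is nonzero only on the diagonal, which is why the code declares it with matching `rows`/`cols` index arrays; the second is dense with identical rows, which is why `compute_partials` fills it with `np.tile`.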
%%px import numpy as np import openmdao.api as om from openmdao.utils.array_utils import evenly_distrib_idxs from openmdao.utils.mpi import MPI class MixedDistrib1(om.ExplicitComponent): def setup(self): # Distributed Input self.add_input('in_dist', shape_by_conn=True, distributed=True) # Serial Input self.add_input('in_serial', shape_by_conn=True) # Distributed Output self.add_output('out_dist', copy_shape='in_dist', distributed=True) def setup_partials(self): meta = self.get_io_metadata(metadata_keys=['shape']) local_size = meta['in_dist']['shape'][0] row_col_d = np.arange(local_size) self.declare_partials('out_dist', 'in_dist', rows=row_col_d, cols=row_col_d) self.declare_partials('out_dist', 'in_serial') def compute(self, inputs, outputs): x = inputs['in_dist'] y = inputs['in_serial'] # "Computationally Intensive" operation that we wish to parallelize. f_x = x**2 - 2.0*x + 4.0 # This operation is repeated on all procs. f_y = y ** 0.5 outputs['out_dist'] = f_x + np.sum(f_y) def compute_partials(self, inputs, partials): x = inputs['in_dist'] y = inputs['in_serial'] size = len(y) local_size = len(x) partials['out_dist', 'in_dist'] = 2.0 * x - 2.0 df_dy = 0.5 / y ** 0.5 partials['out_dist', 'in_serial'] = np.tile(df_dy, local_size).reshape((local_size, size)) size = 7 if MPI: comm = MPI.COMM_WORLD rank = comm.rank sizes, offsets = evenly_distrib_idxs(comm.size, size) else: # When running in serial, the entire variable is on rank 0. rank = 0 sizes = {rank : size} offsets = {rank : 0} prob = om.Problem() model = prob.model # Create a distributed source for the distributed input. ivc = om.IndepVarComp() ivc.add_output('x_dist', np.zeros(sizes[rank]), distributed=True) ivc.add_output('x_serial', np.zeros(size)) model.add_subsystem("indep", ivc) model.add_subsystem("D1", MixedDistrib1()) model.connect('indep.x_dist', 'D1.in_dist') model.connect('indep.x_serial', 'D1.in_serial') model.add_design_var('indep.x_serial') model.add_design_var('indep.x_dist') model.add_objective('D1.out_dist') prob.setup(force_alloc_complex=True) # Set initial values of distributed variable. x_dist_init = 3.0 + np.arange(size)[offsets[rank]:offsets[rank] + sizes[rank]] prob.set_val('indep.x_dist', x_dist_init) # Set initial values of serial variable. x_serial_init = 1.0 + 2.0*np.arange(size) prob.set_val('indep.x_serial', x_serial_init) prob.run_model() if rank > 0: prob.check_totals(method='cs', out_stream=None) else: prob.check_totals(method='cs') %%px totals = prob.check_totals(method='cs', out_stream=None) for key, val in totals.items(): assert_near_equal(val['rel error'][0], 0.0, 1e-6)
_____no_output_____
Apache-2.0
openmdao/docs/openmdao_book/features/core_features/working_with_components/distributed_components.ipynb
markleader/OpenMDAO
Derivatives: Distributed I/O and a Serial Output. If you have a component with distributed inputs and a serial output, then the standard `compute_partials` API will not work for specifying the derivatives. You will need to use the matrix-free API with `compute_jacvec_product`, which is described in the feature document for [ExplicitComponent](explicit_component.ipynb). Computing the matrix-vector product for the derivative of the serial output with respect to a distributed input will require you to use MPI operations to gather the required parts of the Jacobian to all processors. The following example shows how to implement derivatives on the earlier `MixedDistrib2` component.
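Stripped of all MPI and distribution concerns, the matrix-free contract is simple: in 'fwd' mode you accumulate $J\,d_{\text{inputs}}$ into `d_outputs`, and in 'rev' mode you accumulate $J^T d_{\text{outputs}}$ into `d_inputs`. A minimal serial sketch (an illustrative component, not the doc's example; `MatVec`, `'x'`, and `'y'` are made-up names):

```python
import numpy as np
import openmdao.api as om

class MatVec(om.ExplicitComponent):
    """Computes y = A @ x for a fixed 2x3 matrix A, matrix-free."""

    def setup(self):
        self.A = np.array([[1.0, 2.0, 3.0],
                           [4.0, 5.0, 6.0]])
        self.add_input('x', np.zeros(3))
        self.add_output('y', np.zeros(2))

    def compute(self, inputs, outputs):
        outputs['y'] = self.A.dot(inputs['x'])

    def compute_jacvec_product(self, inputs, d_inputs, d_outputs, mode):
        if mode == 'fwd':
            # Forward mode: apply the Jacobian.
            if 'y' in d_outputs and 'x' in d_inputs:
                d_outputs['y'] += self.A.dot(d_inputs['x'])
        else:
            # Reverse mode: apply the transposed Jacobian.
            if 'y' in d_outputs and 'x' in d_inputs:
                d_inputs['x'] += self.A.T.dot(d_outputs['y'])
```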
%%px

import numpy as np

import openmdao.api as om
from openmdao.utils.array_utils import evenly_distrib_idxs
from openmdao.utils.mpi import MPI


class MixedDistrib2(om.ExplicitComponent):

    def setup(self):
        # Distributed Input
        self.add_input('in_dist', shape_by_conn=True, distributed=True)

        # Serial Input
        self.add_input('in_serial', shape_by_conn=True)

        # Distributed Output
        self.add_output('out_dist', copy_shape='in_dist', distributed=True)

        # Serial Output
        self.add_output('out_serial', copy_shape='in_serial')

    def compute(self, inputs, outputs):
        x = inputs['in_dist']
        y = inputs['in_serial']

        # "Computationally Intensive" operation that we wish to parallelize.
        f_x = x**2 - 2.0*x + 4.0

        # These operations are repeated on all procs.
        f_y = y ** 0.5
        g_y = y**2 + 3.0*y - 5.0

        # Compute square root of our portion of the distributed input.
        g_x = x ** 0.5

        # Distributed output
        outputs['out_dist'] = f_x + np.sum(f_y)

        # Serial output. Use the component's own communicator (self.comm).
        if MPI and self.comm.size > 1:

            # We need to gather the summed values to compute the total sum over all procs.
            local_sum = np.array(np.sum(g_x))
            total_sum = local_sum.copy()
            self.comm.Allreduce(local_sum, total_sum, op=MPI.SUM)

            outputs['out_serial'] = g_y + total_sum
        else:
            # Recommended to make sure your code can run in serial too, for testing.
            outputs['out_serial'] = g_y + np.sum(g_x)

    def compute_jacvec_product(self, inputs, d_inputs, d_outputs, mode):
        x = inputs['in_dist']
        y = inputs['in_serial']

        df_dx = 2.0 * x - 2.0
        df_dy = 0.5 / y ** 0.5
        dg_dx = 0.5 / x ** 0.5
        dg_dy = 2.0 * y + 3.0

        local_size = len(x)
        size = len(y)

        if mode == 'fwd':
            if 'out_dist' in d_outputs:
                if 'in_dist' in d_inputs:
                    d_outputs['out_dist'] += df_dx * d_inputs['in_dist']
                if 'in_serial' in d_inputs:
                    d_outputs['out_dist'] += np.tile(df_dy, local_size).reshape((local_size, size)).dot(d_inputs['in_serial'])
            if 'out_serial' in d_outputs:
                if 'in_dist' in d_inputs:
                    if MPI and self.comm.size > 1:
                        deriv = np.tile(dg_dx, size).reshape((size, local_size)).dot(d_inputs['in_dist'])
                        deriv_sum = np.zeros(deriv.size)
                        self.comm.Allreduce(deriv, deriv_sum, op=MPI.SUM)
                        d_outputs['out_serial'] += deriv_sum
                    else:
                        # Recommended to make sure your code can run in serial too, for testing.
                        d_outputs['out_serial'] += np.tile(dg_dx, local_size).reshape((local_size, size)).dot(d_inputs['in_dist'])
                if 'in_serial' in d_inputs:
                    d_outputs['out_serial'] += dg_dy * d_inputs['in_serial']

        else:
            if 'out_dist' in d_outputs:
                if 'in_dist' in d_inputs:
                    d_inputs['in_dist'] += df_dx * d_outputs['out_dist']
                if 'in_serial' in d_inputs:
                    d_inputs['in_serial'] += np.tile(df_dy, local_size).reshape((local_size, size)).dot(d_outputs['out_dist'])
            if 'out_serial' in d_outputs:
                if 'in_dist' in d_inputs:
                    if MPI and self.comm.size > 1:
                        deriv = np.tile(dg_dx, size).reshape((size, local_size)).dot(d_outputs['out_serial'])
                        deriv_sum = np.zeros(deriv.size)
                        self.comm.Allreduce(deriv, deriv_sum, op=MPI.SUM)
                        d_inputs['in_dist'] += deriv_sum
                    else:
                        # Recommended to make sure your code can run in serial too, for testing.
                        d_inputs['in_dist'] += np.tile(dg_dx, local_size).reshape((local_size, size)).dot(d_outputs['out_serial'])
                if 'in_serial' in d_inputs:
                    d_inputs['in_serial'] += dg_dy * d_outputs['out_serial']


size = 7

if MPI:
    comm = MPI.COMM_WORLD
    rank = comm.rank
    sizes, offsets = evenly_distrib_idxs(comm.size, size)
else:
    # When running in serial, the entire variable is on rank 0.
    rank = 0
    sizes = {rank : size}
    offsets = {rank : 0}

prob = om.Problem()
model = prob.model

# Create a distributed source for the distributed input.
ivc = om.IndepVarComp() ivc.add_output('x_dist', np.zeros(sizes[rank]), distributed=True) ivc.add_output('x_serial', np.zeros(size)) model.add_subsystem("indep", ivc) model.add_subsystem("D1", MixedDistrib2()) model.connect('indep.x_dist', 'D1.in_dist') model.connect('indep.x_serial', 'D1.in_serial') model.add_design_var('indep.x_serial') model.add_design_var('indep.x_dist') model.add_constraint('D1.out_dist', lower=0.0) model.add_constraint('D1.out_serial', lower=0.0) prob.setup(force_alloc_complex=True) # Set initial values of distributed variable. x_dist_init = 3.0 + np.arange(size)[offsets[rank]:offsets[rank] + sizes[rank]] prob.set_val('indep.x_dist', x_dist_init) # Set initial values of serial variable. x_serial_init = 1.0 + 2.0*np.arange(size) prob.set_val('indep.x_serial', x_serial_init) prob.run_model() if rank > 0: prob.check_totals(method='cs', out_stream=None) else: prob.check_totals(method='cs') %%px totals = prob.check_totals(method='cs', out_stream=None) for key, val in totals.items(): assert_near_equal(val['rel error'][0], 0.0, 1e-6)
_____no_output_____
Apache-2.0
openmdao/docs/openmdao_book/features/core_features/working_with_components/distributed_components.ipynb
markleader/OpenMDAO
Lambda School Data Science

*Unit 2, Sprint 1, Module 4*

---

Logistic Regression

- do train/validate/test split
- begin with baselines for classification
- express and explain the intuition and interpretation of Logistic Regression
- use sklearn.linear_model.LogisticRegression to fit and interpret Logistic Regression models

Logistic regression is the baseline for classification models, as well as a handy way to predict probabilities (since those too live in the unit interval). While relatively simple, it is also the foundation for more sophisticated classification techniques such as neural networks (many of which can effectively be thought of as networks of logistic models).

Setup

Run the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab.

Libraries:

- category_encoders
- numpy
- pandas
- scikit-learn
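As a preview of where this module is headed (a minimal sketch; `X_train` and `X_val` are hypothetical numeric feature matrices that we have not built yet at this point in the notebook):

```python
from sklearn.linear_model import LogisticRegression

model = LogisticRegression()
model.fit(X_train, y_train)
model.predict(X_val)              # hard 0/1 class predictions
model.predict_proba(X_val)[:, 1]  # P(y = 1) for each row, always in [0, 1]
```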
%%capture import sys # If you're on Colab: if 'google.colab' in sys.modules: DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Linear-Models/master/data/' !pip install category_encoders==2.* # If you're working locally: else: DATA_PATH = '../data/'
_____no_output_____
MIT
module4-logistic-regression/LS_DS_214.ipynb
cedro-gasque/DS-Unit-2-Linear-Models
Do train/validate/test split. Overview: Predict Titanic survival 🚢. Kaggle is a platform for machine learning competitions. [Kaggle has used the Titanic dataset](https://www.kaggle.com/c/titanic/data) for their most popular "getting started" competition. Kaggle splits the data into train and test sets for participants. Let's load both:
import pandas as pd train = pd.read_csv(DATA_PATH+'titanic/train.csv') test = pd.read_csv(DATA_PATH+'titanic/test.csv')
_____no_output_____
MIT
module4-logistic-regression/LS_DS_214.ipynb
cedro-gasque/DS-Unit-2-Linear-Models
Notice that the train set has one more column than the test set:
train.shape, test.shape
_____no_output_____
MIT
module4-logistic-regression/LS_DS_214.ipynb
cedro-gasque/DS-Unit-2-Linear-Models
Which column is in train but not test? The target!
set(train.columns) - set(test.columns)
_____no_output_____
MIT
module4-logistic-regression/LS_DS_214.ipynb
cedro-gasque/DS-Unit-2-Linear-Models
Why doesn't Kaggle give you the target for the test set? Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)> One great thing about Kaggle competitions is that they force you to think about validation sets more rigorously (in order to do well). For those who are new to Kaggle, it is a platform that hosts machine learning competitions. Kaggle typically breaks the data into two sets you can download:>> 1. a **training set**, which includes the _independent variables,_ as well as the _dependent variable_ (what you are trying to predict).>> 2. a **test set**, which just has the _independent variables._ You will make predictions for the test set, which you can submit to Kaggle and get back a score of how well you did.>> This is the basic idea needed to get started with machine learning, but to do well, there is a bit more complexity to understand. **You will want to create your own training and validation sets (by splitting the Kaggle “training” data). You will just use your smaller training set (a subset of Kaggle’s training data) for building your model, and you can evaluate it on your validation set (also a subset of Kaggle’s training data) before you submit to Kaggle.**>> The most important reason for this is that Kaggle has split the test data into two sets: for the public and private leaderboards. The score you see on the public leaderboard is just for a subset of your predictions (and you don’t know which subset!). How your predictions fare on the private leaderboard won’t be revealed until the end of the competition. The reason this is important is that you could end up overfitting to the public leaderboard and you wouldn’t realize it until the very end when you did poorly on the private leaderboard. Using a good validation set can prevent this. You can check if your validation set is any good by seeing if your model has similar scores on it to compared with on the Kaggle test set. ...>> Understanding these distinctions is not just useful for Kaggle. In any predictive machine learning project, you want your model to be able to perform well on new data. 2-way train/test split is not enough Hastie, Tibshirani, and Friedman, [The Elements of Statistical Learning](http://statweb.stanford.edu/~tibs/ElemStatLearn/), Chapter 7: Model Assessment and Selection> If we are in a data-rich situation, the best approach is to randomly divide the dataset into three parts: a training set, a validation set, and a test set. The training set is used to fit the models; the validation set is used to estimate prediction error for model selection; the test set is used for assessment of the generalization error of the final chosen model. Ideally, the test set should be kept in a "vault," and be brought out only at the end of the data analysis. Suppose instead that we use the test-set repeatedly, choosing the model with the smallest test-set error. Then the test set error of the final chosen model will underestimate the true test error, sometimes substantially. Andreas Mueller and Sarah Guido, [Introduction to Machine Learning with Python](https://books.google.com/books?id=1-4lDQAAQBAJ&pg=PA270)> The distinction between the training set, validation set, and test set is fundamentally important to applying machine learning methods in practice. Any choices made based on the test set accuracy "leak" information from the test set into the model. Therefore, it is important to keep a separate test set, which is only used for the final evaluation. 
It is good practice to do all exploratory analysis and model selection using the combination of a training and a validation set, and reserve the test set for a final evaluation - this is even true for exploratory visualization. Strictly speaking, evaluating more than one model on the test set and choosing the better of the two will result in an overly optimistic estimate of how accurate the model is. Hadley Wickham, [R for Data Science](https://r4ds.had.co.nz/model-intro.html#hypothesis-generation-vs.-hypothesis-confirmation)> There is a pair of ideas that you must understand in order to do inference correctly:>> 1. Each observation can either be used for exploration or confirmation, not both.>> 2. You can use an observation as many times as you like for exploration, but you can only use it once for confirmation. As soon as you use an observation twice, you’ve switched from confirmation to exploration.>> This is necessary because to confirm a hypothesis you must use data independent of the data that you used to generate the hypothesis. Otherwise you will be over optimistic. There is absolutely nothing wrong with exploration, but you should never sell an exploratory analysis as a confirmatory analysis because it is fundamentally misleading.>> If you are serious about doing a confirmatory analysis, one approach is to split your data into three pieces before you begin the analysis. Sebastian Raschka, [Model Evaluation](https://sebastianraschka.com/blog/2018/model-evaluation-selection-part4.html)> Since “a picture is worth a thousand words,” I want to conclude with a figure (shown below) that summarizes my personal recommendations ... Usually, we want to do **"Model selection (hyperparameter optimization) _and_ performance estimation."** (The green box in the diagram.) Therefore, we usually do **"3-way holdout method (train/validation/test split)"** or **"cross-validation with independent test set."** What's the difference between Training, Validation, and Testing sets? Brandon Rohrer, [Training, Validation, and Testing Data Sets](https://end-to-end-machine-learning.teachable.com/blog/146320/training-validation-testing-data-sets)> The validation set is for adjusting a model's hyperparameters. The testing data set is the ultimate judge of model performance.>> Testing data is what you hold out until the very last. You only run your model on it once. You don’t make any changes or adjustments to your model after that. ... Follow Along> You will want to create your own training and validation sets (by splitting the Kaggle “training” data). Do this, using the [sklearn.model_selection.train_test_split](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) function:
from sklearn.model_selection import train_test_split train.shape, test.shape train, val = train_test_split(train, random_state=28) train.shape, val.shape, test.shape
_____no_output_____
MIT
module4-logistic-regression/LS_DS_214.ipynb
cedro-gasque/DS-Unit-2-Linear-Models
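Combined with Kaggle's held-out test set, the split above already gives three sets. For a dataset that arrives as one labeled table, a common pattern is two successive splits (a sketch; `df` is a hypothetical full table, and the 60/20/20 proportions are an assumption, not a requirement):

```python
from sklearn.model_selection import train_test_split

# First carve off a held-out test set, then split the remainder into train/val.
train_full, test = train_test_split(df, test_size=0.20, random_state=28)
train, val = train_test_split(train_full, test_size=0.25, random_state=28)
# 0.25 of the remaining 80% leaves a 60/20/20 train/val/test split.
train.shape, val.shape, test.shape
```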
Challenge. For your assignment, you'll do a 3-way train/validate/test split. Then next sprint, you'll begin to participate in a private Kaggle challenge, just for your cohort! You will be provided with data split into 2 sets: training and test. You will create your own training and validation sets, by splitting the Kaggle "training" data, so you'll end up with 3 sets total. Begin with baselines for classification. Overview: We'll begin with the **majority class baseline.** [Will Koehrsen](https://twitter.com/koehrsen_will/status/1088863527778111488)> A baseline for classification can be the most common class in the training dataset. [*Data Science for Business*](https://books.google.com/books?id=4ZctAAAAQBAJ&pg=PT276), Chapter 7.3: Evaluation, Baseline Performance, and Implications for Investments in Data> For classification tasks, one good baseline is the _majority classifier,_ a naive classifier that always chooses the majority class of the training dataset (see Note: Base rate in Holdout Data and Fitting Graphs). This may seem like advice so obvious it can be passed over quickly, but it is worth spending an extra moment here. There are many cases where smart, analytical people have been tripped up in skipping over this basic comparison. For example, an analyst may see a classification accuracy of 94% from her classifier and conclude that it is doing fairly well—when in fact only 6% of the instances are positive. So, the simple majority prediction classifier also would have an accuracy of 94%. Follow Along: Determine majority class
target = 'Survived' y_train = train[target] y_train.value_counts()
_____no_output_____
MIT
module4-logistic-regression/LS_DS_214.ipynb
cedro-gasque/DS-Unit-2-Linear-Models
What if we guessed the majority class for every prediction?
# Guess the majority class for every prediction, instead of hardcoding 0.
majority_class = y_train.mode()[0]
y_pred = pd.Series(majority_class, index=y_train.index)
_____no_output_____
MIT
module4-logistic-regression/LS_DS_214.ipynb
cedro-gasque/DS-Unit-2-Linear-Models
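To score that guess (a sketch using scikit-learn's metrics module): the accuracy of a majority-class guess equals the majority class's share of the labels, so these two numbers should match.

```python
from sklearn.metrics import accuracy_score

accuracy_score(y_train, y_pred)
y_train.value_counts(normalize=True).max()
```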