markdown | code | output | license | path | repo_name
---|---|---|---|---|---|
Variable YearsCode: | data_test['YearsCode'] = data_test['YearsCode'].replace(['More than 50 years'], 50)
data_test['YearsCode'] = data_test['YearsCode'].replace(['Less than 1 year'], 1) | _____no_output_____ | MIT | M2.859_20211_A9_gbonillas.ipynb | gpbonillas/stackoverflow_2021_wrangling_data |
Variable YearsCodePro: | data_test['YearsCodePro'] = data_test['YearsCodePro'].replace(['More than 50 years'], 50)
data_test['YearsCodePro'] = data_test['YearsCodePro'].replace(['Less than 1 year'], 1) | _____no_output_____ | MIT | M2.859_20211_A9_gbonillas.ipynb | gpbonillas/stackoverflow_2021_wrangling_data |
Variable OpSys: | data_test['OpSys'].value_counts()
data_test['OpSys'] = data_test['OpSys'].replace(['Windows Subsystem for Linux (WSL)'], 'Windows')
data_test['OpSys'] = data_test['OpSys'].replace(['Linux-based'], 'Linux')
data_test['OpSys'] = data_test['OpSys'].replace(['Other (please specify)'], 'Otro')
data_test['OpSys'].value_counts() | _____no_output_____ | MIT | M2.859_20211_A9_gbonillas.ipynb | gpbonillas/stackoverflow_2021_wrangling_data |
Variable Age: | data_test['Age'].value_counts()
data_test['Age'] = data_test['Age'].replace(['25-34 years old'], '25-34')
data_test['Age'] = data_test['Age'].replace(['35-44 years old'], '35-44')
data_test['Age'] = data_test['Age'].replace(['18-24 years old'], '18-24')
data_test['Age'] = data_test['Age'].replace(['45-54 years old'], '45-54')
data_test['Age'] = data_test['Age'].replace(['55-64 years old'], '55-64')
data_test['Age'] = data_test['Age'].replace(['Under 18 years old'], '< 18')
data_test['Age'] = data_test['Age'].replace(['65 years or older'], '>= 65')
data_test['Age'] = data_test['Age'].replace(['Prefer not to say'], 'No definido')
data_test['Age'].value_counts() | _____no_output_____ | MIT | M2.859_20211_A9_gbonillas.ipynb | gpbonillas/stackoverflow_2021_wrangling_data |
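The chained `replace` calls above work, but the same recoding can be written as a single mapping. The snippet below is only a sketch of an equivalent alternative, assuming the same `data_test` DataFrame is in scope; the labels are taken from the cells above.

```python
# Equivalent one-step recoding of the Age categories with a mapping dict.
# Values not present in the dict are left unchanged.
age_map = {
    'Under 18 years old': '< 18',
    '18-24 years old': '18-24',
    '25-34 years old': '25-34',
    '35-44 years old': '35-44',
    '45-54 years old': '45-54',
    '55-64 years old': '55-64',
    '65 years or older': '>= 65',
    'Prefer not to say': 'No definido',
}
data_test['Age'] = data_test['Age'].replace(age_map)
```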
Variable Gender: | data_test['Gender'].value_counts()
data_test['Gender'] = data_test['Gender'].replace(['Man'], 'Hombre')
data_test['Gender'] = data_test['Gender'].replace(['Woman'], 'Mujer')
data_test['Gender'] = data_test['Gender'].replace(['Non-binary, genderqueer, or gender non-conforming'], 'No binario u otro')
data_test['Gender'] = data_test['Gender'].replace(['Man;Non-binary, genderqueer, or gender non-conforming'], 'No binario u otro')
data_test['Gender'] = data_test['Gender'].replace(['Man;Or, in your own words:'], 'Hombre')
data_test['Gender'] = data_test['Gender'].replace(['Or, in your own words:'], 'No definido')
data_test['Gender'] = data_test['Gender'].replace(['Woman;Non-binary, genderqueer, or gender non-conforming'], 'No binario u otro')
data_test['Gender'] = data_test['Gender'].replace(['Man;Woman'], 'No definido')
data_test['Gender'] = data_test['Gender'].replace(['Man;Woman;Non-binary, genderqueer, or gender non-conforming;Or, in your own words:'], 'No binario u otro')
data_test['Gender'] = data_test['Gender'].replace(['Non-binary, genderqueer, or gender non-conforming;Or, in your own words:'], 'No binario u otro')
data_test['Gender'] = data_test['Gender'].replace(['Man;Woman;Non-binary, genderqueer, or gender non-conforming'], 'No binario u otro')
data_test['Gender'] = data_test['Gender'].replace(['Prefer not to say'], 'No definido')
data_test['Gender'].value_counts() | _____no_output_____ | MIT | M2.859_20211_A9_gbonillas.ipynb | gpbonillas/stackoverflow_2021_wrangling_data |
Variable Trans: | data_test['Trans'].value_counts()
data_test['Trans'] = data_test['Trans'].replace(['Yes'], 'Si')
data_test['Trans'] = data_test['Trans'].replace(['Prefer not to say'], 'No definido')
data_test['Trans'] = data_test['Trans'].replace(['Or, in your own words:'], 'No definido')
data_test['Trans'].value_counts() | _____no_output_____ | MIT | M2.859_20211_A9_gbonillas.ipynb | gpbonillas/stackoverflow_2021_wrangling_data |
Variable MentalHealth: | data_test['MentalHealth'].value_counts()
from re import search
def choose_mental_health(cell_mental_health):
val_mental_health_exceptions = ["Or, in your own words:"]
if cell_mental_health == "Or, in your own words:":
return val_mental_health_exceptions[0]
if search(";", cell_mental_health):
row_mental_health_values = cell_mental_health.split(';', 10)
first_val = row_mental_health_values[0]
return first_val
else:
return cell_mental_health
data_test['MentalHealth'] = data_test['MentalHealth'].apply(choose_mental_health)
data_test['MentalHealth'].value_counts()
data_test['MentalHealth'] = data_test['MentalHealth'].replace(['None of the above'], 'Ninguna de las mencionadas')
data_test['MentalHealth'] = data_test['MentalHealth'].replace(['I have a concentration and/or memory disorder (e.g. ADHD)'], 'Desorden de concentración o memoria')
data_test['MentalHealth'] = data_test['MentalHealth'].replace(['I have a mood or emotional disorder (e.g. depression, bipolar disorder)'], 'Desorden emocional')
data_test['MentalHealth'] = data_test['MentalHealth'].replace(['I have an anxiety disorder'], 'Desorden de ansiedad')
data_test['MentalHealth'] = data_test['MentalHealth'].replace(['Prefer not to say'], 'No definido')
data_test['MentalHealth'] = data_test['MentalHealth'].replace(["I have autism / an autism spectrum disorder (e.g. Asperger's)"], 'Tipo de autismo')
data_test['MentalHealth'] = data_test['MentalHealth'].replace(['Or, in your own words:'], 'No definido')
data_test['MentalHealth'].value_counts() | _____no_output_____ | MIT | M2.859_20211_A9_gbonillas.ipynb | gpbonillas/stackoverflow_2021_wrangling_data |
2. Field selection for sub-datasets. The appropriate fields will be selected to answer each of the questions posed in the first part of the assignment. 2.1. According to self-reported ethnicity, which ethnicity has the highest annual salary? The appropriate fields will be selected to answer this question | data_etnia = data_test[['Country', 'Ethnicity', 'ConvertedCompYearly']]
data_etnia.head()
df_data_etnia = data_etnia.copy()
def remove_outliers(df, q=0.05):
upper = df.quantile(1-q)
lower = df.quantile(q)
mask = (df < upper) & (df > lower)
return mask
mask = remove_outliers(df_data_etnia['ConvertedCompYearly'], 0.1)
print(df_data_etnia[mask])
df_data_etnia_no_outliers = df_data_etnia[mask]
df_data_etnia_no_outliers = df_data_etnia_no_outliers.copy()
df_data_etnia_no_outliers['ConvertedCompYearlyCategorical'] = 'ALTO'
df_data_etnia_no_outliers.loc[(df_data_etnia_no_outliers['ConvertedCompYearly'] >= 0) & (df_data_etnia_no_outliers['ConvertedCompYearly'] <= 32747), 'ConvertedCompYearlyCategorical'] = 'BAJO'
df_data_etnia_no_outliers.loc[(df_data_etnia_no_outliers['ConvertedCompYearly'] > 32747) & (df_data_etnia_no_outliers['ConvertedCompYearly'] <= 90000), 'ConvertedCompYearlyCategorical'] = 'MEDIO'
print(df_data_etnia_no_outliers)
df_data_etnia_alto = df_data_etnia_no_outliers[df_data_etnia_no_outliers['ConvertedCompYearlyCategorical'] == 'ALTO']
df_data_etnia_alto = df_data_etnia_alto[['Ethnicity', 'ConvertedCompYearlyCategorical']]
df_flourish = df_data_etnia_alto['Ethnicity'].value_counts().to_frame('counts').reset_index()
df_flourish
df_flourish.to_csv('001_df_flourish.csv', index=False)
df_data_etnia_alto.to_csv('001_df_data_etnia_alto.csv', index=False)
df_data_etnia.to_csv('001_data_etnia_categorical.csv', index=False)
data_etnia.to_csv('001_data_etnia.csv', index=False) | _____no_output_____ | MIT | M2.859_20211_A9_gbonillas.ipynb | gpbonillas/stackoverflow_2021_wrangling_data |
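The pattern used above (quantile-based outlier removal followed by a BAJO/MEDIO/ALTO income label) is repeated for several sub-datasets below. A small helper such as the sketch below could factor it out; it assumes the same 0.1 quantile cut and the 32,747 / 90,000 thresholds used in this notebook, and the name `categorize_income` is only illustrative.

```python
import pandas as pd

def categorize_income(df, col='ConvertedCompYearly', q=0.1):
    """Drop quantile-based outliers of `col` and add a categorical income column."""
    upper, lower = df[col].quantile(1 - q), df[col].quantile(q)
    out = df[(df[col] > lower) & (df[col] < upper)].copy()
    out['ConvertedCompYearlyCategorical'] = pd.cut(
        out[col],
        bins=[0, 32747, 90000, float('inf')],
        labels=['BAJO', 'MEDIO', 'ALTO'],
        include_lowest=True,
    )
    return out

# Example (equivalent to the cells above):
# df_data_etnia_no_outliers = categorize_income(data_etnia)
```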
2.2. What percentage of developers work full-time, part-time, or freelance? The appropriate fields will be selected to answer this question | data_time_work_dev = data_test[['Country', 'Employment', 'ConvertedCompYearly', 'EdLevel', 'Age']]
data_time_work_dev.head()
df_flourish_002 = data_time_work_dev['Employment'].value_counts().to_frame('counts').reset_index()
df_flourish_002
df_flourish_002['counts'] = (df_flourish_002['counts'] * 100 ) / data_time_work_dev.shape[0]
df_flourish_002
df_flourish_002['counts'] = df_flourish_002['counts'].round(2)
df_flourish_002
df_flourish_002.to_csv('002_df_flourish.csv', index=False) | _____no_output_____ | MIT | M2.859_20211_A9_gbonillas.ipynb | gpbonillas/stackoverflow_2021_wrangling_data |
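The percentage above is computed by hand from the raw counts; `value_counts(normalize=True)` returns the same shares directly. A minimal sketch, assuming the `data_time_work_dev` DataFrame from this section (note that `value_counts` ignores NaN, so the result can differ slightly from dividing by `shape[0]` if missing values remain):

```python
# Share of respondents per employment type, as a rounded percentage
employment_pct = (
    data_time_work_dev['Employment']
    .value_counts(normalize=True)
    .mul(100)
    .round(2)
    .to_frame('counts')
    .reset_index()
)
print(employment_pct)
```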
2.3. Which countries have the largest number of professional developers who are active in the Stack Overflow community? The appropriate fields will be selected to answer this question | data_pro_dev_active_so = data_test[['Country', 'Employment', 'MainBranch', 'EdLevel', 'DevType', 'Age']]
data_pro_dev_active_so.head()
df_flourish_003 = data_pro_dev_active_so['Country'].value_counts().sort_values(ascending=False).head(10)
df_flourish_003 = df_flourish_003.to_frame()
df_flourish_003 = df_flourish_003.reset_index()
df_flourish_003.columns = ["País", "# Programadores Profesionales"]
df_flourish_003.to_csv('003_df_flourish_003.csv', index=False) | _____no_output_____ | MIT | M2.859_20211_A9_gbonillas.ipynb | gpbonillas/stackoverflow_2021_wrangling_data |
2.4. Which educational level records the highest income among respondents? The appropriate fields will be selected to answer this question | data_edlevel_income = data_test[['ConvertedCompYearly', 'EdLevel']]
data_edlevel_income.head()
df_data_edlevel_income = data_edlevel_income.copy()
def remove_outliers(df, q=0.05):
upper = df.quantile(1-q)
lower = df.quantile(q)
mask = (df < upper) & (df > lower)
return mask
mask = remove_outliers(df_data_edlevel_income['ConvertedCompYearly'], 0.1)
print(df_data_edlevel_income[mask])
df_data_edlevel_income = df_data_edlevel_income[mask]
df_data_edlevel_income['ConvertedCompYearlyCategorical'] = 'ALTO'
df_data_edlevel_income.loc[(df_data_edlevel_income['ConvertedCompYearly'] >= 0) & (df_data_edlevel_income['ConvertedCompYearly'] <= 32747), 'ConvertedCompYearlyCategorical'] = 'BAJO'
df_data_edlevel_income.loc[(df_data_edlevel_income['ConvertedCompYearly'] > 32747) & (df_data_edlevel_income['ConvertedCompYearly'] <= 90000), 'ConvertedCompYearlyCategorical'] = 'MEDIO'
print(df_data_edlevel_income)
df_data_edlevel_income = df_data_edlevel_income[df_data_edlevel_income['ConvertedCompYearlyCategorical'] == 'ALTO']
df_data_edlevel_income = df_data_edlevel_income[['EdLevel', 'ConvertedCompYearlyCategorical']]
df_flourish_004 = df_data_edlevel_income['EdLevel'].value_counts().to_frame('counts').reset_index()
df_flourish_004
df_flourish_004.to_csv('004_df_flourish.csv', index=False) | _____no_output_____ | MIT | M2.859_20211_A9_gbonillas.ipynb | gpbonillas/stackoverflow_2021_wrangling_data |
2.5. Is there a wage gap between men and women or other genders? If so, how large is the difference? Which countries are the worst in terms of wage gap? Which countries have reduced this wage gap among developers? The appropriate fields will be selected to answer this question | data_wage_gap = data_test[['Country', 'ConvertedCompYearly', 'Gender']]
data_wage_gap.head()
df_data_wage_gap = data_wage_gap.copy()
def remove_outliers(df, q=0.05):
upper = df.quantile(1-q)
lower = df.quantile(q)
mask = (df < upper) & (df > lower)
return mask
mask = remove_outliers(df_data_wage_gap['ConvertedCompYearly'], 0.1)
print(df_data_wage_gap[mask])
df_data_wage_gap = df_data_wage_gap[mask]
df_data_wage_gap['ConvertedCompYearlyCategorical'] = 'ALTO'
df_data_wage_gap.loc[(df_data_wage_gap['ConvertedCompYearly'] >= 0) & (df_data_wage_gap['ConvertedCompYearly'] <= 32747), 'ConvertedCompYearlyCategorical'] = 'BAJO'
df_data_wage_gap.loc[(df_data_wage_gap['ConvertedCompYearly'] > 32747) & (df_data_wage_gap['ConvertedCompYearly'] <= 90000), 'ConvertedCompYearlyCategorical'] = 'MEDIO'
print(df_data_wage_gap)
df_data_wage_gap = df_data_wage_gap[df_data_wage_gap['ConvertedCompYearlyCategorical'].isin(['ALTO', 'MEDIO'])]
df_data_wage_gap = df_data_wage_gap[['Country', 'Gender', 'ConvertedCompYearlyCategorical']]
df_data_wage_gap.to_csv('005_df_data_wage_gap.csv', index=False)
df_data_wage_gap['ConvertedCompYearlyCategorical'].drop_duplicates().sort_values()
df_data_wage_gap['Gender'].drop_duplicates().sort_values()
df_data_wage_gap['Country'].drop_duplicates().sort_values()
df_data_wage_gap1 = df_data_wage_gap.copy()
df_flourish_005 = df_data_wage_gap1.groupby(['Country', 'Gender']).size().unstack(fill_value=0).sort_values('Hombre')
df_flourish_005 = df_flourish_005.apply(lambda x: pd.concat([x.head(40), x.tail(5)]))
df_flourish_005.to_csv('005_flourish_data.csv', index=True) | _____no_output_____ | MIT | M2.859_20211_A9_gbonillas.ipynb | gpbonillas/stackoverflow_2021_wrangling_data |
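The table above only counts how many respondents of each gender fall into the ALTO/MEDIO bands per country. To put a number on the gap itself, a robust salary statistic can be compared by country and gender. The following is just a sketch of that idea, assuming the `data_wage_gap` selection from the start of this section is still in scope and using the gender labels produced by the recoding above.

```python
# Median yearly compensation by country and gender, plus the Hombre - Mujer difference
median_by_gender = (
    data_wage_gap
    .groupby(['Country', 'Gender'])['ConvertedCompYearly']
    .median()
    .unstack()
)
median_by_gender['gap_hombre_mujer'] = median_by_gender['Hombre'] - median_by_gender['Mujer']
print(median_by_gender.sort_values('gap_hombre_mujer', ascending=False).head(10))
```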
2.6. What is the average income by age range? Which age range has the highest and the lowest income? The appropriate fields will be selected to answer this question | data_age_income = data_test[['ConvertedCompYearly', 'Age']]
data_age_income.head()
df_data_age_income = data_age_income.copy()
def remove_outliers(df, q=0.05):
upper = df.quantile(1-q)
lower = df.quantile(q)
mask = (df < upper) & (df > lower)
return mask
mask = remove_outliers(df_data_age_income['ConvertedCompYearly'], 0.1)
print(df_data_age_income[mask])
df_data_age_income = df_data_age_income[mask]
df_data_age_income1 = df_data_age_income.copy()
df_data_age_income1.to_csv('006_df_data_age_income1.csv', index=False)
grouped_df = df_data_age_income1.groupby("Age")
average_df = grouped_df.mean()
average_df
df_flourish_006 = average_df.copy()
df_flourish_006.to_csv('006_df_flourish_006.csv', index=True) | _____no_output_____ | MIT | M2.859_20211_A9_gbonillas.ipynb | gpbonillas/stackoverflow_2021_wrangling_data |
2.7. Which technologies are associated with a higher annual salary? The appropriate fields will be selected to answer this question | data_techs_best_income1 = data_test[['ConvertedCompYearly', 'LanguageHaveWorkedWith', 'DatabaseHaveWorkedWith', 'PlatformHaveWorkedWith', 'WebframeHaveWorkedWith', 'MiscTechHaveWorkedWith', 'ToolsTechHaveWorkedWith', 'NEWCollabToolsHaveWorkedWith']].copy()
data_techs_best_income1.head()
data_techs_best_income1['AllTechs'] = data_techs_best_income1['LanguageHaveWorkedWith'].map(str) + ';' + data_techs_best_income1['DatabaseHaveWorkedWith'].map(str) + ';' + data_techs_best_income1['PlatformHaveWorkedWith'].map(str) + ';' + data_techs_best_income1['WebframeHaveWorkedWith'].map(str) + ';' + data_techs_best_income1['MiscTechHaveWorkedWith'].map(str) + ';' + data_techs_best_income1['ToolsTechHaveWorkedWith'].map(str) + ';' + data_techs_best_income1['NEWCollabToolsHaveWorkedWith'].map(str)
print (data_techs_best_income1)
df_data_techs_best_income = data_techs_best_income1[['ConvertedCompYearly', 'AllTechs']].copy()
df_data_techs_best_income1 = df_data_techs_best_income.copy()
def remove_outliers(df, q=0.05):
upper = df.quantile(1-q)
lower = df.quantile(q)
mask = (df < upper) & (df > lower)
return mask
mask = remove_outliers(df_data_techs_best_income1['ConvertedCompYearly'], 0.1)
print(df_data_techs_best_income1[mask])
df_data_techs_best_income1 = df_data_techs_best_income1[mask]
df_data_techs_best_income1['ConvertedCompYearlyCategorical'] = 'ALTO'
df_data_techs_best_income1.loc[(df_data_techs_best_income1['ConvertedCompYearly'] >= 0) & (df_data_techs_best_income1['ConvertedCompYearly'] <= 32747), 'ConvertedCompYearlyCategorical'] = 'BAJO'
df_data_techs_best_income1.loc[(df_data_techs_best_income1['ConvertedCompYearly'] > 32747) & (df_data_techs_best_income1['ConvertedCompYearly'] <= 90000), 'ConvertedCompYearlyCategorical'] = 'MEDIO'
print(df_data_techs_best_income1)
df_data_techs_best_income1 = df_data_techs_best_income1[df_data_techs_best_income1['ConvertedCompYearlyCategorical'].isin(['ALTO', 'MEDIO'])]
df_data_techs_best_income1['AllTechs'] = df_data_techs_best_income1['AllTechs'].str.replace(' ', '')
df_data_techs_best_income1['AllTechs'] = df_data_techs_best_income1['AllTechs'].str.replace(';', ' ')
df_counts = df_data_techs_best_income1['AllTechs'].str.split(expand=True).stack().value_counts().rename_axis('Tech').reset_index(name='Count')
df_counts.head(10)
df_data_techs_best_income_007 = df_counts.head(10)
df_data_techs_best_income_007.to_csv('007_df_data_techs_best_income.csv', index=False) | _____no_output_____ | MIT | M2.859_20211_A9_gbonillas.ipynb | gpbonillas/stackoverflow_2021_wrangling_data |
2.8. How many technologies, on average, does a professional developer work with? The appropriate fields will be selected to answer this question | data_techs_dev_pro1 = data_test[['DevType', 'LanguageHaveWorkedWith', 'DatabaseHaveWorkedWith', 'PlatformHaveWorkedWith', 'WebframeHaveWorkedWith', 'MiscTechHaveWorkedWith', 'ToolsTechHaveWorkedWith', 'NEWCollabToolsHaveWorkedWith']].copy()
data_techs_dev_pro1.head()
data_techs_dev_pro1['AllTechs'] = data_techs_dev_pro1['LanguageHaveWorkedWith'].map(str) + ';' + data_techs_dev_pro1['DatabaseHaveWorkedWith'].map(str) + ';' + data_techs_dev_pro1['PlatformHaveWorkedWith'].map(str) + ';' + data_techs_dev_pro1['WebframeHaveWorkedWith'].map(str) + ';' + data_techs_dev_pro1['MiscTechHaveWorkedWith'].map(str) + ';' + data_techs_dev_pro1['ToolsTechHaveWorkedWith'].map(str) + ';' + data_techs_dev_pro1['NEWCollabToolsHaveWorkedWith'].map(str)
print (data_techs_dev_pro1)
df_data_techs_dev_pro = data_techs_dev_pro1[['DevType', 'AllTechs']].copy()
df_data_techs_dev_pro = df_data_techs_dev_pro[df_data_techs_dev_pro['DevType'].isin(['Desarrollador full-stack', 'Desarrollador front-end', 'Desarrollador móvil', 'Desarrollador back-end', 'Desarrollador Escritorio', 'Desarrollador de QA o Test', 'Desarrollador de aplicaciones embebidas', 'Administrador de base de datos', 'Desarrollador de juegos o gráfico'])]
df_data_techs_dev_pro.info()
df_data_techs_dev_pro1 = df_data_techs_dev_pro.copy()
df_data_techs_dev_pro1.to_csv('008_df_data_techs_dev_pro1.csv', index=True)
def convert_row_to_list(lst):
return lst.split(';')
df_data_techs_dev_pro1['ListTechs'] = df_data_techs_dev_pro1['AllTechs'].apply(convert_row_to_list)
df_data_techs_dev_pro1['LenListTechs'] = df_data_techs_dev_pro1['ListTechs'].map(len)
df_flourish_008 = df_data_techs_dev_pro1[['DevType', 'LenListTechs']].copy()
df_flourish_008
grouped_df = df_flourish_008.groupby("DevType")
average_df_008 = round(grouped_df.mean())
df_flourish_008 = average_df_008.copy()
df_flourish_008.to_csv('008_df_flourish_008.csv', index=True) | _____no_output_____ | MIT | M2.859_20211_A9_gbonillas.ipynb | gpbonillas/stackoverflow_2021_wrangling_data |
2.9. In which age range did most developers start programming? The appropriate fields will be selected to answer this question | data_age1stcode_dev_pro1 = data_test[['Age1stCode']]
data_age1stcode_dev_pro1.head()
data_age1stcode_dev_pro1 = data_age1stcode_dev_pro1['Age1stCode'].value_counts().to_frame('counts').reset_index()
data_age1stcode_dev_pro1.to_csv('009_flourish_data.csv', index=False) | _____no_output_____ | MIT | M2.859_20211_A9_gbonillas.ipynb | gpbonillas/stackoverflow_2021_wrangling_data |
2.10. How many years as a developer are required to reach a high salary? The appropriate fields will be selected to answer this question | data_yearscode_high_income1 = data_test[['ConvertedCompYearly', 'YearsCode']]
data_yearscode_high_income1.head()
df_data_yearscode_high_income = data_yearscode_high_income1.copy()
def remove_outliers(df, q=0.05):
upper = df.quantile(1-q)
lower = df.quantile(q)
mask = (df < upper) & (df > lower)
return mask
mask = remove_outliers(df_data_yearscode_high_income['ConvertedCompYearly'], 0.1)
print(df_data_yearscode_high_income[mask])
df_data_yearscode_high_income = df_data_yearscode_high_income[mask]
df_data_yearscode_high_income['ConvertedCompYearlyCategorical'] = 'ALTO'
df_data_yearscode_high_income.loc[(df_data_yearscode_high_income['ConvertedCompYearly'] >= 0) & (df_data_yearscode_high_income['ConvertedCompYearly'] <= 32747), 'ConvertedCompYearlyCategorical'] = 'BAJO'
df_data_yearscode_high_income.loc[(df_data_yearscode_high_income['ConvertedCompYearly'] > 32747) & (df_data_yearscode_high_income['ConvertedCompYearly'] <= 90000), 'ConvertedCompYearlyCategorical'] = 'MEDIO'
print(df_data_yearscode_high_income)
df_data_yearscode_high_income.to_csv('010_df_flourish.csv', index=False)
df_data_yearscode_high_income['ConvertedCompYearlyCategorical'].value_counts()
df_flourish_010 = df_data_yearscode_high_income[['YearsCode', 'ConvertedCompYearlyCategorical']].copy()
df_flourish_010.head()
df_flourish_010['YearsCode'] = pd.to_numeric(df_flourish_010['YearsCode'])
df_flourish_010.info()
grouped_df_010 = df_flourish_010.groupby("ConvertedCompYearlyCategorical")
average_df_010 = round(grouped_df_010.mean())
average_df_010
average_df_010.to_csv('010_flourish_data.csv', index=True) | _____no_output_____ | MIT | M2.859_20211_A9_gbonillas.ipynb | gpbonillas/stackoverflow_2021_wrangling_data |
2.11. Which developer profiles record the highest incomes? The appropriate fields will be selected to answer this question | data_profiles_dev_high_income1 = data_test[['ConvertedCompYearly', 'DevType']].copy()
data_profiles_dev_high_income1.head()
df_data_profiles_dev_high_income = data_profiles_dev_high_income1.copy()
def remove_outliers(df, q=0.05):
upper = df.quantile(1-q)
lower = df.quantile(q)
mask = (df < upper) & (df > lower)
return mask
mask = remove_outliers(df_data_profiles_dev_high_income['ConvertedCompYearly'], 0.1)
print(df_data_profiles_dev_high_income[mask])
df_data_profiles_dev_high_income = df_data_profiles_dev_high_income[mask]
df_data_profiles_dev_high_income['ConvertedCompYearlyCategorical'] = 'ALTO'
df_data_profiles_dev_high_income.loc[(df_data_profiles_dev_high_income['ConvertedCompYearly'] >= 0) & (df_data_profiles_dev_high_income['ConvertedCompYearly'] <= 32747), 'ConvertedCompYearlyCategorical'] = 'BAJO'
df_data_profiles_dev_high_income.loc[(df_data_profiles_dev_high_income['ConvertedCompYearly'] > 32747) & (df_data_profiles_dev_high_income['ConvertedCompYearly'] <= 90000), 'ConvertedCompYearlyCategorical'] = 'MEDIO'
print(df_data_profiles_dev_high_income)
df_data_profiles_dev_high_income['ConvertedCompYearlyCategorical'].value_counts()
df_flourish_011 = df_data_profiles_dev_high_income[['DevType', 'ConvertedCompYearlyCategorical']].copy()
df_flourish_011 = df_flourish_011[df_flourish_011['ConvertedCompYearlyCategorical'].isin(['ALTO'])]
df_flourish_011.info()
df_data_flourish_011 = df_flourish_011['DevType'].value_counts().to_frame('counts').reset_index()
df_data_flourish_011 = df_data_flourish_011.head(10)
df_data_flourish_011
df_data_flourish_011.to_csv('011_flourish_data.csv', index=False) | _____no_output_____ | MIT | M2.859_20211_A9_gbonillas.ipynb | gpbonillas/stackoverflow_2021_wrangling_data |
2.12. What are the 10 most used technologies among developers, by country? The appropriate fields will be selected to answer this question | data_10_techs_popular_dev_countries = data_test[['Country', 'LanguageHaveWorkedWith', 'DatabaseHaveWorkedWith', 'PlatformHaveWorkedWith', 'WebframeHaveWorkedWith', 'MiscTechHaveWorkedWith', 'ToolsTechHaveWorkedWith', 'NEWCollabToolsHaveWorkedWith']].copy()
data_10_techs_popular_dev_countries.head()
data_10_techs_popular_dev_countries['AllTechs'] = data_10_techs_popular_dev_countries['LanguageHaveWorkedWith'].map(str) + ';' + data_10_techs_popular_dev_countries['DatabaseHaveWorkedWith'].map(str) + ';' + data_10_techs_popular_dev_countries['PlatformHaveWorkedWith'].map(str) + ';' + data_10_techs_popular_dev_countries['WebframeHaveWorkedWith'].map(str) + ';' + data_10_techs_popular_dev_countries['MiscTechHaveWorkedWith'].map(str) + ';' + data_10_techs_popular_dev_countries['ToolsTechHaveWorkedWith'].map(str) + ';' + data_10_techs_popular_dev_countries['NEWCollabToolsHaveWorkedWith'].map(str)
print (data_10_techs_popular_dev_countries)
df_data_10_techs_popular_dev_countries = data_10_techs_popular_dev_countries[['Country', 'AllTechs']].copy()
df_data_10_techs_popular_dev_countries.head()
df_data_10_techs_popular_dev_countries['AllTechs'] = df_data_10_techs_popular_dev_countries['AllTechs'].str.replace(' ', '')
df_data_10_techs_popular_dev_countries['AllTechs'] = df_data_10_techs_popular_dev_countries['AllTechs'].str.replace(';', ' ')
df_counts = df_data_10_techs_popular_dev_countries['AllTechs'].str.split(expand=True).stack().value_counts().rename_axis('Tech').reset_index(name='Count')
df_counts
data_10_techs_popular_dev_countries.to_csv('012_data_10_techs_popular_dev_countries.csv', index=False) | _____no_output_____ | MIT | M2.859_20211_A9_gbonillas.ipynb | gpbonillas/stackoverflow_2021_wrangling_data |
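The count above is global, while the question asks for the most used technologies per country. A sketch of a per-country top 10, assuming the `df_data_10_techs_popular_dev_countries` DataFrame built above (with `AllTechs` already space-separated; missing answers appear as the literal string 'nan' and may need filtering):

```python
# Explode the space-separated technology lists and keep the 10 most frequent per country
techs_long = (
    df_data_10_techs_popular_dev_countries
    .assign(Tech=df_data_10_techs_popular_dev_countries['AllTechs'].str.split())
    .explode('Tech')
)
top10_by_country = (
    techs_long.groupby(['Country', 'Tech']).size()
    .rename('Count')
    .reset_index()
    .sort_values(['Country', 'Count'], ascending=[True, False])
    .groupby('Country')
    .head(10)
)
print(top10_by_country.head(20))
```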
2.13. Which operating system is most used among respondents? The appropriate fields will be selected to answer this question | df_data_so_devs = data_test[['OpSys']].copy()
df_data_so_devs.tail()
df_data_so_devs['OpSys'].drop_duplicates().sort_values()
df_data_so_devs['OpSys'] = df_data_so_devs['OpSys'].replace(['Other (please specify):'], 'Otro')
df_data_so_devs['OpSys'].value_counts()
df_counts = df_data_so_devs['OpSys'].str.split(expand=True).stack().value_counts().rename_axis('OS').reset_index(name='Count')
df_counts
df_counts.to_csv('013_flourish_data.csv', index=False) | _____no_output_____ | MIT | M2.859_20211_A9_gbonillas.ipynb | gpbonillas/stackoverflow_2021_wrangling_data |
2.14. What proportion of developers report a mental health condition, by country? The appropriate fields will be selected to answer this question | data_devs_mental_health_countries = data_test[['Country', 'MentalHealth']]
data_devs_mental_health_countries.head()
data_devs_mental_health_countries['MentalHealth'].value_counts()
df_data_devs_mental_health_countries = data_devs_mental_health_countries.copy()
df_data_devs_mental_health_countries = df_data_devs_mental_health_countries[df_data_devs_mental_health_countries['MentalHealth'].isin(['Desorden de concentración o memoria', 'Desorden emocional', 'Desorden de ansiedad', 'Tipo de autismo'])]
df_data_devs_mental_health_countries.head()
df_data_flourish_014 = df_data_devs_mental_health_countries['Country'].value_counts().to_frame('counts').reset_index()
df_data_flourish_014 = df_data_flourish_014.head(10)
df_data_flourish_014
df_data_flourish_014_best_ten = df_data_devs_mental_health_countries[df_data_devs_mental_health_countries['Country'].isin(['United States of America', 'United Kingdom of Great Britain and Northern Ireland', 'Brazil', 'Canada', 'India', 'Germany', 'Australia', 'Netherlands', 'Poland', 'Turkey'])]
df = df_data_flourish_014_best_ten.copy()
df
df1 = pd.crosstab(df['Country'], df['MentalHealth'])
df1
(df_data_devs_mental_health_countries.groupby(['Country', 'MentalHealth']).size()
.sort_values(ascending=False)
.reset_index(name='count')
.drop_duplicates(subset='Country'))
df_flourish_data_014 = (df_data_devs_mental_health_countries.groupby(['Country', 'MentalHealth']).size()
.sort_values(ascending=False)
.reset_index(name='count'))
df_flourish_data_014 = df_flourish_data_014.sort_values('Country')
df_data_flourish_014.head(10).to_csv('014_flourish_data_014.csv', index=False)
df1.to_csv('014_flourish_data_014.csv', index=True) | _____no_output_____ | MIT | M2.859_20211_A9_gbonillas.ipynb | gpbonillas/stackoverflow_2021_wrangling_data |
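The crosstab `df1` above holds absolute counts; since the question asks for a proportion, each country's row can be normalized to percentages. A minimal sketch reusing `df1` from the cells above:

```python
# Convert the per-country counts into row percentages (each country sums to 100%)
df1_pct = df1.div(df1.sum(axis=1), axis=0).mul(100).round(2)
print(df1_pct)
```

The same result can also be obtained directly with `pd.crosstab(df['Country'], df['MentalHealth'], normalize='index')`.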
2.15. Which countries have the best developer salaries? The appropriate fields will be selected to answer this question | df_best_incomes_countries = data_test[['Country', 'ConvertedCompYearly']].copy()
df_best_incomes_countries
def remove_outliers(df, q=0.05):
upper = df.quantile(1-q)
lower = df.quantile(q)
mask = (df < upper) & (df > lower)
return mask
mask = remove_outliers(df_best_incomes_countries['ConvertedCompYearly'], 0.1)
print(df_best_incomes_countries[mask])
df_best_incomes_countries_no_outliers = df_best_incomes_countries[mask]
df_best_incomes_countries_no_outliers1 = df_best_incomes_countries_no_outliers.copy()
df_best_incomes_countries_no_outliers1['ConvertedCompYearlyCategorical'] = 'ALTO'
df_best_incomes_countries_no_outliers1.loc[(df_best_incomes_countries_no_outliers1['ConvertedCompYearly'] >= 0) & (df_best_incomes_countries_no_outliers1['ConvertedCompYearly'] <= 32747), 'ConvertedCompYearlyCategorical'] = 'BAJO'
df_best_incomes_countries_no_outliers1.loc[(df_best_incomes_countries_no_outliers1['ConvertedCompYearly'] > 32747) & (df_best_incomes_countries_no_outliers1['ConvertedCompYearly'] <= 90000), 'ConvertedCompYearlyCategorical'] = 'MEDIO'
print(df_best_incomes_countries_no_outliers1)
df_best_incomes_countries_no_outliers1['ConvertedCompYearlyCategorical'].value_counts()
df_best_incomes_countries_alto = df_best_incomes_countries_no_outliers1[df_best_incomes_countries_no_outliers1['ConvertedCompYearlyCategorical'] == 'ALTO']
df_alto = df_best_incomes_countries_alto[['Country', 'ConvertedCompYearlyCategorical']].copy()
df_flourish_015 = df_alto['Country'].value_counts().to_frame('counts').reset_index()
df_flourish_015.head(10)
df_flourish_015.head(10).to_csv('015_flourish_data.csv', index=False) | _____no_output_____ | MIT | M2.859_20211_A9_gbonillas.ipynb | gpbonillas/stackoverflow_2021_wrangling_data |
2.16. What are the 10 most used programming languages among developers? The appropriate fields will be selected to answer this question | df_10_prog_languages_devs = data_test[['LanguageHaveWorkedWith']].copy()
df_10_prog_languages_devs.head()
df_10_prog_languages_devs['LanguageHaveWorkedWith'] = df_10_prog_languages_devs['LanguageHaveWorkedWith'].str.replace(';', ' ')
df_counts_016 = df_10_prog_languages_devs['LanguageHaveWorkedWith'].str.split(expand=True).stack().value_counts().rename_axis('Languages').reset_index(name='Count')
df_counts_016.head(10)
df_counts_016.head(10).to_csv('016_flourish_data.csv', index=False) | _____no_output_____ | MIT | M2.859_20211_A9_gbonillas.ipynb | gpbonillas/stackoverflow_2021_wrangling_data |
2.17. What are the most used databases among developers? The appropriate fields will be selected to answer this question | df_10_databases = data_test[['DatabaseHaveWorkedWith']].copy()
df_10_databases.head()
df_10_databases['DatabaseHaveWorkedWith'] = df_10_databases['DatabaseHaveWorkedWith'].str.replace(' ', '')
df_10_databases['DatabaseHaveWorkedWith'] = df_10_databases['DatabaseHaveWorkedWith'].str.replace(';', ' ')
df_counts_017 = df_10_databases['DatabaseHaveWorkedWith'].str.split(expand=True).stack().value_counts().rename_axis('Databases').reset_index(name='Count')
df_counts_017.head(10)
df_counts_017.head(10).to_csv('017_flourish_data.csv', index=False) | _____no_output_____ | MIT | M2.859_20211_A9_gbonillas.ipynb | gpbonillas/stackoverflow_2021_wrangling_data |
2.18. What are the most used platforms among developers? The appropriate fields will be selected to answer this question | df_10_platforms = data_test[['PlatformHaveWorkedWith']].copy()
df_10_platforms.head()
df_10_platforms['PlatformHaveWorkedWith'] = df_10_platforms['PlatformHaveWorkedWith'].str.replace(' ', '')
df_10_platforms['PlatformHaveWorkedWith'] = df_10_platforms['PlatformHaveWorkedWith'].str.replace(';', ' ')
df_counts_018 = df_10_platforms['PlatformHaveWorkedWith'].str.split(expand=True).stack().value_counts().rename_axis('Platform').reset_index(name='Count')
df_counts_018.head(10)
df_counts_018.to_csv('018_flourish_data.csv', index=False) | _____no_output_____ | MIT | M2.859_20211_A9_gbonillas.ipynb | gpbonillas/stackoverflow_2021_wrangling_data |
2.19. What are the most used web frameworks among developers? The appropriate fields will be selected to answer this question | df_10_web_frameworks = data_test[['WebframeHaveWorkedWith']].copy()
df_10_web_frameworks.head()
df_10_web_frameworks['WebframeHaveWorkedWith'] = df_10_web_frameworks['WebframeHaveWorkedWith'].str.replace(' ', '')
df_10_web_frameworks['WebframeHaveWorkedWith'] = df_10_web_frameworks['WebframeHaveWorkedWith'].str.replace(';', ' ')
df_counts_019 = df_10_web_frameworks['WebframeHaveWorkedWith'].str.split(expand=True).stack().value_counts().rename_axis('Web framework').reset_index(name='Count')
df_counts_019.head(10)
df_counts_019.to_csv('019_flourish_data.csv', index=False) | _____no_output_____ | MIT | M2.859_20211_A9_gbonillas.ipynb | gpbonillas/stackoverflow_2021_wrangling_data |
2.20. What are the most used technology tools among developers? The appropriate fields will be selected to answer this question | df_10_data_misc_techs = data_test[['MiscTechHaveWorkedWith', 'ToolsTechHaveWorkedWith']].copy()
df_10_data_misc_techs.head()
df_10_data_misc_techs['AllMiscTechs'] = df_10_data_misc_techs['MiscTechHaveWorkedWith'].map(str) + ';' + df_10_data_misc_techs['ToolsTechHaveWorkedWith'].map(str)
df_10_data_misc_techs.head()
df_10_data_misc_techs['AllMiscTechs'] = df_10_data_misc_techs['AllMiscTechs'].str.replace(' ', '')
df_10_data_misc_techs['AllMiscTechs'] = df_10_data_misc_techs['AllMiscTechs'].str.replace(';', ' ')
df_counts_020 = df_10_data_misc_techs['AllMiscTechs'].str.split(expand=True).stack().value_counts().rename_axis('Tecnología').reset_index(name='# Programadores')
df_counts_020.head(10)
df_counts_020.head(10).to_csv('020_flourish_data.csv', index=False) | _____no_output_____ | MIT | M2.859_20211_A9_gbonillas.ipynb | gpbonillas/stackoverflow_2021_wrangling_data |
2.21. What are the most used collaboration tools among developers? The appropriate fields will be selected to answer this question | df_10_colab = data_test[['NEWCollabToolsHaveWorkedWith']].copy()
df_10_colab.head()
df_10_colab['NEWCollabToolsHaveWorkedWith'] = df_10_colab['NEWCollabToolsHaveWorkedWith'].str.replace(' ', '')
df_10_colab['NEWCollabToolsHaveWorkedWith'] = df_10_colab['NEWCollabToolsHaveWorkedWith'].str.replace(';', ' ')
df_counts_021 = df_10_colab['NEWCollabToolsHaveWorkedWith'].str.split(expand=True).stack().value_counts().rename_axis('Herramienta Colaborativa').reset_index(name='# Programadores')
df_counts_021.head(10)
df_counts_021.head(10).to_csv('021_flourish_data.csv', index=False) | _____no_output_____ | MIT | M2.859_20211_A9_gbonillas.ipynb | gpbonillas/stackoverflow_2021_wrangling_data |
2.22. Which countries have the largest number of developers working full-time? The appropriate fields will be selected to answer this question | df_fulltime_employment = data_test[['Country', 'Employment']].copy()
df_fulltime_employment.head()
df_fulltime_employment.info()
df_fulltime_only = df_fulltime_employment[df_fulltime_employment['Employment'] == 'Tiempo completo']
df_fulltime_only.head()
df_flourish_022 = df_fulltime_only['Country'].value_counts().to_frame('# Programadores').reset_index()
df_flourish_022.head(10)
df_flourish_022.head(10).to_csv('022_flourish_data.csv', index=False) | _____no_output_____ | MIT | M2.859_20211_A9_gbonillas.ipynb | gpbonillas/stackoverflow_2021_wrangling_data |
Pyber Analysis 4.3 Loading and Reading CSV files | # Add Matplotlib inline magic command
%matplotlib inline
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import matplotlib.dates as mdates
# File to Load (Remember to change these)
city_data_to_load = "Resources/city_data.csv"
ride_data_to_load = "Resources/ride_data.csv"
# Read the City and Ride Data
city_data_df = pd.read_csv(city_data_to_load)
ride_data_df = pd.read_csv(ride_data_to_load) | _____no_output_____ | Apache-2.0 | PyBer_analysis_code.ipynb | rfwilliams92/Pyber_Ridesharing_Analysis |
Merge the DataFrames | # Combine the data into a single dataset
pyber_data_df = pd.merge(ride_data_df, city_data_df, how="left", on="city")
# Display the data table for preview
pyber_data_df | _____no_output_____ | Apache-2.0 | PyBer_analysis_code.ipynb | rfwilliams92/Pyber_Ridesharing_Analysis |
Deliverable 1: Get a Summary DataFrame | # 1. Get the total rides for each city type
tot_rides_by_type = pyber_data_df.groupby(["type"]).count()["ride_id"]
tot_rides_by_type
# 2. Get the total drivers for each city type
tot_drivers_by_type = city_data_df.groupby(["type"]).sum()["driver_count"]
tot_drivers_by_type
# 3. Get the total amount of fares for each city type
tot_fares_by_type = pyber_data_df.groupby(["type"]).sum()["fare"]
tot_fares_by_type
# 4. Get the average fare per ride for each city type.
avg_fare_by_type = round((tot_fares_by_type / tot_rides_by_type), 2)
avg_fare_by_type
# 5. Get the average fare per driver for each city type.
avg_fare_per_driver_by_type = round((tot_fares_by_type / tot_drivers_by_type), 2)
avg_fare_per_driver_by_type
# 6. Create a PyBer summary DataFrame.
pyber_summary_df = pd.DataFrame({
"Total Rides": tot_rides_by_type,
"Total Drivers": tot_drivers_by_type,
"Total Fares": tot_fares_by_type,
"Average Fare per Ride": avg_fare_by_type,
"Average Fare per Driver": avg_fare_per_driver_by_type
})
pyber_summary_df.dtypes
# 7. Cleaning up the DataFrame. Delete the index name
pyber_summary_df.index.name = None
pyber_summary_df
# 8. Format the columns.
pyber_summary_df['Total Rides'] = pyber_summary_df['Total Rides'].map('{:,}'.format)
pyber_summary_df['Total Drivers'] = pyber_summary_df['Total Drivers'].map('{:,}'.format)
pyber_summary_df['Total Fares'] = pyber_summary_df['Total Fares'].map('${:,}'.format)
pyber_summary_df['Average Fare per Ride'] = pyber_summary_df['Average Fare per Ride'].map('${:,}'.format)
pyber_summary_df['Average Fare per Driver'] = pyber_summary_df['Average Fare per Driver'].map('${:,}'.format)
pyber_summary_df | _____no_output_____ | Apache-2.0 | PyBer_analysis_code.ipynb | rfwilliams92/Pyber_Ridesharing_Analysis |
Deliverable 2. Create a multiple-line plot that shows the total weekly fares for each type of city. | # 1. Read the merged DataFrame
pyber_data_df
# 2. Using groupby() to create a new DataFrame showing the sum of the fares
# for each date where the indices are the city type and date.
tot_fares_by_date_df = pd.DataFrame(pyber_data_df.groupby(["type", "date"]).sum()["fare"])
tot_fares_by_date_df
# 3. Reset the index on the DataFrame you created in #1. This is needed to use the 'pivot()' function.
# df = df.reset_index()
tot_fares_by_date_df = tot_fares_by_date_df.reset_index()
tot_fares_by_date_df
# 4. Create a pivot table with the 'date' as the index, the columns ='type', and values='fare'
# to get the total fares for each type of city by the date.
pyber_pivot = tot_fares_by_date_df.pivot(index="date", columns="type", values="fare")
pyber_pivot
# 5. Create a new DataFrame from the pivot table DataFrame using loc on the given dates, '2019-01-01':'2019-04-29'.
pyber_pivot_df = pyber_pivot.loc['2019-01-01':'2019-04-29']
pyber_pivot_df
# 6. Set the "date" index to datetime datatype. This is necessary to use the resample() method in Step 8.
pyber_pivot_df.index = pd.to_datetime(pyber_pivot_df.index)
# 7. Check that the datatype for the index is datetime using df.info()
pyber_pivot_df.info()
# 8. Create a new DataFrame using the "resample()" function by week 'W' and get the sum of the fares for each week.
tot_fares_by_week_df = pyber_pivot_df.resample('W').sum()
tot_fares_by_week_df
# 9. Using the object-oriented interface method, plot the resampled DataFrame using the df.plot() function.
# Import the style from Matplotlib.
from matplotlib import style
# Use the graph style fivethirtyeight.
style.use('fivethirtyeight')
fig, ax = plt.subplots()
tot_fares_by_week_df.plot(figsize=(20,7), ax=ax)
ax.set_title("Total Fares by City Type")
ax.set_ylabel("Fares($USD)")
ax.set_xlabel("Month(Weekly Fare Totals)")
ax.legend(labels=["Rural", "Suburban", "Urban"],
loc="center")
plt.savefig("analysis/PyBer_fare_summary.png")
plt.show() | _____no_output_____ | Apache-2.0 | PyBer_analysis_code.ipynb | rfwilliams92/Pyber_Ridesharing_Analysis |
Import Packages | from ndfinance.brokers.backtest import *
from ndfinance.core import BacktestEngine
from ndfinance.analysis.backtest import BacktestAnalyzer
from ndfinance.strategies import PeriodicRebalancingStrategy
from ndfinance.visualizers.backtest_visualizer import BasicVisualizer
%matplotlib inline
import matplotlib.pyplot as plt | 2020-10-10 13:22:13,815 INFO resource_spec.py:212 -- Starting Ray with 15.38 GiB memory available for workers and up to 7.7 GiB for objects. You can adjust these settings with ray.init(memory=<bytes>, object_store_memory=<bytes>).
2020-10-10 13:22:14,051 WARNING services.py:923 -- Redis failed to start, retrying now.
2020-10-10 13:22:14,252 INFO services.py:1165 -- View the Ray dashboard at localhost:8265
| MIT | examples/all_weather_portfolio.ipynb | gomtinQQ/NDFinance |
build strategy | class AllWeatherPortfolio(PeriodicRebalancingStrategy):
def __init__(self, weight_dict, rebalance_period):
super(AllWeatherPortfolio, self).__init__(rebalance_period)
self.weight_dict = weight_dict
def _logic(self):
self.broker.order(Rebalance(self.weight_dict.keys(), self.weight_dict.values())) | _____no_output_____ | MIT | examples/all_weather_portfolio.ipynb | gomtinQQ/NDFinance |
Set portfolio elements, weights, and rebalance period. You can adjust them and play around on your own! | PORTFOLIO = {
"GLD" : 0.05,
"SPY" : 0.5,
"SPTL" : 0.15,
"BWZ" : 0.15,
"SPHY": 0.15,
}
REBALANCE_PERIOD = TimeFrames.day * 365 | _____no_output_____ | MIT | examples/all_weather_portfolio.ipynb | gomtinQQ/NDFinance |
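If the weights are changed, it is easy to end up with an allocation that no longer sums to 1. A small sanity check such as the sketch below can catch that before running the backtest; this is just a convenience check, not part of the NDFinance API.

```python
# Make sure the target allocation is fully invested (weights sum to 1.0)
total_weight = sum(PORTFOLIO.values())
assert abs(total_weight - 1.0) < 1e-9, f"Portfolio weights sum to {total_weight}, expected 1.0"
```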
Make data provider | dp = BacktestDataProvider()
dp.add_yf_tickers(*PORTFOLIO.keys()) | _____no_output_____ | MIT | examples/all_weather_portfolio.ipynb | gomtinQQ/NDFinance |
Make time indexer | indexer = TimeIndexer(dp.get_shortest_timestamp_seq())
dp.set_indexer(indexer)
dp.cut_data() | _____no_output_____ | MIT | examples/all_weather_portfolio.ipynb | gomtinQQ/NDFinance |
Make broker and add assets | brk = BacktestBroker(dp, initial_margin=10000)
_ = [brk.add_asset(Asset(ticker=ticker)) for ticker in PORTFOLIO.keys()] | _____no_output_____ | MIT | examples/all_weather_portfolio.ipynb | gomtinQQ/NDFinance |
Initialize strategy | strategy = AllWeatherPortfolio(PORTFOLIO, rebalance_period=REBALANCE_PERIOD) | _____no_output_____ | MIT | examples/all_weather_portfolio.ipynb | gomtinQQ/NDFinance |
Initialize backtest engine | engine = BacktestEngine()
engine.register_broker(brk)
engine.register_strategy(strategy) | _____no_output_____ | MIT | examples/all_weather_portfolio.ipynb | gomtinQQ/NDFinance |
run | log = engine.run() | [ENGINE]: 100%|██████████| 2090/2090 [00:00<00:00, 11637.76it/s]
| MIT | examples/all_weather_portfolio.ipynb | gomtinQQ/NDFinance |
run analysis | analyzer = BacktestAnalyzer(log)
analyzer.print() |
-------------------------------------------------- [BACKTEST RESULT] --------------------------------------------------
CAGR:10.644
MDD:19.49
CAGR_MDD_ratio:0.546
win_trade_count:24
lose_trade_count:11
total_trade_count:75
win_rate_percentage:32.0
lose_rate_percentage:14.667
sharpe_ratio:0.279
sortino_ratio:0.361
pnl_ratio_sum:13.677
pnl_ratio:6.269
average_realized_pnl:90.262
max_realized_pnl:1255.897
min_realized_pnl:-232.054
average_realized_pnl_percentage:2.427
max_realized_pnl_percentage:26.859
min_realized_pnl_percentage:-13.873
average_realized_pnl_percentage_weighted:0.661
max_realized_pnl_percentage_weighted:9.432
min_realized_pnl_percentage_weighted:-1.999
average_portfolio_value_total:12706.284
max_portfolio_value_total:18406.706
min_portfolio_value_total:9613.284
average_portfolio_value:12706.284
max_portfolio_value:18406.706
min_portfolio_value:9613.284
average_leverage:0.88
max_leverage:1.0
min_leverage:0.0
average_leverage_total:0.88
max_leverage_total:1.0
min_leverage_total:0.0
average_cash_weight_percentage:11.962
max_cash_weight_percentage:100.0
min_cash_weight_percentage:0.0
average_cash_weight_percentage_total:11.962
max_cash_weight_percentage_total:100.0
min_cash_weight_percentage_total:0.0
average_unrealized_pnl_percentage:2.343
max_unrealized_pnl_percentage:10.526
min_unrealized_pnl_percentage:-11.134
average_unrealized_pnl_percentage_total:2.343
max_unrealized_pnl_percentage_total:10.526
min_unrealized_pnl_percentage_total:-11.134
average_1M_pnl_percentage:0.59
max_1M_pnl_percentage:8.165
min_1M_pnl_percentage:-6.083
average_1D_pnl_percentage:0.0
max_1D_pnl_percentage:0.0
min_1D_pnl_percentage:0.0
average_1W_pnl_percentage:0.107
max_1W_pnl_percentage:5.255
min_1W_pnl_percentage:-9.881
| MIT | examples/all_weather_portfolio.ipynb | gomtinQQ/NDFinance |
visualize | visualizer = BasicVisualizer()
visualizer.plot_log(log) | _____no_output_____ | MIT | examples/all_weather_portfolio.ipynb | gomtinQQ/NDFinance |
Export | visualizer.export(EXPORT_PATH)
analyzer.export(EXPORT_PATH) |
-------------------------------------------------- [EXPORTING FIGURES] --------------------------------------------------
exporting figure to: ./bt_results/all_weather_portfolio/plot/mdd.png
exporting figure to: ./bt_results/all_weather_portfolio/plot/cagr.png
exporting figure to: ./bt_results/all_weather_portfolio/plot/sharpe.png
exporting figure to: ./bt_results/all_weather_portfolio/plot/sortino.png
exporting figure to: ./bt_results/all_weather_portfolio/plot/portfolio_value.png
exporting figure to: ./bt_results/all_weather_portfolio/plot/portfolio_value_cum_pnl_perc.png
exporting figure to: ./bt_results/all_weather_portfolio/plot/portfolio_value_total.png
exporting figure to: ./bt_results/all_weather_portfolio/plot/portfolio_value_total_cum_pnl_perc.png
exporting figure to: ./bt_results/all_weather_portfolio/plot/realized_pnl_percentage_hist.png
exporting figure to: ./bt_results/all_weather_portfolio/plot/realized_pnl_percentage_weighted_hist.png
exporting figure to: ./bt_results/all_weather_portfolio/plot/1M_pnl_hist.png
exporting figure to: ./bt_results/all_weather_portfolio/plot/1D_pnl_hist.png
exporting figure to: ./bt_results/all_weather_portfolio/plot/1W_pnl_hist.png
exporting figure to: ./bt_results/all_weather_portfolio/plot/1M_pnl_bar.png
exporting figure to: ./bt_results/all_weather_portfolio/plot/1D_pnl_bar.png
exporting figure to: ./bt_results/all_weather_portfolio/plot/1W_pnl_bar.png
-------------------------------------------------- [EXPORTING RESULT/LOG] --------------------------------------------------
saving log: ./bt_results/all_weather_portfolio/broker_log.csv
saving log: ./bt_results/all_weather_portfolio/portfolio_log.csv
saving result to: ./bt_results/all_weather_portfolio/result.json
| MIT | examples/all_weather_portfolio.ipynb | gomtinQQ/NDFinance |
Descriptive Statistics and Data Visualization. This notebook presents the descriptive statistics of the dataset with visualizations. We will analyze the behavior of some features that are crucial when buying/selling used vehicles. | from Utils import *
from tqdm import tqdm
from matplotlib import pyplot as plt
import seaborn as sns
pd.set_option('display.max_colwidth', 100)
DATASET = "../datasets/clean_vehicles_2.csv"
df = pd.read_csv(DATASET)
df.describe() | _____no_output_____ | MIT | Parte 1/notebooks/3-descriptive_stats.ipynb | mbs8/IF679-ciencia-de-dados |
Univariate Statistics. Here we will analyze the behavior of some variables with respect to their distribution. Year of manufacture | ##Analysis of the mean, standard deviation, median and mode of the year of manufacture
print(
"Ano do veículo:\n"
"Média: "+floatStr(df['year'].mean())+"\n"+
"Desvio padrão: "+floatStr(df['year'].std())+"\n"+
"Mediana: "+floatStr(df['year'].median())+"\n"+
"IQR: "+floatStr(df['year'].describe()[6] - df['year'].describe()[4])+"\n"+
"Moda: "+floatStr(df['year'].mode().loc[0])
) | Ano do veículo:
Média: 2010.26
Desvio padrão: 8.67
Mediana: 2012.0
IQR: 9.0
Moda: 2017.0
| MIT | Parte 1/notebooks/3-descriptive_stats.ipynb | mbs8/IF679-ciencia-de-dados |
Here we see a median greater than the mean, which suggests that this variable does not follow a normal distribution. It indicates that some very old cars are being sold, producing a skewed curve. To verify this, let's plot the histogram. | ##Plot the histogram of the distribution of the vehicle's year of manufacture
bars = df[df['year']> 0].year.max() - df[df['year']> 0].year.min()
df[df['year']> 0].year.hist(bins = int(bars)) | _____no_output_____ | MIT | Parte 1/notebooks/3-descriptive_stats.ipynb | mbs8/IF679-ciencia-de-dados |
However, this plot does not give us a good visualization. The list includes some cars aimed at collectors, which is not the profile we want to study. So, taking the year 1985 as a threshold, we analyze the histogram of the distribution of cars marketed "for regular use". Now we can see that most of the cars sold were manufactured after 2000. | ##Plot of the histogram of the car's year of manufacture, limited to 1985 onwards
bars = df['year'].max() - 1985
df[df['year']> 1985].year.hist(bins = int(bars))
| _____no_output_____ | MIT | Parte 1/notebooks/3-descriptive_stats.ipynb | mbs8/IF679-ciencia-de-dados |
Vehicle resale price | ##Univariate statistics of the vehicle price values
print(
"Preço do veículo:\n"
"Média: "+floatStr(df[df['price'] > 0].price.mean())+"\n"+
"Desvio padrão: "+floatStr(df[df['price'] > 0].price.std())+"\n"+
"Mediana: "+floatStr(df[df['price'] > 0].price.median())+"\n"+
"IQR: "+floatStr(df['price'].describe()[6] - df['price'].describe()[4])+"\n"+
"Moda: "+floatStr(df[df['price'] > 0].price.mode().loc[0])
) | Preço do veículo:
Média: 36809.65
Desvio padrão: 6571953.45
Mediana: 11495.0
IQR: 13000.0
Moda: 7995
| MIT | Parte 1/notebooks/3-descriptive_stats.ipynb | mbs8/IF679-ciencia-de-dados |
Here we find a very large spread in these data, which suggests a highly varied and skewed price distribution. Because of this, a histogram of all the data is not readable. We can work around this in two ways: * We could use log10 to get a sense of the order of magnitude, but we would not extract much information, since most values would fall around log10(x) = 4. * Another alternative is to plot a subset of the prices. So, after some analysis, we plot only values from 0 to $100,000. | sns.distplot(df[(df['price'] > 0) & (df['price'] < 100000)].price, bins = 100,norm_hist = False, hist=True, kde=False) | _____no_output_____ | MIT | Parte 1/notebooks/3-descriptive_stats.ipynb | mbs8/IF679-ciencia-de-dados |
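The log10 alternative mentioned above can also be visualized directly. The sketch below plots the same prices on a log10 scale, assuming the same `df` and keeping only positive prices:

```python
import numpy as np

# Histogram of log10(price) as an alternative view of the skewed price distribution
log_price = np.log10(df[df['price'] > 0]['price'])
sns.distplot(log_price, bins=100, kde=False)
plt.xlabel('log10(price)')
plt.show()
```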
Current odometer reading (miles driven by the vehicle) | ##Univariate statistics of the odometer readings.
##Note that we are discarding null values for this analysis
print(
"Odômetro do veículo:\n"
"Média: "+floatStr(df[df['odometer'] > 0].odometer.mean())+"\n"+
"Desvio padrão: "+floatStr(df[df['odometer'] > 0].odometer.std())+"\n"+
"Mediana: "+floatStr(df[df['odometer'] > 0].odometer.median())+"\n"+
"IQR: "+floatStr(df['odometer'].describe()[6] - df['odometer'].describe()[4])+"\n"+
"Moda: "+floatStr(df[df['odometer'] > 0].odometer.mode().loc[0])
) | Odômetro do veículo:
Média: 99705.09
Desvio padrão: 111570.94
Mediana: 92200.0
IQR: 92054.0
Moda: 150000.0
| MIT | Parte 1/notebooks/3-descriptive_stats.ipynb | mbs8/IF679-ciencia-de-dados |
Here we also have a wide range of values. Only 492 of them are above 800,000 recorded miles. For the purposes of this analysis, we will use this interval. | sns.distplot(df[(df['odometer'] > 0) & (df['odometer'] < 400000)].odometer, bins = 100,norm_hist = False, hist=True, kde=False) | _____no_output_____ | MIT | Parte 1/notebooks/3-descriptive_stats.ipynb | mbs8/IF679-ciencia-de-dados |
Visualization of the number of listings per vehicle manufacturer. We will do a visual analysis to try to identify the most popular brands in the used-car market. | ## Plot the share of the most-advertised brands
manufacturers = df['manufacturer'].value_counts().drop(df['manufacturer'].value_counts().index[8]).drop(df['manufacturer'].value_counts().index[13:])
sns.set()
plt.figure(figsize=(10,5))
sns.barplot(x=manufacturers.index, y=manufacturers)
print("As 3 marcas mais anunciadas (Ford, chevrolet, toyota) equivalem a "
+str(round(sum(df['manufacturer'].value_counts().values[0:3])/df['manufacturer'].count()*100,2))
+"% deste mercado.")
filter_list = ['ford', 'chevrolet', 'toyota', 'nissan', 'honda']
filtereddf = df[df.manufacturer.isin(filter_list)]
ax = sns.boxplot(x="manufacturer", y="price", data= filtereddf[filtereddf['price']< 40000]) | _____no_output_____ | MIT | Parte 1/notebooks/3-descriptive_stats.ipynb | mbs8/IF679-ciencia-de-dados |
Visualization of the relationship between price and drivetrain. Here we can compare how price varies with the vehicle's drivetrain: * 4wd: 4x4 (four-wheel drive) * rwd: rear-wheel drive * fwd: front-wheel drive We compare the mean, median, and count. However, as earlier analyses showed that the median gives a more reasonable value, we sort by it. | df[df['drive'] != 'undefined'].groupby(['drive']).agg(['mean','median','count'])['price'].sort_values(by='median', ascending=False) | _____no_output_____ | MIT | Parte 1/notebooks/3-descriptive_stats.ipynb | mbs8/IF679-ciencia-de-dados |
Bivariate Statistics. Here we will try to find out whether the numerical variables have any kind of correlation. We will first look at Spearman's method, then Pearson's. Then we will try to visualize some of these relationships. | ##Applying some limits before analyzing correlations between the variables
car = df[(df['odometer']> 0) & (df['odometer']<400000)]
car = car[(car['price']>0) & (car['price']<100000)]
car = car[car['year']>=1985]
car = car.drop(['lat','long'], axis=1)
car.cov()
car.corr(method='spearman')
car.corr(method='pearson')
##Price vs. miles driven among the 3 most popular brands
filter_list = ['ford', 'chevrolet', 'toyota']
car[car['manufacturer'].isin(filter_list)].plot.scatter(x='odometer',y='price')
g = sns.FacetGrid(car[car['manufacturer']!='undefined'], col="manufacturer", hue='drive')
g.map(sns.scatterplot, "year", "price")
g.add_legend()
#Click the small image to expand | _____no_output_____ | MIT | Parte 1/notebooks/3-descriptive_stats.ipynb | mbs8/IF679-ciencia-de-dados |
Welcome to the introductory template of the Python Graph Gallery. Here is how to proceed to add a new `.ipynb` file that will be converted to a blogpost in the gallery! Notebook Metadata It is very important to add the following fields to your notebook. It helps with building the page later on: - **slug**: the URL of the blogpost. It should be exactly the same as the file title. Example: `70-basic-density-plot-with-seaborn` - **chartType**: the chart type, like density or heatmap. For a complete list see [here](https://github.com/holtzy/The-Python-Graph-Gallery/blob/master/src/util/sectionDescriptions.js); it must be one of the `id` options. - **title**: what will be written in big on top of the blogpost! Use html syntax there. - **description**: what will be written just below the title, centered text. - **keyword**: list of keywords related to the blogpost - **seoDescription**: a description for the blogpost meta. Should be a bit shorter than the description and must not contain any html syntax. Add a chart description A chart example always comes with some explanation. It must: contain keywords, link to related pages like the parent page (graph section), and give explanations: in depth for complicated charts, high level for beginner-level charts. Add a chart | import seaborn as sns, numpy as np
np.random.seed(0)
x = np.random.randn(100)
ax = sns.distplot(x) | _____no_output_____ | 0BSD | src/notebooks/255-percentage-stacked-area-chart.ipynb | nrslt/The-Python-Graph-Gallery |
Airbnb - Rio de Janeiro* Download [data](http://insideairbnb.com/get-the-data.html)* We downloaded `listings.csv` from all monthly dates available Questions1. What was the price and supply behavior before and during the pandemic?2. Does a title in English or Portuguese impact the price?3. What features correlate with the price? Can we predict a price? Which features matter? | import numpy as np
import pandas as pd
import seaborn as sns
import glob
import re
import pendulum
import tqdm
import matplotlib.pyplot as plt
import langid
langid.set_languages(['en','pt']) | _____no_output_____ | MIT | airbnb-rj-1/Data Treatment.ipynb | reneoctavio/analysis |
Read files. Read all 30 files and get their date | files = sorted(glob.glob('data/listings*.csv'))
df = []
for f in files:
date = pendulum.from_format(re.findall(r"\d{4}_\d{2}_\d{2}", f)[0], fmt="YYYY_MM_DD").naive()
csv = pd.read_csv(f)
csv["date"] = date
df.append(csv)
df = pd.concat(df)
df | _____no_output_____ | MIT | airbnb-rj-1/Data Treatment.ipynb | reneoctavio/analysis |
Deal with NaNs* Drop `neighbourhood_group` as it is all NaNs* Fill `reviews_per_month` with zeros (if there is no review, then reviews per month are zero)* Keep `name` for now* Drop the `host_name` column, as there are no null `host_id` values* Keep `last_review` too, as there are rooms with no review | df.isna().any()
df = df.drop(["host_name", "neighbourhood_group"], axis=1)
df["reviews_per_month"] = df["reviews_per_month"].fillna(0.)
df.head() | _____no_output_____ | MIT | airbnb-rj-1/Data Treatment.ipynb | reneoctavio/analysis |
Detect `name` language* Clean strings for evaluation* Remove common neighbourhood names in Portuguese from the `name` column to reduce misprediction* Remove several non-alphanumeric characters* Detect language using [langid](https://github.com/saffsd/langid.py)* I restricted detection to pt and en; there are very few rooms listed in other languages.* Drop the `name` column | import unicodedata
stopwords = pd.unique(df["neighbourhood"])
stopwords = [re.sub(r"[\(\)]", "", x.lower().strip()).split() for x in stopwords]
stopwords = [x for item in stopwords for x in item]
stopwords += [unicodedata.normalize("NFKD", x).encode('ASCII', 'ignore').decode() for x in stopwords]
stopwords += ["rio", "janeiro", "copa", "arpoador", "pepê", "pepe", "lapa", "morro", "corcovado"]
stopwords = set(stopwords)
docs = [re.sub(r"[\-\_\\\/\,\;\:\!\+\’\%\&\d\*\#\"\´\`\.\|\(\)\[\]\@\'\»\«\>\<\❤️\…]", " ", str(x)) for x in df["name"].tolist()]
docs = [" ".join(x.lower().strip().split()) for x in docs]
docs = ["".join(e for e in x if (e.isalnum() or " ")) for x in docs]
ndocs = []
for doc in tqdm.tqdm(docs):
ndocs.append(" ".join([x for x in doc.split() if x not in stopwords]))
docs = ndocs
results = []
for d in tqdm.tqdm(docs):
results.append(langid.classify(d)[0])
df["language"] = results
# Because we transformed NaNs into string, fill those detection with nans too
df.loc[df["name"].isna(), "language"] = pd.NA | _____no_output_____ | MIT | airbnb-rj-1/Data Treatment.ipynb | reneoctavio/analysis |
* To test accuracy, manually label 383 of the 88191 unique names (95% confidence level, 5% margin of error) | df.loc[~df["name"].isna()].drop_duplicates("name").shape
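# (Sketch) A possible derivation of the 383 figure used below: Cochran's sample-size formula
# at 95% confidence (z = 1.96), p = 0.5 and a 5% margin of error, with a finite-population
# correction for the 88191 unique names. This reconstruction is an assumption, not from the notebook.
z, p, e, N = 1.96, 0.5, 0.05, 88191
n0 = z**2 * p * (1 - p) / e**2      # ~384.2 for an infinite population
n = n0 / (1 + (n0 - 1) / N)         # ~382.5, rounded up to 383
print(round(n0, 1), int(np.ceil(n)))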
df.loc[~df["name"].isna()].drop_duplicates("name")[["name", "language"]].sample(n=383, random_state=42).to_csv("lang_pred_1.csv")
lang_pred = pd.read_csv("lang_pred.csv", index_col=0)
lang_pred.head()
overall_accuracy = (lang_pred["pred"] == lang_pred["true"]).sum() / lang_pred.shape[0]
pt_accuracy = (lang_pred[lang_pred["true"] == "pt"]["true"] == lang_pred[lang_pred["true"] == "pt"]["pred"]).sum() / lang_pred[lang_pred["true"] == "pt"].shape[0]
en_accuracy = (lang_pred[lang_pred["true"] == "en"]["true"] == lang_pred[lang_pred["true"] == "en"]["pred"]).sum() / lang_pred[lang_pred["true"] == "en"].shape[0]
print(f"Overall accuracy: {overall_accuracy*100}%")
print(f"Portuguese accuracy: {pt_accuracy*100}%")
print(f"English accuracy: {en_accuracy*100}%")
df = df.drop("name", axis=1)
df.head()
df["language"].value_counts() | _____no_output_____ | MIT | airbnb-rj-1/Data Treatment.ipynb | reneoctavio/analysis |
Calculate how many times a room appeared* There are 30 months of data, and rooms appear multiple times* Calculate for a specific date, how many times the same room appeared up to that date | df = df.set_index(["id", "date"])
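# Count rows per (id, date), pivot the dates into columns, take a cumulative sum across the
# monthly scrapes, then stack back so each (id, date) pair holds its appearance count so far.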
df["appearances"] = df.groupby(["id", "date"])["host_id"].count().unstack().cumsum(axis=1).stack()
df = df.reset_index()
df.head() | _____no_output_____ | MIT | airbnb-rj-1/Data Treatment.ipynb | reneoctavio/analysis |
Days since last review* Calculate days since last review* Then categorize them by how long ago the last review happened | df.loc[:, "last_review"] = pd.to_datetime(df["last_review"], format="%Y/%m/%d")
# For each scraping date, use the maximum (most recent) review date as the reference for comparison
last_date = df.groupby("date")["last_review"].max()
df["last_date"] = df.apply(lambda row: last_date.loc[row["date"]], axis=1)
df["days_last_review"] = (df["last_date"] - df["last_review"]).dt.days
df = df.drop("last_date", axis=1)
df.head()
df["days_last_review"].describe()
def categorize_last_review(days_last_review):
"""Transform days since last review into categories
Transform days since last review into one of those categories:
last_week, last_month, last_half_year, last_year, last_two_years,
long_time_ago, or never
Args:
days_last_review (int): Days since the last review
Returns:
str: A string with the category name.
"""
if days_last_review <= 7:
return "last_week"
elif days_last_review <= 30:
return "last_month"
elif days_last_review <= 182:
return "last_half_year"
elif days_last_review <= 365:
return "last_year"
elif days_last_review <= 730:
return "last_two_years"
elif days_last_review > 730:
return "long_time_ago"
else:
return "never"
df.loc[:, "last_review"] = df.apply(lambda row: categorize_last_review(row["days_last_review"]), axis=1)
df = df.drop(["days_last_review"], axis=1)
df.head()
df = df.set_index(["id", "date"])
df.loc[:, "appearances"] = df["appearances"].astype(int)
df.loc[:, "host_id"] = df["host_id"].astype("category")
df.loc[:, "neighbourhood"] = df["neighbourhood"].astype("category")
df.loc[:, "room_type"] = df["room_type"].astype("category")
df.loc[:, "last_review"] = df["last_review"].astype("category")
df.loc[:, "language"] = df["language"].astype("category")
df
df.to_pickle("data.pkl") | _____no_output_____ | MIT | airbnb-rj-1/Data Treatment.ipynb | reneoctavio/analysis |
Distributions* Check the distribution of features | df = pd.read_pickle("data.pkl")
df.head()
df["latitude"].hist(bins=250)
df["longitude"].hist(bins=250)
df["price"].hist(bins=250)
df["minimum_nights"].hist(bins=250)
df["number_of_reviews"].hist()
df["reviews_per_month"].hist(bins=250)
df["calculated_host_listings_count"].hist(bins=250)
df["availability_365"].hist()
df["appearances"].hist(bins=29)
df.describe() | _____no_output_____ | MIT | airbnb-rj-1/Data Treatment.ipynb | reneoctavio/analysis |
Limits* We are analysing mostly for touristic purposes, so keep short-term rentals only* Prices between 10 and 10000 (the luxury Copacabana Palace Penthouse at 8000, for example)* Short-term rentals (minimum_nights < 31)* It is impossible to have more than 31 reviews per month | df = pd.read_pickle("data.pkl")
total_records = len(df)
outbound_values = (df["price"] < 10) | (df["price"] > 10000)
df = df[~outbound_values]
print(f"Removed values {outbound_values.sum()}, {outbound_values.sum()*100/total_records}%")
long_term = df["minimum_nights"] >= 31
df = df[~long_term]
print(f"Removed values {long_term.sum()}, {long_term.sum()*100/total_records}%")
reviews_limit = df["reviews_per_month"] > 31
df = df[~reviews_limit]
print(f"Removed values {reviews_limit.sum()}, {reviews_limit.sum()*100/total_records}%") | Removed values 2, 0.00019089597982611286%
| MIT | airbnb-rj-1/Data Treatment.ipynb | reneoctavio/analysis |
Log skewed variables* Most numerical values are skewed, so log them | df.describe()
# number_of_reviews, reviews_per_month, availability_365 have zeros, so add one to all before taking the log
df["number_of_reviews"] = np.log(df["number_of_reviews"] + 1)
df["reviews_per_month"] = np.log(df["reviews_per_month"] + 1)
df["availability_365"] = np.log(df["availability_365"] + 1)
df["price"] = np.log(df["price"])
df["minimum_nights"] = np.log(df["minimum_nights"])
df["calculated_host_listings_count"] = np.log(df["calculated_host_listings_count"])
df["appearances"] = np.log(df["appearances"])
df.describe() | _____no_output_____ | MIT | airbnb-rj-1/Data Treatment.ipynb | reneoctavio/analysis |
Extreme outliers* Most outliers are clearly mistyped values (one can check these room ids on the Airbnb website)* Remove extreme outliers first based on large deviations within the same `id` (eliminating rate jumps for the same room)* Then remove those within the same scraping `date`, `neighbourhood` and `room_type` | df = df.reset_index()
q25 = df.groupby(["id"])["price"].quantile(0.25)
q75 = df.groupby(["id"])["price"].quantile(0.75)
ext = q75 + 3 * (q75 - q25)
ext = ext[(q75 - q25) > 0.]
affected_rows = []
multiple_id = df[df["id"].isin(ext.index)]
for row in tqdm.tqdm(multiple_id.itertuples(), total=len(multiple_id)):
if row.price >= ext.loc[row.id]:
affected_rows.append(row.Index)
df = df.drop(affected_rows)
print(f"Removed values {len(affected_rows)}, {len(affected_rows)*100/total_records}%")
# Remove extreme outliers per neighbourhood, room_type and scraping date
q25 = df.groupby(["date", "neighbourhood", "room_type"])["price"].quantile(0.25)
q75 = df.groupby(["date", "neighbourhood", "room_type"])["price"].quantile(0.75)
ext = q75 + 3 * (q75 - q25)
ext
affected_rows = []
for row in tqdm.tqdm(df.itertuples(), total=len(df)):
if row.price >= ext.loc[(row.date, row.neighbourhood, row.room_type)]:
affected_rows.append(row.Index)
df = df.drop(affected_rows)
print(f"Removed values {len(affected_rows)}, {len(affected_rows)*100/total_records}%")
df.describe()
df["price"].hist()
df.to_pickle("treated_data.pkl") | _____no_output_____ | MIT | airbnb-rj-1/Data Treatment.ipynb | reneoctavio/analysis |
[](https://colab.research.google.com/github/real-itu/modern-ai-course/blob/master/lecture-04/lab.ipynb) Lab 4 - Math Stats You're given the following dataset: | import random
random.seed(0)
x = [random.gauss(0, 1)**2 for _ in range(20)]
print(x) | [0.8868279034128675, 1.9504304025306558, 0.46201173092655257, 0.13727289350107572, 1.0329650747173147, 0.00520129768651921, 0.032111381051647944, 0.6907259056240523, 1.713578821550704, 0.037592456206679545, 0.9865449735727571, 0.418585230265908, 0.11133432341026718, 2.7082355435792898, 0.3123577703699347, 0.26435707416151544, 5.779789763931721, 2.344213906200638, 0.6343578347545124, 4.014607380283022]
| MIT | lecture-04/lab.ipynb | LuxTheDude/modern-ai-course |
> Compute the min, max, mean, median, standard deviation and variance of x | # Your code here
import math
# min
mi = min(x)
print("min: " + str(mi))
#max
ma = max(x)
print("max: " + str(ma))
#mean
mean = sum(x)/len(x)
print("mean: " + str(mean))
#median
median = sorted(x)[int(len(x)/2)]
print("median: " + str(median))
#stddv
#variance
lars = 0
for v in x:
lars += math.pow(v - mean, 2)
variance = lars / len(x)
stddv = math.sqrt(variance)
print("standard deviation: " + str(stddv))
print("variance: " + str(variance)) | min: 0.00520129768651921
max: 5.779789763931721
mean: 1.2261550833868817
median: 0.6907259056240523
standard deviation: 1.4717408201314568
variance: 2.166021041641213
| MIT | lecture-04/lab.ipynb | LuxTheDude/modern-ai-course |
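As a quick cross-check (a sketch, not part of the lab), the built-in `statistics` module gives the same population statistics; note that `statistics.median` averages the two middle values for an even-length list, so it can differ slightly from the single-index median used above.
import statistics

print("mean:", statistics.mean(x))
print("median:", statistics.median(x))              # averages the two middle values for even n
print("standard deviation:", statistics.pstdev(x))  # population standard deviation
print("variance:", statistics.pvariance(x))         # population variance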
Vectors You're given the two 3-dimensional vectors a and b below. | a = [1, 3, 5]
b = [2, 9, 13] | _____no_output_____ | MIT | lecture-04/lab.ipynb | LuxTheDude/modern-ai-course |
> Compute 1. $a + b$ 2. $2a-3b$ 3. $ab$ - the inner product | # Your code here
first = list(map(lambda t: t[0]+t[1], list(zip(a,b))))
print(first)
second = list(map(lambda t: t[0] - t[1], list(zip(list(map(lambda x: x*2, a)), list(map(lambda x: x*3, b))))))
print(second)
third = sum(list(map(lambda t: t[0] * t[1], list(zip(a,b)))))
print(third) | [3, 12, 18]
[-4, -21, -29]
94
| MIT | lecture-04/lab.ipynb | LuxTheDude/modern-ai-course |
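The same three results can be cross-checked with NumPy (a sketch; NumPy is not required by the lab):
import numpy as np

a_np, b_np = np.array(a), np.array(b)
print(a_np + b_np)        # [ 3 12 18]
print(2*a_np - 3*b_np)    # [ -4 -21 -29]
print(a_np @ b_np)        # 94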
Gradients Given the function $f(x,y) = 3x^2 + 6y$
> Compute the partial gradients $\frac{df}{dx}$ and $\frac{df}{dy}$ Your answer here
$\frac{df}{dx} = 6x$
$\frac{df}{dy} = 6$ The function above corresponds to the following computational graph  > Denote each arrow with the corresponding partial gradient, e.g. $\frac{df}{dc} = 1$ between $f$ and $c$, and use the generalized chain rule on graphs to compute the gradients $\frac{df}{dc}$, $\frac{df}{db}$, $\frac{df}{da}$, $\frac{df}{dx}$, $\frac{df}{dy}$. Your answer here
$\frac{df}{dc} = 1$
$\frac{dc}{d6} = y$
$\frac{dc}{dy} = 6$
$\frac{df}{db} = 1$
$\frac{db}{d3} = a$
$\frac{db}{da} = 3$
$\frac{da}{dx} = 2x$
----------------------------------
$\frac{df}{da} = \frac{df}{db} * \frac{db}{da} = 1 * 3 = 3$
$\frac{df}{dx} = \frac{df}{db} * \frac{db}{da} * \frac{da}{dx} = 1 * 3 * 2x = 6x$
$\frac{df}{dy} = \frac{df}{dc} * \frac{dc}{dy} = 1 * 6 = 6$ Autodiff This exercise is quite hard. It's OK if you don't finish it, but you should try your best! You are given the following function (pseudo-code): | def parents_grads(node):
"""
returns parents of node and the gradients of node w.r.t each parent
e.g. in the example graph above parents_grads(f) would return: [(b, df/db), (c, df/dc)]
""" | _____no_output_____ | MIT | lecture-04/lab.ipynb | LuxTheDude/modern-ai-course |
> Complete the `backprop` method below to create a recursive algorithm such that calling `backward(node)` computes the gradient of `node` w.r.t. every (upstream - to the left) node in the computational graph. Every node has a `node.grad` attribute that is initialized to `0.0`, it's numerical gradient. The algorithm should modify this property directly, it should not return anything. Assume the gradients from `parents_grads` can be treated like real numbers, so you can e.g. multiply and add them. | def backprop(node, df_dnode):
node.grad += df_dnode
# Your code here
parents = parents_grads(node)
for parent, grad in parents:
        backprop(parent, grad * df_dnode)  # chain rule: multiply the local gradient by the incoming one
def backward(node):
"""
Computes the gradient of every (upstream) node in the computational graph w.r.t. node.
"""
backprop(node, 1.0) # The gradient of a node w.r.t. itself is 1 by definition. | _____no_output_____ | MIT | lecture-04/lab.ipynb | LuxTheDude/modern-ai-course |
Ok, now let's try to actually make it work! We'll define a class `Node` which contains the node value, gradient and parents and their gradients | from typing import Sequence, Tuple
class Node:
def __init__(self, value: float, parents_grads: Sequence[Tuple['Node', float]]):
self.value = value
self.grad = 0.0
self.parents_grads = parents_grads
def __repr__(self):
return "Node(value=%.4f, grad=%.4f)"%(self.value, self.grad) | _____no_output_____ | MIT | lecture-04/lab.ipynb | LuxTheDude/modern-ai-course |
So far no magic. We still haven't defined how we get the `parents_grads`, but we'll get there. Now move the `backprop` and `backward` functions into the class, and modify them so they work with the class. | # Your code here
from typing import Sequence, Tuple
class Node:
def __init__(self, value: float, parents_grads: Sequence[Tuple['Node', float]]):
self.value = value
self.grad = 0.0
self.parents_grads = parents_grads
def __repr__(self):
return "Node(value=%.4f, grad=%.4f)"%(self.value, self.grad)
def backprop(self, df_dnode):
self.grad += df_dnode
# Your code here
for parent, grad in self.parents_grads:
parent.backprop(grad * df_dnode)
def backward(self):
"""
Computes the gradient of every (upstream) node in the computational graph w.r.t. node.
"""
self.backprop(1.0) # The gradient of a node w.r.t. itself is 1 by definition. | _____no_output_____ | MIT | lecture-04/lab.ipynb | LuxTheDude/modern-ai-course |
Now let's create a simple graph: $y = x^2$, and compute it for $x=2$. We'll set the parent_grads directly based on our knowledge that $\frac{dx^2}{dx}=2x$ | x = Node(2.0, [])
y = Node(x.value**2, parents_grads=[(x, 2*x.value)]) | _____no_output_____ | MIT | lecture-04/lab.ipynb | LuxTheDude/modern-ai-course |
And print the two nodes | print("x", x, "y", y) | x Node(value=2.0000, grad=0.0000) y Node(value=4.0000, grad=0.0000)
| MIT | lecture-04/lab.ipynb | LuxTheDude/modern-ai-course |
> Verify that the `y.backward()` call below computes the correct gradients | y.backward()
print("x", x, "y", y) | x Node(value=2.0000, grad=4.0000) y Node(value=4.0000, grad=1.0000)
| MIT | lecture-04/lab.ipynb | LuxTheDude/modern-ai-course |
$\frac{dy}{dx}$ should be 4 and $\frac{dy}{dy}$ should be 1 Ok, so it seems to work, but it's not very easy to use, since you have to define all the `parents_grads` whenever you're creating new nodes. **Here's the trick.** We can make a function `square(node:Node)->Node` which can square any Node. See below | def square(node: Node) -> Node:
return Node(node.value**2, [(node, 2*node.value)]) | _____no_output_____ | MIT | lecture-04/lab.ipynb | LuxTheDude/modern-ai-course |
Let's verify that it works | x = Node(3.0, [])
y = square(x)
print("x", x, "y", y)
y.backward()
print("x", x, "y", y) | x Node(value=3.0000, grad=0.0000) y Node(value=9.0000, grad=0.0000)
x Node(value=3.0000, grad=6.0000) y Node(value=9.0000, grad=1.0000)
| MIT | lecture-04/lab.ipynb | LuxTheDude/modern-ai-course |
Now we're getting somewhere. These calls to square can of course be chained | x = Node(3.0, [])
y = square(x)
z = square(y)
print("x", x, "y", y, "z", z)
z.backward()
print("x", x, "y", y,"z", z) | x Node(value=3.0000, grad=0.0000) y Node(value=9.0000, grad=0.0000) z Node(value=81.0000, grad=0.0000)
x Node(value=3.0000, grad=108.0000) y Node(value=9.0000, grad=18.0000) z Node(value=81.0000, grad=1.0000)
| MIT | lecture-04/lab.ipynb | LuxTheDude/modern-ai-course |
> Compute the $\frac{dz}{dx}$ gradient by hand and verify that it's correct Your answer here
$\frac{dz}{dx} = \frac{dz}{dy} * \frac{dy}{dx} = 2y * 2x = 2 (x^2) * 2x = 2 * 3^2 * 2 * 3 = 108$ Similarly we can create functions like this for all the common operators, plus, minus, multiplication, etc. With enough base operators like this we can create any computation we want, and compute the gradients automatically with `.backward()`> Finish the plus function below and verify that it works | def plus(a: Node, b:Node)->Node:
"""
Computes a+b
"""
# Your code here
return Node(a.value + b.value, [(a, 1), (b, 1)])
x = Node(4.0, [])
y = Node(5.0, [])
z = plus(x, y)
print("x", x, "y", y, "z", z)
z.backward()
print("x", x, "y", y,"z", z) | x Node(value=4.0000, grad=0.0000) y Node(value=5.0000, grad=0.0000) z Node(value=9.0000, grad=0.0000)
x Node(value=4.0000, grad=1.0000) y Node(value=5.0000, grad=1.0000) z Node(value=9.0000, grad=1.0000)
| MIT | lecture-04/lab.ipynb | LuxTheDude/modern-ai-course |
> Finish the multiply function below and verify that it works: | def multiply(a: Node, b:Node)->Node:
"""
Computes a*b
"""
# Your code hre
return Node(a.value*b.value, [(a,b.value),(b,a.value)])
x = Node(4.0, [])
y = Node(5.0, [])
z = multiply(x, y)
print("x", x, "y", y, "z", z)
z.backward()
print("x", x, "y", y,"z", z) | x Node(value=4.0000, grad=0.0000) y Node(value=5.0000, grad=0.0000) z Node(value=20.0000, grad=0.0000)
x Node(value=4.0000, grad=5.0000) y Node(value=5.0000, grad=4.0000) z Node(value=20.0000, grad=1.0000)
| MIT | lecture-04/lab.ipynb | LuxTheDude/modern-ai-course |
We'll stop here, but with just a few more functions we could compute a lot of common computations, and get their gradients automatically! This is super nice, but it's kind of annoying having to write `plus(a,b)`. Wouldn't it be nice if we could just write `a+b`? With python operator overloading we can! If we define the `__add__` method on `Node`, this will be executed instead of the regular plus operation when we add something to a `Node`. > Modify the `Node` class so that it overloads the plus, `__add__(self, other)`, and multiplication, `__mul__(self, other)`, operators, and run the code below to verify that it works. | # Your code here
class Node:
def __init__(self, value: float, parents_grads: Sequence[Tuple['Node', float]]):
self.value = value
self.grad = 0.0
self.parents_grads = parents_grads
def __repr__(self):
return "Node(value=%.4f, grad=%.4f)"%(self.value, self.grad)
def backprop(self, df_dnode):
self.grad += df_dnode
# Your code here
for parent, grad in self.parents_grads:
parent.backprop(grad * df_dnode)
def backward(self):
"""
Computes the gradient of every (upstream) node in the computational graph w.r.t. node.
"""
self.backprop(1.0) # The gradient of a node w.r.t. itself is 1 by definition.
def __add__(self, other):
return Node(self.value + other.value, [(self, 1), (other, 1)])
def __mul__(self, other):
return Node(self.value * other.value, [(self, other.value), (other, self.value)])
a = Node(2.0, [])
b = Node(3.0, [])
c = Node(4.0, [])
d = a*b + c # Behold the magic of operator overloading!
print("a", a, "b", b, "c", c, "d", d)
d.backward()
print("a", a, "b", b, "c", c, "d", d) | a Node(value=2.0000, grad=0.0000) b Node(value=3.0000, grad=0.0000) c Node(value=4.0000, grad=0.0000) d Node(value=10.0000, grad=0.0000)
a Node(value=2.0000, grad=3.0000) b Node(value=3.0000, grad=2.0000) c Node(value=4.0000, grad=1.0000) d Node(value=10.0000, grad=1.0000)
| MIT | lecture-04/lab.ipynb | LuxTheDude/modern-ai-course |
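Following the same pattern, more operators can be attached; a sketch of subtraction (not required by the lab), using d(a-b)/da = 1 and d(a-b)/db = -1:
# Sketch: one more operator, attached to the Node class defined above.
def node_sub(self, other):
    return Node(self.value - other.value, [(self, 1.0), (other, -1.0)])

Node.__sub__ = node_sub

e = Node(5.0, []) - Node(2.0, [])
e.backward()
print(e)   # Node(value=3.0000, grad=1.0000)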
To Do: Download a dataset from Domain. Convert all string columns to unique integers ---> could use hashes | domain_node = sy.login(email="[email protected]", password="changethis", port=8081)
domain_node.store.pandas
import pandas as pd
canada = pd.read_csv("../../trade_demo/datasets/ca - feb 2021.csv")
canada.head()
import hashlib
hashlib.algorithms_available
test_string = "February 2021"
hashlib.md5(test_string.encode("utf-8"))
int(hashlib.sha256(test_string.encode("utf-8")).hexdigest(), 16) % 10**8
def convert_string(s: str, to_int: bool = True, digits: int = 15):
    """Maps a string to a SHA-256 hash, returned either as a hex digest or as an int"""
    if to_int:
        return int(hashlib.sha256(s.encode("utf-8")).hexdigest(), 16) % 10**digits
    else:
        return hashlib.sha256(s.encode("utf-8")).hexdigest()
convert_string("Canada", to_int=False)
convert_string("Canada", to_int=True, digits=10)
convert_string("Canada", to_int=True, digits=260)
canada.columns
#domain_node.load_dataset(canada)
canada.shape
domain_node.datasets.pandas
canada['Trade Flow']
domain_node.store.pandas | _____no_output_____ | Apache-2.0 | notebooks/Experimental/Ishan/ADP Demo/Old Versions/DataFrame to NumPy.ipynb | Noob-can-Compile/PySyft |
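A minimal sketch of the to-do above, hashing every string (object-dtype) column of the `canada` frame into integers; the `digits=10` choice is an assumption, not something fixed in the notebook.
# Sketch only: relies on convert_string as defined above; digits=10 is an arbitrary choice.
canada_hashed = canada.copy()
for col in canada_hashed.select_dtypes(include="object").columns:
    canada_hashed[col] = canada_hashed[col].astype(str).apply(
        lambda s: convert_string(s, to_int=True, digits=10)
    )
canada_hashed.head()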
```{note}This feature requires MPI, and may not be runnable on Colab.``` Distributed Variables At times when you need to perform a computation using large input arrays, you may want to perform that computation in multiple processes, where each process operates on some subset of the input values. This may be done purely for performance reasons, or it may be necessary because the entire input will not fit in the memory of a single machine. In any case, this can be accomplished in OpenMDAO by declaring those inputs and outputs as distributed. By definition, a variable is distributed if each process contains only a part of the whole variable. Conversely, when a variable is not distributed (i.e., serial), each process contains a copy of the entire variable. A component that has at least one distributed variable can also be called a distributed component. We’ve already seen that by using [src_indices](connect-with-src-indices), we can connect an input to only a subset of an output variable. By giving different values for src_indices in each MPI process, we can distribute computations on a distributed output across the processes. All of the scenarios that involve connecting distributed and serial variables are detailed in [Connections involving distributed variables](../working_with_groups/dist_serial.ipynb). Example: Simple Component with Distributed Input and Output The following example shows how to create a simple component, *SimpleDistrib*, that takes a distributed variable as an input and computes a distributed output. The calculation is divided across the available processes, but the details of that division are not contained in the component. In fact, the input is sized based on its connected source using the "shape_by_conn" argument. | %%px
import numpy as np
import openmdao.api as om
class SimpleDistrib(om.ExplicitComponent):
def setup(self):
# Distributed Input
self.add_input('in_dist', shape_by_conn=True, distributed=True)
# Distributed Output
self.add_output('out_dist', copy_shape='in_dist', distributed=True)
def compute(self, inputs, outputs):
x = inputs['in_dist']
# "Computationally Intensive" operation that we wish to parallelize.
f_x = x**2 - 2.0*x + 4.0
outputs['out_dist'] = f_x | _____no_output_____ | Apache-2.0 | openmdao/docs/openmdao_book/features/core_features/working_with_components/distributed_components.ipynb | markleader/OpenMDAO |
In the next part of the example, we take the `SimpleDistrib` component, place it into a model, and run it. Suppose the vector of data we want to process has 7 elements. We have 4 processors available for computation, so if we distribute them as evenly as we can, 3 procs can handle 2 elements each, and the 4th processor can pick up the last one. OpenMDAO's utilities include the `evenly_distrib_idxs` function, which computes the sizes and offsets for all ranks. The sizes are used to determine how much of the array to allocate on any specific rank. The offsets are used to figure out where the local portion of the array starts, and in this example, are used to set the initial value properly. In this case, the initial value for the full distributed input "in_dist" is a vector of 7 values between 3.0 and 9.0, and each processor has a 1 or 2 element piece of it. | %%px
from openmdao.utils.array_utils import evenly_distrib_idxs
from openmdao.utils.mpi import MPI
size = 7
if MPI:
comm = MPI.COMM_WORLD
rank = comm.rank
sizes, offsets = evenly_distrib_idxs(comm.size, size)
else:
# When running in serial, the entire variable is on rank 0.
rank = 0
sizes = {rank : size}
offsets = {rank : 0}
prob = om.Problem()
model = prob.model
# Create a distributed source for the distributed input.
ivc = om.IndepVarComp()
ivc.add_output('x_dist', np.zeros(sizes[rank]), distributed=True)
model.add_subsystem("indep", ivc)
model.add_subsystem("D1", SimpleDistrib())
model.connect('indep.x_dist', 'D1.in_dist')
prob.setup()
# Set initial values of distributed variable.
x_dist_init = 3.0 + np.arange(size)[offsets[rank]:offsets[rank] + sizes[rank]]
prob.set_val('indep.x_dist', x_dist_init)
prob.run_model()
# Values on each rank.
for var in ['indep.x_dist', 'D1.out_dist']:
print(var, prob.get_val(var))
# Full gathered values.
for var in ['indep.x_dist', 'D1.out_dist']:
print(var, prob.get_val(var, get_remote=True))
print('')
%%px
from openmdao.utils.assert_utils import assert_near_equal
assert_near_equal(prob.get_val(var, get_remote=True), np.array([7., 12., 19., 28., 39., 52., 67.])) | _____no_output_____ | Apache-2.0 | openmdao/docs/openmdao_book/features/core_features/working_with_components/distributed_components.ipynb | markleader/OpenMDAO |
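As a standalone illustration of the split described above (a sketch, separate from the model): for 7 elements on 4 processors, `evenly_distrib_idxs` should hand three ranks 2 elements each and give the last rank 1.
from openmdao.utils.array_utils import evenly_distrib_idxs

sizes, offsets = evenly_distrib_idxs(4, 7)
print(sizes)    # expected: [2 2 2 1]
print(offsets)  # expected: [0 2 4 6]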
Note that we created a connection source 'x_dist' that passes its value to 'D1.in_dist'. OpenMDAO requires a source for non-constant inputs, and usually creates one automatically as an output of a component referred to as an 'Auto-IVC'. However, the automatic creation is not supported for distributed variables. We must manually create an `IndepVarComp` and connect it to our input. When using distributed variables, OpenMDAO can't always size the component inputs based on the shape of the connected source. In this example, the component determines its own split using `evenly_distrib_idxs`. This requires that the component know the full vector size, which is passed in via the option 'vec_size'. | %%px
import numpy as np
import openmdao.api as om
from openmdao.utils.array_utils import evenly_distrib_idxs
from openmdao.utils.mpi import MPI
class SimpleDistrib(om.ExplicitComponent):
def initialize(self):
self.options.declare('vec_size', types=int, default=1,
desc="Total size of vector.")
def setup(self):
comm = self.comm
rank = comm.rank
size = self.options['vec_size']
sizes, _ = evenly_distrib_idxs(comm.size, size)
mysize = sizes[rank]
# Distributed Input
self.add_input('in_dist', np.ones(mysize, float), distributed=True)
# Distributed Output
self.add_output('out_dist', np.ones(mysize, float), distributed=True)
def compute(self, inputs, outputs):
x = inputs['in_dist']
# "Computationally Intensive" operation that we wish to parallelize.
f_x = x**2 - 2.0*x + 4.0
outputs['out_dist'] = f_x
size = 7
if MPI:
comm = MPI.COMM_WORLD
rank = comm.rank
sizes, offsets = evenly_distrib_idxs(comm.size, size)
else:
# When running in serial, the entire variable is on rank 0.
rank = 0
sizes = {rank : size}
offsets = {rank : 0}
prob = om.Problem()
model = prob.model
# Create a distributed source for the distributed input.
ivc = om.IndepVarComp()
ivc.add_output('x_dist', np.zeros(sizes[rank]), distributed=True)
model.add_subsystem("indep", ivc)
model.add_subsystem("D1", SimpleDistrib(vec_size=size))
model.connect('indep.x_dist', 'D1.in_dist')
prob.setup()
# Set initial values of distributed variable.
x_dist_init = 3.0 + np.arange(size)[offsets[rank]:offsets[rank] + sizes[rank]]
prob.set_val('indep.x_dist', x_dist_init)
prob.run_model()
# Values on each rank.
for var in ['indep.x_dist', 'D1.out_dist']:
print(var, prob.get_val(var))
# Full gathered values.
for var in ['indep.x_dist', 'D1.out_dist']:
print(var, prob.get_val(var, get_remote=True))
print('')
%%px
from openmdao.utils.assert_utils import assert_near_equal
assert_near_equal(prob.get_val(var, get_remote=True), np.array([7., 12., 19., 28., 39., 52., 67.])) | _____no_output_____ | Apache-2.0 | openmdao/docs/openmdao_book/features/core_features/working_with_components/distributed_components.ipynb | markleader/OpenMDAO |
Example: Distributed I/O and a Serial Input OpenMDAO supports both serial and distributed I/O on the same component, so in this example, we expand the problem to include a serial input. In this case, the serial input also has a vector width of 7, but those values will be the same on each processor. This serial input is included in the computation by taking the vector sum and adding it to the distributed output. | %%px
import numpy as np
import openmdao.api as om
from openmdao.utils.array_utils import evenly_distrib_idxs
from openmdao.utils.mpi import MPI
class MixedDistrib1(om.ExplicitComponent):
def setup(self):
# Distributed Input
self.add_input('in_dist', shape_by_conn=True, distributed=True)
# Serial Input
self.add_input('in_serial', shape_by_conn=True)
# Distributed Output
self.add_output('out_dist', copy_shape='in_dist', distributed=True)
def compute(self, inputs, outputs):
x = inputs['in_dist']
y = inputs['in_serial']
# "Computationally Intensive" operation that we wish to parallelize.
f_x = x**2 - 2.0*x + 4.0
# This operation is repeated on all procs.
f_y = y ** 0.5
outputs['out_dist'] = f_x + np.sum(f_y)
size = 7
if MPI:
comm = MPI.COMM_WORLD
rank = comm.rank
sizes, offsets = evenly_distrib_idxs(comm.size, size)
else:
# When running in serial, the entire variable is on rank 0.
rank = 0
sizes = {rank : size}
offsets = {rank : 0}
prob = om.Problem()
model = prob.model
# Create a distributed source for the distributed input.
ivc = om.IndepVarComp()
ivc.add_output('x_dist', np.zeros(sizes[rank]), distributed=True)
ivc.add_output('x_serial', np.zeros(size))
model.add_subsystem("indep", ivc)
model.add_subsystem("D1", MixedDistrib1())
model.connect('indep.x_dist', 'D1.in_dist')
model.connect('indep.x_serial', 'D1.in_serial')
prob.setup()
# Set initial values of distributed variable.
x_dist_init = 3.0 + np.arange(size)[offsets[rank]:offsets[rank] + sizes[rank]]
prob.set_val('indep.x_dist', x_dist_init)
# Set initial values of serial variable.
x_serial_init = 1.0 + 2.0*np.arange(size)
prob.set_val('indep.x_serial', x_serial_init)
prob.run_model()
# Values on each rank.
for var in ['indep.x_dist', 'indep.x_serial', 'D1.out_dist']:
print(var, prob.get_val(var))
# Full gathered values.
for var in ['indep.x_dist', 'indep.x_serial', 'D1.out_dist']:
print(var, prob.get_val(var, get_remote=True))
print('')
%%px
assert_near_equal(prob.get_val(var, get_remote=True), np.array([24.53604616, 29.53604616, 36.53604616, 45.53604616, 56.53604616, 69.53604616, 84.53604616]), 1e-6) | _____no_output_____ | Apache-2.0 | openmdao/docs/openmdao_book/features/core_features/working_with_components/distributed_components.ipynb | markleader/OpenMDAO |
Example: Distributed I/O and a Serial Output You can also create a component with a serial output and distributed outputs and inputs. This situation tends to be trickier and usually requires you to perform some MPI operations in your component's `compute` method. If the serial output is only a function of the serial inputs, then you can handle that variable just like you do on any other component. However, this example extends the previous component to include a serial output that is a function of both the serial and distributed inputs. In this case, it's a function of the sum of the square root of each element in the full distributed vector. Since the data is not all on any local processor, we use an MPI operation, in this case `Allreduce`, to make a summation across the distributed vector, and gather the answer back to each processor. The MPI operation and your implementation will vary, but consider this to be a general example. | %%px
import numpy as np
import openmdao.api as om
from openmdao.utils.array_utils import evenly_distrib_idxs
from openmdao.utils.mpi import MPI
class MixedDistrib2(om.ExplicitComponent):
def setup(self):
# Distributed Input
self.add_input('in_dist', shape_by_conn=True, distributed=True)
# Serial Input
self.add_input('in_serial', shape_by_conn=True)
# Distributed Output
self.add_output('out_dist', copy_shape='in_dist', distributed=True)
# Serial Output
self.add_output('out_serial', copy_shape='in_serial')
def compute(self, inputs, outputs):
x = inputs['in_dist']
y = inputs['in_serial']
# "Computationally Intensive" operation that we wish to parallelize.
f_x = x**2 - 2.0*x + 4.0
# These operations are repeated on all procs.
f_y = y ** 0.5
g_y = y**2 + 3.0*y - 5.0
# Compute square root of our portion of the distributed input.
g_x = x ** 0.5
# Distributed output
outputs['out_dist'] = f_x + np.sum(f_y)
# Serial output
if MPI and comm.size > 1:
# We need to gather the summed values to compute the total sum over all procs.
local_sum = np.array(np.sum(g_x))
total_sum = local_sum.copy()
self.comm.Allreduce(local_sum, total_sum, op=MPI.SUM)
outputs['out_serial'] = g_y + total_sum
else:
# Recommended to make sure your code can run in serial too, for testing.
outputs['out_serial'] = g_y + np.sum(g_x)
size = 7
if MPI:
comm = MPI.COMM_WORLD
rank = comm.rank
sizes, offsets = evenly_distrib_idxs(comm.size, size)
else:
# When running in serial, the entire variable is on rank 0.
rank = 0
sizes = {rank : size}
offsets = {rank : 0}
prob = om.Problem()
model = prob.model
# Create a distributed source for the distributed input.
ivc = om.IndepVarComp()
ivc.add_output('x_dist', np.zeros(sizes[rank]), distributed=True)
ivc.add_output('x_serial', np.zeros(size))
model.add_subsystem("indep", ivc)
model.add_subsystem("D1", MixedDistrib2())
model.connect('indep.x_dist', 'D1.in_dist')
model.connect('indep.x_serial', 'D1.in_serial')
prob.setup()
# Set initial values of distributed variable.
x_dist_init = 3.0 + np.arange(size)[offsets[rank]:offsets[rank] + sizes[rank]]
prob.set_val('indep.x_dist', x_dist_init)
# Set initial values of serial variable.
x_serial_init = 1.0 + 2.0*np.arange(size)
prob.set_val('indep.x_serial', x_serial_init)
prob.run_model()
# Values on each rank.
for var in ['indep.x_dist', 'indep.x_serial', 'D1.out_dist', 'D1.out_serial']:
print(var, prob.get_val(var))
# Full gathered values.
for var in ['indep.x_dist', 'indep.x_serial', 'D1.out_dist', 'D1.out_serial']:
print(var, prob.get_val(var, get_remote=True))
print('')
%%px
assert_near_equal(prob.get_val(var, get_remote=True), np.array([15.89178696, 29.89178696, 51.89178696, 81.89178696, 119.89178696, 165.89178696, 219.89178696]), 1e-6) | _____no_output_____ | Apache-2.0 | openmdao/docs/openmdao_book/features/core_features/working_with_components/distributed_components.ipynb | markleader/OpenMDAO |
```{note}In this example, we introduce a new component called an [IndepVarComp](indepvarcomp.ipynb). If you used OpenMDAO prior to version 3.2, then you are familiar with this component. It is used to define an independent variable. You usually do not have to define these because OpenMDAO defines and uses them automatically for all unconnected inputs in your model. This automatically-created `IndepVarComp` is called an Auto-IVC. However, when we define a distributed input, we often use the “src_indices” attribute to determine the allocation of that input to the processors that the component sees. For some sets of these indices, it isn’t possible to easily determine the full size of the corresponding independent variable, and the *IndepVarComp* cannot be created automatically. So, for unconnected inputs on a distributed component, you must manually create one, as we did in this example.``` Derivatives with Distributed Variables In the following examples, we show how to add analytic derivatives to the distributed examples given above. In most cases it is straightforward, but when you have a serial output and a distributed input, the [matrix-free](matrix-free-api) format is required. Derivatives: Distributed I/O and a Serial Input In this example, we have a distributed input, a distributed output, and a serial input. The derivative of 'out_dist' with respect to 'in_dist' has a diagonal Jacobian, so we use a sparse declaration and each processor gives `declare_partials` the local number of rows and columns. The derivatives are verified against complex step using `check_totals`, since our component is complex-safe. | %%px
import numpy as np
import openmdao.api as om
from openmdao.utils.array_utils import evenly_distrib_idxs
from openmdao.utils.mpi import MPI
class MixedDistrib1(om.ExplicitComponent):
def setup(self):
# Distributed Input
self.add_input('in_dist', shape_by_conn=True, distributed=True)
# Serial Input
self.add_input('in_serial', shape_by_conn=True)
# Distributed Output
self.add_output('out_dist', copy_shape='in_dist', distributed=True)
def setup_partials(self):
meta = self.get_io_metadata(metadata_keys=['shape'])
local_size = meta['in_dist']['shape'][0]
row_col_d = np.arange(local_size)
self.declare_partials('out_dist', 'in_dist', rows=row_col_d, cols=row_col_d)
self.declare_partials('out_dist', 'in_serial')
def compute(self, inputs, outputs):
x = inputs['in_dist']
y = inputs['in_serial']
# "Computationally Intensive" operation that we wish to parallelize.
f_x = x**2 - 2.0*x + 4.0
# This operation is repeated on all procs.
f_y = y ** 0.5
outputs['out_dist'] = f_x + np.sum(f_y)
def compute_partials(self, inputs, partials):
x = inputs['in_dist']
y = inputs['in_serial']
size = len(y)
local_size = len(x)
partials['out_dist', 'in_dist'] = 2.0 * x - 2.0
df_dy = 0.5 / y ** 0.5
partials['out_dist', 'in_serial'] = np.tile(df_dy, local_size).reshape((local_size, size))
size = 7
if MPI:
comm = MPI.COMM_WORLD
rank = comm.rank
sizes, offsets = evenly_distrib_idxs(comm.size, size)
else:
# When running in serial, the entire variable is on rank 0.
rank = 0
sizes = {rank : size}
offsets = {rank : 0}
prob = om.Problem()
model = prob.model
# Create a distributed source for the distributed input.
ivc = om.IndepVarComp()
ivc.add_output('x_dist', np.zeros(sizes[rank]), distributed=True)
ivc.add_output('x_serial', np.zeros(size))
model.add_subsystem("indep", ivc)
model.add_subsystem("D1", MixedDistrib1())
model.connect('indep.x_dist', 'D1.in_dist')
model.connect('indep.x_serial', 'D1.in_serial')
model.add_design_var('indep.x_serial')
model.add_design_var('indep.x_dist')
model.add_objective('D1.out_dist')
prob.setup(force_alloc_complex=True)
# Set initial values of distributed variable.
x_dist_init = 3.0 + np.arange(size)[offsets[rank]:offsets[rank] + sizes[rank]]
prob.set_val('indep.x_dist', x_dist_init)
# Set initial values of serial variable.
x_serial_init = 1.0 + 2.0*np.arange(size)
prob.set_val('indep.x_serial', x_serial_init)
prob.run_model()
if rank > 0:
prob.check_totals(method='cs', out_stream=None)
else:
prob.check_totals(method='cs')
%%px
totals = prob.check_totals(method='cs', out_stream=None)
for key, val in totals.items():
assert_near_equal(val['rel error'][0], 0.0, 1e-6) | _____no_output_____ | Apache-2.0 | openmdao/docs/openmdao_book/features/core_features/working_with_components/distributed_components.ipynb | markleader/OpenMDAO |
Derivatives: Distributed I/O and a Serial Output If you have a component with distributed inputs and a serial output, then the standard `compute_partials` API will not work for specifying the derivatives. You will need to use the matrix-free API with `compute_jacvec_product`, which is described in the feature document for [ExplicitComponent](explicit_component.ipynb). Computing the matrix-vector product for the derivative of the serial output with respect to a distributed input will require you to use MPI operations to gather the required parts of the Jacobian to all processors. The following example shows how to implement derivatives on the earlier `MixedDistrib2` component. | %%px
import numpy as np
import openmdao.api as om
from openmdao.utils.array_utils import evenly_distrib_idxs
from openmdao.utils.mpi import MPI
class MixedDistrib2(om.ExplicitComponent):
def setup(self):
# Distributed Input
self.add_input('in_dist', shape_by_conn=True, distributed=True)
# Serial Input
self.add_input('in_serial', shape_by_conn=True)
# Distributed Output
self.add_output('out_dist', copy_shape='in_dist', distributed=True)
# Serial Output
self.add_output('out_serial', copy_shape='in_serial')
def compute(self, inputs, outputs):
x = inputs['in_dist']
y = inputs['in_serial']
# "Computationally Intensive" operation that we wish to parallelize.
f_x = x**2 - 2.0*x + 4.0
# These operations are repeated on all procs.
f_y = y ** 0.5
g_y = y**2 + 3.0*y - 5.0
# Compute square root of our portion of the distributed input.
g_x = x ** 0.5
# Distributed output
outputs['out_dist'] = f_x + np.sum(f_y)
# Serial output
if MPI and comm.size > 1:
# We need to gather the summed values to compute the total sum over all procs.
local_sum = np.array(np.sum(g_x))
total_sum = local_sum.copy()
self.comm.Allreduce(local_sum, total_sum, op=MPI.SUM)
outputs['out_serial'] = g_y + total_sum
else:
# Recommended to make sure your code can run in serial too, for testing.
outputs['out_serial'] = g_y + np.sum(g_x)
def compute_jacvec_product(self, inputs, d_inputs, d_outputs, mode):
x = inputs['in_dist']
y = inputs['in_serial']
df_dx = 2.0 * x - 2.0
df_dy = 0.5 / y ** 0.5
dg_dx = 0.5 / x ** 0.5
dg_dy = 2.0 * y + 3.0
local_size = len(x)
size = len(y)
if mode == 'fwd':
if 'out_dist' in d_outputs:
if 'in_dist' in d_inputs:
d_outputs['out_dist'] += df_dx * d_inputs['in_dist']
if 'in_serial' in d_inputs:
d_outputs['out_dist'] += np.tile(df_dy, local_size).reshape((local_size, size)).dot(d_inputs['in_serial'])
if 'out_serial' in d_outputs:
if 'in_dist' in d_inputs:
if MPI and comm.size > 1:
deriv = np.tile(dg_dx, size).reshape((size, local_size)).dot(d_inputs['in_dist'])
deriv_sum = np.zeros(deriv.size)
self.comm.Allreduce(deriv, deriv_sum, op=MPI.SUM)
d_outputs['out_serial'] += deriv_sum
else:
# Recommended to make sure your code can run in serial too, for testing.
d_outputs['out_serial'] += np.tile(dg_dx, local_size).reshape((local_size, size)).dot(d_inputs['in_dist'])
if 'in_serial' in d_inputs:
d_outputs['out_serial'] += dg_dy * d_inputs['in_serial']
else:
if 'out_dist' in d_outputs:
if 'in_dist' in d_inputs:
d_inputs['in_dist'] += df_dx * d_outputs['out_dist']
if 'in_serial' in d_inputs:
d_inputs['in_serial'] += np.tile(df_dy, local_size).reshape((local_size, size)).dot(d_outputs['out_dist'])
if 'out_serial' in d_outputs:
if 'out_serial' in d_outputs:
if 'in_dist' in d_inputs:
if MPI and comm.size > 1:
deriv = np.tile(dg_dx, size).reshape((size, local_size)).dot(d_outputs['out_serial'])
deriv_sum = np.zeros(deriv.size)
self.comm.Allreduce(deriv, deriv_sum, op=MPI.SUM)
d_inputs['in_dist'] += deriv_sum
else:
# Recommended to make sure your code can run in serial too, for testing.
d_inputs['in_dist'] += np.tile(dg_dx, local_size).reshape((local_size, size)).dot(d_outputs['out_serial'])
if 'in_serial' in d_inputs:
d_inputs['in_serial'] += dg_dy * d_outputs['out_serial']
size = 7
if MPI:
comm = MPI.COMM_WORLD
rank = comm.rank
sizes, offsets = evenly_distrib_idxs(comm.size, size)
else:
# When running in serial, the entire variable is on rank 0.
rank = 0
sizes = {rank : size}
offsets = {rank : 0}
prob = om.Problem()
model = prob.model
# Create a distributed source for the distributed input.
ivc = om.IndepVarComp()
ivc.add_output('x_dist', np.zeros(sizes[rank]), distributed=True)
ivc.add_output('x_serial', np.zeros(size))
model.add_subsystem("indep", ivc)
model.add_subsystem("D1", MixedDistrib2())
model.connect('indep.x_dist', 'D1.in_dist')
model.connect('indep.x_serial', 'D1.in_serial')
model.add_design_var('indep.x_serial')
model.add_design_var('indep.x_dist')
model.add_constraint('D1.out_dist', lower=0.0)
model.add_constraint('D1.out_serial', lower=0.0)
prob.setup(force_alloc_complex=True)
# Set initial values of distributed variable.
x_dist_init = 3.0 + np.arange(size)[offsets[rank]:offsets[rank] + sizes[rank]]
prob.set_val('indep.x_dist', x_dist_init)
# Set initial values of serial variable.
x_serial_init = 1.0 + 2.0*np.arange(size)
prob.set_val('indep.x_serial', x_serial_init)
prob.run_model()
if rank > 0:
prob.check_totals(method='cs', out_stream=None)
else:
prob.check_totals(method='cs')
%%px
totals = prob.check_totals(method='cs', out_stream=None)
for key, val in totals.items():
assert_near_equal(val['rel error'][0], 0.0, 1e-6) | _____no_output_____ | Apache-2.0 | openmdao/docs/openmdao_book/features/core_features/working_with_components/distributed_components.ipynb | markleader/OpenMDAO |
Lambda School Data Science*Unit 2, Sprint 1, Module 4*--- Logistic Regression- do train/validate/test split- begin with baselines for classification- express and explain the intuition and interpretation of Logistic Regression- use sklearn.linear_model.LogisticRegression to fit and interpret Logistic Regression models. Logistic regression is the baseline for classification models, as well as a handy way to predict probabilities (since those too live in the unit interval). While relatively simple, it is also the foundation for more sophisticated classification techniques such as neural networks (many of which can effectively be thought of as networks of logistic models). Setup: Run the code cell below. You can work locally (follow the [local setup instructions](https://lambdaschool.github.io/ds/unit2/local/)) or on Colab. Libraries:- category_encoders- numpy- pandas- scikit-learn | %%capture
import sys
# If you're on Colab:
if 'google.colab' in sys.modules:
DATA_PATH = 'https://raw.githubusercontent.com/LambdaSchool/DS-Unit-2-Linear-Models/master/data/'
!pip install category_encoders==2.*
# If you're working locally:
else:
DATA_PATH = '../data/' | _____no_output_____ | MIT | module4-logistic-regression/LS_DS_214.ipynb | cedro-gasque/DS-Unit-2-Linear-Models |
Do train/validate/test split Overview Predict Titanic survival 🚢Kaggle is a platform for machine learning competitions. [Kaggle has used the Titanic dataset](https://www.kaggle.com/c/titanic/data) for their most popular "getting started" competition. Kaggle splits the data into train and test sets for participants. Let's load both: | import pandas as pd
train = pd.read_csv(DATA_PATH+'titanic/train.csv')
test = pd.read_csv(DATA_PATH+'titanic/test.csv') | _____no_output_____ | MIT | module4-logistic-regression/LS_DS_214.ipynb | cedro-gasque/DS-Unit-2-Linear-Models |
Notice that the train set has one more column than the test set: | train.shape, test.shape | _____no_output_____ | MIT | module4-logistic-regression/LS_DS_214.ipynb | cedro-gasque/DS-Unit-2-Linear-Models |
Which column is in train but not test? The target! | set(train.columns) - set(test.columns) | _____no_output_____ | MIT | module4-logistic-regression/LS_DS_214.ipynb | cedro-gasque/DS-Unit-2-Linear-Models |
Why doesn't Kaggle give you the target for the test set? Rachel Thomas, [How (and why) to create a good validation set](https://www.fast.ai/2017/11/13/validation-sets/)> One great thing about Kaggle competitions is that they force you to think about validation sets more rigorously (in order to do well). For those who are new to Kaggle, it is a platform that hosts machine learning competitions. Kaggle typically breaks the data into two sets you can download:>> 1. a **training set**, which includes the _independent variables,_ as well as the _dependent variable_ (what you are trying to predict).>> 2. a **test set**, which just has the _independent variables._ You will make predictions for the test set, which you can submit to Kaggle and get back a score of how well you did.>> This is the basic idea needed to get started with machine learning, but to do well, there is a bit more complexity to understand. **You will want to create your own training and validation sets (by splitting the Kaggle “training” data). You will just use your smaller training set (a subset of Kaggle’s training data) for building your model, and you can evaluate it on your validation set (also a subset of Kaggle’s training data) before you submit to Kaggle.**>> The most important reason for this is that Kaggle has split the test data into two sets: for the public and private leaderboards. The score you see on the public leaderboard is just for a subset of your predictions (and you don’t know which subset!). How your predictions fare on the private leaderboard won’t be revealed until the end of the competition. The reason this is important is that you could end up overfitting to the public leaderboard and you wouldn’t realize it until the very end when you did poorly on the private leaderboard. Using a good validation set can prevent this. You can check if your validation set is any good by seeing if your model has similar scores on it to compared with on the Kaggle test set. ...>> Understanding these distinctions is not just useful for Kaggle. In any predictive machine learning project, you want your model to be able to perform well on new data. 2-way train/test split is not enough Hastie, Tibshirani, and Friedman, [The Elements of Statistical Learning](http://statweb.stanford.edu/~tibs/ElemStatLearn/), Chapter 7: Model Assessment and Selection> If we are in a data-rich situation, the best approach is to randomly divide the dataset into three parts: a training set, a validation set, and a test set. The training set is used to fit the models; the validation set is used to estimate prediction error for model selection; the test set is used for assessment of the generalization error of the final chosen model. Ideally, the test set should be kept in a "vault," and be brought out only at the end of the data analysis. Suppose instead that we use the test-set repeatedly, choosing the model with the smallest test-set error. Then the test set error of the final chosen model will underestimate the true test error, sometimes substantially. Andreas Mueller and Sarah Guido, [Introduction to Machine Learning with Python](https://books.google.com/books?id=1-4lDQAAQBAJ&pg=PA270)> The distinction between the training set, validation set, and test set is fundamentally important to applying machine learning methods in practice. Any choices made based on the test set accuracy "leak" information from the test set into the model. Therefore, it is important to keep a separate test set, which is only used for the final evaluation. 
It is good practice to do all exploratory analysis and model selection using the combination of a training and a validation set, and reserve the test set for a final evaluation - this is even true for exploratory visualization. Strictly speaking, evaluating more than one model on the test set and choosing the better of the two will result in an overly optimistic estimate of how accurate the model is. Hadley Wickham, [R for Data Science](https://r4ds.had.co.nz/model-intro.htmlhypothesis-generation-vs.hypothesis-confirmation)> There is a pair of ideas that you must understand in order to do inference correctly:>> 1. Each observation can either be used for exploration or confirmation, not both.>> 2. You can use an observation as many times as you like for exploration, but you can only use it once for confirmation. As soon as you use an observation twice, you’ve switched from confirmation to exploration.>> This is necessary because to confirm a hypothesis you must use data independent of the data that you used to generate the hypothesis. Otherwise you will be over optimistic. There is absolutely nothing wrong with exploration, but you should never sell an exploratory analysis as a confirmatory analysis because it is fundamentally misleading.>> If you are serious about doing an confirmatory analysis, one approach is to split your data into three pieces before you begin the analysis. Sebastian Raschka, [Model Evaluation](https://sebastianraschka.com/blog/2018/model-evaluation-selection-part4.html)> Since “a picture is worth a thousand words,” I want to conclude with a figure (shown below) that summarizes my personal recommendations ...Usually, we want to do **"Model selection (hyperparameter optimization) _and_ performance estimation."** (The green box in the diagram.)Therefore, we usually do **"3-way holdout method (train/validation/test split)"** or **"cross-validation with independent test set."** What's the difference between Training, Validation, and Testing sets? Brandon Rohrer, [Training, Validation, and Testing Data Sets](https://end-to-end-machine-learning.teachable.com/blog/146320/training-validation-testing-data-sets)> The validation set is for adjusting a model's hyperparameters. The testing data set is the ultimate judge of model performance.>> Testing data is what you hold out until very last. You only run your model on it once. You don’t make any changes or adjustments to your model after that. ... Follow Along> You will want to create your own training and validation sets (by splitting the Kaggle “training” data).Do this, using the [sklearn.model_selection.train_test_split](https://scikit-learn.org/stable/modules/generated/sklearn.model_selection.train_test_split.html) function: | from sklearn.model_selection import train_test_split
train.shape, test.shape
train, val = train_test_split(train, random_state=28)
train.shape, val.shape, test.shape | _____no_output_____ | MIT | module4-logistic-regression/LS_DS_214.ipynb | cedro-gasque/DS-Unit-2-Linear-Models |
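The same function can be applied twice to carve out a held-back test set as well; a sketch of the 3-way split discussed in the Challenge below (the split proportions here are an assumption, not prescribed by the lesson):
full = pd.read_csv(DATA_PATH + 'titanic/train.csv')
train_set, test_set = train_test_split(full, test_size=0.20, random_state=28)
train_set, val_set = train_test_split(train_set, test_size=0.25, random_state=28)
train_set.shape, val_set.shape, test_set.shape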
Challenge For your assignment, you'll do a 3-way train/validate/test split.Then next sprint, you'll begin to participate in a private Kaggle challenge, just for your cohort! You will be provided with data split into 2 sets: training and test. You will create your own training and validation sets, by splitting the Kaggle "training" data, so you'll end up with 3 sets total. Begin with baselines for classification Overview We'll begin with the **majority class baseline.**[Will Koehrsen](https://twitter.com/koehrsen_will/status/1088863527778111488)> A baseline for classification can be the most common class in the training dataset.[*Data Science for Business*](https://books.google.com/books?id=4ZctAAAAQBAJ&pg=PT276), Chapter 7.3: Evaluation, Baseline Performance, and Implications for Investments in Data> For classification tasks, one good baseline is the _majority classifier,_ a naive classifier that always chooses the majority class of the training dataset (see Note: Base rate in Holdout Data and Fitting Graphs). This may seem like advice so obvious it can be passed over quickly, but it is worth spending an extra moment here. There are many cases where smart, analytical people have been tripped up in skipping over this basic comparison. For example, an analyst may see a classification accuracy of 94% from her classifier and conclude that it is doing fairly well—when in fact only 6% of the instances are positive. So, the simple majority prediction classifier also would have an accuracy of 94%. Follow Along Determine majority class | target = 'Survived'
y_train = train[target]
y_train.value_counts() | _____no_output_____ | MIT | module4-logistic-regression/LS_DS_214.ipynb | cedro-gasque/DS-Unit-2-Linear-Models |
What if we guessed the majority class for every prediction? | y_pred = y_train.apply(lambda x : 0) | _____no_output_____ | MIT | module4-logistic-regression/LS_DS_214.ipynb | cedro-gasque/DS-Unit-2-Linear-Models |
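To score this baseline (a sketch; it is equivalent to the majority-class proportion discussed in the quotes above):
from sklearn.metrics import accuracy_score

print(accuracy_score(y_train, y_pred))        # majority-class baseline accuracy
print(y_train.value_counts(normalize=True))   # the same information as class proportions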